# TensorFlow integration

This page explains Vertex AI's TensorFlow integration and provides resources that show you how to use TensorFlow on Vertex AI. Vertex AI's TensorFlow integration makes it easier for you to train, deploy, and orchestrate TensorFlow models in production.
Run code in notebooks
---------------------
Vertex AI provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench. To learn more about these options, see [Choose a notebook solution](/vertex-ai/docs/workbench/notebook-solution).
Prebuilt containers for training
--------------------------------
Vertex AI provides prebuilt Docker container images for model training. These containers are organized by machine learning framework and framework version, and include common dependencies that you might want to use in your training code.

To learn which TensorFlow versions have prebuilt training containers and how to train models with a prebuilt training container, see [Prebuilt containers for custom training](/vertex-ai/docs/training/pre-built-containers#tensorflow).
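As a sketch of how a prebuilt training container is typically used with the Vertex AI SDK for Python: you point a `CustomTrainingJob` at your local training script and at a prebuilt image. The project ID, bucket, and image tag below are placeholder assumptions; pick the tag that matches your TensorFlow version from the page linked above.

```python
# Prebuilt TensorFlow training image (placeholder tag; match it to
# your TensorFlow and Python versions from the containers list).
TRAIN_IMAGE = "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12.py310:latest"

# Submitting the job with the Vertex AI SDK (commented out because it
# requires a Google Cloud project; names are placeholder assumptions):
# from google.cloud import aiplatform
# aiplatform.init(project="my-project", location="us-central1",
#                 staging_bucket="gs://my-bucket")
# job = aiplatform.CustomTrainingJob(
#     display_name="tf-train-example",
#     script_path="task.py",                 # your training script
#     container_uri=TRAIN_IMAGE,
#     requirements=["tensorflow-datasets"],  # extra pip deps, if any
# )
# model = job.run(machine_type="n1-standard-4", replica_count=1)
```

The container already ships TensorFlow and its common dependencies, so `task.py` only needs your model code plus whatever extras you list in `requirements`.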
Distributed training
--------------------

You can run distributed training of TensorFlow models on Vertex AI. For multi-worker training, you can use Reduction Server to optimize performance even further for all-reduce collective operations. To learn more about distributed training on Vertex AI, see [Distributed training](/vertex-ai/docs/training/distributed-training).
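A multi-worker job with Reduction Server is commonly laid out as three worker pools: the chief, the remaining workers, and a pool of Reduction Server replicas. The machine types, replica counts, and image URIs below are illustrative assumptions; check the distributed-training docs for current values.

```python
# Illustrative worker pool layout for multi-worker training with
# Reduction Server. Pool 0 is the chief, pool 1 holds the remaining
# GPU workers, and pool 2 runs CPU-only Reduction Server replicas.
TRAIN_IMAGE = "us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-12.py310:latest"
REDUCTION_SERVER_IMAGE = (
    "us-docker.pkg.dev/vertex-ai-restricted/training/reductionserver:latest"
)

gpu_worker = {
    "machine_spec": {
        "machine_type": "n1-standard-16",
        "accelerator_type": "NVIDIA_TESLA_T4",
        "accelerator_count": 2,
    },
    "container_spec": {"image_uri": TRAIN_IMAGE, "args": []},
}

worker_pool_specs = [
    {**gpu_worker, "replica_count": 1},  # pool 0: chief
    {**gpu_worker, "replica_count": 3},  # pool 1: other workers
    {   # pool 2: Reduction Server replicas (no accelerators needed)
        "machine_spec": {"machine_type": "n1-highcpu-16"},
        "replica_count": 4,
        "container_spec": {"image_uri": REDUCTION_SERVER_IMAGE},
    },
]

# Submission sketch (requires a Google Cloud project):
# from google.cloud import aiplatform
# job = aiplatform.CustomJob(display_name="tf-dist-train",
#                            worker_pool_specs=worker_pool_specs)
# job.run()
```

Inside the training script itself, the workers would coordinate with the standard `tf.distribute.MultiWorkerMirroredStrategy`; Reduction Server accelerates the all-reduce step between them.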
Prebuilt containers for inference
---------------------------------
Similar to the prebuilt containers for training, Vertex AI provides prebuilt container images for serving inferences and explanations from TensorFlow models that you created either within or outside of Vertex AI. These images provide HTTP inference servers that you can use to serve inferences with minimal configuration.

To learn which TensorFlow versions have prebuilt inference containers and how to serve inferences with a prebuilt inference container, see [Prebuilt containers for inference](/vertex-ai/docs/predictions/pre-built-containers#tensorflow).
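As a sketch of the serving flow: the prebuilt inference servers accept the standard Vertex AI JSON request body, `{"instances": [...]}`, and the SDK call that uploads and deploys a SavedModel is shown commented out below. The bucket path, image tag, and four-feature instance are placeholder assumptions.

```python
import json

# Request payload in the shape the prebuilt HTTP inference server
# expects; the feature values here are made up for illustration.
payload = json.dumps({"instances": [[5.1, 3.5, 1.4, 0.2]]})

# Upload and deploy sketch (requires a Google Cloud project; paths and
# the serving image tag are placeholder assumptions):
# from google.cloud import aiplatform
# model = aiplatform.Model.upload(
#     display_name="my-tf-model",
#     artifact_uri="gs://my-bucket/saved_model/",  # SavedModel directory
#     serving_container_image_uri=(
#         "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest"),
# )
# endpoint = model.deploy(machine_type="n1-standard-2")
# prediction = endpoint.predict(instances=[[5.1, 3.5, 1.4, 0.2]])
```

Because the server's request and response contract is fixed, the same payload works whether the model was trained on Vertex AI or imported from elsewhere.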
Last updated 2025-09-04 UTC.
### Optimized TensorFlow runtime

> **Preview**
>
> This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). Pre-GA products and features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

The [optimized TensorFlow runtime](/vertex-ai/docs/predictions/optimized-tensorflow-runtime) uses model optimizations and new proprietary Google technologies to improve the speed and lower the cost of inferences compared to Vertex AI's standard prebuilt inference containers for TensorFlow.

TensorFlow Cloud Profiler integration
-------------------------------------

Train models cheaper and faster by monitoring and optimizing the performance of your training job using Vertex AI's TensorFlow Cloud Profiler integration.
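As a minimal sketch of what enabling the Profiler looks like inside a training script (the `cloud_profiler` module path and the log-directory environment variable are taken from the Vertex AI docs, but treat them as assumptions to verify; the bucket fallback is a placeholder):

```python
import os

# Vertex AI injects AIP_TENSORBOARD_LOG_DIR when the training job is
# created with a Vertex AI TensorBoard instance attached; the fallback
# here is a placeholder for local runs.
log_dir = os.environ.get("AIP_TENSORBOARD_LOG_DIR", "gs://my-bucket/logs")

# Inside the training script (requires google-cloud-aiplatform with
# the cloud_profiler extra installed):
# from google.cloud.aiplatform.training_utils import cloud_profiler
# cloud_profiler.init()
#
# Then capture a profile for a window of training steps with the
# standard Keras TensorBoard callback:
# import tensorflow as tf
# tensorboard_cb = tf.keras.callbacks.TensorBoard(
#     log_dir=log_dir, profile_batch="500,520")
# model.fit(dataset, epochs=10, callbacks=[tensorboard_cb])
```

The captured profile then appears in the TensorBoard Profiler tab for the job, which is where the bottleneck analysis described below happens.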
TensorFlow Cloud Profiler helps you understand the resource consumption of training operations so you can identify and eliminate performance bottlenecks.

To learn more about the Vertex AI TensorFlow Cloud Profiler, see [Profile model training performance using Profiler](/vertex-ai/docs/training/tensorboard-profiler).

Resources for using TensorFlow on Vertex AI
-------------------------------------------

To learn more and start using TensorFlow in Vertex AI, see the following resources.

- [Prototype to Production](https://www.youtube.com/playlist?list=PLIivdWyY5sqJAyUJbbsc8ZyGLNT4isnuB): A video series that provides an end-to-end example of developing and deploying a custom TensorFlow model on Vertex AI.

- [Optimize training performance with Reduction Server on Vertex AI](/blog/topics/developers-practitioners/optimize-training-performance-reduction-server-vertex-ai): A blog post on optimizing distributed training on Vertex AI by using Reduction Server.

- [How to optimize training performance with the TensorFlow Cloud Profiler on Vertex AI](/blog/topics/developers-practitioners/how-optimize-training-performance-tensorflow-profiler-vertex-ai): A blog post that shows you how to identify performance bottlenecks in your training job by using the Vertex AI TensorFlow Cloud Profiler.

- [Custom model batch prediction with feature filtering](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/custom_batch_prediction_feature_filter.ipynb): A notebook tutorial that shows you how to use the Vertex AI SDK for Python to train a custom tabular classification model and perform batch inference with feature filtering.

- [Vertex AI Pipelines: Custom training with prebuilt Google Cloud Pipeline Components](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb): A notebook tutorial that shows you how to use Vertex AI Pipelines with prebuilt Google Cloud Pipeline Components for custom training.

- [Co-host TensorFlow models on the same VM for predictions](https://codelabs.developers.google.com/vertex-cohost-prediction#0): A codelab that shows you how to use the co-hosting model feature in Vertex AI to host multiple models on the same VM for online inferences.