*Last updated (UTC): 2025-09-04.*

# TensorFlow integration

This page explains Vertex AI's TensorFlow integration and
provides resources that show you how to use TensorFlow
on Vertex AI. Vertex AI's TensorFlow integration
makes it easier for you to train, deploy, and orchestrate
TensorFlow models in production.

Run code in notebooks
---------------------

Vertex AI provides two options for running your code in
notebooks: Colab Enterprise and Vertex AI Workbench.
To learn more about these options, see
[Choose a notebook solution](/vertex-ai/docs/workbench/notebook-solution).

Prebuilt containers for training
--------------------------------

Vertex AI provides prebuilt Docker container images for model training.
These containers are organized by machine learning framework and framework
version, and they include common dependencies that you might want to use in
your training code.

To learn which TensorFlow versions have prebuilt training containers and
how to train models with a prebuilt training container, see
[Prebuilt containers for custom training](/vertex-ai/docs/training/pre-built-containers#tensorflow).

Distributed training
--------------------

You can run distributed training of TensorFlow models on Vertex AI. For
multi-worker training, you can use Reduction Server to further optimize
performance for all-reduce collective operations.
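As a rough sketch of how these pieces fit together, a CustomJob for multi-worker training with Reduction Server combines a prebuilt TensorFlow training container with a dedicated reducer worker pool. The field names below follow the CustomJob `workerPoolSpecs` schema; the image tags, machine types, and replica counts are illustrative assumptions, so check the pages linked in this section for the currently supported versions:

```yaml
# Illustrative CustomJob worker pool configuration (not a definitive spec).
workerPoolSpecs:
  # Pool 0: the chief worker, running your training code in a
  # prebuilt TensorFlow GPU container.
  - machineSpec:
      machineType: n1-standard-8
      acceleratorType: NVIDIA_TESLA_T4
      acceleratorCount: 1
    replicaCount: 1
    containerSpec:
      imageUri: us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-12.py310:latest
  # Pool 1: additional workers for multi-worker training.
  - machineSpec:
      machineType: n1-standard-8
      acceleratorType: NVIDIA_TESLA_T4
      acceleratorCount: 1
    replicaCount: 3
    containerSpec:
      imageUri: us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-12.py310:latest
  # Pool 2: Reduction Server replicas that aggregate all-reduce traffic.
  - machineSpec:
      machineType: n1-highcpu-16
    replicaCount: 4
    containerSpec:
      imageUri: us-docker.pkg.dev/vertex-ai/training/reductionserver:latest
```

Reduction Server replicas only aggregate gradients, so they typically run on CPU-only, high-bandwidth machine types while the training workers carry the GPUs.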
To learn more about
distributed training on Vertex AI, see
[Distributed training](/vertex-ai/docs/training/distributed-training).

Prebuilt containers for inference
---------------------------------

Similar to the prebuilt containers for training, Vertex AI provides
prebuilt container images for serving inferences and explanations from
TensorFlow models that you created either within or outside of
Vertex AI. These images provide HTTP inference servers
that you can use to serve inferences with minimal configuration.

To learn which TensorFlow versions have prebuilt inference containers and
how to serve inferences with a prebuilt inference container, see
[Prebuilt containers for inference](/vertex-ai/docs/predictions/pre-built-containers#tensorflow).

### Optimized TensorFlow runtime

> **Preview**
>
> This product or feature is subject to the "Pre-GA Offerings Terms" in the
> General Service Terms section of the
> [Service Specific Terms](/terms/service-terms#1).
> Pre-GA products and features are available "as is" and might have limited
> support. For more information, see the
> [launch stage descriptions](/products#product-launch-stages).

The [optimized TensorFlow
runtime](/vertex-ai/docs/predictions/optimized-tensorflow-runtime)
uses model optimizations and new proprietary Google technologies to improve the
speed and lower the cost of inferences compared to Vertex AI's standard
prebuilt inference containers for TensorFlow.

TensorFlow Cloud Profiler integration
-------------------------------------

Train models faster and at lower cost by monitoring and optimizing the
performance of your training job with Vertex AI's TensorFlow Cloud Profiler
integration.
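Enabling the integration is a small change to your training code. A minimal sketch, assuming the job runs on Vertex AI with the `cloud_profiler` extra of the `google-cloud-aiplatform` package installed in the training container (`enable_vertex_profiler` is a hypothetical helper name, not part of the SDK):

```python
def enable_vertex_profiler():
    """Turn on the Vertex AI TensorFlow Profiler inside a training job.

    Assumption: this runs on Vertex AI with
    ``google-cloud-aiplatform[cloud_profiler]`` installed. Outside that
    environment the import or the init call fails, which is why the call
    is kept behind this helper instead of running at module import time.
    """
    from google.cloud.aiplatform.training_utils import cloud_profiler
    cloud_profiler.init()


# In your training script, call the helper once before training starts,
# for example:
#
#   enable_vertex_profiler()
#   model.fit(train_ds, epochs=10, callbacks=[tensorboard_callback])
```

The guide linked in this section also covers prerequisites, such as associating the custom job with a Vertex AI TensorBoard instance so that the captured profiles have somewhere to go.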
TensorFlow Cloud Profiler helps you understand the
resource consumption of training operations so you can identify and
eliminate performance bottlenecks.

To learn more about the Vertex AI
TensorFlow Cloud Profiler, see [Profile model training performance
using Profiler](/vertex-ai/docs/training/tensorboard-profiler).

Resources for using TensorFlow on Vertex AI
-------------------------------------------

To learn more and start using TensorFlow on Vertex AI,
see the following resources.

- [Prototype to
  Production](https://www.youtube.com/playlist?list=PLIivdWyY5sqJAyUJbbsc8ZyGLNT4isnuB):
  A video series that provides an end-to-end example of developing and
  deploying a custom TensorFlow model on Vertex AI.

- [Optimize training performance with Reduction Server on
  Vertex AI](/blog/topics/developers-practitioners/optimize-training-performance-reduction-server-vertex-ai):
  A blog post on optimizing distributed training on Vertex AI by using
  Reduction Server.

- [How to optimize training performance with the
  TensorFlow Cloud Profiler on
  Vertex AI](/blog/topics/developers-practitioners/how-optimize-training-performance-tensorflow-profiler-vertex-ai):
  A blog post that shows you how to identify performance bottlenecks in your
  training job by using the Vertex AI TensorFlow Cloud Profiler.

- [Custom model batch prediction with feature
  filtering](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/custom_batch_prediction_feature_filter.ipynb):
  A notebook tutorial that shows you how to use the Vertex AI SDK for Python to
  train a custom tabular classification model and perform batch inference with
  feature filtering.

- [Vertex AI Pipelines: Custom training with prebuilt Google Cloud
  Pipeline Components](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb):
  A notebook tutorial that shows you
how to use Vertex AI Pipelines with
  prebuilt Google Cloud Pipeline Components for custom training.

- [Co-host TensorFlow models on the same VM for
  predictions](https://codelabs.developers.google.com/vertex-cohost-prediction#0):
  A codelab that shows you how to use the model co-hosting feature in
  Vertex AI to host multiple models on the same VM for online
  inferences.