This page describes Vertex AI's TensorFlow integration and provides resources that show you how to use TensorFlow on Vertex AI.
Vertex AI's TensorFlow integration makes it easier for you to train, deploy, and orchestrate TensorFlow models in production.
Run code in notebooks
Vertex AI provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench.
To learn more about these options, see Choose a notebook solution.
Prebuilt containers for training
Vertex AI provides prebuilt Docker container images for model training.
These containers are organized by machine learning framework and framework version, and they include common dependencies that you might want to use in your training code.
To learn which TensorFlow versions have prebuilt training containers, and how to train models with a prebuilt training container, see Prebuilt containers for custom training.
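As a concrete illustration, each prebuilt training image is addressed by a container URI that encodes the framework, accelerator type, and framework version. The helper below is a hypothetical sketch of that naming shape; the exact registry path, tag format, and available versions are listed on the Prebuilt containers for custom training page, not guaranteed here.

```python
# Hypothetical sketch: composing a prebuilt-training-container URI.
# The registry path and tag format below are assumptions based on the
# general documented shape; consult the prebuilt-containers page for
# the authoritative list of images.

def prebuilt_training_image(framework: str, version: str,
                            accelerator: str = "cpu") -> str:
    """Compose a container URI such as
    us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12:latest."""
    return (f"us-docker.pkg.dev/vertex-ai/training/"
            f"{framework}-{accelerator}.{version}:latest")

# For example, a TensorFlow 2.12 GPU training image would look like:
uri = prebuilt_training_image("tf", "2-12", accelerator="gpu")
print(uri)  # us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-12:latest
```

You would pass a URI like this as the training container when you submit a custom training job, for example with the Vertex AI SDK for Python.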
Distributed training
You can run distributed training of TensorFlow models on Vertex AI. For multi-worker training, you can use Reduction Server to further optimize the performance of all-reduce collective operations. To learn more about distributed training on Vertex AI, see Distributed training.
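To show what multi-worker training looks like from inside a training script: each replica learns its role from the standard TF_CONFIG environment variable, which distribution strategies such as tf.distribute.MultiWorkerMirroredStrategy read automatically. The stdlib-only sketch below sets an illustrative TF_CONFIG by hand (on Vertex AI the service sets it for you) and shows how a script could inspect it.

```python
# Sketch: how a multi-worker TensorFlow training script discovers its
# role. Vertex AI sets TF_CONFIG on each replica; the cluster addresses
# below are illustrative placeholders, not real hosts.
import json
import os

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["worker0:2222", "worker1:2222"]},
    "task": {"type": "worker", "index": 0},
})

tf_config = json.loads(os.environ["TF_CONFIG"])
num_workers = len(tf_config["cluster"]["worker"])
is_chief = (tf_config["task"]["type"] == "worker"
            and tf_config["task"]["index"] == 0)
print(num_workers, is_chief)  # 2 True
```

In a real job you would not set TF_CONFIG yourself; a check like is_chief is commonly used to make only one worker write checkpoints or export the model.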
Prebuilt containers for prediction
Similar to the prebuilt containers for training, Vertex AI provides prebuilt container images for serving predictions and explanations from TensorFlow models that you created either within or outside of Vertex AI.
These images provide HTTP prediction servers that you can use to serve predictions with minimal configuration.
To learn which TensorFlow versions have prebuilt prediction containers, and how to serve predictions with them, see Prebuilt containers for prediction.
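To make the "minimal configuration" point concrete, the prebuilt prediction server speaks a simple JSON protocol: a request wraps one or more inputs in an "instances" array, and the server answers with a matching "predictions" array. The payload values below are illustrative, and the input key depends on your model's signature.

```python
# Sketch: the JSON request/response shape used by the prebuilt HTTP
# prediction server. "dense_input" and the numeric values are
# illustrative; your model's serving signature defines the real keys.
import json

request_body = json.dumps({
    "instances": [
        {"dense_input": [1.0, 2.0, 3.0]},
        {"dense_input": [4.0, 5.0, 6.0]},
    ]
})

# A response from the server carries one prediction per instance, e.g.:
response_body = '{"predictions": [[0.12], [0.87]]}'
predictions = json.loads(response_body)["predictions"]
assert len(predictions) == len(json.loads(request_body)["instances"])
```

The same "instances"/"predictions" shape is used whether you call a deployed endpoint through the REST API or through the Vertex AI SDK.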
Optimized TensorFlow runtime
The optimized TensorFlow runtime uses model optimizations and new proprietary Google technologies to improve the speed and lower the cost of predictions compared to Vertex AI's standard prebuilt prediction containers for TensorFlow.
TensorFlow Profiler integration
Train models more cheaply and faster by monitoring and optimizing the performance of your training jobs using Vertex AI's TensorFlow Profiler integration.
TensorFlow Profiler helps you understand the resource consumption of your training jobs so that you can identify and eliminate performance bottlenecks.
Resources for using TensorFlow on Vertex AI
To learn more and start using TensorFlow on Vertex AI, see the following resources.

- Prototype to Production (https://www.youtube.com/playlist?list=PLIivdWyY5sqJAyUJbbsc8ZyGLNT4isnuB): a video series that provides an end-to-end example of developing and deploying a custom TensorFlow model on Vertex AI.
- Optimize training performance with Reduction Server on Vertex AI (/blog/topics/developers-practitioners/optimize-training-performance-reduction-server-vertex-ai): a blog post on optimizing distributed training on Vertex AI by using Reduction Server.
- How to optimize training performance with the TensorFlow Profiler on Vertex AI (/blog/topics/developers-practitioners/how-optimize-training-performance-tensorflow-profiler-vertex-ai): a blog post that shows you how to identify performance bottlenecks in your training job by using the Vertex AI TensorFlow Profiler.
- Custom model batch prediction with feature filtering (https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/custom_batch_prediction_feature_filter.ipynb): a notebook tutorial that shows you how to use the Vertex AI SDK for Python to train a custom tabular classification model and perform batch prediction with feature filtering.
- Vertex AI Pipelines: Custom training with prebuilt Google Cloud Pipeline Components (https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/custom_model_training_and_batch_prediction.ipynb): a notebook tutorial that shows you how to use Vertex AI Pipelines with prebuilt Google Cloud Pipeline Components for custom training.
- Co-host TensorFlow models on the same VM for predictions (https://codelabs.developers.google.com/vertex-cohost-prediction#0): a codelab that shows you how to use the co-hosting model feature in Vertex AI to host multiple models on the same VM for online predictions.

Last updated: 2025-09-04 (UTC)