# Custom containers overview

A custom container is a Docker image that you create to run your training application. By running your machine learning (ML) training job in a *custom container*, you can use ML frameworks, non-ML dependencies, libraries, and binaries that are not otherwise supported on Vertex AI.

How training with containers works
----------------------------------

Your training application, implemented in the ML framework of your choice, is the core of the training process.

1. Create an application that trains your model, using the ML framework of your choice.

2. Decide whether to use a custom container. There could be a [prebuilt container](/vertex-ai/docs/training/pre-built-containers) that already supports your dependencies. Otherwise, you need to [build a custom container for your training job](/vertex-ai/docs/training/create-custom-container). In your custom container, you pre-install your training application and all its dependencies onto an image that is used to run your training job.

3. Store your training and verification data in a source that Vertex AI can access. To simplify authentication and reduce latency, store your data in Cloud Storage, Bigtable, or another Google Cloud storage service in the same Google Cloud project and region that you are using for Vertex AI. Learn more about [the ways Vertex AI can load your data](/vertex-ai/docs/training/code-requirements#loading-data).

4. When your application is ready to run, build your Docker image and push it to Artifact Registry or Docker Hub, making sure that [Vertex AI can access your registry](/vertex-ai/docs/training/create-custom-container#manage-container-registry-permissions).

5. Submit your custom training job by [creating a custom job](/vertex-ai/docs/training/create-custom-job) or [creating a custom training pipeline](/vertex-ai/docs/training/create-training-pipeline).

6. Vertex AI sets up resources for your job. It allocates one or more virtual machines (called *training instances*) based on your job configuration. You set up a training instance by using the custom container you specify as part of the [`WorkerPoolSpec`](/vertex-ai/docs/reference/rest/v1/CustomJobSpec#workerpoolspec) object when you [submit your custom training job](/vertex-ai/docs/training/create-custom-job) (see the sketch after this list).
7. Vertex AI runs your Docker image, passing through any command-line arguments that you specify when you create the training job.
8. When your training job succeeds or encounters an unrecoverable error, Vertex AI halts all job processes and cleans up the resources.
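To make steps 5 through 7 concrete, here is a minimal sketch of submitting a custom container job with the Vertex AI SDK for Python. The project ID, region, staging bucket, image URI, machine type, and command-line arguments are placeholder assumptions for illustration, not values prescribed by this page.

```python
# Minimal sketch: submit a custom training job that runs a custom container.
# Assumes `pip install google-cloud-aiplatform` and placeholder resource names.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",                      # placeholder project ID
    location="us-central1",                    # placeholder region
    staging_bucket="gs://my-staging-bucket",   # placeholder bucket
)

# One worker pool with a single VM; the container_spec points at your image.
worker_pool_specs = [
    {
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {
            # Placeholder Artifact Registry image URI for your custom container.
            "image_uri": "us-central1-docker.pkg.dev/my-project/my-repo/trainer:latest",
            # These arguments are passed through to your Docker image (step 7).
            "args": ["--epochs=10", "--batch-size=32"],
        },
    }
]

job = aiplatform.CustomJob(
    display_name="custom-container-training",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```

The `args` list in `container_spec` holds the command-line arguments that Vertex AI passes through to your Docker image when the training instance starts.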
Advantages of custom containers
-------------------------------

Custom containers let you specify and pre-install all the dependencies needed for your application.

- **Faster start-up time.** If you use a custom container with your dependencies pre-installed, your training application doesn't have to spend time installing dependencies at startup.
- **Use the ML framework of your choice.** If you can't find a Vertex AI prebuilt container with the ML framework you want to use, you can build a custom container with your chosen framework and use it to run jobs on Vertex AI. For example, you can use a custom container to train with PyTorch.
- **Extended support for distributed training.** With custom containers, you can do distributed training using any ML framework.
- **Use the newest version.** You can also use the latest build or minor version of an ML framework. For example, you can build a custom container to train with `tf-nightly`.
Hyperparameter tuning with custom containers
--------------------------------------------

To do [hyperparameter tuning](/vertex-ai/docs/training/hyperparameter-tuning-overview) on Vertex AI, you specify goal metrics, along with whether to minimize or maximize each metric. For example, you might want to maximize your model's accuracy or minimize your model's loss. You also list the hyperparameters that you want to tune, along with the range of acceptable values for each hyperparameter. Vertex AI runs multiple *trials* of your training application, tracking and adjusting the hyperparameters after each trial. When the hyperparameter tuning job is complete, Vertex AI reports the values for the most effective configuration of your hyperparameters, along with a summary for each trial.
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Custom containers overview\n\nA custom container is a Docker image that you create to run\nyour training application. By running your machine learning (ML) training job\nin a *custom container*, you can use ML frameworks, non-ML dependencies,\nlibraries, and binaries that are not otherwise supported\non Vertex AI.\n\nHow training with containers works\n----------------------------------\n\nYour training application, implemented in the ML framework of your choice,\nis the core of the training process.\n\n1. Create an application that trains your model, using the ML framework\n of your choice.\n\n2. Decide whether to use a custom container. There could be a\n [prebuilt container](/vertex-ai/docs/training/pre-built-containers) that already supports\n your dependencies. Otherwise, you need to [build a custom container for\n your training job](/vertex-ai/docs/training/create-custom-container). In your custom container, you\n pre-install your training application and all its dependencies onto an\n image that is used to run your training job.\n\n3. Store your training and verification data in a source that\n Vertex AI can access. To simplify authentication and reduce\n latency, store your data in Cloud Storage, Bigtable, or another\n Google Cloud storage service in the same Google Cloud project\n and region that you are using for Vertex AI. Learn more about\n [the ways Vertex AI can load your data](/vertex-ai/docs/training/code-requirements#loading-data).\n\n4. When your application is ready to run, you must build your Docker image and\n push it to Artifact Registry or Docker Hub, making sure that\n [Vertex AI can access your registry](/vertex-ai/docs/training/create-custom-container#manage-container-registry-permissions).\n\n5. Submit your custom training job by [creating a custom\n job](/vertex-ai/docs/training/create-custom-job) or [creating a custom training\n pipeline](/vertex-ai/docs/training/create-training-pipeline).\n\n6. Vertex AI sets up resources for your job. It allocates one or\n more virtual machines (called *training instances* ) based on your job\n configuration. You set up a training instance by using the custom container\n you specify as part of the [`WorkerPoolSpec`](/vertex-ai/docs/reference/rest/v1/CustomJobSpec#workerpoolspec) object when\n you [submit your custom training\n job](/vertex-ai/docs/training/create-custom-job).\n\n7. Vertex AI runs your Docker image, passing through any\n command-line arguments you specify when you create the training job.\n\n8. 
GPUs in custom containers
-------------------------

For training with GPUs, your custom container needs to meet a few special requirements, and you must build a different Docker image than the one you'd use for training with CPUs (a sketch follows below).

- Pre-install the CUDA toolkit and cuDNN in your Docker image. The recommended way to build a custom container with GPU support is to use the [`nvidia/cuda`](https://hub.docker.com/r/nvidia/cuda/) image as the base image for your custom container. The `nvidia/cuda` container image has matching versions of the CUDA toolkit and cuDNN pre-installed, and it helps you set up the related environment variables correctly.
- Install your training application, along with your required ML framework and other dependencies, in your Docker image.

See an [example Dockerfile for training with GPUs](https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/pytorch/containers/quickstart/mnist/Dockerfile-gpu).
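As a rough illustration of these requirements, here is a minimal GPU Dockerfile sketch, assuming a Python training application with an entry point at `trainer/task.py`; the base image tag, package versions, and paths are assumptions to adjust for your own application and framework.

```dockerfile
# Sketch only: pick a CUDA/cuDNN tag that matches your ML framework's requirements.
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install the ML framework and other dependencies (hypothetical requirements file).
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the training application into the image.
COPY trainer/ trainer/

# Vertex AI runs this entry point on each training instance.
ENTRYPOINT ["python3", "-m", "trainer.task"]
```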
What's next
-----------

- Learn more about how to [create a custom container for your training job](/vertex-ai/docs/training/create-custom-container).