Using a private IP to connect to your training jobs provides more network security and lower network latency than using a public IP. To use a private IP, you use Virtual Private Cloud (VPC) to peer your network with Vertex AI custom training jobs. This lets your training code access private IP addresses inside your Google Cloud or on-premises networks.
This guide shows how to run custom training in your network after you have already set up VPC Network Peering to peer your network with Vertex AI CustomJob, HyperparameterTuningJob, or custom TrainingPipeline resources.
Reserve IP ranges for custom training

When you reserve an IP range for service producers, the range can be used by Vertex AI and by other services. The size of the reserved range, from /16 to /18, determines the maximum number of parallel training jobs that you can run, assuming that Vertex AI uses the range almost exclusively. If you connect to other service producers using the same range, allocate a larger range so that you don't run out of IP addresses.
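As an illustration, a /16 range for service producers might be reserved and connected through private services access along the following lines. This is a minimal sketch: the range name google-managed-services-default and the /16 prefix length are placeholder choices rather than requirements, and NETWORK_NAME stands for your VPC network.

```shell
# Reserve a /16 block for service producers (Vertex AI and others).
gcloud compute addresses create google-managed-services-default \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=NETWORK_NAME

# Connect the reserved range to the service networking peering.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=NETWORK_NAME
```

A larger block, such as /16 rather than /18, leaves headroom if other service producers share the range.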
Test training job access

This section explains how to test whether a custom training resource can access private IPs in your network.
1. Create a Compute Engine instance in your VPC network (see the first sketch after this list).
2. Check your firewall rules to confirm that they don't restrict ingress from the Vertex AI network. If they do, add a rule so that the Vertex AI network can reach the IP range that you reserved for Vertex AI and other service producers (the first sketch after this list includes an example rule).
3. Set up a local server on the VM instance to create an endpoint for the Vertex AI CustomJob to access (see the second sketch after this list).
4. Create a Python training application to run on Vertex AI. Instead of model training code, write code that accesses the endpoint you set up in the previous step (see the third sketch after this list).
5. Create the CustomJob for that application, passing in a config file that specifies your network (see the last sketch after this list).
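For steps 1 and 2, a minimal sketch might look like the following. The instance name test-vm, the zone, the machine type, the port 8080, and the source range 10.0.0.0/16 are placeholder values; substitute your own network, subnet, and reserved range.

```shell
# Step 1: create a small test VM in the peered VPC network.
gcloud compute instances create test-vm \
    --zone=us-central1-a \
    --machine-type=e2-small \
    --network=NETWORK_NAME \
    --subnet=SUBNET_NAME

# Step 2: allow ingress from the range reserved for Vertex AI
# and other service producers.
gcloud compute firewall-rules create allow-vertex-training-ingress \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-ranges=10.0.0.0/16
```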
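For step 3, any process that listens on the VM can serve as the endpoint. Assuming Python 3 is available on the instance, the built-in HTTP server is a quick option on the same port that the firewall rule opens.

```shell
# Run on the test VM: serve a trivial endpoint on port 8080.
python3 -m http.server 8080
```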
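For step 4, the packaged application only needs to reach the endpoint over the peered network. The following is a minimal sketch, assuming the test VM's internal IP is 10.128.0.2 and the port matches the server above; both are placeholder values.

```python
# task.py: probe a private endpoint instead of training a model.
import urllib.request

PRIVATE_ENDPOINT = "http://10.128.0.2:8080"  # placeholder internal IP and port

def main():
    # If the peering and firewall rules are correct, this request succeeds
    # from inside the Vertex AI training job.
    with urllib.request.urlopen(PRIVATE_ENDPOINT, timeout=10) as response:
        print(f"Reached {PRIVATE_ENDPOINT}, status {response.status}")

if __name__ == "__main__":
    main()
```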
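For step 5, the network is passed to the job through a config file. The following sketch shows the gcloud CLI flow for a Python package run on a prebuilt container; PROJECT_ID, NETWORK_NAME, LOCATION, and the other uppercase values are placeholders, and if you use Shared VPC, specify the VPC host project's number. Under the hood this populates the CustomJob.jobSpec.network field; HyperparameterTuningJob and custom TrainingPipeline resources expose equivalent network fields.

```shell
# Format the full network name in config.yaml.
PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")

cat <<EOF > config.yaml
network: projects/${PROJECT_NUMBER}/global/networks/NETWORK_NAME
EOF

# Create the CustomJob, passing in config.yaml.
gcloud ai custom-jobs create \
    --region=LOCATION \
    --display-name=JOB_NAME \
    --python-package-uris=PYTHON_PACKAGE_URIS \
    --worker-pool-spec=machine-type=MACHINE_TYPE,replica-count=REPLICA_COUNT,executor-image-uri=PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,python-module=PYTHON_MODULE \
    --config=config.yaml
```

If you don't specify a network, Vertex AI runs the job without a peering connection and without access to private IPs in your project.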
Common problems
This section describes common issues when configuring VPC Network Peering with Vertex AI.
- When you configure Vertex AI to use your network, specify the full network name in the form projects/PROJECT_NUMBER/global/networks/NETWORK_NAME (see the sketch after this list).
- Make sure that you aren't performing custom training on one network while custom training is still running on another network. Wait for all submitted CustomJob, HyperparameterTuningJob, and custom TrainingPipeline resources to finish, or cancel them, before you switch networks.
- Make sure that you've allocated a large enough IP range for all of the service producers that your network connects to, including Vertex AI.
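For example, one way to print the fully qualified network name and to confirm that the peering connection is ACTIVE, assuming the gcloud CLI is configured for your project:

```shell
# Build the full network name from the project number.
PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
echo "projects/${PROJECT_NUMBER}/global/networks/NETWORK_NAME"

# List peering connections for the network; their state should be ACTIVE.
gcloud compute networks peerings list --network NETWORK_NAME
```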
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["# Use a private IP for custom training\n\n| **Note:** Vertex AI Training doesn't support VPC Peering with H100-mega, H200 and B200 accelerators. A [network attachment with PSC-I](/vertex-ai/docs/training/psc-i-egress) can be used as an alternative for VPC Peering.\n\nUsing private IP to connect to your training jobs provides more\nnetwork security and lower network latency than using public IP. To use private\nIP, you use [Virtual Private Cloud (VPC)](/vpc/docs/vpc-peering) to peer your\nnetwork with any type of\n[Vertex AI custom training job](/vertex-ai/docs/training/custom-training-methods).\nThis allows your training code to access private IP addresses inside your\nGoogle Cloud or on-premises networks.\n\nThis guide shows how to run custom training jobs in your network after you have\nalready [set up VPC Network Peering](/vertex-ai/docs/general/vpc-peering) to peer your network\nwith a Vertex AI `CustomJob`, `HyperparameterTuningJob`, or custom\n`TrainingPipeline` resource.\n\nOverview\n--------\n\nBefore you submit a custom training job using private IP, you must [configure\nprivate services access to create peering connections between your network\nand Vertex AI](/vertex-ai/docs/general/vpc-peering). If you have already set this up,\nyou can use your existing peering connections.\n\nThis guide covers the following tasks:\n\n- Understanding which IP ranges to reserve for custom training.\n- Verify the status of your existing peering connections.\n- Perform Vertex AI custom training on your network.\n- Check for active training occurring on one network before training on another network.\n- Test that your training code can access private IPs in your network.\n\n### Reserve IP ranges for custom training\n\nWhen you reserve an IP range for service producers, the range can be used by\nVertex AI and other services. This table shows the maximum number\nof parallel training jobs that you can run with reserved ranges from /16 to /18,\nassuming the range is used almost exclusively by Vertex AI. 
If you\nconnect with other service producers using the same range, allocate a larger\nrange to accommodate them, in order to avoid IP exhaustion.\n\nLearn more about [configuring worker pools for distributed\ntraining](/vertex-ai/docs/training/distributed-training).\n\nCheck the status of existing peering connections\n------------------------------------------------\n\nIf you have existing peering connections you use with Vertex AI,\nyou can list them to check status: \n\n gcloud compute networks peerings list --network \u003cvar translate=\"no\"\u003eNETWORK_NAME\u003c/var\u003e\n\nYou should see that the state of your peering connections are `ACTIVE`.\nLearn more about [active peering connections](/vpc/docs/peer-two-networks#peering-becomes-active).\n\nPerform custom training\n-----------------------\n\nWhen you perform custom training, you must specify the name of the\nnetwork that you want Vertex AI to have access to.\n\nDepending on how you perform custom training, specify the network in one of the\nfollowing API fields:\n\n- **If you are creating a\n [`CustomJob`](/vertex-ai/docs/reference/rest/v1/projects.locations.customJobs),** specify the\n `CustomJob.jobSpec.network` field.\n\n If you are using the Google Cloud CLI, then you can use the `--config` flag on\n the [`gcloud ai custom-jobs create`\n command](/sdk/gcloud/reference/ai/custom-jobs/create) to specify the\n `network` field.\n\n Learn more about [creating a `CustomJob`](/vertex-ai/docs/training/create-custom-job).\n- **If you are creating a\n [`HyperparameterTuningJob`](/vertex-ai/docs/reference/rest/v1/projects.locations.hyperparameterTuningJobs),**\n specify the `HyperparameterTuningJob.trialJobSpec.network` field.\n\n If you are using the gcloud CLI, then you can use the `--config`\n flag on the [`gcloud ai hpt-tuning-jobs create`\n command](/sdk/gcloud/reference/ai/hp-tuning-jobs/create) to specify the\n `network` field.\n\n Learn more about\n [creating a\n `HyperparameterTuningJob`](/vertex-ai/docs/training/using-hyperparameter-tuning).\n- **If you are creating a\n [`TrainingPipeline`](/vertex-ai/docs/reference/rest/v1/projects.locations.trainingPipelines) without\n hyperparameter tuning,** specify the\n `TrainingPipeline.trainingTaskInputs.network` field.\n\n Learn more about [creating a custom\n `TrainingPipeline`](/vertex-ai/docs/training/create-training-pipeline).\n- **If you are creating a `TrainingPipeline` with hyperparameter tuning** ,\n specify the `TrainingPipeline.trainingTaskInputs.trialJobSpec.network` field.\n\nIf you don't specify a network name, then Vertex AI runs your\ncustom training without a peering connection, and without access to private IPs\nin your project.\n\n### Example: Creating a `CustomJob` with the gcloud CLI\n\nThe following example shows how to specify a network when you use the\ngcloud CLI to run a `CustomJob` that uses a prebuilt container. If\nyou are perform custom training in a different way, add the `network` field\n[as described for the type of custom training job you're using](#perform-custom-training).\n\n1. Create a `config.yaml` file to specify the network. 
If you're using\n Shared VPC, use your VPC host project number.\n\n Make sure the network name is formatted correctly: \n\n PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format=\"value(projectNumber)\")\n\n cat \u003c\u003cEOF \u003e config.yaml\n network: projects/\u003cvar translate=\"no\"\u003ePROJECT_NUMBER\u003c/var\u003e/global/networks/\u003cvar translate=\"no\"\u003eNETWORK_NAME\u003c/var\u003e\n EOF\n\n2. [Create a training application](/vertex-ai/docs/training/create-python-pre-built-container)\n to run on Vertex AI.\n\n3. Create the `CustomJob`, passing in your `config.yaml` file:\n\n gcloud ai custom-jobs create \\\n --region=\u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e \\\n --display-name=\u003cvar translate=\"no\"\u003eJOB_NAME\u003c/var\u003e \\\n --python-package-uris=\u003cvar translate=\"no\"\u003ePYTHON_PACKAGE_URIS\u003c/var\u003e \\\n --worker-pool-spec=machine-type=\u003cvar translate=\"no\"\u003eMACHINE_TYPE\u003c/var\u003e,replica-count=\u003cvar translate=\"no\"\u003eREPLICA_COUNT\u003c/var\u003e,executor-image-uri=\u003cvar translate=\"no\"\u003ePYTHON_PACKAGE_EXECUTOR_IMAGE_URI\u003c/var\u003e,python-module=\u003cvar translate=\"no\"\u003ePYTHON_MODULE\u003c/var\u003e \\\n --config=config.yaml\n\nTo learn how to replace the placeholders in this command, read [Creating custom\ntraining jobs](/vertex-ai/docs/training/create-custom-job).\n\n### Run jobs on different networks\n\nYou can't perform custom training on a new network while you are still\nperforming custom training on another network. Before you switch to a different\nnetwork, you must wait for all submitted `CustomJob`, `HyperparameterTuningJob`,\nand custom `TrainingPipeline` resources to finish, or you must cancel them.\n\nTest training job access\n------------------------\n\nThis section explains how to test that a custom training resource can access\nprivate IPs in your network.\n\n1. Create a Compute Engine instance in your VPC network.\n2. [Check your firewall rules](/vpc/docs/using-firewalls#listing-rules-vm) to make sure that they don't restrict ingress from the Vertex AI network. If so, add a rule to ensure the Vertex AI network can access the IP range you reserved for Vertex AI (and other service producers).\n3. Set up a local server on the VM instance in order to create an endpoint for a Vertex AI `CustomJob` to access.\n4. Create a Python training application to run on Vertex AI. Instead of model training code, create code that accesses the endpoint you set up in the previous step.\n5. 
Follow the previous example to create a `CustomJob`.\n\nCommon problems\n---------------\n\nThis section lists some common issues for configuring VPC Network Peering with\nVertex AI.\n\n- When you configure Vertex AI to use your network, specify the\n full network name:\n\n \"projects/\u003cvar translate=\"no\"\u003eYOUR_PROJECT_NUMBER\u003c/var\u003e/global/networks/\u003cvar translate=\"no\"\u003eYOUR_NETWORK_NAME\u003c/var\u003e\"\n- Make sure you are not performing custom training on a network before\n performing custom training on a different network.\n\n- Make sure that you've allocated a sufficient IP range for all service\n producers your network connects to, including Vertex AI.\n\nFor additional troubleshooting information, refer to the\n[VPC Network Peering troubleshooting guide](/vpc/docs/using-vpc-peering#troubleshooting).\n\nWhat's next\n-----------\n\n- Learn more about [VPC Network Peering](/vpc/docs/vpc-peering).\n- See [reference architectures and best practices](/solutions/best-practices-vpc-design#shared-service) for VPC design."]]