You can configure Vertex AI to peer with your Virtual Private Cloud (VPC) network to connect directly with certain Vertex AI resources, including:
- Custom training jobs
- Private prediction endpoints (preview)
- Vector matching online queries (preview)
This guide shows how to set up VPC Network Peering to peer your network with Vertex AI resources. It is intended for networking administrators who are already familiar with Google Cloud networking concepts.
This guide covers the following tasks:
- Configure private services access for the VPC. This establishes a peering connection between your VPC and Google's shared VPC network.
- Consider the IP range you need to reserve for Vertex AI.
- If applicable, export custom routes so that Vertex AI can import them.
Before you begin
- Select a VPC that you want to peer with Vertex AI resources.
- Select or create a Google Cloud project to use for Vertex AI.
Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Compute Engine API, Vertex AI API, and the Service Networking APIs.
- Optionally, you can use Shared VPC. If you do, you typically use Vertex AI in a different Google Cloud project from your VPC host project. Enable the Compute Engine API and Service Networking API in both projects. Learn how to provision Shared VPC.
- Install the Cloud SDK if you want to run the gcloud command-line examples in this guide.
- If you are not a project owner or editor, make sure you have the Network Admin role, which includes the permissions you need to manage networking resources.
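If you prefer the command line, the required APIs can also be enabled with gcloud. This is a minimal sketch; YOUR_PROJECT_ID is a placeholder, and if you use Shared VPC you would repeat the command for the host project as well.

```shell
# Enable the APIs that Vertex AI VPC Network Peering depends on.
# Replace YOUR_PROJECT_ID with your own project ID.
gcloud services enable \
    compute.googleapis.com \
    aiplatform.googleapis.com \
    servicenetworking.googleapis.com \
    --project=YOUR_PROJECT_ID
```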
Peering with an on-premises network
For VPC Network Peering with an on-premises network, there are additional steps:
- Connect your on-premises network to your VPC. You can use a VPN tunnel or Interconnect.
- Set up custom routes from the VPC to your on-premises network.
- Export your custom routes so that Vertex AI can import them.
Set up private services access for your VPC
When you set up private services access, you establish a private connection between your network and a network owned by Google or a third party (a service producer). In this case, Vertex AI is a service producer. To set up private services access, you reserve an IP range for service producers and then create a peering connection with Vertex AI.
- Set environment variables for your project ID, region name, the name of your reserved range, and the name of your network.
- If you use Shared VPC, use the project ID of your VPC host project. Otherwise, use the project ID of the Google Cloud project you use for Vertex AI.
- Select a region that Vertex AI supports.
- Enable the required APIs. If you use Shared VPC, make sure to enable the APIs in your VPC host project and the Google Cloud project you use for Vertex AI.
- Set a reserved range using gcloud compute addresses create.
- Establish a peering connection between your VPC host project and Google's Service Networking, using gcloud services vpc-peerings connect.
```shell
PROJECT_ID=YOUR_PROJECT_ID
gcloud config set project $PROJECT_ID

REGION=YOUR_REGION

# This is for display only; you can name the range anything.
PEERING_RANGE_NAME=google-reserved-range

NETWORK=YOUR_NETWORK_NAME

# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
gcloud compute addresses create $PEERING_RANGE_NAME \
  --global \
  --prefix-length=16 \
  --description="peering range for Google service" \
  --network=$NETWORK \
  --purpose=VPC_PEERING

# Create the VPC connection.
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --network=$NETWORK \
  --ranges=$PEERING_RANGE_NAME \
  --project=$PROJECT_ID
```
Learn more about private services access.
Reserve IP ranges for Vertex AI
When you reserve an IP range for service producers, the range can be used by Vertex AI and other services. If you connect with multiple service producers using the same range, allocate a larger range to accommodate them, in order to avoid IP exhaustion.
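To make the sizing concrete, the following plain-shell sketch (no gcloud required) shows how many addresses each prefix length yields: a /16 leaves headroom for several service producers, while a narrower range such as /24 can be exhausted quickly.

```shell
# Addresses available per reserved prefix length: 2^(32 - prefix).
for prefix in 16 20 24; do
  echo "/${prefix}: $(( 2 ** (32 - prefix) )) addresses"
done
```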
Export custom routes
If you use custom routes, you need to export them so that Vertex AI can import them. If you don't use custom routes, skip this section.
To export custom routes, you update the peering connection in your VPC. Exporting custom routes sends all eligible static and dynamic routes that are in your VPC network, such as routes to your on-premises network, to service producers' networks (Vertex AI in this case). This establishes the necessary connections and allows training jobs to send traffic back to your on-premises network.
Learn more about private connections with on-premises networks.
- Go to the VPC Network Peering page in the Google Cloud Console.
- Select the peering connection to update.
- Click Edit.
- Select Export custom routes.
Find the name of the peering connection to update. If you have multiple peering connections, omit the --format flag.
```shell
gcloud services vpc-peerings list \
  --network=$NETWORK \
  --service=servicenetworking.googleapis.com \
  --project=$PROJECT_ID \
  --format "value(peering)"
```
Update the peering connection to export custom routes.
```shell
gcloud compute networks peerings update PEERING-NAME \
  --network=$NETWORK \
  --export-custom-routes \
  --project=$PROJECT_ID
```
Check the status of your peering connections
To check that your peering connections are active, list them with the following command:
```shell
gcloud compute networks peerings list --network $NETWORK
```
You should see that the state of the peering you just created is ACTIVE.
Learn more about active peering connections.
Troubleshooting
This section lists some common issues when configuring VPC Network Peering with Vertex AI.
When you configure Vertex AI to use your network, specify the full network name in the following format: projects/PROJECT_NUMBER/global/networks/NETWORK_NAME
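The full network name takes the form projects/PROJECT_NUMBER/global/networks/NETWORK_NAME. As a quick sanity check before passing the value to Vertex AI, you can pattern-match it in plain shell; the helper function name and sample values below are illustrative, not part of any Google tooling.

```shell
# Hypothetical helper: succeeds if the string looks like a full network
# name (projects/<number>/global/networks/<name>), where the network name
# starts with a lowercase letter and uses lowercase letters, digits, hyphens.
is_full_network_name() {
  echo "$1" | grep -Eq '^projects/[0-9]+/global/networks/[a-z]([-a-z0-9]*[a-z0-9])?$'
}

is_full_network_name "projects/123456789/global/networks/my-vpc" && echo "ok"
is_full_network_name "my-vpc" || echo "missing project prefix"
```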
Make sure that you've allocated a sufficient IP range for all service producers your network connects to, including Vertex AI.
For additional troubleshooting information, refer to the VPC Network Peering troubleshooting guide.
What's next
- Learn how to use private IP for custom training.
- Learn how to use private endpoints for prediction.
- Learn more about VPC Network Peering.
- See reference architectures and best practices for VPC design.