Set up VPC Network Peering

You can configure Vertex AI to peer with Virtual Private Cloud (VPC) so that certain Vertex AI resources, such as custom training jobs and private prediction endpoints, can connect directly to your network.

This guide shows how to set up VPC Network Peering to peer your network with Vertex AI resources. It is intended for networking administrators who are already familiar with Google Cloud networking concepts.

Overview

This guide covers the following tasks:

  • Configure private services access for the VPC. This establishes a peering connection between your VPC and Google's shared VPC network.
  • Consider the IP range you need to reserve for Vertex AI.
  • If applicable, export custom routes so that Vertex AI can import them.

Before you begin

  • Select a VPC that you want to peer with Vertex AI resources.
  • Select or create a Google Cloud project to use for Vertex AI.
  • Make sure that billing is enabled for your Google Cloud project.

  • Enable the Compute Engine API, Vertex AI API, and Service Networking APIs.

  • Optionally, you can use Shared VPC. If you use Shared VPC, you usually use Vertex AI in a separate Google Cloud project from your VPC host project. Enable the Compute Engine API and Service Networking APIs in both projects, as shown in the example after this list. Learn how to provision Shared VPC.
  • Install the gcloud CLI if you want to run the gcloud examples in this guide.
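
If you use Shared VPC, the following is a minimal sketch of enabling the required APIs from the gcloud CLI. HOST_PROJECT_ID and SERVICE_PROJECT_ID are placeholders for your own project IDs.

    # Enable the networking APIs in the Shared VPC host project.
    gcloud services enable compute.googleapis.com servicenetworking.googleapis.com \
      --project=HOST_PROJECT_ID
    
    # Enable the networking APIs and the Vertex AI API in the service project
    # where you use Vertex AI.
    gcloud services enable compute.googleapis.com servicenetworking.googleapis.com \
        aiplatform.googleapis.com \
      --project=SERVICE_PROJECT_ID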

Required roles

If you are not a project owner or editor, make sure that you have the Compute Network Admin role (roles/compute.networkAdmin), which includes the permissions you need to manage networking resources.
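
If you need this role, a project owner can grant it with the gcloud CLI. The following is a sketch; PROJECT_ID and USER_EMAIL are placeholders for your own values.

    # Grant the Compute Network Admin role to a user on the project.
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="user:USER_EMAIL" \
      --role="roles/compute.networkAdmin"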

Peering with an on-premises network

For VPC Network Peering with an on-premises network, there are additional steps:

  1. Connect your on-premises network to your VPC network. You can use a Cloud VPN tunnel or Cloud Interconnect.
  2. Set up custom routes from the VPC network to your on-premises network, as shown in the example after this list.
  3. Export your custom routes so that Vertex AI can import them.
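
For step 2, one option is a custom static route that sends on-premises traffic through a Cloud VPN tunnel. The following is a sketch only; the route name, destination range, tunnel name, and region are placeholders that depend on your environment. If you use dynamic routing with Cloud Router, BGP learns these routes instead.

    # Route traffic destined for the on-premises range through an existing VPN tunnel.
    gcloud compute routes create route-to-on-prem \
      --network=YOUR_NETWORK_NAME \
      --destination-range=ON_PREM_CIDR \
      --next-hop-vpn-tunnel=YOUR_VPN_TUNNEL \
      --next-hop-vpn-tunnel-region=REGION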

Set up private services access for your VPC

When you set up private services access, you establish a private connection between your network and a network owned by Google or a third-party service (a service producer). In this case, Vertex AI is a service producer. To set up private services access, you reserve an IP range for service producers and then create a peering connection with Vertex AI.

If you already have a VPC with private services access configured, move on to exporting custom routes.

  1. Set environment variables for your project ID, the name of your reserved range, and the name of your network. If you use Shared VPC, use the project ID of your VPC host project. Otherwise, use the project ID of the Google Cloud project that you use for Vertex AI.
  2. Enable the required APIs. If you use Shared VPC, see Use Shared VPC with Vertex AI.
  3. Reserve an IP range using gcloud compute addresses create.
  4. Establish a peering connection between your VPC host project and Google's Service Networking, using gcloud services vpc-peerings connect.

    For private prediction endpoints, we recommend reserving at least a /21 block for the subnet used for model hosting. Reserving a smaller block can result in deployment errors due to insufficient IP addresses. You can choose to use non-RFC 1918 addresses for deployment.

    PROJECT_ID=YOUR_PROJECT_ID
    gcloud config set project $PROJECT_ID
    
    # This is for display only; you can name the range anything.
    PEERING_RANGE_NAME=google-reserved-range
    
    NETWORK=YOUR_NETWORK_NAME
    
    # NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
    # reserved for use by Google services, such as Vertex AI.
    gcloud compute addresses create $PEERING_RANGE_NAME \
      --global \
      --prefix-length=16 \
      --description="peering range for Google service" \
      --network=$NETWORK \
      --purpose=VPC_PEERING
    
    # Create the VPC connection.
    gcloud services vpc-peerings connect \
      --service=servicenetworking.googleapis.com \
      --network=$NETWORK \
      --ranges=$PEERING_RANGE_NAME \
      --project=$PROJECT_ID
    

Learn more about private services access.

Use Shared VPC with Vertex AI

If you use Shared VPC in your project, see how to Provision Shared VPC and make sure to complete the following steps:

  1. Enable the Compute Engine API and Service Networking APIs in both the host project and the service project. The Vertex AI API must be enabled in the service project.

  2. Create the VPC Network Peering connection between your VPC and Google services within the host project.

  3. When you create Vertex AI resources, specify the name of the Shared VPC network that you want Vertex AI to have access to.

  4. Verify that the service account or user account that is used has the Compute Network User role (roles/compute.networkUser), as shown in the example after this list.
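
For step 4, the following is a sketch of granting the role on the host project with the gcloud CLI; HOST_PROJECT_ID and SERVICE_ACCOUNT_EMAIL are placeholders for your own values.

    # Grant the Compute Network User role on the host project to the account
    # that Vertex AI uses in the service project.
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
      --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
      --role="roles/compute.networkUser"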

Reserve IP ranges for Vertex AI

When you reserve an IP range for service producers, the range can be used by Vertex AI and by other services. If you connect to multiple service producers using the same range, allocate a larger range to accommodate them and avoid IP exhaustion.

Read about estimated IP ranges to reserve for using private IP with different types of custom training jobs.

If you launch a job with the following parameter, it launches in a Google-managed network that is peered to your VPC network and to the other networks attached to it:

--network="projects/${host_project}/global/networks/${network}"

Any jobs that don't need to access your networks can be launched without this flag, which preserves your IP allocations.
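
For example, the following is a sketch of launching a custom training job on the peered network with the gcloud CLI. It assumes a prebuilt training container and the same ${host_project} and ${network} shell variables as above; the region, display name, machine type, and image URI are placeholders.

    gcloud ai custom-jobs create \
      --region=us-central1 \
      --display-name=peered-training-job \
      --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,container-image-uri=TRAINING_IMAGE_URI \
      --network="projects/${host_project}/global/networks/${network}"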

Export custom routes

If you use custom routes, you need to export them so that Vertex AI can import them. If you don't use custom routes, skip this section.

To export custom routes, you update the peering connection in your VPC. Exporting custom routes sends all eligible static and dynamic routes that are in your VPC network, such as routes to your on-premises network, to service producers' networks (Vertex AI in this case). This establishes the necessary connections and allows training jobs to send traffic back to your on-premises network.

Ensure that your on-premises network has routes back to the IP address ranges allocated for Vertex AI so that responses are correctly routed back to Vertex AI. For example, use Cloud Router custom route advertisements that include the Vertex AI IP address ranges.
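
For example, if your on-premises connectivity uses Cloud Router, you can advertise the reserved range with custom route advertisements. The following is a sketch; the router name, BGP peer name, region, and CIDR are placeholders for your own values.

    # Advertise the Vertex AI reserved range to the on-premises BGP peer.
    gcloud compute routers update-bgp-peer YOUR_ROUTER \
      --peer-name=YOUR_BGP_PEER \
      --region=REGION \
      --advertisement-mode=CUSTOM \
      --set-advertisement-groups=ALL_SUBNETS \
      --set-advertisement-ranges=RESERVED_RANGE_CIDR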

Learn more about private connections with on-premises networks.

Console

  1. Go to the VPC Network Peering page in the Google Cloud console.
  2. Select the peering connection to update.
  3. Click Edit.
  4. Select Export custom routes.

gcloud

  1. Find the name of the peering connection to update. If you have multiple peering connections, omit the --format flag.

    gcloud services vpc-peerings list \
      --network=$NETWORK \
      --service=servicenetworking.googleapis.com \
      --project=$PROJECT_ID \
      --format "value(peering)"
    
  2. Update the peering connection to export custom routes.

    gcloud compute networks peerings update PEERING-NAME \
        --network=$NETWORK \
        --export-custom-routes \
        --project=$PROJECT_ID
    

Check the status of your peering connections

To check that your peering connections are active, you can list them using the following command:

gcloud compute networks peerings list --network $NETWORK

You should see that the state of the peering you just created is ACTIVE. Learn more about active peering connections.

Troubleshooting

This section lists some common issues for configuring VPC Network Peering with Vertex AI.

  • When you configure Vertex AI to use a Shared VPC network, specify the network URI in the following way.

    "projects/YOUR_SHARED_VPC_HOST_PROJECT/global/networks/YOUR_SHARED_VPC_NETWORK"

  • If you specify a Shared VPC network for Vertex AI to use, make sure that any user accounts or service accounts that act on behalf of Vertex AI in the service project are granted the Compute Network User role (roles/compute.networkUser) in your host project.

  • Make sure that you've allocated a sufficient IP range for all service producers your network connects to, including Vertex AI.

  • If you encounter the error message IP_SPACE_EXHAUSTED, RANGES_EXHAUSTED, or PEERING_RANGE_EXHAUSTED, you must increase the number of available IP addresses for the servicenetworking reservation in your network. You can add a new range to the existing VPC Network Peering configuration, as shown in the following example, or delete some Vertex AI resources to release allocated IP addresses.
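
    The following sketch reserves a second range and updates the peering to use both ranges. It assumes the $NETWORK, $PROJECT_ID, and $PEERING_RANGE_NAME variables from the setup steps earlier in this guide; the new range name is a placeholder.

    # Reserve an additional range for service producers.
    gcloud compute addresses create google-reserved-range-2 \
      --global \
      --prefix-length=16 \
      --description="additional peering range for Google services" \
      --network=$NETWORK \
      --purpose=VPC_PEERING
    
    # Update the peering so that it uses both reserved ranges.
    gcloud services vpc-peerings update \
      --service=servicenetworking.googleapis.com \
      --network=$NETWORK \
      --ranges=$PEERING_RANGE_NAME,google-reserved-range-2 \
      --project=$PROJECT_ID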

  • Connection Timeouts: After exporting custom routes, connections from Vertex AI are routed through your network to reach endpoints in other networks. However, those endpoints might not route through your network to send responses back to Vertex AI. Make sure that you also add static or dynamic routes in those networks for the return path to the Vertex AI allocated IP range.

  • Connection Timeouts / Host Unreachable errors: Because transitive peering is not supported, connections from Vertex AI cannot reach endpoints in other networks that are directly peered to your network, even with "Export custom routes" enabled. Work with your network administrator to make sure that traffic is not expected to route directly through your network from one directly peered network to another. If needed, you can replace one of these peering hops with a solution that supports static or dynamic routes.

  • Host Unreachable DNS errors: If your Vertex AI job needs to resolve hostnames in your VPC, ensure that you have completed the configuration to Share private DNS zones with service producers.
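
    One way to do this is with a peered DNS domain, which forwards DNS queries for a given suffix from the service producer network to your VPC network. The following is a sketch; the peered domain name and DNS suffix are placeholders, and $NETWORK and $PROJECT_ID are the variables from the setup steps earlier in this guide.

    # Allow the service producer network to resolve names under this suffix
    # by using your VPC network's DNS configuration.
    gcloud services peered-dns-domains create example-peered-domain \
      --network=$NETWORK \
      --dns-suffix=example.internal. \
      --project=$PROJECT_ID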

For additional troubleshooting information, refer to the VPC Network Peering troubleshooting guide.

What's next