Configure connectors in the Shared VPC host project

If your organization uses Shared VPC, you can set up a Serverless VPC Access connector in either the service project or the host project. This guide shows how to set up a connector in the host project.

If you need to set up a connector in a service project, see Configure connectors in service projects. To learn about the advantages of each method, see Connecting to a Shared VPC network.

Before you begin

  1. Check the Identity and Access Management (IAM) roles for the account you are currently using. The active account must have the following roles on the host project:

  2. Select the host project in your preferred environment.

Console

  1. Open the Google Cloud console dashboard.

    Go to Google Cloud console dashboard

  2. In the menu bar at the top of the dashboard, click the project dropdown menu and select the host project.

gcloud

Set the default project in the gcloud CLI to the host project by running the following in your terminal:

gcloud config set project HOST_PROJECT_ID

Replace the following:

  • HOST_PROJECT_ID: the ID of the Shared VPC host project
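
To confirm that the gcloud CLI now targets the host project, you can print the active project (an optional sanity check, not a required step in this guide):

gcloud config get-value project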

Create a Serverless VPC Access connector

To send requests to your VPC network and receive the corresponding responses, you must create a Serverless VPC Access connector. You can create a connector by using the Google Cloud console, Google Cloud CLI, or Terraform:

Console

  1. Enable the Serverless VPC Access API for your project.

    Enable API

  2. Go to the Serverless VPC Access overview page.

    Go to Serverless VPC Access

  3. Click Create connector.

  4. In the Name field, enter a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.

  5. In the Region field, select a region for your connector. This must match the region of your serverless service.

    If your service is in the region us-central or europe-west, use us-central1 or europe-west1.

  6. In the Network field, select the VPC network to attach your connector to.

  7. In the Subnetwork pulldown menu, select an unused /28 subnet.

    • Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
    • To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI (a one-line variant is shown after these steps):
      gcloud compute networks subnets describe SUBNET_NAME --region=REGION
      
      Replace SUBNET_NAME with the name of your subnet and REGION with the region that contains it.
  8. (Optional) To set scaling options for additional control over the connector, click Show Scaling Settings to display the scaling form.

    1. Set the minimum and maximum number of instances for your connector, or use the defaults, which are 2 (min) and 10 (max). The connector scales out to the maximum specified as traffic increases, but the connector does not scale back in when traffic decreases. You must use values between 2 and 10, and the MIN value must be less than the MAX value.
    2. In the Instance Type pulldown menu, choose the machine type to be used for the connector, or use the default e2-micro. When you choose an instance type, the cost sidebar on the right displays bandwidth and cost estimates.
  9. Click Create.

  10. A green check mark will appear next to the connector's name when it is ready to use.
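
If you prefer a one-line check of the subnet purpose from step 7, the output of the describe command can be narrowed with the --format flag (a quick sketch; SUBNET_NAME and REGION are placeholders, as above):

gcloud compute networks subnets describe SUBNET_NAME \
--region=REGION \
--format="value(purpose)"

The command prints PRIVATE for a subnet that is safe to attach to the connector.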

gcloud

  1. Update gcloud components to the latest version:

    gcloud components update
    
  2. Enable the Serverless VPC Access API for your project:

    gcloud services enable vpcaccess.googleapis.com
    
  3. Create a Serverless VPC Access connector:

    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --region=REGION \
    --subnet=SUBNET \
    --subnet-project=HOST_PROJECT_ID \
    --min-instances=MIN \
    --max-instances=MAX \
    --machine-type=MACHINE_TYPE
    

    The --min-instances, --max-instances, and --machine-type flags are optional. If you omit them, the connector uses a minimum of 2 instances, a maximum of 10 instances, and the e2-micro machine type.

    Replace the following:

    • CONNECTOR_NAME: a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
    • REGION: a region for your connector; this must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
    • SUBNET: the name of an unused /28 subnet.
      • Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
      • To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
        gcloud compute networks subnets describe SUBNET_NAME --region=REGION
        
        Replace the following:
        • SUBNET_NAME: the name of your subnet
        • REGION: the region that contains your subnet
    • HOST_PROJECT_ID: the ID of the host project
    • MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 9. Default is 2. To learn about connector scaling, see Throughput and scaling.
    • MAX: the maximum number of instances to use for the connector. Use an integer between 3 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in. To learn about connector scaling, see Throughput and scaling.
    • MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4. To learn about connector throughput, including machine type and scaling, see Throughput and scaling.

    For more details and optional arguments, see the gcloud reference.

  4. Verify that your connector is in the READY state before using it:

    gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
    --region=REGION
    

    Replace the following:

    • CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step
    • REGION: the region of your connector; this is the region that you specified in the previous step

    The output should contain the line state: READY.
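
For example, a filled-in invocation might look like the following. All names here are hypothetical placeholders, not values defined elsewhere in this guide:

gcloud compute networks vpc-access connectors create my-connector \
--region=us-central1 \
--subnet=my-subnet \
--subnet-project=my-host-project \
--min-instances=2 \
--max-instances=10 \
--machine-type=e2-micro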

Terraform

You can use a Terraform resource to enable the vpcaccess.googleapis.com API.

resource "google_project_service" "vpcaccess-api" {
  project = var.project_id # Replace this with your project ID in quotes
  service = "vpcaccess.googleapis.com"
}

You can use Terraform modules to create a VPC network and subnet and then create the connector.

module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 9.0"
  project_id   = var.project_id # Replace this with your project ID in quotes
  network_name = "my-serverless-network"
  mtu          = 1460

  subnets = [
    {
      subnet_name   = "serverless-subnet"
      subnet_ip     = "10.10.10.0/28"
      subnet_region = "us-central1"
    }
  ]
}

module "serverless-connector" {
  source     = "terraform-google-modules/network/google//modules/vpc-serverless-connector-beta"
  version    = "~> 9.0"
  project_id = var.project_id
  vpc_connectors = [{
    name        = "central-serverless"
    region      = "us-central1"
    subnet_name = module.test-vpc-module.subnets["us-central1/serverless-subnet"].name
    # host_project_id = var.host_project_id # Specify a host_project_id for shared VPC
    machine_type  = "e2-standard-4"
    min_instances = 2
    max_instances = 7
    }
    # Uncomment to specify an ip_cidr_range
    #   , {
    #     name          = "central-serverless2"
    #     region        = "us-central1"
    #     network       = module.test-vpc-module.network_name
    #     ip_cidr_range = "10.10.11.0/28"
    #     subnet_name   = null
    #     machine_type  = "e2-standard-4"
    #     min_instances = 2
    #   max_instances = 7 }
  ]
  depends_on = [
    google_project_service.vpcaccess-api
  ]
}
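
After defining these resources, you can create them with the standard Terraform workflow, run from the directory that contains the configuration (this assumes var.project_id is supplied, for example through a terraform.tfvars file or a -var flag):

terraform init
terraform plan
terraform apply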

Enable Cloud Run for the service project

Enable the Cloud Run API for the service project. This is necessary for adding IAM roles in subsequent steps and for the service project to use Cloud Run.

Console

  1. Open the page for the Cloud Run API.

    Cloud Run API

  2. In the menu bar at the top of the dashboard, click the project dropdown menu and select the service project.

  3. Click Enable.

gcloud

Run the following in your terminal:

gcloud services enable run.googleapis.com --project=SERVICE_PROJECT_ID

Replace the following:

  • SERVICE_PROJECT_ID: the ID of the service project
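
To verify that the API is enabled, you can list the service project's enabled services and look for run.googleapis.com (one way to check, shown here with grep):

gcloud services list --enabled --project=SERVICE_PROJECT_ID | grep run.googleapis.com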

Provide access to the connector

Provide access to the connector by granting the service project Cloud Run Service Agent the Serverless VPC Access User IAM role on the host project.

Console

  1. Open the IAM page.

    Go to IAM

  2. Click the project dropdown menu and select the host project.

  3. Click Add.

  4. In the New principals field, enter the email address of the service project's Cloud Run Service Agent:

    service-SERVICE_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com

    Replace the following:

    • SERVICE_PROJECT_NUMBER: the project number associated with the service project. This is different from the project ID. You can find the project number on the service project's Project Settings page in the Google Cloud console.
  5. In the Role field, select Serverless VPC Access User.

  6. Click Save.

gcloud

Run the following in your terminal:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=serviceAccount:service-SERVICE_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com \
--role=roles/vpcaccess.user

Replace the following:

  • HOST_PROJECT_ID: the ID of the Shared VPC host project
  • SERVICE_PROJECT_NUMBER: the project number associated with the service project. This is different from the project ID. You can find the project number by running the following:

    gcloud projects describe SERVICE_PROJECT_ID
    
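
If you only need the project number, you can narrow the output of the describe command with the --format flag:

gcloud projects describe SERVICE_PROJECT_ID --format="value(projectNumber)"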

Make the connector discoverable

On the host project's IAM policy, you must grant the following two predefined roles to the principals who deploy Cloud Run services:

  • Serverless VPC Access Viewer (roles/vpcaccess.viewer)
  • Compute Network Viewer (roles/compute.networkViewer)

Alternatively, you can use custom roles or other predefined roles that include all the permissions of the Serverless VPC Access Viewer (vpcaccess.viewer) role.

Console

  1. Open the IAM page.

    Go to IAM

  2. Click the project dropdown menu and select the host project.

  3. Click Add.

  4. In the New principals field, enter the email address of the principal that should be able to see the connector from the service project. You can enter multiple emails in this field.

  5. In the Role field, select both of the following roles:

    • Serverless VPC Access Viewer
    • Compute Network Viewer
  6. Click Save.

gcloud

Run the following commands in your terminal:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=PRINCIPAL \
--role=roles/vpcaccess.viewer

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=PRINCIPAL \
--role=roles/compute.networkViewer

Replace the following:

  • HOST_PROJECT_ID: the ID of the Shared VPC host project
  • PRINCIPAL: the principal who deploys Cloud Run services. Learn more about the --member flag.
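
To confirm that both bindings are present on the host project, you can inspect its IAM policy and filter for the two roles (a sketch using standard gcloud list formatting):

gcloud projects get-iam-policy HOST_PROJECT_ID \
--flatten="bindings[].members" \
--format="table(bindings.role, bindings.members)" \
--filter="bindings.role=roles/vpcaccess.viewer OR bindings.role=roles/compute.networkViewer"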

Configure your service to use the connector

For each Cloud Run service that requires access to your Shared VPC, you must specify the connector for the service. You can specify the connector by using the Google Cloud console, the Google Cloud CLI, a YAML file, or Terraform, either when deploying a new service or when updating an existing service.

Console

  1. In the Google Cloud console, go to Cloud Run:

    Go to Cloud Run

  2. Click Create Service if you are configuring a new service. If you are configuring an existing service, click the service, then click Edit and deploy new revision.

  3. If you are configuring a new service, fill out the initial service settings page as desired, then click Container(s), volumes, networking, security to expand the service configuration page.

  4. Click the Connections tab.

    • In the VPC Connector field, select a connector to use or select None to disconnect your service from a VPC network.
  5. Click Create or Deploy.

gcloud

  1. Set the gcloud CLI to use the project containing the Cloud Run resource:

    gcloud config set project PROJECT_ID
    
    Replace the following:

    • PROJECT_ID: the ID of the project containing the Cloud Run resource that requires access to your Shared VPC. If the Cloud Run resource is in the host project, this is the host project ID. If the Cloud Run resource is in a service project, this is the service project ID.
  2. Use the --vpc-connector flag.

  • For existing services:

    gcloud run services update SERVICE --vpc-connector=CONNECTOR_NAME

  • For new services:

    gcloud run deploy SERVICE --image=IMAGE_URL --vpc-connector=CONNECTOR_NAME

  Replace the following:

  • SERVICE: the name of your service
  • IMAGE_URL: a reference to the container image, for example, us-docker.pkg.dev/cloudrun/container/hello:latest
  • CONNECTOR_NAME: the name of your connector. Use the fully qualified name when deploying from a Shared VPC service project (as opposed to the host project), for example:

    projects/HOST_PROJECT_ID/locations/CONNECTOR_REGION/connectors/CONNECTOR_NAME

    where HOST_PROJECT_ID is the ID of the host project, CONNECTOR_REGION is the region of your connector, and CONNECTOR_NAME is the name that you gave your connector.
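
For example, a hypothetical deployment of a new service from a service project, using a connector that lives in the host project (every identifier below is a placeholder):

gcloud run deploy my-service \
--image=us-docker.pkg.dev/cloudrun/container/hello:latest \
--region=us-central1 \
--vpc-connector=projects/my-host-project/locations/us-central1/connectors/my-connector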

YAML

Set the gcloud CLI to use the project containing the Cloud Run resource:

gcloud config set project PROJECT_ID

Replace the following:

  • PROJECT_ID: the ID of the project containing the Cloud Run resource that requires access to your Shared VPC. If the Cloud Run resource is in the host project, this is the host project ID. If the Cloud Run resource is in a service project, this is the service project ID.

You can download and view existing service configurations using the gcloud run services describe --format export command, which yields cleaned results in YAML format. You can then modify the fields described below and upload the modified YAML using the gcloud run services replace command. Make sure you only modify fields as documented.

  1. To view and download the configuration:

    gcloud run services describe SERVICE --format export > service.yaml
  2. Add or update the run.googleapis.com/vpc-access-connector annotation under spec.template.metadata.annotations, as shown in the following example:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: SERVICE
    spec:
      template:
        metadata:
          annotations:
            run.googleapis.com/vpc-access-connector: CONNECTOR_NAME
          name: REVISION

    Replace the following:

    • SERVICE: the name of your Cloud Run service.
    • CONNECTOR_NAME: the name of your connector. Use the fully qualified name when deploying from a Shared VPC service project (as opposed to the host project), for example:
      projects/HOST_PROJECT_ID/locations/CONNECTOR_REGION/connectors/CONNECTOR_NAME
      where HOST_PROJECT_ID is the ID of the host project, CONNECTOR_REGION is the region of your connector, and CONNECTOR_NAME is the name that you gave your connector.
    • REVISION: a new revision name, or delete the attribute if it is present. If you supply a new revision name, it must meet the following criteria:
      • Starts with SERVICE-
      • Contains only lowercase letters, numbers and -
      • Does not end with a -
      • Does not exceed 63 characters
  3. Replace the service with its new configuration using the following command:

    gcloud run services replace service.yaml
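
To confirm that the annotation was applied, you can export the service configuration again and search for the connector annotation (a quick check using grep):

gcloud run services describe SERVICE --format export | grep vpc-access-connector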

Terraform

You can use a Terraform resource to create a service and configure it to use your connector.

# Cloud Run service
resource "google_cloud_run_v2_service" "gcr_service" {
  name     = "mygcrservice"
  provider = google-beta
  location = "us-west1"

  template {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello"
      resources {
        limits = {
          cpu    = "1000m"
          memory = "512Mi"
        }
      }
      # the service uses this SA to call other Google Cloud APIs
      # service_account_name = myservice_runtime_sa
    }

    scaling {
      # Limit scale up to prevent any cost blow outs!
      max_instance_count = 5
    }

    vpc_access {
      # Use the VPC Connector
      connector = google_vpc_access_connector.connector.id
      # all egress from the service should go through the VPC Connector
      egress = "ALL_TRAFFIC"
    }
  }
}

Next steps