Configuring Serverless VPC Access

This page shows how to use Serverless VPC Access to connect your serverless environment directly to your VPC network, allowing access to Compute Engine VM instances, Memorystore instances, and any other resources with an internal IP address.

Before you begin

If you use Shared VPC, see the documentation for your serverless environment:

Create a Serverless VPC Access connector

To send requests to your VPC network and receive the corresponding responses without using the public internet, you must use a Serverless VPC Access connector.

You can create a connector by using the Google Cloud Console, gcloud command-line tool, or Terraform:

Console

  1. Ensure the Serverless VPC Access API is enabled for your project.

    Enable API

  2. Go to the Serverless VPC Access overview page.

    Go to Serverless VPC Access

  3. Click Create connector.

  4. In the Name field, enter a name for your connector.

  5. In the Region field, select a region for your connector. This must match the region of your serverless service.

    If your service is in the region us-central or europe-west, use us-central1 or europe-west1.

  6. In the Network field, select the VPC network to attach your connector to.

  7. Click the Subnetwork pulldown menu:

    • If you are using your own subnet (required for Shared VPC), select the /28 subnet you want to use for the connector.
    • If you are not using Shared VPC and prefer to have the connector create a subnet for you instead of creating one explicitly, select Custom IP range from the pulldown menu, then in the IP range field, enter the first address in an unreserved /28 internal CIDR range. This IP range must not overlap with any existing IP address reservations in your VPC network. For example, 10.8.0.0 (a /28 range) works in most new projects.

  8. (Optional) To set scaling options for additional control over the connector, click Show Scaling Settings to display the scaling form.

    1. Set the minimum and maximum number of instances for your connector, or use the defaults, which are 2 (min) and 10 (max). The connector scales out to the maximum specified if traffic usage requires it, but the connector does not scale back in when traffic decreases. You must use values between 2 and 10.
    2. In the Instance Type pulldown menu, choose the machine type to use for the connector, or keep the default e2-micro. When you choose an instance type, the cost sidebar on the right displays bandwidth and cost estimates.
  9. Click Create.

  10. A green check mark will appear next to the connector's name when it is ready to use.
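
The non-overlap requirement in step 7 can be sanity-checked locally before you create the connector. The following is a minimal sketch using Python's standard `ipaddress` module; the reserved ranges shown are placeholders for your VPC network's actual subnet CIDRs:

```python
import ipaddress

def range_is_free(candidate: str, reserved_cidrs: list[str]) -> bool:
    """Return True if the candidate /28 overlaps none of the reserved ranges."""
    net = ipaddress.ip_network(candidate)
    if net.prefixlen != 28:
        raise ValueError("Serverless VPC Access connectors require a /28 range")
    return not any(
        net.overlaps(ipaddress.ip_network(r)) for r in reserved_cidrs
    )

# Placeholder reservations; substitute your VPC's real subnet ranges.
reserved = ["10.128.0.0/20", "10.8.0.0/28"]
print(range_is_free("10.9.0.0/28", reserved))  # True: no overlap
print(range_is_free("10.8.0.0/28", reserved))  # False: already reserved
```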

gcloud

  1. Update gcloud components to the latest version:

    gcloud components update
    
  2. Ensure the Serverless VPC Access API is enabled for your project:

    gcloud services enable vpcaccess.googleapis.com
    
  3. If you are using your own subnet (required for Shared VPC), create a connector with the following command. If you are not using Shared VPC, omit the --subnet-project flag. The --min-instances, --max-instances, and --machine-type flags are optional:

    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --region REGION \
    --subnet SUBNET \
    --subnet-project HOST_PROJECT_ID \
    --min-instances MIN \
    --max-instances MAX \
    --machine-type MACHINE_TYPE
    

    Replace the following:

    • CONNECTOR_NAME: a name for your connector
    • REGION: a region for your connector; this must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
    • SUBNET: the name of your own dedicated /28 subnet that is not used by any other resource
    • HOST_PROJECT_ID: the ID of the host project; supply this only if you are using Shared VPC
    • MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 10. Default is 2.
    • MAX: the maximum number of instances to use for the connector. Use an integer between 2 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in.
    • MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4

      Machine type     Estimated throughput range in Mbps   Price (connector instance plus network egress costs)
      f1-micro         100-500                              f1-micro pricing
      e2-micro         200-1000                             e2-micro pricing
      e2-standard-4    3200-16000                           e2 standard pricing

    For example, if you set MACHINE_TYPE to f1-micro, the estimated throughput for your connector will be 100 Mbps at the default MIN and 500 Mbps at the default MAX.

    For more details and optional arguments, see the gcloud reference.

  4. If you are not using Shared VPC and want to supply a custom IP range instead of using a subnet, create a connector with the command:

    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --network VPC_NETWORK \
    --region REGION \
    --range IP_RANGE
    

    Replace the following:

    • CONNECTOR_NAME: a name for your connector
    • VPC_NETWORK: the VPC network to attach your connector to
    • REGION: a region for your connector. This must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
    • IP_RANGE: an unreserved internal IP range; a /28 of unallocated space is required. Supply the value in CIDR notation (for example, 10.8.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network; 10.8.0.0/28 works in most new projects.

    For more details and optional arguments such as throughput controls, see the gcloud reference.

  5. Verify that your connector is in the READY state before using it:

    gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
    --region REGION
    

    Replace the following:

    • CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step
    • REGION: the region of your connector; this is the region that you specified in the previous step

    The output should contain the line state: READY.
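
To make the scaling arithmetic concrete, the following sketch reproduces the estimated throughput table from this section. The figures are the documented estimates, not guarantees; actual throughput varies with traffic:

```python
# Estimated throughput (Mbps) from the table in this section, keyed by
# machine type. Values are estimates at the default MIN (2) and MAX (10)
# instance counts.
ESTIMATED_THROUGHPUT_MBPS = {
    "f1-micro": (100, 500),
    "e2-micro": (200, 1000),
    "e2-standard-4": (3200, 16000),
}

def throughput_estimate(machine_type: str) -> str:
    at_min, at_max = ESTIMATED_THROUGHPUT_MBPS[machine_type]
    return f"~{at_min} Mbps at MIN instances, ~{at_max} Mbps at MAX instances"

print(throughput_estimate("f1-micro"))
# ~100 Mbps at MIN instances, ~500 Mbps at MAX instances
```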
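
In a deployment script, you might gate on the READY state programmatically. The following is a hedged sketch that parses captured describe output (however you invoke gcloud) rather than calling any API:

```python
def connector_state(describe_output: str) -> str:
    """Extract the `state:` value from `connectors describe` output."""
    for line in describe_output.splitlines():
        if line.strip().startswith("state:"):
            return line.split(":", 1)[1].strip()
    return "UNKNOWN"

# Sample output shape; field names other than `state` are illustrative.
sample = """\
name: projects/my-project/locations/us-central1/connectors/my-connector
state: READY
"""
print(connector_state(sample))  # READY
```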

Terraform

You can use a Terraform resource to enable the vpcaccess.googleapis.com API.

resource "google_project_service" "project" {
  project = var.project_id # Replace this with your project ID in quotes
  service = "vpcaccess.googleapis.com"
}

You can use Terraform modules to create a VPC network and subnet and then create the connector.

module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 3.3.0"
  project_id   = var.project_id # Replace this with your project ID in quotes
  network_name = "my-serverless-network"
  mtu          = 1460

  subnets = [
    {
      subnet_name   = "serverless-subnet"
      subnet_ip     = "10.10.10.0/28"
      subnet_region = "us-central1"
    }
  ]
}

module "serverless-connector" {
  source     = "terraform-google-modules/network/google//modules/vpc-serverless-connector-beta"
  project_id = var.project_id
  vpc_connectors = [{
    name        = "central-serverless"
    region      = "us-central1"
    subnet_name = module.test-vpc-module.subnets["us-central1/serverless-subnet"].name
    # host_project_id = var.host_project_id # Specify a host_project_id for shared VPC
    machine_type  = "e2-standard-4"
    min_instances = 2
    max_instances = 7
    }
    # Uncomment to specify an ip_cidr_range
    #   , {
    #     name          = "central-serverless2"
    #     region        = "us-central1"
    #     network       = module.test-vpc-module.network_name
    #     ip_cidr_range = "10.10.11.0/28"
    #     subnet_name   = null
    #     machine_type  = "e2-standard-4"
    #     min_instances = 2
    #   max_instances = 7 }
  ]
}

Configure your serverless environment to use a connector

After you create a Serverless VPC Access connector, configure your serverless environment to use the connector by following the instructions for your serverless environment:

Configure Cloud Run to use a connector

You can configure a service to use a connector from the Cloud Console, gcloud command-line tool, or YAML file when you create a new service or deploy a new revision:

Console

  1. Go to Cloud Run

  2. Click Create Service if you are configuring a new service. If you are configuring an existing service, click the service, then click Edit and Deploy New Revision.

  3. If you are configuring a new service, fill out the initial service settings page as desired, then click Next > Advanced settings to reach the service configuration page.

  4. Click the Connections tab.

    image

  5. In the VPC Connector field, select a connector to use or select None to disconnect your service from a VPC network.

  6. Click Create or Deploy.

gcloud

To specify a connector during deployment, use the --vpc-connector flag:

gcloud run deploy SERVICE --image IMAGE_URL --vpc-connector CONNECTOR_NAME
  • Replace SERVICE with the name of your service.
  • Replace IMAGE_URL with a reference to your container image.
  • Replace CONNECTOR_NAME with the name of your connector.

To attach, update, or remove a connector for an existing service, use the gcloud run services update command with the --vpc-connector flag to attach or update a connector, or the --clear-vpc-connector flag to remove one.

For example, to attach or update a connector:

gcloud run services update SERVICE --vpc-connector CONNECTOR_NAME
  • Replace SERVICE with the name of your service.
  • Replace CONNECTOR_NAME with the name of your connector.

YAML

You can download and view existing service configuration using the gcloud run services describe --format export command, which yields cleaned results in YAML format. You can then modify the fields described below and upload the modified YAML using the gcloud beta run services replace command. Make sure you only modify fields as documented.

  1. To view and download the configuration:

    gcloud run services describe SERVICE --format export > service.yaml
  2. Add or update the run.googleapis.com/vpc-access-connector attribute under the annotations attribute under the top-level spec attribute:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: SERVICE
    spec:
      template:
        metadata:
          annotations:
            run.googleapis.com/vpc-access-connector: CONNECTOR_NAME
    • Replace SERVICE with the name of your Cloud Run service.
    • Replace CONNECTOR_NAME with the name of your connector.
  3. Replace the service with its new configuration using the following command:

    gcloud beta run services replace service.yaml
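
Before running `gcloud beta run services replace`, you might verify that the exported YAML actually carries the annotation. The following is a minimal string-based sketch (no YAML library), sufficient for the flat annotation line shown above; the sample text is illustrative:

```python
ANNOTATION = "run.googleapis.com/vpc-access-connector"

def connector_from_service_yaml(yaml_text: str):
    """Return the connector name set in the annotation, or None if absent."""
    for line in yaml_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(ANNOTATION + ":"):
            return stripped.split(":", 1)[1].strip()
    return None

sample = """\
apiVersion: serving.knative.dev/v1
kind: Service
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/vpc-access-connector: my-connector
"""
print(connector_from_service_yaml(sample))  # my-connector
```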

Configure Cloud Functions to use a connector

You can configure a function to use a connector from the Google Cloud Console or the gcloud command-line tool:

Console

  1. Go to the Cloud Functions overview page in the Cloud Console:

    Go to Cloud Functions

  2. Click Create function. Alternatively, click an existing function to go to its details page, and click Edit.

  3. Expand the advanced settings by clicking RUNTIME, BUILD AND CONNECTIONS SETTINGS.

  4. In the Connections tab under Egress settings, enter the name of your connector in the VPC connector field.

gcloud

Use the gcloud functions deploy command to deploy the function and specify the --vpc-connector flag:

gcloud functions deploy FUNCTION_NAME \
--vpc-connector CONNECTOR_NAME \
FLAGS...

where:

  • FUNCTION_NAME is the name of your function.
  • CONNECTOR_NAME is the name of your connector.
  • FLAGS... refers to other flags you pass during function deployment.

For more control over which requests are routed through the connector, see Egress settings.

Configure App Engine to use a connector

Python 2

  1. Discontinue use of the App Engine URL Fetch service.

    By default, all requests are routed through the URL Fetch service. This causes requests to your VPC network to fail. To disable this default, see Disabling URL Fetch from handling all outbound requests.

    You can still use the urlfetch library directly for individual requests if needed; however, this is not recommended.

  2. Add the Serverless VPC Access field to your app.yaml file:

    vpc_access_connector:
     name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
    

    Replace the following:

    • PROJECT_ID with your Cloud project ID
    • REGION with the region that your connector is in
    • CONNECTOR_NAME with the name of your connector
  3. Deploy the service:

    gcloud app deploy

    After you deploy your service, it is able to send requests to internal IP addresses in order to access resources in your VPC network.

Java 8

  1. Add the Serverless VPC Access element to your service's appengine-web.xml file:

    <vpc-access-connector>
      <name>projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME</name>
    </vpc-access-connector>
    

    Replace the following:

    • PROJECT_ID with your Cloud project ID
    • REGION with the region that your connector is in
    • CONNECTOR_NAME with the name of your connector
  2. Deploy the service:

    gcloud app deploy WEB-INF/appengine-web.xml

    After you deploy your service, it is able to send requests to internal IP addresses in order to access resources in your VPC network.

All other runtimes

  1. Add the Serverless VPC Access field to your app.yaml file:

    vpc_access_connector:
     name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
    

    Replace the following:

    • PROJECT_ID with your Cloud project ID
    • REGION with the region that your connector is in
    • CONNECTOR_NAME with the name of your connector
  2. Deploy the service:

    gcloud app deploy

    After you deploy your service, it is able to send requests to internal IP addresses in order to access resources in your VPC network.

Restrict access to VPC resources

You can restrict your connector's access to your VPC network by using firewall rules.

When connecting to a Shared VPC network, firewall rules are not automatically created. A user with the Network Administrator role on the host project sets firewall rules when they configure the host project.

When connecting to a standalone VPC network, an implicit firewall rule with priority 1000 is automatically created on your VPC network to allow ingress from the connector's subnet or custom IP range to all destinations in the VPC network. The implicit firewall rule is not visible in the Google Cloud Console and exists only as long as the associated connector exists. If you don't want your connector to be able to reach all destinations in your VPC network, you can restrict its access.

You can restrict connector access by creating ingress rules on the destination resource, or by creating egress rules on the VPC connector.
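
The interplay of rule priorities in the steps that follow can be modeled in a few lines: among matching rules, the lowest priority number wins. This toy model (not the real firewall engine) shows why a blanket DENY at priority 990 plus a narrower ALLOW at 980 overrides the implicit priority-1000 allow rule; the tag names are illustrative:

```python
def effective_action(rules, target_tag):
    """rules: list of (priority, action, target_tags-or-None).
    A rule with target_tags=None applies to every target; the matching
    rule with the lowest priority number determines the outcome."""
    matching = [r for r in rules if r[2] is None or target_tag in r[2]]
    priority, action, _ = min(matching, key=lambda r: r[0])
    return action

rules = [
    (1000, "ALLOW", None),         # implicit rule created with the connector
    (990, "DENY", None),           # blanket deny of connector traffic
    (980, "ALLOW", {"redis-vm"}),  # allow only the tagged resource
]
print(effective_action(rules, "redis-vm"))  # ALLOW
print(effective_action(rules, "web-vm"))    # DENY
```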

Restrict access using ingress rules

Choose either network tags or CIDR ranges to control the incoming traffic to your VPC network.

Network tags

The following steps show how to create ingress rules that restrict a connector's access to your VPC network based on the connector network tags.

  1. Ensure that you have the required permissions to insert firewall rules. You must have one of the following Identity and Access Management (IAM) roles:

  2. Deny connector traffic across your VPC network.

    Create an ingress firewall rule with a priority lower than 1000 on your VPC network to deny ingress from the connector network tag. Lower priority values take precedence, so this rule overrides the implicit firewall rule that Serverless VPC Access creates on your VPC network by default.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --source-tags=VPC_CONNECTOR_NETWORK_TAG \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector.
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999, inclusive. For example, 990.
  3. Allow connector traffic to the resource that should receive connector traffic.

    Use the allow and target-tags flags to create an ingress firewall rule targeting the resource in your VPC network that you want the VPC connector to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --source-tags=VPC_CONNECTOR_NETWORK_TAG \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --target-tags=RESOURCE_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector. If you used the unique network tag in the previous step, use the unique network tag.
    • VPC_NETWORK: the name of your VPC network
    • RESOURCE_TAG: the network tag for the VPC resource that you want your VPC connector to access
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, refer to the documentation for gcloud compute firewall-rules create.

CIDR range

The following steps show how to create ingress rules that restrict a connector's access to your VPC network based on the connector's CIDR range.

  1. Ensure that you have the required permissions to insert firewall rules. You must have one of the following Identity and Access Management (IAM) roles:

  2. Deny connector traffic across your VPC network.

    Create an ingress firewall rule with a priority lower than 1000 on your VPC network to deny ingress from the connector's CIDR range. Lower priority values take precedence, so this rule overrides the implicit firewall rule that Serverless VPC Access creates on your VPC network by default.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --source-ranges=VPC_CONNECTOR_CIDR_RANGE \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_CIDR_RANGE: the CIDR range for the connector whose access you are restricting
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999. For example, 990.
  3. Allow connector traffic to the resource that should receive connector traffic.

    Use the allow and target-tags flags to create an ingress firewall rule targeting the resource in your VPC network that you want the VPC connector to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --source-ranges=VPC_CONNECTOR_CIDR_RANGE \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --target-tags=RESOURCE_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • VPC_CONNECTOR_CIDR_RANGE: the CIDR range for the connector whose access you are restricting
    • VPC_NETWORK: the name of your VPC network
    • RESOURCE_TAG: the network tag for the VPC resource that you want your VPC connector to access
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, see the documentation for gcloud compute firewall-rules create.

Restrict access using egress rules

The following steps show how to create egress rules to restrict connector access.

  1. Ensure that you have the required permissions to insert firewall rules. You must have one of the following Identity and Access Management (IAM) roles:

  2. Deny egress traffic from your connector.

    Create an egress firewall rule on your Serverless VPC Access connector to prevent it from sending outgoing traffic.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --direction=EGRESS \
    --target-tags=VPC_CONNECTOR_NETWORK_TAG \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector.
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999. For example, 990.
  3. Allow egress traffic when the destination is in the CIDR range that you want your connector to access.

    Use the allow and destination-ranges flags to create a firewall rule allowing egress traffic from your connector for a specific destination range. Set the destination range to the CIDR range of the resource in your VPC network that you want your connector to be able to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --destination-ranges=RESOURCE_CIDR_RANGE \
    --direction=EGRESS \
    --network=VPC_NETWORK \
    --target-tags=VPC_CONNECTOR_NETWORK_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • RESOURCE_CIDR_RANGE: the CIDR range of the resource in your VPC network that you want your connector to access
    • VPC_NETWORK: the name of your VPC network
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector. If you used the unique network tag in the previous step, use the unique network tag.
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, refer to the documentation for gcloud compute firewall-rules create.

Troubleshooting

Service account permissions

To perform operations in your Cloud project, Serverless VPC Access uses the Serverless VPC Access Service Agent service account. This service account's email address has the following form:

service-PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com

By default, this service account has the Serverless VPC Access Service Agent role (roles/vpcaccess.serviceAgent). Serverless VPC Access operations may fail if you change this account's permissions.
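
For scripting IAM checks, the service agent address can be derived from the project number; a trivial sketch (the function name is illustrative):

```python
def vpc_access_service_agent(project_number: int) -> str:
    """Build the Serverless VPC Access service agent email for a project."""
    return f"service-{project_number}@gcp-sa-vpcaccess.iam.gserviceaccount.com"

print(vpc_access_service_agent(123456789012))
# service-123456789012@gcp-sa-vpcaccess.iam.gserviceaccount.com
```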

Errors

If creating a connector results in an error, try the following:

  • Specify an RFC 1918 internal IP range that does not overlap with any existing IP address reservations in the VPC network.
  • Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
  • Set the constraints/compute.vmCanIpForward organization policy to allow VMs to enable IP forwarding.

If you've specified a connector but still cannot access resources in your VPC network:

  • Make sure there are no firewall rules on your VPC network with a priority lower than 1000 that deny ingress from your connector's IP range.

Deleting a connector

Before you delete a connector, ensure that no services are still connected to it.

If you are a Shared VPC user who set up connectors in the Shared VPC host project (no longer recommended), you can use the gcloud compute networks vpc-access connectors describe command to list the projects that contain services using a given connector.

To delete a connector, use the Cloud Console or the gcloud command-line tool:

Console

  1. Go to the Serverless VPC Access overview page in the Cloud Console:

    Go to Serverless VPC Access

  2. Select the connector you want to delete.

  3. Click Delete.

gcloud

Use the following gcloud command to delete a connector:

gcloud compute networks vpc-access connectors delete CONNECTOR_NAME --region=REGION

Replace the following:

  • CONNECTOR_NAME with the name of the connector you want to delete
  • REGION with the region where the connector is located

Next steps