Configuring Serverless VPC Access

Serverless VPC Access enables you to connect from a serverless environment on Google Cloud (Cloud Run (fully managed), Cloud Functions, or the App Engine standard environment) directly to your VPC network. This connection makes it possible for your serverless environment to access Compute Engine VM instances, Memorystore instances, and any other resources with an internal IP address. For example, this can be helpful in the following cases:

  • You use Memorystore to store data for a serverless service.
  • Your serverless workloads use third-party software that you run on a Compute Engine VM.
  • You run a backend service on a Managed Instance Group in Compute Engine and need your serverless environment to communicate with this backend without exposure to the public internet.
  • Your serverless environment needs to access data from your on-premises database through Cloud VPN.

Connection to a VPC network enables your serverless environment to send requests to internal DNS names and internal IP addresses as defined by RFC 1918 and RFC 6598. These internal addresses are only accessible from Google Cloud services. Using internal addresses avoids exposing resources to the public internet and improves the latency of communication between your services.

Serverless VPC Access only allows requests to be initiated by the serverless environment. Requests initiated by a VM must use the external address of your serverless service—see Private Google Access for more information.

Serverless VPC Access supports Shared VPC and communication to networks connected via Cloud Interconnect, Cloud VPN, and VPC Network Peering. Serverless VPC Access does not support legacy networks.

About Serverless VPC Access connectors

Serverless VPC Access is based on a resource called a connector. A connector handles traffic between your serverless environment and your VPC network. When you create a connector in your Google Cloud project, you attach it to a specific VPC network and region. You can then configure your serverless services to use the connector for outbound network traffic.

When you create a connector, you can use your own CIDR /28 subnet or specify a custom IP range. Traffic sent through the connector into your VPC network originates from your /28 subnet or from an address in the custom IP range. If you use a custom IP range, it must be a CIDR /28 range that is not already reserved in your VPC network.

If the subnet is not a shared subnet, an implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's subnet or custom IP range to all destinations in the network.

Serverless VPC Access automatically provisions throughput for a connector in 100 Mbps increments depending on the amount of traffic sent through the connector. Automatically provisioned throughput can only scale up and does not scale down. A connector always has at least 200 Mbps provisioned and can scale up to 1000 Mbps. You can configure throughput scaling limits when you create a connector; note that actual throughput through a connector may exceed the provisioned throughput, especially for short traffic bursts.
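
For example, with the gcloud tool the scaling limits can be set at creation time using the --min-throughput and --max-throughput flags (values in Mbps). The following is a minimal sketch; the connector name, network, region, and IP range are placeholder values:

# Create a connector whose provisioned throughput scales between
# 300 Mbps and 600 Mbps rather than the default 200-1000 Mbps.
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28 \
  --min-throughput 300 \
  --max-throughput 600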

Serverless VPC Access connectors incur a monthly charge based on usage. See Pricing for details.

[Figure: Serverless VPC Access example]

Note that:

  • A connector must be located in the same project as the serverless service (such as a Cloud Run service, App Engine app, or Cloud Function) that connects to it. (Originally, connectors for Shared VPC networks were created in the host project; this is no longer recommended.)
  • A connector must be located in the same region as the serverless service that connects to it. See Supported regions for the list of regions in which you can create a connector.
  • Traffic to internal IP addresses and internal DNS names is routed through the connector. By default, traffic to external IP addresses is routed through the internet.
  • If you use Shared VPC, you must configure the host project before creating a connector; see Configuring the host project if using Shared VPC below.
  • You can use the same connector with multiple serverless services.
  • For resources (such as Google Cloud VM instances or GKE clusters) that allow cross-region access, a connector can be in a different region than the resource it is sending traffic to. You are billed for egress from the connector; see Pricing.

Creating a connector

To create a connector, use the Cloud Console or the gcloud command-line tool:

Console

  1. If you are using Shared VPC, make sure you have configured the host project.

  2. Ensure the Serverless VPC Access API is enabled for your project.

  3. Go to the Serverless VPC Access overview page.

  4. Click Create connector.

  5. In the Name field, enter a name for your connector.

  6. In the Region field, select a region for your connector. This must match the region of your serverless service—see Supported regions.

  7. In the Network field, select the VPC network to attach your connector to.

  8. Click the Subnetwork pulldown menu:

    • If you are using your own subnet (required for Shared VPC), select the /28 subnet you want to use for the connector.
    • If you are not using Shared VPC and prefer to have the connector create a subnet instead of creating one explicitly, select Custom IP range from the pulldown menu, then in the IP range field, enter the first address in an unreserved CIDR /28 internal IP range. This IP range must not overlap with any existing IP address reservations in your VPC network. For example, 10.8.0.0 (/28) works in most new projects.

  9. (Optional) For additional control over your connector's throughput, edit the Minimum throughput and Maximum throughput fields.

  10. Click Create.

  11. A green check mark will appear next to the connector's name when it is ready to use.

gcloud

  1. If you are using Shared VPC, make sure you have configured the host project.

  2. Update gcloud components to the latest version:

    gcloud components update
    
  3. Ensure the Serverless VPC Access API is enabled for your project:

    gcloud services enable vpcaccess.googleapis.com
    
  4. If you are using your own subnet (required for Shared VPC), create a connector with the command:

    # If you are not using Shared VPC, omit the --subnet-project flag.
    gcloud beta compute networks vpc-access connectors create [CONNECTOR_NAME] \
    --region [REGION] \
    --subnet [SUBNET] \
    --subnet-project [HOST-PROJECT-ID]

    Where:

    • [CONNECTOR_NAME] is a name for your connector.
    • [REGION] is a region for your connector. This must match the region of your serverless service—see Supported regions.
    • [SUBNET] is your own dedicated /28 subnet that is not used by any other resource. The value to supply is the name of the subnet, not its IP range.
    • [HOST-PROJECT-ID] is the ID of the host project. Supply this only if you are using Shared VPC.

    For more details and optional arguments such as throughput controls, see the gcloud reference. A complete example run is shown after these steps.

  5. If you are not using Shared VPC and want to supply a custom IP range instead of using a subnet, create a connector with the command:

    gcloud compute networks vpc-access connectors create [CONNECTOR_NAME] \
    --network [VPC_NETWORK] \
    --region [REGION] \
    --range [IP_RANGE]
    

    Where:

    • [CONNECTOR_NAME] is a name for your connector.
    • [VPC_NETWORK] is the VPC network to attach your connector to.
    • [REGION] is a region for your connector. This must match the region of your serverless service—see Supported regions.
    • [IP_RANGE] is an unreserved internal IP range; a /28 of unallocated space is required. The value to supply is the range in CIDR notation (for example, 10.8.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network; 10.8.0.0/28 works in most new projects.

    For more details and optional arguments such as throughput controls, see the gcloud reference.

  6. Verify that your connector is in the READY state before using it:

    gcloud compute networks vpc-access connectors describe [CONNECTOR_NAME] --region [REGION]
    

    The output should contain the line state: READY.
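
For reference, a hypothetical Shared VPC run might look like the following; the connector, subnet, region, and project names are examples only:

# Create a connector that uses an existing /28 subnet owned by the
# Shared VPC host project (all names here are hypothetical).
gcloud beta compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --subnet my-connector-subnet \
  --subnet-project my-host-project

# Print just the connector state; expect READY.
gcloud compute networks vpc-access connectors describe my-connector \
  --region us-central1 \
  --format "value(state)"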

Configuring the host project if using Shared VPC

If you are creating a connector for Shared VPC, you must configure the host project as follows:

  • Add firewall rules to allow required IP ranges to access the connector.
  • Grant each service project the Compute Network User role in the host project.
  • Create a subnet in the host project to be used when creating a Shared VPC connector.

Adding firewall rules to allow IP ranges

These steps must be performed by a user with the Network Administrator role on the host project (that is, the Shared VPC Admin).

You must create firewall rules to allow requests from the following IP ranges to reach the connector and to be reached by the connector:

  • NAT ranges
    • 107.178.230.64/26
    • 35.199.224.0/19
  • Health check ranges
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 108.170.220.0/23

These ranges are used by the Google infrastructure underlying Cloud Run, Cloud Functions, and the App Engine standard environment. All requests from these IPs are guaranteed to originate from Google infrastructure, which ensures that each Cloud Run service, Cloud Function, and App Engine app communicates only with the VPC connector it is connected to.

For a simple configuration, apply the rules to allow serverless services in any service project connected to the Shared VPC network to send requests to any resource in the network.

To apply these rules:

  1. Run the following three commands to set the rules to allow requests from the serverless environment to reach all VPC Connectors in the network:

    gcloud compute firewall-rules create serverless-to-vpc-connector \
    --allow tcp:667,udp:665-666,icmp \
    --source-ranges 107.178.230.64/26,35.199.224.0/19 \
    --direction=INGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK

    gcloud compute firewall-rules create vpc-connector-to-serverless \
    --allow tcp:667,udp:665-666,icmp \
    --destination-ranges 107.178.230.64/26,35.199.224.0/19 \
    --direction=EGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK

    gcloud compute firewall-rules create vpc-connector-health-checks \
    --allow tcp:667 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16,108.170.220.0/23 \
    --direction=INGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK

    Where VPC_NETWORK is the name of the VPC network to attach your connector to.

  2. Run the following command to allow requests from any VPC connector to any other resource on the network:

    gcloud compute firewall-rules create vpc-connector-egress \
    --allow tcp,udp,icmp \
    --direction=INGRESS \
    --source-tags vpc-connector \
    --network=VPC_NETWORK

    This rule gives the VPC connector access to every resource in the network. To allow VPC connectors to access only a narrower set of resources, specify a target for these firewall rules.

Creating firewall rules with narrower scope

Following the procedure in Adding firewall rules to allow IP ranges results in firewall rules that apply to all connectors, both current ones and ones created in the future. If you don't want this, but instead want to create rules for specific connectors only, you can scope the rules so that they apply only to those connectors.

To limit the scope of the rules to specific connectors, use one of the following mechanisms; an example follows the list.

  • Network tags. Every connector has two network tags: vpc-connector and vpc-connector-<region>-<connector-name>. Use the latter format to limit the scope of your firewall rules to a specific connector.
  • IP ranges. Use this for egress rules only, because it does not work for ingress rules. You can use the IP range of the connector subnet to limit the scope of your firewall rules to a single VPC connector.
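
For example, an ingress rule scoped to a single connector with the per-connector network tag might look like the following sketch; the connector name, port, and target tag are hypothetical:

# Allow only the connector "my-connector" in us-central1 to reach
# port 5432 on VMs tagged "db" (names and port are hypothetical).
gcloud compute firewall-rules create my-connector-to-db \
  --network VPC_NETWORK \
  --direction INGRESS \
  --allow tcp:5432 \
  --source-tags vpc-connector-us-central1-my-connector \
  --target-tags db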

Granting permissions to service accounts in your service projects

For each service project that will use VPC connectors, a Shared VPC Admin must grant the Compute Network User role (roles/compute.networkUser) in the host project to the service project's cloudservices and vpcaccess service accounts.

To grant the role:

  1. Use these commands:

    gcloud projects add-iam-policy-binding HOST-PROJECT-ID \
    --role "roles/compute.networkUser" \
    --member "serviceAccount:service-SERVICE-PROJECT-NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com"

    gcloud projects add-iam-policy-binding HOST-PROJECT-ID \
    --role "roles/compute.networkUser" \
    --member "serviceAccount:SERVICE-PROJECT-NUMBER@cloudservices.gserviceaccount.com"

  2. If the @gcp-sa-vpcaccess service account does not exist, turn on the Serverless VPC Access API in the service project and try again:

    gcloud services enable vpcaccess.googleapis.com

If you prefer not to grant these service accounts access to the entire Shared VPC network and would rather only grant access to specific subnets, you can instead grant these roles to these service accounts on specific subnets only.
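
For example, a per-subnet grant might look like the following sketch; the subnet name and region are hypothetical:

# Grant the Network User role on a single subnet rather than on the
# whole host project (subnet name and region are hypothetical).
gcloud compute networks subnets add-iam-policy-binding my-connector-subnet \
  --project HOST-PROJECT-ID \
  --region us-central1 \
  --role "roles/compute.networkUser" \
  --member "serviceAccount:service-SERVICE-PROJECT-NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com"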

Creating a subnet

When using Shared VPC, the Shared VPC Admin must create a subnet for each connector. Follow the documentation on adding a subnet to add a /28 subnet to the Shared VPC network. This subnet must be in the same region as the serverless services that will use the connector.
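
For example, a sketch of creating such a subnet in the host project; all names here are hypothetical:

# Create a dedicated /28 subnet for a connector in the host project.
# The region must match the serverless services that will use it.
gcloud compute networks subnets create my-connector-subnet \
  --project HOST-PROJECT-ID \
  --network VPC_NETWORK \
  --region us-central1 \
  --range 10.8.0.0/28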

Deleting a connector

Before you delete a connector, ensure that no services are still using it. See the relevant product documentation for information on disconnecting a connector from a service. Also note that you cannot delete a VPC network if a Serverless VPC Access connector is still attached to it. You must delete all attached connectors before deleting the VPC network.

To delete a connector, use the Cloud Console or the gcloud command-line tool:

Console

  1. Go to the Serverless VPC Access overview page.

  2. Select the connector you want to delete.

  3. Click Delete.

gcloud

Use the following gcloud command to delete a connector:

gcloud compute networks vpc-access connectors delete [CONNECTOR_NAME] --region [REGION]

Where:

  • [CONNECTOR_NAME] is the name of the connector you want to delete.
  • [REGION] is the region where the connector is located.

Configuring your service to use a connector

After creating a connector, you can configure your serverless services to use it. How you configure a service to use a connector depends on the product; for specific instructions, see the guide for the product you are using (Cloud Run, Cloud Functions, or the App Engine standard environment).

After your service is connected to a VPC network, you can reach VM instances and other internal resources by sending requests to their internal IP addresses or DNS names.
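
For example, for Cloud Run (fully managed), the connector is specified with the --vpc-connector flag at deploy time. A minimal sketch, with hypothetical service, image, and connector names:

# Deploy a Cloud Run service whose outbound requests to internal
# addresses are routed through the connector (names are hypothetical).
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --vpc-connector my-connector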

Adding VPC Service Controls

Once you have created a connector and configured your service, you can mitigate the risk of data exfiltration and protect resources and data by using VPC Service Controls with the Serverless VPC Access API.

For general information on enabling VPC Service Controls, see Creating a service perimeter.

Supported services

You can use Serverless VPC Access to reach a VPC network from the following services:

  • Cloud Run (fully managed)
  • Cloud Functions
  • App Engine standard environment

Supported regions

You can create a Serverless VPC Access connector in the following regions:

  • asia-east1
  • asia-east2
  • asia-northeast1
  • asia-northeast2
  • asia-northeast3
  • asia-south1
  • asia-southeast1
  • asia-southeast2
  • australia-southeast1
  • europe-north1
  • europe-west1
  • europe-west2
  • europe-west3
  • europe-west4
  • europe-west6
  • northamerica-northeast1
  • southamerica-east1
  • us-central1
  • us-east1
  • us-east4
  • us-west1
  • us-west2
  • us-west3
  • us-west4

Supported networking protocols

The following table describes the protocols supported for each Serverless VPC Access connector egress setting. See Configuring network settings for more information about the available egress settings.

Protocol   Route only requests to private IPs through the VPC connector   Route all traffic through the VPC connector
TCP        Supported                                                       Supported
UDP        Supported                                                       Supported
ICMP       Not supported                                                   Supported only for external IP addresses
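
For example, on Cloud Run (fully managed) the egress setting corresponds to the --vpc-egress flag, whose values private-ranges-only and all-traffic map to the two columns above. A sketch with hypothetical names:

# Route all outbound traffic from an existing service through its
# connector instead of only private-range traffic (names hypothetical).
gcloud run services update my-service \
  --region us-central1 \
  --vpc-egress all-traffic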

Curated IAM roles

The following table describes the Identity and Access Management (IAM) roles associated with Serverless VPC Access. See Serverless VPC Access roles in the IAM documentation for a list of permissions associated with each role.

Role                                                     Description
Serverless VPC Access Admin (roles/vpcaccess.admin)      Full access to all Serverless VPC Access resources
Serverless VPC Access User (roles/vpcaccess.user)        User of Serverless VPC Access connectors
Serverless VPC Access Viewer (roles/vpcaccess.viewer)    Viewer of all Serverless VPC Access resources
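
For example, to let a developer configure services to use connectors without granting full admin rights, you might grant the User role; the project ID and member below are hypothetical:

# Grant the Serverless VPC Access User role at the project level.
gcloud projects add-iam-policy-binding my-project \
  --member "user:dev@example.com" \
  --role "roles/vpcaccess.user"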

Service account

To perform operations in your Cloud project, the Serverless VPC Access service uses the Serverless VPC Access Service Agent service account. This service account's email address has the following form:

service-PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com

By default, this service account has the Serverless VPC Access Service Agent role (roles/vpcaccess.serviceAgent). Serverless VPC Access operations may fail if you change this account's permissions.
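
To confirm the binding is still in place, a sketch (the project ID is hypothetical):

# List members that hold the Serverless VPC Access Service Agent role.
gcloud projects get-iam-policy my-project \
  --flatten "bindings[].members" \
  --filter "bindings.role:roles/vpcaccess.serviceAgent" \
  --format "value(bindings.members)"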

Audit logging

See Serverless VPC Access audit logging information.

Pricing

For Serverless VPC Access pricing, see Serverless VPC Access on the VPC pricing page.

Troubleshooting

If creating a connector results in an error, try the following and re-create your connector:

  • Specify an RFC 1918 internal IP range that does not overlap with any existing IP address reservations in the VPC network.
  • Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
  • Set the constraints/compute.vmCanIpForward organization policy to allow VMs to enable IP forwarding.

If you've specified a connector for a serverless service but still cannot access resources in your VPC network:

  • Make sure there are no firewall rules on your VPC network with a priority value lower than 1000 that deny ingress from your connector's IP range. You can list the network's rules in priority order, as shown below.
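
A sketch of such a check; the network name is hypothetical:

# List firewall rules on the network, lowest priority value
# (highest precedence) first, to spot deny rules that take effect
# before the implicit connector allow rule at priority 1000.
gcloud compute firewall-rules list \
  --filter "network:my-network" \
  --sort-by priority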