Configuring Serverless VPC Access

Serverless VPC Access enables you to connect from a serverless environment on Google Cloud (Cloud Run, Cloud Functions, or the App Engine standard environment) directly to your VPC network. This connection makes it possible for your serverless environment to access Compute Engine VM instances, Memorystore instances, and any other resources with an internal IP address. For example, this can be helpful in the following cases:

  • You use Memorystore to store data for a serverless service.
  • Your serverless workloads use third-party software that you run on a Compute Engine VM.
  • You run a backend service on a Managed Instance Group in Compute Engine and need your serverless environment to communicate with this backend without exposure to the public internet.
  • Your serverless environment needs to access data from your on-premises database through Cloud VPN.

Connection to a VPC network enables your serverless environment to send requests to internal DNS names and internal IP addresses as defined by RFC 1918 and RFC 6598. These internal addresses are only accessible from Google Cloud services. Using internal addresses avoids exposing resources to the public internet and improves the latency of communication between your services.
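For intuition, the RFC 1918 and RFC 6598 ranges named above can be checked with a short Python sketch (the helper below is illustrative only and is not part of any Google Cloud API):

```python
import ipaddress

# RFC 1918 private ranges plus the RFC 6598 shared address space (100.64.0.0/10).
INTERNAL_RANGES = [
    ipaddress.ip_network(r)
    for r in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10")
]

def is_internal(addr: str) -> bool:
    """Return True if addr falls within an RFC 1918 or RFC 6598 range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_RANGES)

print(is_internal("10.8.0.5"))    # True (RFC 1918)
print(is_internal("100.64.1.1"))  # True (RFC 6598)
print(is_internal("8.8.8.8"))     # False (public)
```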

Serverless VPC Access only allows requests to be initiated by the serverless environment. Requests initiated by a VM must use the external address of your serverless service—see Private Google Access for more information.

Serverless VPC Access supports Shared VPC and communication to networks connected via Cloud Interconnect, Cloud VPN, and VPC Network Peering. Serverless VPC Access does not support legacy networks.

About Serverless VPC Access connectors

Serverless VPC Access is based on a resource called a connector. A connector handles traffic between your serverless environment and your VPC network. When you create a connector in your Google Cloud project, you attach it to a specific VPC network and region. You can then configure your serverless services to use the connector for outbound network traffic.

When you create a connector, you can assign it either your own /28 subnet or a custom IP range. Traffic sent through the connector into your VPC network originates from an address in that subnet or range. If you use a custom IP range, it must be a /28 CIDR range that is not already reserved in your VPC network.
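Before creating a connector, you can sanity-check a candidate range for the /28 size and overlap requirements. The sketch below uses Python's ipaddress module with a hypothetical list of already-reserved ranges:

```python
import ipaddress

def is_valid_connector_range(candidate: str, reserved: list[str]) -> bool:
    """Check that candidate is a /28 that overlaps no reserved range."""
    net = ipaddress.ip_network(candidate)
    if net.prefixlen != 28:
        return False
    return not any(net.overlaps(ipaddress.ip_network(r)) for r in reserved)

reserved = ["10.128.0.0/20", "10.10.10.0/28"]  # example existing reservations
print(is_valid_connector_range("10.8.0.0/28", reserved))     # True
print(is_valid_connector_range("10.128.0.16/28", reserved))  # False: overlaps
print(is_valid_connector_range("10.8.0.0/24", reserved))     # False: not a /28
```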

If the subnet is not a shared subnet, an implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's subnet or custom IP range to all destinations in the network.

Serverless VPC Access automatically provisions throughput for a connector in 100 Mbps increments depending on the amount of traffic sent through the connector. Automatically provisioned throughput can only scale up and does not scale down. A connector always has at least 100 Mbps provisioned and can scale up to 16 Gbps. You can configure throughput scaling limits when you create a connector; note that actual throughput through a connector may exceed the provisioned throughput, especially for short traffic bursts.
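The scale-up-only behavior can be pictured with a small model (illustrative only; the actual provisioning algorithm is internal to Google Cloud):

```python
def provisioned_throughput(observed_mbps, minimum=100, maximum=16000):
    """Model scale-up-only provisioning in 100 Mbps increments."""
    provisioned = minimum
    for observed in observed_mbps:
        needed = min(maximum, -(-observed // 100) * 100)  # round up to 100 Mbps
        provisioned = max(provisioned, needed)            # never scales back down
    return provisioned

# A burst to 250 Mbps raises provisioning to 300 Mbps, where it stays
# even after traffic drops back to 120 Mbps.
print(provisioned_throughput([40, 250, 120]))  # 300
```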

Serverless VPC Access connectors incur a monthly charge based on usage. See Pricing for details.

Serverless VPC Access example diagram

Note that:

  • A connector must be located in the same project as the serverless service (such as Cloud Run services, App Engine apps, or Cloud Functions) that connects to it.
  • A connector must be located in the same region as the serverless service that connects to it. See Supported regions for the list of regions in which you can create a connector.
  • Traffic to internal IP addresses and internal DNS names is routed through the connector. By default, traffic to external IP addresses is routed through the internet.
  • If you use Shared VPC:
    • You can use the same connector with multiple serverless services.
    • For resources (such as Google Cloud VM instances or GKE clusters) that allow cross-region access, a connector can be in a different region than the resource it is sending traffic to. You are billed for egress from the connector—see Pricing.

Serverless VPC Access network tags

Serverless VPC Access network tags let you specify VPC connectors in firewall rules and routes.

Every Serverless VPC Access connector automatically receives two network tags (sometimes called instance tags):

  • Universal network tag: vpc-connector. Applies to all existing connectors and to any connectors created in the future.
  • Unique network tag: vpc-connector-REGION-CONNECTOR_NAME. Applies only to the connector CONNECTOR_NAME in region REGION.

These network tags cannot be deleted. New network tags cannot be added.
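The two tags follow a fixed pattern, shown here as a hypothetical helper:

```python
def connector_network_tags(region: str, connector_name: str) -> list[str]:
    """Return the universal and unique network tags for a connector."""
    return ["vpc-connector", f"vpc-connector-{region}-{connector_name}"]

print(connector_network_tags("us-central1", "my-connector"))
# ['vpc-connector', 'vpc-connector-us-central1-my-connector']
```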

Creating a connector

Console

  1. If you are using Shared VPC, make sure you have configured the host project.

  2. Ensure the Serverless VPC Access API is enabled for your project:

    Enable API

  3. Go to the Serverless VPC Access overview page.

    Go to Serverless VPC Access

  4. Click Create connector.

  5. In the Name field, enter a name for your connector.

  6. In the Region field, select a region for your connector. This must match the region of your serverless service—see Supported regions.

  7. In the Network field, select the VPC network to attach your connector to.

  8. Click the Subnetwork pulldown menu:

    • If you are using your own subnet (required for Shared VPC), select the /28 subnet you want to use for the connector.
    • If you are not using Shared VPC and prefer to have the connector create a subnet for you, select Custom IP range from the pulldown menu, then in the IP range field, enter the first address of an unreserved internal /28 CIDR range. This IP range must not overlap with any existing IP address reservations in your VPC network. For example, 10.8.0.0 (/28) works in most new projects.

  9. (Optional) To set scaling options for additional control over the connector, click Show Scaling Settings to display the scaling form:


    1. Set the minimum and maximum number of instances for your connector, or use the defaults, which are 2 (min) and 10 (max). The connector scales out to the maximum specified if traffic usage requires it, but the connector does not scale back in when traffic decreases. You must use values between 2 and 10.
    2. In the Instance Type pulldown menu, choose the machine type for the connector, or use the default e2-micro. When you select an instance type, the cost sidebar displays estimated bandwidth and cost.
  10. Click Create.

  11. A green check mark will appear next to the connector's name when it is ready to use.

gcloud

  1. If you are using Shared VPC, make sure you have configured the host project.

  2. Update gcloud components to the latest version:

    gcloud components update
    
  3. Ensure the Serverless VPC Access API is enabled for your project:

    gcloud services enable vpcaccess.googleapis.com
    
  4. If you are using your own subnet (required for Shared VPC), create a connector with the command:

    # If you are not using Shared VPC, omit the --subnet-project flag.
    # The instance and machine-type flags are optional; defaults are
    # --min-instances 2, --max-instances 10, and --machine-type e2-micro.
    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --region REGION \
    --subnet SUBNET \
    --subnet-project HOST_PROJECT_ID \
    --min-instances MIN \
    --max-instances MAX \
    --machine-type MACHINE_TYPE
    

    Replace the following:

    • CONNECTOR_NAME: a name for your connector
    • REGION: a region for your connector; this must match the region of your serverless service—see Supported regions
    • SUBNET: the name of your own dedicated /28 subnet that is not used by any other resource
    • HOST_PROJECT_ID: the ID of the host project; supply this only if you are using Shared VPC
    • MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 10. Default is 2.
    • MAX: the maximum number of instances to use for the connector. Use an integer between 2 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in.
    • MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4

      Machine type    Estimated throughput range (Mbps)    Price (connector instances plus network egress costs)
      f1-micro        100-500                              f1-micro pricing
      e2-micro        200-1000                             e2-micro pricing
      e2-standard-4   3200-16000                           e2 standard pricing

    For example, if you set MACHINE_TYPE to f1-micro, the estimated throughput for your connector will be 100 Mbps at the default MIN and 500 Mbps at the default MAX.

    For more details and optional arguments, see the gcloud reference.

  5. If you are not using Shared VPC and want to supply a custom IP range instead of using a subnet, create a connector with the command:

    gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --network VPC_NETWORK \
    --region REGION \
    --range IP_RANGE
    

    Replace the following:

    • CONNECTOR_NAME: a name for your connector
    • VPC_NETWORK: the VPC network to attach your connector to
    • REGION: a region for your connector; this must match the region of your serverless service—see Supported regions
    • IP_RANGE: an unreserved internal /28 range with no allocated addresses, supplied in CIDR notation (for example, 10.8.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network; 10.8.0.0/28 works in most new projects.

    For more details and optional arguments such as throughput controls, see the gcloud reference.

  6. Verify that your connector is in the READY state before using it:

    gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
    --region REGION
    

    Replace the following:

    • CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step
    • REGION: the region of your connector; this is the region that you specified in the previous step

    The output should contain the line state: READY.
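The throughput estimates in the table in step 4 scale roughly linearly with instance count between MIN and MAX. The per-instance rates below are inferred from the documented endpoints (range divided by the 2-10 instance bounds) and are not an official formula:

```python
# Inferred per-instance throughput in Mbps; assumed values, not an official spec.
PER_INSTANCE_MBPS = {"f1-micro": 50, "e2-micro": 100, "e2-standard-4": 1600}

def estimated_throughput_mbps(machine_type: str, instances: int) -> int:
    """Estimated connector throughput for a given instance count (2-10)."""
    if not 2 <= instances <= 10:
        raise ValueError("instance count must be between 2 and 10")
    return PER_INSTANCE_MBPS[machine_type] * instances

print(estimated_throughput_mbps("f1-micro", 2))        # 100
print(estimated_throughput_mbps("f1-micro", 10))       # 500
print(estimated_throughput_mbps("e2-standard-4", 10))  # 16000
```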

Terraform

You can use a Terraform resource to enable the vpcaccess.googleapis.com API.

resource "google_project_service" "project" {
  project = var.project_id # Replace this with your project ID in quotes
  service = "vpcaccess.googleapis.com"
}

You can use Terraform modules to create a VPC network and subnet and then create the connector.

module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 3.3.0"
  project_id   = var.project_id # Replace this with your project ID in quotes
  network_name = "my-serverless-network"
  mtu          = 1460

  subnets = [
    {
      subnet_name   = "serverless-subnet"
      subnet_ip     = "10.10.10.0/28"
      subnet_region = "us-central1"
    }
  ]
}

module "serverless-connector" {
  source     = "terraform-google-modules/network/google//modules/vpc-serverless-connector-beta"
  project_id = var.project_id
  vpc_connectors = [{
    name        = "central-serverless"
    region      = "us-central1"
    subnet_name = module.test-vpc-module.subnets["us-central1/serverless-subnet"].name
    # host_project_id = var.host_project_id # Specify a host_project_id for shared VPC
    machine_type  = "e2-standard-4"
    min_instances = 2
    max_instances = 7
    }
    # Uncomment to specify an ip_cidr_range
    #   , {
    #     name          = "central-serverless2"
    #     region        = "us-central1"
    #     network       = module.test-vpc-module.network_name
    #     ip_cidr_range = "10.10.11.0/28"
    #     subnet_name   = null
    #     machine_type  = "e2-standard-4"
    #     min_instances = 2
    #   max_instances = 7 }
  ]
}

Configuring the host project if using Shared VPC

If you are creating a connector for Shared VPC, you must configure the host project as follows:

  • Add firewall rules to allow required IP ranges to access the connector.
  • Grant each service project the Compute Network User role in the host project.
  • Create a subnet in the host project to be used when creating a Shared VPC connector.

Adding firewall rules to allow IP ranges

These steps must be performed by a user with a role on the host project that grants permission to create firewall rules.

You must create firewall rules to allow requests from the following IP ranges to reach the connector and to be reached by the connector:

  • NAT ranges
    • 107.178.230.64/26
    • 35.199.224.0/19
  • Health check ranges
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 108.170.220.0/23

These ranges are used by the Google infrastructure underlying Cloud Run, Cloud Functions, and App Engine Standard. All requests from these IPs are guaranteed to originate from Google infrastructure, which ensures that each Cloud Run, Cloud Functions, and App Engine service/function/app only communicates with the VPC Connector it is connected to.

For a simple configuration, apply the rules to allow serverless services in any service project connected to the Shared VPC network to send requests to any resource in the network.

To apply these rules:

  1. Run the following three commands to set the rules to allow requests from the serverless environment to reach all VPC Connectors in the network:

    gcloud compute firewall-rules create serverless-to-vpc-connector \
    --allow tcp:667,udp:665-666,icmp \
    --source-ranges 107.178.230.64/26,35.199.224.0/19 \
    --direction=INGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK
    gcloud compute firewall-rules create vpc-connector-to-serverless \
    --allow tcp:667,udp:665-666,icmp \
    --destination-ranges 107.178.230.64/26,35.199.224.0/19 \
    --direction=EGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK
    gcloud compute firewall-rules create vpc-connector-health-checks \
    --allow tcp:667 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16,108.170.220.0/23 \
    --direction=INGRESS \
    --target-tags vpc-connector \
    --network=VPC_NETWORK

    Where VPC_NETWORK is the VPC network to attach your connector to.

  2. Run the following command to allow requests from any VPC connector to all resources on the network:

    gcloud compute firewall-rules create vpc-connector-egress \
    --allow tcp,udp,icmp \
    --direction=INGRESS \
    --source-tags vpc-connector \
    --network=VPC_NETWORK

    This rule gives the VPC connector access to every resource in the network. To allow VPC connectors to access only a narrower set of resources, specify a target for these firewall rules. Note that if you specify a target, you must create a new set of firewall rules every time you create a new VPC connector.

Creating firewall rules with narrower scope

Following the procedure in Adding firewall rules to allow IP ranges results in firewall rules that apply to all connectors, both current ones and ones created in the future. If you don't want this, but instead want to create rules for specific connectors only, you can scope the rules so that they apply only to those connectors.

To limit the scope of the rules to specific connectors, you can use one of the following mechanisms.

  • Network tags. Every connector has two network tags: vpc-connector and vpc-connector-REGION-CONNECTOR_NAME. Use the latter format to limit the scope of your firewall rules to a specific connector.
  • IP ranges. Use this for the Egress rules only, because it doesn't work for Ingress. You can use the IP range of the connector subnet to limit the scope of your firewall rules to a single VPC connector.

Granting permissions to service accounts in your service projects

For each service project that will use VPC connectors, a Shared VPC Admin must grant the Compute Network User role (roles/compute.networkUser) in the host project to the service project's cloudservices and vpcaccess service accounts.

To grant the role:

  1. Use these commands:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --role "roles/compute.networkUser" \
    --member "serviceAccount:service-SERVICE_PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com"
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --role "roles/compute.networkUser" \
    --member "serviceAccount:SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com"
  2. If the @gcp-sa-vpcaccess service account does not exist, turn on the Serverless VPC Access API in the service project and try again:

    gcloud services enable vpcaccess.googleapis.com

If you prefer not to grant these service accounts access to the entire Shared VPC network and would rather only grant access to specific subnets, you can instead grant these roles to these service accounts on specific subnets only.

Creating a subnet

When using Shared VPC, the Shared VPC Admin must create a subnet for each connector. Follow the documentation on adding a subnet to add a /28 subnet to the Shared VPC network. This subnet must be in the same region as the serverless services that will use the connector.

Deleting a connector

Before you delete a connector, ensure that no services are still using it. See the relevant product documentation for information on disconnecting a connector from a service.

To delete a connector, use the Cloud Console or the gcloud command-line tool:

Console

  1. Go to the Serverless VPC Access overview page.

    Go to Serverless VPC Access

  2. Select the connector you want to delete.

  3. Click Delete.

gcloud

Use the following gcloud command to delete a connector:

gcloud compute networks vpc-access connectors delete CONNECTOR_NAME \
--region REGION

Replace the following:

  • CONNECTOR_NAME is the name of the connector you want to delete.
  • REGION is the region where the connector is located.

Configuring your service to use a connector

After creating a connector, you can configure your serverless services to use it. How you configure a service to use a connector depends on the product; for specific instructions, see the guide for Cloud Run, Cloud Functions, or App Engine.

After your service is connected to a VPC network, you can reach VM instances and other internal resources by sending requests to their internal IP addresses or DNS names.

Restricting access to VPC resources

You can restrict your connector's access to your VPC network by using firewall rules.

When connecting to a Shared VPC network, firewall rules are not automatically created. A user with the Network Administrator role on the host project sets firewall rules when they configure the host project. If you use Shared VPC, see Creating firewall rules with narrower scope to learn about restricting access.

When connecting to a standalone VPC network, an implicit firewall rule with priority 1000 is automatically created on your VPC network to allow ingress from the connector's subnet or custom IP range to all destinations in the VPC network. The implicit firewall rule is not visible in the Google Cloud Console and exists only as long as the associated connector exists. If you don't want your connector to be able to reach all destinations in your VPC network, you can restrict its access.

You can restrict connector access by creating ingress rules on the destination resource, or by creating egress rules on the VPC connector.

Restricting access using ingress rules

Choose either network tags or CIDR ranges to control the incoming traffic to your VPC network:

Network tags

The following steps show how to create ingress rules that restrict a connector's access to your VPC network based on the connector network tags.

  1. Ensure that you have the required permissions to insert firewall rules. You need an Identity and Access Management (IAM) role on the project that allows creating firewall rules.

  2. Deny connector traffic across your VPC network.

    Create an ingress firewall rule with a priority value lower than 1000 (lower values take precedence) on your VPC network to deny ingress from the connector network tag. This overrides the implicit firewall rule that Serverless VPC Access creates on your VPC network by default.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --source-tags=VPC_CONNECTOR_NETWORK_TAG \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector.
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999, inclusive. For example, 990.
  3. Allow connector traffic to the resource that should receive connector traffic.

    Use the allow and target-tags flags to create an ingress firewall rule targeting the resource in your VPC network that you want the VPC connector to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --source-tags=VPC_CONNECTOR_NETWORK_TAG \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --target-tags=RESOURCE_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector. If you used the unique network tag in the previous step, use the unique network tag.
    • VPC_NETWORK: the name of your VPC network
    • RESOURCE_TAG: the network tag for the VPC resource that you want your VPC connector to access
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, refer to the documentation for gcloud compute firewall-rules create.
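The deny-then-allow pattern above relies on VPC firewall semantics in which the matching rule with the lowest priority number wins. A simplified model of how the two rules interact (not the actual evaluation engine; rule fields and tags here are hypothetical):

```python
def evaluate_ingress(rules, source_tag, target_tag):
    """Apply the matching rule with the lowest priority number (highest precedence)."""
    matching = [
        r for r in rules
        if r["source_tag"] == source_tag
        and r.get("target_tag") in (None, target_tag)  # no target tag matches all
    ]
    if not matching:
        return "ALLOW"  # fall through to the implicit connector rule at priority 1000
    return min(matching, key=lambda r: r["priority"])["action"]

rules = [
    {"source_tag": "vpc-connector", "action": "DENY", "priority": 990},
    {"source_tag": "vpc-connector", "target_tag": "db", "action": "ALLOW", "priority": 980},
]
print(evaluate_ingress(rules, "vpc-connector", "db"))   # ALLOW: 980 beats 990
print(evaluate_ingress(rules, "vpc-connector", "web"))  # DENY
```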

CIDR range

The following steps show how to create ingress rules that restrict a connector's access to your VPC network based on the connector's CIDR range.

  1. Ensure that you have the required permissions to insert firewall rules. You need an Identity and Access Management (IAM) role on the project that allows creating firewall rules.

  2. Deny connector traffic across your VPC network.

    Create an ingress firewall rule with a priority value lower than 1000 (lower values take precedence) on your VPC network to deny ingress from the connector's CIDR range. This overrides the implicit firewall rule that Serverless VPC Access creates on your VPC network by default.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --source-ranges=VPC_CONNECTOR_CIDR_RANGE \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_CIDR_RANGE: the CIDR range for the connector whose access you are restricting
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999. For example, 990.
  3. Allow connector traffic to the resource that should receive connector traffic.

    Use the allow and target-tags flags to create an ingress firewall rule targeting the resource in your VPC network that you want the VPC connector to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --source-ranges=VPC_CONNECTOR_CIDR_RANGE \
    --direction=INGRESS \
    --network=VPC_NETWORK \
    --target-tags=RESOURCE_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • VPC_CONNECTOR_CIDR_RANGE: the CIDR range for the connector whose access you are restricting
    • VPC_NETWORK: the name of your VPC network
    • RESOURCE_TAG: the network tag for the VPC resource that you want your VPC connector to access
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, see the documentation for gcloud compute firewall-rules create.

Restricting access using egress rules

The following steps show how to create egress rules to restrict connector access.

  1. Ensure that you have the required permissions to insert firewall rules. You need an Identity and Access Management (IAM) role on the project that allows creating firewall rules.

  2. Deny egress traffic from your connector.

    Create an egress firewall rule on your Serverless VPC Access connector to prevent it from sending outgoing traffic.

    gcloud compute firewall-rules create RULE_NAME \
    --action=DENY \
    --direction=EGRESS \
    --target-tags=VPC_CONNECTOR_NETWORK_TAG \
    --network=VPC_NETWORK \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, deny-vpc-connector.
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector.
    • VPC_NETWORK: the name of your VPC network
    • PRIORITY: an integer from 1-999. For example, 990.
  3. Allow egress traffic when the destination is in the CIDR range that you want your connector to access.

    Use the allow and destination-ranges flags to create a firewall rule allowing egress traffic from your connector for a specific destination range. Set the destination range to the CIDR range of the resource in your VPC network that you want your connector to be able to access. Set the priority for this rule to be a lower value than the priority of the rule you made in the previous step.

    gcloud compute firewall-rules create RULE_NAME \
    --allow=PROTOCOLS \
    --destination-ranges=RESOURCE_CIDR_RANGE \
    --direction=EGRESS \
    --network=VPC_NETWORK \
    --target-tags=VPC_CONNECTOR_NETWORK_TAG \
    --priority=PRIORITY
    

    Replace the following:

    • RULE_NAME: the name of your new firewall rule. For example, allow-vpc-connector-for-select-resources.
    • PROTOCOLS: the protocols you want to allow from your VPC connector. These can be one or more of the case-sensitive string values tcp, udp, icmp, esp, ah, sctp, or any IP protocol number. For port-based protocols—tcp, udp, and sctp—a list of destination ports or port ranges to which the rule applies may optionally be specified. For more information, see the documentation for the allow flag.
    • RESOURCE_CIDR_RANGE: the CIDR range of the resource in your VPC network that you want your VPC connector to access
    • VPC_NETWORK: the name of your VPC network
    • VPC_CONNECTOR_NETWORK_TAG: the universal VPC connector network tag if you want the rule to apply to all existing VPC connectors and any VPC connectors made in the future. Or, the unique VPC connector network tag if you want to control a specific connector. If you used the unique network tag in the previous step, use the unique network tag.
    • PRIORITY: an integer less than the priority you set in the previous step. For example, if you set the priority for the rule you created in the previous step to 990, try 980.

For more information about the required and optional flags for creating firewall rules, refer to the documentation for gcloud compute firewall-rules create.

Adding VPC Service Controls

Once you have created a connector and configured your service, you can mitigate the risk of data exfiltration and protect resources and data by using VPC Service Controls for the VPC Access API for Serverless.

For general information on enabling VPC Service Controls, see Creating a service perimeter.

Supported services

You can use Serverless VPC Access to reach a VPC network from Cloud Run, Cloud Functions, and the App Engine standard environment.

Supported regions

You can create a Serverless VPC Access connector in the following regions:

  • asia-east1
  • asia-east2
  • asia-northeast1
  • asia-northeast2
  • asia-northeast3
  • asia-south1
  • asia-southeast1
  • asia-southeast2
  • australia-southeast1
  • europe-central2
  • europe-north1
  • europe-west1
  • europe-west2
  • europe-west3
  • europe-west4
  • europe-west6
  • northamerica-northeast1
  • southamerica-east1
  • us-central1
  • us-east1
  • us-east4
  • us-west1
  • us-west2
  • us-west3
  • us-west4

Supported networking protocols

The following table describes the protocols supported for each Serverless VPC Access connector egress setting. See Configuring network settings for more information about the available egress settings.

Protocol   Route only requests to private IPs through the VPC connector   Route all traffic through the VPC connector
TCP        Supported                                                       Supported
UDP        Supported                                                       Supported
ICMP       Not supported                                                   Supported only for external IP addresses

Curated IAM roles

The following table describes the Identity and Access Management (IAM) roles associated with Serverless VPC Access. See Serverless VPC Access roles in the IAM documentation for a list of permissions associated with each role.

Role                                                     Description
Serverless VPC Access Admin (roles/vpcaccess.admin)      Full access to all Serverless VPC Access resources
Serverless VPC Access User (roles/vpcaccess.user)        User of Serverless VPC Access connectors
Serverless VPC Access Viewer (roles/vpcaccess.viewer)    Viewer of all Serverless VPC Access resources

Service account

To perform operations in your Cloud project, the Serverless VPC Access service uses the Serverless VPC Access Service Agent service account. This service account's email address has the following form:

service-PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com

By default, this service account has the Serverless VPC Access Service Agent role (roles/vpcaccess.serviceAgent). Serverless VPC Access operations may fail if you change this account's permissions.

Audit logging

See Serverless VPC Access audit logging information.

Pricing

For Serverless VPC Access pricing, see Serverless VPC Access on the VPC pricing page.

Troubleshooting

If creating a connector results in an error, try the following and re-create your connector:

  • Specify an RFC 1918 internal IP range that does not overlap with any existing IP address reservations in the VPC network.
  • Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
  • Set the constraints/compute.vmCanIpForward organization policy to allow VMs to enable IP forwarding.

If you've specified a connector for a serverless service but still cannot access resources in your VPC network:

  • Make sure there are no firewall rules on your VPC network with a priority value lower than 1000 that deny ingress from your connector's IP range or network tags.