
Provision Shared VPC

Shared VPC allows you to export subnets from a VPC network in a host project to other service projects in the same organization. Instances in the service projects can have network connections in the shared subnets of the host project. This page describes how to set up and use Shared VPC, including some necessary administrative preparation for your organization.

Shared VPC supports exporting both IPv4-only (single-stack) subnets and dual-stack (IPv4 and IPv6) subnets.

For information about detaching service projects or removing the Shared VPC configuration completely, see Deprovision Shared VPC.

Before you begin

Make sure that you are familiar with Shared VPC and IAM, including the quotas and limits that apply to Shared VPC and the resources that are eligible for it.

Prepare your organization

Administrators and IAM

Preparing your organization, setting up Shared VPC host projects, and using Shared VPC networks involves a minimum of three different administrative Identity and Access Management (IAM) roles. For more details about each role and information about optional ones, see the administrators and IAM section of the Shared VPC overview.

Organization policy constraints

Organization policy constraints can protect Shared VPC resources at the project, folder, or organization level. The following sections describe each policy.

Prevent accidental deletion of host projects

The accidental deletion of a host project would lead to outages in all service projects attached to it. When a project is configured to be a Shared VPC host project, a special lock—called a lien—is placed upon it. As long as the lien is present, it prevents the project from being deleted accidentally. The lien is automatically removed from the host project when it is no longer configured for Shared VPC.
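If you want to verify that the lien is in place, you can list the liens on the host project. This is a minimal sketch; at the time of writing, the lien commands are available in the gcloud alpha release track, so the command group might differ in your SDK version:

    gcloud alpha resource-manager liens list --project HOST_PROJECT_ID

Replace HOST_PROJECT_ID with the ID of the Shared VPC host project.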

A user with the orgpolicy.policyAdmin role can define an organization-level policy constraint (constraints/compute.restrictXpnProjectLienRemoval) that limits the removal of liens to just the following roles:

  • Users with roles/owner or roles/resourcemanager.lienModifier at the organization level
  • Users with custom roles that include the resourcemanager.projects.get and resourcemanager.projects.updateLiens permissions at the organization level

This effectively prevents a project owner who does not have the roles/owner role or the resourcemanager.lienModifier role at the organization level from accidentally deleting a Shared VPC host project. For more information about the permissions associated with the resourcemanager.lienModifier role, refer to Placing a lien on a project in the Resource Manager documentation.

Because an organization policy applies to all projects in the organization, you only need to follow these steps once to restrict lien removal.

  1. Authenticate to gcloud as an Organization Admin or IAM principal with the orgpolicy.policyAdmin role. Replace ORG_ADMIN with the name of an Organization Admin:

    gcloud auth login ORG_ADMIN
    
  2. Determine your organization ID number by looking at the output of this command.

    gcloud organizations list
    
  3. Enforce the compute.restrictXpnProjectLienRemoval policy for your organization by running this command. Replace ORG_ID with the number you determined from the previous step.

    gcloud resource-manager org-policies enable-enforce \
        --organization ORG_ID compute.restrictXpnProjectLienRemoval
    
  4. Log out of gcloud if you are finished performing tasks as an Organization Admin to protect your account.

    gcloud auth revoke ORG_ADMIN
    

Constrain host project attachments

By default, a Shared VPC Admin can attach a non-host project to any host project in the same organization. An organization policy administrator can limit the set of host projects to which a non-host project, or the non-host projects in a folder or organization, can be attached. For more information, refer to the constraints/compute.restrictSharedVpcHostProjects constraint.
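As an illustrative sketch only, an organization policy administrator could allow the non-host projects in a folder to attach only to a specific host project. The folder ID, host project ID, and the value format shown here (projects/HOST_PROJECT_ID) are assumptions; check the constraint reference for the exact resource format that the constraint accepts:

    gcloud resource-manager org-policies allow \
        compute.restrictSharedVpcHostProjects \
        projects/HOST_PROJECT_ID \
        --folder FOLDER_ID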

Constrain which subnets in the host project a service project can use

By default, after you configure Shared VPC, IAM principals in service projects can use any subnet in the host project if they have the appropriate IAM permissions. In addition to managing individual user permissions, an organization policy administrator can set a policy to define the set of subnets that can be accessed by a particular project or by projects in a folder or organization. For more information, refer to the constraints/compute.restrictSharedVpcSubnetworks constraint.
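As with the previous constraint, the following is only a sketch; the subnet value format shown here is an assumption, so check the constraint reference before applying it. It would limit a particular service project to a single shared subnet:

    gcloud resource-manager org-policies allow \
        compute.restrictSharedVpcSubnetworks \
        projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME \
        --project SERVICE_PROJECT_ID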

Prevent accidental shutdown of host projects

Disconnecting billing from a Shared VPC host project can lead to a complete shutdown of all dependent resources, including resources in attached service projects. To prevent an accidental Shared VPC shutdown due to inactive or disabled billing, secure the link between the host project and its billing account.
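To check that the host project is still linked to an active billing account, you can describe its billing information. This is a minimal sketch; on older gcloud releases this command group may only be available under the beta track:

    gcloud billing projects describe HOST_PROJECT_ID

The output includes the linked billingAccountName and whether billingEnabled is true.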

Nominate Shared VPC Admins

An Organization Admin can grant one or more IAM principals the Shared VPC Admin and Project IAM Admin roles.

The Project IAM Admin role grants Shared VPC Admins permission to share all existing and future subnets, not just individual subnets. Because this grant creates a binding at the organization or folder level rather than the project level, the IAM principals must be defined in the organization, not just in one of its projects.

Console

To grant the Shared VPC Admin role at the organization level

  1. Log into the Google Cloud console as an Organization Admin, then go to the IAM page.
    Go to the IAM page
  2. From the project menu, select your organization.
    If you select a project, you will not see the correct entries in the Roles menu.
  3. Click Add.
  4. Enter the email addresses of the New principals.
  5. In the Roles drop down, select Compute Engine > Compute Shared VPC Admin.

  6. Click Add another role.

  7. In the Roles drop down, select Resource Manager > Project IAM Admin.

  8. Click Save.

To grant the Shared VPC Admin role at the folder level

  1. Log into the Google Cloud console as an Organization Admin, then go to the IAM page.
    Go to the IAM page
  2. From the project menu, select your folder.
    If you select a project or organization, you will not see the correct options.
  3. Click Add.
  4. Enter the email addresses of the New principals.
  5. Under Select a role, select Compute Engine > Compute Shared VPC Admin.
  6. Click Add another role.
  7. In the Roles drop down, select Resource Manager > Project IAM Admin.
  8. Click Add another role.
  9. In the Roles drop down, select Compute Engine > Compute Network Viewer.
  10. Click Save.

gcloud

  1. Authenticate to gcloud as an Organization Admin. Replace ORG_ADMIN with the name of an Organization Admin:

    gcloud auth login ORG_ADMIN
    
  2. Determine your organization ID number by looking at the output of this command.

    gcloud organizations list
    
  3. If you wish to assign the Shared VPC Admin role at the organization level, do the following:

    1. Grant the Shared VPC Admin and Project IAM Admin roles to an existing IAM principal. Replace ORG_ID with the organization ID number from the previous step, and EMAIL_ADDRESS with the email address of the user to whom you are granting the roles.

      gcloud organizations add-iam-policy-binding ORG_ID \
        --member='user:EMAIL_ADDRESS' \
        --role="roles/compute.xpnAdmin"
      
      gcloud organizations add-iam-policy-binding ORG_ID \
        --member='user:EMAIL_ADDRESS' \
        --role="roles/resourcemanager.projectIamAdmin"
      
  4. If you wish to assign the Shared VPC Admin role at the folder level, do the following:

    1. Determine your folder ID by looking at the output of this command.

      gcloud resource-manager folders list --organization=ORG_ID
      
    2. Grant the Shared VPC Admin, Project IAM Admin, and Compute Network Viewer roles to an existing IAM principal. Replace FOLDER_ID with the folder ID from the previous step, and EMAIL_ADDRESS with the email address of the user to whom you are granting the roles.

      gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
         --member='user:EMAIL_ADDRESS' \
         --role="roles/compute.xpnAdmin"
      
      gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
         --member='user:EMAIL_ADDRESS' \
         --role="roles/resourcemanager.projectIamAdmin"
      
      gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
         --member='user:EMAIL_ADDRESS' \
         --role="roles/compute.networkViewer"
      
  5. When you are finished performing tasks as an Organization Admin, revoke your credentials in the gcloud command-line tool to protect your account.

    gcloud auth revoke ORG_ADMIN
    

API

  • To assign the Shared VPC Admin role at the organization level, use the following procedure:

    1. Determine your organization ID number.

      POST https://cloudresourcemanager.googleapis.com/v1/organizations:search
      
    2. Describe and then record the details of the organization's existing IAM policy. You'll need the existing policy and etag value.

      POST https://cloudresourcemanager.googleapis.com/v1/organizations/ORG_ID:getIamPolicy
      

      Replace ORG_ID with the ID of your organization.

    3. Assign the Shared VPC Admin role.

      POST https://cloudresourcemanager.googleapis.com/v1/organizations/ORG_ID:setIamPolicy
      {
        "bindings": [
          ...copy existing bindings
          {
            "members": [
              "user:EMAIL_ADDRESS"
            ],
            "role": "roles/compute.xpnAdmin"
          },
          {
            "members": [
              "user:EMAIL_ADDRESS"
            ],
            "role": "roles/resourcemanager.projectIamAdmin"
          }
        ],
        "etag": "ETAG",
        "version": 1,
        ...other existing policy details
      }
      

      Replace the placeholders with valid values:

      • ORG_ID is the ID of the organization that contains the user to whom you're granting the Shared VPC Admin role.
      • EMAIL_ADDRESS is the email address of the user.
      • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.

      For more information, refer to the organizations.setIamPolicy method.

  • To assign the Shared VPC Admin role at the folder level, use the following procedure:

    1. Determine your organization ID number.

      POST https://cloudresourcemanager.googleapis.com/v1/organizations:search
      
    2. Find your folder ID.

      GET https://cloudresourcemanager.googleapis.com/v2/folders?parent=organizations/ORG_ID
      

      Replace ORG_ID with the ID of your organization.

    3. Describe and then record the details of the folder's existing IAM policy. You'll need the existing policy and etag value.

      POST https://cloudresourcemanager.googleapis.com/v2/folders/FOLDER_ID:getIamPolicy
      

      Replace FOLDER_ID with the ID of your folder.

    4. Assign the Shared VPC Admin role.

      POST https://cloudresourcemanager.googleapis.com/v2/folders/FOLDER_ID:setIamPolicy
      {
        "bindings": [
          ...copy existing bindings
          {
            "members": [
              "user:EMAIL_ADDRESS"
            ],
            "role": "roles/compute.xpnAdmin"
          },
          {
            "members": [
              "user:EMAIL_ADDRESS"
            ],
            "role": "roles/resourcemanager.projectIamAdmin"
          },
          {
            "members": [
              "user:EMAIL_ADDRESS"
            ],
            "role": "roles/compute.networkViewer"
          }
        ],
        "etag": "ETAG",
        "version": 1,
        ...other existing policy details
      }
      

      Replace the placeholders with valid values:

      • FOLDER_ID is the ID of the folder that contains the user to whom you're granting the Shared VPC Admin role.
      • EMAIL_ADDRESS is the email address of the user.
      • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.

      For more information, refer to the folders.setIamPolicy method.

Set up Shared VPC

All tasks in this section must be performed by a Shared VPC Admin.

Enable a host project

Within an organization, Shared VPC Admins can designate projects as Shared VPC host projects, subject to quotas and limits, by following this procedure. Shared VPC Admins can also create and delete projects if they have the Project Creator role and Project Deleter role (roles/resourcemanager.projectCreator and roles/resourcemanager.projectDeleter) for your organization.
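If the intended host project does not exist yet and you hold the Project Creator role, you can create it before enabling Shared VPC. This is a minimal sketch; HOST_PROJECT_ID and ORG_ID are placeholders, and your organization might require projects to be created under a specific folder instead:

    gcloud projects create HOST_PROJECT_ID --organization ORG_ID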

When you enable a host project, the project's network resources are not automatically shared with service projects. You need to attach service projects to the host project to share selected networks and subnets with the service projects.

Console

If you do not yet have the Shared VPC Admin role, then you will not be able to view this page in the Google Cloud console.

  1. Go to the Shared VPC page in the Google Cloud console.
    Go to the Shared VPC page
  2. Log in as a Shared VPC Admin.
  3. Select the project you want to enable as a Shared VPC host project from the project picker.
  4. Click Set up Shared VPC.
  5. On the next page, click Save & continue under Enable host project.
  6. Under Select subnets, do one of the following:
    1. Click Share all subnets (project-level permissions) if you need to share all current and future subnets in the VPC networks of the host project with service projects and Service Project Admins specified in the next steps.
    2. Click Individual subnets (subnet-level permissions) if you need to selectively share subnets from the VPC networks of the host project with service projects and Service Project Admins. Then, select Subnets to share.
  7. Click Continue.
    The next screen is displayed.
  8. In Project names, specify the service projects to attach to the host project. Note that attaching service projects does not define any Service Project Admins; that is done in the next step.
  9. In the Select users by role section, add Service Project Admins. These users will be granted the IAM role of compute.networkUser for the shared subnets. Only Service Project Admins can create resources in the subnets of the Shared VPC host project.
  10. Click Save.

gcloud

  1. Authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. Enable Shared VPC for the project that you need to become a host project. Replace HOST_PROJECT_ID with the ID of the project.

    gcloud compute shared-vpc enable HOST_PROJECT_ID
    
  3. Confirm that the project is listed as a host project for your organization. Replace ORG_ID with your organization ID (determined by gcloud organizations list).

    gcloud compute shared-vpc organizations list-host-projects ORG_ID
    
  4. If you only needed to enable a host project, you can log out of gcloud to protect your Shared VPC Admin account credentials. Otherwise, skip this step and continue with the steps to attach service projects.

    gcloud auth revoke SHARED_VPC_ADMIN
    

API

  1. Enable Shared VPC for the project by using credentials with Shared VPC Admin permissions.

    POST https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/enableXpnHost
    

    Replace HOST_PROJECT_ID with the ID of the project that will be a Shared VPC host project.

    For more information, refer to the projects.enableXpnHost method.

  2. Confirm that the project is listed as a host project.

    POST https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/listXpnHosts
    

    Replace HOST_PROJECT_ID with the ID of the Shared VPC host project.

    For more information, refer to the projects.listXpnHosts method.

Terraform

You can use a Terraform resource to enable a host project.

resource "google_compute_shared_vpc_host_project" "host" {
  project = var.project # Replace this with your host project ID in quotes
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Attach service projects

A service project must attach to a host project before its Service Project Admins can use the Shared VPC. A Shared VPC Admin must perform the following steps to complete the attachment.

A service project can only attach to one host project, but a host project supports multiple service project attachments. Refer to limits specific to Shared VPC on the VPC quotas page for details.

Console

  1. Log into the Google Cloud console as a Shared VPC Admin.
  2. Go to the Shared VPC page in the Google Cloud console.
    Go to the Shared VPC page
  3. Click the Attached projects tab.
  4. Under the Attached projects tab, click the Attach projects button.
  5. Check the boxes for the service projects to attach in the Project names section. Note that attaching service projects does not define any Service Project Admins; that is done in the next step.
  6. In the VPC network permissions section, select the roles whose principals will get the compute.networkUser role. IAM principals are granted the Network User role for the entire host project or certain subnets in the host project, based on the VPC network sharing mode. These principals are known as Service Project Admins in their respective service projects.
  7. In the VPC network sharing mode section, select one of the following:
    1. Click Share all subnets (project-level permissions) to share all current and future subnets in VPC networks of the host project with all service projects and Service Project Admins.
    2. Click Individual subnets (subnet-level permissions) if you need to selectively share subnets from VPC networks of the host project with service projects and Service Project Admins. Then, select Subnets to share.
  8. Click Save.

gcloud

  1. If you have not already, authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. Attach a service project to a previously-enabled host project. Replace SERVICE_PROJECT_ID with the project ID for the service project and HOST_PROJECT_ID with the project ID for the host project.

    gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
        --host-project HOST_PROJECT_ID
    
  3. Confirm that the service project has been attached.

    gcloud compute shared-vpc get-host-project SERVICE_PROJECT_ID
    
  4. Optionally, you can list the service projects that are attached to the host project:

    gcloud compute shared-vpc list-associated-resources HOST_PROJECT_ID
    
  5. If you only needed to attach a service project, you can log out of gcloud to protect your Shared VPC Admin account credentials. Otherwise, skip this step and define Service Project Admins for all subnets or for just some subnets.

    gcloud auth revoke SHARED_VPC_ADMIN
    

API

  1. Attach a service project to the Shared VPC host project.

    POST https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/enableXpnResource
    {
      "xpnResource": {
        "id": "SERVICE_PROJECT"
      }
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the Shared VPC host project.
    • SERVICE_PROJECT is the ID of the service project to attach.

    For more information, refer to the projects.enableXpnResource method.

  2. Confirm that the service projects are attached to the host project.

    GET https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/getXpnResources
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the Shared VPC host project.

    For more information, refer to the projects.getXpnResources method.

Terraform

You can use a Terraform resource to attach a service project.

resource "google_compute_shared_vpc_service_project" "service1" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = var.service_project # Replace this with your service project ID in quotes
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Service Project Admins for all subnets

A Shared VPC Admin can assign an IAM principal from a service project to be a Service Project Admin with access to all subnets in the host project. Service Project Admins of this type are granted the role of compute.networkUser for the whole host project. This means that they have access to all of the currently defined and future subnets in the host project.

A user who has the compute.networkUser role in the host project can see all subnets within attached service projects.

Console

To define an IAM principal from a service project as Service Project Admin with access to all subnets in a host project using the Google Cloud console, see the attach service projects section.

gcloud

These steps cover defining an IAM principal from a service project as a Service Project Admin with access to all subnets in a host project. Before you can perform these steps, you must have enabled a host project and attached the service project to the host project.

  1. If you have not already, authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. Create a policy binding to make an IAM principal from the service project a Service Project Admin. Replace HOST_PROJECT_ID with the project ID for the host project and SERVICE_PROJECT_ADMIN with the email address of the Service Project Admin user.

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member "user:SERVICE_PROJECT_ADMIN" \
    --role "roles/compute.networkUser"
    

    You can specify different types of principals by changing the format of the --member argument:

    • Use group: to specify a Google group (by email address) as a principal.
    • Use domain: to specify a Google domain as a principal.
    • Use serviceAccount: to specify a service account. Refer to Service Accounts as Service Project Admins for more information about this use case.
  3. Repeat the previous step for each additional Service Project Admin you need to define.

  4. If you are finished defining Service Project Admins, you can log out of gcloud to protect your Shared VPC Admin account credentials.

    gcloud auth revoke SHARED_VPC_ADMIN
    

API

  1. Describe and then record the details of your existing project policy. You'll need the existing policy and etag value.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:getIamPolicy
    

    Replace HOST_PROJECT_ID with the ID of the host project that contains the Shared VPC network.

  2. Create a policy binding to designate IAM principals in the service project as Service Project Admins.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:setIamPolicy
    {
      "bindings": [
        ...copy existing bindings
        {
          "members": [
            PRINCIPAL,
            ...additional principals
          ],
          "role": "roles/compute.networkUser"
        },
      ],
      "etag": "ETAG",
      "version": 1,
      ...other existing policy details
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the host project that contains the Shared VPC network.
    • PRINCIPAL is an identity that the role is associated with, such as a user, group, domain, or service account. For more information, see the members field in the Resource Manager documentation.
    • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.

    For more information, refer to the projects.setIamPolicy method.

Service Project Admins for some subnets

A Shared VPC Admin can assign an IAM principal from a service project to be a Service Project Admin with access to only some of the subnets in the host project. This option provides a more granular means to define Service Project Admins by granting them the compute.networkUser role for only some subnets in the host project.

A user who has the compute.networkUser role in the host project can see all subnets within attached service projects.

Console

To define an IAM principal from a service project as Service Project Admin with access to only some subnets in a host project using the Google Cloud console, see the attach service projects section.

gcloud

These steps cover defining IAM principals from a service project as Service Project Admins with access to only some subnets in a host project. Before you can define them, you must have enabled a host project and attached the service project to the host project.

  1. If you have not already, authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. Choose the subnet in the host project to which the Service Project Admins should have access. Get its current IAM policy in JSON format. Replace SUBNET_NAME with the name of the subnet in the host project, SUBNET_REGION with the region of the subnet, and HOST_PROJECT_ID with the project ID for the host project.

    gcloud compute networks subnets get-iam-policy SUBNET_NAME \
        --region SUBNET_REGION \
        --project HOST_PROJECT_ID \
        --format json
    
  3. Copy the JSON output from the previous step and save it to a file. For instructional clarity, these steps save it to a file named subnet-policy.json.

  4. Modify the subnet-policy.json file, adding the IAM principals who will become Service Project Admins with access to the subnet. Replace each SERVICE_PROJECT_ADMIN with the email address of an IAM user from the service project.

    {
      "bindings": [
        {
          "members": [
            "user:SERVICE_PROJECT_ADMIN",
            "user:SERVICE_PROJECT_ADMIN"
          ],
          "role": "roles/compute.networkUser"
        }
      ],
      "etag": "ETAG_STRING"
    }
    

    Note that you can specify different types of IAM principals (other than users) in the policy:

    • Switch user: with group: to specify a Google group (by email address) as a principal.
    • Switch user: with domain: to specify a Google domain as a principal.
    • Use serviceAccount: to specify a service account. Refer to Service Accounts as Service Project Admins for more information about this use case.
  5. Update the policy binding for the subnet using the contents of the subnet-policy.json file.

    gcloud compute networks subnets set-iam-policy SUBNET_NAME subnet-policy.json \
        --region SUBNET_REGION \
        --project HOST_PROJECT_ID
    
  6. If you are finished defining Service Project Admins, you can log out of gcloud to protect your Shared VPC Admin account credentials.

    gcloud auth revoke SHARED_VPC_ADMIN
    

API

  1. Describe and then record the details of your existing subnet policy. You'll need the existing policy and etag value.

    GET https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/SUBNET_REGION/subnetworks/SUBNET_NAME/getIamPolicy
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the host project that contains the Shared VPC network.
    • SUBNET_NAME is the name of the subnet to share.
    • SUBNET_REGION is the region in which the subnet is located.
  2. Grant Service Project Admins access to subnets in the host project by updating the subnet policy.

    POST https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/SUBNET_REGION/subnetworks/SUBNET_NAME/setIamPolicy
    {
      "bindings": [
        ...copy existing bindings
        {
          "members": [
            PRINCIPAL,
            ...additional principals
          ],
          "role": "roles/compute.networkUser"
        },
      ],
      "etag": "ETAG",
      "version": 1,
      ...other existing policy details
    }
    

    Replace the placeholders with valid values:

    • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.
    • HOST_PROJECT_ID is the ID of the host project that contains the Shared VPC network.
    • PRINCIPAL is an identity that the role is associated with, such as a user, group, domain, or service account. For more information, see the members field in the Resource Manager documentation.
    • SUBNET_NAME is the name of the subnet to share.
    • SUBNET_REGION is the region in which the subnet is located.

    For more information, refer to the subnetworks.setIamPolicy method.

Service Accounts as Service Project Admins

A Shared VPC Admin can also define service accounts from service projects as Service Project Admins. This section illustrates how to define two different types of service accounts as Service Project Admins: user-managed service accounts and the Google APIs service account.

The Service Project Admin role (compute.networkUser) can be granted for all subnets or only some subnets of the host project. However, for instructional simplicity, this section only illustrates how to define each of the two service account types as Service Project Admins for all subnets of the host project.

User-managed service accounts as Service Project Admins

These directions describe how to define a user-managed service account as a Service Project Admin for all subnets of the Shared VPC host project.

Console

  1. Log into the Google Cloud console as a Shared VPC Admin.
  2. Go to the Settings page in the Google Cloud console.
    Go to the Settings page
  3. Change the project to the service project containing the service account that needs to be defined as a Service Project Admin.
  4. Copy the Project ID of the service project. For instructional clarity, this procedure refers to the service project ID as SERVICE_PROJECT_ID.
  5. Change the project to the Shared VPC host project.
  6. Go to the IAM page in the Google Cloud console.
    Go to the IAM page
  7. Click Add.
  8. Add SERVICE_ACCOUNT_NAME@SERVICE_PROJECT_ID.iam.gserviceaccount.com to the Principals field, replacing SERVICE_ACCOUNT_NAME with the name of the service account.
  9. Select Compute Engine > Compute Network User from the Roles menu.
  10. Click Add.

gcloud

  1. If you have not already, authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. If you don't know the project ID for the service project, you can list all projects in your organization. This list shows the project ID for each.

    gcloud projects list
    
  3. Create a policy binding to make the service account a Service Project Admin. Replace HOST_PROJECT_ID with the project ID for the host project, SERVICE_ACCOUNT_NAME with the name of the service account, and SERVICE_PROJECT_ID with the service project ID.

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member "serviceAccount:SERVICE_ACCOUNT_NAME@SERVICE_PROJECT_ID.iam.gserviceaccount.com" \
        --role "roles/compute.networkUser"
    

API

  1. Describe and then record the details of your existing project policy. You'll need the existing policy and etag value.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:getIamPolicy
    

    Replace HOST_PROJECT_ID with the ID of the host project that contains the Shared VPC network.

  2. Create a policy binding to designate service accounts as Service Project Admins.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:setIamPolicy
    {
      "bindings": [
        ...copy existing bindings
        {
          "members": [
            "serviceAccount:SERVICE_ACCOUNT_NAME@SERVICE_PROJECT_ID.iam.gserviceaccount.com",
            ...include additional service accounts
          ],
          "role": "roles/compute.networkUser"
        },
      ],
      "etag": "ETAG",
      "version": 1,
      ...other existing policy details
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the host project that contains the Shared VPC network.
    • SERVICE_ACCOUNT_NAME is the name of the service account.
    • SERVICE_PROJECT_ID is the ID of the service project that contains the service account.
    • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.

    For more information, refer to the projects.setIamPolicy method.

Google APIs service account as a Service Project Admin

These directions describe how to define the Google APIs service account as a Service Project Admin for all subnets of the Shared VPC host project. Making the Google APIs service account a Service Project Admin is a requirement for managed instance groups used with Shared VPC because tasks like instance creation are performed by this type of service account. For more information about this relationship, see Managed Instance Groups and IAM.

Console

  1. Log into the Google Cloud console as a Shared VPC Admin.
  2. Go to the Settings page in the Google Cloud console.
    Go to the Settings page
  3. Change the project to the service project containing the service account that needs to be defined as a Service Project Admin.
  4. Copy the Project number of the service project. For instructional clarity, this procedure refers to the service project number as SERVICE_PROJECT_NUMBER.
  5. Change the project to the Shared VPC host project.
  6. Go to the IAM page in the Google Cloud console.
    Go to the IAM page
  7. Click Add.
  8. Add SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com to the Principals field.
  9. Select Compute Engine > Compute Network User from the Roles menu.
  10. Click Add.

gcloud

  1. If you have not already, authenticate to gcloud as a Shared VPC Admin. Replace SHARED_VPC_ADMIN with the name of the Shared VPC Admin:

    gcloud auth login SHARED_VPC_ADMIN
    
  2. Determine the project number for the service project. For instructional clarity, this procedure refers to the service project number as SERVICE_PROJECT_NUMBER. Replace SERVICE_PROJECT_ID with the project ID for the service project.

    gcloud projects describe SERVICE_PROJECT_ID --format='get(projectNumber)'
    
    • If you don't know the project ID for the service project, you can list all projects in your organization. This list shows the project number for each.

      gcloud projects list
      
  3. Create a policy binding to make the service account a Service Project Admin. Replace HOST_PROJECT_ID with the project ID for the host project and SERVICE_PROJECT_NUMBER with the service project number.

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member "serviceAccount:SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
        --role "roles/compute.networkUser"
    

API

  1. Describe and then record the details of your existing project policy. You'll need the existing policy and etag value.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:getIamPolicy
    

    Replace HOST_PROJECT_ID with the ID of the host project that contains the Shared VPC network.

  2. List your project to find its project number.

    GET https://cloudresourcemanager.googleapis.com/v1/projects?filter=projectId="SERVICE_PROJECT_ID"
    

    Replace SERVICE_PROJECT_ID with the ID of the service project where the service account is located.

  3. Create a policy binding to designate service accounts as Service Project Admins.

    POST https://cloudresourcemanager.googleapis.com/v1/projects/HOST_PROJECT_ID:setIamPolicy
    {
      "bindings": [
        ...copy existing bindings
        {
          "members": [
            "serviceAccount:SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com"
          ],
          "role": "roles/compute.networkUser"
        },
      ],
      "etag": "ETAG",
      "version": 1,
      ...other existing policy details
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the host project that contains the Shared VPC network.
    • SERVICE_PROJECT_NUMBER is the number of the service project that contains the service account.
    • ETAG is a unique identifier that you got when you described the existing policy. It prevents collisions if multiple update requests are sent at the same time.

    For more information, refer to the projects.setIamPolicy method.

Use Shared VPC

After a Shared VPC Admin completes the tasks of enabling a host project, attaching the necessary service projects to it, and defining Service Project Admins for all or some of the host project subnets, the Service Project Admins can create instances, templates, and internal load balancers in the service projects using the subnets of the host project.

All tasks in this section must be performed by a Service Project Admin.

It's important to note that a Shared VPC Admin only grants the Service Project Admins the compute.networkUser role (for either the whole host project or just some of its subnets). Service Project Admins must also have whatever other roles they need to administer their respective service projects. For example, a Service Project Admin could also be a project owner, or should at least have the compute.instanceAdmin role for the service project.
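As a minimal sketch, a project owner of the service project could grant that role as follows; roles/compute.instanceAdmin.v1 is one reasonable choice, and your organization might prefer a narrower or custom role:

    gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
        --member "user:SERVICE_PROJECT_ADMIN" \
        --role "roles/compute.instanceAdmin.v1"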

List available subnets

Service Project Admins can list the subnets to which they have been given permission by following these steps.

Console

Go to the Shared VPC page in the Google Cloud console.
Go to the Shared VPC page

gcloud

  1. If you have not already, authenticate to gcloud as a Service Project Admin. Replace SERVICE_PROJECT_ADMIN with the name of the Service Project Admin:

    gcloud auth login SERVICE_PROJECT_ADMIN
    
  2. Run the following command, replacing HOST_PROJECT_ID with the project ID of the Shared VPC host project:

    gcloud compute networks subnets list-usable --project HOST_PROJECT_ID
    

    The following example lists the available subnets in the project-1 host project:

    $ gcloud compute networks subnets list-usable --project project-1
    
    PROJECT    REGION       NETWORK  SUBNET    RANGE          SECONDARY_RANGES
    project-1  us-west1     net-1    subnet-1  10.138.0.0/20
    project-1  us-central1  net-1    subnet-2  10.128.0.0/20  r-1 192.168.2.0/24
                                                              r-2 192.168.3.0/24
    project-1  us-east1     net-1    subnet-3  10.142.0.0/20
    

For more information, see the list-usable command in the SDK documentation.

API

List the available subnets in the host project. Make the request as a Service Project Admin.

GET https://compute.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/aggregated/subnetworks/listUsable

Replace HOST_PROJECT_ID with the ID of the host project that contains the Shared VPC network.

For more information, refer to the subnetworks.listUsable method.

Reserve a static internal IPv4 or IPv6 address

Service Project Admins can reserve an internal IPv4 or IPv6 address in a subnet of a Shared VPC network. The IP address configuration object is created in the service project, while its value comes from the range of available addresses in the chosen shared subnet.

gcloud

  1. If you have not already, authenticate to gcloud as a Service Project Admin. Replace SERVICE_PROJECT_ADMIN with the name of the Service Project Admin:

    gcloud auth login SERVICE_PROJECT_ADMIN
    
  2. Run the compute addresses create command.

    • Reserve IPv4 addresses:

      gcloud compute addresses create IP_ADDR_NAME \
          --project SERVICE_PROJECT_ID \
          --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
          --region=REGION \
          --ip-version=IPV4
      
    • Reserve IPv6 addresses:

      gcloud compute addresses create IP_ADDR_NAME \
          --project SERVICE_PROJECT_ID \
          --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
          --region=REGION \
          --ip-version=IPV6
      

      Replace the following:

      • IP_ADDR_NAME: a name for the IP address object.
      • SERVICE_PROJECT_ID: the ID of the service project.
      • HOST_PROJECT_ID: the ID of the Shared VPC host project.
      • REGION: the region that contains the shared subnet.
      • SUBNET: the name of the shared subnet.

Additional details for creating IP addresses are published in the SDK documentation.
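Optionally, you can confirm the reservation from the service project. The following sketch uses the same placeholders as the previous step:

    gcloud compute addresses describe IP_ADDR_NAME \
        --project SERVICE_PROJECT_ID \
        --region REGION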

API

Reserve a static internal IPv4 address as a Service Project Admin.

POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/regions/REGION/addresses
{
  "name": "ADDRESS_NAME",
  "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME",
  "addressType": "INTERNAL"
}

Replace the placeholders with valid values:

  • ADDRESS_NAME is a name for the reserved internal IP address.
  • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
  • REGION is the region where the reserved IPv4 address will be located and where the shared subnet is located.
  • SERVICE_PROJECT_ID is the ID of the service project where you are reserving the IPv4 address.
  • SUBNET_NAME is the name of the shared subnet.

For more information, refer to the addresses.insert method.

Terraform

You can use a Terraform data block to specify the host subnet information. Then use a Terraform resource to reserve a static internal IPv4 address. If you omit the optional address argument, an available IPv4 address is selected and reserved.

Specify the host subnet:

data "google_compute_subnetwork" "subnet" {
  name    = "my-subnet-123"
  project = var.project
  region  = "us-central1"
}

Reserve an IPv4 address from the host project's subnet to use in the service project:

resource "google_compute_address" "internal" {
  project      = var.service_project
  region       = "us-central1"
  name         = "int-ip"
  address_type = "INTERNAL"
  address      = "10.0.0.8"
  subnetwork   = data.google_compute_subnetwork.subnet.self_link
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Create an instance

Keep the following in mind when creating an instance using Shared VPC:

  • The standard process for creating an instance involves selecting a zone, a network, and a subnet. When a Service Project Admin creates an instance using a subnet from a Shared VPC network, the zone selected for that instance must be in the same region as the selected subnet.

    • When creating an instance with a reserved static internal IPv4 address, the subnet (and region) were already selected when the static IPv4 address was created. A gcloud example for creating an instance with a static internal IPv4 address is given in this section.
  • Service Project Admins can only create instances using subnets to which they have been granted permission. See listing available subnets to determine which subnets are available.

  • When Google Cloud receives a request to create an instance in a subnet of a Shared VPC network, it checks to see if the IAM principal making the request has permission to use that shared subnet. If the check fails, the instance will not be created, and Google Cloud will return a permissions error. Contact the Shared VPC Admin for assistance.

  • You can create a dual-stack instance if you create the instance in a dual-stack subnet. Dual-stack subnets are supported on custom mode VPC networks only. The IPv6 access type of the subnet determines whether the IPv6 address assigned to the VM is an internal or external IPv6 address.

    To create dual-stack instances in a shared subnet, use the Google Cloud CLI or the API. You can't create a dual-stack instance in a shared subnet with the Google Cloud console. A sketch of the host-side subnet configuration follows this list.
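The dual-stack subnet itself is configured in the host project, typically by an administrator with network permissions on the host project rather than by a Service Project Admin. The following is a minimal sketch of converting an existing custom mode subnet to dual stack; the INTERNAL access type is only one of the possible choices, and all values are placeholders:

    gcloud compute networks subnets update SUBNET \
        --project HOST_PROJECT_ID \
        --region REGION \
        --stack-type IPV4_IPV6 \
        --ipv6-access-type INTERNAL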

Console

  1. Go to the VM instances page in the Google Cloud console.
    Go to the VM instances page
  2. Click Create.
  3. Specify a Name for the instance.
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Networking.
  6. Under Network interfaces, click the default network.
  7. Click the Networks shared with me radio button.
  8. Select the Shared subnet where you want to create the instance.
  9. Specify any other necessary parameters for the instance.
  10. Click Create.

gcloud

  • To create an instance with an ephemeral internal IPv4 address in a shared subnet of a Shared VPC network:

    gcloud compute instances create INSTANCE_NAME \
    --project SERVICE_PROJECT_ID \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --zone ZONE
    

    Where you would replace the following:

    • INSTANCE_NAME with the name of the instance
    • SERVICE_PROJECT_ID with the ID of the service project
    • HOST_PROJECT_ID with the ID of the Shared VPC host project
    • REGION with the region containing the shared subnet
    • SUBNET with the name of the shared subnet
    • ZONE with a zone in the specified region
  • To create an instance with a reserved static internal IPv4 address in a Shared VPC network:

    1. Reserve a static internal IPv4 address in the host project.
    2. Create the instance:

      gcloud compute instances create INSTANCE_NAME \
      --project SERVICE_PROJECT_ID \
      --private-network-ip IP_ADDR_NAME \
      --zone ZONE \
      --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET
      

      Where you would replace the following:

      • INSTANCE_NAME with the name of the instance
      • SERVICE_PROJECT_ID with the ID of the service project
      • HOST_PROJECT_ID with the ID of the Shared VPC host project
      • IP_ADDR_NAME with the name of the static IP address
      • ZONE with a zone in the same region as IP_ADDR_NAME
      • SUBNET with the name of the shared subnet that's associated with the static internal IPv4 address.
  • To create an instance with an ephemeral internal IPv4 address and an ephemeral IPv6 address:

    gcloud compute instances create INSTANCE_NAME \
    --project SERVICE_PROJECT_ID \
    --stack-type IPV4_IPV6 \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --zone ZONE
    

    Where you would replace the following:

    • INSTANCE_NAME with the name of the instance
    • SERVICE_PROJECT_ID with the ID of the service project
    • HOST_PROJECT_ID with the ID of the Shared VPC host project
    • REGION with the region containing the shared subnet
    • SUBNET with the name of the shared subnet
    • ZONE with a zone in the specified region

API

  • To create an instance with an ephemeral internal IPv4 address, specify only the subnet.

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/zones/ZONE/instances
    {
      "machineType": "MACHINE_TYPE",
      "name": "INSTANCE_NAME",
      "networkInterfaces": [
        {
          "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME"
        }
      ],
      "disks": [
        {
          "boot": true,
          "initializeParams": {
            "sourceImage": "SOURCE_IMAGE"
          }
        }
      ]
    }
    

    Replace the placeholders with valid values:

    • INSTANCE_NAME is a name for the instance.
    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • MACHINE_TYPE is a machine type for the instance.
    • REGION is the region containing the shared subnet.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • SOURCE_IMAGE is an image for the instance.
    • SUBNET_NAME is the name of the shared subnet.
    • ZONE is a zone in the specified region.

    For more information, refer to the instances.insert method.

  • To create an instance with a reserved internal IPv4 address, specify the subnet and the name of the reserved IPv4 address.

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/zones/ZONE/instances
    {
      "machineType": "MACHINE_TYPE",
      "name": "INSTANCE_NAME",
      "networkInterfaces": [
        {
          "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME",
          "networkIP": "projects/SERVICE_PROJECT_ID/regions/REGION/addresses/ADDRESS_NAME"
        }
      ],
      "disks": [
        {
          "boot": true,
          "initializeParams": {
            "sourceImage": "SOURCE_IMAGE"
          }
        }
      ]
    }
    

    Replace the placeholders with valid values:

    • ADDRESS_NAME is the name of the reserved internal IPv4 address.
    • INSTANCE_NAME is a name for the instance.
    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • MACHINE_TYPE is a machine type for the instance.
    • REGION is the region containing the shared subnet.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • SOURCE_IMAGE is an image for the instance.
    • SUBNET_NAME is the name of the shared subnet.
    • ZONE is a zone in the specified region.

    For more information, refer to the instances.insert method.

  • To create an instance with an ephemeral internal IPv4 address and an ephemeral IPv6 address, specify the subnet and the stack type:

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/zones/ZONE/instances
    {
      "machineType": "MACHINE_TYPE",
      "name": "INSTANCE_NAME",
      "networkInterfaces": [
        {
          "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME",
          "stackType": "IPv4_IPv6"
        }
      ],
      "disks": [
        {
          "boot": true,
          "initializeParams": {
            "sourceImage": "SOURCE_IMAGE"
          }
        }
      ]
    }
    

    Replace the placeholders with valid values:

    • INSTANCE_NAME is a name for the instance.
    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • MACHINE_TYPE is a machine type for the instance.
    • REGION is the region containing the shared subnet.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • SOURCE_IMAGE is an image for the instance.
    • SUBNET_NAME is the name of the shared subnet.
    • ZONE is a zone in the specified region.

    For more information, refer to the instances.insert method.

Terraform

You can use a Terraform data block to specify the host subnet information. Then use a Terraform resource to create a VM instance in a service project.

Specify the host subnet:

data "google_compute_subnetwork" "subnet" {
  name    = "my-subnet-123"
  project = var.project
  region  = "us-central1"
}

Create a VM instance in a service project with an ephemeral IPv4 address from the host project's shared subnet:

resource "google_compute_instance" "ephemeral_ip" {
  project      = var.service_project
  zone         = "us-central1-a"
  name         = "my-vm"
  machine_type = "e2-medium"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    subnetwork = data.google_compute_subnetwork.subnet.self_link
  }
}

Create a VM instance in a service project with a reserved static IPv4 address from the host project's shared subnet:

resource "google_compute_instance" "reserved_ip" {
  project      = var.service_project
  zone         = "us-central1-a"
  name         = "my-vm-static-ip" # Replace with a name for the VM
  machine_type = "e2-medium"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    subnetwork = data.google_compute_subnetwork.subnet.self_link
    network_ip = google_compute_address.internal.address
  }
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Create an instance template

Keep the following in mind when creating an instance template using Shared VPC:

  • The process for creating an instance template involves selecting a network and a subnet.

    • Templates created for use in a custom mode Shared VPC network must specify both the network and a subnet.

    • Templates created for use in an auto mode Shared VPC network may optionally defer selecting a subnet. In these cases, a subnet will be automatically selected in the same region as any managed instance group that uses the template. (Auto mode networks have a subnet in every region by definition.)

  • When an IAM principal creates an instance template, Google Cloud does not perform a permissions check to see if the principal can use the specified subnet. This permissions check is always deferred to when a managed instance group using the template is requested.

  • You can create a dual-stack instance template if you create the template in a dual-stack subnet. Dual-stack subnets are supported on custom mode VPC networks only. The IPv6 access type of the subnet determines whether the IPv6 address assigned to the VM is an internal or external IPv6 address.

    To create a dual-stack instance template in a shared subnet, use the Google Cloud CLI or the API. You can't create a dual-stack instance template in a shared subnet with the Google Cloud console.

Console

  1. Go to the Instance templates page in the Google Cloud console.
    Go to the Instance templates page
  2. Click Create instance template.
  3. Specify a Name for the instance template.
  4. Click Management, security, disks, networking, sole tenancy.
  5. Click Networking.
  6. Under Network interfaces, click the default network.
  7. Click the Networks shared with me radio button.
  8. Select the Shared subnet where you want to create the instance template.
  9. Specify any other necessary parameters for the instance template.
  10. Click Create.

gcloud

  • To create an IPv4-only instance template for use in any automatically-created subnet of an auto mode Shared VPC network:

    gcloud compute instance-templates create TEMPLATE_NAME \
    --project SERVICE_PROJECT_ID \
    --network projects/HOST_PROJECT_ID/global/networks/NETWORK
    

    Where you would replace the following:

    • TEMPLATE_NAME with the name of the template
    • SERVICE_PROJECT_ID with the ID of the service project
    • HOST_PROJECT_ID with the ID of the Shared VPC host project
    • NETWORK with the name of the Shared VPC network
  • To create an IPv4-only instance template for a manually-created subnet in a Shared VPC network (either auto or custom mode):

    gcloud compute instance-templates create TEMPLATE_NAME \
    --project SERVICE_PROJECT_ID \
    --region REGION \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET
    

    Where you would replace the following:

    • TEMPLATE_NAME with the name of the template
    • SERVICE_PROJECT_ID with the ID of the service project
    • HOST_PROJECT_ID with the ID of the Shared VPC host project
    • REGION with the region containing the shared subnet
    • SUBNET with the name of the shared subnet
  • To create a dual-stack instance template that uses a subnet in a custom mode Shared VPC network:

    gcloud compute instance-templates create TEMPLATE_NAME \
    --project SERVICE_PROJECT_ID \
    --stack-type IPV4_IPV6 \
    --region REGION \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET
    

    Where you would replace the following:

    • TEMPLATE_NAME with the name of the template
    • SERVICE_PROJECT_ID with the ID of the service project
    • HOST_PROJECT_ID with the ID of the Shared VPC host project
    • REGION with the region containing the shared subnet
    • SUBNET with the name of the shared subnet

API

  • To create an IPv4-only instance template that uses any automatically-created subnet of an auto mode Shared VPC network, specify the VPC network.

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/global/instanceTemplates
    {
      "properties": {
        "networkInterfaces": [
          {
            "network": "projects/HOST_PROJECT_ID/global/networks/NETWORK"
          }
        ],
        ...other template properties
      }
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • NETWORK is the name of the Shared VPC network.

    For more information, refer to the instanceTemplates.insert method.

  • To create an IPv4-only instance template that uses a manually-created subnet in a Shared VPC network (auto or custom mode), specify the subnet.

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/global/instanceTemplates
    {
      "properties": {
        "networkInterfaces": [
          {
            "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME"
          }
        ],
        ...
      }
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • REGION is the region containing the shared subnet.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • SUBNET_NAME is the name of the shared subnet.

    For more information, refer to the instanceTemplates.insert method.

  • To create a dual-stack instance template that uses a subnet in a custom mode Shared VPC network, specify the subnet and the stack type.

    POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/global/instanceTemplates
    {
      "properties": {
        "networkInterfaces": [
          {
            "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME",
            "stackType": "IPV4_IPV6"
          }
        ],
        ...
      }
    }
    

    Replace the placeholders with valid values:

    • HOST_PROJECT_ID is the ID of the project that contains the Shared VPC network.
    • REGION is the region containing the shared subnet.
    • SERVICE_PROJECT_ID is the ID of the service project.
    • SUBNET_NAME is the name of the shared subnet.

    For more information, refer to the instanceTemplates.insert method.
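
If you call the API directly, one way to send these requests is with curl and a gcloud access token. The following sketch completes the dual-stack request body with an assumed machine type and boot image; e2-medium and the Debian 12 image family are illustrative choices, not requirements:

# Illustrative request body; the machine type and source image are assumptions.
cat > template.json <<'EOF'
{
  "name": "TEMPLATE_NAME",
  "properties": {
    "machineType": "e2-medium",
    "disks": [
      {
        "boot": true,
        "autoDelete": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        }
      }
    ],
    "networkInterfaces": [
      {
        "subnetwork": "projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME",
        "stackType": "IPV4_IPV6"
      }
    ]
  }
}
EOF

# Send the request authenticated with your gcloud credentials.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @template.json \
    "https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/global/instanceTemplates"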

Terraform

You can use a Terraform data block to specify the host subnet information. Then use a Terraform resource to create a VM instance template. The IPv4 addresses for the VMs come from the host project's shared subnet. In the following snippets, var.project is the ID of the Shared VPC host project and var.service_project is the ID of the service project.

The subnet must exist in the same region where the VM instances will be created.

Specify the host subnet:

data "google_compute_subnetwork" "subnet" {
  name    = "my-subnet-123"
  project = var.project
  region  = "us-central1"
}

Create a VM instance template in the service project:

resource "google_compute_instance_template" "default" {
  project      = var.service_project
  name         = "appserver-template"
  description  = "This template is used to create app server instances."
  machine_type = "n1-standard-1"
  disk {
    source_image = "debian-cloud/debian-9"
  }
  network_interface {
    subnetwork = data.google_compute_subnetwork.subnet.self_link
  }
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

Create a managed instance group

Keep the following in mind when creating a managed instance group using Shared VPC:

  • Managed instance groups used with Shared VPC require making the Google APIs service account a Service Project Admin because tasks like automatic instance creation via autoscaling are performed by that service account.

  • The standard process for creating a managed instance group involves selecting a zone or region, depending on the group type, and an instance template. (Network and subnet details are tied to the instance template.) Eligible instance templates are restricted to those that reference subnets in the same region used by the managed instance group.

  • Service Project Admins can only create managed instance groups whose member instances use subnets to which they have been granted permission. Because the network and subnet details are tied to the instance template, Service Project Admins can only use templates that reference subnets that they are authorized to use.

  • When Google Cloud receives a request to create a managed instance group, it checks whether the IAM principal making the request has permission to use the subnet (in the same region as the group) specified in the instance template. If the check fails, the managed instance group is not created, and Google Cloud returns an error: Required 'compute.subnetworks.use' permission for 'projects/SUBNET_NAME'.

    List available subnets to determine which ones can be used (a gcloud sketch follows below), and contact the Shared VPC Admin if the service account needs additional access. For more information, see Service Accounts as Service Project Admins.

For more information, refer to Creating Groups of Managed Instances in the Compute Engine documentation.
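
The notes above map to a few concrete commands. The following is a hedged gcloud sketch, not the complete official procedure: it grants the service project's Google APIs service account access to one shared subnet, lists the subnets that are usable from the service project, and creates a regional managed instance group from a template that references a shared subnet. The group name and size are illustrative.

# Grant the Google APIs service account of the service project the
# Network User role on the shared subnet (run by a Shared VPC Admin).
gcloud compute networks subnets add-iam-policy-binding SUBNET \
    --project HOST_PROJECT_ID \
    --region REGION \
    --member "serviceAccount:SERVICE_PROJECT_NUMBER@cloudservices.gserviceaccount.com" \
    --role "roles/compute.networkUser"

# List the subnets that can be used from the service project.
gcloud compute networks subnets list-usable \
    --project SERVICE_PROJECT_ID

# Create a regional managed instance group from an instance template that
# references a shared subnet in the same region (name and size are illustrative).
gcloud compute instance-groups managed create example-mig \
    --project SERVICE_PROJECT_ID \
    --region REGION \
    --size 2 \
    --template TEMPLATE_NAME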

Create an HTTP(S) load balancer

There are many ways to configure HTTP(S) Load Balancing within a Shared VPC network. Regardless of the type of deployment, all the components of the load balancer must be in the same organization and the same Shared VPC network.

To learn more about supported Shared VPC architectures, see Shared VPC architectures in the Cloud Load Balancing documentation.

Create an internal TCP/UDP load balancer

The following example illustrates what you must consider when creating an internal TCP/UDP load balancer in a Shared VPC network. Service Project Admins can create an internal TCP/UDP load balancer that uses a subnet (in the host project) to which they have access. The load balancer's internal forwarding rule is defined in the service project, but its subnet reference points to a subnet in a Shared VPC network of the host project.

Before you create an internal TCP/UDP load balancer in a Shared VPC environment, see Shared VPC architecture.

Console

  1. Go to the Load balancing page in the Google Cloud console.
    Go to the Load balancing page

  2. Create your internal TCP/UDP load balancer, making the following adjustment: In the Configure frontend services section, select the Shared VPC subnet you need from the Networks shared by other projects section of the Subnet menu.

  3. Finish creating the load balancer.

gcloud

When you create the internal forwarding rule, specify a subnet in the host project with the --subnet flag:

gcloud compute forwarding-rules create FR_NAME \
    --project SERVICE_PROJECT_ID \
    --load-balancing-scheme internal \
    --region REGION \
    --ip-protocol IP_PROTOCOL \
    --ports PORT,PORT,... \
    --backend-service BACKEND_SERVICE_NAME \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --address INTERNAL_IP

Where you would replace the following:

  • FR_NAME with the name of the forwarding rule
  • SERVICE_PROJECT_ID with the ID of the service project
  • REGION with the region containing the shared subnet
  • IP_PROTOCOL with either TCP or UDP, matching the protocol of the load balancer's backend service
  • PORT with the numeric port or list of ports for the load balancer
  • BACKEND_SERVICE_NAME with the name of the backend service (created already as part of the general procedure for creating an internal TCP/UDP load balancer)
  • HOST_PROJECT_ID with the ID of the Shared VPC host project
  • SUBNET with the name of the shared subnet
  • INTERNAL_IP with an internal IP address in the shared subnet (if unspecified, an available one will be selected)

For more options, see the gcloud compute forwarding-rules create reference.
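
The backend service referenced by BACKEND_SERVICE_NAME must already exist in the service project before you create the forwarding rule. If you have not created it yet, the following is a minimal sketch; the health check name and the example-mig backend are assumptions, not names from this procedure:

# Regional TCP health check in the service project (name is illustrative).
gcloud compute health-checks create tcp example-hc \
    --project SERVICE_PROJECT_ID \
    --region REGION \
    --port 80

# Internal backend service that the forwarding rule references.
gcloud compute backend-services create BACKEND_SERVICE_NAME \
    --project SERVICE_PROJECT_ID \
    --load-balancing-scheme internal \
    --protocol IP_PROTOCOL \
    --region REGION \
    --health-checks example-hc \
    --health-checks-region REGION

# Attach an existing instance group in the service project as a backend.
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --project SERVICE_PROJECT_ID \
    --region REGION \
    --instance-group example-mig \
    --instance-group-region REGION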

API

Create the internal forwarding rule and specify a subnet in the host project.

POST https://compute.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/regions/REGION/forwardingRules
{
  "name": "FR_NAME",
  "IPAddress": "IP_ADDRESS",
  "IPProtocol": "PROTOCOL",
  "ports": [ "PORT", ... ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET",
  "network": "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME",
  "backendService": "https://www.googleapis.com/compute/v1/projects/SERVICE_PROJECT_ID/regions/us-west1/backendServices/BE_NAME",
  "networkTier": "PREMIUM"
}

Replace the placeholders with valid values:

  • BE_NAME is the name of the backend service (created already as part of the general procedure for creating an internal TCP/UDP load balancer).
  • FR_NAME is a name for the forwarding rule.
  • HOST_PROJECT_ID is the ID of the Shared VPC host project.
  • IP_ADDRESS is an internal IP address in the shared subnet.
  • IP_PROTOCOL is either TCP or UDP, matching the protocol of the load balancer's backend service.
  • PORT is the numeric port or list of ports for the load balancer.
  • REGION is the region that contains the shared subnet.
  • SERVICE_PROJECT_ID is the ID of the service project.
  • SUBNET is the name of the shared subnet.

For more information, refer to the forwardingRules.insert method.

Terraform

You can use a Terraform data block to specify the host subnet and host network. Then use a Terraform resource to create the forwarding rule.

Specify the host network:

data "google_compute_network" "network" {
  name    = "my-network-123"
  project = var.project
}

Specify the host subnet:

data "google_compute_subnetwork" "subnet" {
  name    = "my-subnet-123"
  project = var.project
  region  = "us-central1"
}

In the service project, create a forwarding rule in the host project's network and subnet:

resource "google_compute_forwarding_rule" "default" {
  project               = var.service_project
  name                  = "l4-ilb-forwarding-rule"
  backend_service       = google_compute_region_backend_service.default.id
  region                = "us-central1"
  ip_protocol           = "TCP"
  load_balancing_scheme = "INTERNAL"
  all_ports             = true
  allow_global_access   = true
  network               = data.google_compute_network.network.self_link
  subnetwork            = data.google_compute_subnetwork.subnet.self_link
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

What's next