Setting up clusters with Shared VPC


This guide shows how to create two Google Kubernetes Engine (GKE) clusters, in separate projects, that use a Shared VPC. For general information about GKE networking, visit the Network overview.

Overview

With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with Autopilot clusters and with zonal and regional Standard clusters.

Standard clusters that use Shared VPC cannot use legacy networks and must have VPC-native traffic routing enabled. Autopilot clusters always enable VPC-native traffic routing.

You can configure Shared VPC when you create a new cluster. GKE does not support converting existing clusters to the Shared VPC model.

With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.
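
For example, if you want to see how close the host project is to its network quota before you begin, one option is to list the project's Compute Engine quotas with the gcloud CLI. This is a minimal check; the quota names shown depend on your project:

gcloud compute project-info describe --project HOST_PROJECT_ID \
    --flatten="quotas[]" \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"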

About the examples

The examples in this guide set up the infrastructure for a two-tier web application, as described in Shared VPC overview.

Before you begin

Before you perform the exercises in this guide:

  • Choose one of your projects to be the host project.
  • Choose two of your projects to be service projects.

Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This guide uses the following friendly names and placeholders to refer to your projects:

Friendly name                Project ID placeholder    Project number placeholder
Your host project            HOST_PROJECT_ID           HOST_PROJECT_NUM
Your first service project   SERVICE_PROJECT_1_ID      SERVICE_PROJECT_1_NUM
Your second service project   SERVICE_PROJECT_2_ID      SERVICE_PROJECT_2_NUM

Finding your project IDs and numbers

You can find your project IDs and numbers by using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Home page of the Google Cloud console.

    Go to the Home page

  2. In the project picker, select the project that you have chosen to be the host project.

  3. Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.

  4. Do the same for each of the projects that you have chosen to be service projects.

gcloud

List your projects with the following command:

gcloud projects list

The output shows your project names, IDs, and numbers. Make a note of the ID and number for later:

PROJECT_ID        NAME        PROJECT_NUMBER
host-123          host        1027xxxxxxxx
srv-1-456         srv-1       4964xxxxxxxx
srv-2-789         srv-2       4559xxxxxxxx

Enabling the GKE API in your projects

Before you continue with the exercises in this guide, make sure that the GKE API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To perform the remaining tasks in this guide, each of your projects must have a GKE service account.

You can enable the GKE API using the Google Cloud console or the Google Cloud CLI.

Console

  1. Go to the APIs & Services page in the Google Cloud console.

    Go to APIs & Services

  2. In the project picker, select the project that you have chosen to be the host project.

  3. If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.

  4. Repeat these steps for each project that you have chosen to be a service project. Each operation may take some time to complete.

gcloud

Enable the GKE API for your three projects. Each operation may take some time to complete:

gcloud services enable container.googleapis.com --project HOST_PROJECT_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_1_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_2_ID

Creating a network and two subnets

In this section, you will perform the following tasks:

  1. In your host project, create a network named shared-net.
  2. Create two subnets named tier-1 and tier-2.
  3. For each subnet, create two secondary address ranges: one for Services, and one for Pods.

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the project picker, select your host project.

  3. Click Create VPC Network.

  4. For Name, enter shared-net.

  5. Under Subnet creation mode, select Custom.

  6. In the New subnet box, for Name, enter tier-1.

  7. For Region, select a region.

  8. Under IP stack type, select IPv4 (single-stack).

  9. For IPv4 range, enter 10.0.4.0/22.

  10. Click Create secondary IPv4 range. For Subnet range name, enter tier-1-services, and for Secondary IPv4 range, enter 10.0.32.0/20.

  11. Click Add IP range. For Subnet range name, enter tier-1-pods, and for Secondary IPv4 range, enter 10.4.0.0/14.

  12. Click Add subnet.

  13. For Name, enter tier-2.

  14. For Region, select the same region that you selected for the previous subnet.

  15. For IPv4 range, enter 172.16.4.0/22.

  16. Click Create secondary IPv4 range. For Subnet range name, enter tier-2-services, and for Secondary IPv4 range, enter 172.16.16.0/20.

  17. Click Add IP range. For Subnet range name, enter tier-2-pods, and for Secondary IPv4 range, enter 172.20.0.0/14.

  18. Click Create.

gcloud

In your host project, create a network named shared-net:

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project HOST_PROJECT_ID

In your new network, create a subnet named tier-1:

gcloud compute networks subnets create tier-1 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14

Create another subnet named tier-2:

gcloud compute networks subnets create tier-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14

Replace COMPUTE_REGION with a Compute Engine region.
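
To confirm that the subnets and their secondary ranges were created as expected, you can describe each subnet. A quick check, for example:

gcloud compute networks subnets describe tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION \
    --format="flattened(ipCidrRange,secondaryIpRanges)"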

Determining the names of service accounts in your service projects

You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

The following table lists the names of the GKE and Google APIs service accounts in your two service projects:

Service account type   Service account name
GKE                    service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
                       service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
Google APIs            SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
                       SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
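
If you prefer to build these names from the command line instead of copying them from the table, the following sketch derives them from a project ID (the variable name is just an example):

# Look up the project number for the first service project.
SERVICE_PROJECT_1_NUM=$(gcloud projects describe SERVICE_PROJECT_1_ID \
    --format="value(projectNumber)")

# Print the GKE and Google APIs service account names.
echo "service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com"
echo "${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com"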

Enabling Shared VPC and granting roles

To perform the tasks in this section, ensure that your organization has defined a Shared VPC Admin role.

In this section, you will perform the following tasks:

  1. In your host project, enable Shared VPC.
  2. Attach your two service projects to the host project.
  3. Grant the appropriate IAM roles to service accounts that belong to your service projects:
    • In your first service project, grant two service accounts the Compute Network User role on the tier-1 subnet of your host project.
    • In your second service project, grant two service accounts the Compute Network User role on the tier-2 subnet of your host project.

Console

Perform the following steps to enable Shared VPC, attach service projects, and grant roles:

  1. Go to the Shared VPC page in the Google Cloud console.

    Go to Shared VPC

  2. In the project picker, select your host project.

  3. Click Set up Shared VPC. The Enable host project screen displays.

  4. Click Save & continue. The Select subnets page displays.

  5. Under Sharing mode, select Individual subnets.

  6. Under Subnets to share, check tier-1 and tier-2. Clear all other checkboxes.

  7. Click Continue. The Give permissions page displays.

  8. Under Attach service projects, check your first service project and your second service project. Clear all the other checkboxes under Attach service projects.

  9. Under Kubernetes Engine access, check Enabled.

  10. Click Save. A new page displays.

  11. Under Individual subnet permissions, check tier-1.

  12. In the right pane, delete any service accounts that belong to your second service project. That is, delete any service accounts that contain SERVICE_PROJECT_2_NUM.

  13. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your first service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.

  14. In the center pane, under Individual subnet permissions, check tier-2, and clear tier-1.

  15. In the right pane, delete any service accounts that belong to your first service project. That is, delete any service accounts that contain SERVICE_PROJECT_1_NUM.

  16. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your second service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.

gcloud

  1. Enable Shared VPC in your host project. The command that you use depends on the required administrative role that you have.

    If you have Shared VPC Admin role at the organizational level:

    gcloud compute shared-vpc enable HOST_PROJECT_ID
    

    If you have Shared VPC Admin role at the folder level:

    gcloud beta compute shared-vpc enable HOST_PROJECT_ID
    
  2. Attach your first service project to your host project:

    gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_1_ID \
        --host-project HOST_PROJECT_ID
    
  3. Attach your second service project to your host project:

    gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_2_ID \
        --host-project HOST_PROJECT_ID
    
  4. Get the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets get-iam-policy tier-1 \
       --project HOST_PROJECT_ID \
       --region COMPUTE_REGION
    

    The output contains an etag field. Make a note of the etag value.

  5. Create a file named tier-1-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
      - serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ETAG_STRING
    

    Replace ETAG_STRING with the etag value that you noted previously.

  6. Set the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets set-iam-policy tier-1 \
        tier-1-policy.yaml \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION
    
  7. Get the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets get-iam-policy tier-2 \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION
    

    The output contains an etag field. Make a note of the etag value.

  8. Create a file named tier-2-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
      - serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ETAG_STRING
    

    Replace ETAG_STRING with the etag value that you noted previously.

  9. Set the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets set-iam-policy tier-2 \
        tier-2-policy.yaml \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION
    

Managing firewall resources

If you want a GKE cluster in a service project to create and manage the firewall resources in your host project, the service project's GKE service account must be granted the appropriate IAM permissions using one of the following strategies:

  • Grant the service project's GKE service account the Compute Security Admin role within the host project.

Console

  1. In the Google Cloud console, go to the IAM page.

    Go to IAM

  2. Select the host project.

  3. Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

  4. Select the Compute Security Admin role from the drop-down list.

  5. Click Save.

gcloud

Grant the service project's GKE service account the Compute Security Admin role within the host project:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role=roles/compute.securityAdmin

Replace the following:

  • HOST_PROJECT_ID: the shared VPC host project ID
  • SERVICE_PROJECT_NUM: the project number of the service project containing the GKE service account

  • For a finer-grained approach, create a custom IAM role that includes only the following permissions: compute.networks.updatePolicy, compute.firewalls.list, compute.firewalls.get, compute.firewalls.create, compute.firewalls.update, and compute.firewalls.delete. Grant the service project's GKE service account that custom role within the host project.

Console

Create a custom role within the host project containing the IAM permissions mentioned earlier:

  1. In the Google Cloud console, go to the Roles page.

    Go to the Roles page

  2. Using the drop-down list at the top of the page, select the host project.

  3. Click Create Role.

  4. Enter a Title, Description, ID, and Role launch stage for the role. The role name cannot be changed after the role is created.

  5. Click Add Permissions.

  6. Filter for compute.networks and compute.firewalls, and select the IAM permissions mentioned previously.

  7. Once all required permissions are selected, click Add.

  8. Click Create.

Grant the service project's GKE service account the newly created custom role within the host project:

  1. In the Google Cloud console, go to the IAM page.

    Go to IAM

  2. Select the host project.

  3. Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

  4. Filter for the Title of the newly created custom role and select it.

  5. Click Save.

gcloud

  1. Create a custom role within the host project containing the IAM permissions mentioned earlier:

    gcloud iam roles create ROLE_ID \
        --title="ROLE_TITLE" \
        --description="ROLE_DESCRIPTION" \
        --stage=LAUNCH_STAGE \
        --permissions=compute.networks.updatePolicy,compute.firewalls.list,compute.firewalls.get,compute.firewalls.create,compute.firewalls.update,compute.firewalls.delete \
        --project=HOST_PROJECT_ID
    
  2. Grant the service project's GKE service account the newly created custom role within the host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role=projects/HOST_PROJECT_ID/roles/ROLE_ID
    

    Replace the following:

    • ROLE_ID: the name of the role, such as gkeFirewallAdmin
    • ROLE_TITLE: a friendly title for the role, such as GKE Firewall Admin
    • ROLE_DESCRIPTION: a short description of the role, such as GKE service account FW permissions
    • LAUNCH_STAGE: the launch stage of the role in its lifecycle, such as ALPHA, BETA, or GA
    • HOST_PROJECT_ID: the shared VPC host project ID
    • SERVICE_PROJECT_NUM: the project number of the service project containing the GKE service account

If you have clusters in more than one service project, you must choose one of the strategies and repeat it for each service project's GKE service account.

Summary of roles granted on subnets

Here's a summary of the roles granted on the subnets:

Service account                                                                Role                  Subnet
service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-1
SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com                       Compute Network User  tier-1
service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-2
SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com                       Compute Network User  tier-2
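
To double-check this configuration from the gcloud CLI, you can list the resources attached to the host project and re-read each subnet's IAM policy. For example:

gcloud compute shared-vpc list-associated-resources HOST_PROJECT_ID

gcloud compute networks subnets get-iam-policy tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION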

Kubernetes Engine access

When attaching a service project, enabling Kubernetes Engine access grants the service project's GKE service account the permissions to perform network management operations in the host project.

If a service project was attached without enabling Kubernetes Engine access, and the Kubernetes Engine API has already been enabled in both the host and service projects, you can manually assign the permissions to the service project's GKE service account by adding the following IAM role bindings in the host project (a sketch of the first binding follows the table):

Member                                                                       Role                     Resource
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User     Specific subnet or whole host project
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Host Service Agent User  GKE service account in the host project
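
As a sketch of the first binding, granted on the whole host project rather than a specific subnet (the subnet-level variant uses the set-iam-policy commands shown earlier, and the Host Service Agent User binding is covered in the next section):

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/compute.networkUser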

Granting the Host Service Agent User role

Each service project's GKE service account must have a binding for the Host Service Agent User role on the host project. The GKE service account takes the following form:

service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com

Where SERVICE_PROJECT_NUM is the project number of your service project.

This binding allows the service project's GKE service account to perform network management operations in the host project, as if it were the host project's GKE service account. This role can only be granted to a service project's GKE service account.

Console

If you have been using the Google Cloud console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the Google Cloud console to attach service projects to your host project.

gcloud

  1. For your first project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
    
  2. For your second project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
    

Verifying usable subnets and secondary IP address ranges

When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services. There are several reasons that an IP address range might not be available for use. Whether you are creating the cluster with the Google Cloud console or the gcloud CLI, you should specify usable IP address ranges.

An IP address range is usable for the new cluster's Services if the range isn't already in use. The IP address range that you specify for the new cluster's Pods can either be an unused range, or it can be a range that's shared with Pods in your other clusters. IP address ranges that are created and managed by GKE can't be used by your cluster.

You can list a project's usable subnets and secondary IP address ranges by using the gcloud CLI.

gcloud

gcloud container subnets list-usable \
    --project SERVICE_PROJECT_ID \
    --network-project HOST_PROJECT_ID

Replace SERVICE_PROJECT_ID with the project ID of the service project.

If you omit the --project or --network-project option, the gcloud CLI command uses the default project from your active configuration. Because the service project and the host project are distinct, you must specify one or both of --project and --network-project.

The output is similar to the following:

PROJECT: xpn-host
REGION: REGION_NAME
NETWORK: shared-net
SUBNET: tier-2
RANGE: 172.16.4.0/22

SECONDARY_RANGE_NAME: tier-2-services
IP_CIDR_RANGE: 172.16.16.0/20
STATUS: usable for pods or services

SECONDARY_RANGE_NAME: tier-2-pods
IP_CIDR_RANGE: 172.20.0.0/14
STATUS: usable for pods or services

PROJECT: xpn-host
REGION: REGION_NAME
NETWORK: shared-net
SUBNET: tier-1
RANGE: 10.0.4.0/22

SECONDARY_RANGE_NAME: tier-1-services
IP_CIDR_RANGE: 10.0.32.0/20
STATUS: usable for pods or services

SECONDARY_RANGE_NAME: tier-1-pods
IP_CIDR_RANGE: 10.4.0.0/14
STATUS: usable for pods or services

The list-usable command returns an empty list in the following situations:

  • When the service project's Kubernetes Engine service account does not have the Host Service Agent User role on the host project.
  • When the Kubernetes Engine service account in the host project does not exist (for example, if you've deleted that account accidentally).
  • When the Kubernetes Engine API is not enabled in the host project, which implies that the Kubernetes Engine service account in the host project is missing.

For more information, see the troubleshooting section.

Notes about secondary ranges

You can create up to 30 secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for Pods and one for Services.
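
If you later add another cluster to the tier-1 subnet, you can add another pair of secondary ranges to the existing subnet. A minimal sketch, assuming the range names and CIDRs below are unused in your network:

gcloud compute networks subnets update tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION \
    --add-secondary-ranges extra-pods=10.8.0.0/14,extra-services=10.0.48.0/20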

Creating a cluster in your first service project

To create a cluster in your first service project, perform the following steps using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Click Create.

  4. In the Autopilot or Standard section, click Configure.

  5. For Name, enter tier-1-cluster.

  6. In the Region drop-down list, select the same region that you used for the subnets.

  7. From the navigation pane, click Networking.

  8. Select Networks shared with me (from host project).

  9. For Network, select shared-net.

  10. For Node subnet, select tier-1.

  11. For Pod secondary CIDR range, select tier-1-pods.

  12. For Services secondary CIDR range, select tier-1-services.

  13. Click Create.

  14. When the creation is complete, in the list of clusters, click tier-1-cluster.

  15. On the Cluster details page, click the Nodes tab.

  16. Under Node Pools, click the name of the node pool you want to inspect.

  17. Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.

  18. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud

Create a cluster named tier-1-cluster in your first service project:

gcloud container clusters create-auto tier-1-cluster \
    --project=SERVICE_PROJECT_1_ID \
    --location=COMPUTE_REGION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-1 \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud compute instances list --project SERVICE_PROJECT_1_ID

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-1-cluster-...  ZONE_NAME      ... 10.0.4.2
gke-tier-1-cluster-...  ZONE_NAME      ... 10.0.4.3
gke-tier-1-cluster-...  ZONE_NAME      ... 10.0.4.4
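
You can also confirm which secondary ranges the cluster is using. One way, assuming a current gcloud version that accepts the --location flag, is:

gcloud container clusters describe tier-1-cluster \
    --project SERVICE_PROJECT_1_ID \
    --location COMPUTE_REGION \
    --format="value(ipAllocationPolicy.clusterSecondaryRangeName,ipAllocationPolicy.servicesSecondaryRangeName)"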

Creating a cluster in your second service project

To create a cluster in your second service project, perform the following steps using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your second service project.

  3. Click Create.

  4. In the Standard or Autopilot section, click Configure.

  5. For Name, enter tier-2-cluster.

  6. In the Region drop-down list, select the same region that you used for the subnets.

  7. From the navigation pane, click Networking.

  8. Select Networks shared with me (from host project). Then, for Network, select shared-net.

  9. For Node subnet, select tier-2.

  10. For Pod secondary CIDR range, select tier-2-pods.

  11. For Services secondary CIDR range, select tier-2-services.

  12. Click Create.

  13. When the creation is complete, in the list of clusters, click tier-2-cluster.

  14. On the Cluster details page, click the Nodes tab.

  15. Under Node Pools, click the name of the node pool you want to inspect.

  16. Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.

  17. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud

Create a cluster named tier-2-cluster in your second service project:

gcloud container clusters create-auto tier-2-cluster \
    --project=SERVICE_PROJECT_2_ID \
    --location=COMPUTE_REGION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-2 \
    --cluster-secondary-range-name=tier-2-pods \
    --services-secondary-range-name=tier-2-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud compute instances list --project SERVICE_PROJECT_2_ID

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-2-cluster-...  ZONE_NAME      ... 172.16.4.2
gke-tier-2-cluster-...  ZONE_NAME      ... 172.16.4.3
gke-tier-2-cluster-...  ZONE_NAME      ... 172.16.4.4

Creating firewall rules

To allow traffic into the network and between the clusters within the network, you need to create firewall rules. The following sections demonstrate how to create and update firewall rules:

Creating a firewall rule to enable SSH connection to a node

In your host project, create a firewall rule for the shared-net network. Allow traffic to enter on TCP port 22, which permits you to connect to your cluster nodes using SSH.

Console

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the project picker, select your host project.

  3. From the VPC Networking menu, click Create Firewall Rule.

  4. For Name, enter my-shared-net-rule.

  5. For Network, select shared-net.

  6. For Direction of traffic, select Ingress.

  7. For Action on match, select Allow.

  8. For Targets, select All instances in the network.

  9. For Source filter, select IP ranges.

  10. For Source IP ranges, enter 0.0.0.0/0.

  11. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.

  12. Click Create.

gcloud

Create a firewall rule for your shared network:

gcloud compute firewall-rules create my-shared-net-rule \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22

Connecting to a node using SSH

After creating the firewall that allows ingress traffic on TCP port 22, connect to the node using SSH.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Click tier-1-cluster.

  4. On the Cluster details page, click the Nodes tab.

  5. Under Node Pools, click the name of your node pool.

  6. Under Instance groups, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.

  7. In the list of instances, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.

  8. For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.

gcloud

List the nodes in your first service project:

gcloud compute instances list --project SERVICE_PROJECT_1_ID

The output includes the names of the nodes in your cluster:

NAME                                           ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8  ...
gke-tier-1-cluster-default-pool-faf87d48-q17k  ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk  ...

Connect to one of your nodes using SSH:

gcloud compute ssh NODE_NAME \
    --project SERVICE_PROJECT_1_ID \
    --zone COMPUTE_ZONE

Replace the following:

  • NODE_NAME: the name of one of your nodes.
  • COMPUTE_ZONE: the name of a Compute Engine zone within the region.

Updating the firewall rule to ping between nodes

  1. In your SSH command-line window, start the CoreOS Toolbox:

    /usr/bin/toolbox
    
  2. In the toolbox shell, ping one of your other nodes in the same cluster. For example:

    ping 10.0.4.4
    

    The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

  3. Now, try to ping one of the nodes in the cluster in your other service project. For example:

    ping 172.16.4.3
    

    This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

  4. At an ordinary command prompt, not your toolbox shell, update your firewall rule to allow ICMP:

    gcloud compute firewall-rules update my-shared-net-rule \
        --project HOST_PROJECT_ID \
        --allow tcp:22,icmp
    
  5. In your toolbox shell, ping the node again. For example:

    ping 172.16.4.3
    

    This time the ping command succeeds.

Creating additional firewall rules

You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters.

For example, the following rule allows traffic to enter from any node, Pod, or Service in tier-1-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

The following rule allows traffic to enter from any node, Pod, or Service in tier-2-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20

Kubernetes will also try to create and manage firewall resources when necessary, for example when you create a load balancer service. If Kubernetes finds itself unable to change the firewall rules due to a permission issue, a Kubernetes Event will be raised to guide you on how to make the changes.

If you want to grant Kubernetes permission to change the firewall rules, see Managing firewall resources.

For Ingress Load Balancers, if Kubernetes can't change the firewall rules due to insufficient permission, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and later, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can always remove this annotation to unmute.
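
For example, assuming an Ingress named my-ingress (a hypothetical name), you could add and later remove the annotation with kubectl:

# Mute the firewallXPNError events for this Ingress.
kubectl annotate ingress my-ingress networking.gke.io/suppress-firewall-xpn-error="true"

# Remove the annotation to unmute.
kubectl annotate ingress my-ingress networking.gke.io/suppress-firewall-xpn-error-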

Creating a private cluster in a Shared VPC

You can use Shared VPC with private clusters.

This requires that you grant the following permissions on the host project, either to the user account or to the service account used to create the cluster (a sketch of one way to grant them follows the list):

  • compute.networks.get

  • compute.networks.updatePeering
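
One way to grant these two permissions is sketched here with a custom role. The role ID and title are just examples, and PRINCIPAL is a placeholder for whichever user or service account creates the cluster (for example, user:USER_EMAIL):

gcloud iam roles create gkeSharedVpcPeering \
    --project=HOST_PROJECT_ID \
    --title="GKE Shared VPC Peering" \
    --permissions=compute.networks.get,compute.networks.updatePeering

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=PRINCIPAL \
    --role=projects/HOST_PROJECT_ID/roles/gkeSharedVpcPeering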

You must also ensure that the control plane IP address range does not overlap with other reserved ranges in the shared network.

For private clusters created prior to January 15, 2020, the maximum number of private GKE clusters per VPC network is limited to the number of peering connections from a single VPC network. New private clusters reuse VPC Network Peering connections, which removes this limitation. To enable VPC Network Peering reuse on older private clusters, you can delete a cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.

In this section, you create a VPC-native cluster named private-cluster-vpc in a predefined shared VPC network.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Autopilot or Standard section, click Configure.

  4. For Name, enter private-cluster-vpc.

  5. From the navigation pane, click Networking.

  6. Select Private cluster.

  7. (Optional for Autopilot): Set Control plane IP range to 172.16.0.16/28.

  8. In the Network drop-down list, select the VPC network you created previously.

  9. In the Node subnet drop-down list, select the shared subnet you created previously.

  10. Configure your cluster as needed.

  11. Click Create.

gcloud

Run the following command to create a cluster named private-cluster-vpc in a predefined Shared VPC:

gcloud container clusters create-auto private-cluster-vpc \
    --project=PROJECT_ID \
    --location=COMPUTE_REGION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=SHARED_SUBNETWORK \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28

Reserving IP addresses

You can reserve internal and external IP addresses for your Shared VPC clusters. Ensure that the IP addresses are reserved in the service project.

For internal IP addresses, you must provide the subnetwork where the IP address belongs. To reserve an IP address across projects, use the full resource URL to identify the subnetwork.

You can use the following command in the Google Cloud CLI to reserve an internal IP address:

gcloud compute addresses create RESERVED_IP_NAME \
    --region=COMPUTE_REGION \
    --subnet=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNETWORK_NAME \
    --addresses=IP_ADDRESS \
    --project=SERVICE_PROJECT_ID

To call this command, you must have the compute.subnetworks.use permission on the subnetwork. You can either grant the caller the Compute Network User role (roles/compute.networkUser) on the subnetwork, or grant the caller a custom role with the compute.subnetworks.use permission at the project level.
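
For example, one way to grant the Compute Network User role on the subnetwork to the caller is the add-iam-policy-binding command (USER_EMAIL is a placeholder for the account that runs the reservation command):

gcloud compute networks subnets add-iam-policy-binding SUBNETWORK_NAME \
    --project=HOST_PROJECT_ID \
    --region=COMPUTE_REGION \
    --member=user:USER_EMAIL \
    --role=roles/compute.networkUser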

Cleaning up

After completing the exercises in this guide, perform the following tasks to remove the resources and prevent unwanted charges from accruing to your account:

Deleting the clusters

Delete the two clusters you created.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Select the tier-1-cluster, and click Delete.

  4. In the project picker, select your second service project.

  5. Select the tier-2-cluster, and click Delete.

gcloud

gcloud container clusters delete tier-1-cluster \
    --project SERVICE_PROJECT_1_ID \
    --location COMPUTE_REGION

gcloud container clusters delete tier-2-cluster \
    --project SERVICE_PROJECT_2_ID \
    --location COMPUTE_REGION

Disabling Shared VPC

Disable Shared VPC in your host project.

Console

  1. Go to the Shared VPC page in the Google Cloud console.

    Go to Shared VPC

  2. In the project picker, select your host project.

  3. Click Disable Shared VPC.

  4. Enter the HOST_PROJECT_ID in the field, and click Disable.

gcloud

gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_1_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_2_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc disable HOST_PROJECT_ID

Deleting your firewall rules

Remove the firewall rules you created.

Console

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the project picker, select your host project.

  3. In the list of rules, select my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.

  4. Click Delete.

gcloud

Delete your firewall rules:

gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project HOST_PROJECT_ID

Deleting the shared network

Delete the shared network you created.

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the project picker, select your host project.

  3. In the list of networks, select shared-net.

  4. Click Delete VPC Network.

gcloud

gcloud compute networks subnets delete tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks subnets delete tier-2 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks delete shared-net --project HOST_PROJECT_ID

Removing the Host Service Agent User role

Remove the Host Service Agent User roles from your two service projects.

Console

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. In the project picker, select your host project.

  3. In the list of members, select the row that shows service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  4. Select the row that shows service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  5. Click Remove access.

gcloud

  1. Remove the Host Service Agent User role from the GKE service account of your first service project:

    gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
    
  2. Remove the Host Service Agent User role from the GKE service account of your second service project:

    gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
    

Troubleshooting

Permission Denied

Symptom:

Failed to get metadata from network project. GCE_PERMISSION_DENIED: Google
Compute Engine: Required 'compute.projects.get' permission for
'projects/HOST_PROJECT_ID'

Possible reasons (a command to help check them follows the list):

  • The Kubernetes Engine API has not been enabled in the host project.

  • The host project's GKE service account does not exist. For example, it might have been deleted.

  • The host project's GKE service account does not have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. The binding might have been accidentally removed.

  • The service project's GKE service account does not have the Host Service Agent User role in the host project.
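
To help check which of these reasons applies, you can look for the GKE service agent and its roles in the host project's IAM policy. For example:

gcloud projects get-iam-policy HOST_PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:container-engine-robot.iam.gserviceaccount.com" \
    --format="table(bindings.role,bindings.members)"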

To fix the problem: Determine whether the host project's GKE service account exists. If it does not, do the following:

  • If the Kubernetes Engine API is not enabled in the host project, enable it. This creates the host project's GKE service account and grants the host project's GKE service account the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project.

  • If the Kubernetes Engine API is enabled in the host project, this means that either the host project's GKE service account has been deleted or it does not have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. To restore the GKE service account or the role binding, you must disable then re-enable the Kubernetes Engine API. For more information, refer to Google Kubernetes Engine Troubleshooting.

What's next