Setting up clusters with Shared VPC

This guide shows how to create two Google Kubernetes Engine (GKE) clusters, in separate projects, that use a Shared VPC. For general information about GKE networking, visit the Network overview.

Overview

With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.

You can configure Shared VPC when you create a new cluster. Google Kubernetes Engine does not support converting existing clusters to the Shared VPC model.

With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.

About the examples

The examples in this guide set up the infrastructure for a two-tier web application, as described in Shared VPC overview.

The examples in this guide use specific names and address ranges to illustrate general procedures. If you like, you can change the names and address ranges to suit your needs. Also, the exercises use the us-central1 region and the us-central1-a zone. If you like, you can change the region and zone to suit your needs.

Before you begin

Before you perform the exercises in this guide:

  • Choose one of your projects to be the host project.
  • Choose two of your projects to be service projects.

Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This guide uses the following friendly names and placeholders to refer to your projects:

Friendly name                Project ID placeholder    Project number placeholder
Your host project            host-project-id           host-project-num
Your first service project   service-project-1-id      service-project-1-num
Your second service project  service-project-2-id      service-project-2-num

Finding your project IDs and numbers

You can find your project IDs and numbers by using the gcloud tool or the Google Cloud Console.

Console

  1. Visit the Home page in the Google Cloud Console.

    Visit the Home page

  2. In the project picker, select the project that you have chosen to be the host project.

  3. Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.

  4. Do the same for each of the projects that you have chosen to be service projects.

gcloud

List your projects with the following command:

gcloud projects list

The output shows your project names, IDs, and numbers. Make a note of each ID and number for later:

PROJECT_ID        NAME        PROJECT_NUMBER
host-123          host        1027xxxxxxxx
srv-1-456         srv-1       4964xxxxxxxx
srv-2-789         srv-2       4559xxxxxxxx

Enabling the Google Kubernetes Engine API in your projects

Before you continue with the exercises in this guide, make sure that the Google Kubernetes Engine API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To perform the remaining tasks in this guide, each of your projects must have a GKE service account.

You can enable the Google Kubernetes Engine API using the Google Cloud Console or the gcloud tool.

Console

  1. Visit the APIs & Services dashboard in the Cloud Console.
    Visit the APIs Dashboard
  2. In the project picker, select the project that you have chosen to be the host project.
  3. If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.
  4. Repeat these steps for each project that you have chosen to be a service project. Each operation may take some time to complete.

gcloud

Enable the Google Kubernetes Engine API for your three projects. Each operation may take some time to complete:

gcloud services enable container.googleapis.com --project host-project-id
gcloud services enable container.googleapis.com --project service-project-1-id
gcloud services enable container.googleapis.com --project service-project-2-id
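The three commands above can also be expressed as a loop over the placeholder project IDs used throughout this guide. This sketch only prints the commands so you can review them first; remove the leading echo to run them:

```shell
# Print (not run) the enable command for each placeholder project ID.
# Remove the leading "echo" to actually enable the API in each project.
for project in host-project-id service-project-1-id service-project-2-id; do
  echo gcloud services enable container.googleapis.com --project "${project}"
done
```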

Creating a network and two subnets

In this section, you will perform the following tasks:

  1. In your host project, create a network named shared-net.
  2. Create two subnets named tier-1 and tier-2.
  3. For each subnet, create two secondary address ranges: one for Services, and one for Pods.

Console

  1. Visit the VPC networks page in the Cloud Console.
    Visit the VPC networks page
  2. In the project picker, select your host project.
  3. Click Create VPC Network.
  4. For Name, enter shared-net.
  5. Under Subnet creation mode, select Custom.
  6. In the New subnet box, for Name, enter tier-1.
  7. For Region, select us-central1.
  8. For IP address range, enter 10.0.4.0/22.
  9. Click Create secondary IP range. For Subnet range name, enter tier-1-services, and for Secondary IP range, enter 10.0.32.0/20.
  10. Click Add IP range. For Subnet range name, enter tier-1-pods, and for Secondary IP range, enter 10.4.0.0/14.
  11. Click + Add subnet.
  12. For Name, enter tier-2.
  13. For Region, select us-central1.
  14. For IP address range, enter 172.16.4.0/22.
  15. Click Create secondary IP range. For Subnet range name, enter tier-2-services, and for Secondary IP range, enter 172.16.16.0/20.
  16. Click Add IP range. For Subnet range name, enter tier-2-pods, and for Secondary IP range, enter 172.20.0.0/14.
  17. Click Create.

gcloud

In your host project, create a network named shared-net:

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project host-project-id

In your new network, create a subnet named tier-1:

gcloud compute networks subnets create tier-1 \
    --project host-project-id \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region us-central1 \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14

Create another subnet named tier-2:

gcloud compute networks subnets create tier-2 \
    --project host-project-id \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region us-central1 \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14

Determining the names of service accounts in your service projects

You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

The following table lists the names of the GKE and Google APIs service accounts in your two service projects:

Service account type  Service account name
GKE                   service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com
                      service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com
Google APIs           service-project-1-num@cloudservices.gserviceaccount.com
                      service-project-2-num@cloudservices.gserviceaccount.com
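Both names can be derived mechanically from a project number. As a plain-shell sketch (the project number below is illustrative; use the numbers you noted earlier):

```shell
# Illustrative project number; substitute one of your real project numbers.
PROJECT_NUM=123456789012

# GKE service account: "service-" prefix, container-engine-robot domain.
GKE_SA="service-${PROJECT_NUM}@container-engine-robot.iam.gserviceaccount.com"

# Google APIs service account: bare project number, cloudservices domain.
GOOGLE_APIS_SA="${PROJECT_NUM}@cloudservices.gserviceaccount.com"

echo "GKE:         ${GKE_SA}"
echo "Google APIs: ${GOOGLE_APIS_SA}"
```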

Enabling Shared VPC and granting roles

To perform the tasks in this section, ensure that your organization has defined a Shared VPC Admin role.

In this section, you will perform the following tasks:

  1. In your host project, enable Shared VPC.
  2. Attach your two service projects to the host project.
  3. Grant the appropriate IAM roles to service accounts that belong to your service projects:
    • In your first service project, grant two service accounts the Compute Network User role on the tier-1 subnet of your host project.
    • In your second service project, grant two service accounts the Compute Network User role on the tier-2 subnet of your host project.

Console

Perform the following steps to enable Shared VPC, attach service projects, and grant roles:

  1. Visit the Shared VPC page in the Cloud Console.

    Visit the Shared VPC page

  2. In the project picker, select your host project.

  3. Click Set up Shared VPC. The Enable host project screen displays.

  4. Click Save & continue. The Select subnets page displays.

  5. Under Sharing mode, select Individual subnets.

  6. Under Subnets to share, check tier-1 and tier-2. Clear all other checkboxes.

  7. Click Continue. The Give permissions page displays.

  8. Under Attach service projects, check your first service project and your second service project. Clear all the other checkboxes under Attach service projects.

  9. Under Kubernetes Engine access, check Enabled.

  10. Click Save. A new page displays.

  11. Under Individual subnet permissions, check tier-1.

  12. In the right pane, delete any service accounts that belong to your second service project. That is, delete any service accounts that contain service-project-2-num.

  13. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your first service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.

  14. In the center pane, under Individual subnet permissions, check tier-2, and uncheck tier-1.

  15. In the right pane, delete any service accounts that belong to your first service project. That is, delete any service accounts that contain service-project-1-num.

  16. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your second service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.

gcloud

  1. Enable Shared VPC in your host project:

    gcloud compute shared-vpc enable host-project-id
    
  2. Attach your first service project to your host project:

    gcloud compute shared-vpc associated-projects add service-project-1-id \
        --host-project host-project-id
    
  3. Attach your second service project to your host project:

    gcloud compute shared-vpc associated-projects add service-project-2-id \
        --host-project host-project-id
    
  4. Get the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets get-iam-policy tier-1 \
       --project host-project-id \
       --region us-central1
    

    The output contains an etag field. Make a note of the etag value.

  5. Create a file named tier-1-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:service-project-1-num@cloudservices.gserviceaccount.com
      - serviceAccount:service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: etag-string
    

    where etag-string is the etag value that you noted previously.

  6. Set the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets set-iam-policy tier-1 \
        tier-1-policy.yaml \
        --project host-project-id \
        --region us-central1
    
  7. Get the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets get-iam-policy tier-2 \
        --project host-project-id \
        --region us-central1
    

    The output contains an etag field. Make a note of the etag value.

  8. Create a file named tier-2-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:service-project-2-num@cloudservices.gserviceaccount.com
      - serviceAccount:service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: etag-string
    

    where etag-string is the etag value that you noted previously.

  9. Set the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets set-iam-policy tier-2 \
        tier-2-policy.yaml \
        --project host-project-id \
        --region us-central1
    
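Steps 4 through 6 above can also be scripted: capture the etag, generate the policy file from a template, and then set the policy. The sketch below only generates tier-1-policy.yaml; the project number and etag values are illustrative placeholders, so substitute your real project number and the etag returned by get-iam-policy:

```shell
# Illustrative values; replace with your first service project's number
# and the etag from "gcloud compute networks subnets get-iam-policy tier-1".
SERVICE_PROJECT_1_NUM=123456789012
ETAG=BwWWja0YfJA=

# Write the policy file with the two members expanded from the variables.
cat > tier-1-policy.yaml <<EOF
bindings:
- members:
  - serviceAccount:${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com
  - serviceAccount:service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ${ETAG}
EOF

cat tier-1-policy.yaml
```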

If you want the GKE cluster to create and manage the firewall resources in your host project, you can perform one of the following tasks:

  • In your host project, grant the Compute Security Admin role to the GKE service account of the service project.
  • Grant a custom role with compute.firewalls.* and compute.networks.updatePolicy permissions to the GKE service account of the service project. To learn more, see the Creating additional firewall rules section.

Summary of roles granted on subnets

Here's a summary of the roles granted on the subnets:

Service account                                                               Role                  Subnet
service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-1
service-project-1-num@cloudservices.gserviceaccount.com                       Compute Network User  tier-1
service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-2
service-project-2-num@cloudservices.gserviceaccount.com                       Compute Network User  tier-2

Kubernetes Engine access

When attaching a service project, enabling Kubernetes Engine access grants the service project's GKE service account the permissions to perform network management operations in the host project.

If a service project was attached without Kubernetes Engine access enabled, and the Kubernetes Engine API has already been enabled in both the host and service projects, you can manually assign the permissions to the service project's GKE service account by adding the following IAM role bindings in the host project:

Member                                                                      Role                     Resource
service-service-project-num@container-engine-robot.iam.gserviceaccount.com  Network User             Specific subnet or whole host project
service-service-project-num@container-engine-robot.iam.gserviceaccount.com  Host Service Agent User  GKE service account in the host project

Granting the Host Service Agent User role

Each service project's GKE service account must have a binding for the Host Service Agent User role on the host project. The GKE service account takes the following form, where service-project-num is the project number of your service project:

service-service-project-num@container-engine-robot.iam.gserviceaccount.com

This binding allows the service project's GKE service account to perform network management operations in the host project, as if it were the host project's GKE service account. This role can only be granted to a service project's GKE service account.

Console

If you have been using the Cloud Console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the Cloud Console to attach service projects to your host project.

gcloud

  1. For your first project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:

    gcloud projects add-iam-policy-binding host-project-id \
       --member serviceAccount:service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com \
       --role roles/container.hostServiceAgentUser
    
  2. For your second project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:

    gcloud projects add-iam-policy-binding host-project-id \
       --member serviceAccount:service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com \
       --role roles/container.hostServiceAgentUser
    

Verifying usable subnets and secondary IP ranges

When creating a cluster, you must specify a subnet and the secondary IP ranges to be used for the cluster's Pods and Services. There are several reasons that an IP range might not be available for use. Whether you are creating the cluster with the Cloud Console or the gcloud command-line tool, you should specify usable IP ranges.

An IP range is usable for the new cluster's services if the range isn't already in use. The IP range that you specify for the new cluster's Pods can either be an unused range, or it can be a range that's shared with Pods in your other clusters. IP ranges that are created and managed by GKE can't be used by your cluster.

You can list a project's usable subnets and secondary IP ranges by using the gcloud command-line tool.

gcloud

gcloud container subnets list-usable \
    --project service-project-id \
    --network-project host-project-id

If you omit the --project or --network-project option, the gcloud command uses the default project from your active configuration. Because the host project and network project are distinct, you must specify one or both of --project and --network-project.

The command's output resembles this:

PROJECT   REGION       NETWORK      SUBNET          RANGE
xpn-host  us-central1  empty-vpc    empty-subnet    10.0.0.0/21
xpn-host  us-east1     some-vpc     some-subnet     10.0.0.0/19
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ pods-range           │ 10.2.0.0/21   │ usable for pods or services │
    │ svc-range            │ 10.1.0.0/21   │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘
xpn-host  us-central1  shared-net   tier-2          172.16.4.0/22
    ┌──────────────────────┬────────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE  │            STATUS           │
    ├──────────────────────┼────────────────┼─────────────────────────────┤
    │ tier-2-services      │ 172.16.16.0/20 │ usable for pods or services │
    │ tier-2-pods          │ 172.20.0.0/14  │ usable for pods or services │
    └──────────────────────┴────────────────┴─────────────────────────────┘
xpn-host  us-central1  shared-net   tier-1          10.0.4.0/22
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ tier-1-services      │ 10.0.32.0/20  │ unusable                    │
    │ tier-1-pods          │ 10.4.0.0/14   │ usable for pods             │
    │ tier-1-extra         │ 10.8.0.0/14   │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘

The list-usable command returns an empty list in the following situations:

  • When the service project's Kubernetes Engine service account does not have the Host Service Agent User role on the host project.
  • When the Kubernetes Engine service account in the host project does not exist (for example, if you've deleted that account accidentally).
  • When the Kubernetes Engine API is not enabled in the host project, which implies that the Kubernetes Engine service account in the host project is missing.

For more information, see the troubleshooting section.

Notes about secondary ranges

You can create up to 30 secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for Pods and one for Services.
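For a sense of scale, an IPv4 range's capacity is 2^(32 - prefix length). A quick arithmetic sketch using the example ranges from this guide:

```shell
# tier-1-pods is 10.4.0.0/14 and tier-1-services is 10.0.32.0/20 in this guide.
pods_prefix=14
services_prefix=20

# Capacity of an IPv4 range is 2^(32 - prefix); computed here with a shift.
echo "Pod IPs in a /14:     $(( 1 << (32 - pods_prefix) ))"
echo "Service IPs in a /20: $(( 1 << (32 - services_prefix) ))"
```

A /14 works out to 262,144 addresses and a /20 to 4,096, which is why the Pod range is the larger of the two.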

Creating a cluster in your first service project

To create a cluster in your first service project, perform the following steps using the gcloud tool or the Google Cloud Console.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Click the Create cluster button.

  4. For cluster Name, enter tier-1-cluster.

  5. For Location type, select Zonal.

  6. In the Zone drop-down list, select us-central1-a.

  7. From the navigation pane, under Cluster, click Networking.

  8. Select Enable VPC-native traffic routing (uses alias IP).

  9. Deselect Automatically create secondary ranges.

  10. Select Networks shared with me (from host project: ...).

  11. For Network, select shared-net.

  12. For Node subnet, select tier-1.

  13. For Pod secondary CIDR range, select tier-1-pods.

  14. For Services secondary CIDR range, select tier-1-services.

  15. Click Create.

  16. When the creation is complete, in the list of clusters, click tier-1-cluster.

  17. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.

  18. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud

Create a cluster named tier-1-cluster in your first service project:

gcloud container clusters create tier-1-cluster \
    --project service-project-1-id \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network projects/host-project-id/global/networks/shared-net \
    --subnetwork projects/host-project-id/regions/us-central1/subnetworks/tier-1 \
    --cluster-secondary-range-name tier-1-pods \
    --services-secondary-range-name tier-1-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud compute instances list --project service-project-1-id

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.2
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.3
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.4
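As an optional sanity check, you can verify range membership without any cloud tooling. This plain-shell sketch tests the example node IP 10.0.4.2 against the tier-1 primary range 10.0.4.0/22:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip=$(ip_to_int 10.0.4.2)    # example node IP from the output above
net=$(ip_to_int 10.0.4.0)   # network address of the tier-1 subnet
prefix=22

# Build the netmask and compare the masked address to the masked network.
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
if [ $(( ip & mask )) -eq $(( net & mask )) ]; then
  echo "10.0.4.2 is in 10.0.4.0/22"
else
  echo "10.0.4.2 is NOT in 10.0.4.0/22"
fi
```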

Creating a cluster in your second service project

To create a cluster in your second service project, perform the following steps using the gcloud tool or the Google Cloud Console.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. In the project picker, select your second service project.

  3. Click the Create cluster button.

  4. For cluster Name, enter tier-2-cluster.

  5. For Location type, select Zonal.

  6. In the Zone drop-down list, select us-central1-a.

  7. From the navigation pane, under Cluster, click Networking.

  8. Select Enable VPC-native traffic routing (uses alias IP).

  9. Deselect Automatically create secondary ranges.

  10. Select Networks shared with me (from host project: ...).

  11. For Network, select shared-net.

  12. For Node subnet, select tier-2.

  13. For Pod secondary CIDR range, select tier-2-pods.

  14. For Services secondary CIDR range, select tier-2-services.

  15. Click Create.

  16. When the creation is complete, in the list of clusters, click tier-2-cluster.

  17. Under Node Pools, click the name of your instance group. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.

  18. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud

Create a cluster named tier-2-cluster in your second service project:

gcloud container clusters create tier-2-cluster \
    --project service-project-2-id \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network projects/host-project-id/global/networks/shared-net \
    --subnetwork projects/host-project-id/regions/us-central1/subnetworks/tier-2 \
    --cluster-secondary-range-name tier-2-pods \
    --services-secondary-range-name tier-2-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud compute instances list --project service-project-2-id

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.2
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.3
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.4

Creating firewall rules

To allow traffic into the network and between the clusters within the network, you need to create firewall rules. The following sections demonstrate how to create and update firewall rules:

Creating a firewall rule to enable SSH connection to a node

In your host project, create a firewall rule for the shared-net network. Allow traffic to enter on TCP port 22, which permits you to connect to your cluster nodes using SSH.

Console

  1. Visit the Firewall rules page in the Cloud Console.

    Visit the Firewall rules page

  2. In the project picker, select your host project.

  3. From the VPC Networking menu, click Create Firewall Rule.

  4. For Name, enter my-shared-net-rule.

  5. For Network, select shared-net.

  6. For Direction of traffic, select Ingress.

  7. For Action on match, select Allow.

  8. For Targets, select All instances in the network.

  9. For Source filter, select IP ranges.

  10. For Source IP ranges, enter 0.0.0.0/0.

  11. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.

  12. Click Create.

gcloud

Create a firewall rule for your shared network:

gcloud compute firewall-rules create my-shared-net-rule \
    --project host-project-id \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22

Connecting to a node using SSH

After creating the firewall that allows ingress traffic on TCP port 22, connect to the node using SSH.

Console

  1. Visit the Google Kubernetes Engine menu in the Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Click tier-1-cluster.

  4. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.

  5. In the list of nodes, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.

  6. For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.

gcloud

List the nodes in your first service project:

gcloud compute instances list --project service-project-1-id

The output includes the names of the nodes in your cluster:

NAME                                           ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8  ...
gke-tier-1-cluster-default-pool-faf87d48-q17k  ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk  ...

SSH into one of your nodes:

gcloud compute ssh node-name \
    --project service-project-1-id \
    --zone us-central1-a

where node-name is the name of one of your nodes.

Updating the firewall rule to ping between nodes

  1. In your SSH command-line window, start the CoreOS Toolbox:

    /usr/bin/toolbox
    
  2. In the toolbox shell, ping one of your other nodes in the same cluster. For example:

    ping 10.0.4.4
    

    The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

  3. Now, try to ping one of the nodes in the cluster in your other service project. For example:

    ping 172.16.4.3
    

    This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

  4. At an ordinary command prompt (not your toolbox shell), update your firewall rule to allow ICMP:

    gcloud compute firewall-rules update my-shared-net-rule \
       --project host-project-id \
       --allow tcp:22,icmp
    
  5. In your toolbox shell, ping the node again. For example:

    ping 172.16.4.3
    

    This time the ping command succeeds.

Creating additional firewall rules

You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters.

For example, the following rule allows traffic to enter from any node, Pod, or Service in tier-1-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project host-project-id \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

The following rule allows traffic to enter from any node, Pod, or Service in tier-2-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project host-project-id \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20

Kubernetes will also try to create and manage firewall resources when necessary, for example when you create a load balancer service. If Kubernetes finds itself unable to change the firewall rules due to a permission issue, a Kubernetes Event will be raised to guide you on how to make the changes.

If you want to grant Kubernetes permission to change the firewall rules, you can perform one of the following:

  • In your host project, grant the Compute Security Admin role to the GKE service account of the service project.
  • Grant a custom role with compute.firewalls.* and compute.networks.updatePolicy permissions to the GKE service account of the service project.

For Ingress load balancers, if Kubernetes can't change the firewall rules because of insufficient permissions, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and later, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can remove this annotation at any time to unmute.
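As a sketch, the annotation sits under metadata.annotations of the Ingress manifest. The resource and Service names below are illustrative, and the apiVersion shown is the current one; use the version that matches your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                  # illustrative name
  annotations:
    # Mutes the recurring firewallXPNError event (GLBC 1.4 and later).
    networking.gke.io/suppress-firewall-xpn-error: "true"
spec:
  defaultBackend:
    service:
      name: my-service              # illustrative backend Service
      port:
        number: 80
```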

Creating a private cluster in a Shared VPC

You can use Shared VPC with private clusters. There is no special setup required. However, you must ensure that the control plane (master) CIDR range does not overlap other reserved ranges in the shared network.

For private clusters created prior to January 15, 2020, the maximum number of private GKE clusters you can have per VPC network is limited to the number of peering connections from a single VPC network. New private clusters reuse VPC Network Peering connections which removes this limitation. To enable VPC Network Peering reuse on older private clusters, you can delete a cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.

In this section, you create a VPC-native cluster named private-cluster-vpc in a predefined shared VPC network.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. For cluster Name, enter private-cluster-vpc.

  4. From the navigation pane, under Cluster, click Networking.

  5. Click Private cluster.

  6. Ensure the Access master using its external IP address checkbox is selected.

  7. Set Master IP range to 172.16.0.16/28.

  8. In the Network drop-down list, select the VPC network you created previously.

  9. In the Node subnet drop-down list, select the shared subnet you created previously.

  10. Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.

  11. Configure your cluster as desired.

  12. Click Create.

gcloud

Run the following command to create a cluster named private-cluster-vpc in a predefined Shared VPC:

gcloud container clusters create private-cluster-vpc \
    --project project-id \
    --enable-ip-alias \
    --network projects/host-project/global/networks/shared-net \
    --subnetwork shared-subnetwork \
    --cluster-secondary-range-name c0-pods \
    --services-secondary-range-name c0-services \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28

Reserving IP addresses

You can reserve internal and external IP addresses for your Shared VPC clusters. Ensure that the IP addresses are reserved in the service project.

For internal IP addresses, you need to provide the subnetwork where the IP address belongs. To reserve an IP address across projects, use the full resource URL to identify the subnetwork.

You can use the following command in the gcloud command-line tool to reserve an internal IP address:

gcloud compute addresses create reserved-ip-name \
    --region=compute-region \
    --subnet=projects/host-project-id/regions/compute-region/subnetworks/subnetwork-name \
    --addresses=ip-address \
    --project=service-project-id

To call this command, you must have the compute.subnetworks.use permission on the subnetwork. You can either grant the caller the compute.networkUser role on the subnetwork, or grant the caller a custom role that includes the compute.subnetworks.use permission at the project level.
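For example, granting the Network User role on the subnetwork alone could look like the following sketch. The placeholder names follow the convention used above, and the member shown is hypothetical:

```shell
# Grant the caller the compute.networkUser role on the shared
# subnetwork only, rather than on the whole host project.
gcloud compute networks subnets add-iam-policy-binding subnetwork-name \
    --region compute-region \
    --project host-project-id \
    --member user:alice@example.com \
    --role roles/compute.networkUser
```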

Cleaning up

After completing the exercises in this guide, perform the following tasks to remove the resources and prevent incurring unwanted charges on your account:

Deleting the clusters

Delete the two clusters you created.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Select the tier-1-cluster, and click Delete.

  4. In the project picker, select your second service project.

  5. Select the tier-2-cluster, and click Delete.

gcloud

gcloud container clusters delete tier-1-cluster \
    --project service-project-1-id \
    --zone us-central1-a

gcloud container clusters delete tier-2-cluster \
    --project service-project-2-id \
    --zone us-central1-a

Disabling Shared VPC

Disable Shared VPC in your host project.

Console

  1. Visit the Shared VPC page in the Cloud Console.

    Visit the Shared VPC page

  2. In the project picker, select your host project.

  3. Click Disable Shared VPC.

  4. Enter the host-project-id in the text box, and click Disable.

gcloud

gcloud compute shared-vpc associated-projects remove service-project-1-id \
    --host-project host-project-id

gcloud compute shared-vpc associated-projects remove service-project-2-id \
    --host-project host-project-id

gcloud compute shared-vpc disable host-project-id

Deleting your firewall rules

Remove the firewall rules you created.

Console

  1. Visit the Firewall rules page in the Cloud Console.

    Visit the Firewall rules page

  2. In the project picker, select your host project.

  3. In the list of rules, select my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.

  4. Click Delete.

gcloud

Delete your firewall rules:

gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project host-project-id

Deleting the shared network

Delete the shared network you created.

Console

  1. Visit the VPC networks page in the Cloud Console.

    Visit the VPC networks page

  2. In the project picker, select your host project.

  3. In the list of networks, select shared-net.

  4. Click Delete VPC Network.

gcloud

gcloud compute networks subnets delete tier-1 \
    --project host-project-id \
    --region us-central1

gcloud compute networks subnets delete tier-2 \
    --project host-project-id \
    --region us-central1

gcloud compute networks delete shared-net --project host-project-id

Removing the Host Service Agent User role

Remove the Host Service Agent User roles from your two service projects.

Console

  1. Visit the IAM page in the Cloud Console.

    Visit the IAM page

  2. In the project picker, select your host project.

  3. In the list of members, select the row that shows service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  4. Select the row that shows service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  5. Click Remove.

gcloud

  1. Remove the Host Service Agent User role from the GKE service account of your first service project:

    gcloud projects remove-iam-policy-binding host-project-id \
       --member serviceAccount:service-service-project-1-num@container-engine-robot.iam.gserviceaccount.com \
       --role roles/container.hostServiceAgentUser
    
  2. Remove the Host Service Agent User role from the GKE service account of your second service project:

    gcloud projects remove-iam-policy-binding host-project-id \
       --member serviceAccount:service-service-project-2-num@container-engine-robot.iam.gserviceaccount.com \
       --role roles/container.hostServiceAgentUser
    

Known issues

Firewall events for load balancer creation

If the Kubernetes service account was not granted firewall management permissions, the Kubernetes service controller might not create events for the required firewall changes. This is due to an RBAC permissions issue in Kubernetes versions 1.9 and earlier: the service controller lacks the permission to create events.

To fix this problem, apply these YAML files, which contain RBAC policies that allow the events to be created.

Clusters based on Kubernetes 1.10 and later already have these RBAC policies applied.
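As a rough illustration only (the actual fix is in the linked YAML files, and the names and subject below are assumptions, not the real file contents), an RBAC policy that permits event creation has this general shape:

```yaml
# Hypothetical sketch: grants permission to create Events, which the
# service controller needs in order to report firewall changes.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-creator            # hypothetical name
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-creator-binding    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-creator
subjects:
- kind: ServiceAccount           # assumption: the controller's identity
  name: service-controller
  namespace: kube-system
```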

Troubleshooting

Permission Denied

Symptom:

Failed to get metadata from network project. GCE_PERMISSION_DENIED: Google
Compute Engine: Required 'compute.projects.get' permission for
'projects/host-project-id'

Possible reasons:

  • The Kubernetes Engine API has not been enabled in the host project.

  • The service project's Google Kubernetes Engine service account does not have the Host Service Agent User role in the host project.

  • The host project's Google Kubernetes Engine service account does not exist. For example, it might have been deleted.

To fix the problem, determine whether the host project's Google Kubernetes Engine service account exists. If it does not, do the following:

  • If the Kubernetes Engine API is not enabled in the host project, enable it. This creates the host project's Google Kubernetes Engine service account.

  • If the Kubernetes Engine API is enabled in the host project, this means that the Google Kubernetes Engine service account in the host project has been deleted. To restore the Google Kubernetes Engine service account in the host project, you must disable then re-enable the API. For more information, refer to Google Kubernetes Engine Troubleshooting.
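As a sketch, the API can be disabled and re-enabled with the gcloud services commands below. Be aware that disabling the Kubernetes Engine API can affect existing clusters in the project, so review the linked troubleshooting page before running this:

```shell
# Disable and then re-enable the Kubernetes Engine API in the host
# project to recreate its GKE service account.
gcloud services disable container.googleapis.com --project host-project-id
gcloud services enable container.googleapis.com --project host-project-id
```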

What's next