Setting up Clusters with Shared VPC

This page shows how to create two Google Kubernetes Engine clusters, in separate projects, that use a shared virtual private cloud (VPC). For general information about GKE networking, see the Network Overview.

Overview

With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.

You can configure Shared VPC when you create a new cluster. Google Kubernetes Engine does not support converting existing clusters to the Shared VPC model.

With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.

The examples in this topic set up the infrastructure for a two-tier web application, as described in Shared VPC Overview.

The examples on this page use specific names and address ranges to illustrate general procedures; you can change the names and address ranges to suit your needs. The exercises use the us-central1 region and the us-central1-a zone; you can change the region and zone as well.

Before you begin

To do this task, you need a Cloud Platform organization, and within your organization, you need three Cloud Platform projects.

In your organization, you must be granted the Compute Shared VPC Admin role.

Before you do the exercises in this topic, choose one of your projects to be the host project, and two of your projects to be service projects. Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This topic uses the following friendly names to refer to your projects:

  • Your host project
  • Your first service project
  • Your second service project

This topic uses the following placeholders to refer to your project IDs and numbers.

  • [HOST_PROJECT_ID] is the project ID of your host project.
  • [HOST_PROJECT_NUM] is the project number of your host project.
  • [SERVICE_PROJECT_1_ID] is the project ID of your first service project.
  • [SERVICE_PROJECT_1_NUM] is the project number of your first service project.
  • [SERVICE_PROJECT_2_ID] is the project ID of your second service project.
  • [SERVICE_PROJECT_2_NUM] is the project number of your second service project.

Finding your project IDs and numbers

gcloud

List your projects:

gcloud projects list

The output shows your project names, IDs and numbers. Make a note of the ID and number for later:

PROJECT_ID        NAME        PROJECT_NUMBER
host-123          host        1027xxxxxxxx
srv-1-456         srv-1       4964xxxxxxxx
srv-2-789         srv-2       4559xxxxxxxx

Console

  1. Visit the Home page in the Google Cloud Platform Console.
  2. In the project picker, select the project that you have chosen to be the host project.
  3. Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.
  4. Do the same for each of your projects that you have chosen to be service projects.

Enabling the Google Kubernetes Engine API in your projects

Before you continue with the exercises in this topic, make sure that the Google Kubernetes Engine API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To do the remaining steps in this topic, each of your projects must have a GKE service account.

gcloud

Enable the Google Kubernetes Engine API for your three projects. Each operation may take some time to complete:

gcloud services enable container.googleapis.com --project [HOST_PROJECT_ID]
gcloud services enable container.googleapis.com --project [SERVICE_PROJECT_1_ID]
gcloud services enable container.googleapis.com --project [SERVICE_PROJECT_2_ID]
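
If you aren't sure whether the API is already enabled in a project, one way to check (a convenience sketch, not a required step) is to list the project's enabled services and look for container.googleapis.com:

gcloud services list --enabled --project [HOST_PROJECT_ID] | grep container.googleapis.com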

Console

  1. Visit the APIs & Services dashboard in the GCP Console.
  2. In the project picker, select the project that you have chosen to be the host project.
  3. If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.
  4. Repeat these steps for each project that you have chosen to be a service project.

Creating a network and two subnets

In your host project, create a network named shared-net. Then create two subnets: one named tier-1 and one named tier-2. For each subnet, create two secondary address ranges: one for services and one for pods.

gcloud

In your host project, create a network named shared-net:

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project [HOST_PROJECT_ID]

In your new network, create a subnet named tier-1:

gcloud compute networks subnets create tier-1 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region us-central1 \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14

Create another subnet named tier-2:

gcloud compute networks subnets create tier-2 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region us-central1 \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14

Console

  1. Visit the VPC networks page in the GCP Console.
  2. In the project picker, select your host project.
  3. Click Create VPC Network.
  4. For Name, enter shared-net.
  5. Under Subnet creation mode, select Custom.
  6. In the New subnet box, for Name, enter tier-1.
  7. For Region, select us-central1.
  8. For IP address range, enter 10.0.4.0/22.
  9. Click Create secondary IP range. For Subnet range name, enter tier-1-services, and for Secondary IP range, enter 10.0.32.0/20.
  10. Click Add IP range. For Subnet range name, enter tier-1-pods, and for Secondary IP range, enter 10.4.0.0/14.
  11. Click + Add subnet.
  12. For Name, enter tier-2.
  13. For Region, select us-central1.
  14. For IP address range, enter 172.16.4.0/22.
  15. Click Create secondary IP range. For Subnet range name, enter tier-2-services, and for Secondary IP range, enter 172.16.16.0/20.
  16. Click Add IP range. For Subnet range name, enter tier-2-pods, and for Secondary IP range, enter 172.20.0.0/14.
  17. Click Create.

Determining the names of service accounts in your service projects

You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

These are the names of the GKE service accounts in your two service projects:

  • service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com
  • service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com

These are the names of the Google APIs service accounts in your two service projects:

  • [SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com
  • [SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com
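
If you prefer not to construct these names by hand, the following sketch (a convenience, not a required step) looks up the project number of your first service project with gcloud projects describe and prints both service account names:

SERVICE_PROJECT_1_NUM=$(gcloud projects describe [SERVICE_PROJECT_1_ID] \
    --format 'value(projectNumber)')
echo "service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com"
echo "${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com"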

Enabling shared VPC and granting roles

In your host project, enable Shared VPC, and attach your two service projects to the host project. Then grant the appropriate roles to service accounts that belong to your service projects.

gcloud

Enable Shared VPC in your host project:

gcloud compute shared-vpc enable [HOST_PROJECT_ID]

Attach your first service project to your host project:

gcloud compute shared-vpc associated-projects add \
    [SERVICE_PROJECT_1_ID] \
    --host-project [HOST_PROJECT_ID]

Attach your second service project to your host project:

gcloud compute shared-vpc associated-projects add \
    [SERVICE_PROJECT_2_ID] \
    --host-project [HOST_PROJECT_ID]

Get the IAM policy for the tier-1 subnet:

gcloud beta compute networks subnets get-iam-policy tier-1 \
   --project [HOST_PROJECT_ID] \
   --region us-central1

The output contains an etag field. Make a note of the etag value.

Create a file named tier-1-policy.yaml that has this content:

bindings:
- members:
  - serviceAccount:[SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com
  - serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: [ETAG_STRING]

where [ETAG_STRING] is the etag value that you noted previously.

Set the IAM policy for the tier-1 subnet:

gcloud beta compute networks subnets set-iam-policy tier-1 \
    tier-1-policy.yaml \
    --project [HOST_PROJECT_ID] \
    --region us-central1

Next get the IAM policy for the tier-2 subnet:

gcloud beta compute networks subnets get-iam-policy tier-2 \
   --project [HOST_PROJECT_ID] \
   --region us-central1

The output contains an etag field. Make a note of the etag value.

Create a file named tier-2-policy.yaml that has this content:

bindings:
- members:
  - serviceAccount:[SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com
  - serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: [ETAG_STRING]

where [ETAG_STRING] is the etag value that you noted previously.

Set the IAM policy for the tier-2 subnet:

gcloud beta compute networks subnets set-iam-policy tier-2 \
    tier-2-policy.yaml \
    --project [HOST_PROJECT_ID] \
    --region us-central1

Console

Follow these steps to enable Shared VPC, attach service projects, and grant roles:

  1. Visit the Shared VPC page in the GCP Console.
  2. In the project picker, select your host project.
  3. Click Set up Shared VPC.
    The Enable host project screen is displayed.
  4. Click Save & continue.
    The Select subnets page is displayed.
  5. Under Sharing mode, select Individual subnets.
  6. Under Subnets to share, check tier-1 and tier-2. Clear all other checkboxes.
  7. Click Continue.
    The Give permissions page is displayed.
  8. Under Attach service projects, check your first service project and your second service project. Clear all the other checkboxes under Attach service projects.
  9. Under Kubernetes Engine access, check Enabled.
  10. Click Save.
    A new page is displayed.
  11. Under Individual subnet permissions, check tier-1.
  12. In the right pane, delete any service accounts that belong to your second service project. That is, delete any service accounts that contain [SERVICE_PROJECT_2_NUM].
  13. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your first service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.
  14. In the center pane, under Individual subnet permissions, check tier-2, and uncheck tier-1.
  15. In the right pane, delete any service accounts that belong to your first service project. That is, delete any service accounts that contain [SERVICE_PROJECT_1_NUM].
  16. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your second service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.

The purpose of the preceding steps is to grant the appropriate Cloud IAM role to four service accounts. Two service accounts in your first service project are granted the Compute Network User role on the tier-1 subnet of your host project. And two service accounts in your second service project are granted the Compute Network User role on the tier-2 subnet of your host project.

Summary of roles granted on subnets

  • The service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com service account is granted the Compute Network User role on the tier-1 subnet.

  • The [SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com service account is granted the Compute Network User role on the tier-1 subnet.

  • The service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com service account is granted the Compute Network User role on the tier-2 subnet.

  • The [SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com service account is granted the Compute Network User role on the tier-2 subnet.

Granting the Host Service Agent User role

For each service project, you must grant the Host Service Agent User role on the host project to the service project's GKE service account. This role allows the service project's GKE service account to use the host project's GKE service account to configure shared network resources.

The Host Service Agent User role can only be granted to a service project's GKE service account. It cannot be granted to users.

gcloud

Grant the Host Service Agent User role to the GKE Service Account of your first service project. This role is granted on your host project:

gcloud projects add-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Grant the Host Service Agent User role to the GKE Service Account of your second service project. This role is granted on your host project:

gcloud projects add-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Console

If you have been using the GCP Console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the GCP Console to attach service projects to your host project.
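
If you want to confirm that the binding exists, regardless of which method you used, one way (a sketch that relies on standard gcloud output filtering) is to read the host project's IAM policy and filter for the role:

gcloud projects get-iam-policy [HOST_PROJECT_ID] \
    --flatten 'bindings[].members' \
    --filter 'bindings.role:roles/container.hostServiceAgentUser' \
    --format 'value(bindings.members)'

The output should list the GKE service accounts of both service projects.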

Verifying usable subnets and secondary IP ranges

When creating a cluster, you must specify a subnet and the secondary IP ranges to be used for the cluster's pods and services. There are several reasons that an IP range might not be available for use. Whether you are creating the cluster with the GCP Console or the gcloud command-line tool, you should specify usable IP ranges.

You can also list a project's usable subnets and secondary IP ranges from the command line:

gcloud

gcloud beta container subnets list-usable \
    --project [SERVICE_PROJECT_ID] \
    --network-project [HOST_PROJECT_ID]

If you omit the --project or --network-project option, the gcloud command uses the default project from your active configuration. Because your service project and host project are distinct, you must specify one or both of --project and --network-project.

The command's output resembles this:

PROJECT   REGION       NETWORK      SUBNET          RANGE
xpn-host  us-central1  empty-vpc    empty-subnet    10.0.0.0/21
xpn-host  us-east1     some-vpc     some-subnet     10.0.0.0/19
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ pods-range           │ 10.2.0.0/21   │ usable for pods or services │
    │ svc-range            │ 10.1.0.0/21   │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘
xpn-host  us-central1  shared-net   tier-2          172.16.4.0/22
    ┌──────────────────────┬────────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE  │            STATUS           │
    ├──────────────────────┼────────────────┼─────────────────────────────┤
    │ tier-2-services      │ 172.16.16.0/20 │ usable for pods or services │
    │ tier-2-pods          │ 172.20.0.0/14  │ usable for pods or services │
    └──────────────────────┴────────────────┴─────────────────────────────┘
xpn-host  us-central1  shared-net   tier-1          10.0.4.0/22
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ tier-1-services      │ 10.0.32.0/20  │ unusable                    │
    │ tier-1-pods          │ 10.4.0.0/14   │ usable for pods             │
    │ tier-1-extra         │ 10.8.0.0/14   │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘

An IP range is usable for the new cluster's services if the range isn't already in use. The IP range that you specify for the new cluster's Pods can either be an unused range, or it can be a range that's shared with Pods in your other clusters. IP ranges that are created and managed by GKE can't be used by your cluster.

Creating a cluster in your first service project

gcloud

Create a cluster in your first service project:

gcloud container clusters create tier-1-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork projects/[HOST_PROJECT_ID]/regions/us-central1/subnetworks/tier-1 \
    --cluster-secondary-range-name tier-1-pods \
    --services-secondary-range-name tier-1-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud compute instances list --project [SERVICE_PROJECT_1_ID]

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.2
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.3
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.4

Console

  1. Visit the Google Kubernetes Engine menu in the GCP Console.


  2. In the project picker, select your first service project.

  3. Click Create cluster.

  4. For Cluster name, enter tier-1-cluster.

  5. For Location type, select Zonal.

  6. For Zone, select us-central1-a.

  7. At the bottom of the page, click Advanced Options.

  8. In the VPC-native section, select Enable VPC-native (using alias IP).

  9. Deselect Automatically create secondary ranges.

  10. Select Networks shared with me (from host project: ...).

  11. For Node subnet, select tier-1.

  12. For Pod secondary CIDR range, select tier-1-pods.

  13. For Services secondary CIDR range, select tier-1-services.

  14. Click Create.

  15. When the creation is complete, in the list of clusters, click tier-1-cluster.

  16. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.

  17. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

Creating a cluster in your second service project

gcloud

Create a cluster in your second service project:

gcloud container clusters create tier-2-cluster \
    --project [SERVICE_PROJECT_2_ID] \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork projects/[HOST_PROJECT_ID]/regions/us-central1/subnetworks/tier-2 \
    --cluster-secondary-range-name tier-2-pods \
    --services-secondary-range-name tier-2-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud compute instances list --project [SERVICE_PROJECT_2_ID]

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.2
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.3
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.4

Console

  1. Visit the Google Kubernetes Engine menu in the GCP Console.


  2. In the project picker, select your second service project.

  3. Click Create cluster.

  4. For Cluster name, enter tier-2-cluster.

  5. For Location type, select Zonal.

  6. For Zone, select us-central1-a.

  7. At the bottom of the page, click Advanced Options.

  8. In the VPC-native section, select Enable VPC-native (using alias IP).

  9. Deselect Automatically create secondary ranges.

  10. Select Networks shared with me (from host project: ...).

  11. For Node subnet, select tier-2.

  12. For Pod secondary CIDR range, select tier-2-pods.

  13. For Services secondary CIDR range, select tier-2-services.

  14. Click Create.

  15. When the creation is complete, in the list of clusters, click tier-2-cluster.

  16. Under Node Pools, click the name of your instance group. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.

  17. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

Creating firewall rules

Create a firewall rule for the shared-net network in your host project. Allow traffic to enter on TCP port 22. This allows you to connect to your cluster nodes using SSH.

gcloud

Create a firewall rule for your shared network:

gcloud compute firewall-rules create my-shared-net-rule \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22

Console

  1. Visit the Firewall rules page in the GCP Console.


  2. In the project picker, select your host project.

  3. From the VPC Networking menu, click Create Firewall Rule.

  4. For Name, enter my-shared-net-rule.

  5. For Network, select shared-net.

  6. For Direction of traffic, select Ingress.

  7. For Action on match, select Allow.

  8. For Targets, select All instances in the network.

  9. For Source filter, select IP ranges.

  10. For Source IP ranges, enter 0.0.0.0/0.

  11. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.

  12. Click Create.

Connecting to a node using SSH

gcloud

List the nodes in your first service project:

gcloud compute instances list --project [SERVICE_PROJECT_1_ID]

The output includes the names of the nodes in your cluster:

NAME                                           ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8  ...
gke-tier-1-cluster-default-pool-faf87d48-q17k  ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk  ...

SSH into one of your nodes:

gcloud compute ssh [NODE_NAME] \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a

where [NODE_NAME] is the name of one of your nodes.

Console

  1. Visit the Google Kubernetes Engine menu in the GCP Console.


  2. In the project picker, select your first service project.

  3. Click tier-1-cluster.

  4. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.

  5. In the list of nodes, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.

  6. For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.

Pinging between nodes

In your SSH command-line window, start the CoreOS Toolbox:

/usr/bin/toolbox

In the toolbox shell, ping one of your other nodes in the same cluster. For example:

ping 10.0.4.4

The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

Now try to ping one of the nodes in the cluster in your other service project. For example:

ping 172.16.4.3

This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

At an ordinary command prompt (not your toolbox shell), update your firewall rule to allow ICMP:

gcloud compute firewall-rules update my-shared-net-rule \
    --project [HOST_PROJECT_ID] \
    --allow tcp:22,icmp

In your toolbox shell, ping the node again. For example:

ping 172.16.4.3

This time the ping command succeeds.

Creating additional firewall rules

You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters. For example, this rule allows traffic to enter from any node, pod, or service in tier-1-cluster on any TCP or UDP port.

gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

This rule allows traffic to enter from any node, pod, or service in tier-2-cluster on any TCP or UDP port.

gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20

Kubernetes also tries to create and manage firewall resources when necessary, for example, when you create a load balancer Service. If Kubernetes cannot change the firewall rules because of a permission issue, a Kubernetes Event is raised to guide you through making the changes yourself.

If you want to grant Kubernetes permission to change the firewall rules, go to your host project and grant the Compute Security Admin role, or a custom role with the compute.firewalls.* permissions, to the GKE service account of the service project.
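
For example, here is a sketch of such a grant for your first service project (roles/compute.securityAdmin is the Compute Security Admin role; adjust the member to the service project whose clusters need to manage firewall rules):

gcloud projects add-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/compute.securityAdmin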

For Ingress load balancers, if Kubernetes can't change the firewall rules because of insufficient permissions, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and higher, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can remove this annotation at any time to unmute.
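
For example, here is a sketch that applies the annotation to a hypothetical Ingress named my-ingress using kubectl:

kubectl annotate ingress my-ingress \
    networking.gke.io/suppress-firewall-xpn-error=true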

Creating a private cluster in a shared VPC

You can use Shared VPC with private clusters. No special setup is required, but you must ensure that the master CIDR range does not overlap with other reserved ranges in the shared network.

The number of private clusters in a shared VPC is limited to 25.

In this section, you create a VPC-native cluster, private-cluster-vpc in a predefined shared VPC network.

gcloud

The following command creates a cluster, private-cluster-vpc, in a predefined shared VPC:

gcloud container clusters create private-cluster-vpc \
    --project [PROJECT_ID] \
    --enable-ip-alias \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork [SHARED_SUBNETWORK] \
    --cluster-secondary-range-name c0-pods \
    --services-secondary-range-name c0-services \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28

Console

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. Click Create cluster.

  3. For Cluster name, enter private-cluster-vpc.

  4. Click Advanced options at the bottom of the menu.

  5. From VPC-native, select the Enable VPC-native (using alias IP) checkbox.

  6. From the Network drop-down menu, select the VPC network you created previously.

  7. From the Node subnet drop-down menu, select the shared subnet you created previously.

  8. From Network security, select the Private cluster checkbox.

  9. Verify that the Access master using its external IP address checkbox is selected.

  10. Set Master IP range to 172.16.0.16/28.

  11. Configure your cluster as desired. Then, click Create.

Notes about secondary ranges

You can create five secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for pods and one for services. This means that you can create only two clusters that use a given subnet.

Note: The primary range and the pod secondary range can be shared between clusters, but this is not a recommended configuration.
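
If you later need secondary ranges for an additional cluster, you can add them to an existing subnet. The following sketch adds two hypothetical ranges (the names and CIDRs are examples only; choose ranges that don't overlap anything else in the network) to the tier-1 subnet:

gcloud compute networks subnets update tier-1 \
    --project [HOST_PROJECT_ID] \
    --region us-central1 \
    --add-secondary-ranges tier-1-extra-pods=10.8.0.0/14,tier-1-extra-services=10.0.48.0/20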

Known issues

Secondary CIDR ranges might be in use by other clusters

When creating a shared VPC cluster with the Google Cloud Platform Console, the suggested secondary ranges for Pods and Services might already be in use by other clusters. If a cluster is created with these CIDR ranges, the cluster creation operation fails, and the cluster is left in an error state.

If you encounter this issue, delete the cluster in the error state. Before creating the cluster again, verify that the secondary ranges are usable by the new cluster.

This issue will be fixed in future releases.

Firewall events for load balancer creation

If the Kubernetes Service Account was not granted firewall management permissions, the Kubernetes service controller might not create events for the required firewall changes. This is due to an RBAC permissions issue in Kubernetes versions 1.9 and earlier: the service controller lacks the ability to raise events.

To fix this problem, apply these YAML files, which contain RBAC policies that allow the events to be created.

Clusters based on Kubernetes 1.10 and later already have these RBAC policies applied.

Cleaning up

After completing the exercises on this page, follow these steps to remove the resources so that you don't incur unwanted charges on your account:

Deleting the clusters

gcloud

gcloud container clusters delete tier-1-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a

gcloud container clusters delete tier-2-cluster \
    --project [SERVICE_PROJECT_2_ID] \
    --zone us-central1-a

Console

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. In the project picker, select your first service project.

  3. Check tier-1-cluster, and click Delete.

  4. In the project picker, select your second service project.

  5. Check tier-2-cluster, and click Delete.

Disabling Shared VPC

gcloud

gcloud compute shared-vpc associated-projects remove [SERVICE_PROJECT_1_ID] \
    --host-project [HOST_PROJECT_ID]

gcloud compute shared-vpc associated-projects remove [SERVICE_PROJECT_2_ID] \
    --host-project [HOST_PROJECT_ID]

gcloud compute shared-vpc disable [HOST_PROJECT_ID]

Console

  1. Visit the Shared VPC page in the GCP Console.
  2. In the project picker, select your host project.
  3. Click Disable Shared VPC.
  4. Enter [HOST_PROJECT_ID] in the text box, and click Disable.

Deleting your firewall rules

gcloud

Delete your firewall rules:

gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project [HOST_PROJECT_ID]

Console

  1. Visit the Firewall rules page in the GCP Console.


  2. In the project picker, select your host project.

  3. In the list of rules, check my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.

  4. Click Delete.

Deleting the shared-net network

gcloud

gcloud compute networks subnets delete tier-1 \
    --project [HOST_PROJECT_ID] \
    --region us-central1

gcloud compute networks subnets delete tier-2 \
    --project [HOST_PROJECT_ID] \
    --region us-central1

gcloud compute networks delete shared-net --project [HOST_PROJECT_ID]

Console

  1. Visit the VPC networks page in the GCP Console.
  2. In the project picker, select your host project.
  3. In the list of networks, click shared-net.
  4. Click Delete VPC Network.

Removing the Host Service Agent User role

gcloud

Remove the Host Service Agent User role from the GKE service account of your first service project:

gcloud projects remove-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Remove the Host Service Agent User role from the GKE service account of your second service project:

gcloud projects remove-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Console

  1. Visit the IAM page in the GCP Console.
  2. In the project picker, select your host project.
  3. In the list of members, check the row showing that service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.
  4. Also check the row showing that service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.
  5. Click Remove.
