Setting up Clusters with Shared VPC

This page shows how to create two Kubernetes Engine clusters, in separate projects, that use a shared virtual private cloud (VPC).

Overview

With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnetworks, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnetworks, including secondary ranges, with the service projects. Components running in a service project can use the shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with both zonal and regional clusters. Clusters that use Shared VPC cannot use legacy networks and must have Alias IPs enabled.

You can configure Shared VPC when you create a new cluster. Kubernetes Engine does not support converting existing clusters to the Shared VPC model.

The exercises in this topic set up the infrastructure for a two-tier web application, as described in Shared VPC Overview.

The exercises on this page use specific names and address ranges to illustrate general procedures; you can change the names and address ranges to suit your needs. The exercises also use the us-central1 region and the us-central1-a zone, which you can likewise change.

Before you begin

To do this task, you need to have a Cloud Platform organization, and in your organization, you need to have three Cloud Platform projects.

In your organization, you must be granted the Compute Shared VPC Admin role.

Before you do the exercises in this topic, choose one of your projects to be the host project, and two of your projects to be service projects. Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This topic uses the following friendly names to refer to your projects:

  • Your host project
  • Your first service project
  • Your second service project

This topic uses the following placeholders to refer to your project IDs and numbers.

  • [HOST_PROJECT_ID] is the project ID of your host project.
  • [HOST_PROJECT_NUM] is the project number of your host project.
  • [SERVICE_PROJECT_1_ID] is the project ID of your first service project.
  • [SERVICE_PROJECT_1_NUM] is the project number of your first service project.
  • [SERVICE_PROJECT_2_ID] is the project ID of your second service project.
  • [SERVICE_PROJECT_2_NUM] is the project number of your second service project.

Configure gcloud to use the v1beta1 API

To use this feature with gcloud, you must enable the v1beta1 API surface for gcloud, which allows you to run gcloud beta container clusters commands.

To configure the gcloud command-line tool to use the v1beta1 API, run one of the following commands in your shell or terminal window:

export CLOUDSDK_CONTAINER_USE_V1_API_CLIENT=false
or:
gcloud config set container/use_v1_api false
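
To confirm the setting, you can optionally list the property:

gcloud config list container/use_v1_api

The output should show use_v1_api set to false.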

Finding your project IDs and numbers

Console

  1. Visit the Home page in the Google Cloud Platform Console.
    Visit the Home page
  2. In the project picker, select the project that you have chosen to be the host project.
  3. Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.
  4. Do the same for each of your projects that you have chosen to be service projects.

gcloud

List your projects:

gcloud projects list

The output shows your project names, IDs, and numbers. Make a note of the IDs and numbers for later:

PROJECT_ID        NAME        PROJECT_NUMBER
host-123          host        1027xxxxxxxx
srv-1-456         srv-1       4964xxxxxxxx
srv-2-789         srv-2       4559xxxxxxxx
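
If you prefer to capture a single project's number directly, one option is to describe the project and extract the projectNumber field:

gcloud projects describe [HOST_PROJECT_ID] --format 'value(projectNumber)'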

Enabling the Kubernetes Engine API in your projects

Before you continue with the exercises in this topic, make sure that the Google Kubernetes Engine API is enabled in all three of your projects. Enabling the API in a project creates a Kubernetes Engine service account for the project. To do the remaining steps in this topic, each of your projects must have a Kubernetes Engine service account.

Console

  1. Visit the APIs Dashboard in the GCP Console.
    Visit the APIs Dashboard
  2. In the project picker, select the project that you have chosen to be the host project.
  3. If Google Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Google Kubernetes Engine API. Click the Google Kubernetes Engine API card, and click Enable.
  4. Do the same for each of your projects that you have chosen to be service projects.

gcloud

Enable the Kubernetes Engine API for your three projects. Each operation may take some time to complete:

gcloud services enable container.googleapis.com --project [HOST_PROJECT_ID]
gcloud services enable container.googleapis.com --project [SERVICE_PROJECT_1_ID]
gcloud services enable container.googleapis.com --project [SERVICE_PROJECT_2_ID]
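
To verify that the API is enabled in a project, you can optionally list the project's enabled services and check that container.googleapis.com appears in the output:

gcloud services list --enabled --project [HOST_PROJECT_ID]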

Creating a network and two subnetworks

In your host project, create a network named shared-net. Then create two subnets: one named tier-1 and one named tier-2. For each subnet, create two secondary address ranges: one for services and one for pods.

Console

  1. Visit the VPC networks page in the GCP Console.
    Visit the VPC networks page
  2. In the project picker, select your host project.
  3. Click Create VPC Network.
  4. For Name, enter shared-net.
  5. Under Subnet creation mode, select Custom.
  6. In the New subnet box, for Name, enter tier-1.
  7. For Region, select us-central1.
  8. For IP address range, enter 10.0.4.0/22.
  9. Click Create secondary IP range. For Subnet range name, enter tier-1-services, and for Secondary IP range, enter 10.0.32.0/20.
  10. Click Add IP range. For Subnet range name, enter tier-1-pods, and for Secondary IP range, enter 10.4.0.0/14.
  11. Click + Add subnet.
  12. For Name, enter tier-2.
  13. For Region, select us-central1.
  14. For IP address range, enter 172.16.4.0/22.
  15. Click Create secondary IP range. For Subnet range name, enter tier-2-services, and for Secondary IP range, enter 172.16.16.0/20.
  16. Click Add IP range. For Subnet range name, enter tier-2-pods, and for Secondary IP range, enter 172.20.0.0/14.
  17. Click Create.

gcloud

In your host project, create a network named shared-net:

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project [HOST_PROJECT_ID]

In your new network, create a subnet named tier-1:

gcloud compute networks subnets create tier-1 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region us-central1 \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14

Create another subnet named tier-2:

gcloud compute networks subnets create tier-2 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region us-central1 \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14 
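
To confirm that a subnet and its secondary ranges were created as intended, you can optionally describe the subnet and inspect the ipCidrRange and secondaryIpRanges fields:

gcloud compute networks subnets describe tier-1 \
    --project [HOST_PROJECT_ID] \
    --region us-central1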

Determining the names of service accounts in your service projects

You have two service projects, each of which has several service accounts. This section is concerned with your Kubernetes Engine service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

These are the names of the Kubernetes Engine service accounts in your two service projects:

  • service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com
  • service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com

These are the names of the Google APIs service accounts in your two service projects:

  • [SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com
  • [SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com
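
Because these names are derived from your project numbers, one optional convenience is to build them in your shell with command substitution. This is a sketch using illustrative variable names:

SERVICE_PROJECT_1_NUM=$(gcloud projects describe [SERVICE_PROJECT_1_ID] \
    --format 'value(projectNumber)')
GKE_SA_1=service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com
API_SA_1=${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com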

Enabling shared VPC and granting roles

In your host project, enable Shared VPC, and attach your two service projects to the host project. Then grant the appropriate roles to service accounts that belong to your service projects.

Console

Follow these steps to enable Shared VPC, attach service projects, and grant roles:

  1. Visit the Shared VPC page in the GCP Console.
    Visit the Shared VPC page
  2. In the project picker, select your host project.
  3. Click Set up Shared VPC.
    The Enable host project screen is displayed.
  4. Click Save & continue.
    The Select subnets page is displayed.
  5. Under Sharing mode, select Individual subnets.
  6. Under Subnets to share, check tier-1 and tier-2. Clear all other checkboxes.
  7. Click Continue.
    The Give permissions page is displayed.
  8. Under Attach service projects, check your first service project and your second service project. Clear all the other checkboxes under Attach service projects.
  9. Under Kubernetes Engine access, check Enabled.
  10. Click Save.
    A new page is displayed.
  11. Under Individual subnet permissions, check tier-1.
  12. In the right pane, delete any service accounts that belong to your second service project. That is, delete any service accounts that contain [SERVICE_PROJECT_2_NUM].
  13. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your first service project, and verify that both appear in the list. If either is missing, enter the service account name under Add members, and click Add.
  14. In the center pane, under Individual subnet permissions, check tier-2, and uncheck tier-1.
  15. In the right pane, delete any service accounts that belong to your first service project. That is, delete any service accounts that contain [SERVICE_PROJECT_1_NUM].
  16. In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your second service project, and verify that both appear in the list. If either is missing, enter the service account name under Add members, and click Add.

gcloud

Enable Shared VPC in your host project:

gcloud compute shared-vpc enable [HOST_PROJECT_ID]

Attach your first service project to your host project:

gcloud compute shared-vpc associated-projects add \
    [SERVICE_PROJECT_1_ID] \
    --host-project [HOST_PROJECT_ID]

Attach your second service project to your host project:

gcloud compute shared-vpc associated-projects add \
    [SERVICE_PROJECT_2_ID] \
    --host-project [HOST_PROJECT_ID]

Get the IAM policy for the tier-1 subnet:

gcloud beta compute networks subnets get-iam-policy tier-1 \
   --project [HOST_PROJECT_ID] \
   --region us-central1

The output contains an etag field. Make a note of the etag value.

Create a file named tier-1-policy.yaml that has this content:

bindings:
- members:
  - serviceAccount:[SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com
  - serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: [ETAG_STRING]

where [ETAG_STRING] is the etag value that you noted previously.
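
If you script this step, one option is to capture the etag with a --format expression instead of copying it by hand, and then substitute it when you write tier-1-policy.yaml:

ETAG=$(gcloud beta compute networks subnets get-iam-policy tier-1 \
    --project [HOST_PROJECT_ID] \
    --region us-central1 \
    --format 'value(etag)')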

Set the IAM policy for the tier-1 subnet:

gcloud beta compute networks subnets set-iam-policy tier-1 \
    tier-1-policy.yaml \
    --project [HOST_PROJECT_ID] \
    --region us-central1

Next get the IAM policy for the tier-2 subnet:

gcloud beta compute networks subnets get-iam-policy tier-2 \
   --project [HOST_PROJECT_ID] \
   --region us-central1

The output contains an etag field. Make a note of the etag value.

Create a file named tier-2-policy.yaml that has this content:

bindings:
- members:
  - serviceAccount:[SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com
  - serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: [ETAG_STRING]

where [ETAG_STRING] is the etag value that you noted previously.

Set the IAM policy for the tier-2 subnet:

gcloud beta compute networks subnets set-iam-policy tier-2 \
    tier-2-policy.yaml \
    --project [HOST_PROJECT_ID] \
    --region us-central1

The preceding steps grant the Compute Network User role to four service accounts: two service accounts in your first service project receive the role on the tier-1 subnet of your host project, and two service accounts in your second service project receive it on the tier-2 subnet.

Summary of roles granted on subnets

  • The service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com service account is granted the Compute Network User role on the tier-1 subnet.

  • The [SERVICE_PROJECT_1_NUM]@cloudservices.gserviceaccount.com service account is granted the Compute Network User role on the tier-1 subnet.

  • The service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com service account is granted the Compute Network User role on the tier-2 subnet.

  • The [SERVICE_PROJECT_2_NUM]@cloudservices.gserviceaccount.com service account is granted the Compute Network User role on the tier-2 subnet.

Granting the Host Service Agent User role

Console

If you have been using the GCP Console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the GCP Console to attach service projects to your host project.

gcloud

Grant the Host Service Agent User role to the Kubernetes Engine service account of your first service project. This role is granted on your host project:

gcloud projects add-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Grant the Host Service Agent User role to the Kubernetes Engine service account of your second service project. This role is granted on your host project:

gcloud projects add-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser
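
To confirm both bindings, you can optionally read back the host project's IAM policy and filter for the role:

gcloud projects get-iam-policy [HOST_PROJECT_ID] \
    --flatten 'bindings[].members' \
    --filter 'bindings.role:roles/container.hostServiceAgentUser' \
    --format 'value(bindings.members)'

The output should list the Kubernetes Engine service accounts of both service projects.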

Creating a cluster in your first service project

Console

  1. Visit the Kubernetes Engine menu in the GCP Console.

    Visit the Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Click Create cluster.
  4. For Cluster name, enter tier-1-cluster.
  5. For Zone, select us-central1-a.
  6. At the bottom of the page, click More to open Advanced Options.
  7. For Use alias IP ranges, select Enabled.
  8. For Automatically create secondary ranges, select Disabled.
  9. Select Networks shared with me (from host project: ...).
  10. For Node subnet, select tier-1.
  11. For Container secondary CIDR range, select tier-1-pods.
  12. For Services secondary CIDR range, select tier-1-services.
  13. Click Create.
  14. When the creation is complete, in the list of clusters, click tier-1-cluster.
  15. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.
  16. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud

Create a cluster in your first service project:

gcloud beta container clusters create tier-1-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a \
    --enable-ip-alias \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork projects/[HOST_PROJECT_ID]/regions/us-central1/subnetworks/tier-1 \
    --cluster-secondary-range-name tier-1-pods \
    --services-secondary-range-name tier-1-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud compute instances list --project [SERVICE_PROJECT_1_ID]

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.2
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.3
gke-tier-1-cluster-...  us-central1-a  ... 10.0.4.4
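
You can also verify that Pod addresses come from the tier-1-pods secondary range, 10.4.0.0/14. A minimal check, assuming you have kubectl installed, is to fetch credentials for the cluster and list Pods with their IP addresses:

gcloud container clusters get-credentials tier-1-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a

kubectl get pods --all-namespaces -o wide

The IP shown for each Pod should fall within 10.4.0.0/14.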

Creating a cluster in your second service project

Console

  1. Visit the Kubernetes Engine menu in the GCP Console.

    Visit the Kubernetes Engine menu

  2. In the project picker, select your second service project.

  3. Click Create cluster.
  4. For Cluster name, enter tier-2-cluster.
  5. For Zone, select us-central1-a.
  6. At the bottom of the page, click More to open Advanced Options.
  7. For Use alias IP ranges, select Enabled.
  8. For Automatically create secondary ranges, select Disabled.
  9. Select Networks shared with me (from host project: ...).
  10. For Node subnet, select tier-2.
  11. For Container secondary CIDR range, select tier-2-pods.
  12. For Services secondary CIDR range, select tier-2-services.
  13. Click Create.
  14. When the creation is complete, in the list of clusters, click tier-2-cluster.
  15. Under Node Pools, click the name of your instance group. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.
  16. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud

Create a cluster in your second service project:

gcloud beta container clusters create tier-2-cluster \
    --project [SERVICE_PROJECT_2_ID] \
    --zone us-central1-a \
    --enable-ip-alias \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork projects/[HOST_PROJECT_ID]/regions/us-central1/subnetworks/tier-2 \
    --cluster-secondary-range-name tier-2-pods \
    --services-secondary-range-name tier-2-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud compute instances list --project [SERVICE_PROJECT_2_ID]

The output shows the internal IP addresses of the nodes:

NAME                    ZONE           ... INTERNAL_IP
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.2
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.3
gke-tier-2-cluster-...  us-central1-a  ... 172.16.4.4
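
Similarly, you can verify that Service addresses come from the tier-2-services secondary range, 172.16.16.0/20. For example, the cluster's built-in kubernetes Service receives a ClusterIP from that range; assuming you have kubectl installed:

gcloud container clusters get-credentials tier-2-cluster \
    --project [SERVICE_PROJECT_2_ID] \
    --zone us-central1-a

kubectl get service kubernetes

The CLUSTER-IP shown should fall within 172.16.16.0/20.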

Creating firewall rules

Create a firewall rule for the shared-net network in your host project that allows ingress traffic on TCP port 22. This rule enables you to connect to your cluster nodes using SSH.

Console

  1. Visit the Firewall rules page in the GCP Console.

    Visit the Firewall rules page

  2. In the project picker, select your host project.

  3. Click Create Firewall Rule.
  4. For Name, enter my-shared-net-rule.
  5. For Network, select shared-net.
  6. For Direction of traffic, select Ingress.
  7. For Action on match, select Allow.
  8. For Targets, select All instances in the network.
  9. For Source filter, select IP ranges.
  10. For Source IP ranges, enter 0.0.0.0/0.
  11. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.
  12. Click Create.

gcloud

Create a firewall rule for your shared network:

gcloud compute firewall-rules create my-shared-net-rule \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22
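
To review the rule you just created, you can optionally describe it:

gcloud compute firewall-rules describe my-shared-net-rule \
    --project [HOST_PROJECT_ID]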

Connecting to a node using SSH

Console

  1. Visit the Kubernetes Engine menu in the GCP Console.

    Visit the Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Click tier-1-cluster.
  4. Under Node Pools, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.
  5. In the list of nodes, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.
  6. For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.

gcloud

List the nodes in your first service project:

gcloud compute instances list --project [SERVICE_PROJECT_1_ID]

The output includes the names of the nodes in your cluster:

NAME                                           ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8  ...
gke-tier-1-cluster-default-pool-faf87d48-q17k  ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk  ...

SSH into one of your nodes:

gcloud compute ssh [NODE_NAME] \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a

where [NODE_NAME] is the name of one of your nodes.

Pinging between nodes

In your SSH command-line window, start the CoreOS Toolbox:

/usr/bin/toolbox

In the toolbox shell, ping one of your other nodes in the same cluster. For example:

ping 10.0.4.4

The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

Now try to ping one of the nodes in the cluster in your other service project. For example:

ping 172.16.4.3

This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

At an ordinary command prompt, not in your toolbox shell, update your firewall rule to allow ICMP:

gcloud compute firewall-rules update my-shared-net-rule \
    --project [HOST_PROJECT_ID] \
    --allow tcp:22,icmp

In your toolbox shell, ping the node again. For example:

ping 172.16.4.3

This time the ping command succeeds.

Creating additional firewall rules

You can create additional firewall rules to allow communication between nodes, pods, and services in your clusters. For example, this rule allows traffic to enter from any node, pod, or service in tier-1-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

This rule allows traffic to enter from any node, pod, or service in tier-2-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project [HOST_PROJECT_ID] \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20
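
To review all three rules, you can optionally list the firewall rules in your host project and confirm that each my-shared-net-rule entry is attached to the shared-net network:

gcloud compute firewall-rules list --project [HOST_PROJECT_ID]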

Using Shared VPC with private clusters

You can use Shared VPC with private clusters. There is no special setup required. However, you must ensure that the master CIDR range does not overlap other reserved ranges in the shared network.
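
As an illustration only, a private cluster on the tier-1 subnet might be created as follows. The cluster name and the master range 10.16.0.0/28 are assumptions for this sketch (GKE expects a /28 here, and this one is chosen not to overlap any range used in these exercises), and the exact private-cluster flags depend on your gcloud version:

gcloud beta container clusters create tier-1-private-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 10.16.0.0/28 \
    --network projects/[HOST_PROJECT_ID]/global/networks/shared-net \
    --subnetwork projects/[HOST_PROJECT_ID]/regions/us-central1/subnetworks/tier-1 \
    --cluster-secondary-range-name tier-1-pods \
    --services-secondary-range-name tier-1-services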

Notes about secondary ranges

You can create at most five secondary ranges in a given subnet. Each cluster needs two secondary ranges: one for pods and one for services. As a result, at most two clusters can use a given subnet.

Note: The primary range and the pod secondary range can be shared between clusters, but this is not a recommended configuration.

Known issues

Don't edit cluster nodes in the GCP Console

Do not edit your cluster nodes in the GCP Console. Doing so can cause the loss of your Alias IP ranges. This issue is being addressed.

Cluster creation page in the GCP Console

The Kubernetes Engine cluster creation page shows subnets that are available to the end user. Some of those subnets might not be available to the Kubernetes Engine service account. If you try to create a cluster using a subnet that is not available to the Kubernetes Engine service account, cluster creation fails.

Also, the Kubernetes Engine cluster creation page might not show all subnets that are available to the Kubernetes Engine service account.

This issue will be fixed in a future release.

Firewall events for load balancer creation

If the Kubernetes Service Account was not granted firewall management permissions, the Kubernetes service controller might not create events for the required firewall changes. This is due to an RBAC permissions issue in Kubernetes versions 1.9 and earlier: the service controller lacks the ability to raise events.

To fix this problem, apply these YAML files, which contain RBAC policies that allow the events to be created.

Clusters based on Kubernetes 1.10 and later already have these RBAC policies applied.

Cleaning up

After completing the exercises on this page, follow these steps to remove the resources so that you don't incur unwanted charges on your account:

Deleting the clusters

Console

  1. Visit the Kubernetes Engine menu in the GCP Console.

    Visit the Kubernetes Engine menu

  2. In the project picker, select your first service project.

  3. Check tier-1-cluster, and click Delete.
  4. In the project picker, select your second service project.
  5. Check tier-2-cluster, and click Delete.

gcloud

gcloud container clusters delete tier-1-cluster \
    --project [SERVICE_PROJECT_1_ID] \
    --zone us-central1-a

gcloud container clusters delete tier-2-cluster \
    --project [SERVICE_PROJECT_2_ID] \
    --zone us-central1-a

Disabling Shared VPC

Console

  1. Visit the Shared VPC page in the GCP Console.
    Visit the Shared VPC page
  2. In the project picker, select your host project.
  3. Click Disable Shared VPC.
  4. Enter [HOST_PROJECT_ID] in the text box, and click Disable.

gcloud

gcloud compute shared-vpc associated-projects remove [SERVICE_PROJECT_1_ID] \
    --host-project [HOST_PROJECT_ID]

gcloud compute shared-vpc associated-projects remove [SERVICE_PROJECT_2_ID] \
    --host-project [HOST_PROJECT_ID]

gcloud compute shared-vpc disable [HOST_PROJECT_ID]

Deleting your firewall rules

Console

  1. Visit the Firewall rules page in the GCP Console.

    Visit the Firewall rules page

  2. In the project picker, select your host project.

  3. In the list of rules, check my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.
  4. Click Delete.

gcloud

Delete your firewall rules:

gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project [HOST_PROJECT_ID]

Deleting the shared-net network

Console

  1. Visit the VPC networks page in the GCP Console.
    Visit the VPC networks page
  2. In the project picker, select your host project.
  3. In the list of networks, click shared-net.
  4. Click Delete VPC Network.

gcloud

gcloud compute networks subnets delete tier-1 \
    --project [HOST_PROJECT_ID] \
    --region us-central1

gcloud compute networks subnets delete tier-2 \
    --project [HOST_PROJECT_ID] \
    --region us-central1

gcloud compute networks delete shared-net --project [HOST_PROJECT_ID]

Removing the Host Service Agent User role

Console

  1. Visit the IAM page in the GCP Console.
    Visit the IAM page
  2. In the project picker, select your host project.
  3. In the list of members, check the row showing that service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.
  4. Also check the row showing that service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com is granted the same role.
  5. Click Remove.

gcloud

Remove the Host Service Agent User role from the Kubernetes Engine service account of your first service project:

gcloud projects remove-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_1_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

Remove the Host Service Agent User role from the Kubernetes Engine service account of your second service project:

gcloud projects remove-iam-policy-binding [HOST_PROJECT_ID] \
    --member serviceAccount:service-[SERVICE_PROJECT_2_NUM]@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser

What's next

  • Read the Shared VPC Overview.