This guide shows how to create two Google Kubernetes Engine (GKE) clusters, in separate projects, that use a Shared VPC. For general information about GKE networking, visit the Network overview.
Overview
With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.
You can use Shared VPC with Autopilot clusters and with zonal and regional Standard clusters. Standard clusters that use Shared VPC cannot use legacy networks and must have VPC-native traffic routing enabled. Autopilot clusters always enable VPC-native traffic routing.
You can configure Shared VPC when you create a new cluster. GKE does not support converting existing clusters to the Shared VPC model.
With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.
About the examples
The examples in this guide set up the infrastructure for a two-tier web application, as described in Shared VPC overview.
Before you begin
Before you start to set up a cluster with Shared VPC:
- Ensure you have a Google Cloud organization.
- Ensure your organization has three Google Cloud projects.
- Ensure that you're already familiar with Shared VPC concepts, including the various Identity and Access Management (IAM) roles that Shared VPC uses. The tasks in this guide must be performed by a Shared VPC Admin.
- Ensure that you're already familiar with any organization policy constraints applicable to your organization, folder, or projects. An Organization Policy Administrator might have defined constraints that limit which projects can be Shared VPC host projects or that limit which subnets can be shared. Refer to organization policy constraints for more information.
Before you perform the exercises in this guide:
- Choose one of your projects to be the host project.
- Choose two of your projects to be service projects.
Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This guide uses the following friendly names and placeholders to refer to your projects:
Friendly name | Project ID placeholder | Project number placeholder
---|---|---
Your host project | HOST_PROJECT_ID | HOST_PROJECT_NUM
Your first service project | SERVICE_PROJECT_1_ID | SERVICE_PROJECT_1_NUM
Your second service project | SERVICE_PROJECT_2_ID | SERVICE_PROJECT_2_NUM
Finding your project IDs and numbers
You can find your project IDs and numbers by using the gcloud CLI or the Google Cloud console.
Console
Go to the Home page of the Google Cloud console.
In the project picker, select the project that you have chosen to be the host project.
Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.
Do the same for each of the projects that you have chosen to be service projects.
gcloud
List your projects with the following command:
gcloud projects list
The output shows your project names, IDs and numbers. Make a note of the ID and number for later:
PROJECT_ID NAME PROJECT_NUMBER
host-123 host 1027xxxxxxxx
srv-1-456 srv-1 4964xxxxxxxx
srv-2-789 srv-2 4559xxxxxxxx
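If you only need the number for a single project, you can optionally query it directly. This is a minimal verification step, not part of the required procedure, and it uses the same HOST_PROJECT_ID placeholder as the rest of this guide:
gcloud projects describe HOST_PROJECT_ID --format="value(projectNumber)"
Repeat the command with SERVICE_PROJECT_1_ID and SERVICE_PROJECT_2_ID to capture the other two numbers.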
Enabling the GKE API in your projects
Before you continue with the exercises in this guide, make sure that the GKE API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To perform the remaining tasks in this guide, each of your projects must have a GKE service account.
You can enable the GKE API using the Google Cloud console or the Google Cloud CLI.
Console
Go to the APIs & Services page in the Google Cloud console.
In the project picker, select the project that you have chosen to be the host project.
If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.
Repeat these steps for each project that you have chosen to be a service project. Each operation can take some time to complete.
gcloud
Enable the GKE API for your three projects. Each operation may take some time to complete:
gcloud services enable container.googleapis.com --project HOST_PROJECT_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_1_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_2_ID
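To confirm that the API is active before you continue, you can optionally list the enabled services in each project and look for container.googleapis.com. This check is only a quick verification, assuming a bash-compatible shell:
gcloud services list --enabled --project HOST_PROJECT_ID | grep container.googleapis.com
Repeat the command for SERVICE_PROJECT_1_ID and SERVICE_PROJECT_2_ID.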
Creating a network and two subnets
In this section, you will perform the following tasks:
- In your host project, create a network named shared-net.
- Create two subnets named tier-1 and tier-2.
- For each subnet, create two secondary address ranges: one for Services, and one for Pods.
Console
Go to the VPC networks page in the Google Cloud console.
In the project picker, select your host project.
Click add_box Create VPC Network.
For Name, enter shared-net.
Under Subnet creation mode, select Custom.
In the New subnet box, for Name, enter tier-1.
For Region, select a region.
Under IP stack type, select IPv4 (single-stack).
For IPv4 range, enter 10.0.4.0/22.
Click Create secondary IPv4 range. For Subnet range name, enter tier-1-services, and for Secondary IPv4 range, enter 10.0.32.0/20.
Click Add IP range. For Subnet range name, enter tier-1-pods, and for Secondary IPv4 range, enter 10.4.0.0/14.
Click Add subnet.
For Name, enter tier-2.
For Region, select the same region that you selected for the previous subnet.
For IPv4 range, enter 172.16.4.0/22.
Click Create secondary IPv4 range. For Subnet range name, enter tier-2-services, and for Secondary IPv4 range, enter 172.16.16.0/20.
Click Add IP range. For Subnet range name, enter tier-2-pods, and for Secondary IPv4 range, enter 172.20.0.0/14.
Click Create.
gcloud
In your host project, create a network named shared-net:
gcloud compute networks create shared-net \
--subnet-mode custom \
--project HOST_PROJECT_ID
In your new network, create a subnet named tier-1:
gcloud compute networks subnets create tier-1 \
--project HOST_PROJECT_ID \
--network shared-net \
--range 10.0.4.0/22 \
--region COMPUTE_REGION \
--secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14
Create another subnet named tier-2:
gcloud compute networks subnets create tier-2 \
--project HOST_PROJECT_ID \
--network shared-net \
--range 172.16.4.0/22 \
--region COMPUTE_REGION \
--secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14
Replace COMPUTE_REGION with a Compute Engine region.
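Before you share the subnets, you can optionally confirm that both subnets and their secondary ranges were created as expected. This verification is not part of the original procedure; it uses the generic gcloud --filter and --format flags:
gcloud compute networks subnets list \
--project HOST_PROJECT_ID \
--filter="network:shared-net"
gcloud compute networks subnets describe tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION \
--format="yaml(ipCidrRange,secondaryIpRanges)"
The describe output should show the 10.0.4.0/22 primary range and the tier-1-services and tier-1-pods secondary ranges.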
Determining the names of service accounts in your service projects
You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.
The following table lists the names of the GKE and Google APIs service accounts in your two service projects:
Service account type | Service account name
---|---
GKE | service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
GKE | service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
Google APIs | SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
Google APIs | SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
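If you prefer to derive these names rather than build them by hand, the following is a minimal sketch, assuming a bash shell; the SERVICE_PROJECT_1_NUM variable here is just a local shell variable that mirrors the placeholder used in this guide:
SERVICE_PROJECT_1_NUM=$(gcloud projects describe SERVICE_PROJECT_1_ID --format="value(projectNumber)")
echo "GKE: service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com"
echo "Google APIs: ${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com"
Run the same three lines with SERVICE_PROJECT_2_ID for your second service project.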
Enabling Shared VPC and granting roles
To perform the tasks in this section, ensure that your organization has defined a Shared VPC Admin role.
In this section, you will perform the following tasks:
- In your host project, enable Shared VPC.
- Attach your two service projects to the host project.
- Grant the appropriate IAM roles to service accounts that belong to your service projects:
  - In your first service project, grant two service accounts the Compute Network User role on the tier-1 subnet of your host project.
  - In your second service project, grant two service accounts the Compute Network User role on the tier-2 subnet of your host project.
Console
Perform the following steps to enable Shared VPC, attach service projects, and grant roles:
Go to the Shared VPC page in the Google Cloud console.
In the project picker, select your host project.
Click Set up Shared VPC. The Enable host project screen displays.
Click Save & continue. The Select subnets page displays.
Under Sharing mode, select Individual subnets.
Under Subnets to share, check tier-1 and tier-2. Clear all other checkboxes.
Click Continue. The Give permissions page displays.
Under Attach service projects, check your first service project and your second service project. Clear all the other checkboxes under Attach service projects.
Under Kubernetes Engine access, check Enabled.
Click Save. A new page displays.
Under Individual subnet permissions, check tier-1.
In the right pane, delete any service accounts that belong to your second service project. That is, delete any service accounts that contain SERVICE_PROJECT_2_NUM.
In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your first service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.
In the center pane, under Individual subnet permissions, check tier-2, and clear tier-1.
In the right pane, delete any service accounts that belong to your first service project. That is, delete any service accounts that contain SERVICE_PROJECT_1_NUM.
In the right pane, look for the names of the Kubernetes Engine and Google APIs service accounts that belong to your second service project. You want to see both of those service account names in the list. If either of those is not in the list, enter the service account name under Add members, and click Add.
gcloud
Enable Shared VPC in your host project. The command that you use depends on the level at which you have the Shared VPC Admin role.
If you have the Shared VPC Admin role at the organization level:
gcloud compute shared-vpc enable HOST_PROJECT_ID
If you have the Shared VPC Admin role at the folder level:
gcloud beta compute shared-vpc enable HOST_PROJECT_ID
Attach your first service project to your host project:
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_1_ID \
--host-project HOST_PROJECT_ID
Attach your second service project to your host project:
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_2_ID \
--host-project HOST_PROJECT_ID
Get the IAM policy for the tier-1 subnet:
gcloud compute networks subnets get-iam-policy tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
The output contains an etag field. Make a note of the etag value.
Create a file named tier-1-policy.yaml that has the following content:
bindings:
- members:
  - serviceAccount:SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
  - serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ETAG_STRING
Replace ETAG_STRING with the etag value that you noted previously.
Set the IAM policy for the tier-1 subnet:
gcloud compute networks subnets set-iam-policy tier-1 \
tier-1-policy.yaml \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
Get the IAM policy for the tier-2 subnet:
gcloud compute networks subnets get-iam-policy tier-2 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
The output contains an etag field. Make a note of the etag value.
Create a file named tier-2-policy.yaml that has the following content:
bindings:
- members:
  - serviceAccount:SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
  - serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ETAG_STRING
Replace ETAG_STRING with the etag value that you noted previously.
Set the IAM policy for the tier-2 subnet:
gcloud compute networks subnets set-iam-policy tier-2 \
tier-2-policy.yaml \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
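To check the result, you can optionally list the service projects that are now attached to the host project and re-read one of the subnet policies. Both commands are standard gcloud verification steps and are not required by the procedure:
gcloud compute shared-vpc list-associated-resources HOST_PROJECT_ID
gcloud compute networks subnets get-iam-policy tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
The policy output should now include the two service accounts from your first service project bound to roles/compute.networkUser.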
Managing firewall resources
If you want a GKE cluster in a service project to create and manage the firewall resources in your host project, the service project's GKE service account must be granted the appropriate IAM permissions using one of the following strategies:
Grant the service project's GKE service account the Compute Security Admin role on the host project.
Console
In the Google Cloud console, go to the IAM page.
Select the host project.
Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.
Select the Compute Security Admin role from the drop-down list.
Click Save.
gcloud
Grant the service project's GKE service account the Compute Security Admin role within the host project:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
--role=roles/compute.securityAdmin
Replace the following:
- HOST_PROJECT_ID: the Shared VPC host project ID
- SERVICE_PROJECT_NUM: the project number of the service project that contains the GKE service account
For a finer-grained approach, create a custom IAM role that includes only the following permissions: compute.networks.updatePolicy, compute.firewalls.list, compute.firewalls.get, compute.firewalls.create, compute.firewalls.update, and compute.firewalls.delete. Grant the service project's GKE service account that custom role on the host project.
Console
Create a custom role within the host project containing the IAM permissions mentioned earlier:
In the Google Cloud console, go to the Roles page.
Using the drop-down list at the top of the page, select the host project.
Click Create Role.
Enter a Title, Description, ID and Role launch stage for the role. The role name cannot be changed after the role is created.
Click Add Permissions.
Filter for compute.networks and compute.firewalls, and select the IAM permissions mentioned previously.
When all the required permissions are selected, click Add.
Click Create.
Grant the service project's GKE service account the newly created custom role within the host project:
In the Google Cloud console, go to the IAM page.
Select the host project.
Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.
Filter for the Title of the newly created custom role and select it.
Click Save.
gcloud
Create a custom role within the host project containing the IAM permissions mentioned earlier:
gcloud iam roles create ROLE_ID \
--title="ROLE_TITLE" \
--description="ROLE_DESCRIPTION" \
--stage=LAUNCH_STAGE \
--permissions=compute.networks.updatePolicy,compute.firewalls.list,compute.firewalls.get,compute.firewalls.create,compute.firewalls.update,compute.firewalls.delete \
--project=HOST_PROJECT_ID
Grant the service project's GKE service account the newly created custom role within the host project:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
--role=projects/HOST_PROJECT_ID/roles/ROLE_ID
Replace the following:
- ROLE_ID: the name of the role, such as gkeFirewallAdmin
- ROLE_TITLE: a friendly title for the role, such as GKE Firewall Admin
- ROLE_DESCRIPTION: a short description of the role, such as GKE service account FW permissions
- LAUNCH_STAGE: the launch stage of the role in its lifecycle, such as ALPHA, BETA, or GA
- HOST_PROJECT_ID: the Shared VPC host project ID
- SERVICE_PROJECT_NUM: the project number of the service project that contains the GKE service account
If you have clusters in more than one service project, you must choose one of the strategies and repeat it for each service project's GKE service account.
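Whichever strategy you choose, you can optionally confirm the binding afterwards by reading the host project's IAM policy and filtering for the GKE service account. This is only a verification sketch using generic gcloud flags:
gcloud projects get-iam-policy HOST_PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.members:container-engine-robot.iam.gserviceaccount.com" \
--format="table(bindings.role)"
The output should list roles/compute.securityAdmin or your custom role, depending on the strategy you chose.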
Summary of roles granted on subnets
Here's a summary of the roles granted on the subnets:
Service account | Role | Subnet
---|---|---
service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | tier-1
SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com | Compute Network User | tier-1
service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | tier-2
SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com | Compute Network User | tier-2
Kubernetes Engine access
When attaching a service project, enabling Kubernetes Engine access grants the service project's GKE service account the permissions to perform network management operations in the host project.
GKE assigns the following role automatically in the host project when enabling Kubernetes Engine access:
Member | Role | Resource
---|---|---
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Host Service Agent User | GKE service account in the host project
However, you must manually grant the Compute Network User IAM role to the service project's GKE service account so that it can access the host network.
Member | Role | Resource
---|---|---
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | Specific subnet or whole host project
If a service project was attached without enabling Kubernetes Engine access, assuming the Kubernetes Engine API has already been enabled in both the host and service project, you can manually assign the permissions to the service project's GKE service account by adding the following IAM role bindings in the host project:
Member | Role | Resource
---|---|---
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | Specific subnet or whole host project
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Host Service Agent User | GKE service account in the host project
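For reference, a hedged gcloud equivalent of the first row, granting Compute Network User on the whole host project rather than on individual subnets, would look like the following; the subnet-level alternative is the set-iam-policy flow shown earlier in this guide:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
--role=roles/compute.networkUser
The Host Service Agent User binding from the second row is covered in the next section.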
Granting the Host Service Agent User role
Each service project's GKE service account must have a binding for the Host Service Agent User role on the host project. The GKE service account takes the following form:
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com
where SERVICE_PROJECT_NUM is the project number of your service project.
This binding allows the service project's GKE service account to perform network management operations in the host project, as if it were the host project's GKE service account. This role can only be granted to a service project's GKE service account.
Console
If you have been using the Google Cloud console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the Google Cloud console to attach service projects to your host project.
gcloud
For your first project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
--role roles/container.hostServiceAgentUser
For your second project, grant the Host Service Agent User role to the project's GKE Service Account. This role is granted on your host project:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
--member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
--role roles/container.hostServiceAgentUser
Verifying usable subnets and secondary IP address ranges
When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services. There are several reasons that an IP address range might not be available for use. Whether you are creating the cluster with the Google Cloud console or the gcloud CLI, you should specify usable IP address ranges.
An IP address range is usable for the new cluster's Services if the range isn't already in use. The IP address range that you specify for the new cluster's Pods can either be an unused range, or it can be a range that's shared with Pods in your other clusters. IP address ranges that are created and managed by GKE can't be used by your cluster.
You can list a project's usable subnets and secondary IP address ranges by using the gcloud CLI.
gcloud
gcloud container subnets list-usable \
--project SERVICE_PROJECT_ID \
--network-project HOST_PROJECT_ID
Replace SERVICE_PROJECT_ID with the project ID of the service project.
If you omit the --project or --network-project option, the gcloud CLI command uses the default project from your active configuration. Because the host project and network project are distinct, you must specify one or both of --project and --network-project.
The output is similar to the following:
PROJECT: xpn-host
REGION: REGION_NAME
NETWORK: shared-net
SUBNET: tier-2
RANGE: 172.16.4.0/22
SECONDARY_RANGE_NAME: tier-2-services
IP_CIDR_RANGE: 172.16.16.0/20
STATUS: usable for pods or services
SECONDARY_RANGE_NAME: tier-2-pods
IP_CIDR_RANGE: 172.20.0.0/14
STATUS: usable for pods or services
PROJECT: xpn-host
REGION: REGION_NAME
NETWORK: shared-net
SUBNET: tier-1
RANGE: 10.0.4.0/22
SECONDARY_RANGE_NAME: tier-1-services
IP_CIDR_RANGE: 10.0.32.0/20
STATUS: usable for pods or services
SECONDARY_RANGE_NAME: tier-1-pods
IP_CIDR_RANGE: 10.4.0.0/14
STATUS: usable for pods or services
The list-usable command returns an empty list in the following situations:
- When the service project's Kubernetes Engine service account does not have the Host Service Agent User role on the host project.
- When the Kubernetes Engine service account in the host project does not exist (for example, if you've deleted that account accidentally).
- When Kubernetes Engine API is not enabled in the host project, which implies the Kubernetes Engine service account in the host project is missing.
For more information, see the troubleshooting section.
Notes about secondary ranges
You can create up to 30 secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for Pods and one for Services.
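If you later need additional secondary ranges for more clusters, you can add them to an existing subnet. The range name and CIDR in the following sketch are illustrative placeholders, not values used elsewhere in this guide:
gcloud compute networks subnets update tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION \
--add-secondary-ranges tier-1-extra-pods=10.8.0.0/14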
Creating a cluster in your first service project
To create a cluster in your first service project, perform the following steps using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the project picker, select your first service project.
Click add_box Create.
In the Autopilot or Standard section, click Configure.
For Name, enter tier-1-cluster.
In the Region drop-down list, select the same region that you used for the subnets.
From the navigation pane, click Networking.
Under Cluster networking, select shared-net.
For Node subnet, select tier-1.
Under Advanced networking options, for Cluster default Pod address range, select tier-1-pods.
For Service address range, select tier-1-services.
Click Create.
When the creation is complete, in the list of clusters, click tier-1-cluster.
On the Cluster details page, click the Nodes tab.
Under Node Pools, click the name of the node pool you want to inspect.
Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.
In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.
gcloud
Create a cluster named tier-1-cluster in your first service project:
gcloud container clusters create-auto tier-1-cluster \
--project=SERVICE_PROJECT_1_ID \
--location=COMPUTE_REGION \
--network=projects/HOST_PROJECT_ID/global/networks/shared-net \
--subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-1 \
--cluster-secondary-range-name=tier-1-pods \
--services-secondary-range-name=tier-1-services
When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.
gcloud compute instances list --project SERVICE_PROJECT_1_ID
The output shows the internal IP addresses of the nodes:
NAME ZONE ... INTERNAL_IP
gke-tier-1-cluster-... ZONE_NAME ... 10.0.4.2
gke-tier-1-cluster-... ZONE_NAME ... 10.0.4.3
gke-tier-1-cluster-... ZONE_NAME ... 10.0.4.4
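To interact with the new cluster using kubectl, you can optionally fetch its credentials; this step isn't required for the rest of this guide:
gcloud container clusters get-credentials tier-1-cluster \
--project SERVICE_PROJECT_1_ID \
--location COMPUTE_REGION
kubectl get nodes -o wide
The wide output includes the same internal node IP addresses from the 10.0.4.0/22 range.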
Creating a cluster in your second service project
To create a cluster in your second service project, perform the following steps using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the project picker, select your second service project.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For Name, enter tier-2-cluster.
In the Region drop-down list, select the same region that you used for the subnets.
From the navigation pane, click Networking.
For Network, select shared-net.
For Node subnet, select tier-2.
For Cluster default Pod address range, select tier-2-pods.
For Service address range, select tier-2-services.
Click Create.
When the creation is complete, in the list of clusters, click tier-2-cluster.
On the Cluster details page, click the Nodes tab.
Under Node Pools, click the name of the node pool you want to inspect.
Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.
In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.
gcloud
Create a cluster named tier-2-cluster in your second service project:
gcloud container clusters create-auto tier-2-cluster \
--project=SERVICE_PROJECT_2_ID \
--location=COMPUTE_REGION \
--network=projects/HOST_PROJECT_ID/global/networks/shared-net \
--subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-2 \
--cluster-secondary-range-name=tier-2-pods \
--services-secondary-range-name=tier-2-services
When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.
gcloud compute instances list --project SERVICE_PROJECT_2_ID
The output shows the internal IP addresses of the nodes:
NAME ZONE ... INTERNAL_IP
gke-tier-2-cluster-... ZONE_NAME ... 172.16.4.2
gke-tier-2-cluster-... ZONE_NAME ... 172.16.4.3
gke-tier-2-cluster-... ZONE_NAME ... 172.16.4.4
Creating firewall rules
To allow traffic into the network and between the clusters within the network, you need to create firewall rules. The following sections demonstrate how to create and update firewall rules:
- Creating a firewall rule to enable SSH connection to a node: Demonstrates how to create a firewall rule that allows SSH traffic from outside of the clusters.
- Updating the firewall rule to ping between nodes: Demonstrates how to update the firewall rule to permit ICMP traffic between the clusters.
SSH and ICMP are used as examples; you must create firewall rules that meet your specific application's networking requirements.
Creating a firewall rule to enable SSH connection to a node
In your host project, create a firewall rule for the shared-net network. Allow traffic to enter on TCP port 22, which permits you to connect to your cluster nodes using SSH.
Console
Go to the Firewall page in the Google Cloud console.
In the project picker, select your host project.
From the VPC Networking menu, click Create Firewall Rule.
For Name, enter my-shared-net-rule.
For Network, select shared-net.
For Direction of traffic, select Ingress.
For Action on match, select Allow.
For Targets, select All instances in the network.
For Source filter, select IP ranges.
For Source IP ranges, enter 0.0.0.0/0.
For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.
Click Create.
gcloud
Create a firewall rule for your shared network:
gcloud compute firewall-rules create my-shared-net-rule \
--project HOST_PROJECT_ID \
--network shared-net \
--direction INGRESS \
--allow tcp:22
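Because this command doesn't specify source ranges, the rule defaults to allowing SSH from any address (0.0.0.0/0). If you want a narrower rule for the exercise, a hedged variant that restricts the source to an address range of your choice (the 203.0.113.0/24 range below is only an example) is:
gcloud compute firewall-rules create my-shared-net-rule \
--project HOST_PROJECT_ID \
--network shared-net \
--direction INGRESS \
--source-ranges 203.0.113.0/24 \
--allow tcp:22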
Connecting to a node using SSH
After creating the firewall rule that allows ingress traffic on TCP port 22, connect to a node using SSH.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the project picker, select your first service project.
Click tier-1-cluster.
On the Cluster details page, click the Nodes tab.
Under Node Pools, click the name of your node pool.
Under Instance groups, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.
In the list of instances, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.
For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.
gcloud
List the nodes in your first service project:
gcloud compute instances list --project SERVICE_PROJECT_1_ID
The output includes the names of the nodes in your cluster:
NAME ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8 ...
gke-tier-1-cluster-default-pool-faf87d48-q17k ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk ...
Connect to one of your nodes using SSH:
gcloud compute ssh NODE_NAME \
--project SERVICE_PROJECT_1_ID \
--zone COMPUTE_ZONE
Replace the following:
- NODE_NAME: the name of one of your nodes.
- COMPUTE_ZONE: the name of a Compute Engine zone within the region.
Updating the firewall rule to ping between nodes
In your SSH command-line window, start the CoreOS Toolbox:
/usr/bin/toolbox
In the toolbox shell, ping one of your other nodes in the same cluster. For example:
ping 10.0.4.4
The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.
Now, try to ping one of the nodes in the cluster in your other service project. For example:
ping 172.16.4.3
This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.
At an ordinary command prompt, not your toolbox shell, update your firewall rule to allow ICMP:
gcloud compute firewall-rules update my-shared-net-rule \
--project HOST_PROJECT_ID \
--allow tcp:22,icmp
In your toolbox shell, ping the node again. For example:
ping 172.16.4.3
This time the ping command succeeds.
Creating additional firewall rules
You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters.
For example, the following rule allows traffic to enter from any node, Pod, or Service in tier-1-cluster on any TCP or UDP port:
gcloud compute firewall-rules create my-shared-net-rule-2 \
--project HOST_PROJECT_ID \
--network shared-net \
--allow tcp,udp \
--direction INGRESS \
--source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
The following rule allows traffic to enter from any node, Pod, or Service in tier-2-cluster on any TCP or UDP port:
gcloud compute firewall-rules create my-shared-net-rule-3 \
--project HOST_PROJECT_ID \
--network shared-net \
--allow tcp,udp \
--direction INGRESS \
--source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20
Kubernetes also tries to create and manage firewall resources when necessary, for example, when you create a LoadBalancer Service. If Kubernetes can't change the firewall rules because of a permission issue, a Kubernetes Event is raised to guide you on how to make the changes.
If you want to grant Kubernetes permission to change the firewall rules, see Managing firewall resources.
For Ingress load balancers, if Kubernetes can't change the firewall rules due to insufficient permissions, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and later, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can always remove this annotation to unmute.
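For example, assuming an Ingress named my-ingress in the current namespace (a hypothetical name used only for illustration), you could add or remove the annotation with kubectl:
kubectl annotate ingress my-ingress networking.gke.io/suppress-firewall-xpn-error="true"
kubectl annotate ingress my-ingress networking.gke.io/suppress-firewall-xpn-error-
The trailing dash in the second command removes the annotation, which unmutes the event.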
Creating a cluster based on VPC Network Peering in a Shared VPC
You can use Shared VPC with clusters based on VPC Network Peering.
This requires that you grant the following permissions on the host project, either to the user account or to the service account used to create the cluster:
- compute.networks.get
- compute.networks.updatePeering
You must also ensure that the control plane IP address range does not overlap with other reserved ranges in the shared network.
In this section, you create a VPC-native cluster named cluster-vpc in a predefined Shared VPC network.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
In the Autopilot or Standard section, click Configure.
For Name, enter cluster-vpc.
From the navigation pane, click Networking.
Select Private cluster.
(Optional for Autopilot): Set Control plane IP range to 172.16.0.16/28.
In the Network drop-down list, select the VPC network you created previously.
In the Node subnet drop-down list, select the shared subnet you created previously.
Configure your cluster as needed.
Click Create.
gcloud
Run the following command to create a cluster named cluster-vpc in a predefined Shared VPC:
gcloud container clusters create-auto cluster-vpc \
--project=PROJECT_ID \
--location=COMPUTE_REGION \
--network=projects/HOST_PROJECT_ID/global/networks/shared-net \
--subnetwork=SHARED_SUBNETWORK \
--cluster-secondary-range-name=tier-1-pods \
--services-secondary-range-name=tier-1-services \
--enable-private-nodes \
--master-ipv4-cidr=172.16.0.0/28
Reserving IP addresses
You can reserve internal and external IP addresses for your Shared VPC clusters. Ensure that the IP addresses are reserved in the service project.
For internal IP addresses, you must provide the subnetwork where the IP address belongs. To reserve an IP address across projects, use the full resource URL to identify the subnetwork.
You can use the following command in the Google Cloud CLI to reserve an internal IP address:
gcloud compute addresses create RESERVED_IP_NAME \
--region=COMPUTE_REGION \
--subnet=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNETWORK_NAME \
--addresses=IP_ADDRESS \
--project=SERVICE_PROJECT_ID
To call this command, you must have the compute.subnetworks.use permission on the subnetwork. You can either grant the caller the compute.networkUser role on the subnetwork, or grant the caller a custom role with the compute.subnetworks.use permission at the project level.
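For example, to grant a specific user the Compute Network User role on only the tier-1 subnet (a sketch of the first option; the USER_EMAIL placeholder is not used elsewhere in this guide), you could run:
gcloud compute networks subnets add-iam-policy-binding tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION \
--member=user:USER_EMAIL \
--role=roles/compute.networkUser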
Cleaning up
After completing the exercises in this guide, perform the following tasks to remove the resources and prevent unwanted charges from accruing on your account:
Deleting the clusters
Delete the two clusters you created.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the project picker, select your first service project.
Select the tier-1-cluster, and click Delete.
In the project picker, select your second service project.
Select the tier-2-cluster, and click Delete.
gcloud
gcloud container clusters delete tier-1-cluster \
--project SERVICE_PROJECT_1_ID \
--region COMPUTE_REGION
gcloud container clusters delete tier-2-cluster \
--project SERVICE_PROJECT_2_ID \
--region COMPUTE_REGION
Disabling Shared VPC
Disable Shared VPC in your host project.
Console
Go to the Shared VPC page in the Google Cloud console.
In the project picker, select your host project.
Click Disable Shared VPC.
Enter HOST_PROJECT_ID in the field, and click Disable.
gcloud
gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_1_ID \
--host-project HOST_PROJECT_ID
gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_2_ID \
--host-project HOST_PROJECT_ID
gcloud compute shared-vpc disable HOST_PROJECT_ID
Deleting your firewall rules
Remove the firewall rules you created.
Console
Go to the Firewall page in the Google Cloud console.
In the project picker, select your host project.
In the list of rules, select my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.
Click Delete.
gcloud
Delete your firewall rules:
gcloud compute firewall-rules delete \
my-shared-net-rule \
my-shared-net-rule-2 \
my-shared-net-rule-3 \
--project HOST_PROJECT_ID
Deleting the shared network
Delete the shared network you created.
Console
Go to the VPC networks page in the Google Cloud console.
In the project picker, select your host project.
In the list of networks, select shared-net.
Click Delete VPC Network.
gcloud
gcloud compute networks subnets delete tier-1 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
gcloud compute networks subnets delete tier-2 \
--project HOST_PROJECT_ID \
--region COMPUTE_REGION
gcloud compute networks delete shared-net --project HOST_PROJECT_ID
Removing the Host Service Agent User role
Remove the Host Service Agent User roles from your two service projects.
Console
Go to the IAM page in the Google Cloud console.
In the project picker, select your host project.
In the list of members, select the row that shows service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.
Select the row that shows service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.
Click Remove access.
gcloud
Remove the Host Service Agent User role from the GKE service account of your first service project:
gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
--member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
--role roles/container.hostServiceAgentUser
Remove the Host Service Agent User role from the GKE service account of your second service project:
gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
--member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
--role roles/container.hostServiceAgentUser
Troubleshooting
The following sections help you to resolve common issues with Shared VPC clusters.
Error: Failed to get metadata from network project
The following message is a common error when working with Shared VPC clusters:
Failed to get metadata from network project. GCE_PERMISSION_DENIED: Google
Compute Engine: Required 'compute.projects.get' permission for
'projects/HOST_PROJECT_ID
This error can occur for the following reasons:
The Kubernetes Engine API has not been enabled in the host project.
The host project's GKE service account does not exist. For example, it might have been deleted.
The host project's GKE service account does not have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. The binding might have been accidentally removed.
The service project's GKE service account does not have the Host Service Agent User role in the host project.
To resolve the issue, determine whether the host project's GKE service account exists.
If the service account doesn't exist, do the following:
If the Kubernetes Engine API is not enabled in the host project, enable it. This creates the host project's GKE service account and grants it the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project.
If the Kubernetes Engine API is enabled in the host project, then either the host project's GKE service account has been deleted or it doesn't have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. To restore the GKE service account or the role binding, you must disable and then re-enable the Kubernetes Engine API. For more information, see Error 400/403: Missing edit permissions on account.
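To check whether the host project's GKE service account still holds the Kubernetes Engine Service Agent role, you can optionally inspect the host project's IAM policy; HOST_PROJECT_NUM is the host project number placeholder defined earlier in this guide:
gcloud projects get-iam-policy HOST_PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.role=roles/container.serviceAgent" \
--format="value(bindings.members)"
The output should include service-HOST_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.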
Issue: Connectivity
If you're experiencing connectivity issues between Compute Engine VMs that are in the same Virtual Private Cloud (VPC) network or two VPC networks connected with VPC Network Peering, refer to Troubleshooting connectivity between virtual machine (VM) instances with internal IP addresses in the Virtual Private Cloud (VPC) documentation.
Issue: Packet loss
If you're experiencing issues with packet loss when sending traffic from a cluster to an external IP address using Cloud NAT, VPC-native clusters, or IP masquerade agent, see Troubleshoot Cloud NAT packet loss from a cluster.
What's next
- Read the Shared VPC overview.
- Learn about provisioning a Shared VPC.
- Read the GKE network overview.
- Read about automatically created firewall rules.
- Learn how to troubleshoot connectivity between virtual machine (VM) instances with internal IP addresses.