This tutorial describes how to access a private Google Kubernetes Engine (GKE) cluster using Cloud Build private pools. This access lets you use Cloud Build to deploy your application on a private GKE cluster. This tutorial is intended for network administrators and is applicable to all situations where Cloud Build private pools need to communicate with services running in a peered Virtual Private Cloud (VPC) network. For example, the private pool workers could communicate with the following services:
- Private GKE cluster
- Cloud SQL database
- Memorystore instance
- Compute Engine instance running in a different VPC network than the one peered with the Cloud Build private pool
Cloud Build private pools and GKE cluster control planes both run in Google-owned VPC networks. These VPC networks are peered to your own VPC network on Google Cloud. However, VPC Network Peering doesn't support transitive peering, which can be a restriction when using Cloud Build private pools. This tutorial presents a solution that uses Cloud VPN to allow workers in a Cloud Build private pool to access the control plane of a private GKE cluster.
This tutorial assumes that you're familiar with Google Kubernetes Engine, Cloud Build, the gcloud command, VPC Network Peering, and Cloud VPN.
Architecture overview
When you create a private GKE cluster with no client access to the public endpoint, clients can access the GKE cluster control plane only by using its private IP address. Clients like kubectl can communicate with the control plane only if they run on an instance that has access to the VPC network and is in an authorized network.
If you want to use Cloud Build to deploy your application on this private GKE cluster, then you need to use Cloud Build private pools to access the GKE cluster. Private pools are a set of worker instances that run in a Google Cloud project owned by Google, and are peered to your VPC network using a VPC Network Peering connection. In this setup, the worker instances are allowed to communicate with the private IP address of the GKE cluster control plane.
However, the GKE cluster control plane also runs in a Google-owned project and connects to your VPC network using Private Service Connect (PSC). VPC Network Peering doesn't support transitive peering, so packets can't be routed directly between the Cloud Build private pool and the GKE cluster control plane.
To enable Cloud Build worker instances to access the GKE cluster control plane, you peer the private pool with a VPC network that you own, connect the GKE cluster control plane to a second VPC network that you own, and then connect these two VPC networks using Cloud VPN. This combination of peering and VPN lets each side of the VPN tunnel advertise the private pool network and the GKE cluster control plane network, thus completing the route.
The following architectural diagram shows the resources that are used in this tutorial:
We recommend creating all resources used in this tutorial in the same Google Cloud region for low latency. The VPN tunnel can traverse two different regions if this inter-region communication is needed for your own implementation. The two VPC networks that you own can also belong to different projects.
Objectives
- Create a private GKE cluster.
- Set up a Cloud Build private pool.
- Create an HA VPN connection between two VPC networks.
- Enable routing of packets across two VPC Network Peering connections and a VPN connection.
Costs
In this document, you use billable components of Google Cloud, including Cloud Build, Google Kubernetes Engine, and Cloud VPN. To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Cloud Build, Google Kubernetes Engine, and Service Networking APIs.
- In the Google Cloud console, activate Cloud Shell.
  At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Creating two VPC networks in your own project
In this section, you create two VPC networks and a subnet for the GKE cluster nodes.
In Cloud Shell, create the first VPC network (called "Private pool peering VPC network" in the preceding diagram). You don't need to create subnets in this network.
```
gcloud compute networks create PRIVATE_POOL_PEERING_VPC_NAME \
    --subnet-mode=CUSTOM
```
Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool network.
Create the second VPC network (called "GKE cluster VPC network" in the preceding diagram):
```
gcloud compute networks create GKE_CLUSTER_VPC_NAME \
    --subnet-mode=CUSTOM
```
Replace GKE_CLUSTER_VPC_NAME with the name of your VPC network to peer with the GKE cluster control plane.
Create a subnet for the GKE cluster nodes:
```
gcloud compute networks subnets create GKE_SUBNET_NAME \
    --network=GKE_CLUSTER_VPC_NAME \
    --range=GKE_SUBNET_RANGE \
    --region=REGION
```
Replace the following:
- GKE_SUBNET_NAME: the name of the subnetwork that is intended to host the GKE cluster nodes.
- GKE_CLUSTER_VPC_NAME: the name of your VPC network to connect with the GKE cluster control plane.
- GKE_SUBNET_RANGE: the IP address range of GKE_SUBNET_NAME. For this tutorial, you can use 10.244.252.0/22.
- REGION: the Google Cloud region hosting the GKE cluster. For this tutorial, you can use us-central1.
You've now set up two VPC networks in your own project, and they're ready to peer with other services.
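Optionally, you can confirm that both VPC networks and the node subnet were created before you continue. The following commands are a quick verification sketch that assumes the placeholder names used above:

```
# List the two VPC networks you just created.
gcloud compute networks list \
    --filter="name:(PRIVATE_POOL_PEERING_VPC_NAME GKE_CLUSTER_VPC_NAME)"

# Show the subnet that will host the GKE cluster nodes.
gcloud compute networks subnets describe GKE_SUBNET_NAME \
    --region=REGION
```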
Creating a private GKE cluster
In this section, you create the private GKE cluster.
In Cloud Shell, create a GKE cluster with no client access to the public endpoint of the control plane.
```
gcloud container clusters create PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --enable-master-authorized-networks \
    --network=GKE_CLUSTER_VPC_NAME \
    --subnetwork=GKE_SUBNET_NAME \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-ip-alias \
    --master-ipv4-cidr=CLUSTER_CONTROL_PLANE_CIDR
```
Replace the following:
- PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.
- REGION: the region for the GKE cluster. In this tutorial, use us-central1, the same region as the one you used for the VPC networks.
- GKE_CLUSTER_VPC_NAME: the name of your VPC network to connect to the GKE cluster control plane.
- GKE_SUBNET_NAME: the name of the subnetwork that hosts the GKE cluster nodes; you created this subnetwork in the previous section.
- CLUSTER_CONTROL_PLANE_CIDR: the IP address range of the GKE cluster control plane. It must have a /28 prefix. For this tutorial, use 172.16.0.32/28.
You have now created a private GKE cluster.
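Optionally, you can confirm that the control plane is reachable only through its private endpoint. The following command is a verification sketch; the privateClusterConfig section of the output shows the private endpoint address and the master-ipv4-cidr range you specified:

```
# Inspect the private cluster configuration of the new cluster.
gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format="yaml(privateClusterConfig)"
```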
Configure VPC Network Peering for GKE 1.28 and below
If you are using this tutorial to configure an existing cluster running GKE version 1.28 or earlier, your private VPC network uses VPC Network Peering to connect to the GKE cluster. Complete the following steps:
Retrieve the name of the GKE cluster's VPC Network Peering. This VPC Network Peering was automatically created when you created the GKE cluster.
```
export GKE_PEERING_NAME=$(gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format='value(privateClusterConfig.peeringName)')
```
Replace the following:
- PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.
- REGION: the region for the GKE cluster. In this tutorial, use us-central1, the same region as the one you used for the VPC networks.
Enable the export of custom routes in order to advertise the private pool network to the GKE cluster control plane:
```
gcloud compute networks peerings update $GKE_PEERING_NAME \
    --network=GKE_CLUSTER_VPC_NAME \
    --export-custom-routes \
    --no-export-subnet-routes-with-public-ip
```
Replace GKE_CLUSTER_VPC_NAME with the name of your VPC network to connect with the GKE cluster control plane.
For more information about custom routes, see Importing and exporting custom routes.
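Optionally, if you applied this peering-based configuration, you can confirm that custom route export is now enabled. The following command is a verification sketch; in the output, the peering created by GKE should show exportCustomRoutes set to true:

```
# List the peerings on the GKE cluster VPC network, including their
# route exchange settings.
gcloud compute networks describe GKE_CLUSTER_VPC_NAME \
    --format="yaml(peerings)"
```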
Creating a Cloud Build private pool
In this section, you create the Cloud Build private pool.
In Cloud Shell, allocate a named IP address range in the PRIVATE_POOL_PEERING_VPC_NAME VPC network for the Cloud Build private pool:
```
gcloud compute addresses create RESERVED_RANGE_NAME \
    --global \
    --purpose=VPC_PEERING \
    --addresses=PRIVATE_POOL_NETWORK \
    --prefix-length=PRIVATE_POOL_PREFIX \
    --network=PRIVATE_POOL_PEERING_VPC_NAME
```
Replace the following:
- RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.
- PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.
- PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this tutorial, you can use 20; this allows you to create up to sixteen pools.
- PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.

The IP address range is global because, when --purpose is VPC_PEERING, the named IP address range must be global.
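Optionally, you can confirm that the reserved range matches the values you passed. This is a quick verification sketch using the same placeholder names:

```
# The output should show the starting address, prefix length, and
# purpose (VPC_PEERING) of the reserved range.
gcloud compute addresses describe RESERVED_RANGE_NAME \
    --global \
    --format="value(address, prefixLength, purpose)"
```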
Create a private connection between the VPC network that contains the Cloud Build private pool and PRIVATE_POOL_PEERING_VPC_NAME:
```
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=RESERVED_RANGE_NAME \
    --network=PRIVATE_POOL_PEERING_VPC_NAME
```
Replace the following:
- RESERVED_RANGE_NAME: the name of the private IP address range that hosts the Cloud Build private pool.
- PRIVATE_POOL_PEERING_VPC_NAME: the name of your VPC network to be peered with the Cloud Build private pool network.
Enable the export of custom routes in order to advertise the GKE cluster control plane network to the private pool:
```
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=PRIVATE_POOL_PEERING_VPC_NAME \
    --export-custom-routes \
    --no-export-subnet-routes-with-public-ip
```
Replace PRIVATE_POOL_PEERING_VPC_NAME with the name of your VPC network to be peered with the Cloud Build private pool network.
Create a Cloud Build private pool that is peered with PRIVATE_POOL_PEERING_VPC_NAME:
```
gcloud builds worker-pools create PRIVATE_POOL_NAME \
    --region=REGION \
    --peered-network=projects/$GOOGLE_CLOUD_PROJECT/global/networks/PRIVATE_POOL_PEERING_VPC_NAME
```
Replace the following:
- PRIVATE_POOL_NAME: the name of the Cloud Build private pool.
- REGION: the region for the GKE cluster. In this tutorial, use us-central1, the same region as the one you used for the VPC networks.
You have now created a Cloud Build private pool and peered it with the VPC network in your own project.
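Optionally, you can confirm that the private pool is running and peered to the expected VPC network. This verification sketch uses the same placeholders as the previous command:

```
# The output should show the pool in the RUNNING state and reference
# PRIVATE_POOL_PEERING_VPC_NAME as the peered network.
gcloud builds worker-pools describe PRIVATE_POOL_NAME \
    --region=REGION
```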
Creating a Cloud VPN connection between your two VPC networks
In your own project, you now have a VPC network peered with the Cloud Build private pool and a second VPC network peered with the private GKE cluster.
In this section, you create a Cloud VPN connection between the two VPC networks in your project. This connection completes the route and allows the Cloud Build private pools to access the GKE cluster.
In Cloud Shell, create two HA VPN gateways that connect to each other. To create these gateways, follow the instructions in Creating two fully configured HA VPN gateways that connect to each other. The setup is complete after you have created the BGP sessions. While following these instructions, use the following values:
- PRIVATE_POOL_PEERING_VPC_NAME for NETWORK_1
- GKE_CLUSTER_VPC_NAME for NETWORK_2
- REGION for REGION_1 and REGION_2
Configure each of the four BGP sessions you created to advertise the routes to the private pool VPC network and the GKE cluster control plane VPC network:
```
gcloud compute routers update-bgp-peer ROUTER_NAME_1 \
    --peer-name=PEER_NAME_GW1_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

gcloud compute routers update-bgp-peer ROUTER_NAME_1 \
    --peer-name=PEER_NAME_GW1_IF1 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX

gcloud compute routers update-bgp-peer ROUTER_NAME_2 \
    --peer-name=PEER_NAME_GW2_IF0 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=CLUSTER_CONTROL_PLANE_CIDR

gcloud compute routers update-bgp-peer ROUTER_NAME_2 \
    --peer-name=PEER_NAME_GW2_IF1 \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=CLUSTER_CONTROL_PLANE_CIDR
```
In these commands, use the same names that you used when you created the two HA VPN gateways:
- ROUTER_NAME_1
- PEER_NAME_GW1_IF0
- PEER_NAME_GW1_IF1
- ROUTER_NAME_2
- PEER_NAME_GW2_IF0
- PEER_NAME_GW2_IF1
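Optionally, you can confirm that the VPN tunnels are established and that each Cloud Router is advertising and learning the expected routes. The following commands are a verification sketch; TUNNEL_NAME_GW1_IF0 stands for one of the tunnel names you chose when following the HA VPN instructions:

```
# Check the status of one tunnel; look for a message such as
# "Tunnel is up and running".
gcloud compute vpn-tunnels describe TUNNEL_NAME_GW1_IF0 \
    --region=REGION \
    --format="value(detailedStatus)"

# Check the routes seen by each Cloud Router. One side should learn the
# private pool range, the other side CLUSTER_CONTROL_PLANE_CIDR.
gcloud compute routers get-status ROUTER_NAME_1 \
    --region=REGION

gcloud compute routers get-status ROUTER_NAME_2 \
    --region=REGION
```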
Enabling Cloud Build access to the GKE cluster control plane
Now that you have a VPN connection between the two VPC networks in your project, enable Cloud Build access to the GKE cluster control plane.
In Cloud Shell, add the private pool network range to the control plane authorized networks in GKE:
```
gcloud container clusters update PRIVATE_CLUSTER_NAME \
    --enable-master-authorized-networks \
    --region=REGION \
    --master-authorized-networks=PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX
```
Replace the following:
- PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.
- REGION: the region for the GKE cluster. In this tutorial, use us-central1, the same region as the one you used for the VPC networks.
- PRIVATE_POOL_NETWORK: the first IP address of RESERVED_RANGE_NAME. For this tutorial, you can use 192.168.0.0.
- PRIVATE_POOL_PREFIX: the prefix of RESERVED_RANGE_NAME. Each private pool created will use /24 from this range. For this tutorial, you can use 20; this allows you to create up to sixteen pools.
Allow the service account you are using for the build to access the GKE cluster control plane:
```
export PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format 'value(projectNumber)')

gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
    --member=serviceAccount:SERVICE_ACCOUNT \
    --role=roles/container.developer
```

Replace SERVICE_ACCOUNT with the email address of the service account that runs your builds, such as the Cloud Build service account $PROJECT_NUMBER@cloudbuild.gserviceaccount.com.
The Cloud Build private pool can now access the GKE cluster control plane.
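Optionally, you can confirm that the private pool range is listed as an authorized network on the cluster. This is a verification sketch; the cidrBlocks list in the output should include PRIVATE_POOL_NETWORK/PRIVATE_POOL_PREFIX:

```
# Show the control plane authorized networks of the cluster.
gcloud container clusters describe PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --format="yaml(masterAuthorizedNetworksConfig)"
```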
Verifying the solution
In this section, you verify that the solution is working by running the command kubectl get nodes in a build step that runs in the private pool.
In Cloud Shell, create a temporary folder with a Cloud Build configuration file that runs the command kubectl get nodes:

```
mkdir private-pool-test && cd private-pool-test

cat > cloudbuild.yaml <<EOF
steps:
- name: "gcr.io/cloud-builders/kubectl"
  args: ['get', 'nodes']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=REGION'
  - 'CLOUDSDK_CONTAINER_CLUSTER=PRIVATE_CLUSTER_NAME'
options:
  workerPool: 'projects/$GOOGLE_CLOUD_PROJECT/locations/REGION/workerPools/PRIVATE_POOL_NAME'
EOF
```
Replace the following:
- REGION: the region for the GKE cluster. In this tutorial, use us-central1, the same region as the one you used for the VPC networks.
- PRIVATE_CLUSTER_NAME: the name of the private GKE cluster.
- PRIVATE_POOL_NAME: the name of the Cloud Build private pool.
Start the build job:
```
gcloud builds submit --config=cloudbuild.yaml
```
Verify that the output is the list of nodes in the GKE cluster. The build log shown in the console includes a table similar to this:
```
NAME                                     STATUS   ROLES    AGE   VERSION
gke-private-default-pool-3ec34262-7lq9   Ready    <none>   9d    v1.19.9-gke.1900
gke-private-default-pool-4c517758-zfqt   Ready    <none>   9d    v1.19.9-gke.1900
gke-private-default-pool-d1a885ae-4s9c   Ready    <none>   9d    v1.19.9-gke.1900
```
You have now verified that the workers from the private pool can access the GKE cluster. This access lets you use Cloud Build to deploy your application on this private GKE cluster.
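If you want to review the build output again later, you can fetch the log of the most recent build. This is a sketch that assumes the build ran in the current project and region; depending on your Cloud Build configuration, you might need to adjust or omit the --region flag:

```
# Get the ID of the most recent build and print its log.
BUILD_ID=$(gcloud builds list --region=REGION --limit=1 --format='value(id)')
gcloud builds log $BUILD_ID --region=REGION
```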
Troubleshooting
If you encounter problems with this tutorial, see the troubleshooting documentation for Cloud VPN, GKE, and Cloud Build.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources
In Cloud Shell, delete the GKE cluster:
```
gcloud container clusters delete PRIVATE_CLUSTER_NAME \
    --region=REGION \
    --async
```
When you run this command, the VPC Network Peering is automatically deleted.
Delete the Cloud Build private pool:
```
gcloud builds worker-pools delete PRIVATE_POOL_NAME \
    --region=REGION
```
Delete the private connection between the service producer VPC network and PRIVATE_POOL_PEERING_VPC_NAME:
```
gcloud services vpc-peerings delete \
    --network=PRIVATE_POOL_PEERING_VPC_NAME \
    --async
```
Delete the named IP address range used for the private pool:
```
gcloud compute addresses delete RESERVED_RANGE_NAME \
    --global
```
Delete the four VPN tunnels. Use the same names that you specified at Create VPN tunnels.
```
gcloud compute vpn-tunnels delete \
    TUNNEL_NAME_GW1_IF0 \
    TUNNEL_NAME_GW1_IF1 \
    TUNNEL_NAME_GW2_IF0 \
    TUNNEL_NAME_GW2_IF1 \
    --region=REGION
```
Delete the two Cloud Routers. Use the same names that you specified at Create Cloud Routers.
```
gcloud compute routers delete \
    ROUTER_NAME_1 \
    ROUTER_NAME_2 \
    --region=REGION
```
Delete the two VPN Gateways. Use the same names that you specified at Create the HA VPN gateways.
```
gcloud compute vpn-gateways delete \
    GW_NAME_1 \
    GW_NAME_2 \
    --region=REGION
```
Delete GKE_SUBNET_NAME, which is the subnetwork that hosts the GKE cluster nodes:
```
gcloud compute networks subnets delete GKE_SUBNET_NAME \
    --region=REGION
```
Delete the two VPC networks PRIVATE_POOL_PEERING_VPC_NAME and GKE_CLUSTER_VPC_NAME:
```
gcloud compute networks delete \
    PRIVATE_POOL_PEERING_VPC_NAME \
    GKE_CLUSTER_VPC_NAME
```
What's next
- Learn how to run builds in a private pool.
- Run a proxy within the private GKE cluster that has access to the control plane.
- Learn how to deploy to GKE from Cloud Build.
- Try out other Google Cloud features for yourself. Have a look at our tutorials.