Setting up a Private Cluster

This page explains how to create a private cluster in Kubernetes Engine.

Overview

In Kubernetes Engine, a private cluster is a cluster that makes your master inaccessible from the public internet. In a private cluster, nodes do not have public IP addresses, so your workloads run in an environment that is isolated from the internet. Nodes have addresses only in the private RFC 1918 address space. Nodes and masters communicate with each other privately using VPC peering.

In the Kubernetes Engine API, address ranges are expressed as Classless Inter-Domain Routing (CIDR) blocks. For example, 172.16.0.16/28 denotes the block of 16 addresses from 172.16.0.16 through 172.16.0.31.

The exercises on this page use specific names, regions, and address ranges to illustrate general procedures. If you like, you can change the names, regions, and address ranges to suit your needs.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • Set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • Update all gcloud commands to the latest version:
    gcloud components update

Creating a private cluster

When you create a private cluster, you must specify a /28 CIDR range for the VMs that run the Kubernetes master components. You also need to enable Alias IPs.

In this exercise, you create a cluster named pr-clust-1, and you specify a CIDR range of 172.16.0.16/28 for the masters. You enable Alias IPs, and you let Kubernetes Engine automatically create a subnetwork for you.

Console

  1. Visit the Kubernetes Engine menu in the Google Cloud Platform Console.

  2. Click Create cluster.

  3. For Cluster name, enter pr-clust-1.
  4. From the Private cluster drop-down menu, select Enabled.
  5. Verify that Use alias IP ranges is set to Enabled.
  6. Set Master IP range to 172.16.0.16/28.
  7. Click Create.

When the cluster creation is complete, verify that your cluster nodes do not have external IP addresses:

  1. Visit the Kubernetes Engine menu in the Google Cloud Platform Console.

  2. In the list of clusters, click pr-clust-1.

  3. Under Node Pools, click the name of your instance group. For example, gke-pr-clust-1-default-pool-5c5add1f-grp.

  4. In the list of instances, verify that your instances do not have external IP addresses.

gcloud

You can create a private cluster by using the --private-cluster, --master-ipv4-cidr, and --enable-ip-alias flags.

Create the cluster:

gcloud beta container clusters create pr-clust-1 \
    --private-cluster \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""
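When the creation completes, you can spot-check the result from the command line. This is a convenience sketch, not part of the original procedure; the field names privateCluster and masterIpv4CidrBlock come from the API reference below, and gke-pr-clust-1 is the prefix that Kubernetes Engine gives to node instance names:

gcloud beta container clusters describe pr-clust-1 \
    --format 'value(privateCluster, masterIpv4CidrBlock)'

gcloud compute instances list --filter 'name ~ gke-pr-clust-1'

In the output of the second command, the EXTERNAL_IP column should be empty for every node instance.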

API

To create a private cluster, set privateCluster to true, and specify a value for masterIpv4CidrBlock. Under ipAllocationPolicy, set useIpAliases and createSubnetwork to true.

{
  "name": "pr-clust-1",
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true
  },
  ...
  "privateCluster": true,
  "masterIpv4CidrBlock": "172.16.0.16/28",
  ...
}
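If you are calling the REST API directly, the create request wraps the cluster object shown above in a "cluster" field. The following curl command is a sketch that assumes the v1beta1 projects.zones.clusters.create endpoint and a hypothetical file named cluster.json that holds the request body:

# cluster.json contains {"cluster": { ...the object shown above... }}
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @cluster.json \
    https://container.googleapis.com/v1beta1/projects/[PROJECT_ID]/zones/[COMPUTE_ZONE]/clusters

where [PROJECT_ID] and [COMPUTE_ZONE] are your project ID and compute zone.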

Viewing your subnet and secondary address ranges

Console

  1. Visit the VPC networks page in the GCP Console.

  2. Click the name of the subnetwork that was automatically created for your cluster. For example: gke-pr-clust-1-subnet-163e3c97.

  3. Under IP address range, you can see the primary address range of your subnet. This is the range that is used for nodes.

  4. Under Secondary IP ranges, you can see the address range for pods and the address range for services.

  5. Notice that Private Google access is set to Enabled. This enables your cluster hosts, which have only private IP addresses, to communicate with Google APIs and services. See Private Google Access.

gcloud

List the subnets in the default network:

gcloud compute networks subnets list --network default

In the output, find the name of the subnetwork that was automatically created for your cluster. For example, gke-pr-clust-1-subnet-163e3c97.

Get information about the automatically created subnet:

gcloud compute networks subnets describe [SUBNET_NAME] --region us-central1

where [SUBNET_NAME] is the name of the automatically created subnetwork.

The output shows the primary address range and the secondary ranges:

...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-pr-clust-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
  rangeName: gke-pr-clust-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
  rangeName: gke-pr-clust-1-services-163e3c97
...

In the preceding output, you can see that one of the secondary ranges is for pods: gke-pr-clust-1-pods-163e3c97. The other secondary range is for services: gke-pr-clust-1-services-163e3c97.

Notice that privateIpGoogleAccess is set to true. This enables your cluster hosts, which have only private IP addresses, to communicate with Google APIs and services. See Private Google Access.
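If you want to see only those fields, you can ask gcloud to project them. This is a convenience sketch that uses the field names from the output above:

gcloud compute networks subnets describe [SUBNET_NAME] \
    --region us-central1 \
    --format 'yaml(secondaryIpRanges, privateIpGoogleAccess)'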

Enabling master authorized networks

At this point, the only IP addresses that have access to the master are the addresses in these ranges:

  • The primary range of your subnetwork. This is the range used for nodes.

  • The secondary range of your subnetwork that is used for pods.

To provide additional access to the master, you must authorize selected address ranges.

Console

  1. Visit the Kubernetes Engine menu in the Google Cloud Platform Console.

  2. Click pr-clust-1, and then click Edit.

  3. Verify that Master authorized networks is set to Enabled.

  4. Click Add authorized network.

  5. For Name, enter my-external-range.

  6. For Network, enter the CIDR range of the external IP addresses that you want to have access to your Kubernetes master.

  7. Click Save.

gcloud

Authorize your external address range:

gcloud beta container clusters update pr-clust-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks [MY_EXTERNAL_RANGE]

where [MY_EXTERNAL_RANGE] is the CIDR range of the external addresses that you want to have access to your Kubernetes master.

Now that you have access to the master from a range of external addresses, you can use kubectl to get information about your cluster. For example, you can use kubectl to verify that your nodes do not have external IP addresses.

Verify that your cluster nodes do not have external IP addresses:

kubectl get nodes --output yaml | grep -A4 addresses

The output shows that the nodes have internal IP addresses but do not have external addresses:

...
addresses:
- address: 10.0.0.4
  type: InternalIP
- address: ""
  type: ExternalIP
...

Here is another command you can use to verify that your nodes do not have external IP addresses:

kubectl get nodes --output wide

The output shows an empty column for EXTERNAL-IP:

STATUS ... VERSION        EXTERNAL-IP   OS-IMAGE ...
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
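As an alternative sketch, kubectl's JSONPath output can print each node name next to its ExternalIP address, which should be empty:

kubectl get nodes --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'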

Creating a private cluster that uses a custom subnetwork

In the preceding exercise, Kubernetes Engine automatically created a subnetwork for you. In this exercise, you create your own custom subnetwork, and then create a private cluster.

Your subnetwork has a primary address range and two secondary address ranges.

Console

Create a subnetwork and two secondary ranges:

  1. Visit the VPC networks page in the GCP Console.

  2. In the list of VPC networks, click default, and then click Add subnet.

  3. For Name, enter my-subnet.
  4. From the Region drop-down menu, select us-central1.
  5. For IP address range, enter 10.0.4.0/22.
  6. Click Create secondary IP range. For Subnet range name, enter my-svc-range, and for Secondary IP range, enter 10.0.32.0/20.
  7. Click Add IP range. For Subnet range name, enter my-pod-range, and for Secondary IP range, enter 10.4.0.0/14.
  8. From the Private Google access drop-down menu, select Enabled.
  9. Click Add.

Create a private cluster that uses your subnetwork:

  1. Visit the Kubernetes Engine menu in the GCP Console.

  2. Click Create cluster.

  3. For Cluster name, enter pr-clust-2.
  4. From the Private cluster drop-down menu, select Enabled.
  5. Set Master IP range to 172.16.0.32/28.
  6. From the Use alias IP ranges drop-down menu, select Enabled.
  7. From the Automatically create subnet drop-down menu, select Disabled.
  8. From the Node subnet drop-down menu, select my-subnet.
  9. From the Services subnet drop-down menu, select my-svc-range.
  10. From the Container subnet drop-down menu, select my-pod-range.
  11. Click Create.

After the cluster creation is complete, verify that your nodes do not have public IP addresses. Also verify that the internal addresses of your nodes are in the primary range of your subnetwork.

gcloud

Create a subnetwork and secondary ranges:

gcloud compute networks subnets create my-subnet \
    --network default \
    --range 10.0.4.0/22 \
    --enable-private-ip-google-access \
    --region us-central1 \
    --secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14

Create a private cluster that uses your subnetwork:

gcloud beta container clusters create pr-clust-2 \
    --private-cluster \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --subnetwork my-subnet \
    --services-secondary-range-name my-svc-range \
    --cluster-secondary-range-name my-pod-range
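As a quick check that the cluster picked up your subnetwork and secondary ranges, you can project the relevant fields. This is a sketch; the ipAllocationPolicy field names match the API reference later in this section:

gcloud beta container clusters describe pr-clust-2 \
    --format 'value(subnetwork, ipAllocationPolicy.servicesSecondaryRangeName, ipAllocationPolicy.clusterSecondaryRangeName)'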

After the cluster creation is complete, verify that your nodes do not have public IP addresses. Also verify that the internal addresses of your nodes are in the primary range of your subnetwork.

kubectl get nodes --output yaml | grep -A4 addresses

The output shows that the nodes have internal IP addresses in your subnet range, and that the nodes do not have external IP addresses:

addresses:
- address: 10.0.4.3
  type: InternalIP
- address: ""
  type: ExternalIP

API

To create a private cluster that uses your custom subnetwork, include your custom address ranges under ipAllocationPolicy.

{
  "name": "pr-clust-2",
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "nodeIpv4CidrBlock": "10.0.4.0/22",
    "servicesSecondaryRangeName": "my-svc-range",
    "servicesIpv4CidrBlock": "10.0.32.0/20",
    "clusterSecondaryRangeName": "my-pod-range",
    "clusterIpv4CidrBlock": "10.4.0.0/14"
  },
  ...
  "privateCluster": true,
  "masterIpv4CidrBlock": "172.16.0.32/28",
  ...
}

At this point, the only IP addresses that have access to the master are the addresses in these ranges:

  • The primary range of your subnetwork. This is the range used for nodes. In this example, the range for nodes is 10.0.4.0/22.

  • The secondary range of your subnetwork that is used for pods. In this example, the range for pods is 10.4.0.0/14.

To provide additional access to the master, authorize specific external address ranges as shown in the preceding section.
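For example, to authorize an external range for pr-clust-2 by using gcloud, run the same update command that you used for the first cluster:

gcloud beta container clusters update pr-clust-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks [MY_EXTERNAL_RANGE]

where [MY_EXTERNAL_RANGE] is the CIDR range of the external addresses that you want to have access to your Kubernetes master.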

Pulling a container image from a registry

In a private cluster, the Docker runtime can pull container images from Google's Container Registry. It cannot pull images from any other registry on the internet. This is because the nodes in a private cluster do not have external IP addresses, so they cannot communicate with sites outside of Google.

The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.

All of the subnets created for the exercises on this page have Private Google Access enabled. In the first exercise, Kubernetes Engine automatically created a subnet for you, and it enabled Private Google Access. In the second exercise, you created a custom subnet and specified that Private Google Access should be enabled:

gcloud compute networks subnets create my-subnet \
    ...
    --enable-private-ip-google-access \
    ...
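To confirm that Private Google Access is enabled on your custom subnet, you can project the privateIpGoogleAccess field that appeared in the earlier describe output; its value should be true:

gcloud compute networks subnets describe my-subnet \
    --region us-central1 \
    --format 'value(privateIpGoogleAccess)'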

Create a deployment that pulls an image from Container Registry:

kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0
kubectl get pods

The output shows the running pod:

NAME                                READY     STATUS    RESTARTS   AGE
hello-deployment-5574df5d59-8tks9   1/1       Running   0          5s
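To see the registry restriction in action, you can try an image hosted outside of Google. The deployment name external-test and the image nginx:1.13 are only illustrations; any image from an external registry behaves the same way:

# external-test is a throwaway name used only for this illustration
kubectl run external-test --image nginx:1.13
kubectl get pods

Because the nodes have no route to registries outside of Google, expect the external-test pod to remain in a state such as ErrImagePull or ImagePullBackOff rather than Running. Delete the test deployment when you are finished:

kubectl delete deployment external-test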

Enabling Kubernetes metrics server

Since Kubernetes 1.8, a Metrics Server Deployment runs in the kube-system namespace by default in all clusters. Metrics Server is a cluster-wide aggregator of resource usage data. In private clusters, a Metrics Server Pod running on a cluster cannot communicate with the cluster master by default. You need to create firewall rules that allow communication from the cluster master's IPv4 CIDR (--master-ipv4-cidr) to the cluster's nodes. Doing so enables Metrics Server Pods to report stats to the cluster master.

Get firewall rules

Console

Show the firewall rules for your cluster using the prefix gke-pr-clust-1:

  1. Go to the Firewall rules page in the Google Cloud Platform Console.
  2. In the Filter resources box, enter gke-pr-clust-1.

gcloud

Show the firewall rules for your cluster using the prefix gke-pr-clust-1:

gcloud compute firewall-rules list \
    --filter='name~^gke-pr-clust-1' \
    --format='table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

The output should be similar to this example:

NAME                             NETWORK  DIRECTION  SRC_RANGES                ALLOW                         TARGET_TAGS
gke-pr-clust-1-43938bce-all      default  INGRESS    10.40.0.0/14              icmp,esp,ah,sctp,tcp,udp
gke-pr-clust-1-43938bce-kubelet  default  INGRESS    172.16.0.16/28            tcp:10250                     gke-pr-clust-1-43938bce-node
gke-pr-clust-1-43938bce-vms      default  INGRESS    10.128.0.0/9,10.0.0.0/22  icmp,tcp:1-65535,udp:1-65535  gke-pr-clust-1-43938bce-node

The Metrics Server Pod listens on port 443. You need to add a firewall rule that allows the cluster master to reach nodes on port 443.

Add firewall rule

Console

To create a firewall rule allowing HTTPS traffic from the cluster master, perform the following steps:

  1. Visit the Firewall rules page in the GCP Console.

  2. In the project picker, select your project.

  3. Click Create Firewall Rule.
  4. For Name, enter user-pr-clust-1-https.
  5. For Network, select default.
  6. For Direction of traffic, select Ingress.
  7. For Action on match, select Allow.
  8. For Targets, select Specified target tags.
  9. For Target tags, enter the target tag of your cluster nodes, as shown in the firewall rules that you listed earlier. For example, gke-pr-clust-1-43938bce-node.
  10. For Source filter, select IP ranges.
  11. For Source IP ranges, enter the cluster's masterIpv4CidrBlock, 172.16.0.16/28.
  12. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:443.
  13. Click Create.

gcloud

To create a firewall rule allowing HTTPS traffic from the cluster master, run the following command:

gcloud compute firewall-rules create user-pr-clust-1-https \
    --action=ALLOW \
    --direction=INGRESS \
    --source-ranges=172.16.0.16/28 \
    --rules=tcp:443 \
    --target-tags=gke-pr-clust-1-43938bce-node
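To verify that the rule was created with the intended source range, ports, and target tag, describe it:

gcloud compute firewall-rules describe user-pr-clust-1-https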

Cleaning up

After completing the exercises on this page, follow these steps to remove the resources and avoid incurring unwanted charges to your account:

Delete the clusters

Console

  1. Visit the Kubernetes Engine menu in GCP Console.

  2. Select pr-clust-1 and pr-clust-2.

  3. Click Delete.

gcloud

gcloud container clusters delete pr-clust-1
gcloud container clusters delete pr-clust-2
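To confirm that both clusters are gone, list the clusters that remain in your project:

gcloud container clusters list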

Delete the subnetworks

Console

  1. Go to the VPC networks page in the Google Cloud Platform Console.
  2. In the list of networks, click default.
  3. Click my-subnet, and then click Delete subnet.

gcloud

gcloud compute networks subnets delete my-subnet --region us-central1
