Setting up a Private Cluster

This page explains how to create a private cluster in Google Kubernetes Engine with private nodes and a cluster master with either a public or private endpoint. For general information about GKE networking, refer to Network Overview.

Overview

Private clusters run their nodes with internal IP addresses only, which ensures that their workloads are isolated from the public Internet. With private clusters, your cluster master can be reachable only from internal IP addresses, or publicly-reachable from a set of authorized networks.

Features

Private clusters have the following features:

VPC network peering
Private clusters use VPC network peering to connect your cluster's VPC network with the Google-owned VPC network, thereby routing traffic between private nodes and masters using internal (RFC 1918) IP addresses.
Private Google Access
Private nodes do not have outbound Internet access because they aren't given external IP addresses. Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud Platform APIs and services over Google's private network. For example, Private Google Access makes it possible for private nodes to pull container images from Google Container Registry, and to send logs to Stackdriver.
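For subnets you create yourself (as in the custom subnet example later on this page), you can enable Private Google Access with a gcloud command. A minimal sketch, where [SUBNET_NAME] and [REGION] are placeholders:

gcloud compute networks subnets update [SUBNET_NAME] \
    --region [REGION] \
    --enable-private-ip-google-access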

Requirements

Private clusters have the following requirements:

VPC-native
Your private cluster must be a VPC-native cluster, a cluster with alias IPs enabled. VPC-native clusters are not compatible with legacy networks.
Kubernetes version 1.8.14-gke.0 or later
Your nodes must run Kubernetes version 1.8.14-gke.0 or later.

Restrictions

Private clusters have the following restrictions:

  • You cannot convert an existing, non-private cluster to a private cluster.
  • You cannot use a cluster master, node, Pod, or Service IP range that overlaps with 172.17.0.0/16.
  • Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default Internet gateway causes a private cluster to stop functioning.
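If a private cluster stops working, it can help to confirm that these resources still exist. A minimal sketch of such a check; the gke- name prefix in the filter reflects GKE's default naming and is an assumption:

# The master-to-node VPC peering created by GKE
gcloud compute networks peerings list --network [NETWORK]

# Firewall rules associated with the cluster
gcloud compute firewall-rules list --filter 'name~^gke-[CLUSTER_NAME]'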

Limitations

Private clusters have the following limitations:

  • You can add up to 25 peerings in a VPC network.
  • The size of the private RFC 1918 CIDR block for the cluster master must be /28.
  • While GKE can detect overlap in the cluster master IPv4 CIDR block, it cannot detect overlap within a shared VPC network.

Limitations for regional clusters

  • Currently, you cannot use a proxy to reach the cluster master through its private IP address.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

About private cluster master endpoints

All clusters have a canonical endpoint. The endpoint is the IP address of the Kubernetes API server that kubectl and other services use to communicate with your cluster master. The endpoint is displayed in GCP Console under the Endpoints field of the cluster's Details tab, and in the output of gcloud container clusters describe under the endpoint field.

When you run gcloud container clusters describe on a private cluster, two unique endpoint values are displayed under the privateClusterConfig field:

  • privateEndpoint, an internal IP address. Nodes use this endpoint to communicate with the cluster master. Clients on your VPC network can optionally use this endpoint as well.
  • publicEndpoint, an external IP address. You and external services can use this to communicate with the cluster master.

Depending on how you create a private cluster, its endpoint can be one of these two values.
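To see these values for an existing cluster, you can use a gcloud format projection. A minimal sketch, where [CLUSTER_NAME] is a placeholder:

gcloud container clusters describe [CLUSTER_NAME] \
    --format 'yaml(endpoint, privateClusterConfig.privateEndpoint, privateClusterConfig.publicEndpoint)'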

When is publicEndpoint used?

By default, private clusters are created with private nodes and a cluster master with a publicly-reachable endpoint:

  • gcloud container clusters create has an --enable-private-endpoint flag. If you create a private cluster by passing the --enable-ip-alias and --enable-private-nodes flags and omit --enable-private-endpoint, the flag keeps its default value, disabled.
  • From the cluster creation menu in GCP Console, when you select Private cluster (which creates a cluster with private nodes), the Access master using its external IP address checkbox is selected by default.

In these cases, the endpoint becomes the publicEndpoint value, and you can add authorized networks to grant specific networks access to your cluster.

When is privateEndpoint used?

If you specify the --enable-private-endpoint flag, or clear the Access master using its external IP address checkbox, while creating a private cluster, the endpoint becomes the privateEndpoint value. To communicate with the cluster master, you would still need to add authorized networks; however, you would only be able to authorize internal CIDR ranges within your VPC network.

Creating a private cluster with a publicly-reachable cluster master endpoint

The following sections explain how to use the gcloud command-line tool or GCP Console to create a cluster that runs private nodes and runs a cluster master with a publicly-reachable endpoint. To access your cluster using kubectl or other Kubernetes API clients, you need to add CIDR blocks to the list of authorized networks.

Using an automatically-generated subnet

In this section, you create a private cluster named private-cluster-0 with the CIDR range 172.16.0.0/28 for the cluster master.

After you create this cluster, you should view its subnet to verify the subnet's ranges.

gcloud

Run the following command:

gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=private-cluster-0 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks [CIDR_BLOCK] \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --create-subnetwork name=private-cluster-0 causes GKE to automatically create a subnet named private-cluster-0. If you leave the flag's parameter blank, GKE chooses the subnet name for you. In either case, the generated subnet has Private Google Access enabled.
  • --enable-ip-alias makes the cluster VPC-native by enabling alias IPs in the cluster.
  • --enable-private-nodes indicates that the cluster's nodes are created without external IP addresses.
  • --master-ipv4-cidr reserves the CIDR range you specify for the cluster master. This setting is permanent for this cluster.
  • --enable-master-authorized-networks enables authorized networks in the cluster.
  • --master-authorized-networks specifies a comma-delimited list of CIDR blocks that are granted whitelisted access to the cluster master endpoint.
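Once the cluster is running and your machine's CIDR block is in the authorized networks list, you can fetch credentials and confirm that kubectl reaches the public endpoint. A minimal sketch:

# Configure kubectl with the cluster's endpoint and credentials
gcloud container clusters get-credentials private-cluster-0

# Should list the private nodes if your IP is in an authorized network
kubectl get nodes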

Console

Perform the following steps:

  1. Visit the GKE menu in Google Cloud Platform Console.

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.
  4. For Cluster name, enter private-cluster-0.
  5. Click Advanced options at the bottom of the menu.
  6. From VPC-native, select the Enable VPC-native (using alias IP) checkbox. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.
  7. From Network security, select the Private cluster checkbox.
  8. To create a master that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.
  9. Set Master IP range to 172.16.0.0/28.
  10. Click Add authorized network.
  11. Fill the Name box with the desired name for the authorized network.
  12. Fill the Network box with a CIDR range that you want to grant whitelisted access to your Kubernetes master.
  13. Click Done. Add additional authorized networks as desired.
  14. Clear the Enable Kubernetes Dashboard checkbox.
  15. Click Create.

API

Specify the privateClusterConfig field in the Cluster API resource:

{
  "name": "private-cluster-0",
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true,
  },
  ...
    "privateClusterConfig" {
      "enablePrivateNodes": boolean # Creates nodes with internal IP addresses only
      "enablePrivateEndpoint": boolean # false creates a cluster master with a publicly-reachable endpoint
      "masterIpv4CidrBlock": string # CIDR block for the cluster master
      "privateEndpoint": string # Output only
      "publicEndpoint": string # Output only
  }
}

Using a custom subnet

In this section, you create a VPC-native cluster named private-cluster-1 with the CIDR range 172.16.0.0/28 for the cluster master. You create a network, net-0, with the range 192.168.0.0/20. You create a subnet, subnet-0, in which to run the cluster. Your subnet has two secondary address ranges: c0-pods for the Pod IP addresses, and c0-services for the Service IP addresses.

gcloud

Create a network

First, create a network in which to run your cluster. The following command creates a network, net-0:

gcloud compute networks create net-0 \
    --subnet-mode custom

Create a subnet and secondary ranges

Next, create a subnet, subnet-0, in the net-0 network with secondary ranges c0-pods for Pod IPs and c0-services for Service IPs:

gcloud compute networks subnets create subnet-0 \
    --network net-0 \
    --region us-central1 \
    --range 192.168.0.0/20 \
    --secondary-range c0-pods=10.4.0.0/14,c0-services=10.0.32.0/20 \
    --enable-private-ip-google-access

Create a private cluster

Now, create a private cluster, private-cluster-1, using the network, subnet, and secondary ranges you created, and a CIDR block for the cluster master:

gcloud container clusters create private-cluster-1 \
    --zone us-central1-c \
    --enable-ip-alias \
    --network net-0 \
    --subnetwork subnet-0 \
    --cluster-secondary-range-name c0-pods \
    --services-secondary-range-name c0-services \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks [CIDR_BLOCK] \
    --no-enable-basic-auth \
    --no-issue-client-certificate
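As with the previous cluster, once your CIDR block is authorized you can fetch credentials for private-cluster-1 and verify access. A minimal sketch, using the zone from the command above:

gcloud container clusters get-credentials private-cluster-1 --zone us-central1-c
kubectl get nodes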

Console

Create a network, subnet, and secondary ranges

  1. Visit the VPC networks page in GCP Console.

  2. Click Create VPC network.

  3. For Name, enter net-0.
  4. Ensure that Subnet creation mode is set to Custom.
  5. From New subnet, in Name, enter subnet-0.
  6. From the Region drop-down menu, select the desired region.
  7. For IP address range, enter 192.168.0.0/20.
  8. Click Create secondary IP range. For Subnet range name, enter c0-services, and for Secondary IP range, enter 10.0.32.0/20.
  9. Click Add IP range. For Subnet range name, enter c0-pods, and for Secondary IP range, enter 10.4.0.0/14.
  10. From Private Google access, click On.
  11. Click Done.
  12. Click Create.

Create a private cluster

Create a private cluster that uses your subnet:

  1. Visit the GKE menu in GCP Console.

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.
  4. For Cluster name, enter private-cluster-1.
  5. Click Advanced options at the bottom of the menu.
  6. From VPC-native, select the Enable VPC-native (using alias IP) checkbox.
  7. From the Network drop-down menu, select net-0.
  8. From the Node subnet drop-down menu, select subnet-0.
  9. Clear the Automatically create secondary ranges checkbox.
  10. From the Pod secondary CIDR range drop-down menu, select c0-pods.
  11. From the Services secondary CIDR range drop-down menu, select c0-services.
  12. Under Network security, select the Private cluster checkbox.
  13. Set Master IP range to 172.16.0.0/28.
  14. Click Add authorized network.
  15. Fill the Name box with the desired name for the authorized network.
  16. Fill the Network box with a CIDR range that you want to grant whitelisted access to your Kubernetes master.
  17. Click Done. Add additional authorized networks as desired.
  18. Clear the Enable Kubernetes Dashboard checkbox.
  19. Click Create.

Creating a private cluster with a private cluster master endpoint

In this section, you create a private cluster with private nodes and a private cluster master endpoint. Your cluster master becomes accessible only from within your VPC network. While the cluster master still technically has a public IP address, firewall rules prevent access to it from outside your network. To reach the cluster master from sources other than the cluster's own subnet ranges, you need to add authorized networks, and you can only authorize internal IP ranges within your VPC network. You cannot access the master at all from outside the VPC network.

gcloud

Follow the gcloud procedure in Creating a private cluster with a publicly-reachable cluster master endpoint.

To create a cluster with a private cluster master endpoint, specify the --enable-private-endpoint flag at cluster creation.
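As a sketch, the automatically-generated-subnet example from earlier would look like the following with a private endpoint; private-cluster-2, its master CIDR range, and [INTERNAL_CIDR_BLOCK] are placeholder values, and the authorized range must be an internal range in your VPC network:

gcloud container clusters create private-cluster-2 \
    --create-subnetwork name=private-cluster-2 \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks [INTERNAL_CIDR_BLOCK] \
    --no-enable-basic-auth \
    --no-issue-client-certificate

From a machine inside the VPC network, you can then point kubectl at the private endpoint, assuming your version of gcloud supports the --internal-ip flag:

gcloud container clusters get-credentials private-cluster-2 --internal-ip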

Console

Follow the GCP Console procedure in Creating a private cluster with a publicly-reachable cluster master endpoint.

To create a cluster with a private cluster master endpoint, clear the Access master using its external IP address checkbox under Network security.

API

To create a cluster with a private cluster master endpoint, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.

Creating a private cluster in a shared VPC network

To learn how to create a private cluster in a shared VPC network, refer to the Shared VPC documentation.

Adding authorized networks to a private cluster

When you create a private cluster, the only IP addresses that have access to the cluster master are addresses in the following ranges:

  • The primary range of the subnet used for nodes.
  • The subnet's secondary range used for Pods.
  • The subnet's secondary range used for Services.

To provide additional access to the master, you can add authorized networks to your cluster.

Authorized networks have the following limitations:

  • You can add up to 20 authorized networks (whitelisted CIDR blocks) in a project.
  • The maximum size of any CIDR for an authorized network is /24.

gcloud

To add an authorized network to an existing cluster, run the following command:

gcloud beta container clusters update [CLUSTER_NAME] \
    --enable-master-authorized-networks \
    --master-authorized-networks [CIDR_BLOCK]

where:

  • [CLUSTER_NAME] is the name of the cluster.
  • [CIDR_BLOCK] is a CIDR block that you want to have access to your cluster master. You can add up to 20 comma-delimited CIDR blocks per project.
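For example, to authorize two external ranges for private-cluster-0 (a sketch; the CIDR blocks shown are documentation ranges that you would replace with your own):

gcloud beta container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29,198.51.100.5/32

Keep in mind that the flag specifies the complete set of authorized networks, so include any previously added ranges that you want to keep.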

Console

To add an authorized network to an existing cluster, perform the following steps:

  1. Visit the GKE menu in the GCP Console.

  2. Select the desired cluster, and then click Edit.

  3. Click Add authorized network.
  4. Fill the Name box with the desired name for the authorized network.
  5. Fill the Network box with a CIDR range that you want to grant whitelisted access to your Kubernetes master.
  6. Click Done. Add additional authorized networks as desired.
  7. Click Save.

Verify that nodes run without external IPs

After you create a private cluster, you should verify that the cluster's nodes are running without external IP addresses.

gcloud

To verify that your cluster's nodes do not have external IP addresses, run the following command:

kubectl get nodes --output yaml | grep -A4 addresses

The output shows that the nodes have internal IP addresses but do not have external addresses:

...
addresses:
- address: 10.0.0.4
  type: InternalIP
- address: ""
  type: ExternalIP
...

Here is another command you can use to verify that your nodes do not have external IP addresses:

kubectl get nodes --output wide

The output's EXTERNAL-IP column is empty:

STATUS ... VERSION        EXTERNAL-IP   OS-IMAGE ...
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
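You can also verify this at the Compute Engine level; the EXTERNAL_IP column should be empty for every node VM. A minimal sketch, assuming GKE's default gke-[CLUSTER_NAME] instance naming prefix:

gcloud compute instances list --filter 'name~^gke-private-cluster-0'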

Console

To verify that your cluster's nodes do not have external IP addresses, perform the following steps:

  1. Visit the GKE menu in the GCP Console.

  2. In the list of clusters, click the desired cluster.

  3. Under Node pools, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.
  4. In the list of instances, verify that your instances do not have external IP addresses.

Viewing the cluster's subnet and secondary address ranges

After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.

gcloud

List all subnets

To list the subnets in the default network, run the following command:

gcloud compute networks subnets list --network [NETWORK]

where [NETWORK] is the private cluster's network. If you created the cluster with an automatically-created subnet, use default.

In the command output, find the name of the cluster's subnet.

View cluster's subnet

Get information about the automatically created subnet:

gcloud compute networks subnets describe [SUBNET_NAME]

where [SUBNET_NAME] is the name of the subnet.

The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):

...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
  rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
  rangeName: gke-private-cluster-1-services-163e3c97
...

Console

  1. Visit the VPC networks page in the GCP Console.

  2. Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.

  3. Under IP address range, you can see the primary address range of your subnet. This is the range that is used for nodes.
  4. Under Secondary IP ranges, you can see the IP address range for Pods and the range for Services.

Viewing a private cluster's endpoint

You can view a private cluster's endpoint using the gcloud command-line tool or GCP Console.

gcloud

Run the following command:

gcloud container clusters describe [CLUSTER_NAME]

In the command output, the endpoint value is the cluster endpoint.
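To print just the endpoint, you can use a format projection. A minimal sketch:

gcloud container clusters describe [CLUSTER_NAME] --format 'value(endpoint)'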

Console

Perform the following steps:

  1. Visit the GKE menu in the GCP Console.

  2. From the list, click the desired cluster.

  3. From the Details tab, under Cluster, look for the Endpoint field.

Pulling container images from an image registry

In a private cluster, the container runtime can pull container images from Container Registry; it cannot pull images from any other container image registry on the Internet. This is because the nodes in a private cluster do not have external IP addresses, so by default they cannot communicate with services outside of the Google network.

The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.

The first of the following commands creates a Deployment that pulls a sample image from a Google-owned Container Registry repository; the second lists the Pods running in the cluster:

kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0
kubectl get pods

The output from kubectl get pods shows the running Pod:

NAME                                READY     STATUS    RESTARTS   AGE
hello-deployment-5574df5d59-8tks9   1/1       Running   0          5s
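When you are finished experimenting, you can delete the sample Deployment (in this Kubernetes version, kubectl run creates a Deployment named hello-deployment):

kubectl delete deployment hello-deployment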

Other private cluster configurations

In addition to the preceding configurations, you can run private clusters with the following configurations.

Granting private nodes outbound Internet access

Private nodes don't have outbound Internet access because they do not have external IP addresses. You can run a private cluster with outbound Internet access for its nodes by managing your own NAT gateway.

To learn more, refer to Using a NAT Gateway with Google Kubernetes Engine.

Running a private cluster with network proxies for its master

You can create private clusters with network proxies, so that the cluster master is unreachable from outside your network, except through a proxy that you create and host in private IP space.

To learn more, refer to Creating Google Kubernetes Engine Private Clusters with Network Proxies for External Access.

Adding firewall rules for specific use cases

By default, firewall rules restrict your cluster master to initiating TCP connections to your nodes only on ports 443 (HTTPS) and 10250 (kubelet). Some Kubernetes features require you to add firewall rules to allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule that allows TCP connections on port 8080. To grant such access, you can add firewall rules.

The following sections explain how to add a firewall rule to a private cluster.

View cluster master's CIDR block

You need the cluster master's CIDR block to add a firewall rule.

gcloud

Run the following command:

gcloud container clusters describe [CLUSTER_NAME]

In the command output, take note of the value in the masterIpv4CidrBlock field.

Console

  1. Visit the GKE menu in the GCP Console.

  2. Select the desired cluster.

  3. From the Details tab, under Cluster, take note of the value in the Master address range field.

View existing firewall rules

You need to know the target (in this case, the tag applied to the destination nodes) that the cluster's existing firewall rules use, so that your new rule applies to the same nodes.

gcloud

Run the following command:

gcloud compute firewall-rules list \
    --filter 'name~^gke-[CLUSTER_NAME]' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

In the command output, take note of the value in the TARGET_TAGS column.

Console

Perform the following steps:

  1. Visit the Firewall rules menu in the GCP Console.

  2. Fill the Filter resources box with gke-[CLUSTER_NAME].

  3. In the results, take note of the value in the Targets field.

Add a firewall rule

gcloud

Run the following command:

gcloud compute firewall-rules create [FIREWALL_RULE_NAME] \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges [MASTER_CIDR_BLOCK] \
    --rules [PROTOCOL]:[PORT] \
    --target-tags [TARGET]

where:

  • [FIREWALL_RULE_NAME] is the name you choose for the firewall rule.
  • [MASTER_CIDR_BLOCK] is the cluster master's CIDR block that you collected previously.
  • [PROTOCOL]:[PORT] is the desired port and its protocol, tcp or udp.
  • [TARGET] is the target value that you collected previously.
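For example, a rule that allows the cluster master to reach heapster on port 8080, as described above, might look like the following sketch; the rule name is arbitrary, the source range is the master CIDR from the earlier example, and [TARGET] is the target value you collected:

gcloud compute firewall-rules create allow-master-to-heapster \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.0/28 \
    --rules tcp:8080 \
    --target-tags [TARGET]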

Console

Perform the following steps:

  1. Visit the Firewall rules menu in the GCP Console.

  2. Click Create firewall rule.

  3. Fill the Name box with the desired name for the firewall rule.
  4. From the Network drop-down menu, select the relevant network.
  5. From Direction of traffic, click Ingress.
  6. From Action on match, click Allow.
  7. From the Targets drop-down menu, select Specified target tags.
  8. Fill the Target tags box with the target value that you previously collected.
  9. From the Source filter drop-down menu, select IP ranges.
  10. Fill the Source IP ranges box with the cluster master's CIDR block that you collected previously.
  11. From Protocols and ports, click Specified protocols and ports, check the box for the relevant protocol (TCP or UDP), and fill the box with the desired port.
  12. Click Create.

Cleaning up

After completing the tasks on this page, follow these steps to remove the resources and prevent unwanted charges from accruing to your account:

Delete the clusters

gcloud

gcloud container clusters delete -q private-cluster-0 private-cluster-1

Console

  1. Visit the GKE menu in GCP Console.

  2. Select each cluster.

  3. Click Delete.

Delete the network

gcloud

gcloud compute networks delete net-0

Console

  1. Go to the VPC networks page in the GCP Console.

  2. In the list of networks, click the trash icon beside net-0.

  3. Beside each of its subnets, click the trash icon.

Troubleshooting

The following sections explain how to resolve common issues related to private clusters.

Can't reach master

Symptoms
After creating a private cluster, attempting to run kubectl commands against the cluster returns an error, such as Unable to connect to the server: dial tcp 35.226.229.85:443: connect: connection timed out.
Potential causes
kubectl is unable to talk to the cluster master.
Resolution
You need to add authorized networks for your cluster to whitelist your network's IP addresses.

Can't create cluster due to omitted flag

Symptoms
gcloud container clusters create returns an error such as Cannot specify --enable-private-endpoint without --enable-private-nodes.
Potential causes
You did not specify a necessary flag.
Resolution
Ensure that you specify the necessary flags. You cannot enable a private endpoint for the cluster master without also enabling private nodes.

Can't create cluster due to overlapping master IPv4 CIDR block

Symptoms
gcloud container clusters create returns an error such as The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
Potential causes
You specified a master CIDR block that overlaps with an existing subnet in your VPC.
Resolution
Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.

Can't create subnet

Symptoms
When you attempt to create a private cluster with an automatic subnet, or to create a custom subnet, you might encounter the following error: An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
Potential causes
The master CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same master CIDR.
Resolution
Try using a different CIDR range.

Can't pull image from public Docker Hub

Symptoms
A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Potential causes
Nodes in a private cluster do not have outbound access to the public Internet. They have limited access to Google APIs and services, including Container Registry.
Resolution
To use images from Docker Hub in a private cluster, set up a Docker Hub mirror in Container Registry and pull images from the mirror rather than directly from Docker Hub.
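If a full mirror is more than you need, a simpler workaround is to copy individual images into your own Container Registry repository from a machine that does have Internet access, and reference that copy in your manifests. A minimal sketch, where [PROJECT_ID] is a placeholder:

# If needed, configure Docker to authenticate to Container Registry:
gcloud auth configure-docker

docker pull nginx:1.15
docker tag nginx:1.15 gcr.io/[PROJECT_ID]/nginx:1.15
docker push gcr.io/[PROJECT_ID]/nginx:1.15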
