Setting up a Private Cluster

This page explains how to create a private cluster in Google Kubernetes Engine.

Overview

In a private cluster, the nodes have only internal RFC 1918 IP addresses, which isolates their workloads from the public Internet.

Every cluster has a Kubernetes API server called the master. Masters run on VMs in Google-owned projects. In a private cluster, you can control access to the master.

A private cluster can use an HTTP(S) load balancer or a network load balancer to accept incoming traffic, even though the cluster nodes do not have public IP addresses. A private cluster can also use an internal load balancer to accept traffic from within your VPC network.

Features

Private clusters use the following GCP features:

VPC Network Peering
Private clusters require VPC Network Peering. Your VPC network contains the cluster nodes, but a separate VPC network in a Google-owned project contains the master. The two VPC networks are connected using VPC Network Peering. Traffic between nodes and the master is routed entirely using internal IP addresses.
Private Google Access
Private nodes do not have outbound Internet access because they don't have external IP addresses. Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud Platform APIs and services over Google's private network. For example, Private Google Access makes it possible for private nodes to pull container images from Google Container Registry, and to send logs to Stackdriver.
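
GKE enables Private Google Access on the subnets it creates for private clusters. If you bring your own subnet and need to turn it on yourself, a command along the following lines should work; [SUBNET_NAME] and [REGION] are placeholders for your values:

gcloud compute networks subnets update [SUBNET_NAME] \
    --region [REGION] \
    --enable-private-ip-google-access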

Requirements

Private clusters have the following requirements:

VPC-native
A private cluster must be a VPC-native cluster, which has Alias IP Ranges enabled. VPC-native clusters are not compatible with legacy networks.
Kubernetes version 1.8.14-gke.0 or later
The nodes in a private cluster must run Kubernetes version 1.8.14-gke.0 or later.

Restrictions

Private clusters have the following restrictions:

  • You cannot convert an existing, non-private cluster to a private cluster.
  • You cannot use a cluster master, node, Pod, or Service IP range that overlaps with 172.17.0.0/16.
  • Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default Internet gateway causes a private cluster to stop functioning.

Limitations

Private clusters have the following limitations:

  • Each private cluster you create uses a unique VPC Network Peering connection. Each VPC network can peer with up to 25 other VPC networks.
  • The size of the RFC 1918 block for the cluster master must be /28.
  • While GKE can detect overlap with the cluster master address block, it cannot detect overlap within a shared VPC network.

Limitations for regional clusters

  • Currently, you cannot use a proxy to reach the cluster master through its private IP address.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Access to the master

In a private cluster, the master has two endpoints:

  • Private endpoint: This is the internal IP address of the master, behind an internal load balancer in the master's VPC network. Nodes communicate with the master using the private endpoint. Any VM in your VPC network, and in the same region as your private cluster, can use the private endpoint.

  • Public endpoint: This is the external IP address of the master. You can configure access to the public endpoint. In the most restricted case, there is no access to the public endpoint. You can relax the restriction by authorizing certain address ranges to access the public endpoint. You can also remove all restriction and allow anyone to access the public endpoint.

Determining cluster endpoints

To determine your cluster endpoints, describe your cluster using the following commands:

gcloud container clusters describe [CLUSTER-NAME] \
    --zone=[ZONE] | --region=[REGION] \
    --format="get(privateClusterConfig.privateEndpoint)"
gcloud container clusters describe [CLUSTER-NAME] \
    --zone=[ZONE] | --region=[REGION] \
    --format="get(privateClusterConfig.publicEndpoint)"

where:

  • [CLUSTER-NAME] is the name of your cluster.
  • [ZONE] is the zone of a zonal cluster, or [REGION] is the region of a regional cluster.

Access to the cluster endpoints

Private clusters have three configuration combinations to control access to the cluster endpoints. The following sections describe these three configurations. In all of them, nodes always contact the master using the private endpoint, and kubectl running on nodes always uses the private endpoint and must be configured to use it.

Public endpoint access disabled

  • Security: Highest level of restricted access to the master. Client access to the master's public endpoint is blocked. Access to the master must be from internal IP addresses.
  • GCP Console configuration options: Select Enable VPC-native. Select Private cluster. Clear Access master using its external IP address. Enable master authorized networks is enabled automatically.
  • gcloud cluster creation flags: --enable-ip-alias --enable-private-nodes --enable-private-endpoint --enable-master-authorized-networks
  • Master authorized networks: Required for access to the master from internal IP addresses other than nodes and Pods. You do not need to explicitly authorize the internal IP address ranges of nodes and Pods; those internal addresses are always authorized to communicate with the private endpoint. Use --master-authorized-networks to specify additional internal IP addresses that can access the master. You cannot include external IP addresses in the list of master authorized networks, because access to the public endpoint is disabled.
  • Access using kubectl: From other VMs in the cluster's VPC network, kubectl can communicate with the private endpoint only if the VMs are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks. From public IP addresses: never.

Public endpoint access enabled, master authorized networks enabled

  • Security: Restricted access to the master from both internal and external IP addresses that you define.
  • GCP Console configuration options: Select Enable VPC-native. Select Private cluster. Select Access master using its external IP address. Select Enable master authorized networks.
  • gcloud cluster creation flags: --enable-ip-alias --enable-private-nodes --enable-master-authorized-networks
  • Master authorized networks: Required for access to the master from external IP addresses, and from internal IP addresses other than nodes and Pods. Use --master-authorized-networks to specify external and internal IP addresses, other than nodes and Pods, that can access the master.
  • Access using kubectl: From other VMs in the cluster's VPC network, kubectl can communicate with the private endpoint only if the VMs are in the same region as the cluster and their internal IP addresses are included in the list of master authorized networks. From public IP addresses, machines can use kubectl to communicate with the public endpoint only if their public IP addresses are included in the list of master authorized networks; this includes machines outside of GCP and GCP VMs with external IP addresses.

Public endpoint access enabled, master authorized networks disabled

  • Security: Access to the master from any IP address.
  • GCP Console configuration options: Select Enable VPC-native. Select Private cluster. Select Access master using its external IP address. Clear Enable master authorized networks.
  • gcloud cluster creation flags: --enable-ip-alias --enable-private-nodes --enable-master-authorized-networks --master-authorized-networks=0.0.0.0/0
  • Master authorized networks: Not used. If you enable access to the master's public endpoint without enabling master authorized networks, access to the master's public endpoint is not restricted.
  • Access using kubectl: From other VMs in the cluster's VPC network, kubectl can communicate with the private endpoint only if the VMs are in the same region as the cluster. From public IP addresses, any machine can use kubectl to communicate with the public endpoint; this includes machines outside of GCP and GCP VMs with external IP addresses.

Creating a private cluster with limited access to the public endpoint

When you create a private cluster, you must specify a /28 RFC 1918 address range to be used by the cluster master. The range you specify for the cluster master must not overlap with any subnetwork in your VPC network. After you create the cluster, you cannot change the cluster master's address range.
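
To help choose a master range that does not overlap, you can list the ranges already in use in your network before creating the cluster. This is a minimal sketch; substitute your network name for [NETWORK]:

gcloud compute networks subnets list \
    --network [NETWORK] \
    --format "table(name, region, ipCidrRange)"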

Using an automatically-generated subnet

In this section, you create a private cluster named private-cluster-0. GKE automatically generates a subnet for your cluster nodes. The subnet has Private Google Access enabled. In the subnet, GKE automatically creates two secondary ranges: one for Pods and one for Services.

gcloud

Run the following command:

gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --master-ipv4-cidr 172.16.0.0/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
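
After the cluster is created, you can point kubectl at it by fetching credentials. This sketch assumes you run the command from a host that is allowed to reach the endpoint you configured (for example, a machine in an authorized network):

gcloud container clusters get-credentials private-cluster-0 --zone [ZONE]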

Console

Perform the following steps:

  1. Visit the GKE menu in Google Cloud Platform Console.

    Visit the GKE menu

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.

  4. For Cluster name, enter private-cluster-0.

  5. Click Advanced options at the bottom of the menu.

  6. From VPC-native, select the Enable VPC-native (using alias IP) checkbox. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.

  7. From Network security, select the Private cluster checkbox.

  8. To create a master that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.

  9. Set Master IP range to 172.16.0.0/28.

  10. Keep the Enable master authorized networks checkbox selected.

  11. Clear the Enable Kubernetes Dashboard checkbox.

  12. Clear the Issue a client certificate checkbox.

  13. Click Create.

API

Specify the privateClusterConfig field in the Cluster API resource:

{
  "name": "private-cluster-0",
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true
  },
  ...
  "privateClusterConfig": {
    "enablePrivateNodes": boolean,      # Creates nodes with internal IP addresses only
    "enablePrivateEndpoint": boolean,   # false creates a cluster master with a publicly-reachable endpoint
    "masterIpv4CidrBlock": string,      # CIDR block for the cluster master
    "privateEndpoint": string,          # Output only
    "publicEndpoint": string            # Output only
  }
}
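
As a rough sketch of how you might submit this resource with curl, assuming the Cluster definition above is wrapped in a {"cluster": ...} request body saved as cluster.json, and that [PROJECT_ID] and [ZONE] are filled in:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @cluster.json \
    "https://container.googleapis.com/v1/projects/[PROJECT_ID]/zones/[ZONE]/clusters"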

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.

Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

Now these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.
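
To confirm which ranges are currently authorized, you can inspect the cluster. This sketch assumes the field appears as masterAuthorizedNetworksConfig in the describe output, matching the v1 API resource:

gcloud container clusters describe private-cluster-0 \
    --format "get(masterAuthorizedNetworksConfig.cidrBlocks)"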

Using a custom subnet

In this section, you create a private cluster named private-cluster-1. You create a network, my-net-1. You create a subnet, my-subnet-1, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has two secondary address ranges: my-pods-1 for the Pod IP addresses, and my-services-1 for the Service IP addresses.

gcloud

Create a network

First, create a network for your cluster. The following command creates a network, my-net-1:

gcloud compute networks create my-net-1 \
    --subnet-mode custom

Create a subnet and secondary ranges

Next, create a subnet, my-subnet-1, in the my-net-1 network, with secondary ranges my-pods-1 for Pods and my-services-1 for Services:

gcloud compute networks subnets create my-subnet-1 \
    --network my-net-1 \
    --region us-central1 \
    --range 192.168.0.0/20 \
    --secondary-range my-pods-1=10.4.0.0/14,my-services-1=10.0.32.0/20 \
    --enable-private-ip-google-access

Create a private cluster

Now, create a private cluster, private-cluster-1, using the network, subnet, and secondary ranges you created.

gcloud container clusters create private-cluster-1 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --network my-net-1 \
    --subnetwork my-subnet-1 \
    --cluster-secondary-range-name my-pods-1 \
    --services-secondary-range-name my-services-1 \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

Console

Create a network, subnet, and secondary ranges

  1. Visit the VPC networks page in GCP Console.

    Go to the VPC networks page

  2. Click Create VPC network.

  3. From Name enter my-net-1.

  4. Ensure that Subnet creation mode is set to Custom.

  5. From New subnet, in Name, enter my-subnet-1.

  6. From the Region drop-down menu, select the desired region.

  7. For IP address range, enter 192.168.0.0/20.

  8. Click Create secondary IP range. For Subnet range name, enter my-services-1, and for Secondary IP range, enter 10.0.32.0/20.

  9. Click Add IP range. For Subnet range name, enter my-pods-1, and for Secondary IP range, enter 10.4.0.0/14.

  10. From Private Google access, click On.

  11. Click Done.

  12. Click Create.

Create a private cluster

Create a private cluster that uses your subnet:

  1. Visit the GKE menu in GCP Console.

    Visit the GKE menu

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.

  4. For Cluster name, enter private-cluster-1.

  5. Click Advanced options at the bottom of the menu.

  6. From VPC-native, select the Enable VPC-native (using alias IP) checkbox.

  7. From the Network drop-down menu, select my-net-1.

  8. From the Node subnet drop-down menu, select my-subnet-1.

  9. Clear the Automatically create secondary ranges checkbox.

  10. From the Pod secondary CIDR range drop-down menu, select my-pods-1.

  11. From the Services secondary CIDR range drop-down menu, select my-services-1.

  12. Under Network security, select the Private cluster checkbox.

  13. Set Master IP range to 172.16.0.16/28.

  14. Keep the Enable master authorized networks checkbox selected.

  15. Clear the Enable Kubernetes Dashboard checkbox.

  16. Clear the Issue a client certificate checkbox.

  17. Click Create.

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-1.
  • The secondary range my-pods-1.

Suppose you have a group of machines, outside of my-net-1, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-1.
  • The secondary range my-pods-1.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.

Creating a private cluster with no client access to the public endpoint

In this section, you create a private cluster with private nodes and no client access to the public endpoint. Your cluster master is accessible only from within your VPC network. You cannot access the master at all from outside your VPC network.

gcloud

Run the following command:

gcloud container clusters create private-cluster-2 \
    --create-subnetwork name=my-subnet-2 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --enable-master-authorized-networks specifies that access to the master is restricted to IP address ranges that you authorize.
  • --create-subnetwork name=my-subnet-2 causes GKE to automatically create a subnet named my-subnet-2.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --enable-private-endpoint indicates that clients can reach the master only through its private endpoint; client access to the public endpoint is disabled.
  • --master-ipv4-cidr 172.16.0.32/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.

Console

Perform the following steps:

  1. Visit the GKE menu in Google Cloud Platform Console.

    Visit the GKE menu

  2. Click Create cluster.

  3. Choose the Standard cluster template or choose an appropriate template for your workload.

  4. For Cluster name, enter private-cluster-2.

  5. Click Advanced options at the bottom of the menu.

  6. From VPC-native, select the Enable VPC-native (using alias IP) checkbox. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.

  7. From Network security, select the Private cluster checkbox.

  8. Clear the Access master using its external IP address checkbox.

  9. Set Master IP range to 172.16.0.32/28.

  10. The Enable master authorized networks checkbox is selected automatically.

  11. Clear the Enable Kubernetes Dashboard checkbox.

  12. Clear the Issue a client certificate checkbox.

  13. Click Create.

API

To create a cluster whose master has no publicly-reachable endpoint, set the enablePrivateEndpoint field to true in the privateClusterConfig resource.

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-2.
  • The secondary range used for Pods.

For example, suppose you created a VM in the primary range of my-subnet-2. Then on that VM, you could configure kubectl to use the internal IP address of the master.
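
One way to do this from such a VM, assuming the Cloud SDK is installed there and your gcloud version supports the --internal-ip flag, is a sketch like the following:

gcloud container clusters get-credentials private-cluster-2 --zone [ZONE] --internal-ip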

If you want to access the cluster master from outside my-subnet-2, you must authorize at least one address range to have access to the private endpoint.

Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-2.

For example:

  • my-subnet-2: 10.0.0.0/22
  • Pod secondary range: 10.52.0.0/14
  • VM address: 10.128.0.3

You could authorize the VM to access the master by using this command:

gcloud container clusters update private-cluster-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.3/32

Creating a private cluster in a shared VPC network

To learn how to create a private cluster in a shared VPC network, refer to the Shared VPC documentation.

Notes on master authorized networks

Authorized networks have the following limitations:

  • You can add up to 20 authorized networks (whitelisted CIDR blocks) in a project.
  • The maximum size of any CIDR for an authorized network is /24.

For more information, refer to Add an authorized network to an existing cluster.

Verify that nodes do not have external IPs

After you create a private cluster, you should verify that the cluster's nodes do not have external IP addresses.

gcloud

To verify that your cluster's nodes do not have external IP addresses, run the following command:

kubectl get nodes --output wide

The output's EXTERNAL-IP column is empty:

STATUS ... VERSION   EXTERNAL-IP  OS-IMAGE ...
Ready      v1.8.7-gke.1                Container-Optimized OS from Google
Ready      v1.8.7-gke.1                Container-Optimized OS from Google
Ready      v1.8.7-gke.1                Container-Optimized OS from Google

Console

To verify that your cluster's nodes do not have external IP addresses, perform the following steps:

  1. Visit the GKE menu in the GCP Console.

    Visit the GKE menu

  2. In the list of clusters, click the desired cluster.

  3. Under Node pools, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.

  4. In the list of instances, verify that your instances do not have external IP addresses.

Viewing the cluster's subnet and secondary address ranges

After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.

gcloud

List all subnets

To list the subnets in your cluster's network, run the following command:

gcloud compute networks subnets list --network [NETWORK]

where [NETWORK] is the private cluster's network. If you created the cluster with an automatically-created subnet, use default.

In the command output, find the name of the cluster's subnet.

View cluster's subnet

Get information about the automatically created subnet:

gcloud compute networks subnets describe [SUBNET_NAME]

where [SUBNET_NAME] is the name of the subnet.

The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):

...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
  rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
  rangeName: gke-private-cluster-1-services-163e3c97
...

Console

  1. Visit the VPC networks page in the GCP Console.

    Go to the VPC networks page

  2. Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.

  3. Under IP address range, you can see the primary address range of your subnet. This is the range that is used for nodes.

  4. Under Secondary IP ranges, you can see the IP address range for Pods and the range for Services.

Viewing a private cluster's endpoints

You can view a private cluster's endpoints using the gcloud command-line tool or GCP Console.

gcloud

Run the following command:

gcloud container clusters describe [CLUSTER_NAME]

The output shows both the private and public endpoints:

...
privateClusterConfig:
  enablePrivateEndpoint: true
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.0.32/28
  privateEndpoint: 172.16.0.34
  publicEndpoint: 35.239.154.67

Console

Perform the following steps:

  1. Visit the GKE menu in the GCP Console.

    Visit the GKE menu

  2. From the list, click the desired cluster.

  3. From the Details tab, under Cluster, look for the Endpoint field.

Pulling container images from an image registry

In a private cluster, the container runtime can pull container images from Container Registry; it cannot pull images from any other container image registry on the Internet. This is because the nodes in a private cluster do not have external IP addresses, so by default they cannot communicate with services outside of the Google network.

The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.

The following command creates a Deployment that pulls a sample image from a Google-owned Container Registry repository:

kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0
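
To check that the image pulled successfully, you can look at the Deployment and its Pods. The run=hello-deployment selector assumes the older kubectl run behavior, which creates a Deployment and labels its Pods with run=<name>:

kubectl get deployment hello-deployment
kubectl get pods --selector run=hello-deployment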

Other private cluster configurations

In addition to the preceding configurations, you can run private clusters with the following configurations.

Granting private nodes outbound Internet access

Private nodes don't have outbound Internet access because they do not have external IP addresses. If you want to provide outbound Internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway.
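
For example, a minimal Cloud NAT sketch for the my-net-1 network in us-central1; the router and NAT names are illustrative, and older Cloud SDK releases expose these commands under gcloud beta:

gcloud compute routers create my-nat-router \
    --network my-net-1 \
    --region us-central1

gcloud compute routers nats create my-nat-config \
    --router my-nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges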

Running a private cluster with network proxies for its master

You can create private clusters with network proxies, so that the cluster master is unreachable from outside your network, except through a proxy that you create and host in private IP space.

To learn more, refer to Creating Google Kubernetes Engine Private Clusters with Network Proxies for External Access.

Adding firewall rules for specific use cases

By default, firewall rules restrict your cluster master to only initiate TCP connections to your nodes on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule to allow TCP connections on port 8080. To grant such access, you can add firewall rules.

The following sections explain how to add a firewall rule to a private cluster.

View cluster master's CIDR block

You need the cluster master's CIDR block to add a firewall rule.

gcloud

Run the following command:

gcloud container clusters describe [CLUSTER_NAME]

In the command output, take note of the value in the masterIpv4CidrBlock field.
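
If you prefer to print just that value, the following sketch uses the same privateClusterConfig path shown elsewhere on this page:

gcloud container clusters describe [CLUSTER_NAME] \
    --format "value(privateClusterConfig.masterIpv4CidrBlock)"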

Console

  1. Visit the GKE menu in the GCP Console.

    Visit the GKE menu

  2. Select the desired cluster.

  3. From the Details tab, under Cluster, take note of the value in the Master address range field.

View existing firewall rules

You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.

gcloud

Run the following command:

gcloud compute firewall-rules list \
    --filter 'name~^gke-[CLUSTER_NAME]' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

In the command output, take note of the value in the Targets field.

Console

Perform the following steps:

  1. Visit the Firewall rules menu in the GCP Console.

    Visit the Firewall rules menu

  2. Fill the Filter resources box with gke-[CLUSTER_NAME].

  3. In the results, take note of the value in the Targets field.

Add a firewall rule

gcloud

Run the following command:

gcloud compute firewall-rules create [FIREWALL_RULE_NAME] \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges [MASTER_CIDR_BLOCK] \
    --rules [PROTOCOL]:[PORT] \
    --target-tags [TARGET]

where:

  • [FIREWALL_RULE_NAME] is the name you choose for the firewall rule.
  • [MASTER_CIDR_BLOCK] is the cluster master's CIDR block that you collected previously.
  • [PROTOCOL]:[PORT] is the desired port and its protocol, tcp or udp.
  • [TARGET] is the target value that you collected previously.
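
For instance, a filled-in sketch for the kubectl top / heapster case mentioned earlier (TCP port 8080); the rule name is illustrative, 172.16.0.32/28 is the master range used for private-cluster-2 on this page, and [TARGET] is still the target tag you collected above:

gcloud compute firewall-rules create allow-master-to-heapster \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.32/28 \
    --rules tcp:8080 \
    --target-tags [TARGET]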

Console

Perform the following steps:

  1. Visit the Firewall rules menu in the GCP Console.

    Visit the Firewall rules menu

  2. Click Create firewall rule.

  3. Fill the Name box with the desired name for the firewall rule.

  4. From the Network drop-down menu, select the relevant network.

  5. From Direction of traffic, click Ingress.

  6. From Action on match, click Allow.

  7. From the Targets drop-down menu, select Specified target tags.

  8. Fill the Target tags box with the target value that you previously collected.

  9. From the Source filter drop-down menu, select IP ranges.

  10. Fill the Source IP ranges box with the cluster master's CIDR block that you collected previously.

  11. From Protocols and ports, click Specified protocols and ports, check the box for the relevant protocol (TCP or UDP), and fill the box with the desired port.

  12. Click Create.

Cleaning up

After completing the tasks on this page, follow these steps to remove the resources and avoid incurring unwanted charges on your account:

Delete the clusters

gcloud

gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2

Console

  1. Visit the GKE menu in GCP Console.

    Visit the GKE menu

  2. Select each cluster.

  3. Click Delete.

Delete the network

gcloud

gcloud compute networks delete my-net-1

Console

  1. Go to the VPC networks page in the GCP Console.

    Go to the VPC networks page

  2. In the list of networks, from my-net-1, click the trash icon.

  3. Beside each subnet, click the trash icon.

Troubleshooting

The following sections explain how to resolve common issues related to private clusters.

Cluster overlaps with active peer

Symptoms
Attempting to create a private cluster returns an error such as Google Compute Engine: An IP range in the peer network overlaps with an IP range in an active peer of the local network.
Potential causes
You chose an overlapping master CIDR.
Resolution
Delete and recreate the cluster using a different master CIDR.

Can't reach master

Symptoms
After creating a private cluster, attempting to run kubectl commands against the cluster returns an error, such as Unable to connect to the server: dial tcp [IP_ADDRESS]: connect: connection timed out or Unable to connect to the server: dial tcp [IP_ADDRESS]: i/o timeout.
Potential causes
kubectl is unable to talk to the cluster master.
Resolution
You need to add authorized networks for your cluster to whitelist your network's IP addresses.

Can't create cluster due to omitted flag

Symptoms
gcloud container clusters create returns an error such as Cannot specify --enable-private-endpoint without --enable-private-nodes.
Potential causes
You did not specify a necessary flag.
Resolution
Ensure that you specify the necessary flags. You cannot enable a private endpoint for the cluster master without also enabling private nodes.

Can't create cluster due to overlapping master IPv4 CIDR block

Symptoms
gcloud container clusters create returns an error such as The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
Potential causes
You specified a master CIDR block that overlaps with an existing subnet in your VPC.
Resolution
Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.

Can't create subnet

Symptoms
When you attempt to create a private cluster with an automatic subnet, or to create a custom subnet, you might encounter the following error: An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
Potential causes
The master CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same master CIDR.
Resolution
Try using a different CIDR range.

Can't pull image from public Docker Hub

Symptoms
A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Potential causes
Nodes in a private cluster do not have outbound access to the public Internet. They have limited access to Google APIs and services, including Container Registry.
Resolution
To use images from Docker Hub in a private cluster, configure your Docker daemon to fetch images from Container Registry's Docker Hub mirror. It will then fetch images from the mirror rather than directly from Docker Hub.
