This page explains how to create a private cluster in Google Kubernetes Engine (GKE). In a private cluster, the nodes have internal RFC 1918 IP addresses only, which ensures that their workloads are isolated from the public internet. To learn more about how private clusters work, refer to Private clusters.
Before you begin
Make yourself familiar with the requirements, restrictions, and limitations before you proceed.
To prepare for this task, perform the following steps:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
- Set your default project ID:
gcloud config set project [PROJECT_ID]
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone [COMPUTE_ZONE]
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region [COMPUTE_REGION]
- Update gcloud to the latest version:
gcloud components update
Access to the master
In private clusters, the master has a private and public endpoint. There are three configuration combinations to control access to the cluster endpoints.
- Public endpoint access disabled. This creates a private cluster with no client access to the public endpoint.
- Public endpoint access enabled, master authorized networks enabled. This creates a private cluster with limited access to the public endpoint.
- Public endpoint access enabled, master authorized networks disabled. This creates a private cluster with unrestricted access to the public endpoint.
See Access to cluster endpoints for an overview of the differences between the above configuration options.
Creating a private cluster with no client access to the public endpoint
In this section, you create a private cluster with private nodes and no client access to the public endpoint.
gcloud
Run the following command:
gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
where:
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
- --enable-ip-alias makes the cluster VPC-native.
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --enable-private-endpoint indicates that the cluster is managed using the private IP address of the master API endpoint.
- --master-ipv4-cidr 172.16.0.32/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
Console
Perform the following steps:
Visit the GKE menu in Google Cloud Console.
Click Create cluster.
Choose the Standard cluster template or choose an appropriate template for your workload.
For Name, enter private-cluster-0.
Click Availability, networking, security, and additional features at the bottom of the menu.
From VPC-native, select the Enable VPC-native (using alias IP) checkbox. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.
From Network security, select the Private cluster checkbox.
Clear the Access master using its external IP address checkbox.
Set Master IP range to 172.16.0.32/28.
The Enable master authorized networks checkbox is selected automatically.
Clear the Enable Kubernetes Dashboard checkbox.
Clear the Issue a client certificate checkbox.
Click Create.
API
To create a cluster without a publicly-reachable master, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.
At this point, these are the only IP addresses that have access to the cluster master:
- The primary range of my-subnet-0.
- The secondary range used for Pods.

For example, suppose you created a VM in the primary range of my-subnet-0. Then on that VM, you could configure kubectl to use the internal IP address of the master.
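As a minimal sketch of what that looks like from such a VM, assuming your gcloud release supports the --internal-ip flag for get-credentials (older releases may require the beta component):

# Write the master's private endpoint into your kubeconfig instead of the public one.
gcloud container clusters get-credentials private-cluster-0 --internal-ip

# Confirm that kubectl can reach the master over its internal address.
kubectl get nodes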
If you want to access the cluster master from outside my-subnet-0, you must authorize at least one address range to have access to the private endpoint.

Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-0. For example:
- my-subnet-0: 10.0.0.0/22
- Pod secondary range: 10.52.0.0/14
- VM address: 10.128.0.3

You could authorize the VM to access the master by using this command:
gcloud container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.3/32
Creating a private cluster with limited access to the public endpoint
When creating a private cluster using this configuration, you can choose to use an automatically generated subnet, or a custom subnet.
Using an automatically generated subnet
In this section, you create a private cluster named private-cluster-1, where GKE automatically generates a subnet for your cluster nodes. The subnet has Private Google Access enabled. In the subnet, GKE automatically creates two secondary ranges: one for Pods and one for Services.
gcloud
Run the following command:
gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
where:
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
- --enable-ip-alias makes the cluster VPC-native.
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --master-ipv4-cidr 172.16.0.0/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
Console
Perform the following steps:
Visit the GKE menu in Google Cloud Console.
Click Create cluster.
Choose the Standard cluster template or choose an appropriate template for your workload.
For Name, enter private-cluster-1.
Click Availability, networking, security, and additional features at the bottom of the menu.
For VPC-native, leave the Enable VPC-native (using alias IP) checkbox selected. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.
From Network security, select the Private cluster checkbox.
To create a master that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.
Set Master IP range to 172.16.0.0/28.
Keep the Enable master authorized networks checkbox selected.
Clear the Enable Kubernetes Dashboard checkbox.
Clear the Issue a client certificate checkbox.
Click Create.
API
Specify the privateClusterConfig field in the Cluster API resource:

{
  "name": "private-cluster-1",
  ...
  "ipAllocationPolicy": {
    "createSubnetwork": true
  },
  ...
  "privateClusterConfig": {
    "enablePrivateNodes": boolean # Creates nodes with internal IP addresses only
    "enablePrivateEndpoint": boolean # false creates a cluster master with a publicly-reachable endpoint
    "masterIpv4CidrBlock": string # CIDR block for the cluster master
    "privateEndpoint": string # Output only
    "publicEndpoint": string # Output only
  }
}
At this point, these are the only IP addresses that have access to the cluster master:
- The primary range of my-subnet-1.
- The secondary range used for Pods.

Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:
gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29
Now these are the only IP addresses that have access to the cluster master:
- The primary range of my-subnet-1.
- The secondary range used for Pods.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using a custom subnet
In this section, you create a private cluster named private-cluster-2. You create a network, my-net-0. You create a subnet, my-subnet-2, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has two secondary address ranges: my-pods-2 for the Pod IP addresses, and my-services-2 for the Service IP addresses.
gcloud
Create a network
First, create a network for your cluster. The following command creates a network, my-net-0:

gcloud compute networks create my-net-0 \
    --subnet-mode custom
Create a subnet and secondary ranges
Next, create a subnet, my-subnet-2, in the my-net-0 network, with secondary ranges my-pods-2 for Pods and my-services-2 for Services:

gcloud compute networks subnets create my-subnet-2 \
    --network my-net-0 \
    --region us-central1 \
    --range 192.168.0.0/20 \
    --secondary-range my-pods-2=10.4.0.0/14,my-services-2=10.0.32.0/20 \
    --enable-private-ip-google-access
Create a private cluster
Now, create a private cluster, private-cluster-2, using the network, subnet, and secondary ranges you created.

gcloud container clusters create private-cluster-2 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --network my-net-0 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods-2 \
    --services-secondary-range-name my-services-2 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.16/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
Console
Create a network, subnet, and secondary ranges
Visit the VPC networks page in Cloud Console.
Click Create VPC network.
For Name, enter my-net-0.
Ensure that Subnet creation mode is set to Custom.
From New subnet, in Name, enter my-subnet-2.
From the Region drop-down menu, select the desired region.
For IP address range, enter 192.168.0.0/20.
Click Create secondary IP range. For Subnet range name, enter my-services-2, and for Secondary IP range, enter 10.0.32.0/20.
Click Add IP range. For Subnet range name, enter my-pods-2, and for Secondary IP range, enter 10.4.0.0/14.
.From Private Google Access, click On.
Click Done.
Click Create.
Create a private cluster
Create a private cluster that uses your subnet:
Visit the GKE menu in Cloud Console.
Click Create cluster.
Choose the Standard cluster template or choose an appropriate template for your workload.
For Name, enter private-cluster-2.
Click Availability, networking, security, and additional features at the bottom of the menu.
For VPC-native, leave the Enable VPC-native (using alias IP) checkbox selected.
From the Network drop-down menu, select my-net-0.
From the Node subnet drop-down menu, select my-subnet-2.
Clear the Automatically create secondary ranges checkbox.
From the Pod secondary CIDR range drop-down menu, select my-pods-2.
From the Services secondary CIDR range drop-down menu, select my-services-2.
Under Network security, select the Private cluster checkbox.
Set Master IP range to 172.16.0.16/28.
Keep the Enable master authorized networks checkbox selected.
Clear the Enable Kubernetes Dashboard checkbox.
Clear the Issue a client certificate checkbox.
Click Create.
At this point, these are the only IP addresses that have access to the cluster master:
- The primary range of my-subnet-2.
- The secondary range my-pods-2.

Suppose you have a group of machines, outside of my-net-0, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29
At this point, these are the only IP addresses that have access to the cluster master:
- The primary range of my-subnet-2.
- The secondary range my-pods-2.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using Cloud Shell to access a private cluster
The private cluster you created in an earlier exercise, private-cluster-1, has a public endpoint and has master authorized networks enabled. If you want to use Cloud Shell to access the cluster, you must add the public IP address of your Cloud Shell to the cluster's list of master authorized networks.
To do this:
In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:

dig +short myip.opendns.com @resolver1.opendns.com
Add the external address of your Cloud Shell to your cluster's list of master authorized networks:
gcloud container clusters update private-cluster-1 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --master-authorized-networks [EXISTING_AUTH_NETS],[SHELL_IP]/32
where:
- [EXISTING_AUTH_NETS] is your existing list of master authorized networks. You can find your master authorized networks in the console or by running the following command:
gcloud container clusters describe private-cluster-1 \
    --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"
- [SHELL_IP] is the external IP address of your Cloud Shell.
Get credentials, so that you can use kubectl to access the cluster:

gcloud container clusters get-credentials private-cluster-1 \
    --zone us-central1-c \
    --project [PROJECT_ID]

where [PROJECT_ID] is your project ID.
Now you can use kubectl, in Cloud Shell, to access your private cluster. For example:
kubectl get nodes
Creating a private cluster with unrestricted access to the public endpoint
In this section, you create a private cluster where any IP address can access the master.
gcloud
Run the following command:
gcloud container clusters create private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
where:
- --no-enable-master-authorized-networks disables authorized networks for the cluster, so access to the public endpoint is not restricted to specific IP address ranges.
- --create-subnetwork name=my-subnet-3 causes GKE to automatically create a subnet named my-subnet-3.
- --enable-ip-alias makes the cluster VPC-native.
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --master-ipv4-cidr 172.16.0.32/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
Console
Perform the following steps:
Visit the GKE menu in Google Cloud Console.
Click Create cluster.
Choose the Standard cluster template or choose an appropriate template for your workload.
For Name, enter private-cluster-3.
Click Availability, networking, security, and additional features at the bottom of the menu.
From VPC-native, select the Enable VPC-native (using alias IP) checkbox. Leave the Network drop-down menu set to default and the Node subnet drop-down menu set to default. This causes GKE to generate a subnet for your cluster.
From Network security, select the Private cluster checkbox.
Keep the Access master using its external IP address checkbox selected.
Set Master IP range to 172.16.0.32/28.
Clear the Enable master authorized networks checkbox.
Clear the Enable Kubernetes Dashboard checkbox.
Clear the Issue a client certificate checkbox.
Click Create.
Other private cluster configurations
In addition to the preceding configurations, you can run private clusters with the following configurations.
Granting private nodes outbound internet access
To provide outbound internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway.
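For example, the following is a minimal sketch of a Cloud NAT setup; the router name (my-router), NAT configuration name (my-nat), network, and region are placeholders to adapt to your environment:

# Create a Cloud Router in the network and region where your private nodes run.
gcloud compute routers create my-router \
    --network my-net-0 \
    --region us-central1

# Add a Cloud NAT configuration to that router so private nodes get outbound
# internet access. If gcloud prompts for a region, use the router's region.
gcloud compute routers nats create my-nat \
    --router my-router \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges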
Creating a private cluster in a shared VPC network
To learn how to create a private cluster in a shared VPC network, refer to the Shared VPC documentation.
Routing between on-premises/cluster VPC and cluster master
When you create a GKE private cluster with a private master endpoint, the cluster's master is inaccessible from the public internet, but it needs to be accessible to client-side tools like kubectl. To enable traffic between the cluster master and your on-premises network, there must be routes between the on-premises network and the Google-owned VPC network that hosts the cluster master.

The Google-owned VPC automatically exports the route to the master CIDR to your VPC, which is connected to your on-premises network using Cloud VPN or Cloud Interconnect. However, the route to your on-premises environment must also be exported from your VPC to the Google-owned VPC.

To share the routes, enable the --export-custom-routes flag on the peering between your VPC and the Google-owned VPC.
Identify the peering between the VPC for your cluster and the Google-owned VPC:
gcloud beta container clusters describe [CLUSTER-NAME]
The output of this command includes the cluster's privateClusterConfig.peeringName field. This is the name of the peering between your cluster and the Google-owned VPC. For example:

privateClusterConfig:
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.1.0/28
  peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
  privateEndpoint: 172.16.1.2
  publicEndpoint: 34.68.128.12
Enable --export-custom-routes on the peering:

gcloud beta compute networks peerings update [PEERING] \
    --network [NETWORK] \
    --export-custom-routes

where [PEERING] is the value of privateClusterConfig.peeringName identified in the previous step, and [NETWORK] is the name of your VPC.

This flag causes [NETWORK] to advertise its routes to the Google-owned VPC where your cluster master is located. The next step explains how to advertise the route from your VPC to the Google-owned VPC to your on-premises environment.
Optional: To enable bidirectional connectivity, the route to the cluster master CIDR must also be advertised from your VPC to your on-premises environment. You can do this in two ways:
Recommended technique: Advertise the route to the master subnet as a custom route over eBGP from Cloud Router; a sketch follows this list. See Advertising Custom IP Ranges for more information. This technique is recommended because the advertised routes are withdrawn if eBGP is unavailable, which helps prevent traffic from being blackholed.
Provision a static route on the on-prem router or edge device towards Google Cloud.
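The following is a minimal sketch of the recommended technique. It assumes a Cloud Router named my-vpn-router in us-central1 manages your BGP session to on-premises, and it uses the master CIDR 172.16.1.0/28 from the example output above; adapt the names, region, and range to your environment:

# Switch the router to custom advertisement mode, keep advertising its subnets,
# and additionally advertise the cluster master CIDR to the on-premises peer.
gcloud compute routers update my-vpn-router \
    --region us-central1 \
    --advertisement-mode custom \
    --set-advertisement-groups all_subnets \
    --set-advertisement-ranges 172.16.1.0/28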
Verifying that custom route export is enabled
Use this command to verify that the --export-custom-routes option is enabled on the peering between your VPC and the Google-owned VPC:

gcloud beta compute networks peerings list

The output of this command lists your Compute Engine network's peerings. Find the peering (whose name you found in step one of the above procedure) and verify that its EXPORT_CUSTOM_ROUTES column is True and the STATE column is ACTIVE.
Disabling custom route export
To disable custom route export from your VPC:

gcloud beta compute networks peerings update [PEERING] \
    --network [NETWORK] \
    --no-export-custom-routes

where [PEERING] is the value of privateClusterConfig.peeringName, and [NETWORK] is the name of your VPC. To find the peeringName, see the first step of the instructions above to enable custom route export.
Verifying that nodes do not have external IP addresses
After you create a private cluster, verify that the cluster's nodes do not have external IP addresses.
gcloud
To verify that your cluster's nodes do not have external IP addresses, run the following command:
kubectl get nodes --output wide
The output's EXTERNAL-IP column is empty:

STATUS ... VERSION        EXTERNAL-IP   OS-IMAGE ...
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
Ready      v1.8.7-gke.1                 Container-Optimized OS from Google
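You could also confirm this from the Compute Engine side, which is essentially what the Console steps below do. The name filter here assumes node VMs whose names start with gke-private-cluster, as in the examples on this page; adjust it to match your cluster name:

# List the node VMs; the EXTERNAL_IP column should be empty for every instance.
gcloud compute instances list --filter "name~^gke-private-cluster"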
Console
To verify that your cluster's nodes do not have external IP addresses, perform the following steps:
Visit the GKE menu in the Cloud Console.
In the list of clusters, click the desired cluster.
Under Node pools, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.
In the list of instances, verify that your instances do not have external IP addresses.
Viewing the cluster's subnet and secondary address ranges
After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.
gcloud
List all subnets
To list the subnets in your cluster's network, run the following command:
gcloud compute networks subnets list --network [NETWORK]
where [NETWORK] is the private cluster's network. If you created the cluster with an automatically-created subnet, use default.
In the command output, find the name of the cluster's subnet.
View cluster's subnet
Get information about the automatically created subnet:
gcloud compute networks subnets describe [SUBNET_NAME]
where [SUBNET_NAME] is the name of the subnet.
The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):
...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
rangeName: gke-private-cluster-1-services-163e3c97
...
Console
Visit the VPC networks page in the Cloud Console.
Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.
Under IP address range, you can see the primary address range of your subnet. This is the range that is used for nodes.
Under Secondary IP ranges, you can see the IP address range for Pods and the range for Services.
Viewing a private cluster's endpoints
You can view a private cluster's endpoints using the gcloud command-line tool or the Cloud Console.
gcloud
Run the following command:
gcloud container clusters describe [CLUSTER_NAME]
The output shows both the private and public endpoints:
...
privateClusterConfig:
  enablePrivateEndpoint: true
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.0.32/28
  privateEndpoint: 172.16.0.34
  publicEndpoint: 35.239.154.67
Console
Perform the following steps:
Visit the GKE menu in the Cloud Console.
From the list, click the desired cluster.
From the Details tab, under Cluster, look for the Endpoint field.
Pulling container images from an image registry
In a private cluster, the container runtime can pull container images from Container Registry; it cannot pull images from any other container image registry on the internet. This is because the nodes in a private cluster do not have external IP addresses, so by default they cannot communicate with services outside of the Google network.
The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.
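To check whether a subnet has Private Google Access enabled, and to enable it if it does not, you could use commands like the following minimal sketch; the subnet name and region here are placeholders:

# Print true or false depending on whether Private Google Access is enabled.
gcloud compute networks subnets describe my-subnet-0 \
    --region us-central1 \
    --format "value(privateIpGoogleAccess)"

# Enable Private Google Access on the subnet if needed.
gcloud compute networks subnets update my-subnet-0 \
    --region us-central1 \
    --enable-private-ip-google-access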
The following command creates a Deployment that pulls a sample image from a Google-owned Container Registry repository:
kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0
Adding firewall rules for specific use cases
This section explains how to add a firewall rule to a private cluster. By default, firewall rules restrict your cluster master to only initiate TCP connections to your nodes on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule to allow TCP connections on port 8080. To grant such access, you can add firewall rules.
Adding a firewall rule allows traffic from the cluster master to all of the following:
- The specified port of each node (hostPort).
- The specified port of each Pod running on these nodes.
- The specified port of each Service running on these nodes.
To learn about firewall rules, refer to Firewall Rules in the Cloud Load Balancing documentation.
To add a firewall rule in a private cluster, you need to record the cluster master's CIDR block and the target used by the cluster's existing firewall rules. After you have recorded these values, you can create the rule.
Step 1. View cluster master's CIDR block
You need the cluster master's CIDR block to add a firewall rule.
gcloud
Run the following command:
gcloud container clusters describe [CLUSTER_NAME]
In the command output, take note of the value in the masterIpv4CidrBlock field.
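If you only need that one value, you could optionally filter the output with a format expression, for example:

gcloud container clusters describe [CLUSTER_NAME] \
    --format "value(privateClusterConfig.masterIpv4CidrBlock)"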
Console
Visit the GKE menu in the Cloud Console.
Select the desired cluster.
From the Details tab, under Cluster, take note of the value in the Master address range field.
Step 2. View existing firewall rules
You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.
gcloud
Run the following command:
gcloud compute firewall-rules list \
    --filter 'name~^gke-[CLUSTER_NAME]' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'
In the command output, take note of the value in the Targets field.
Console
Perform the following steps:
Visit the Firewall rules menu in the Cloud Console.
Fill the Filter resources box with gke-[CLUSTER_NAME].
In the results, take note of the value in the Targets field.
Step 3. Add a firewall rule
gcloud
Run the following command:
gcloud compute firewall-rules create [FIREWALL_RULE_NAME] \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges [MASTER_CIDR_BLOCK] \
    --rules [PROTOCOL]:[PORT] \
    --target-tags [TARGET]
where:
- [FIREWALL_RULE_NAME] is the name you choose for the firewall rule.
- [MASTER_CIDR_BLOCK] is the cluster master's CIDR block that you collected previously.
- [PROTOCOL]:[PORT] is the desired port and its protocol, tcp or udp.
- [TARGET] is the target value that you collected previously.
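As a concrete illustration, the following hypothetical command would allow the master to reach heapster on TCP port 8080, as in the kubectl top example earlier. The rule name and the master CIDR (taken from private-cluster-0 on this page) are placeholders, and [TARGET] is still the target value you recorded:

gcloud compute firewall-rules create my-firewall-rule \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.32/28 \
    --rules tcp:8080 \
    --target-tags [TARGET]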
Console
Perform the following steps:
Visit the Firewall rules menu in the Cloud Console.
Click Create firewall rule.
Fill the Name box with the desired name for the firewall rule.
From the Network drop-down menu, select the relevant network.
From Direction of traffic, click Ingress.
From Action on match, click Allow.
From the Targets drop-down menu, select Specified target tags.
Fill the Target tags box with the target value that you previously collected.
From the Source filter drop-down menu, select IP ranges.
Fill the Source IP ranges box with the cluster master's CIDR block that you collected previously.
From Protocols and ports, click Specified protocols and ports, check the box for the relevant protocol (TCP or UDP), and fill the box with the desired port.
Click Create.
Protecting a private cluster with VPC Service Controls
To further secure your GKE private clusters, you can protect them using VPC Service Controls.
VPC Service Controls provides additional security for your GKE private clusters to help mitigate the risk of data exfiltration. Using VPC Service Controls, you can add projects to service perimeters that protect resources and services from requests that originate outside the perimeter.
To learn more about service perimeters, refer to the Service perimeter configuration page of the VPC Service Controls documentation.
If you use Container Registry with your GKE private cluster, additional steps are required to use that private cluster with VPC Service Controls. For more information, refer to the Setting up Container Registry for GKE private clusters page.
Cleaning up
After completing the tasks on this page, follow these steps to remove the resources and prevent unwanted charges from accruing on your account:
Delete the clusters
gcloud
gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2 private-cluster-3
Console
Visit the GKE menu in Cloud Console.
Select each cluster.
Click Delete.
Delete the network
gcloud
gcloud compute networks delete my-net-0
Console
Go to the VPC networks page in the Cloud Console.
In the list of networks, for my-net-0, click the trash icon.
Beside each subnet, click the trash icon.
Requirements, restrictions, and limitations
Private clusters have the following requirements:
- A private cluster must be a VPC-native cluster, which has Alias IP Ranges enabled. VPC-native is enabled by default for new clusters. VPC-native clusters are not compatible with legacy networks.
Private clusters have the following restrictions:
- You cannot convert an existing, non-private cluster to a private cluster.
- You cannot use a cluster master, node, Pod, or Service IP range that overlaps with 172.17.0.0/16 if you are running a cluster with a version below 1.14.4.
- Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default internet gateway, causes a private cluster to stop functioning. For a private cluster to come up again after deleting the default route, you need to provision the restricted VIP statically.
- You can add up to 50 authorized networks (whitelisted CIDR blocks) in a project. For more information, refer to Add an authorized network to an existing cluster.
Private clusters have the following limitations:
- Each private cluster you create uses a unique VPC Network Peering. Each VPC network can peer with up to 25 other VPC networks.
- The size of the RFC 1918 block for the cluster master must be /28.
- While GKE can detect overlap with the cluster master address block, it cannot detect overlap within a shared VPC network.
- Nodes in a private cluster do not have outbound access to the public internet; they have limited access to Google APIs and services.
- The private IP address of the master in a regional cluster is only accessible from subnetworks within the same region, or from on-premises environments connected to the same region.
Troubleshooting
The following sections explain how to resolve common issues related to private clusters.
Cluster overlaps with active peer
- Symptoms
- Attempting to create a private cluster returns an error such as Google Compute Engine: An IP range in the peer network overlaps with an IP range in an active peer of the local network.
. - Potential causes
- You chose an overlapping master CIDR.
- Resolution
- Delete and recreate the cluster using a different master CIDR.
Can't reach cluster
- Symptoms
- After creating a private cluster, attempting to run kubectl commands against the cluster returns an error, such as Unable to connect to the server: dial tcp [IP_ADDRESS]: connect: connection timed out or Unable to connect to the server: dial tcp [IP_ADDRESS]: i/o timeout.
. - Potential causes
- kubectl is unable to talk to the cluster master.
- Resolution
- You need to add authorized networks for your cluster to whitelist your network's IP addresses.
Can't create cluster due to omitted flag
- Symptoms
- gcloud container clusters create returns an error such as Cannot specify --enable-private-endpoint without --enable-private-nodes.
- Potential causes
- You did not specify a necessary flag.
- Resolution
- Ensure that you specify the necessary flags. You cannot enable a private endpoint for the cluster master without also enabling private nodes.
Can't create cluster due to overlapping master IPv4 CIDR block
- Symptoms
- gcloud container clusters create returns an error such as The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
- Potential causes
- You specified a master CIDR block that overlaps with an existing subnet in your VPC.
- Resolution
- Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.
Can't create subnet
- Symptoms
- When you attempt to create a private cluster with an automatic subnet, or to create a custom subnet, you might encounter the following error: An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
- Potential causes
- The master CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same master CIDR.
- Resolution
- Try using a different CIDR range.
Can't pull image from public Docker Hub
- Symptoms
- A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- Potential causes
- Nodes in a private cluster do not have outbound access to the public internet. They have limited access to Google APIs and services, including Container Registry.
- Resolution
- You cannot fetch images directly from Docker Hub. However, you can use the Container Registry mirror of Docker Hub.
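For example, instead of referencing an image directly on Docker Hub, you could reference the cached copy on the mirror, assuming the image is available in the cache (the name nginx-mirror here is just an illustrative placeholder):

# Pull a Docker Hub image through the Container Registry mirror.
kubectl run nginx-mirror --image mirror.gcr.io/library/nginx:latest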