This page explains how to create a private Google Kubernetes Engine (GKE) cluster, which is a type of VPC-native cluster. In a private cluster, nodes only have internal IP addresses, which means that nodes and Pods are isolated from the internet by default.
Internal IP addresses for nodes come from the primary IP address range of the subnet you choose for the cluster. Pod IP addresses and Service IP addresses come from two subnet secondary IP address ranges of that same subnet. See IP ranges for VPC-native clusters for more details.
GKE versions 1.14.2 and later support any internal IP address ranges, including private ranges (RFC 1918 and other private ranges) and privately used public IP address ranges. See the VPC documentation for a list of valid internal IP address ranges.
To learn more about how private clusters work, refer to Private clusters.
Before you begin
Familiarize yourself with the requirements, restrictions, and limitations before proceeding to the next step.
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
Set up default gcloud settings using one of the following methods:
- Using gcloud init, if you want to be walked through setting defaults.
- Using gcloud config, to individually set your project ID, zone, and region.
Using gcloud init
If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.
- Run gcloud init and follow the directions:
gcloud init
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.
Using gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone COMPUTE_ZONE
- If you are working with Autopilot or regional clusters, set your default compute region:
gcloud config set compute/region COMPUTE_REGION
- Update gcloud to the latest version:
gcloud components update
- Ensure you have the correct permission to create clusters. At minimum, you should be a Kubernetes Engine Cluster Admin.
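For example, a project owner could grant that role with an IAM policy binding along these lines; this is a sketch, and the user email is a placeholder:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member user:USER_EMAIL \
    --role roles/container.clusterAdmin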
Access to the control plane
In private clusters, the control plane (master) has a private and public endpoint. There are three configuration combinations to control access to the cluster endpoints:
- Public endpoint access disabled: creates a private cluster with no client access to the public endpoint.
- Public endpoint access enabled, authorized networks enabled: creates a private cluster with limited access to the public endpoint.
- Public endpoint access enabled, authorized networks disabled: creates a private cluster with unrestricted access to the public endpoint.
See Access to cluster endpoints for an overview of the differences between the above configuration options.
Creating a private cluster with no client access to the public endpoint
In this section, you create a private cluster named private-cluster-0 that has private nodes and no client access to the public endpoint.
gcloud
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-private-nodes \
    --enable-private-endpoint
where:
- --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --enable-private-endpoint indicates that the cluster is managed using the private IP address of the control plane API endpoint.
- --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster. The use of non-RFC 1918 internal IP addresses is supported.
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-0.
For Standard clusters, from the navigation pane, under Cluster, click Networking.
Select the Private cluster radio button.
Clear the Access master using its external IP address checkbox.
(Optional for Autopilot) Set Master IP range to 172.16.0.32/28.
Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.
Click Create.
API
To create a cluster with no publicly-reachable control plane, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-0.
- The secondary range used for Pods.
For example, suppose you created a VM in the primary range of my-subnet-0. Then on that VM, you could configure kubectl to use the internal IP address of the control plane.
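A minimal way to do this from that VM, assuming you set a default zone or region in Before you begin, is to fetch credentials with the --internal-ip flag, which writes the control plane's internal IP address into your kubeconfig:
gcloud container clusters get-credentials private-cluster-0 --internal-ip
kubectl get nodes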
If you want to access the control plane from outside my-subnet-0, you must authorize at least one address range to have access to the private endpoint.
Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-0.
For example:
- my-subnet-0: 10.0.0.0/22
- Pod secondary range: 10.52.0.0/14
- VM address: 10.128.0.3
You could authorize the VM to access the control plane by using this command:
gcloud container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.3/32
Creating a private cluster with limited access to the public endpoint
When creating a private cluster using this configuration, you can choose to use an automatically generated subnet, or a custom subnet.
Using an automatically generated subnet
In this section, you create a private cluster named private-cluster-1 where GKE automatically generates a subnet for your cluster nodes.
The subnet has Private Google Access enabled. In the subnet,
GKE automatically creates two secondary ranges: one for Pods
and one for Services.
gcloud
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-private-nodes
where:
- --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --master-ipv4-cidr 172.16.0.0/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster. The use of non-RFC 1918 internal IP addresses is supported.
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-1.
For Standard clusters, from the navigation pane, under Cluster, click Networking.
Select the Private cluster radio button.
To create a control plane that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.
(Optional for Autopilot) Set Master IP range to 172.16.0.0/28. Ensure that the range does not overlap with an existing subnet in the VPC or on-premises.
Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.
Select the Enable master authorized networks checkbox.
Click Create.
API
Specify the privateClusterConfig field in the Cluster API resource:
{ "name": "private-cluster-1", ... "ipAllocationPolicy": { "createSubnetwork": true, }, ... "privateClusterConfig" { "enablePrivateNodes": boolean # Creates nodes with internal IP addresses only "enablePrivateEndpoint": boolean # false creates a cluster control plane with a publicly-reachable endpoint "masterIpv4CidrBlock": string # CIDR block for the cluster control plane "privateEndpoint": string # Output only "publicEndpoint": string # Output only } }
At this point, these are the only IP addresses that have access to the cluster control plane:
- The primary range of my-subnet-1.
- The secondary range used for Pods.
Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:
gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29
Now these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-1.
- The secondary range used for Pods.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using a custom subnet
In this section, you create a private cluster named private-cluster-2. You create a network, my-net-0. You create a subnet, my-subnet-2, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has two secondary address ranges: my-pods-2 for the Pod IP addresses, and my-services-2 for the Service IP addresses.
gcloud
Create a network
First, create a network for your cluster. The following command creates a network, my-net-0:
gcloud compute networks create my-net-0 \
    --subnet-mode custom
Create a subnet and secondary ranges
Next, create a subnet, my-subnet-2, in the my-net-0 network, with secondary ranges my-pods-2 for Pods and my-services-2 for Services:
gcloud compute networks subnets create my-subnet-2 \
    --network my-net-0 \
    --region us-central1 \
    --range 192.168.0.0/20 \
    --secondary-range my-pods-2=10.4.0.0/14,my-services-2=10.0.32.0/20 \
    --enable-private-ip-google-access
Create a private cluster
Now, create a private cluster, private-cluster-2, using the network, subnet, and secondary ranges you created.
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-2 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --network my-net-0 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods-2 \
    --services-secondary-range-name my-services-2 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.16/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-2 \
    --region us-central1 \
    --enable-master-authorized-networks \
    --network my-net-0 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods-2 \
    --services-secondary-range-name my-services-2 \
    --enable-private-nodes
Console
Create a network, subnet, and secondary ranges
Visit the VPC networks page in Cloud Console.
Click add_box Create VPC network.
For Name, enter my-net-0.
Ensure that Subnet creation mode is set to Custom.
In the New subnet section, for Name, enter my-subnet-2.
In the Region drop-down list, select the desired region.
For IP address range, enter 192.168.0.0/20.
Click Create secondary IP range. For Subnet range name, enter my-services-2, and for Secondary IP range, enter 10.0.32.0/20.
Click Add IP range. For Subnet range name, enter my-pods-2, and for Secondary IP range, enter 10.4.0.0/14.
Set Private Google Access to On.
Click Done.
Click Create.
Create a private cluster
Create a private cluster that uses your subnet:
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-2.
For Standard clusters, from the navigation pane, under Cluster, click Networking.
Select the Private cluster radio button.
To create a control plane that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.
(Optional for Autopilot) Set Master IP range to 172.16.0.16/28.
In the Network drop-down list, select my-net-0.
In the Node subnet drop-down list, select my-subnet-2.
Clear the Automatically create secondary ranges checkbox.
In the Pod secondary CIDR range drop-down list, select my-pods-2.
In the Services secondary CIDR range drop-down list, select my-services-2.
Select the Enable master authorized networks checkbox.
Click Create.
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-2.
- The secondary range my-pods-2.
Suppose you have a group of machines, outside of my-net-0, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:
gcloud container clusters update private-cluster-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-2.
- The secondary range my-pods-2.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using Cloud Shell to access a private cluster
The private cluster you created in the preceding exercise, private-cluster-2, has a public endpoint and has authorized networks enabled. If you want to use Cloud Shell to access the cluster, you must add the public IP address of your Cloud Shell to the cluster's list of authorized networks.
To do this:
In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:
dig +short myip.opendns.com @resolver1.opendns.com
Add the external address of your Cloud Shell to your cluster's list of authorized networks:
gcloud container clusters update private-cluster-2 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --master-authorized-networks EXISTING_AUTH_NETS,SHELL_IP/32
Replace the following:
- EXISTING_AUTH_NETS: your existing list of authorized networks. You can find your authorized networks in the console or by running the following command:
gcloud container clusters describe private-cluster-2 \
    --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"
- SHELL_IP: the external IP address of your Cloud Shell.
Get credentials, so that you can use kubectl to access the cluster:
gcloud container clusters get-credentials private-cluster-2 \
    --zone us-central1-c \
    --project PROJECT_ID
Replace PROJECT_ID with your project ID.
Now you can use kubectl, in Cloud Shell, to access your private cluster. For example:
kubectl get nodes
Creating a private cluster with unrestricted access to the public endpoint
In this section, you create a private cluster where any IP address can access the control plane.
gcloud
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.32/28
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-private-nodes
where:
- --create-subnetwork name=my-subnet-3 causes GKE to automatically create a subnet named my-subnet-3.
- --no-enable-master-authorized-networks disables authorized networks for the cluster.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
- --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster. The use of non-RFC 1918 internal IP addresses is supported.
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-3.
For Standard clusters, from the navigation pane, under Cluster, click Networking.
Select the Private cluster option.
Keep the Access master using its external IP address checkbox selected.
(Optional for Autopilot) Set Master IP range to 172.16.0.32/28.
Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.
Clear the Enable master authorized networks checkbox.
Click Create.
Other private cluster configurations
In addition to the preceding configurations, you can run private clusters with the following configurations.
Granting private nodes outbound internet access
To provide outbound internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway.
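For example, a minimal Cloud NAT setup for the custom network created earlier on this page might look like the following sketch; the router name, NAT configuration name, and region are assumptions:
gcloud compute routers create my-nat-router \
    --network my-net-0 \
    --region us-central1
gcloud compute routers nats create my-nat-config \
    --router my-nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges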
Creating a private cluster in a shared VPC network
To learn how to create a private cluster in a shared VPC network, refer to the Shared VPC documentation.
Deploying a Windows Server container application to a private cluster
To learn how to deploy a Windows Server container application to a private cluster, refer to the Windows node pool documentation.
Accessing the control plane's private endpoint globally
The control plane's private endpoint is implemented by an internal TCP/UDP load balancer in the control plane's VPC network. Clients that are internal or are connected through Cloud VPN tunnels and Cloud Interconnect VLAN attachments can access internal TCP/UDP load balancers.
By default, these clients must be located in the same region as the load balancer.
When you enable control plane global access, the internal TCP/UDP load balancer is globally accessible: Client VMs and on-premises systems can connect to the control plane's private endpoint, subject to the authorized networks configuration, from any region.
For more information about the internal TCP/UDP load balancers and global access, see Internal load balancers and connected networks.
Enabling control plane private endpoint global access
This section applies only to clusters created in the Standard mode.
By default, global access is not enabled for the control plane's private
endpoint when you create a private cluster. To enable control plane global
access, use gcloud
or the Google Cloud Console.
gcloud
Add the --enable-master-global-access flag to create a private cluster with global access to the control plane's private endpoint enabled:
gcloud container clusters create CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias \
--master-ipv4-cidr=172.16.16.0/28 \
--enable-master-global-access
You can also enable global access to the control plane's private endpoint for an existing private cluster:
gcloud container clusters update CLUSTER_NAME \
--enable-master-global-access
Console
To create a new private cluster with control plane global access enabled, perform the following steps:
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
Enter a Name.
From the navigation pane, under Cluster, click Networking.
Select Private cluster.
Select the Master global access checkbox.
Configure other fields as desired.
Click Create.
To enable control plane global access for an existing private cluster, perform the following steps:
Visit the Google Kubernetes Engine menu in Cloud Console.
Next to the cluster you want to edit, click more_vert Actions, then click edit Edit.
In the Networking section, next to Master global access, click edit Edit.
In the Edit master global access dialog, select the Enable Master global access checkbox.
Click Save Changes.
Verifying control plane private endpoint global access
You can verify that global access to the control plane's private endpoint is enabled by running the following command and looking at its output.
gcloud container clusters describe CLUSTER_NAME
The output includes a privateClusterConfig section where you can see the status of masterGlobalAccessConfig.
privateClusterConfig:
enablePrivateNodes: true
masterIpv4CidrBlock: 172.16.1.0/28
peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
privateEndpoint: 172.16.1.2
publicEndpoint: 34.68.128.12
masterGlobalAccessConfig:
enabled: true
Connecting to the control plane's private endpoint from on-premises networks
When you create a GKE private cluster and disable the control plane's public endpoint, you can still connect to the control plane's private endpoint from an on-premises network using tools like kubectl. This section describes the networking requirements necessary to access the control plane's private endpoint from an on-premises network.
The following diagram shows a routing path between an on-premises network and GKE control plane nodes:
To connect to a control plane's private endpoint from an on-premises network, complete the following tasks:
Identify the peering between your cluster's VPC network and the control plane's VPC network:
gcloud container clusters describe CLUSTER_NAME
The output of this command includes the cluster's privateClusterConfig.peeringName field. This is the name of the peering between your cluster and the control plane's VPC network. For example:
privateClusterConfig:
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.1.0/28
  peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
  privateEndpoint: 172.16.1.2
  publicEndpoint: 34.68.128.12
Configure your VPC network to export its custom routes in the peering relationship to the control plane's VPC network. The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premises resources.
Update the peering connection, enabling the export of custom routes, for the peering connection you identified in the previous step.
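For example, assuming the peering name from the previous step, the update is the counterpart of the command shown later in Disabling custom route export:
gcloud compute networks peerings update PEERING_NAME \
    --network VPC_NAME \
    --export-custom-routes
Replace PEERING_NAME with the value of privateClusterConfig.peeringName and VPC_NAME with the name of your VPC.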
To provide a path for traffic from your on-premises network to the control plane's VPC network, do one of the following:
For Cloud Interconnect and Cloud VPN: Advertise the control plane's CIDR range using a Cloud Router custom route advertisement. If control plane global access is disabled, this custom route advertisement must be on a BGP session of a Cloud Router in the same region as the cluster. If control plane global access is enabled, you can place this custom route advertisement on a Cloud Router in any region. See Advertising Custom IP Ranges for more information. A sketch of such an advertisement appears after this list.
For a Classic VPN tunnel that does not use dynamic routing: You must configure a static route for the control plane's CIDR range in your on-premises routers.
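For the Cloud Interconnect and Cloud VPN option above, a hedged sketch of advertising the control plane's CIDR range on an existing BGP session follows; the router name, peer name, region, and CIDR are assumptions, and --set-advertisement-ranges replaces any custom ranges already advertised on that peer:
gcloud compute routers update-bgp-peer my-router \
    --peer-name my-bgp-peer \
    --region us-central1 \
    --advertisement-mode CUSTOM \
    --set-advertisement-ranges 172.16.1.0/28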
Considerations for on-premises route advertisements
The CIDRs your on-premises network advertises to Cloud Routers in your cluster's VPC network must adhere to the following conditions:
- While your cluster's VPC will accept a default route, the control plane's VPC network always rejects routes with a 0.0.0.0/0 destination (a default route) because the control plane's VPC network already has a local default route, and that local route is always considered first. If your on-premises router advertises a default route in your VPC network, it must also advertise specific on-premises destinations so that the control plane has a return path to the on-premises network. These more specific destinations result in more specific custom dynamic routes in your VPC network, and those more specific routes are accepted by the control plane's VPC network through VPC Network Peering.
- When the control plane's VPC network accepts other broad routes, they break connectivity to the public IP addresses for Google APIs and services. As a representative example, you should not advertise routes with destinations 0.0.0.0/1 and 128.0.0.0/1. Refer to the previous point for an alternative.
- Pay close attention to the Cloud Router limits, especially the maximum number of unique destinations for learned routes.
Disabling custom route export
To disable custom route export from your VPC:
gcloud compute networks peerings update PEERING_NAME --network VPC_NAME --no-export-custom-routes
Replace the following:
- PEERING_NAME: the value of privateClusterConfig.peeringName.
- VPC_NAME: the name of your VPC.
To find the peeringName, see the first step of the instructions above to enable custom route export.
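Alternatively, assuming your default zone or region is set, you can print just the peering name:
gcloud container clusters describe CLUSTER_NAME \
    --format="value(privateClusterConfig.peeringName)"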
Verify that nodes do not have external IP addresses
After you create a private cluster, verify that the cluster's nodes do not have external IP addresses.
gcloud
Run the following command:
kubectl get nodes --output wide
The output's EXTERNAL-IP column is empty:
STATUS ... VERSION EXTERNAL-IP OS-IMAGE ...
Ready v1.8.7-gke.1 Container-Optimized OS from Google
Ready v1.8.7-gke.1 Container-Optimized OS from Google
Ready v1.8.7-gke.1 Container-Optimized OS from Google
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
In the list of clusters, click the cluster name.
On the Clusters page, click the Nodes tab.
Under Node Pools, click the node pool name.
On the Node pool details page, under Instance groups, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.
In the list of instances, verify that your instances do not have external IP addresses.
Viewing the cluster's subnet and secondary address ranges
After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.
gcloud
List all subnets
To list the subnets in your cluster's network, run the following command:
gcloud compute networks subnets list --network NETWORK_NAME
Replace NETWORK_NAME with the private cluster's network. If you created the cluster with an automatically-created subnet, use default.
In the command output, find the name of the cluster's subnet.
View cluster's subnet
Get information about the automatically created subnet:
gcloud compute networks subnets describe SUBNET_NAME
Replace SUBNET_NAME with the name of the subnet.
The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):
...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
rangeName: gke-private-cluster-1-services-163e3c97
...
Console
Visit the VPC networks page in the Cloud Console.
Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.
Under IP address range, you can see the primary address range of your subnet. This is the range used for nodes.
Viewing a private cluster's endpoints
You can view a private cluster's endpoints using the gcloud command-line tool or Cloud Console.
gcloud
Run the following command:
gcloud container clusters describe CLUSTER_NAME
The output shows both the private and public endpoints:
...
privateClusterConfig:
enablePrivateEndpoint: true
enablePrivateNodes: true
masterIpv4CidrBlock: 172.16.0.32/28
privateEndpoint: 172.16.0.34
publicEndpoint: 35.239.154.67
Console
Visit the Google Kubernetes Engine menu in the Cloud Console.
In the cluster list, click the cluster name.
In the Details tab, under Cluster basics, look for the Endpoint field.
Pulling container images from an image registry
In a private cluster, the container runtime can pull container images from Container Registry; it cannot pull images from any other container image registry on the internet. This is because the nodes in a private cluster do not have external IP addresses, so by default they cannot communicate with services outside of the Google network.
The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.
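If a node subnet does not already have Private Google Access enabled (the subnets created on this page do), you can turn it on with a command along these lines; the subnet name and region shown are assumptions:
gcloud compute networks subnets update my-subnet-2 \
    --region us-central1 \
    --enable-private-ip-google-access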
The following command creates a Deployment that pulls a sample image from a Google-owned Container Registry repository:
kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0
Adding firewall rules for specific use cases
This section explains how to add a firewall rule to a private cluster. By default,
firewall rules restrict your cluster control plane to only initiate TCP connections to
your nodes on ports 443
(HTTPS) and 10250
(kubelet). For some Kubernetes
features, you might need to add firewall rules to allow access on additional
ports.
Kubernetes features that require additional firewall rules include:
- Admission webhooks
- Aggregated API servers
- Webhook conversion
- Dynamic audit configuration
- Generally, any API that has a ServiceReference field requires additional firewall rules.
Adding a firewall rule allows traffic from the cluster control plane to all of the following:
- The specified port of each node (hostPort).
- The specified port of each Pod running on these nodes.
- The specified port of each Service running on these nodes.
To learn about firewall rules, refer to Firewall rules in the Cloud Load Balancing documentation.
To add a firewall rule in a private cluster, you need to record the cluster control plane's CIDR block and the target tag used by the cluster's existing firewall rules. After you have recorded these values, you can create the rule.
Step 1. View control plane's CIDR block
You need the cluster control plane's CIDR block to add a firewall rule.
gcloud
Run the following command:
gcloud container clusters describe CLUSTER_NAME
Replace CLUSTER_NAME with the name of your private cluster.
In the command output, take note of the value in the masterIpv4CidrBlock field.
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
In the cluster list, click the desired cluster name.
In the Details tab, under Networking, take note of the value in the Master address range field.
Step 2. View existing firewall rules
You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.
gcloud
Run the following command:
gcloud compute firewall-rules list \
    --filter 'name~^gke-CLUSTER_NAME' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'
In the command output, take note of the value in the Targets field.
Console
Visit the Firewall menu in Cloud Console.
For Filter table, enter gke-CLUSTER_NAME.
In the results, take note of the value in the Targets field.
Step 3. Add a firewall rule
gcloud
Run the following command:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges MASTER_CIDR_BLOCK \
    --rules PROTOCOL:PORT \
    --target-tags TARGET
Replace the following:
- FIREWALL_RULE_NAME: the name you choose for the firewall rule.
- MASTER_CIDR_BLOCK: the cluster control plane's CIDR block (masterIpv4CidrBlock) that you collected previously.
- PROTOCOL:PORT: the desired port and its protocol, tcp or udp.
- TARGET: the target (Targets) value that you collected previously.
Console
Visit the Firewall menu in Cloud Console.
Click add_box Create Firewall Rule.
For Name, enter the desired name for the firewall rule.
In the Network drop-down list, select the relevant network.
In Direction of traffic, click Ingress.
In Action on match, click Allow.
In the Targets drop-down list, select Specified target tags.
For Target tags, enter the target value that you noted previously.
In the Source filter drop-down list, select IP ranges.
For Source IP ranges, enter the cluster control plane's CIDR block.
In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number in the protocol field.
Click Create.
Protecting a private cluster with VPC Service Controls
To further secure your GKE private clusters, you can protect them using VPC Service Controls.
VPC Service Controls provides additional security for your GKE private clusters to help mitigate the risk of data exfiltration. Using VPC Service Controls, you can add projects to service perimeters that protect resources and services from requests that originate outside the perimeter.
To learn more about service perimeters, refer to the Service perimeter configuration page of the VPC Service Controls documentation.
If you use Container Registry with your GKE private cluster, additional steps are required to use that private cluster with VPC Service Controls. For more information, refer to the Setting up Container Registry for GKE private clusters page.
VPC peering reuse
Any private clusters you create after January 15, 2020 reuse VPC Network Peering connections.
Any private clusters you created prior to January 15, 2020 use a unique VPC Network Peering connection. Each VPC network can peer with up to 25 other VPC networks, which means that for these clusters there is a limit of at most 25 private clusters per network (assuming peerings are not being used for other purposes).
This feature is not backported to previous releases. To enable VPC Network Peering reuse on older private clusters, you can delete a cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.
Each location can support a maximum of 75 private clusters if the clusters have VPC peering reuse enabled. Zones and regions are treated as separate locations. For example, you can create up to 75 private zonal clusters in us-east1-a and another 75 private regional clusters in us-east1. This also applies if you are using private clusters in a shared VPC network. The maximum number of connections to a single VPC network is 25, which means you can only create private clusters using 25 unique locations.
You can check if your private cluster reuses VPC Peering connections.
gcloud
gcloud container clusters describe CLUSTER_NAME \
    --zone=ZONE_NAME \
    --format="value(privateClusterConfig.peeringName)"
If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.
Console
Check the VPC peering row on the Cluster details page. If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.
Cleaning up
After completing the tasks on this page, follow these steps to remove the resources and avoid incurring unwanted charges on your account:
Delete the clusters
gcloud
gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2 private-cluster-3
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Select each cluster.
Click delete Delete.
Delete the network
gcloud
gcloud compute networks delete my-net-0
Console
Visit the VPC networks page in Cloud Console.
In the list of networks, click my-net-0.
On the VPC network details page, click delete Delete VPC Network.
In the Delete a network dialog, click Delete.
Requirements, restrictions, and limitations
Private clusters have the following requirements:
- A private cluster must be a VPC-native cluster, which has Alias IP ranges enabled. VPC-native is enabled by default for new clusters. VPC-native clusters are not compatible with legacy networks.
Private clusters have the following restrictions:
- You cannot convert an existing, non-private cluster to a private cluster.
- For clusters running versions earlier than 1.16.9, or versions between 1.17.0 and 1.17.4, the cluster control plane is not reachable from 172.17.0.0/16 CIDRs.
- For clusters running versions earlier than 1.14.4, a cluster control plane, node, Pod, or Service IP range cannot overlap with 172.17.0.0/16.
- When you use 172.17.0.0/16 for your control plane IP range, you cannot use this range for node, Pod, or Service IP addresses. This restriction applies to zonal clusters running versions 1.14.4 or later, and regional clusters running versions 1.16.9 - 1.17.0, or 1.17.4 and later.
- Deleting the VPC peering between the cluster control plane and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster control plane to nodes on port 10250, or deleting the default route to the default internet gateway, causes a private cluster to stop functioning. For a private cluster to come up again after deleting the default route, you need to provision the restricted VIP statically.
- You can add up to 50 authorized networks (allowed CIDR blocks) in a project. For more information, refer to Add an authorized network to an existing cluster.
Private clusters have the following limitations:
- The size of the RFC 1918 block for the cluster control plane must be /28.
- While GKE can detect overlap with the control plane address block, it cannot detect overlap within a shared VPC network.
- All nodes in a private cluster are created without a public IP; they have limited access to Google APIs and services. To provide outbound internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway. To allow nodes to communicate with Google APIs and services, enable Private Google Access on your subnet.
- Any private clusters you created prior to January 15, 2020 have a limit of at most 25 private clusters per network (assuming peerings are not being used for other purposes). See VPC peering reuse for more information.
- Every private cluster requires a peering route between your and Google's VPCs, but only one peering operation can happen at a time. If you attempt to create multiple private clusters at the same time, cluster creation may time out. To avoid this, create new private clusters serially so that the VPC peering routes will already exist for each subsequent private cluster. Attempting to create a single private cluster may also time out if there are operations running on your VPC.
- If you expand the primary IP range of a subnet to accommodate additional nodes, then you must add the expanded subnet's primary IP address range to the list of master authorized networks for your cluster. If you don't, ingress-allow firewall rules relevant to the master aren't updated, and new nodes created in the expanded IP address space will not be able to register with the master. This can lead to an outage where new nodes are continuously deleted and replaced. Such an outage can happen when performing node pool upgrades or when nodes are automatically replaced due to liveness probe failures.
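As a hedged example for the last limitation, after expanding a subnet's primary range you might update the authorized networks like this; EXPANDED_SUBNET_RANGE is a placeholder for the subnet's new primary CIDR:
gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks EXISTING_AUTH_NETS,EXPANDED_SUBNET_RANGE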
Troubleshooting
The following sections explain how to resolve common issues related to private clusters.
Cluster overlaps with active peer
- Symptoms
- Attempting to create a private cluster returns an error such as Google Compute Engine: An IP range in the peer network overlaps with an IP range in an active peer of the local network.
- Potential causes
- You chose an overlapping control plane CIDR.
- Resolution
- Delete and recreate the cluster using a different control plane CIDR.
Can't reach cluster
- Symptoms
- After creating a private cluster, attempting to run kubectl commands against the cluster returns an error, such as Unable to connect to the server: dial tcp [IP_ADDRESS]: connect: connection timed out or Unable to connect to the server: dial tcp [IP_ADDRESS]: i/o timeout.
- Potential causes
- kubectl is unable to talk to the cluster control plane.
- Resolution
You need to add authorized networks for your cluster to allow connections to the cluster control plane from your network.
Ensure that control plane private endpoint global access is enabled if clients are running kubectl commands from a region different from the cluster's. See Accessing the control plane's private endpoint globally for more information.
Can't create cluster due to omitted flag
- Symptoms
- gcloud container clusters create returns an error such as Cannot specify --enable-private-endpoint without --enable-private-nodes.
- Potential causes
- You did not specify a necessary flag.
- Resolution
- Ensure that you specify the necessary flags. You cannot enable a private endpoint for the cluster control plane without also enabling private nodes.
Can't create cluster due to overlapping IPv4 CIDR block
- Symptoms
- gcloud container clusters create returns an error such as The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
- Potential causes
- You specified a control plane CIDR block that overlaps with an existing subnet in your VPC.
- Resolution
- Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.
Can't create subnet
- Symptoms
- When you attempt to create a private cluster with an automatic subnet, or to
create a custom subnet, you might encounter the following error:
An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
- Potential causes
- The control plane CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same control plane CIDR.
- Resolution
- Try using a different CIDR range.
Can't pull image from public Docker Hub
- Symptoms
- A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- Potential causes
- Nodes in a private cluster do not have external IP addresses, so they do not meet the internet access requirements. However, the nodes can access Google APIs and services, including Container Registry, if you have enabled Private Google Access and met its network requirements.
- Resolution
Use one of the following solutions:
Copy the images in your private cluster from Docker Hub to Container Registry. See Migrating containers from a third-party registry for more information.
Use mirror.gcr.io, which caches copies of frequently-accessed images from Docker Hub. For more information, see Pulling cached Docker Hub images.
If you must pull images from Docker Hub or another public repository, use Cloud NAT or an instance-based proxy that is the target for a static 0.0.0.0/0 route.
API request that triggers admission webhook timing out
- Symptoms
An API request that triggers an admission webhook configured to use a service with a targetPort other than 443 times out, causing the request to fail:
Error from server (Timeout): request did not complete within requested timeout 30s
- Potential causes
By default, the firewall does not allow TCP connections to nodes except on ports 443 (HTTPS) and 10250 (kubelet). An admission webhook attempting to communicate with a Pod on a port other than 443 will fail if there is not a custom firewall rule that permits the traffic.
- Resolution
Add a firewall rule for your specific use case.
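For example, to let the control plane reach webhooks served on port 8443 (a hypothetical targetPort; substitute your own), a rule following the pattern from Adding firewall rules for specific use cases might look like this, where MASTER_CIDR_BLOCK and TARGET are the values you collected there:
gcloud compute firewall-rules create allow-control-plane-webhook \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges MASTER_CIDR_BLOCK \
    --rules tcp:8443 \
    --target-tags TARGET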
What's next
- Read the GKE network overview.
- Learn how to create VPC-native clusters.
- Learn more about VPC peering.