Creating a private cluster

This page explains how to create a private cluster in Google Kubernetes Engine (GKE). In a private cluster, the nodes have internal RFC 1918 IP addresses only, which ensures that their workloads are isolated from the public internet. To learn more about how private clusters work, refer to Private clusters.

Before you begin

Familiarize yourself with the requirements, restrictions, and limitations before moving on to the next step.

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.

Using gcloud config

  • Set your default project ID:
    gcloud config set project project-id
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone compute-zone
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region compute-region
  • Update gcloud to the latest version:
    gcloud components update
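
For example, a typical zonal-cluster setup might look like the following; my-project and us-central1-c are placeholder values, so substitute your own project ID and zone:

gcloud config set project my-project
gcloud config set compute/zone us-central1-c
gcloud components update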

Access to the master

In private clusters, the master has a private and public endpoint. There are three configuration combinations to control access to the cluster endpoints.

  • Public endpoint access disabled. This creates a private cluster with no client access to the public endpoint.
  • Public endpoint access enabled, master authorized networks enabled. This creates a private cluster with limited access to the public endpoint.
  • Public endpoint access enabled, master authorized networks disabled. This creates a private cluster with unrestricted access to the public endpoint.

See Access to cluster endpoints for an overview of the differences between the above configuration options.

Creating a private cluster with no client access to the public endpoint

In this section, you create a private cluster with private nodes and no client access to the public endpoint.

gcloud

Run the following command:

gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --enable-private-endpoint indicates that the cluster is managed using the private IP address of the master API endpoint.
  • --master-ipv4-cidr 172.16.0.32/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
  • --no-enable-basic-auth disables basic auth for the cluster.
  • --no-issue-client-certificate disables issuing a client certificate.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. For the Name, enter private-cluster-0.

  4. From the navigation pane, under Cluster, click Networking.

  5. Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.

  6. Select the Private cluster checkbox.

  7. Clear the Access master using its external IP address checkbox.

  8. Set Master IP range to 172.16.0.32/28.

  9. From the navigation pane, under Cluster, click Security.

  10. Ensure the Issue a client certificate checkbox is cleared.

  11. From the navigation pane, under Cluster, click Features.

  12. Ensure the Enable Kubernetes Dashboard checkbox is cleared.

  13. Click Create.

API

To create a cluster with no client access to the public endpoint, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.
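
For example, a minimal sketch of the request body, reusing the privateClusterConfig field names shown later on this page (other required cluster fields are omitted):

{
  "name": "private-cluster-0",
  ...
  "privateClusterConfig": {
    "enablePrivateNodes": true,
    "enablePrivateEndpoint": true,
    "masterIpv4CidrBlock": "172.16.0.32/28"
  }
}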

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.

For example, suppose you created a VM in the primary range of my-subnet-0. Then on that VM, you could configure kubectl to use the internal IP address of the master.
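
For example, from such a VM you could fetch credentials that point kubectl at the master's internal IP address. This is a minimal sketch; it assumes the Cloud SDK and kubectl are installed on the VM, and it uses the --internal-ip flag so that get-credentials writes the master's private endpoint to your kubeconfig:

gcloud container clusters get-credentials private-cluster-0 --internal-ip
kubectl get nodes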

If you want to access the cluster master from outside my-subnet-0, you must authorize at least one address range to have access to the private endpoint.

Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-0.

For example:

  • my-subnet-0: 10.0.0.0/22
  • Pod secondary range: 10.52.0.0/14
  • VM address: 10.128.0.3

You could authorize the VM to access the master by using this command:

gcloud container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.3/32

Creating a private cluster with limited access to the public endpoint

When creating a private cluster using this configuration, you can choose to use an automatically generated subnet, or a custom subnet.

Using an automatically generated subnet

In this section, you create a private cluster named private-cluster-1, where GKE automatically generates a subnet for your cluster nodes. The subnet has Private Google Access enabled. In the subnet, GKE automatically creates two secondary ranges: one for Pods and one for Services.

gcloud

Run the following command:

gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --master-ipv4-cidr 172.16.0.0/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
  • --no-enable-basic-auth disables basic auth for the cluster.
  • --no-issue-client-certificate disables issuing a client certificate.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. For the Name, enter private-cluster-1.

  4. From the navigation pane, under Cluster, click Networking.

  5. Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.

  6. Select the Private cluster checkbox.

  7. To create a master that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.

  8. Set Master IP range to 172.16.0.0/28.

  9. Select the Enable master authorized networks checkbox.

  10. From the navigation pane, under Cluster, click Security.

  11. Ensure the Issue a client certificate checkbox is cleared.

  12. From the navigation pane, under Cluster, click Features.

  13. Ensure the Enable Kubernetes Dashboard checkbox is cleared.

  14. Click Create.

API

Specify the privateClusterConfig field in the Cluster API resource:

{
  "name": "private-cluster-1",
  ...
  "ipAllocationPolicy": {
    "createSubnetwork": true
  },
  ...
  "privateClusterConfig": {
    "enablePrivateNodes": boolean, # Creates nodes with internal IP addresses only
    "enablePrivateEndpoint": boolean, # false creates a cluster master with a publicly-reachable endpoint
    "masterIpv4CidrBlock": string, # CIDR block for the cluster master
    "privateEndpoint": string, # Output only
    "publicEndpoint": string # Output only
  }
}

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-1.
  • The secondary range used for Pods.

Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

Now these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-1.
  • The secondary range used for Pods.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.
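
To confirm which ranges are currently authorized, you can inspect the cluster's master authorized networks configuration; this is the same command used later in the Cloud Shell section:

gcloud container clusters describe private-cluster-1 \
    --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"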

Using a custom subnet

In this section, you create a private cluster named private-cluster-2. You create a network, my-net-0. You create a subnet, my-subnet-2, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has two secondary address ranges: my-pods-2 for the Pod IP addresses, and my-services-2 for the Service IP addresses.

gcloud

Create a network

First, create a network for your cluster. The following command creates a network, my-net-0:

gcloud compute networks create my-net-0 \
    --subnet-mode custom

Create a subnet and secondary ranges

Next, create a subnet, my-subnet-2, in the my-net-0 network, with secondary ranges my-pods-2 for Pods and my-services-2 for Services:

gcloud compute networks subnets create my-subnet-2 \
    --network my-net-0\
    --region us-central1 \
    --range 192.168.0.0/20 \
    --secondary-range my-pods-2=10.4.0.0/14,my-services-2=10.0.32.0/20 \
    --enable-private-ip-google-access

Create a private cluster

Now, create a private cluster, private-cluster-2, using the network, subnet, and secondary ranges you created.

gcloud container clusters create private-cluster-2 \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --network my-net-0 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods-2 \
    --services-secondary-range-name my-services-2 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.16/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

Console

Create a network, subnet, and secondary ranges

  1. Visit the VPC networks page in Cloud Console.

    Go to the VPC networks page

  2. Click Create VPC network.

  3. For Name, enter my-net-0.

  4. Ensure that Subnet creation mode is set to Custom.

  5. From New subnet, in Name, enter my-subnet-2.

  6. In the Region drop-down list, select the desired region.

  7. For IP address range, enter 192.168.0.0/20.

  8. Click Create secondary IP range. For Subnet range name, enter my-services-2, and for Secondary IP range, enter 10.0.32.0/20.

  9. Click Add IP range. For Subnet range name, enter my-pods-2, and for Secondary IP range, enter 10.4.0.0/14.

  10. From Private Google Access, click On.

  11. Click Done.

  12. Click Create.

Create a private cluster

Create a private cluster that uses your subnet:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. For the Name, enter private-cluster-2.

  4. From the navigation pane, under Cluster, click Networking.

  5. Select the Private cluster checkbox.

  6. To create a master that is accessible from authorized external IP ranges, keep the Access master using its external IP address checkbox selected.

  7. Set Master IP range to 172.16.0.16/28.

  8. In the Network drop-down list, select my-net-0.

  9. In the Node subnet drop-down list, select my-subnet-2.

  10. Clear the Automatically create secondary ranges checkbox.

  11. In the Pod secondary CIDR range drop-down list, select my-pods-2.

  12. In the Services secondary CIDR range drop-down list, select my-services-2.

  13. Select the Enable master authorized networks checkbox.

  14. From the navigation pane, under Cluster, click Security.

  15. Ensure the Issue a client certificate checkbox is cleared.

  16. From the navigation pane, under Cluster, click Features.

  17. Ensure the Enable Kubernetes Dashboard checkbox is cleared.

  18. Click Create.

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-2.
  • The secondary range my-pods-2.

Suppose you have a group of machines, outside of my-net-0, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

At this point, these are the only IP addresses that have access to the cluster master:

  • The primary range of my-subnet-2.
  • The secondary range my-pods-2.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.

Using Cloud Shell to access a private cluster

The private cluster you created in Using an automatically generated subnet, private-cluster-1, has a public endpoint and has master authorized networks enabled. If you want to use Cloud Shell to access the cluster, you must add the public IP address of your Cloud Shell to the cluster's list of master authorized networks.

To do this:

  1. In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:

    dig +short myip.opendns.com @resolver1.opendns.com
    
  2. Add the external address of your Cloud Shell to your cluster's list of master authorized networks:

    gcloud container clusters update private-cluster-1 \
        --zone us-central1-c \
        --enable-master-authorized-networks \
        --master-authorized-networks existing-auth-nets,shell-IP/32
    

    where:

    • existing-auth-nets is your existing list of master authorized networks. You can find your master authorized networks in the console or by running the following command:

      gcloud container clusters describe private-cluster-1 --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"
      
    • shell-IP is the external IP address of your Cloud Shell.

  3. Get credentials, so that you can use kubectl to access the cluster:

    gcloud container clusters get-credentials private-cluster-1 \
        --zone us-central1-c \
        --project project-id
    

    where project-id is your project ID.

Now you can use kubectl, in Cloud Shell, to access your private cluster. For example:

kubectl get nodes

Creating a private cluster with unrestricted access to the public endpoint

In this section, you create a private cluster where any IP address can access the master.

gcloud

Run the following command:

gcloud container clusters create private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate

where:

  • --create-subnetwork name=my-subnet-3 causes GKE to automatically create a subnet named my-subnet-3.
  • --no-enable-master-authorized-networks disables authorized networks for the cluster.
  • --enable-ip-alias makes the cluster VPC-native.
  • --enable-private-nodes indicates that the cluster's nodes do not have external IP addresses.
  • --master-ipv4-cidr 172.16.0.32/28 specifies an RFC 1918 range for the master. This setting is permanent for this cluster.
  • --no-enable-basic-auth disables basic auth for the cluster.
  • --no-issue-client-certificate disables issuing a client certificate.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. For the Name, enter private-cluster-3.

  4. From the navigation pane, under Cluster, click Networking.

  5. Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.

  6. Select the Private cluster checkbox.

  7. Keep the Access master using its external IP address checkbox selected.

  8. Set Master IP range to 172.16.0.32/28.

  9. Clear the Enable master authorized networks checkbox.

  10. From the navigation pane, under Cluster, click Security.

  11. Ensure the Issue a client certificate checkbox is cleared.

  12. From the navigation pane, under Cluster, click Features.

  13. Ensure the Enable Kubernetes Dashboard checkbox is cleared.

  14. Click Create.

Other private cluster configurations

In addition to the preceding configurations, you can run private clusters with the following configurations.

Granting private nodes outbound internet access

To provide outbound internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway.
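
For example, a minimal Cloud NAT sketch; the router name nat-router, the NAT configuration name nat-config, the network my-net-0, and the region us-central1 are placeholders, so adjust them to match your environment:

gcloud compute routers create nat-router \
    --network my-net-0 \
    --region us-central1

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges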

Creating a private cluster in a shared VPC network

To learn how to create a private cluster in a shared VPC network, refer to the Shared VPC documentation.

Deploying a Windows Server container application to a private cluster

To learn how to deploy a Windows Server container application to a private cluster, refer to the Windows node pool documentation.

Routing between on-premises/cluster VPC and cluster master

When you create a GKE private cluster with a private master endpoint, the cluster's master is inaccessible from the public internet, but it needs to be accessible to client-side tools like kubectl. To enable traffic between the cluster master and your on-premises network, there must be routes between the on-premises network and the Google-owned VPC network that hosts the cluster master.

Diagram showing the routing between on-prem VPC and cluster master

The Google-owned VPC automatically exports the route to the master CIDR range to your VPC, which is connected to your on-premises network using Cloud VPN or Cloud Interconnect. However, the route to your on-premises environment must also be exported from your VPC to the Google-owned VPC.

To share the routes, enable the --export-custom-routes flag on the peering between your VPC and the Google-owned VPC.

  1. Identify the peering between the VPC for your cluster and the Google-owned VPC:

    gcloud container clusters describe cluster-name
    

    The output of this command includes the cluster's privateClusterConfig.peeringName field. This is the name of the peering between your cluster and the Google-owned VPC. For example:

    privateClusterConfig:
      enablePrivateNodes: true
      masterIpv4CidrBlock: 172.16.1.0/28
      peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
      privateEndpoint: 172.16.1.2
      publicEndpoint: 34.68.128.12
    
  2. Enable --export-custom-routes on the peering:

    gcloud compute networks peerings update peering --network network \
       --export-custom-routes
    

    where peering is the value of privateClusterConfig.peeringName identified in the previous step, and network is the name of your VPC.

    This flag causes network to advertise its routes to the Google-owned VPC where your cluster master is located. The next step explains how to advertise the route from your VPC to the Google-owned VPC to your on-premises environment.

  3. The route to the cluster master CIDR must also be advertised from your VPC to your on-premises environment. You can do this in two ways:

    • Recommended technique: Advertise the route to the master subnet as a custom route over eBGP from Cloud Router, as shown in the sketch after this list. See Advertising Custom IP Ranges for more information. This technique is recommended because the advertised routes are withdrawn if eBGP is unavailable, which helps prevent traffic from being blackholed.

    • Provision a static route on the on-prem router or edge device towards Google Cloud.
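
    For example, a sketch of advertising the master CIDR as a custom route from an existing Cloud Router; the router name my-router, the region, and the range 172.16.1.0/28 are placeholders, so use your own router and the masterIpv4CidrBlock of your cluster:

    gcloud compute routers update my-router \
        --region us-central1 \
        --advertisement-mode CUSTOM \
        --set-advertisement-groups ALL_SUBNETS \
        --set-advertisement-ranges 172.16.1.0/28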

Verifying that custom route export is enabled

Use this command to verify that the --export-custom-routes option is enabled on the peering between your VPC and the Google-owned VPC:

gcloud compute networks peerings list

The output of this command lists your Compute Engine network's peerings. Find the peering (whose name you found in step one of the above procedure) and verify that its EXPORT_CUSTOM_ROUTES column is True and the STATE column is ACTIVE.
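
For example, to limit the listing to the VPC network that contains your cluster (my-net-0 is a placeholder for your network name):

gcloud compute networks peerings list --network my-net-0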

Disabling custom route export

To disable custom route export from your VPC:

gcloud compute networks peerings update peering --network network --no-export-custom-routes

where:

  • peering is the value of privateClusterConfig.peeringName.
  • network is the name of your VPC.

To find the peeringName, see the first step of the instructions above to enable custom route export.

Verifying that nodes do not have external IP addresses

After you create a private cluster, verify that the cluster's nodes do not have external IP addresses.

gcloud

Run the following command:

kubectl get nodes --output wide

The output's EXTERNAL-IP column is empty:

STATUS ... VERSION        EXTERNAL-IP  OS-IMAGE ...
Ready      v1.8.7-gke.1                Container-Optimized OS from Google
Ready      v1.8.7-gke.1                Container-Optimized OS from Google
Ready      v1.8.7-gke.1                Container-Optimized OS from Google

Console

  1. Visit the GKE menu in the Cloud Console.

    Visit the GKE menu

  2. In the list of clusters, click the desired cluster.

  3. Under Node pools, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.

  4. In the list of instances, verify that your instances do not have external IP addresses.

Viewing the cluster's subnet and secondary address ranges

After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.

gcloud

List all subnets

To list the subnets in your cluster's network, run the following command:

gcloud compute networks subnets list --network network

where network is the private cluster's network. If you created the cluster with an automatically-created subnet, use default.

In the command output, find the name of the cluster's subnet.

View cluster's subnet

Get information about the automatically created subnet:

gcloud compute networks subnets describe subnet-name

where subnet-name is the name of the subnet.

The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):

...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
  rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
  rangeName: gke-private-cluster-1-services-163e3c97
...

Console

  1. Visit the VPC networks page in the Cloud Console.

    Go to the VPC networks page

  2. Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.

  3. Under IP address range, you can see the primary address range of your subnet. This is the range that is used for nodes.

  4. Under Secondary IP ranges, you can see the IP address range for Pods and the range for Services.

Viewing a private cluster's endpoints

You can view a private cluster's endpoints using the gcloud command-line tool or Cloud Console.

gcloud

Run the following command:

gcloud container clusters describe cluster-name

The output shows both the private and public endpoints:

...
privateClusterConfig:
  enablePrivateEndpoint: true
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.0.32/28
  privateEndpoint: 172.16.0.34
  publicEndpoint: 35.239.154.67

Console

  1. Visit the GKE menu in the Cloud Console.

    Visit the GKE menu

  2. From the list, click the desired cluster.

  3. From the Details tab, under Cluster, look for the Endpoint field.

Pulling container images from an image registry

In a private cluster, the container runtime can pull container images from Container Registry; it cannot pull images from any other container image registry on the internet. This is because the nodes in a private cluster do not have external IP addresses, so by default they cannot communicate with services outside of the Google network.

The nodes in a private cluster can communicate with Google services, like Container Registry, if they are on a subnet that has Private Google Access enabled.
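
If your subnet does not already have Private Google Access enabled, you can turn it on without recreating the subnet. This is a sketch; subnet-name and region are placeholders for your subnet and its region:

gcloud compute networks subnets update subnet-name \
    --region region \
    --enable-private-ip-google-access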

The following command creates a Deployment that pulls a sample image from a Google-owned Container Registry repository:

kubectl run hello-deployment --image gcr.io/google-samples/hello-app:2.0

Adding firewall rules for specific use cases

This section explains how to add a firewall rule to a private cluster. By default, firewall rules restrict your cluster master to only initiate TCP connections to your nodes on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. For example, in Kubernetes 1.9 and older, kubectl top accesses heapster, which needs a firewall rule to allow TCP connections on port 8080. To grant such access, you can add firewall rules.

Adding a firewall rule allows traffic from the cluster master to all of the following:

  • The specified port of each node (hostPort).
  • The specified port of each Pod running on these nodes.
  • The specified port of each Service running on these nodes.

To learn about firewall rules, refer to Firewall Rules in the Virtual Private Cloud documentation.

To add a firewall rule in a private cluster, you need to record the cluster master's CIDR block and the target used by the cluster's existing firewall rules. After you have recorded these values, you can create the rule.

Step 1. View cluster master's CIDR block

You need the cluster master's CIDR block to add a firewall rule.

gcloud

Run the following command:

gcloud container clusters describe cluster-name

where cluster-name is the name of your private cluster.

In the command output, take note of the value in the masterIpv4CidrBlock field.

Console

  1. Visit the GKE menu in the Cloud Console.

    Visit the GKE menu

  2. Select the desired cluster.

  3. From the Details tab, under Cluster, take note of the value in the Master address range field.

Step 2. View existing firewall rules

You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.

gcloud

Run the following command:

gcloud compute firewall-rules list \
    --filter 'name~^gke-cluster-name' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

In the command output, take note of the value in the Targets field.

Console

  1. Visit the Firewall rules menu in the Cloud Console.

    Visit the Firewall rules menu

  2. Fill the Filter resources box with gke-cluster-name.

  3. In the results, take note of the value in the Targets field.

Step 3. Add a firewall rule

gcloud

Run the following command:

gcloud compute firewall-rules create firewall-rule-name \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges master-CIDR-block \
    --rules protocol:port \
    --target-tags target

where:

  • firewall-rule-name is the name you choose for the firewall rule.
  • master-CIDR-block is the cluster master's CIDR block (masterIpv4CidrBlock) that you collected previously.
  • protocol:port is the desired port and its protocol, tcp or udp.
  • target is the target (Targets) value that you collected previously.
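
For example, a sketch of a rule that allows the master to reach heapster on TCP port 8080; the rule name allow-master-to-heapster, the source range 172.16.0.32/28, and the target tag gke-private-cluster-0-xxxxxxxx-node are illustrative placeholders, so use the values you recorded in the previous steps:

gcloud compute firewall-rules create allow-master-to-heapster \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.32/28 \
    --rules tcp:8080 \
    --target-tags gke-private-cluster-0-xxxxxxxx-node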

Console

  1. Visit the Firewall rules menu in the Cloud Console.

    Visit the Firewall rules menu

  2. Click Create firewall rule.

  3. Fill the Name box with the desired name for the firewall rule.

  4. In the Network drop-down list, select the relevant network.

  5. From Direction of traffic, click Ingress.

  6. From Action on match, click Allow.

  7. In the Targets drop-down list, select Specified target tags.

  8. Fill the Target tags box with the target value that you previously collected.

  9. In the Source filter drop-down list, select IP ranges.

  10. Fill the Source IP ranges box with the cluster master's CIDR block that you collected previously.

  11. From Protocols and ports, click Specified protocols and ports, check the box for the relevant protocol (TCP or UDP), and fill the box with the desired port.

  12. Click Create.

Protecting a private cluster with VPC Service Controls

To further secure your GKE private clusters, you can protect them using VPC Service Controls.

VPC Service Controls provides additional security for your GKE private clusters to help mitigate the risk of data exfiltration. Using VPC Service Controls, you can add projects to service perimeters that protect resources and services from requests that originate outside the perimeter.

To learn more about service perimeters, refer to the Service perimeter configuration page of the VPC Service Controls documentation.

If you use Container Registry with your GKE private cluster, additional steps are required to use that private cluster with VPC Service Controls. For more information, refer to the Setting up Container Registry for GKE private clusters page.

VPC peering reuse

Any private clusters you create after January 15, 2020 reuse VPC Network Peering connections.

Any private clusters you created prior to January 15, 2020 use a unique VPC Network Peering connection. Each VPC network can peer with up to 25 other VPC networks, which means that for these clusters there is a limit of at most 25 private clusters per network (assuming peerings are not being used for other purposes).

This feature is not backported to previous releases. To enable VPC Network Peering reuse on older private clusters, you can delete a cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.

Each location can support a maximum of 75 private clusters if the clusters have VPC peering reuse enabled. Zones and regions are treated as separate locations. For example, you can create up to 75 private zonal clusters in us-east1-a and another 75 private regional clusters in us-east1. This also applies if you are using private clusters in a shared VPC network. The maximum number of connections to a single VPC network is 25, which means you can only create private clusters using 25 unique locations.

You can check if your private cluster reuses VPC Peering connections.

gcloud

gcloud container clusters describe cluster-name \
    --zone=zone-name \
    --format="value(privateClusterConfig.peeringName)"

If your cluster reuses VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.

Console

Check the VPC peering row on the cluster details page. If your cluster reuses VPC peering connections, the value begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.

Cleaning up

After completing the tasks on this page, follow these steps to remove the resources and prevent unwanted charges from accruing to your account:

Delete the clusters

gcloud

gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2 private-cluster-3

Console

  1. Visit the GKE menu in Cloud Console.

    Visit the GKE menu

  2. Select each cluster.

  3. Click Delete.

Delete the network

gcloud

gcloud compute networks delete my-net-0

Console

  1. Go to the VPC networks page in the Cloud Console.

    Go to the VPC networks page

  2. In the list of networks, from my-net-0, click the trash icon.

  3. Beside each subnet, click the trash icon.

Requirements, restrictions, and limitations

Private clusters have the following requirements:

  • A private cluster must be a VPC-native cluster, which you enable with the --enable-ip-alias flag.
  • A private cluster relies on VPC Network Peering between your VPC network and the Google-owned VPC network that hosts the cluster master.

Private clusters have the following restrictions:

  • You cannot convert an existing, non-private cluster to a private cluster.
  • For clusters running versions earlier than 1.14.4, a cluster master, node, Pod, or Service IP range cannot overlap with 172.17.0.0/16.
  • For clusters running 1.14.4 or later, you can use 172.17.0.0/16 for your master IP range; however, a node, Pod, or Service IP range cannot overlap with 172.17.0.0/16.
  • Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default internet gateway causes a private cluster to stop functioning. For a private cluster to come up again after deleting the default route, you need to provision the restricted VIP statically.
  • You can add up to 50 authorized networks (whitelisted CIDR blocks) in a project. For more information, refer to Add an authorized network to an existing cluster.

Private clusters have the following limitations:

  • The size of the RFC 1918 block for the cluster master must be /28.
  • While GKE can detect overlap with the cluster master address block, it cannot detect overlap within a shared VPC network.
  • All nodes in a private cluster are created without a public IP; they have limited access to Google APIs and services. To provide outbound internet access for your private nodes, you can use Cloud NAT or you can manage your own NAT gateway. To allow nodes to communicate with Google APIs and services, enable Private Google Access on your subnet.
  • The private IP address of the master in a regional cluster is only accessible from subnetworks within the same region, or from on-premises environments connected to the same region.
  • Any private clusters you created prior to January 15, 2020 have a limit of at most 25 private clusters per network (assuming peerings are not being used for other purposes). See VPC peering reuse for more information.

Troubleshooting

The following sections explain how to resolve common issues related to private clusters.

Cluster overlaps with active peer

Symptoms
Attempting to create a private cluster returns an error such as Google Compute Engine: An IP range in the peer network overlaps with an IP range in an active peer of the local network.
Potential causes
You chose an overlapping master CIDR.
Resolution
Delete and recreate the cluster using a different master CIDR.

Can't reach cluster

Symptoms
After creating a private cluster, attempting to run kubectl commands against the cluster returns an error, such as Unable to connect to the server: dial tcp [IP_ADDRESS]: connect: connection timed out or Unable to connect to the server: dial tcp [IP_ADDRESS]: i/o timeout.
Potential causes
kubectl is unable to talk to the cluster master.
Resolution
You need to add authorized networks for your cluster to whitelist your network's IP addresses.
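
For example, a sketch of authorizing one external range; cluster-name and 203.0.113.0/29 are placeholders for your cluster's name and your network's external CIDR block:

gcloud container clusters update cluster-name \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29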

Can't create cluster due to omitted flag

Symptoms
gcloud container clusters create returns an error such as Cannot specify --enable-private-endpoint without --enable-private-nodes.
Potential causes
You did not specify a necessary flag.
Resolution
Ensure that you specify the necessary flags. You cannot enable a private endpoint for the cluster master without also enabling private nodes.

Can't create cluster due to overlapping master IPv4 CIDR block

Symptoms
gcloud container clusters create returns an error such as The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
Potential causes
You specified a master CIDR block that overlaps with an existing subnet in your VPC.
Resolution
Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.

Can't create subnet

Symptoms
When you attempt to create a private cluster with an automatic subnet, or to create a custom subnet, you might encounter the following error: An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
Potential causes
The master CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same master CIDR.
Resolution
Try using a different CIDR range.

Can't pull image from public Docker Hub

Symptoms
A Pod running in your cluster displays a warning in kubectl describe such as Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Potential causes
Nodes in a private cluster do not have outbound access to the public internet. They have limited access to Google APIs and services, including Container Registry.
Resolution
You cannot fetch images directly from Docker Hub. Instead, use images hosted on Container Registry. Note that while Container Registry's Docker Hub mirror is accessible from a private cluster, it should not be exclusively relied upon. The mirror is only a cache, so images are periodically removed, and a private cluster is not able to fall back to Docker Hub.

What's next