Creating a private cluster


This page explains how to create a private Google Kubernetes Engine (GKE) cluster, which is a type of VPC-native cluster. In a private cluster, nodes only have internal IP addresses, which means that nodes and Pods are isolated from the internet by default. You may choose to have no client access, limited access, or unrestricted access to the control plane.

You cannot convert an existing, non-private cluster to a private cluster. To learn more about how private clusters work, see Overview of private clusters.

Restrictions and limitations

Private clusters must be VPC-native clusters. VPC-native clusters don't support legacy networks.
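
To check whether an existing cluster is VPC-native, you can inspect its IP allocation policy. This is a quick check, and it assumes that the cluster location is set in your gcloud configuration or passed with the --location flag:

gcloud container clusters describe CLUSTER_NAME \
    --format="value(ipAllocationPolicy.useIpAliases)"

If the output is True, the cluster is VPC-native.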


Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • Ensure you have the correct permission to create clusters. At minimum, you should be a Kubernetes Engine Cluster Admin.

  • Ensure you have a route to the Default Internet Gateway.
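
    To confirm that such a route exists, you can list the routes whose next hop is the default internet gateway. This is a quick check, not a required step:

    gcloud compute routes list \
        --filter="nextHopGateway~default-internet-gateway"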

Creating a private cluster with no client access to the public endpoint

In this section, you create the following resources:

  • A private cluster named private-cluster-0 that has private nodes, and that has no client access to the public endpoint.
  • A network named my-net-0.
  • A subnet named my-subnet-0.

Console

Create a network and subnet

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter my-net-0.

  4. For Subnet creation mode, select Custom.

  5. In the New subnet section, for Name, enter my-subnet-0.

  6. In the Region list, select the region that you want.

  7. For IP address range, enter 10.2.204.0/22.

  8. Set Private Google Access to On.

  9. Click Done.

  10. Click Create.

Create a private cluster

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create, then in the Standard or Autopilot section, click Configure.

  3. For the Name, specify private-cluster-0.

  4. In the navigation pane, click Networking.

  5. In the Network list, select my-net-0.

  6. In the Node subnet list, select my-subnet-0.

  7. Select the Private cluster radio button.

  8. Clear the Access control plane using its external IP address checkbox.

  9. (Optional for Autopilot) Set Control plane IP range to 172.16.0.32/28.

  10. Click Create.

gcloud

  • For Autopilot clusters, run the following command:

    gcloud container clusters create-auto private-cluster-0 \
        --create-subnetwork name=my-subnet-0 \
        --enable-master-authorized-networks \
        --enable-private-nodes \
        --enable-private-endpoint
    
  • For Standard clusters, run the following command:

    gcloud container clusters create private-cluster-0 \
        --create-subnetwork name=my-subnet-0 \
        --enable-master-authorized-networks \
        --enable-ip-alias \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr 172.16.0.32/28
    

where:

  • --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
  • --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
  • --enable-private-endpoint indicates that the cluster is managed using the internal IP address of the control plane API endpoint.
  • --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.

API

To create a cluster without a publicly reachable control plane, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.
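
For example, the following is a minimal sketch of a request to the projects.locations.clusters.create method. The body is abbreviated; PROJECT_ID and LOCATION are placeholders, and a real request requires additional cluster fields:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters" \
    -d '{
      "cluster": {
        "name": "private-cluster-0",
        ...
        "privateClusterConfig": {
          "enablePrivateNodes": true,
          "enablePrivateEndpoint": true,
          "masterIpv4CidrBlock": "172.16.0.32/28"
        }
      }
    }'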

At this point, these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-0.
  • The secondary range used for Pods.

For example, suppose you created a VM in the primary range of my-subnet-0. Then on that VM, you could configure kubectl to use the internal IP address of the control plane.
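
The following sketch shows what you might run on that VM. It assumes the gcloud CLI and kubectl are installed on the VM and that the cluster location is available from your gcloud configuration:

gcloud container clusters get-credentials private-cluster-0 \
    --internal-ip

kubectl get nodes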

If you want to access the control plane from outside my-subnet-0, you must authorize at least one address range to have access to the private endpoint.

Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-0.

For example:

  • my-subnet-0: 10.0.0.0/22
  • Pod secondary range: 10.52.0.0/14
  • VM address: 10.128.0.3

You could authorize the VM to access the control plane by using this command:

gcloud container clusters update private-cluster-0 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.3/32

Creating a private cluster with limited access to the public endpoint

When creating a private cluster using this configuration, you can choose to use an automatically generated subnet, or a custom subnet.

Using an automatically generated subnet

In this section, you create a private cluster named private-cluster-1 where GKE automatically generates a subnet for your cluster nodes. The subnet has Private Google Access enabled. In the subnet, GKE automatically creates two secondary ranges: one for Pods and one for Services.

You can use the Google Cloud CLI or the GKE API.

gcloud

  • For Autopilot clusters, run the following command:

    gcloud container clusters create-auto private-cluster-1 \
        --create-subnetwork name=my-subnet-1 \
        --enable-master-authorized-networks \
        --enable-private-nodes
    
  • For Standard clusters, run the following command:

    gcloud container clusters create private-cluster-1 \
        --create-subnetwork name=my-subnet-1 \
        --enable-master-authorized-networks \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr 172.16.0.0/28
    

where:

  • --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
  • --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
  • --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
  • --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
  • --master-ipv4-cidr 172.16.0.0/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.

API

Specify the privateClusterConfig field in the Cluster API resource:

{
  "name": "private-cluster-1",
  ...
  "ipAllocationPolicy": {
    "createSubnetwork": true
  },
  ...
  "privateClusterConfig": {
    "enablePrivateNodes": boolean # Creates nodes with internal IP addresses only
    "enablePrivateEndpoint": boolean # false creates a cluster control plane with a publicly reachable endpoint
    "masterIpv4CidrBlock": string # CIDR block for the cluster control plane
    "privateEndpoint": string # Output only
    "publicEndpoint": string # Output only
  }
}

At this point, these are the only IP addresses that have access to the cluster control plane:

  • The primary range of my-subnet-1.
  • The secondary range used for Pods.

Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

Now these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-1.
  • The secondary range used for Pods.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.

Using a custom subnet

In this section, you create the following resources:

  • A private cluster named private-cluster-2.
  • A network named my-net-2.
  • A subnet named my-subnet-2, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has the following secondary address ranges:
    • my-pods for the Pod IP addresses.
    • my-services for the Service IP addresses.

Console

Create a network, subnet, and secondary ranges

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter my-net-2.

  4. For Subnet creation mode, select Custom.

  5. In the New subnet section, for Name, enter my-subnet-2.

  6. In the Region list, select the region that you want.

  7. For IP address range, enter 192.168.0.0/20.

  8. Click Create secondary IP range. For Subnet range name, enter my-services, and for Secondary IP range, enter 10.0.32.0/20.

  9. Click Add IP range. For Subnet range name, enter my-pods, and for Secondary IP range, enter 10.4.0.0/14.

  10. Set Private Google Access to On.

  11. Click Done.

  12. Click Create.

Create a private cluster

Create a private cluster that uses your subnet:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create, then in the Standard or Autopilot section, click Configure.

  3. For the Name, enter private-cluster-2.

  4. From the navigation pane, click Networking.

  5. Select the Private cluster radio button.

  6. To create a control plane that is accessible from authorized external IP ranges, keep the Access control plane using its external IP address checkbox selected.

  7. (Optional for Autopilot) Set Control plane IP range to 172.16.0.16/28.

  8. In the Network list, select my-net-2.

  9. In the Node subnet list, select my-subnet-2.

  10. Clear the Automatically create secondary ranges checkbox.

  11. In the Pod secondary CIDR range list, select my-pods.

  12. In the Services secondary CIDR range list, select my-services.

  13. Select the Enable control plane authorized networks checkbox.

  14. Click Create.

gcloud

Create a network

First, create a network for your cluster. The following command creates a network, my-net-2:

gcloud compute networks create my-net-2 \
    --subnet-mode custom

Create a subnet and secondary ranges

Next, create a subnet, my-subnet-2, in the my-net-2 network, with secondary ranges my-pods for Pods and my-services for Services:

gcloud compute networks subnets create my-subnet-2 \
    --network my-net-2 \
    --range 192.168.0.0/20 \
    --secondary-range my-pods=10.4.0.0/14,my-services=10.0.32.0/20 \
    --enable-private-ip-google-access

Create a private cluster

Now, create a private cluster, private-cluster-2, using the network, subnet, and secondary ranges you created.

  • For Autopilot clusters, run the following command:

    gcloud container clusters create-auto private-cluster-2 \
        --enable-master-authorized-networks \
        --network my-net-2 \
        --subnetwork my-subnet-2 \
        --cluster-secondary-range-name my-pods \
        --services-secondary-range-name my-services \
        --enable-private-nodes
    
  • For Standard clusters, run the following command:

    gcloud container clusters create private-cluster-2 \
        --enable-master-authorized-networks \
        --network my-net-2 \
        --subnetwork my-subnet-2 \
        --cluster-secondary-range-name my-pods \
        --services-secondary-range-name my-services \
        --enable-private-nodes \
        --enable-ip-alias \
        --master-ipv4-cidr 172.16.0.16/28 \
        --no-enable-basic-auth \
        --no-issue-client-certificate
    

At this point, these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-2.
  • The secondary range my-pods.

Suppose you have a group of machines, outside of my-net-2, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:

gcloud container clusters update private-cluster-2 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/29

At this point, these are the only IP addresses that have access to the control plane:

  • The primary range of my-subnet-2.
  • The secondary range my-pods.
  • Address ranges that you have authorized, for example, 203.0.113.0/29.

Using Cloud Shell to access a private cluster

The private cluster you created in the Using an automatically generated subnet section, private-cluster-1, has a public endpoint and has authorized networks enabled. If you want to use Cloud Shell to access the cluster, you must add the external IP address of your Cloud Shell to the cluster's list of authorized networks.

To do this:

  1. In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:

    dig +short myip.opendns.com @resolver1.opendns.com
    
  2. Add the external address of your Cloud Shell to your cluster's list of authorized networks:

    gcloud container clusters update private-cluster-1 \
        --enable-master-authorized-networks \
        --master-authorized-networks EXISTING_AUTH_NETS,SHELL_IP/32
    

    Replace the following:

    • EXISTING_AUTH_NETS: the IP addresses of your existing list of authorized networks. You can find your authorized networks in the console or by running the following command:

      gcloud container clusters describe private-cluster-1 --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"
      
    • SHELL_IP: the external IP address of your Cloud Shell.

  3. Get credentials, so that you can use kubectl to access the cluster:

    gcloud container clusters get-credentials private-cluster-1 \
        --project=PROJECT_ID \
        --internal-ip
    

    Replace PROJECT_ID with your project ID.

  4. Use kubectl, in Cloud Shell, to access your private cluster:

    kubectl get nodes
    

    The output is similar to the following:

    NAME                                               STATUS   ROLES    AGE    VERSION
    gke-private-cluster-1-default-pool-7d914212-18jv   Ready    <none>   104m   v1.21.5-gke.1302
    gke-private-cluster-1-default-pool-7d914212-3d9p   Ready    <none>   104m   v1.21.5-gke.1302
    gke-private-cluster-1-default-pool-7d914212-wgqf   Ready    <none>   104m   v1.21.5-gke.1302
    

Creating a private cluster with unrestricted access to the public endpoint

In this section, you create a private cluster where any IP address can access the control plane.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create, then in the Standard or Autopilot section, click Configure.

  3. For the Name, enter private-cluster-3.

  4. In the navigation pane, click Networking.

  5. Select the Private cluster option.

  6. Keep the Access control plane using its external IP address checkbox selected.

  7. (Optional for Autopilot) Set Control plane IP range to 172.16.0.32/28.

  8. Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.

  9. Clear the Enable control plane authorized networks checkbox.

  10. Click Create.

gcloud

  • For Autopilot clusters, run the following command:

    gcloud container clusters create-auto private-cluster-3 \
        --create-subnetwork name=my-subnet-3 \
        --no-enable-master-authorized-networks \
        --enable-private-nodes
    
  • For Standard clusters, run the following command:

    gcloud container clusters create private-cluster-3 \
        --create-subnetwork name=my-subnet-3 \
        --no-enable-master-authorized-networks \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr 172.16.0.32/28
    

where:

  • --create-subnetwork name=my-subnet-3 causes GKE to automatically create a subnet named my-subnet-3.
  • --no-enable-master-authorized-networks disables authorized networks for the cluster.
  • --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
  • --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
  • --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.

Adding firewall rules for specific use cases

This section explains how to add a firewall rule to a private cluster. By default, firewall rules restrict your cluster control plane to only initiate TCP connections to your nodes and Pods on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. Don't create firewall rules or hierarchical firewall policy rules that have a higher priority than the automatically created firewall rules.

Adding a firewall rule allows traffic from the cluster control plane to all of the following:

  • The specified port of each node (hostPort).
  • The specified port of each Pod running on these nodes.
  • The specified port of each Service running on these nodes.

To learn about firewall rules, refer to Firewall rules in the Virtual Private Cloud (VPC) documentation.

To add a firewall rule in a private cluster, you need to record the cluster control plane's CIDR block and the target that the existing firewall rules use. After you have recorded these values, you can create the rule.

Step 1. View control plane's CIDR block

You need the cluster control plane's CIDR block to add a firewall rule.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the cluster name.

  3. In the Details tab, under Networking, take note of the value in the Control plane address range field.

gcloud

Run the following command:

gcloud container clusters describe CLUSTER_NAME

Replace CLUSTER_NAME with the name of your private cluster.

In the command output, take note of the value in the masterIpv4CidrBlock field.
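
If you prefer, you can extract only that value. This is a convenience variation of the same command; it assumes that the cluster location is set in your gcloud configuration or passed with the --location flag:

gcloud container clusters describe CLUSTER_NAME \
    --format="value(privateClusterConfig.masterIpv4CidrBlock)"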

Step 2. View existing firewall rules

You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.

Console

  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. For Filter table for VPC firewall rules, enter gke-CLUSTER_NAME.

  3. In the results, take note of the value in the Targets field.

gcloud

Run the following command:

gcloud compute firewall-rules list \
    --filter 'name~^gke-CLUSTER_NAME' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

In the command output, take note of the value in the Targets field.

Step 3. Add a firewall rule

Console

  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. Click Create Firewall Rule.

  3. For Name, enter the name for the firewall rule.

  4. In the Network list, select the relevant network.

  5. In Direction of traffic, click Ingress.

  6. In Action on match, click Allow.

  7. In the Targets list, select Specified target tags.

  8. For Target tags, enter the target value that you noted previously.

  9. In the Source filter list, select IPv4 ranges.

  10. For Source IPv4 ranges, enter the cluster control plane's CIDR block.

  11. In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number in the field next to the protocol.

  12. Click Create.

gcloud

Run the following command:

gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges CONTROL_PLANE_RANGE \
    --rules PROTOCOL:PORT \
    --target-tags TARGET

Replace the following:

  • FIREWALL_RULE_NAME: the name you choose for the firewall rule.
  • CONTROL_PLANE_RANGE: the cluster control plane's IP address range (masterIpv4CidrBlock) that you collected previously.
  • PROTOCOL:PORT: the port and its protocol, tcp or udp.
  • TARGET: the target (Targets) value that you collected previously.
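
For example, the following command allows the control plane to reach an admission webhook served on TCP port 8443. The source range and target tag shown are illustrative placeholders, not values from your cluster; substitute the control plane CIDR block and target that you recorded in the previous steps, and add --network if your cluster does not use the default network:

gcloud compute firewall-rules create allow-control-plane-webhook-8443 \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.32/28 \
    --rules tcp:8443 \
    --target-tags gke-private-cluster-0-xxxxxxxx-node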

Verify that nodes don't have external IP addresses

After you create a private cluster, verify that the cluster's nodes don't have external IP addresses.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the list of clusters, click the cluster name.

  3. For Autopilot clusters, in the Cluster basics section, check the External endpoint field. The value is Disabled.

For Standard clusters, do the following:

  1. On the Clusters page, click the Nodes tab.
  2. Under Node Pools, click the node pool name.
  3. On the Node pool details page, under Instance groups, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.
  4. In the list of instances, verify that your instances do not have external IP addresses.

gcloud

Run the following command:

kubectl get nodes --output wide

The output's EXTERNAL-IP column is empty:

STATUS ... VERSION        EXTERNAL-IP  OS-IMAGE ...
Ready      v1.8.7-gke.1                Container-Optimized OS
Ready      v1.8.7-gke.1                Container-Optimized OS
Ready      v1.8.7-gke.1                Container-Optimized OS
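
As an additional check with the gcloud CLI, you can list the cluster's Compute Engine instances along with their external NAT IP addresses. This is a sketch; the name filter assumes the default gke-CLUSTER_NAME node naming prefix, so adjust it for your cluster:

gcloud compute instances list \
    --filter="name~^gke-private-cluster" \
    --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"

For private nodes, the second column is empty.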

Verify VPC peering reuse in cluster

Any private clusters you create after January 15, 2020 reuse VPC Network Peering connections.

You can check if your private cluster reuses VPC Network Peering connections using the gcloud CLI or the Google Cloud console.

Console

Check the VPC peering row on the Cluster details page. If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.

gcloud

gcloud container clusters describe CLUSTER_NAME \
    --format="value(privateClusterConfig.peeringName)"

If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.

Cleaning up

After completing the tasks on this page, follow these steps to remove the resources and avoid incurring unwanted charges on your account:

Delete the clusters

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Select each cluster.

  3. Click Delete.

gcloud

gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2 private-cluster-3

Delete the network

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the list of networks, click my-net-0.

  3. On the VPC network details page, click Delete VPC Network.

  4. In the Delete a network dialog, click Delete.

gcloud

gcloud compute networks delete my-net-0
