Private clusters in GKE


This page explains how private clusters work in Google Kubernetes Engine (GKE). You can also learn how to create and manage private clusters.

A private cluster is a type of VPC-native cluster whose nodes use only internal IP addresses. Nodes, Pods, and Services in a private cluster require unique subnet IP address ranges.

You can create and configure private clusters in Standard or Autopilot.

If you want to provide outbound internet access for certain private nodes, you can use Cloud NAT.
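
For example, a minimal Cloud NAT setup might look like the following sketch; the router name, NAT configuration name, network, and region are placeholder values, not part of this page's instructions:

  # Create a Cloud Router in the cluster's VPC network and region (names are hypothetical).
  gcloud compute routers create example-nat-router \
      --network=example-vpc \
      --region=us-central1

  # Create a Cloud NAT configuration on that router so private nodes can make outbound connections.
  gcloud compute routers nats create example-nat-config \
      --router=example-nat-router \
      --region=us-central1 \
      --auto-allocate-nat-external-ips \
      --nat-all-subnet-ip-ranges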

Architecture of private clusters

Private clusters use nodes that do not have external IP addresses. This means that clients on the internet cannot connect to the IP addresses of the nodes. For example, a Service of type NodePort hosted in a private cluster is inaccessible to external clients because the node does not have an internet-routable public IP address.

Unlike a public cluster, a private cluster has both a control plane private endpoint and a control plane public endpoint. You must specify a unique /28 IP address range for the control plane's private endpoint, and you can choose to disable the control plane's public endpoint.
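
For example, a creation command might look like the following sketch, where the cluster name, zone, and address range are placeholders; the --master-ipv4-cidr flag supplies the /28 range for the control plane's private endpoint:

  # Create a private cluster; the /28 range must not overlap other ranges in the VPC network.
  gcloud container clusters create example-private-cluster \
      --zone=us-central1-a \
      --enable-ip-alias \
      --enable-private-nodes \
      --master-ipv4-cidr=172.16.0.0/28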

The following diagram provides an overview of the architecture for a private cluster:

Private cluster architecture

Even though the nodes use internal IP addresses, external clients can still connect to Services in your cluster, for example through a Service of type LoadBalancer that provisions an external load balancer.

Using Private Google Access in private clusters

Private Google Access is enabled by default in private clusters, except for Shared VPC clusters. For private clusters that use a VPC network in the same project as the cluster, GKE ensures that Private Google Access is enabled on the cluster's subnet when you create the cluster. For private clusters created in Shared VPC service projects, a network admin, project owner, or project editor for the Shared VPC host project must enable Private Google Access manually on the subnets that those clusters use.
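
For example, enabling Private Google Access on a Shared VPC subnet might look like the following sketch; the subnet name, region, and host project ID are placeholders:

  # Enable Private Google Access on the subnet used by the private cluster.
  gcloud compute networks subnets update example-subnet \
      --project=example-host-project \
      --region=us-central1 \
      --enable-private-ip-google-access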

Private Google Access provides private nodes and their workloads access to Google Cloud APIs and services over Google's private network. For example, Private Google Access is required for private clusters to access container images from Artifact Registry, and to send logs to Cloud Logging.

The control plane in private clusters

Every GKE cluster has a Kubernetes API server that is managed by the control plane.

The control plane runs on a virtual machine (VM) that is in a VPC network in a Google-managed project. A regional cluster has multiple replicas of the control plane, each of which runs on its own VM.

In private clusters, the control plane's VPC network is connected to your cluster's VPC network with VPC Network Peering. Your VPC network contains the cluster nodes, and the Google-managed Google Cloud VPC network contains your cluster's control plane.

Traffic between nodes and the control plane is routed entirely using internal IP addresses. If you use VPC Network Peering to connect your cluster's VPC network to a third network, the third network cannot reach resources in the control plane's VPC network. This is because VPC Network Peering only supports communication between directly peered networks, and the third network cannot be peered with the control plane network. For more information, see VPC Network Peering restrictions.
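
You can inspect the peering that GKE created for a cluster. For example, the following sketch (the cluster name and zone are placeholders, and the format expression assumes the current privateClusterConfig field layout) prints the name of the peering connection:

  # Show the VPC Network Peering connection to the control plane's VPC network.
  gcloud container clusters describe example-private-cluster \
      --zone=us-central1-a \
      --format="value(privateClusterConfig.peeringName)"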

VPC Network Peering reuse

Private clusters created after January 15, 2020, use a common VPC Network Peering connection if the clusters are in the same location and use the same VPC network. In this context, location refers only to Google Cloud zones and Google Cloud regions.

  • For zonal clusters: The first private cluster you create in a zone generates a new VPC Network Peering connection to the cluster's VPC network. Additional zonal private clusters that you create in the same zone and VPC network use the same peering connection.

  • For regional clusters: The first private cluster you create in a region generates a new VPC Network Peering connection to the cluster's VPC network. Additional regional private clusters that you create in the same region and VPC network use the same peering connection.

  • GKE does not use a common peering for zonal clusters and regional clusters, even when the zonal clusters belong to the same region as the regional clusters.

The following examples clarify this behavior. Each example uses one VPC Network Peering connection:

  • Two or more zonal private clusters in the us-east1-b zone using the same VPC network.

  • Two or more regional private clusters in the us-east1 region using the same VPC network.

However, one or more zonal private clusters in us-east1-b and one or more regional clusters in us-east1 using the same VPC network require two VPC Network Peering connections.
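
One way to confirm that clusters share a peering connection is to list the peerings on the VPC network and compare the result with the peering name each cluster reports. For example (the network name is a placeholder):

  # List all VPC Network Peering connections on the cluster's VPC network.
  gcloud compute networks peerings list --network=example-vpc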

For more information about private clusters and their peering connections, see VPC Network Peering reuse.

Endpoints in private clusters

The control plane for a private cluster has a private endpoint in addition to a public endpoint. The control plane for a non-private cluster only has a public endpoint.

Private endpoint
The private endpoint is an internal IP address in the control plane's VPC network. In a private cluster, nodes always communicate with the control plane's private endpoint. Depending on your configuration, you can also manage the cluster with tools like kubectl that connect to the private endpoint; see the example command after these definitions. Access to the private endpoint depends on the cluster's authorized networks configuration.
Public endpoint
This is the external IP address of the control plane. By default, tools like kubectl communicate with the control plane on its public endpoint.
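
For example, the following sketch points kubectl at the private endpoint; the cluster name and zone are placeholders:

  # Fetch credentials that use the control plane's internal (private) endpoint.
  gcloud container clusters get-credentials example-private-cluster \
      --zone=us-central1-a \
      --internal-ip

  # Subsequent kubectl commands are sent to the private endpoint.
  kubectl get nodes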

Access to cluster endpoints

You can control access to the endpoints using one of the following configurations:

  • Public endpoint access disabled: This is the most secure option as it prevents all internet access to the control plane. This is a good choice if you have configured your on-premises network to connect to Google Cloud using Cloud Interconnect or Cloud VPN.

    If you disable public endpoint access, then you must configure authorized networks for the private endpoint. If you don't do this, you can only connect to the private endpoint from cluster nodes or VMs in the same subnet as the cluster. With this setting, authorized networks must be internal IP addresses.

  • Public endpoint access enabled, authorized networks enabled: In this configuration, the authorized networks apply to the control plane's public endpoint. This is a good choice if you need to administer the cluster from source networks that are not connected to your cluster's VPC network using Cloud Interconnect or Cloud VPN. A sketch of the relevant gcloud command follows this list.

  • Public endpoint access enabled, authorized networks disabled: This is the default and it is also the least restrictive option. Since authorized networks are not enabled, you can administer your cluster from any source IP address as long as you authenticate.
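
As a sketch of the second configuration, the following command enables authorized networks on an existing cluster and allows one external range to reach the public endpoint; the cluster name, zone, and CIDR block are placeholders:

  # Allow only the listed CIDR blocks to reach the control plane's public endpoint.
  gcloud container clusters update example-private-cluster \
      --zone=us-central1-a \
      --enable-master-authorized-networks \
      --master-authorized-networks=203.0.113.0/29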

The following summary describes, for each configuration, the different ways you can access the endpoints:

Security

  • Public endpoint access disabled: Highest level of restricted access to the control plane. Client access to the control plane's public endpoint is blocked. Access to the control plane must be from internal IP addresses.
  • Public endpoint access enabled, authorized networks enabled: Restricted access to the control plane from the internal and external IP addresses that you define.
  • Public endpoint access enabled, authorized networks disabled: Access to the control plane from any IP address.

Detailed configuration steps

  • Public endpoint access disabled: Creating a private cluster with no client access to the public endpoint.
  • Public endpoint access enabled, authorized networks enabled: Creating a private cluster with limited access to the public endpoint.
  • Public endpoint access enabled, authorized networks disabled: Creating a private cluster with unrestricted access to the public endpoint.

Google Cloud console configuration options

  • Public endpoint access disabled:
    1. Select Enable VPC-native.
    2. Select Private cluster.
    3. Clear Access control plane using its external IP address.
       Enable control plane authorized networks is automatically enabled.
  • Public endpoint access enabled, authorized networks enabled:
    1. Select Enable VPC-native.
    2. Select Private cluster.
    3. Select Access control plane using its external IP address.
    4. Select Enable control plane authorized networks.
  • Public endpoint access enabled, authorized networks disabled:
    1. Select Enable VPC-native.
    2. Select Private cluster.
    3. Select Access control plane using its external IP address.
    4. Clear Enable control plane authorized networks.

gcloud cluster creation flags

  • Public endpoint access disabled: --enable-ip-alias --enable-private-nodes --enable-private-endpoint --enable-master-authorized-networks
  • Public endpoint access enabled, authorized networks enabled: --enable-ip-alias --enable-private-nodes --enable-master-authorized-networks
  • Public endpoint access enabled, authorized networks disabled: --enable-ip-alias --enable-private-nodes --no-enable-master-authorized-networks
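
Combining the flags from the first configuration, a creation command for a private cluster with no client access to the public endpoint might look like the following sketch; the cluster name, zone, and address range are placeholders:

  # Private nodes, private-only control plane endpoint, and authorized networks enabled.
  gcloud container clusters create example-private-cluster \
      --zone=us-central1-a \
      --enable-ip-alias \
      --enable-private-nodes \
      --enable-private-endpoint \
      --enable-master-authorized-networks \
      --master-ipv4-cidr=172.16.0.32/28
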
Communication between nodes and control plane

In all three configurations, nodes always contact the control plane using the private endpoint.

Webhook communication between nodes and API server

In all configurations, webhooks that target a Service with a targetPort other than 443 require a firewall rule that permits this traffic from the control plane. See Adding firewall rules for specific use cases for more information.
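
For example, a rule that admits webhook traffic from the control plane to nodes on port 8443 might look like the following sketch; the rule name, network, control plane address range, and node tag are placeholders:

  # Allow the control plane's address range to reach node port 8443 for webhook calls.
  gcloud compute firewall-rules create example-allow-webhook-8443 \
      --network=example-vpc \
      --direction=INGRESS \
      --allow=tcp:8443 \
      --source-ranges=172.16.0.32/28 \
      --target-tags=example-node-tag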

Control plane authorized networks

  • Public endpoint access disabled: Required for access to the control plane from internal IP addresses other than nodes and Pods. You do not need to explicitly authorize the internal IP address range of nodes; addresses in the primary IP address range of the cluster's subnet are always authorized to communicate with the private endpoint. Use --master-authorized-networks to specify additional internal IP addresses that can access the control plane. You cannot include external IP addresses in the list of authorized networks, because access to the public endpoint is disabled.
  • Public endpoint access enabled, authorized networks enabled: Required for access to the control plane from external IP addresses, and from internal IP addresses other than nodes and Pods. Use --master-authorized-networks to specify the external and internal IP addresses, other than nodes and Pods, that can access the control plane.
  • Public endpoint access enabled, authorized networks disabled: Not used. If you enable access to the control plane's public endpoint without enabling authorized networks, access to the control plane's public endpoint is not restricted.

Access using kubectl

  • Public endpoint access disabled:
    • From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    • From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if their internal IP addresses are included in the list of authorized networks or they are located in the same subnet as the cluster's nodes. By default, only VMs in the same region as the cluster can reach the private endpoint; to reach it from other regions, configure control plane private endpoint global access. kubectl must be configured to use the private endpoint.
    • From public IP addresses: Never.
  • Public endpoint access enabled, authorized networks enabled:
    • From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    • From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if their internal IP addresses are included in the list of authorized networks or they are located in the same subnet as the cluster's nodes. By default, only VMs in the same region as the cluster can reach the private endpoint; to reach it from other regions, configure control plane private endpoint global access. kubectl must be configured to use the private endpoint.
    • From public IP addresses: Machines with public IP addresses can use kubectl to communicate with the public endpoint only if their public IP addresses are included in the list of authorized networks. This includes machines outside of Google Cloud and Google Cloud VMs with external IP addresses.
  • Public endpoint access enabled, authorized networks disabled:
    • From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    • From other VMs in the cluster's VPC network: By default, only VMs in the same region as the cluster can reach the private endpoint; to reach it from other regions, configure control plane private endpoint global access. kubectl must be configured to use the private endpoint.
    • From public IP addresses: Any machine with a public IP address can use kubectl to communicate with the public endpoint. This includes machines outside of Google Cloud and Google Cloud VMs with external IP addresses.
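
For example, from a machine whose public IP address is in the list of authorized networks, connecting over the public endpoint might look like the following sketch; the cluster name and zone are placeholders:

  # Fetch credentials for the cluster's public endpoint and verify access.
  gcloud container clusters get-credentials example-private-cluster \
      --zone=us-central1-a
  kubectl get nodes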

What's next