About private clusters

This page explains how private clusters work in Google Kubernetes Engine (GKE). You can also learn how to create private clusters.

Private clusters use nodes that don't have external IP addresses. This means that clients on the internet cannot connect to the IP addresses of the nodes. Private clusters are ideal for workloads that require controlled access, for example because of data privacy and security regulations.

Private clusters are available in both Standard and Autopilot modes.

Architecture of private clusters

Unlike a public cluster, a private cluster has both a control plane internal endpoint and a control plane external endpoint.

The following diagram provides an overview of the architecture for a private cluster:

Private cluster architecture

The following are the core components of a private cluster:

  • Control plane: The control plane has both an internal endpoint for internal cluster communication and an external endpoint. You can choose to disable the external endpoint.

  • Nodes: Nodes only use internal IP addresses, isolating them from the public internet.

  • VPC network: This is a virtual network in which you create subnets with internal IP address ranges specifically for the cluster's nodes and Pods.

  • Private Google Access: This is enabled on the cluster's subnet and allows nodes with internal IP addresses to reach essential Google Cloud APIs and services without needing public IP addresses. For example, Private Google Access is required for private clusters to access container images from Artifact Registry and to send logs to Cloud Logging. Private Google Access is enabled by default in private clusters, except for Shared VPC clusters, which require manual enablement.
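
For a Shared VPC cluster, you enable Private Google Access on the subnet yourself. A minimal sketch, assuming a hypothetical subnet named my-cluster-subnet in us-east1:

```
# Hypothetical subnet and region; substitute your own values.
# Lets nodes with only internal IP addresses reach Google APIs and
# services, such as Artifact Registry and Cloud Logging.
gcloud compute networks subnets update my-cluster-subnet \
    --region=us-east1 \
    --enable-private-ip-google-access
```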

The control plane in private clusters

Every GKE cluster has a Kubernetes API server that is managed by the control plane.

The control plane runs on a virtual machine (VM) that is in a VPC network in a Google-managed project. A regional cluster has multiple replicas of the control plane, each of which runs on its own VM.

In private clusters, the control plane's VPC network is connected to your cluster's VPC network with VPC Network Peering. Your VPC network contains the cluster nodes, and a separate Google-managed VPC network contains your cluster's control plane.

Traffic between nodes and the control plane is routed entirely using internal IP addresses. If you use VPC Network Peering to connect your cluster's VPC network to a third network, the third network cannot reach resources in the control plane's VPC network. This is because VPC Network Peering is not transitive: it only supports communication between directly peered networks, and the third network is not peered with the control plane's network. For more information, see VPC Network Peering restrictions.
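
You can see this peering by listing the peering connections on your VPC network. A minimal sketch, assuming a hypothetical network named my-vpc-network:

```
# Hypothetical network name; substitute your own value.
# The GKE-created peering to the control plane's VPC network appears
# in the output, typically with a name that begins with "gke-".
gcloud compute networks peerings list --network=my-vpc-network
```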

Endpoints in private clusters

The control plane for a private cluster has an internal endpoint in addition to an external endpoint.

The internal endpoint is an internal IP address in the control plane's VPC network. In a private cluster, nodes always communicate with the control plane's internal endpoint. Depending on your configuration, you can also manage the cluster with tools like kubectl that connect to the internal endpoint. Any VM that uses the same subnet as your private cluster can access the internal endpoint as well.

The external endpoint is the external IP address of the control plane. By default, tools like kubectl communicate with the control plane on its external endpoint.
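
To see both endpoints for an existing cluster, you can inspect its privateClusterConfig. A minimal sketch, assuming a hypothetical cluster named my-private-cluster in us-east1:

```
# Hypothetical cluster name and location; substitute your own values.
# privateEndpoint is the internal IP address of the control plane;
# publicEndpoint is its external IP address.
gcloud container clusters describe my-private-cluster \
    --location=us-east1 \
    --format="value(privateClusterConfig.privateEndpoint,privateClusterConfig.publicEndpoint)"
```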

Options for access to cluster endpoints

You can control access to the endpoints using one of the following configurations; an example sketch of the corresponding commands follows the list:

  • External endpoint access disabled: This is the most secure option as it prevents all internet access to the control plane. This is a good choice if you have configured your on-premises network to connect to Google Cloud using Cloud Interconnect or Cloud VPN.

    If you disable external endpoint access, then you must configure authorized networks for the internal endpoint. If you don't do this, you can only connect to the internal endpoint from cluster nodes or VMs in the same subnet as the cluster. With this setting, authorized networks must be internal IP addresses.

  • External endpoint access enabled, authorized networks enabled: In this configuration, the authorized networks apply to the control plane's external endpoint. This is a good choice if you need to administer the cluster from source networks that are not connected to your cluster's VPC network using Cloud Interconnect or Cloud VPN.

  • External endpoint access enabled, authorized networks disabled: This is the default and the least restrictive option. Because authorized networks are not enabled, you can administer your cluster from any source IP address as long as you authenticate.
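
The following is a minimal sketch of how these configurations map to flags on gcloud container clusters create; the cluster names and IP ranges are placeholders, and the commands omit other options you would normally set:

```
# External endpoint access disabled (placeholder names and ranges).
gcloud container clusters create my-private-cluster-1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr=172.16.0.0/28

# External endpoint access enabled, authorized networks enabled.
gcloud container clusters create my-private-cluster-2 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.16.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/29

# External endpoint access enabled, authorized networks disabled (default).
gcloud container clusters create my-private-cluster-3 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.32.0/28 \
    --no-enable-master-authorized-networks
```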

VPC Network Peering reuse

Private clusters created after January 15, 2020 use a common VPC Network Peering connection if the clusters are in the same Google Cloud zone or region and use the same VPC network.

  • For zonal clusters: The first private cluster you create in a zone generates a new VPC Network Peering connection to the cluster's VPC network. Additional zonal private clusters that you create in the same zone and VPC network use the same peering connection.

  • For regional clusters: The first private cluster you create in a region generates a new VPC Network Peering connection to the cluster's VPC network. Additional regional private clusters that you create in the same region and VPC network use the same peering connection.

Zonal and regional clusters use separate peering connections, even if they are in the same region. For example:

  • You create two or more zonal private clusters in the us-east1-b zone and configure them to use the same VPC network. All of these clusters use the same peering connection.

  • You create two or more regional private clusters in the us-east1 region and configure them to use the same VPC network as the zonal clusters. These regional clusters share one VPC Network Peering connection with each other, but that connection is different from the one used by the zonal clusters.

All private clusters created prior to January 15, 2020 use a unique VPC Network Peering connection. In other words, these clusters don't share a peering connection with other zonal or regional clusters. To enable VPC Network Peering reuse on such a cluster, you can delete the cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.

To check whether your private cluster is using a common VPC Network Peering connection, see Verify VPC peering reuse.
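
A minimal sketch of that check, using the same hypothetical cluster name as earlier; the peeringName field identifies the peering connection, and clusters that return the same value share that connection:

```
# Hypothetical cluster name and location; substitute your own values.
# Run this for each cluster; clusters that return the same peering
# name reuse the same VPC Network Peering connection.
gcloud container clusters describe my-private-cluster \
    --location=us-east1 \
    --format="value(privateClusterConfig.peeringName)"
```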

Restrictions

  • Each zone or region can support a maximum of 75 private clusters if the clusters have VPC Network Peering reuse enabled.

    For example, you can create up to 75 zonal private clusters in us-east1-b and another 75 regional private clusters in us-east1. This also applies if you are using private clusters in a Shared VPC network.

  • The maximum number of VPC Network Peering connections to a single VPC network is 25, which means you can only create private clusters in 25 unique locations.

  • VPC Network Peering reuse only applies to clusters in the same location, for example regional clusters in the same region or zonal clusters in the same zone. At most, you can have four VPC Network Peering connections per region if you create both regional clusters and zonal clusters in all of the zones of that region: in a region with three zones, that is one connection for the regional clusters and one for each zone's zonal clusters.

  • For clusters created prior to January 15, 2020, each VPC network can peer with up to 25 other VPC networks, which means that for these clusters there is a limit of at most 25 private clusters per network (assuming that peerings are not being used for other purposes).

What's next