Private clusters

This page explains how private clusters work in Google Kubernetes Engine (GKE). You can also learn how to create and manage private clusters.

Private clusters let you isolate nodes from inbound and outbound connectivity to the public internet. This isolation is possible because the nodes have internal RFC 1918 IP addresses only.

If you want to provide outbound internet access for certain private nodes, you can use Cloud NAT or manage your own NAT gateway.
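
For example, here is a minimal sketch of a Cloud NAT setup, assuming a hypothetical VPC network named my-net in the us-east4 region (substitute your own names):

    # Create a Cloud Router in the cluster's network and region.
    gcloud compute routers create nat-router \
        --network my-net \
        --region us-east4

    # Add a NAT configuration that covers all subnet ranges in the region.
    gcloud compute routers nats create nat-config \
        --router nat-router \
        --router-region us-east4 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges

With this in place, private nodes can initiate outbound connections to the internet through the NAT gateway while remaining unreachable from the internet.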

Even though the node IP addresses are private, external clients can reach Services in your cluster. For example, you can create a Service of type LoadBalancer, and external clients can call the IP address of the load balancer. Or you can create a Service of type NodePort and then create an Ingress. GKE uses information in the Service and the Ingress to configure an HTTP(S) load balancer. External clients can then call the external IP address of the HTTP(S) load balancer.
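
As a sketch of the LoadBalancer case, using Google's public hello-app sample image (the deployment name hello-app is arbitrary):

    # Run a sample workload and expose it through a load balancer.
    kubectl create deployment hello-app \
        --image=gcr.io/google-samples/hello-app:1.0
    kubectl expose deployment hello-app \
        --type=LoadBalancer --port 80 --target-port 8080

    # The EXTERNAL-IP column shows the address external clients can call.
    kubectl get service hello-app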

Using Private Google Access in private clusters

By default, Private Google Access is enabled. Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud APIs and services over Google's private network. For example, Private Google Access makes it possible for private nodes to pull container images from Google Container Registry, and to send logs to Stackdriver.
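
For example, assuming a hypothetical subnet named my-subnet in us-east4, you can verify and, if necessary, enable Private Google Access on the subnet:

    # Check whether Private Google Access is enabled on the subnet.
    gcloud compute networks subnets describe my-subnet \
        --region us-east4 \
        --format="value(privateIpGoogleAccess)"

    # Enable it if the previous command printed False.
    gcloud compute networks subnets update my-subnet \
        --region us-east4 \
        --enable-private-ip-google-access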

The master in private clusters

Every GKE cluster has a Kubernetes API server called the master. The master is in a Google-owned project that is separate from your project. It runs on a VM that is in a VPC network in the Google-owned project. A regional cluster has multiple masters, each of which runs on its own VM.

In private clusters, your VPC network contains the cluster nodes, and a separate VPC network in a Google-controlled project contains your cluster's master. The two networks are connected with VPC Network Peering, and traffic between nodes and the master is routed entirely over internal IP addresses.

VPC Network Peering reuse

All newly created private clusters automatically reuse existing VPC Network Peering connections. The first zonal or regional private cluster you create generates a new VPC Network Peering connection. Additional private clusters in the same zone or region and network reuse that peering, so no additional VPC Network Peering connections are created. For example, if you create two single-zone private clusters in the us-east1-b zone and three regional private clusters in the asia-east1 region, only two peering connections are created. However, if you create a regional cluster and a zonal cluster in the same region (for example, asia-east1 and asia-east1-a), two different peering connections are created.
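
To inspect the peerings your private clusters use, you can list the peering connections on your VPC network, or read the peering name from a cluster's configuration. The network and cluster names below are hypothetical:

    # List VPC Network Peering connections on the network.
    gcloud compute networks peerings list --network my-net

    # Show which peering a given private cluster uses.
    gcloud container clusters describe my-private-cluster \
        --zone us-east1-b \
        --format="value(privateClusterConfig.peeringName)"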

Endpoints in private clusters

The master for a private cluster has a private endpoint in addition to a public endpoint. The master for a non-private cluster only has a public endpoint.

Private endpoint
The private endpoint is an internal IP address in the master's VPC network. In a private cluster, nodes always communicate with the master's private endpoint. Depending on your configuration, you can also manage the cluster with tools like kubectl that connect to the private endpoint. Any VM in the same subnet as your private cluster can also access the private endpoint.
Public endpoint
This is the external IP address of the master. By default, tools like kubectl communicate with the master on its public endpoint. You can control access to this endpoint using master authorized networks or you can disable access to the public endpoint.
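
For example, assuming a hypothetical cluster named my-private-cluster in us-central1-a, you could restrict access to the public endpoint to a single address range (203.0.113.0/29 is a placeholder):

    gcloud container clusters update my-private-cluster \
        --zone us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.0/29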

Access to cluster endpoints

You can control the level of access to the endpoints using one of the following configurations.

  • Public endpoint access disabled: This is the most secure option as it prevents all internet access to the master. This is a good choice if you have configured your on-premises network to connect to Google Cloud using Cloud Interconnect and Cloud VPN. Those technologies effectively connect your company network to your VPC without the traffic having to traverse the public internet.

    If you disable public endpoint access, then you must configure master authorized networks for the private endpoint. Otherwise, you can connect to the private endpoint only from cluster nodes or from VMs in the same subnet as the cluster. In addition, any master authorized networks that you specify must be internal RFC 1918 IP address ranges.

  • Public endpoint access enabled, master authorized networks enabled: Using private clusters with master authorized networks enabled provides restricted access to the master from source IP addresses that you define. This is a good choice if you don't have existing VPN infrastructure, or if you have remote users or branch offices that connect over the public internet instead of through a corporate VPN, Cloud Interconnect, or Cloud VPN. A creation command for this configuration is sketched after this list.

  • Public endpoint access enabled, master authorized networks disabled: This is the default and it is also the least restrictive option. Since master authorized networks are not enabled, you can administer your cluster from any source IP address as long as you authenticate.
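
As referenced above, here is a sketch of creating a cluster with the second configuration (public endpoint access enabled, master authorized networks enabled); the cluster name and CIDR ranges are placeholders:

    # Nodes get internal IPs only; the public endpoint stays enabled but is
    # restricted to the ranges listed in --master-authorized-networks.
    gcloud container clusters create my-private-cluster \
        --zone us-central1-a \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr 172.16.0.16/28 \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.0/29

The --master-ipv4-cidr flag reserves an RFC 1918 /28 range for the master's VPC network; it must not overlap any subnet in your cluster's VPC network.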

The following summarizes the different ways you can access the endpoints under each configuration.

Public endpoint access disabled

  • Security: Highest level of restricted access to the master. Client access to the master's public endpoint is blocked. Access to the master must be from internal IP addresses.
  • Detailed configuration steps: Creating a private cluster with no client access to the public endpoint.
  • Cloud Console configuration options: Select Enable VPC-native. Select Private cluster. Clear Access master using its external IP address. Enable master authorized networks is automatically enabled.
  • gcloud cluster creation flags: --enable-ip-alias, --enable-private-nodes, --enable-private-endpoint, --enable-master-authorized-networks
  • Communication between nodes and master: Nodes always contact the master using the private endpoint.
  • Master authorized networks: Required for access to the master from internal IP addresses other than nodes and Pods. You do not need to explicitly authorize the internal IP address range of nodes; addresses in the primary IP address range of the cluster's subnet are always authorized to communicate with the private endpoint. Use --master-authorized-networks to specify additional internal IP addresses that can access the master. You cannot include external IP addresses in the list, because access to the public endpoint is disabled.
  • Access using kubectl:
    From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and either their internal IP addresses are included in the list of master authorized networks or they are located in the same subnet as the cluster's nodes. kubectl must be configured to use the private endpoint.
    From public IP addresses: Never.

Public endpoint access enabled, master authorized networks enabled

  • Security: Restricted access to the master from both internal and external IP addresses that you define.
  • Detailed configuration steps: Creating a private cluster with limited access to the public endpoint.
  • Cloud Console configuration options: Select Enable VPC-native. Select Private cluster. Select Access master using its external IP address. Select Enable master authorized networks.
  • gcloud cluster creation flags: --enable-ip-alias, --enable-private-nodes, --enable-master-authorized-networks
  • Communication between nodes and master: Nodes always contact the master using the private endpoint.
  • Master authorized networks: Required for access to the master from external IP addresses, and from internal IP addresses other than nodes and Pods. Use --master-authorized-networks to specify external and internal IP addresses, other than nodes and Pods, that can access the master.
  • Access using kubectl:
    From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster and either their internal IP addresses are included in the list of master authorized networks or they are located in the same subnet as the cluster's nodes. kubectl must be configured to use the private endpoint.
    From public IP addresses: Machines with public IP addresses can use kubectl to communicate with the public endpoint only if their public IP addresses are included in the list of master authorized networks. This includes machines outside of Google Cloud and Google Cloud VMs with external IP addresses.

Public endpoint access enabled, master authorized networks disabled

  • Security: Access to the master from any IP address.
  • Detailed configuration steps: Creating a private cluster with unrestricted access to the public endpoint.
  • Cloud Console configuration options: Select Enable VPC-native. Select Private cluster. Select Access master using its external IP address. Clear Enable master authorized networks.
  • gcloud cluster creation flags: --enable-ip-alias, --enable-private-nodes, --no-enable-master-authorized-networks
  • Communication between nodes and master: Nodes always contact the master using the private endpoint.
  • Master authorized networks: Not used. If you enable access to the master's public endpoint without enabling master authorized networks, access to the master's public endpoint is not restricted.
  • Access using kubectl:
    From nodes: Always uses the private endpoint. kubectl must be configured to use the private endpoint.
    From other VMs in the cluster's VPC network: Other VMs can use kubectl to communicate with the private endpoint only if they are in the same region as the cluster. kubectl must be configured to use the private endpoint.
    From public IP addresses: Any machine with a public IP address can use kubectl to communicate with the public endpoint. This includes machines outside of Google Cloud and Google Cloud VMs with external IP addresses.
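
Where an entry above says that kubectl must be configured to use the private endpoint, you can do that by fetching credentials that point kubectl at the master's internal IP address. A sketch, reusing the hypothetical cluster from the earlier examples:

    # Run this from a machine that can reach the private endpoint, for
    # example a VM in the cluster's subnet or in a master authorized network.
    gcloud container clusters get-credentials my-private-cluster \
        --zone us-central1-a \
        --internal-ip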

What's next