Advanced private cluster configurations


This page describes some advanced configurations that you might want when creating a private cluster. To learn the basic configuration of a private cluster, see Creating a private cluster.

Granting private nodes outbound internet access

To provide outbound internet access for your private nodes, such as to pull images from an external registry, create a Cloud Router and configure Cloud NAT on it. Cloud NAT lets private clusters establish outbound connections over the internet to send and receive packets.

The Cloud Router lets all of the nodes in the region use Cloud NAT for all of their primary and alias IP ranges. Cloud NAT also automatically allocates the external IP addresses for the NAT gateway.

For instructions to create and configure a Cloud Router, refer to Create a Cloud NAT configuration using Cloud Router in the Cloud NAT documentation.
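
As a minimal sketch, the following commands create a Cloud Router and a Cloud NAT gateway that serves all subnet IP ranges in the region. The names nat-router and nat-config are placeholders, and CLUSTER_VPC_NETWORK and COMPUTE_REGION stand in for your cluster's VPC network and region:

gcloud compute routers create nat-router \
    --network=CLUSTER_VPC_NETWORK \
    --region=COMPUTE_REGION

gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=COMPUTE_REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges

The --nat-all-subnet-ip-ranges flag applies NAT to all primary and alias IP ranges of the subnets in the region, and --auto-allocate-nat-external-ips lets the gateway allocate its external IP addresses automatically.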

Creating a private cluster in a Shared VPC network

To learn how to create a private cluster in a Shared VPC network, see Creating a private cluster in a Shared VPC.

Deploying a Windows Server container application to a private cluster

To learn how to deploy a Windows Server container application to a private cluster, refer to the Windows node pool documentation.

Accessing the control plane's private endpoint globally

The control plane's private endpoint is implemented by an internal passthrough Network Load Balancer in the control plane's VPC network. Internal passthrough Network Load Balancers are reachable only by internal clients and by clients connected through Cloud VPN tunnels or Cloud Interconnect VLAN attachments.

By default, these clients must be located in the same region as the load balancer.

When you enable control plane global access, the internal passthrough Network Load Balancer is globally accessible: client VMs and on-premises systems in any region can connect to the control plane's private endpoint, subject to the authorized networks configuration.

For more information about internal passthrough Network Load Balancers and global access, see Internal load balancers and connected networks.

Enabling control plane private endpoint global access

By default, global access is not enabled for the control plane's private endpoint when you create a private cluster. To enable control plane global access, use the following tools based on your cluster mode:

  • For Standard clusters, you can use Google Cloud CLI or the Google Cloud console.
  • For Autopilot clusters, you can use the google_container_cluster Terraform resource.

Console

To create a new private cluster with control plane global access enabled, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

  2. Click Create, then in the Standard or Autopilot section, click Configure.

  3. Enter a Name.

  4. In the navigation pane, click Networking.

  5. Select Private cluster.

  6. Select the Enable Control plane global access checkbox.

  7. Configure other fields as you want.

  8. Click Create.

To enable control plane global access for an existing private cluster, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

  2. Next to the cluster you want to edit, click Actions, then click Edit.

  3. In the Networking section, next to Control plane global access, click Edit.

  4. In the Edit control plane global access dialog, select the Enable Control plane global access checkbox.

  5. Click Save Changes.

gcloud

To create a private cluster with global access to the control plane's private endpoint enabled, add the --enable-master-global-access flag:

gcloud container clusters create CLUSTER_NAME \
    --enable-private-nodes \
    --enable-master-global-access

You can also enable global access to the control plane's private endpoint for an existing private cluster:

gcloud container clusters update CLUSTER_NAME \
    --enable-master-global-access

Verifying control plane private endpoint global access

To verify that global access to the control plane's private endpoint is enabled, run the following command:

gcloud container clusters describe CLUSTER_NAME

The output includes a privateClusterConfig section where you can see the status of masterGlobalAccessConfig.

privateClusterConfig:
  enablePrivateNodes: true
  masterIpv4CidrBlock: 172.16.1.0/28
  peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
  privateEndpoint: 172.16.1.2
  publicEndpoint: 34.68.128.12
  masterGlobalAccessConfig:
    enabled: true
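
If you only need that field, you can narrow the output with a --format expression, for example:

gcloud container clusters describe CLUSTER_NAME \
    --format="value(privateClusterConfig.masterGlobalAccessConfig.enabled)"

This prints only the value of the enabled field.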

Accessing the control plane's private endpoint from other networks

When you create a GKE private cluster and disable the control plane's public endpoint, you must administer the cluster with tools such as kubectl through the control plane's private endpoint. You can access the control plane's private endpoint from another network, including the following:

  • An on-premises network that's connected to the cluster's VPC network using Cloud VPN tunnels or Cloud Interconnect VLAN attachments
  • Another VPC network that's connected to the cluster's VPC network using Cloud VPN tunnels

The following diagram shows a routing path between an on-premises network and GKE control plane nodes:

[Diagram: routing path between the on-premises network and the cluster control plane]

To allow systems in another network to connect to a cluster's control plane private endpoint, complete the following steps:

  1. Identify and record relevant network information for the cluster and its control plane's private endpoint.

    gcloud container clusters describe CLUSTER_NAME \
       --location=COMPUTE_LOCATION \
       --format="yaml(network, privateClusterConfig)"
    

    Replace the following:

    • CLUSTER_NAME: the name of your cluster.
    • COMPUTE_LOCATION: the Compute Engine location for your cluster.

    From the output of the command, identify and record the following information to use in the next steps:

    • network: The name or URI for the cluster's VPC network.
    • privateEndpoint: The IPv4 address of the control plane's private endpoint or the enclosing IPv4 CIDR range (masterIpv4CidrBlock).
    • peeringName: The name of the VPC Network Peering connection used to connect the cluster's VPC network to the control plane's VPC network.

    The output is similar to the following:

    network: cluster-network
    privateClusterConfig:
      enablePrivateNodes: true
      masterGlobalAccessConfig:
        enabled: true
      masterIpv4CidrBlock: 172.16.1.0/28
      peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
      privateEndpoint: 172.16.1.2
      publicEndpoint: 34.68.128.12
    
  2. Consider enabling control plane private endpoint global access to allow packets to enter from any region in the cluster's VPC network. Enabling control plane private endpoint global access lets you connect to the private endpoint using Cloud VPN tunnels or Cloud Interconnect VLAN attachments located in any region, not just the cluster's region.

  3. Create a route for the privateEndpoint IP address or the masterIpv4CidrBlock IP address range in the other network. Because the control plane's private endpoint IP address always fits within the masterIpv4CidrBlock IPv4 address range, creating a route for either the privateEndpoint IP address or its enclosing range provides a path for packets from the other network to the control plane's private endpoint if:

    • The other network connects to the cluster's VPC network using Cloud Interconnect VLAN attachments or Cloud VPN tunnels that use dynamic (BGP) routes: Use a Cloud Router custom route advertisement. For more information, see Advertising Custom IP Ranges in the Cloud Router documentation.

    • The other network connects to the cluster's VPC network using Classic VPN tunnels that do not use dynamic routes: You must configure a static route in the other network. Example commands for both options are shown after this procedure.

  4. Configure the cluster's VPC network to export its custom routes in the peering relationship to the control plane's VPC network. Google Cloud always configures the control plane's VPC network to import custom routes from the cluster's VPC network. This step provides a path for packets from the control plane's private endpoint back to the other network.

    To enable custom route export from your cluster's VPC network, use the following command:

    gcloud compute networks peerings update PEERING_NAME \
        --network=CLUSTER_VPC_NETWORK \
        --export-custom-routes
    

    Replace the following:

    • PEERING_NAME: the name for the peering that connects the cluster's VPC network to the control plane VPC network
    • CLUSTER_VPC_NETWORK: the name or URI of the cluster's VPC network

    When custom route export is enabled for the VPC, creating routes that overlap with Google Cloud IP ranges might break your cluster.

    For more details about how to update route exchange for existing VPC Network Peering connections, see Update the peering connection.

    Custom routes in the cluster's VPC network include routes whose destinations are IP address ranges in other networks, for example, an on-premises network. To ensure that these routes become effective as peering custom routes in the control plane's VPC network, see Supported destinations from the other network.
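
The following commands sketch the two routing options from step 3. They assume that the connection to the other network is managed by a Cloud Router named ROUTER_NAME (dynamic routing) or, for the static case, that the other network is another VPC network named OTHER_VPC_NETWORK connected by a Classic VPN tunnel named TUNNEL_NAME. MASTER_IPV4_CIDR is the masterIpv4CidrBlock value that you recorded earlier, and the route name control-plane-route is a placeholder; adjust the commands to your own topology:

# Dynamic (BGP) routing: advertise the control plane range as a custom route
# from the Cloud Router in the cluster's VPC network.
gcloud compute routers update ROUTER_NAME \
    --region=COMPUTE_REGION \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=MASTER_IPV4_CIDR

# Static routing (Classic VPN, other network is another VPC network):
# route the control plane range into the VPN tunnel.
gcloud compute routes create control-plane-route \
    --network=OTHER_VPC_NETWORK \
    --destination-range=MASTER_IPV4_CIDR \
    --next-hop-vpn-tunnel=TUNNEL_NAME \
    --next-hop-vpn-tunnel-region=COMPUTE_REGION

Including the all_subnets group keeps the Cloud Router advertising the VPC subnet routes in addition to the custom range.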

Supported destinations from the other network

The address ranges that the other network sends to Cloud Routers in the cluster's VPC network must adhere to the following conditions:

  • While your cluster's VPC might accept a default route (0.0.0.0/0), the control plane's VPC network always rejects default routes because it already has a local default route. If the other network sends a default route to your VPC network, the other network must also send the specific destinations of systems that need to connect to the control plane's private endpoint. For more details, see Routing order.

  • If the control plane's VPC network accepts routes that effectively replace a default route, those routes break connectivity to Google Cloud APIs and services, interrupting the cluster control plane. As a representative example, the other network must not advertise routes with destinations 0.0.0.0/1 and 128.0.0.0/1. Refer to the previous point for an alternative.

Monitor the Cloud Router limits, especially the maximum number of unique destinations for learned routes.
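
For example, you can review the routes that a Cloud Router has learned with the get-status command, where ROUTER_NAME and COMPUTE_REGION are placeholders for your router's name and region:

gcloud compute routers get-status ROUTER_NAME \
    --region=COMPUTE_REGION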

Protecting a private cluster with VPC Service Controls

To further secure your GKE private clusters, you can protect them using VPC Service Controls.

VPC Service Controls provides additional security for your GKE private clusters to help mitigate the risk of data exfiltration. Using VPC Service Controls, you can add projects to service perimeters that protect resources and services from requests that originate outside the perimeter.

To learn more about service perimeters, see Service perimeter details and configuration.

If you use Artifact Registry with your GKE private cluster in a VPC Service Controls service perimeter, you must configure routing to the restricted virtual IP to prevent exfiltration of data.
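
As a sketch of the routing piece, the following command creates a static route that sends traffic for the restricted virtual IP range (199.36.153.4/30, restricted.googleapis.com) through the default internet gateway. The route name restricted-vip is a placeholder, and you also need DNS configuration that resolves the Google APIs and Artifact Registry domains to that range:

gcloud compute routes create restricted-vip \
    --network=CLUSTER_VPC_NETWORK \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway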