Change isolation in clusters that use Private Service Connect


This page shows you how to change the network isolation for your cluster's control plane and cluster nodes. Changing the isolation mode of a cluster is only supported for clusters that use Private Service Connect to privately connect the control plane and nodes.

Why change cluster isolation

By default, when you create clusters that use Private Service Connect, GKE assigns an external IP address (external endpoint) to the control plane. This means that any VM with an external IP address can reach the control plane.

If you configure authorized networks, you can limit the IP address ranges that have access to your cluster control plane, but the cluster control plane is still accessible from Google Cloud-owned IP addresses. For example, any VM with an external IP address assigned in Google Cloud can reach your control plane external IP address. However, a VM without the corresponding credentials cannot access your cluster.
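
For example, a minimal sketch of an authorized networks configuration follows; the CIDR range is a placeholder for your own allowed range:

gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/29

Replace CLUSTER_NAME with the name of the cluster.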

Benefits

Network isolation provides the following benefits:

  • You can configure a mix of private and public nodes in the same cluster. This can reduce costs for nodes that don't require an external IP address to access public services on the internet.
  • You can block control plane access from Google Cloud-owned IP addresses or from external IP addresses to fully isolate the cluster control plane.

This page shows you how to change this default behavior by taking the following actions:

  • Enabling or disabling access to the control plane from Google Cloud-owned IP addresses. This action prevents any VM with a Google Cloud-owned IP address from reaching your control plane. For more information, see block control plane access from Google Cloud-owned IP addresses.
  • Enabling or disabling public access to the external endpoint of the control plane. This action fully isolates your cluster; the control plane is not accessible from any public IP address. For more information, see isolate the cluster control plane.
  • Removing public IP addresses from nodes. This action fully isolates your workloads. For more information, see isolate node pools.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Block control plane access from Google Cloud VMs, Cloud Run, and Cloud Functions

If you created a cluster with Private Service Connect predefined as public, the authorized networks feature is disabled by default.

If you created a cluster with Private Service Connect predefined as private, the authorized networks feature is enabled by default. To learn what IP addresses can always access the GKE control plane, see Access to control plane endpoints.
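
To check whether the authorized networks feature is enabled on an existing cluster, one option is to read the corresponding field from the cluster description:

gcloud container clusters describe CLUSTER_NAME \
    --format="value(masterAuthorizedNetworksConfig.enabled)"

Replace CLUSTER_NAME with the name of the cluster.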

To remove access to the control plane of your cluster from Google Cloud VMs, Cloud Run, and Cloud Functions, use the gcloud CLI or the Google Cloud console:

gcloud

  1. Update your cluster to use the --no-enable-google-cloud-access flag:

    gcloud container clusters update CLUSTER_NAME \
        --no-enable-google-cloud-access
    

    Replace CLUSTER_NAME with the name of the cluster.

  2. Confirm that the --no-enable-google-cloud-access flag is applied:

    gcloud container clusters describe CLUSTER_NAME | grep "gcpPublicCidrsAccessEnabled"
    

    The output is similar to the following:

    gcpPublicCidrsAccessEnabled: false
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Networking, in the Control plane authorized networks field, click Edit control plane authorized networks.

  4. Clear the Allow access through Google Cloud public IP addresses checkbox.

  5. Click Save Changes.

Allow access to the control plane from Google Cloud-owned IP addresses

To allow access from public IP addresses owned by Google Cloud to the cluster control plane, run the following command:

gcloud

gcloud container clusters update CLUSTER_NAME \
    --enable-google-cloud-access

Replace CLUSTER_NAME with the name of the cluster.

Google Cloud-owned IP addresses can access your cluster control plane.
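
To confirm the change, you can repeat the check from the previous section; the field should now read true:

gcloud container clusters describe CLUSTER_NAME | grep "gcpPublicCidrsAccessEnabled"

The output is similar to the following:

gcpPublicCidrsAccessEnabled: true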

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Networking, in the Control plane authorized networks field, click Edit control plane authorized networks.

  4. Check the Allow access through Google Cloud public IP addresses checkbox.

  5. Click Save Changes.

Disable external access to the control plane in clusters that use Private Service Connect

Clusters created as public

By default, when you create a GKE public cluster, GKE assigns an external IP address (external endpoint) to the control plane. If you instruct GKE to unassign this external endpoint, GKE enables a private endpoint. Access to your control plane from external IP addresses is disabled, except from Google Cloud services that run cluster management processes. For more information about the enabled private endpoint and its limitations, see public clusters with Private Service Connect.

To change control plane isolation for a cluster created as public, use the gcloud CLI:

  1. Update your cluster to use the --enable-private-endpoint flag:

    gcloud container clusters update CLUSTER_NAME \
        --enable-private-endpoint
    

    Replace CLUSTER_NAME with the name of the public cluster.

  2. Confirm that the --enable-private-endpoint flag is applied:

    gcloud container clusters describe CLUSTER_NAME | grep "enablePrivateEndpoint"
    

    The output is similar to the following:

    enablePrivateEndpoint: true
    

Clusters created as private

By default, when you create a GKE private cluster, GKE assigns an external IP address (external endpoint) and an internal IP address (internal endpoint) to the control plane. You can unassign this external endpoint, but you can't unassign the internal endpoint.

If you instruct GKE to unassign the external endpoint, access to your control plane from external IP addresses is disabled.

To remove the external endpoint in a cluster created as private, use the gcloud CLI or the Google Cloud console:

gcloud

  1. Update your cluster to use the --enable-private-endpoint flag:

    gcloud container clusters update CLUSTER_NAME \
        --enable-private-endpoint
    

    Replace CLUSTER_NAME with the name of the private cluster.

  2. Confirm that the --enable-private-endpoint flag is applied:

    gcloud container clusters describe CLUSTER_NAME | grep "enablePrivateEndpoint"
    

    The output is similar to the following:

    enablePrivateEndpoint: true
    

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Cluster basics, in the External endpoint field, click Edit external control plane access.

  4. Clear the Access control plane using its external IP address checkbox.

  5. Click Save Changes.

Enable external access to the control plane in clusters that use Private Service Connect

To assign an external IP address (external endpoint) to the control plane in clusters created as public or private, use the gcloud CLI or the Google Cloud console:

gcloud

Run the following command:

gcloud container clusters update CLUSTER_NAME \
    --no-enable-private-endpoint

Replace CLUSTER_NAME with the name of the cluster.

External IP addresses can access your cluster control plane.
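
To confirm that the external endpoint is assigned, you can repeat the check from the previous sections; the field should now read false:

gcloud container clusters describe CLUSTER_NAME | grep "enablePrivateEndpoint"

The output is similar to the following:

enablePrivateEndpoint: false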

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Cluster basics, in the External endpoint field, click Edit external control plane access.

  4. Select the Access control plane using its external IP address checkbox.

  5. Click Save Changes.

Isolate node pools

You can instruct GKE to provision node pools with only private IP addresses. After you update a public node pool to private mode, workloads that require public internet access might fail. Before you change node isolation, see the Private Service Connect clusters limitations. You can edit this setting on clusters created as public or private:

Autopilot

In Autopilot clusters, add a nodeSelector to your Pods so that GKE schedules them only on private nodes:

  1. To request that GKE schedules a Pod on private nodes, add the following nodeSelector to your Pod specification (a complete example manifest follows these steps):

     nodeSelector:
       cloud.google.com/private-node: "true"
    

    GKE recreates your Pods on private nodes. To avoid workload disruption, migrate each workload independently and monitor the migration.

  2. If you are using Shared VPC, enable Private Google Access after changing the cluster isolation mode. If you are using Cloud NAT, you don't need to enable Private Google Access.
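
For reference, a minimal Pod manifest that uses this nodeSelector might look like the following; the Pod name, container name, and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: private-node-pod    # placeholder name
spec:
  nodeSelector:
    cloud.google.com/private-node: "true"    # schedule only on private nodes
  containers:
  - name: app    # placeholder container name
    image: IMAGE    # replace with your container image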

Standard

gcloud

To provision nodes with only private IP addresses in an existing node pool, run the following command:

gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --enable-private-nodes

Replace the following:

  • NODE_POOL_NAME: the name of the node pool that you want to edit.
  • CLUSTER_NAME: the name of the GKE cluster.

    If you are using Shared VPC, enable Private Google Access after changing the cluster isolation mode. If you are using Cloud NAT, you don't need to enable Private Google Access.
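
To confirm that the node pool now provisions private nodes, you can inspect the node pool description; the field name below is an assumption based on the node pool API:

gcloud container node-pools describe NODE_POOL_NAME \
    --cluster=CLUSTER_NAME | grep "enablePrivateNodes"

If the update succeeded, the output should include a line similar to enablePrivateNodes: true.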

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the list of clusters, click the cluster name.

  3. On the Clusters page, click the Nodes tab.

  4. Under Node Pools, click the node pool name.

  5. Click Edit.

  6. Select the Enable private nodes checkbox.

  7. Click Save.

Revert node pool isolation

In Standard clusters, to instruct GKE to provision node pools with public IP addresses, run the following command:

gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --no-enable-private-nodes

Replace the following:

  • NODE_POOL_NAME: the name of the node pool that you want to edit.
  • CLUSTER_NAME: the name of the GKE cluster.

Public IP addresses can access your cluster nodes.

Limitations

Before you change the cluster isolation mode, consider the following limitations:

  • Changing the isolation mode is not supported on public clusters running on legacy networks.
  • After you update a public node pool to private mode, workloads that require public internet access might fail in the following scenarios:
    • Clusters in a Shared VPC network where Private Google Access is not enabled. Manually enable Private Google Access to ensure that GKE can download the assigned node image. For clusters that aren't in a Shared VPC network, GKE automatically enables Private Google Access.
    • Workloads that require access to the internet where Cloud NAT is not enabled or a custom NAT solution is not defined. To allow egress traffic to the internet, enable Cloud NAT or define a custom NAT solution, as shown in the sketch after this list.
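
As a sketch of how you might address both scenarios, the following commands enable Private Google Access on a subnet and create a Cloud NAT gateway; SUBNET_NAME, NETWORK_NAME, ROUTER_NAME, NAT_CONFIG_NAME, and REGION are placeholders for your own values:

# Enable Private Google Access on the subnet used by your nodes:
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --enable-private-ip-google-access

# Create a Cloud Router, then a Cloud NAT gateway for internet egress:
gcloud compute routers create ROUTER_NAME \
    --network=NETWORK_NAME \
    --region=REGION

gcloud compute routers nats create NAT_CONFIG_NAME \
    --router=ROUTER_NAME \
    --region=REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges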

Private Service Connect clusters created as public or private

Clusters that were created as public and use Private Service Connect have a private endpoint enabled. This private endpoint doesn't support internal IP addresses in the URLs of new or existing webhooks that you configure. To mitigate this incompatibility, do the following:

  1. Set up a webhook with a private address by URL.
  2. Create a headless service without a selector.
  3. Create a corresponding endpoint for the required destination.
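
As an illustration of steps 2 and 3, a headless Service without a selector and its manually created Endpoints object might look like the following; the names, namespace, IP address, and ports are hypothetical:

# Headless Service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: webhook-backend    # hypothetical name
  namespace: default
spec:
  clusterIP: None    # headless: no cluster virtual IP
  ports:
  - port: 443
    targetPort: 8443
---
# Endpoints object with the same name as the Service:
apiVersion: v1
kind: Endpoints
metadata:
  name: webhook-backend    # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 10.0.0.10    # hypothetical destination IP
  ports:
  - port: 8443

The Service's in-cluster DNS name then resolves to the endpoint you defined, giving the webhook a stable private address to target.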

Private Service Connect clusters created as public

In Private Service Connect clusters created as public, and only in those clusters, all private IP addresses from the cluster's network can always reach the cluster's private endpoint. To learn more about control plane access, see Authorized networks for control plane access.

What's next