Protecting cluster metadata


Google Kubernetes Engine (GKE) uses instance metadata to configure node virtual machines (VMs), but some of this metadata is potentially sensitive and should be protected from workloads running on the cluster.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Configure node service account

Each node's service account credentials remain exposed to workloads even when other metadata protections are in place. By default, your nodes use the Compute Engine default service account. Instead, configure a minimally privileged service account for your nodes and attach it to them, so that an attacker cannot circumvent GKE metadata protections by using the Compute Engine API to access the underlying VM instances directly.

For more information, refer to Use least privilege node service accounts.

To create a minimally privileged node service account, perform the following steps:

  1. Create a new Identity and Access Management (IAM) service account and save the email address in an environment variable:

    gcloud iam service-accounts create NODE_SA_NAME \
        --display-name="DISPLAY_NAME"
    export NODE_SA_EMAIL=$(gcloud iam service-accounts list --format='value(email)' \
        --filter='displayName:DISPLAY_NAME')
    

    Replace the following:

    • NODE_SA_NAME: the name of your new node service account.
    • DISPLAY_NAME: the display name of the new service account.

    The node service account email address has the format NODE_SA_NAME@PROJECT_ID.iam.gserviceaccount.com.

  2. Configure your service account with the minimum roles and permissions to run your GKE nodes:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:$NODE_SA_EMAIL \
        --role=roles/monitoring.metricWriter
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:$NODE_SA_EMAIL \
        --role=roles/monitoring.viewer
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:$NODE_SA_EMAIL \
        --role=roles/logging.logWriter
    

    Replace PROJECT_ID with your Google Cloud project ID.

    Additionally, if your cluster pulls private images from Artifact Registry, add the roles/artifactregistry.reader role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member=serviceAccount:$NODE_SA_EMAIL \
        --role=roles/artifactregistry.reader
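
After granting the roles, you can verify what the account actually holds at the project level by flattening the project IAM policy and filtering on the service account. This is a sketch, assuming an authenticated gcloud CLI and the NODE_SA_EMAIL variable exported in step 1; replace PROJECT_ID as in the commands above:

```shell
# Confirm the service account exists and the email variable resolved.
gcloud iam service-accounts describe "$NODE_SA_EMAIL"

# List every project-level role bound to the node service account.
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:$NODE_SA_EMAIL" \
    --format="value(bindings.role)"
```

The output should list roles/monitoring.metricWriter, roles/monitoring.viewer, and roles/logging.logWriter, plus roles/artifactregistry.reader if you granted it.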
    

Metadata concealment

GKE metadata concealment protects some potentially sensitive system metadata from user workloads running on your cluster.

You can enable metadata concealment to prevent user Pods from accessing certain VM metadata in your cluster's nodes, such as kubelet credentials and VM instance information. Specifically, metadata concealment protects access to kube-env (which contains kubelet credentials) and the VM's instance identity token.

Metadata concealment firewalls traffic from user Pods (Pods not running on HostNetwork) to the cluster metadata server, only allowing safe queries. The firewall prevents user Pods from using kubelet credentials for privilege escalation attacks, or from using VM identity for instance escalation attacks.

You can only enable metadata concealment when creating a new cluster, or when adding a new node pool to an existing cluster.

Limitations

  • Metadata concealment only protects access to kube-env and the node's instance identity token.
  • Metadata concealment does not restrict access to the node's service account.
  • Metadata concealment does not restrict access to other related instance metadata.
  • Metadata concealment does not restrict access to other legacy metadata APIs.
  • Metadata concealment doesn't restrict traffic from Pods running on the host network (hostNetwork: true in the Pod specification).
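
Because of the last limitation, a Pod started with hostNetwork: true can still reach the concealed endpoints. The following sketch illustrates the distinction; the Pod name is arbitrary and the command assumes kubectl access to the cluster:

```shell
# A host-network Pod bypasses metadata concealment entirely.
kubectl run host-net-test -it --rm \
    --image=google/cloud-sdk:slim \
    --overrides='{"spec": {"hostNetwork": true}}' \
    -- /bin/bash

# Inside this Pod, concealed endpoints such as kube-env remain reachable,
# unlike in a regular (non-hostNetwork) Pod.
```

Restrict who can create host-network Pods (for example, with admission policies) so that this escape hatch is not available to untrusted workloads.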

Creating a new cluster or node pool with metadata concealment

After creating a service account, you can create a new cluster or node pool with metadata concealment enabled by using the Google Cloud CLI.

Create a new cluster

To create a cluster with metadata concealment enabled, run the following command:

gcloud beta container clusters create CLUSTER_NAME \
  --workload-metadata-from-node=SECURE \
  --service-account=$NODE_SA_EMAIL

Replace CLUSTER_NAME with the name of your new cluster.

The --workload-metadata-from-node flag takes the following values:

  • SECURE: enable metadata concealment.
  • EXPOSED or UNSPECIFIED: disable metadata concealment.
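
To confirm which value is in effect on a node pool, you can inspect its configuration. This is a sketch; the default-pool name and the workloadMetadataConfig field path are assumptions based on the legacy workload metadata configuration, so verify them against your gcloud version:

```shell
# Inspect the workload metadata setting on the cluster's default node pool.
gcloud beta container node-pools describe default-pool \
    --cluster=CLUSTER_NAME \
    --format="value(config.workloadMetadataConfig)"
```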

Create a new node pool

To create a node pool with metadata concealment enabled, run the following command:

gcloud beta container node-pools create NODE_POOL_NAME \
  --cluster=CLUSTER_NAME \
  --workload-metadata-from-node=SECURE \
  --service-account=$NODE_SA_EMAIL

Replace NODE_POOL_NAME with the name of the new node pool.

Verifying that identity token metadata is concealed from the cluster's workloads

When metadata is concealed, workloads should not be able to request a signature through the node's instance identity token. To verify that such requests return an explicit concealment message, do the following:

  1. Open a shell session in a new Pod:

    kubectl run metadata-concealment -it --image=google/cloud-sdk:slim -- /bin/bash
    
  2. In the Pod, try to get a concealed endpoint:

    curl -H "Metadata-Flavor: Google" \
    'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://www.example.com'
    

    The output is similar to the following:

    This metadata endpoint is concealed.
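
By contrast, endpoints that concealment does not protect, such as the node service account's access token (see Limitations), remain reachable. As a contrast check, you can run the following from the same Pod shell opened in step 1; this is a sketch of the unprotected path, not an endorsement of exposing it:

```shell
# The service account token endpoint is NOT concealed,
# so this request still returns an OAuth access token.
curl -H "Metadata-Flavor: Google" \
    'http://metadata/computeMetadata/v1/instance/service-accounts/default/token'
```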
    

Disabling and transitioning from legacy metadata APIs

The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shut down on September 30, 2020.

For the shutdown schedule, refer to v0.1 and v1beta1 metadata server endpoints deprecation.

What's next