Enabling Workload Identity with Helm charts

This topic explains how to enable Workload Identity for Apigee hybrid using Helm charts.

If you are using apigeectl to install and manage Apigee hybrid, see Enabling Workload Identity with apigeectl.

Overview

Workload Identity is a way for applications running within GKE (Google Kubernetes Engine) to access Google Cloud services. For an overview of Workload Identity, see Workload Identity in the GKE documentation.

A Google Cloud IAM service account is an identity that an application can use to make requests to Google APIs. These service accounts are referred to as GSAs (Google service accounts) in this document. For more information about GSAs, see Service accounts.

Separately, Kubernetes also has the concept of service accounts. A service account provides an identity for processes that run in a Pod. Kubernetes service accounts are Kubernetes resources, while Google service accounts are specific to Google Cloud. For information on Kubernetes service accounts, see Configure Service Accounts for Pods in the Kubernetes documentation.

Apigee creates and uses a Kubernetes service account for each type of component when you first install the Helm charts for those components. Enabling Workload Identity associates these Kubernetes service accounts with Google service accounts, allowing the hybrid components to access Google APIs without service account key files.
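
For context, that association has two parts: an IAM policy binding that lets a Kubernetes service account (KSA) impersonate a GSA, and an annotation on the KSA naming that GSA. The following is a minimal sketch of the general GKE pattern with placeholder account names; for Apigee hybrid you do not need to compose these commands yourself, because the Helm charts print the exact commands for you (see Configure Workload Identity below).

# Allow the KSA in workload pool PROJECT_ID.svc.id.goog to impersonate the GSA.
# "example-gsa" and "example-ksa" are placeholder names.
gcloud iam service-accounts add-iam-policy-binding \
  example-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[apigee/example-ksa]"

# Annotate the KSA so GKE knows which GSA it maps to.
kubectl annotate serviceaccount example-ksa \
  --namespace apigee \
  iam.gke.io/gcp-service-account=example-gsa@my-project.iam.gserviceaccount.com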

Environment variables used in these procedures

These procedures use the following environment variables. Either set these in your command shell or replace them in the code samples with the actual values:

  • CLUSTER_LOCATION: The region or zone of your Kubernetes cluster, for example: us-west1.
  • CLUSTER_NAME: The name of your cluster.
  • ENV_NAME: The name of the Apigee environment.
  • ORG_NAME: The name of your Apigee organization.
  • PROJECT_ID: The ID of your Google Cloud project.
  • NAMESPACE: Your Apigee namespace (usually "apigee").

Verify the environment variables:

echo $PROJECT_ID
echo $ORG_NAME
echo $ENV_NAME
echo $NAMESPACE
echo $CLUSTER_LOCATION
echo $CLUSTER_NAME

Initialize any of the variables you need:

export PROJECT_ID=my-project-id
export ORG_NAME=$PROJECT_ID
export ENV_NAME=my-environment-name
export NAMESPACE=apigee
export CLUSTER_LOCATION=my-cluster-location
export CLUSTER_NAME=my-cluster-name

Workload Identity and service account key files

When running Apigee hybrid on GKE, the standard practice is to create and download private keys (.json files) for each of the service accounts. When using Workload Identity, you do not need to download service account private keys and add them to GKE clusters.

If you have downloaded service account key files as part of your Apigee hybrid installation, you can delete them after enabling Workload Identity. In most installations, they reside in the directory for each component's chart.
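
For example, assuming your chart directories live under a directory pointed to by APIGEE_HELM_CHARTS_HOME (a placeholder for wherever you keep the charts), you can locate leftover key files with a command like:

# List candidate key files under the chart directories. Review the output
# before deleting anything, since unrelated .json files may also match.
find $APIGEE_HELM_CHARTS_HOME -name "*.json"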

Enable Workload Identity for Apigee hybrid

Follow these instructions to configure Workload Identity for your project.

Migrated installation and Workload Identity

If you migrated your cluster from apigeectl management with the Apigee hybrid Helm migration tool, the overrides syntax for Workload Identity has changed. Check the following properties in your overrides file:

  • namespace is required. For example:
    instanceID: "hybrid-instance-1"
    namespace: "apigee"
    
  • The gcp.workloadIdentity.enabled property replaces the gcp.workloadIdentityEnabled property. For example:
    gcp:
      workloadIdentity:
        enabled: true
  • For production installations, each component has a gsa property. The value for these properties is the email address for the Google IAM service account for the corresponding component. For example:
    watcher:
      gsa: apigee-watcher@my-hybrid-project.iam.gserviceaccount.com
    
  • For non-production installations, you can supply a single GSA in the gcp.workloadIdentity.gsa property. For example:
    gcp:
      workloadIdentity:
        gsa: apigee-watcher@my-hybrid-project.iam.gserviceaccount.com
    
  • With Helm charts for Apigee hybrid, you can mix prod and non-prod GSAs for Workload Identity. You can specify a single GSA for the gcp.workloadIdentity.gsa property and specify individual GSAs for specific components. The values you provide for the individual components override the value you provide for gcp.workloadIdentity.gsa for that component only, as in the example after this list.
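
For example, the following hypothetical overrides fragment (the GSA names are illustrative) uses a single non-prod GSA for most components while giving the watcher component its own GSA:

gcp:
  workloadIdentity:
    enabled: true
    gsa: apigee-non-prod@my-hybrid-project.iam.gserviceaccount.com

watcher:
  gsa: apigee-watcher@my-hybrid-project.iam.gserviceaccount.com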

Prepare to configure Workload Identity

  1. Verify that Workload Identity is enabled in your overrides file. You should have set gcp.workloadIdentity.enabled and the gsa properties described in Migrated installation and Workload Identity above.
  2. Check that the current gcloud configuration is set to your Google Cloud project ID with the following command:
    gcloud config get project
  3. If needed, set the current gcloud configuration:

    gcloud config set project $PROJECT_ID
  4. Verify that Workload Identity is enabled for your GKE cluster. When you created the cluster in Step 1: Create a cluster, step 6 was to Enable Workload Identity. You can confirm whether Workload Identity is enabled by running the following command:

    Regional clusters

    gcloud container clusters describe $CLUSTER_NAME \
      --region $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten 'workloadIdentityConfig'

    Zonal clusters

    gcloud container clusters describe $CLUSTER_NAME \
      --zone $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten 'workloadIdentityConfig'

    Your output should look like the following:

    ---
    workloadPool: PROJECT_ID.svc.id.goog

    If you see null in the results instead, run the following command to enable Workload Identity for your cluster:

    Regional clusters

    gcloud container clusters update $CLUSTER_NAME \
      --workload-pool=$PROJECT_ID.svc.id.goog \
      --project $PROJECT_ID \
      --region $CLUSTER_LOCATION

    Zonal clusters

    gcloud container clusters update $CLUSTER_NAME \
      --workload-pool=$PROJECT_ID.svc.id.goog \
      --zone $CLUSTER_LOCATION \
      --project $PROJECT_ID
  5. Enable Workload Identity for each node pool with the following commands. This operation can take up to 30 minutes for each node:

    Regional clusters

    gcloud container node-pools update NODE_POOL_NAME \
      --cluster=$CLUSTER_NAME \
      --region $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --workload-metadata=GKE_METADATA

    Zonal clusters

    gcloud container node-pools update NODE_POOL_NAME \
      --cluster=$CLUSTER_NAME \
      --zone $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --workload-metadata=GKE_METADATA

    Where NODE_POOL_NAME is the name of each node pool. In most Apigee hybrid installations, the two default node pools are named apigee-data and apigee-runtime. A loop that updates both default pools is sketched after this procedure.

  6. Verify that Workload Identity is enabled on your node pools with the following commands:

    Regional clusters

    gcloud container node-pools describe apigee-data \
      --cluster $CLUSTER_NAME \
      --region $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten "config:"
    gcloud container node-pools describe apigee-runtime \
      --cluster $CLUSTER_NAME \
      --region $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten "config:"

    Zonal clusters

    gcloud container node-pools describe apigee-data \
      --cluster $CLUSTER_NAME \
      --zone $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten "config:"
    gcloud container node-pools describe apigee-runtime \
      --cluster $CLUSTER_NAME \
      --zone $CLUSTER_LOCATION \
      --project $PROJECT_ID \
      --flatten "config:"

    Your output should look something like:

    ---
    diskSizeGb: 100
    diskType: pd-standard
    ...
    workloadMetadataConfig:
      mode: GKE_METADATA

Configure Workload Identity

Use the following procedure to enable Workload Identity for the following hybrid components:

  • apigee-datastore
  • apigee-telemetry
  • apigee-org
  • apigee-env

When you run the helm upgrade with the --dry-run flag for the apigee-datastore, apigee-env, apigee-org, and apigee-telemetry charts, the output will include the commands you will need to configure Workload Identity with the correct GSA and KSA names.

For example:

helm upgrade datastore apigee-datastore/ \
  --namespace $NAMESPACE \
  -f overrides.yaml \
  --dry-run
NAME: datastore
...
For C* backup GKE Workload Identity, please make sure to add the below membership to the IAM policy binding using the respective kubernetes SA (KSA).
gcloud iam service-accounts add-iam-policy-binding  \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-cassandra-backup-sa]" \
      --project my-project
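
Note that the emitted command leaves the GSA email for you to supply as the first argument. A hypothetical completed command looks like the following; the GSA name apigee-cassandra@my-project.iam.gserviceaccount.com is illustrative, so substitute the account named in your own chart output:

# Let the Cassandra backup KSA impersonate its Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  apigee-cassandra@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[apigee/apigee-cassandra-backup-sa]" \
  --project my-project
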
  1. Get the command to set up Workload Identity for apigee-datastore and run the command under NOTES: in the output.
    helm upgrade datastore apigee-datastore/ \
      --namespace $NAMESPACE \
      -f overrides.yaml \
      --dry-run
  2. Get the commands to set up Workload Identity for apigee-telemetry and run the command under NOTES: in the output.
    helm upgrade telemetry apigee-telemetry/ \
      --namespace $NAMESPACE \
      -f overrides.yaml \
      --dry-run
  3. Get the commands to set up Workload Identity for apigee-org and run the command under NOTES: in the output.
    helm upgrade $ORG_NAME apigee-org/ \
      --namespace $NAMESPACE \
      -f overrides.yaml \
      --dry-run
  4. Get the commands to set up Workload Identity for apigee-env and run the command under NOTES: in the output.
    helm upgrade $ENV_NAME apigee-env/ \
      --namespace $NAMESPACE \
      --set env=$ENV_NAME \
      -f overrides.yaml \
      --dry-run

    Repeat this step for each environment in your installation, as in the loop sketched after this procedure.
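
If your installation has several environments, you can loop over them. This sketch assumes hypothetical environment names env-1 and env-2; replace them with your own:

# Print the Workload Identity setup commands for each environment.
for ENV_NAME in env-1 env-2; do
  helm upgrade $ENV_NAME apigee-env/ \
    --namespace $NAMESPACE \
    --set env=$ENV_NAME \
    -f overrides.yaml \
    --dry-run
done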

Verify Workload Identity

  1. Validate that the steps worked:
    gcloud config set project $PROJECT_ID
    
    kubectl run --rm -it --image google/cloud-sdk:slim \
      --namespace $NAMESPACE workload-identity-test \
      -- gcloud auth list

    If you don't see a command prompt, try pressing Enter.

    If the steps were correctly run, you should see a response like the following:

                       Credentialed Accounts
    ACTIVE  ACCOUNT
    *       GSA@PROJECT_ID.iam.gserviceaccount.com
    
  2. If upgrading from a previous install, clean up secrets that contained service account private keys:
    kubectl delete secrets -n $NAMESPACE $(kubectl get secrets -n $NAMESPACE | grep svc-account | awk '{print $1}')
    
  3. Check the synchronizer logs:
    kubectl logs -n $NAMESPACE -l app=apigee-synchronizer,env=$ENV_NAME,org=$ORG_NAME
    
  4. (Optional) You can see the status of your Kubernetes service accounts in the Kubernetes: Workloads Overview page in the Google Cloud Console.

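You can also confirm the link between a Kubernetes service account and its GSA by inspecting the iam.gke.io/gcp-service-account annotation, the standard GKE Workload Identity mechanism. The KSA name below is the Cassandra backup account from the example output earlier; other KSA names in your cluster will vary:

# List the Kubernetes service accounts in the Apigee namespace.
kubectl get serviceaccounts -n $NAMESPACE

# Inspect one KSA; the Workload Identity link appears in the
# iam.gke.io/gcp-service-account annotation.
kubectl get serviceaccount apigee-cassandra-backup-sa \
  -n $NAMESPACE -o yaml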