Creating policy-compliant Google Cloud resources

This tutorial shows how platform administrators can use Anthos Config Management Policy Controller or Open Policy Agent (OPA) Gatekeeper policies to govern how you create Google Cloud resources by using Config Connector.

It assumes basic knowledge of Kubernetes or Google Kubernetes Engine (GKE). In the tutorial, you define a policy that restricts permitted locations for Cloud Storage buckets.

Overview

Policy Controller checks, audits, and enforces the compliance of your Kubernetes cluster resources with policies related to security, regulations, or business rules. Policy Controller is built from the OPA Gatekeeper open source project.

Config Connector creates and manages the lifecycle of Google Cloud resources, such as Cloud Storage buckets and Compute Engine virtual machine instances, by describing them as Kubernetes custom resources. To create a Google Cloud resource, you create a Kubernetes resource in a namespace that Config Connector manages. The following example shows how to describe a Cloud Storage bucket using Config Connector:

apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-bucket
spec:
  location: us-east1

By managing your Google Cloud resources with Config Connector, you can apply Policy Controller or OPA Gatekeeper policies to those resources as you create them in your GKE cluster. These policies let you prevent or report actions that create or modify resources in ways that violate your policies. For example, you can enforce a policy that restricts the locations of Cloud Storage buckets.

This approach, based on the Kubernetes resource model (KRM), lets you use a consistent set of tools and workflows to manage both Kubernetes and Google Cloud resources. This tutorial demonstrates how you can:

  • Define policies that govern your Google Cloud resources.
  • Implement controls that prevent developers and administrators from creating Google Cloud resources that violate your policies.
  • Implement controls that audit your existing Google Cloud resources against your policies, even if you created those resources outside Config Connector.
  • Provide fast feedback to developers and administrators as they create and update resource definitions.
  • Validate Google Cloud resource definitions against your policies before attempting to apply the definitions to a Kubernetes cluster.

Objectives

  • Create a GKE cluster that includes the Config Connector add-on.
  • Install Policy Controller or OPA Gatekeeper.
  • Create a policy to restrict permitted Cloud Storage bucket locations.
  • Verify that the policy prevents creation of Cloud Storage buckets in non-permitted locations.
  • Evaluate the policy compliance of Cloud Storage bucket definitions during development.
  • Audit existing Cloud Storage buckets for policy compliance.

Costs

This tutorial uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Cloud Storage

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

  2. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  3. In the Cloud Console, activate Cloud Shell.

  4. In Cloud Shell, set the Google Cloud project that you want to use for this tutorial:

    gcloud config set project PROJECT_ID
    

    Replace PROJECT_ID with the Cloud project ID of your project. When you run this command, Cloud Shell creates an exported environment variable called GOOGLE_CLOUD_PROJECT that contains your project ID. If you do not use Cloud Shell, you can create the environment variable with this command:

    export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value core/project)
    
  5. Enable the Cloud APIs and the GKE API:

    gcloud services enable \
      cloudapis.googleapis.com \
      container.googleapis.com
    
  6. Create a directory to store the files created for this tutorial:

    mkdir -p ~/cnrm-gatekeeper-tutorial
    
  7. Go to the directory that you created:

    cd ~/cnrm-gatekeeper-tutorial
    

Creating a GKE cluster

  1. In Cloud Shell, create a GKE cluster with the Config Connector add-on and Workload Identity:

    gcloud container clusters create CLUSTER_NAME \
      --addons ConfigConnector \
      --enable-ip-alias \
      --enable-stackdriver-kubernetes \
      --num-nodes 4 \
      --release-channel regular \
      --scopes cloud-platform \
      --workload-pool $GOOGLE_CLOUD_PROJECT.svc.id.goog \
      --zone ZONE
    

    Replace the following:

    • CLUSTER_NAME: The name of the cluster that you want to use for this tutorial, for example, cnrm-gatekeeper-tutorial.
    • ZONE: A Compute Engine zone close to your location, for example, asia-southeast1-b.

    The Config Connector add-on installs custom resource definitions (CRDs) for Google Cloud resources in your GKE cluster.
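
    To check that these CRDs are available, you can list them; the exact set depends on the add-on version:

    kubectl get crds | grep cnrm.cloud.google.com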

  2. (Optional) If you use a private cluster in your own environment, add a firewall rule that allows the GKE cluster control plane to connect to the Policy Controller or OPA Gatekeeper webhook:

    gcloud compute firewall-rules create allow-cluster-control-plane-tcp-8443 \
      --allow tcp:8443 \
      --network default \
      --source-ranges CONTROL_PLANE_CIDR \
      --target-tags NODE_TAG
    

    Replace the following:

    • CONTROL_PLANE_CIDR: The IP range for your GKE cluster control plane, for example, 172.16.0.16/28.
    • NODE_TAG: A tag applied to all the nodes in your GKE cluster.

    This firewall rule is required for the Policy Controller or OPA Gatekeeper webhook to work only when your cluster uses private nodes.
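
    If you don't know the IP range of your cluster control plane, you can look it up; this assumes a private cluster that was created with a control plane IP range:

    gcloud container clusters describe CLUSTER_NAME --zone ZONE \
      --format 'value(privateClusterConfig.masterIpv4CidrBlock)'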

Setting up Config Connector

The Google Cloud project where you install Config Connector is known as the host project. The projects where you use Config Connector to manage resources are known as managed projects. In this tutorial, you use Config Connector to create Google Cloud resources in the same project as your GKE cluster, so the host project and the managed project are the same project.

  1. In Cloud Shell, create a Google service account for Config Connector:

    gcloud iam service-accounts create SERVICE_ACCOUNT_NAME \
      --display-name "Config Connector Gatekeeper tutorial"
    

    Replace SERVICE_ACCOUNT_NAME with the name that you want to use for this service account, for example, cnrm-gatekeeper-tutorial. Config Connector uses this Google service account to create resources in your managed project.

  2. Grant the Storage Admin role to the Google service account:

    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
      --member "serviceAccount:SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
      --role roles/storage.admin
    

    In this tutorial, you use the Storage Admin role because you use Config Connector to create Cloud Storage buckets. In your own environment, grant the roles that Config Connector requires to manage the Google Cloud resources that you want to create. For more information about predefined roles, see Understanding roles in the IAM documentation.

  3. Create a Kubernetes namespace for the Config Connector resources that you create in this tutorial:

    kubectl create namespace NAMESPACE
    

    Replace NAMESPACE with the Kubernetes namespace that you will work with in the tutorial, for example, tutorial.

  4. Annotate the namespace to specify which project Config Connector should use to create Google Cloud resources (the managed project):

    kubectl annotate namespace NAMESPACE \
        cnrm.cloud.google.com/project-id=$GOOGLE_CLOUD_PROJECT
    
  5. Create a ConfigConnectorContext resource that enables Config Connector for the Kubernetes namespace and associates it with the Google service account you created:

    cat << EOF | kubectl apply -f -
    apiVersion: core.cnrm.cloud.google.com/v1beta1
    kind: ConfigConnectorContext
    metadata:
      name: configconnectorcontext.core.cnrm.cloud.google.com
      namespace: NAMESPACE
    spec:
      googleServiceAccount: SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
    EOF
    

    When you create the ConfigConnectorContext resource, Config Connector creates a Kubernetes service account and StatefulSet in the cnrm-system namespace to manage the Config Connector resources in your namespace.
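
    If you want to inspect these components, you can list them; the StatefulSet and service account names include the namespace that you created:

    kubectl get statefulsets,serviceaccounts --namespace cnrm-system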

  6. Wait for the Config Connector controller pod for your namespace to become ready:

    kubectl wait --namespace cnrm-system --for=condition=Ready pod \
      -l cnrm.cloud.google.com/component=cnrm-controller-manager,cnrm.cloud.google.com/scoped-namespace=NAMESPACE
    

    When the pod is ready, the command returns and the Cloud Shell prompt appears.
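
    The output is similar to the following; the exact pod name varies based on your namespace:

    pod/cnrm-controller-manager-NAMESPACE-0 condition met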

  7. Bind your Config Connector Kubernetes service account to your Google service account by creating an IAM policy binding:

    gcloud iam service-accounts add-iam-policy-binding \
      SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com \
      --member "serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[cnrm-system/cnrm-controller-manager-NAMESPACE]" \
      --role roles/iam.workloadIdentityUser
    

    This binding allows the cnrm-controller-manager-NAMESPACE Kubernetes service account in the cnrm-system namespace to act as the Google service account that you created.
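
    To verify the binding, you can view the IAM policy of the Google service account:

    gcloud iam service-accounts get-iam-policy \
      SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com

    The output lists the Kubernetes service account as a member with the roles/iam.workloadIdentityUser role.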

Installing the policy tool

If you have a managed Anthos cluster, follow the instructions to install Policy Controller. Otherwise, install the open source OPA Gatekeeper distribution.

Policy Controller

Install Policy Controller by following the installation instructions.

Use an audit interval of 60 seconds.
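
For reference, a minimal sketch of a ConfigManagement manifest that enables Policy Controller with this audit interval might look like the following; see the installation instructions for the authoritative manifest:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  policyController:
    # Enable Policy Controller and audit resources every 60 seconds.
    enabled: true
    auditIntervalSeconds: 60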

OPA Gatekeeper

  1. In Cloud Shell, define the OPA Gatekeeper version that you want to install:

    GATEKEEPER_VERSION=v3.4.0
    
  2. Install OPA Gatekeeper:

    kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/$GATEKEEPER_VERSION/deploy/gatekeeper.yaml
    
  3. Verify that OPA Gatekeeper is installed:

    kubectl rollout status deploy gatekeeper-controller-manager \
        -n gatekeeper-system
    

    When the installation completes, the output displays deployment "gatekeeper-controller-manager" successfully rolled out.

Creating a Google Cloud resource using Config Connector

  1. In Cloud Shell, create a Config Connector manifest that represents a Cloud Storage bucket in the us-central1 region:

    cat << EOF > tutorial-storagebucket-us-central1.yaml
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    metadata:
      name: tutorial-us-central1-$GOOGLE_CLOUD_PROJECT
      namespace: NAMESPACE
    spec:
      location: us-central1
    EOF
    
  2. Apply the manifest to create the Cloud Storage bucket:

    kubectl apply -f tutorial-storagebucket-us-central1.yaml
    
  3. Verify that Config Connector created the Cloud Storage bucket:

    gsutil ls | grep tutorial
    

    The output appears as follows after Config Connector creates the Cloud Storage bucket:

    gs://tutorial-us-central1-GOOGLE_CLOUD_PROJECT/
    

    where GOOGLE_CLOUD_PROJECT is your Cloud project ID.

    If you don't see this output, wait a minute and perform the step again.

Creating a policy

A policy in Policy Controller and OPA Gatekeeper consists of a constraint template and a constraint. The constraint template contains the policy logic. The constraint specifies where the policy applies and the input parameters to the policy logic.

  1. In Cloud Shell, create a constraint template that restricts Cloud Storage bucket locations:

    cat << EOF > tutorial-storagebucket-location-template.yaml
    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: gcpstoragelocationconstraintv1
    spec:
      crd:
        spec:
          names:
            kind: GCPStorageLocationConstraintV1
          validation:
            openAPIV3Schema:
              properties:
                locations:
                  type: array
                  items:
                    type: string
                exemptions:
                  type: array
                  items:
                    type: string
      targets:
      - target: admission.k8s.gatekeeper.sh
        rego: |
          package gcpstoragelocationconstraintv1
    
          allowedLocation(reviewLocation) {
              locations := input.parameters.locations
              satisfied := [ good | location = locations[_]
                                    good = lower(location) == lower(reviewLocation)]
              any(satisfied)
          }
    
          exempt(reviewName) {
              input.parameters.exemptions[_] == reviewName
          }
    
          violation[{"msg": msg}] {
              bucketName := input.review.object.metadata.name
              bucketLocation := input.review.object.spec.location
              not allowedLocation(bucketLocation)
              not exempt(bucketName)
              msg := sprintf("Cloud Storage bucket <%v> uses a disallowed location <%v>, allowed locations are %v", [bucketName, bucketLocation, input.parameters.locations])
          }
    
          violation[{"msg": msg}] {
              not input.parameters.locations
              bucketName := input.review.object.metadata.name
              msg := sprintf("No permitted locations provided in constraint for Cloud Storage bucket <%v>", [bucketName])
          }
    EOF
    
  2. Apply the manifest to create the constraint template:

    kubectl apply -f tutorial-storagebucket-location-template.yaml
    
  3. Create a constraint that allows buckets only in the Singapore and Jakarta regions (asia-southeast1 and asia-southeast2). The constraint applies to the namespace that you created earlier. It exempts the default Cloud Storage bucket for Cloud Build:

    cat << EOF > tutorial-storagebucket-location-constraint.yaml
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: GCPStorageLocationConstraintV1
    metadata:
      name: singapore-and-jakarta-only
    spec:
      enforcementAction: deny
      match:
        kinds:
        - apiGroups:
          - storage.cnrm.cloud.google.com
          kinds:
          - StorageBucket
        namespaces:
        - NAMESPACE
      parameters:
        locations:
        - asia-southeast1
        - asia-southeast2
        exemptions:
        - ${GOOGLE_CLOUD_PROJECT}_cloudbuild
    EOF
    
  4. Apply the constraint to limit the locations where buckets can be created:

    kubectl apply -f tutorial-storagebucket-location-constraint.yaml
    

Verifying the policy

  1. Create a manifest that represents a Cloud Storage bucket in a location that isn't allowed (us-west1):

    cat << EOF > tutorial-storagebucket-us-west1.yaml
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    metadata:
      name: tutorial-us-west1-$GOOGLE_CLOUD_PROJECT
      namespace: NAMESPACE
    spec:
      location: us-west1
    EOF
    
  2. Apply the manifest to create the Cloud Storage bucket:

    kubectl apply -f tutorial-storagebucket-us-west1.yaml
    

    The output is as follows:

    Error from server ([denied by singapore-and-jakarta-only] Cloud Storage bucket
    <tutorial-us-west1-GOOGLE_CLOUD_PROJECT> uses a disallowed
    location <us-west1>, allowed locations are
    "asia-southeast1", "asia-southeast2"): error when creating
    "tutorial-storagebucket-us-west1.yaml": admission webhook
    "validation.gatekeeper.sh" denied the request: [denied by
    singapore-and-jakarta-only] Cloud Storage bucket
    <tutorial-us-west1-GOOGLE_CLOUD_PROJECT> uses a
    disallowed location <us-west1>, allowed locations are
    "asia-southeast1", "asia-southeast2"
    
  3. (Optional) You can view a record of the decision to deny the request in Cloud Audit Logs. Query the Admin Activity logs for your project:

    gcloud logging read --limit=1 \
        "logName=\"projects/$GOOGLE_CLOUD_PROJECT/logs/cloudaudit.googleapis.com%2Factivity\""'
        resource.type="k8s_cluster"
        resource.labels.cluster_name="CLUSTER_NAME"
        resource.labels.location="ZONE"
        protoPayload.authenticationInfo.principalEmail!~"system:serviceaccount:cnrm-system:.*"
        protoPayload.methodName:"com.google.cloud.cnrm."
        protoPayload.status.code=7'
    

    The output looks similar to this:

    insertId: 3c6940bb-de14-4d18-ac4d-9a6becc70828
    labels:
      authorization.k8s.io/decision: allow
      authorization.k8s.io/reason: ''
      mutation.webhook.admission.k8s.io/round_0_index_0: '{"configuration":"mutating-webhook.cnrm.cloud.google.com","webhook":"container-annotation-handler.cnrm.cloud.google.com","mutated":true}'
      mutation.webhook.admission.k8s.io/round_0_index_1: '{"configuration":"mutating-webhook.cnrm.cloud.google.com","webhook":"management-conflict-annotation-defaulter.cnrm.cloud.google.com","mutated":true}'
    logName: projects/GOOGLE_CLOUD_PROJECT/logs/cloudaudit.googleapis.com%2Factivity
    operation:
      first: true
      id: 3c6940bb-de14-4d18-ac4d-9a6becc70828
      last: true
      producer: k8s.io
    protoPayload:
      '@type': type.googleapis.com/google.cloud.audit.AuditLog
      authenticationInfo:
        principalEmail: user@example.com
      authorizationInfo:
      - permission: com.google.cloud.cnrm.storage.v1beta1.storagebuckets.create
        resource: storage.cnrm.cloud.google.com/v1beta1/namespaces/NAMESPACE/storagebuckets/tutorial-us-west1-GOOGLE_CLOUD_PROJECT
      methodName: com.google.cloud.cnrm.storage.v1beta1.storagebuckets.create
      requestMetadata:
        callerIp: 203.0.113.1
        callerSuppliedUserAgent: kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841
      resourceName: storage.cnrm.cloud.google.com/v1beta1/namespaces/NAMESPACE/storagebuckets/tutorial-us-west1-GOOGLE_CLOUD_PROJECT
      serviceName: k8s.io
      status:
        code: 7
        message: Forbidden
    receiveTimestamp: '2021-05-21T06:56:24.940264678Z'
    resource:
      labels:
        cluster_name: CLUSTER_NAME
        location: ZONE
        project_id: GOOGLE_CLOUD_PROJECT
      type: k8s_cluster
    timestamp: '2021-05-21T06:56:09.060635Z'
    

    The methodName field shows the attempted operation, the resourceName field shows the full name of the Config Connector resource, and the status section shows that the request was unsuccessful, with error code 7 and the message Forbidden.

  4. Create a manifest that represents a Cloud Storage bucket in a permitted location (asia-southeast1):

    cat << EOF > tutorial-storagebucket-asia-southeast1.yaml
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    metadata:
      name: tutorial-asia-southeast1-$GOOGLE_CLOUD_PROJECT
      namespace: NAMESPACE
    spec:
      location: asia-southeast1
    EOF
    
  5. Apply the manifest to create the Cloud Storage bucket:

    kubectl apply -f tutorial-storagebucket-asia-southeast1.yaml
    

    The output is as follows:

    storagebucket.storage.cnrm.cloud.google.com/tutorial-asia-southeast1-GOOGLE_CLOUD_PROJECT created
    

    where GOOGLE_CLOUD_PROJECT is your Cloud project ID.

  6. Check that Config Connector created the Cloud Storage bucket:

    gsutil ls | grep tutorial
    

    After Config Connector creates the Cloud Storage bucket, the output is as follows:

    gs://tutorial-asia-southeast1-GOOGLE_CLOUD_PROJECT/
    gs://tutorial-us-central1-GOOGLE_CLOUD_PROJECT/
    

    If you don't see this output, wait a minute and perform this step again.

Auditing constraints

The audit controller in Policy Controller and OPA Gatekeeper periodically evaluates resources against their constraints. The controller detects policy violations for resources that were created before the constraint existed, and for resources that were created outside Config Connector.

  1. In Cloud Shell, view violations for all constraints that use the GCPStorageLocationConstraintV1 constraint template:

    kubectl get gcpstoragelocationconstraintv1 -o json \
      | jq '.items[].status.violations'
    

    The output is as follows:

    [
      {
        "enforcementAction": "deny",
        "kind": "StorageBucket",
        "message": "Cloud Storage bucket <tutorial-us-central1-GOOGLE_CLOUD_PROJECT>
        uses a disallowed location <us-central1>, allowed locations are
        \"asia-southeast1\", \"asia-southeast2\"",
        "name": "tutorial-us-central1-GOOGLE_CLOUD_PROJECT",
        "namespace": "NAMESPACE"
      }
    ]
    

    You see the Cloud Storage bucket that you created in us-central1 before you created the constraint.

Validating resources during development

During development and continuous integration builds, it's helpful to validate resources against constraints before you apply those resources to your GKE cluster. Validating provides fast feedback and lets you discover issues with resources and constraints early. These steps show you how to validate resources with kpt. The kpt command-line tool lets you manage and apply your Kubernetes resource manifests.

If you want to use an environment other than Cloud Shell, see the kpt website for installation instructions.

  1. In Cloud Shell, install kpt:

    sudo apt-get install -y google-cloud-sdk-kpt
    
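    To verify the installation, you can print the kpt version:

    kpt version
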
  2. Create and run a kpt pipeline:

    kpt fn source tutorial-*.yaml \
      | kpt fn run --image gcr.io/kpt-functions/gatekeeper-validate
    

    This pipeline uses a kpt source function to create a Kubernetes resource list that contains the constraint template, the constraint, and the Config Connector Cloud Storage bucket resources. The pipeline uses the gatekeeper-validate kpt config function to validate the Config Connector Cloud Storage bucket resources against the constraint. This function is packaged as a container image that is available in Container Registry.

    The function reports that the manifest files for Cloud Storage buckets in the us-central1 and us-west1 regions violate the constraint.

    The output is as follows:

    Error: Found 2 violations:
    
    [1] Cloud Storage bucket <tutorial-us-central1-GOOGLE_CLOUD_PROJECT>
    uses a disallowed location <us-central1>, allowed locations are
    "asia-southeast1", "asia-southeast2"
    
    name: "tutorial-us-central1-GOOGLE_CLOUD_PROJECT"
    path: tutorial-storagebucket-us-central1.yaml
    
    [2] Cloud Storage bucket <tutorial-us-west1-GOOGLE_CLOUD_PROJECT>
    uses a disallowed location <us-west1>, allowed locations are
    "asia-southeast1", "asia-southeast2"
    
    name: "tutorial-us-west1-GOOGLE_CLOUD_PROJECT"
    path: tutorial-storagebucket-us-west1.yaml
    
    error: exit status 1
    

Validating resources created outside Config Connector

You can validate Google Cloud resources that were created outside Config Connector by exporting the resources. After you export the resources, use either of the following options to evaluate your Policy Controller or OPA Gatekeeper policies against the exported resources:

  • Validate the resources in a kpt pipeline.

  • Import the resources into Config Connector.

To export the resources, you use Cloud Asset Inventory.

  1. In Cloud Shell, enable the Cloud Asset API:

    gcloud services enable cloudasset.googleapis.com
    
  2. Export all Cloud Storage resources in your current project, and store the output in the bucket tutorial-asia-southeast1-GOOGLE_CLOUD_PROJECT:

    gcloud asset export \
      --asset-types "storage.googleapis.com/Bucket" \
      --content-type resource \
      --project $GOOGLE_CLOUD_PROJECT \
      --output-path gs://tutorial-asia-southeast1-$GOOGLE_CLOUD_PROJECT/export.ndjson
    

    This command starts a background process to export the resources. The output looks similar to the following:

    Export in progress for root asset projects/GOOGLE_CLOUD_PROJECT.
    Use gcloud asset operations describe projects/GOOGLE_CLOUD_PROJECT/operations/ExportAssets/RESOURCE/UNIQUE_ID to check the status of the operation.
    
  3. Check if the export is finished by using the command displayed in the terminal output of the previous step:

    gcloud asset operations describe --format 'value(done)' \
      projects/PROJECT_NUMBER/operations/ExportAssets/RESOURCE/UNIQUE_ID
    

    Replace the following:

    • PROJECT_NUMBER: The Cloud project number for your project.
    • UNIQUE_ID: The export operation ID for your project.

    The --format flag limits the output to the value of the done field. When the export is finished, the output of the command is as follows:

    True
    

    If you want to wait until the operation completes, use the following command, which checks the status every 3 seconds:

    until gcloud asset operations describe --format 'value(done)' \
      projects/PROJECT_NUMBER/operations/ExportAssets/RESOURCE/UNIQUE_ID \
      | grep True ; do sleep 3 ; done
    
  4. Copy the file that contains the exported resources to your current directory:

    gsutil cp gs://tutorial-asia-southeast1-$GOOGLE_CLOUD_PROJECT/export.ndjson .
    

    The file is a newline-delimited JSON file (NDJSON) that contains one resource per line.
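
    For a quick look at the export, you can pretty-print the first exported resource:

    head -n 1 export.ndjson | jq .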

  5. Download the config-connector command-line tool to your current directory:

    gsutil cp gs://cnrm/latest/cli.tar.gz - \
      | tar xz --strip-components 3 ./linux/amd64/config-connector
    

    This download can take a minute to complete.

  6. Use the config-connector tool to convert the NDJSON file to a YAML file that contains Config Connector resource manifests:

    ./config-connector -iam-format=none < export.ndjson > export.yaml
    

    The -iam-format=none flag skips IAM policies in the output file. If you want to validate constraints for IAM policies in your own environment, remove the flag.

  7. Use a kpt pipeline to validate the resources against the Cloud Storage bucket location policy:

    kpt fn source tutorial-storagebucket-location-*.yaml export.yaml \
      | kpt fn run --image gcr.io/kpt-functions/set-namespace -- namespace=NAMESPACE \
      | kpt fn run --image gcr.io/kpt-functions/gatekeeper-validate
    

    This pipeline uses a kpt config function called set-namespace to set the namespace metadata attribute value of all the resources. Setting the namespace is necessary because the constraint applies only to resources in the namespace managed by Config Connector. The exported resources do not have a value for the namespace attribute.

    The output shows violations for the resources that you exported:

    Error: Found 1 violation:
    
    [1] Cloud Storage bucket <tutorial-us-central1-GOOGLE_CLOUD_PROJECT> uses a disallowed location <us-central1>, allowed locations are ["asia-southeast1", "asia-southeast2"]
    
    name: "tutorial-us-central1-GOOGLE_CLOUD_PROJECT"
    path: export.yaml
    
    error: exit status 1
    

    If your Cloud project contains Cloud Storage buckets that you created before working on this tutorial, and their location violates the constraint, the previously created buckets will appear in the output.

Congratulations, you have successfully set up a policy that governs the permitted location of Cloud Storage buckets. The tutorial is complete. You can now continue to add your own policies for other Google Cloud resources.

Troubleshooting

If Config Connector doesn't create the expected Google Cloud resources, use the following command in Cloud Shell to view the logs of the Config Connector controller manager:

kubectl logs --namespace cnrm-system --container manager \
  --selector cnrm.cloud.google.com/component=cnrm-controller-manager,cnrm.cloud.google.com/scoped-namespace=NAMESPACE

If Policy Controller or OPA Gatekeeper doesn't enforce policies correctly, use the following command to view the logs of the controller manager:

kubectl logs deployment/gatekeeper-controller-manager \
  --namespace gatekeeper-system

If Policy Controller or OPA Gatekeeper doesn't report violations in the status field of the constraint objects, view the logs of the audit controller using this command:

kubectl logs deployment/gatekeeper-audit --namespace gatekeeper-system

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

  1. In the Cloud Console, go to the Manage resources page.

  2. If the project that you plan to delete is attached to an organization, expand the Organization list in the Name column.
  3. In the project list, select the project that you want to delete, and then click Delete.
  4. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the resources

If you want to keep the Cloud project you used in this tutorial, delete the individual resources.

  1. In Cloud Shell, delete the Cloud Storage bucket location constraint:

    kubectl delete -f tutorial-storagebucket-location-constraint.yaml
    
  2. Add the cnrm.cloud.google.com/force-destroy annotation with a string value of true to all storagebucket resources in the namespace managed by Config Connector:

    kubectl annotate storagebucket --all --namespace NAMESPACE \
      cnrm.cloud.google.com/force-destroy=true
    

    This annotation is a directive that allows Config Connector to delete a Cloud Storage bucket when you delete the corresponding storagebucket resource in the GKE cluster, even if the bucket contains objects.

  3. Delete the Config Connector resources that represent the Cloud Storage buckets:

    kubectl delete --namespace NAMESPACE storagebucket --all
    
  4. Delete the GKE cluster:

    gcloud container clusters delete CLUSTER_NAME \
      --zone ZONE --async --quiet
    
  5. Delete the Workload Identity policy binding in IAM:

    gcloud iam service-accounts remove-iam-policy-binding \
      SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com \
      --member "serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[cnrm-system/cnrm-controller-manager-NAMESPACE]" \
      --role roles/iam.workloadIdentityUser
    
  6. Delete the Cloud Storage Admin role binding for the Google service account:

    gcloud projects remove-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
      --member "serviceAccount:SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
      --role roles/storage.admin
    
  7. Delete the Google service account that you created for Config Connector:

    gcloud iam service-accounts delete --quiet \
      SERVICE_ACCOUNT_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
    
