Reporting Policy Controller audit violations in Security Command Center

This tutorial shows platform security administrators how to view and manage policy violations for Kubernetes resources alongside other vulnerability and security findings in Security Command Center. In this tutorial, you can use either Policy Controller or Open Policy Agent (OPA) Gatekeeper.

Architecture

Policy Controller checks, audits, and enforces your Kubernetes cluster resources' compliance with policies related to security, regulations, or business rules. Policy Controller is built from the OPA Gatekeeper open source project.

The audit functionality in Policy Controller and OPA Gatekeeper lets you implement detective controls that periodically evaluate resources against policies. If an issue is detected, the controls create violations for resources that don't conform to the policies. These violations are stored in the cluster, and you can query them using Kubernetes tools such as kubectl.

To make these violations visible and to help you take actions, you can use Security Command Center. Security Command Center provides a dashboard and APIs for surfacing, understanding, and remediating security and data risks across an organization for Google Cloud resources, Kubernetes resources, and hybrid or multi-cloud resources.

Security Command Center displays possible security risks and policy violations, called findings. Findings come from sources, which are mechanisms that can detect and report risks and violations. Security Command Center includes built-in services, and you can add third-party sources and your own sources.

This tutorial and the associated source code show you how to create a source and findings in Security Command Center for Policy Controller and OPA Gatekeeper policy violations.

The following diagram shows the architecture that is implemented in this tutorial:

Architecture with a source, controller, and sync.

As the preceding diagram shows, in this tutorial you create a source in Security Command Center using a command-line tool. You deploy a controller to a Google Kubernetes Engine (GKE) cluster to synchronize Policy Controller and OPA Gatekeeper constraint violations to findings in Security Command Center.

If you want to see how to synchronize policy violations for Google Cloud resources, try out our tutorial about how to create policy-compliant Google Cloud resources using Config Connector and Policy Controller.

Objectives

  • Create a policy and a resource that violates the policy.
  • Create a source in Security Command Center.
  • Create a finding in Security Command Center from an OPA Gatekeeper policy violation using a command-line tool.
  • Deploy a controller to the GKE cluster to periodically synchronize findings in Security Command Center from OPA Gatekeeper policy violations.
  • View findings in your terminal and in the Cloud Console.

Costs

This tutorial uses the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  2. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  3. To complete this tutorial, you must have an appropriate editor role for Security Command Center at the organization level, such as Security Center Admin Editor. Your organization administrator can grant you this role.
  4. In the Cloud Console, activate Cloud Shell.

    Activate Cloud Shell

Preparing the environment

  1. In Cloud Shell, set the Cloud project that you want to use for this tutorial:

    gcloud config set project PROJECT_ID
    

    Replace PROJECT_ID with your Cloud project ID. When you run this command, Cloud Shell creates an exported environment variable called GOOGLE_CLOUD_PROJECT that contains your project ID.

  2. Enable the Google Kubernetes Engine API and the Security Command Center API:

    gcloud services enable \
        container.googleapis.com \
        securitycenter.googleapis.com
    

Creating a GKE cluster

  1. In Cloud Shell, create a GKE cluster with Workload Identity enabled:

    gcloud container clusters create gatekeeper-securitycenter-tutorial \
        --enable-ip-alias \
        --enable-stackdriver-kubernetes \
        --release-channel regular \
        --workload-pool $GOOGLE_CLOUD_PROJECT.svc.id.goog \
        --zone us-central1-f
    

    This command creates the cluster in the us-central1-f zone. You can use a different zone or region.

  2. Grant yourself the cluster-admin cluster role:

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole cluster-admin \
        --user $(gcloud config get-value core/account)
    

    You need this role later to create some of the Kubernetes resources used by the controller. You also need it if you install the open source OPA Gatekeeper distribution.

Installing the policy tool

If you have a managed Anthos cluster, follow the instructions to install Policy Controller. Otherwise, install the open source OPA Gatekeeper distribution.

Policy Controller

Install Policy Controller by following the installation instructions.

Use an audit interval of 60 seconds.
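If you install Policy Controller through the ConfigManagement operator, the audit interval is part of the Policy Controller spec. The following fragment is a sketch only; confirm the exact field names against the installation instructions for your version:

```yaml
# Sketch of a ConfigManagement fragment that enables Policy Controller
# with a 60-second audit interval. Field names may vary by version.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  policyController:
    enabled: true
    auditIntervalSeconds: 60
```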

OPA Gatekeeper

  1. In Cloud Shell, define the OPA Gatekeeper version that you want to install:

    GATEKEEPER_VERSION=v3.3.0
    
  2. Install OPA Gatekeeper:

    kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/$GATEKEEPER_VERSION/deploy/gatekeeper.yaml
    
  3. Verify that OPA Gatekeeper is installed:

    kubectl rollout status deploy gatekeeper-controller-manager \
        -n gatekeeper-system
    

    When the installation completes, the output displays deployment "gatekeeper-controller-manager" successfully rolled out.

Creating a policy

A policy in Policy Controller and OPA Gatekeeper consists of a constraint template and a constraint. The constraint template contains the policy logic. The constraint specifies where the policy applies and specifies input parameters for the policy logic.

In this section, you create a policy for Kubernetes Pods and a Pod that violates the policy.

  1. In Cloud Shell, clone the OPA Gatekeeper library repository, go to the repository directory, and check out a known commit:

    git clone https://github.com/open-policy-agent/gatekeeper-library.git \
        ~/gatekeeper-library
    
    cd ~/gatekeeper-library
    
    git checkout ce24dd6802b8c845f80a27731b9095cc0864726f
    
  2. Create a Pod called nginx-disallowed in the default namespace:

    kubectl apply -f library/general/allowedrepos/samples/repo-must-be-openpolicyagent/example_disallowed.yaml
    

    The following is the manifest that you apply to create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-disallowed
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              cpu: "100m"
              memory: "30Mi"
    

    This Pod uses a container image from a repository that isn't approved by the policy.

  3. Create a constraint template called k8sallowedrepos:

    kubectl apply -f library/general/allowedrepos/template.yaml
    

    The following is the constraint template manifest:

    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: k8sallowedrepos
    spec:
      crd:
        spec:
          names:
            kind: K8sAllowedRepos
          validation:
            # Schema for the `parameters` field
            openAPIV3Schema:
              properties:
                repos:
                  type: array
                  items:
                    type: string
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8sallowedrepos
    
            violation[{"msg": msg}] {
              container := input.review.object.spec.containers[_]
              satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
              not any(satisfied)
              msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
            }
    
            violation[{"msg": msg}] {
              container := input.review.object.spec.initContainers[_]
              satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
              not any(satisfied)
              msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
            }
    
  4. Create a constraint called repo-is-openpolicyagent:

    kubectl apply -f library/general/allowedrepos/samples/repo-must-be-openpolicyagent/constraint.yaml
    

    The following is the constraint manifest:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sAllowedRepos
    metadata:
      name: repo-is-openpolicyagent
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]
        namespaces:
          - "default"
      parameters:
        repos:
          - "openpolicyagent"
    

Auditing constraints

The audit controller in Policy Controller and OPA Gatekeeper periodically evaluates resources against constraints. This auditing lets you detect policy-violating resources that were created before you created the constraint.

  1. In Cloud Shell, view violations for all constraints by querying the constraint category:

    kubectl get constraint -o json | jq '.items[].status.violations'
    

    The output is the following:

    [
      {
        "enforcementAction": "deny",
        "kind": "Pod",
        "message": "container <nginx> has an invalid image repo <nginx>, allowed repos are [\"openpolicyagent\"]",
        "name": "nginx-disallowed",
        "namespace": "default"
      }
    ]
    

    There is a violation for the Pod that you created before you created the constraint. If you see null instead of the preceding output, the Policy Controller or OPA Gatekeeper audit hasn't run since you created the constraint. By default, the audit runs every minute. Wait a minute and try again.
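The raw jq output shown above can be post-processed into a one-line-per-violation summary. The following is a sketch that runs against a saved sample; in a live cluster you would populate the variable from kubectl instead:

```shell
# Stand-in for live audit output. In a real cluster, populate it with:
#   violations=$(kubectl get constraint -o json | jq '[.items[].status.violations[]?]')
violations='[
  {"enforcementAction":"deny","kind":"Pod","name":"nginx-disallowed",
   "namespace":"default","message":"container <nginx> has an invalid image repo <nginx>"}
]'

# Print one line per violation: namespace/name (kind): message
echo "$violations" | jq -r '.[] | "\(.namespace)/\(.name) (\(.kind)): \(.message)"'
# → default/nginx-disallowed (Pod): container <nginx> has an invalid image repo <nginx>
```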

Creating a Security Command Center source

Security Command Center records findings against sources. Follow these steps to create a source for findings from Policy Controller and OPA Gatekeeper:

  1. In Cloud Shell, create a Google service account and store the service account name in an environment variable:

    SOURCES_ADMIN_SA=$(gcloud iam service-accounts create \
        securitycenter-sources-admin \
        --display-name "Security Command Center sources admin" \
        --format 'value(email)')
    

    You use this Google service account to administer Security Command Center sources.

  2. Define an environment variable that contains your Google Cloud organization ID:

    ORGANIZATION_ID=$(gcloud projects get-ancestors $GOOGLE_CLOUD_PROJECT \
        --format json | jq -r '.[] | select (.type=="organization") | .id')
    
  3. Grant the Security Center Sources Admin role to the sources admin Google service account at the organization level:

    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:$SOURCES_ADMIN_SA" \
        --role roles/securitycenter.sourcesAdmin
    

    This role provides the securitycenter.sources.* permissions that are required to administer sources.

  4. Grant the Service Usage Consumer role to the sources admin Google service account at the organization level:

    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:$SOURCES_ADMIN_SA" \
        --role roles/serviceusage.serviceUsageConsumer
    

    This role provides the serviceusage.services.use permission to use projects in the organization for quota and billing purposes.

  5. Grant yourself the Service Account Token Creator role for the sources admin Google service account:

    gcloud iam service-accounts add-iam-policy-binding \
        $SOURCES_ADMIN_SA \
        --member "user:$(gcloud config get-value account)" \
        --role roles/iam.serviceAccountTokenCreator
    

    This role allows your user identity to impersonate, or act as, the Google service account.

  6. Download the latest version of the gatekeeper-securitycenter command-line tool for your platform and make it executable:

    VERSION=$(curl -s https://api.github.com/repos/GoogleCloudPlatform/gatekeeper-securitycenter/releases/latest | jq -r '.tag_name')
    
    curl -Lo gatekeeper-securitycenter "https://github.com/GoogleCloudPlatform/gatekeeper-securitycenter/releases/download/${VERSION}/gatekeeper-securitycenter_$(uname -s)_$(uname -m)"
    
    chmod +x gatekeeper-securitycenter
    
  7. Use the gatekeeper-securitycenter tool to create a Security Command Center source for your organization. Capture the full source name in an environment variable.

    export SOURCE_NAME=$(./gatekeeper-securitycenter sources create \
        --organization $ORGANIZATION_ID \
        --display-name "Gatekeeper" \
        --description "Reports violations from Policy Controller audits" \
        --impersonate-service-account $SOURCES_ADMIN_SA | jq -r '.name')
    

    This command creates a source with the display name Gatekeeper. This display name is visible in Security Command Center. You can use a different display name and description.

    If you get a response with the error message The caller does not have permission, wait a minute, and then try again. This error can happen if the Identity and Access Management (IAM) bindings haven't taken effect yet.

Creating findings using the command line

You can create Security Command Center findings from Policy Controller and OPA Gatekeeper constraint violations using the gatekeeper-securitycenter tool as part of a build pipeline or scheduled task.
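For example, a scheduled task can be a plain cron entry that re-runs the sync. This is a sketch only: the installation path /opt/gatekeeper-securitycenter is hypothetical, and you substitute your own source name and findings editor service account:

```
# Hypothetical cron entry: sync findings every 10 minutes.
# Replace ORGANIZATION_ID, SOURCE_ID, and the service account email with your values.
*/10 * * * *  /opt/gatekeeper-securitycenter findings sync --source "organizations/ORGANIZATION_ID/sources/SOURCE_ID" --impersonate-service-account "gatekeeper-securitycenter@PROJECT_ID.iam.gserviceaccount.com"
```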

  1. In Cloud Shell, create a Google service account and store the service account name in an environment variable:

    FINDINGS_EDITOR_SA=$(gcloud iam service-accounts create \
        gatekeeper-securitycenter \
        --display-name "Security Command Center Gatekeeper findings editor" \
        --format 'value(email)')
    

    You use this Google service account to create findings for your Security Command Center source.

  2. Grant the Security Center Findings Editor role to the Google service account for the source:

    ./gatekeeper-securitycenter sources add-iam-policy-binding \
        --source $SOURCE_NAME \
        --member "serviceAccount:$FINDINGS_EDITOR_SA" \
        --role roles/securitycenter.findingsEditor \
        --impersonate-service-account $SOURCES_ADMIN_SA
    

    This role provides the securitycenter.findings.* permissions required to create and edit findings. When you run this command, you impersonate the sources admin Google service account.

  3. Grant the Service Usage Consumer role to the findings editor Google service account at the organization level:

    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:$FINDINGS_EDITOR_SA" \
        --role roles/serviceusage.serviceUsageConsumer
    
  4. Grant your user identity the Service Account Token Creator role for the findings editor Google service account:

    gcloud iam service-accounts add-iam-policy-binding \
        $FINDINGS_EDITOR_SA \
        --member "user:$(gcloud config get-value account)" \
        --role roles/iam.serviceAccountTokenCreator
    
  5. Print findings to the terminal instead of creating them in Security Command Center:

    ./gatekeeper-securitycenter findings sync --dry-run=true
    

    This command uses your current kubeconfig context by default. If you want to use a different kubeconfig file, use the --kubeconfig flag.

    The output looks similar to the following:

    [
      {
        "finding_id": "0be44bcf181ef03162eed40126a500a0",
        "finding": {
          "resource_name": "https://API_SERVER/api/v1/namespaces/default/pods/nginx-disallowed",
          "state": 1,
          "category": "K8sAllowedRepos",
          "external_uri": "https://API_SERVER/apis/constraints.gatekeeper.sh/v1beta1/k8sallowedrepos/repo-is-openpolicyagent",
          "source_properties": {
            "Cluster": "",
            "ConstraintName": "repo-is-openpolicyagent",
            "ConstraintSelfLink": "https://API_SERVER/apis/constraints.gatekeeper.sh/v1beta1/k8sallowedrepos/repo-is-openpolicyagent",
            "ConstraintTemplateSelfLink": "https://API_SERVER/apis/templates.gatekeeper.sh/v1beta1/constrainttemplates/k8sallowedrepos",
            "ConstraintTemplateUID": "e35b1c39-15f7-4a7a-afae-1637b44e81b2",
            "ConstraintUID": "b904dddb-0a23-4f4f-81bb-0103de838d3e",
            "Explanation": "container \u003cnginx\u003e has an invalid image repo \u003cnginx\u003e, allowed repos are [\"openpolicyagent\"]",
            "ProjectId": "",
            "ResourceAPIGroup": "",
            "ResourceAPIVersion": "v1",
            "ResourceKind": "Pod",
            "ResourceName": "nginx-disallowed",
            "ResourceNamespace": "default",
            "ResourceSelfLink": "https://API_SERVER/api/v1/namespaces/default/pods/nginx-disallowed",
            "ResourceStatusSelfLink": "",
            "ResourceUID": "8ddd752f-e620-43ea-b966-4ae2ae507c67",
            "ScannerName": "GATEKEEPER"
          },
          "event_time": {
            "seconds": 1606287680
          }
        }
      }
    ]
    

    In the preceding output, API_SERVER is the IP address or hostname of your GKE cluster API server.

    To learn what the fields mean, see the Security Command Center API Finding resource page.

  6. Create findings in Security Command Center:

    ./gatekeeper-securitycenter findings sync \
        --source $SOURCE_NAME \
        --impersonate-service-account $FINDINGS_EDITOR_SA
    

    When you run this command, you impersonate the findings editor Google service account.

    The output includes create finding, which means that the gatekeeper-securitycenter command-line tool created a finding. The findingID attribute of that output contains the full name of the finding in the format:

    organizations/ORGANIZATION_ID/sources/SOURCE_ID/findings/FINDING_ID
    

    In this output:

    • ORGANIZATION_ID is your Google Cloud organization ID
    • SOURCE_ID is your Security Command Center source ID
    • FINDING_ID is the finding ID

    To view the finding, see the Viewing findings section.
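If a script needs the individual IDs from that full finding name, plain shell string handling is enough. This sketch uses a made-up finding name:

```shell
# Hypothetical full finding name, in the format shown above.
finding_name="organizations/123456789012/sources/5678/findings/0be44bcf181ef03162eed40126a500a0"

# Split the name on "/" to recover each component.
organization_id=$(echo "$finding_name" | cut -d/ -f2)
source_id=$(echo "$finding_name" | cut -d/ -f4)
finding_id=$(basename "$finding_name")

echo "org=$organization_id source=$source_id finding=$finding_id"
# → org=123456789012 source=5678 finding=0be44bcf181ef03162eed40126a500a0
```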

Creating findings using a Kubernetes controller

You can deploy gatekeeper-securitycenter as a controller in your GKE cluster. This controller periodically checks for constraint violations and creates a finding in Security Command Center for each violation.

If the resource becomes compliant, the controller sets the state of the existing finding to INACTIVE.

  1. In Cloud Shell, create a Workload Identity IAM policy binding to allow the gatekeeper-securitycenter-controller Kubernetes service account in the gatekeeper-securitycenter namespace to impersonate the findings editor Google service account:

    gcloud iam service-accounts add-iam-policy-binding \
        $FINDINGS_EDITOR_SA \
        --member "serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[gatekeeper-securitycenter/gatekeeper-securitycenter-controller]" \
        --role roles/iam.workloadIdentityUser
    

    You create the Kubernetes service account and namespace when you deploy the controller.

  2. Fetch the latest version of the kpt package for the gatekeeper-securitycenter controller:

    VERSION=$(curl -s https://api.github.com/repos/GoogleCloudPlatform/gatekeeper-securitycenter/releases/latest | jq -r '.tag_name')
    
    kpt pkg get https://github.com/GoogleCloudPlatform/gatekeeper-securitycenter.git/manifests@$VERSION manifests
    

    This command creates a directory called manifests that contains the resource manifest files for the controller.

    Kpt is a command-line tool that lets you manage, manipulate, customize, and apply Kubernetes resources. You use kpt in this tutorial to customize the resource manifests for your environment.

  3. Set the Security Command Center source name:

    kpt cfg set manifests source $SOURCE_NAME
    
  4. Set the cluster name:

    kpt cfg set manifests cluster $(kubectl config current-context)
    

    The controller adds the cluster name as a source property to the findings that it creates in Security Command Center. If you have multiple clusters, this name helps you find which cluster a finding belongs to.

  5. To bind the controller Kubernetes service account to the findings editor Google service account, add the Workload Identity annotation:

    kpt cfg annotate manifests \
        --kind ServiceAccount \
        --name gatekeeper-securitycenter-controller \
        --namespace gatekeeper-securitycenter \
        --kv iam.gke.io/gcp-service-account=$FINDINGS_EDITOR_SA
    
  6. Apply the controller resources to your cluster:

    kpt live apply manifests --reconcile-timeout 3m --output table
    

    This command creates the following resources in your cluster:

    • A namespace called gatekeeper-securitycenter.
    • A service account called gatekeeper-securitycenter-controller.
    • A cluster role that provides get and list access to all resources in all API groups. This role is required because the controller retrieves the resources that caused policy violations.
    • A cluster role binding that grants the cluster role to the service account.
    • A deployment called gatekeeper-securitycenter-controller-manager.
    • A config map called gatekeeper-securitycenter-config that contains configuration values for the deployment.

    The command also waits for the resources to be ready.

  7. Verify that the controller can read constraint violations and communicate with the Security Command Center API by following the controller log:

    kubectl logs deployment/gatekeeper-securitycenter-controller-manager \
        --namespace gatekeeper-securitycenter --follow
    

    You see log entries with the message syncing findings.

    To stop following the log, press Ctrl+C.

  8. To verify that the controller can create new findings, create a policy and a resource that violates the policy. The policy requires Pods to refer to container images by digest.

    Go to the OPA Gatekeeper library repository directory:

    cd ~/gatekeeper-library
    
  9. Create a Pod called opa-disallowed in the default namespace:

    kubectl apply --namespace default -f \
        library/general/imagedigests/samples/container-image-must-have-digest/example_disallowed.yaml
    

    The following is the manifest that you apply to create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: opa-disallowed
    spec:
      containers:
        - name: opa
          image: openpolicyagent/opa:0.9.2
          args:
            - "run"
            - "--server"
            - "--addr=localhost:8080"
    

    This Pod specification refers to a container image by tag instead of by digest.

  10. Create a constraint template called k8simagedigests:

    kubectl apply -f library/general/imagedigests/template.yaml
    

    The following is the constraint template manifest:

    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: k8simagedigests
    spec:
      crd:
        spec:
          names:
            kind: K8sImageDigests
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8simagedigests
    
            violation[{"msg": msg}] {
              container := input.review.object.spec.containers[_]
              satisfied := [re_match("@[a-z0-9]+([+._-][a-z0-9]+)*:[a-zA-Z0-9=_-]+", container.image)]
              not all(satisfied)
              msg := sprintf("container <%v> uses an image without a digest <%v>", [container.name, container.image])
            }
    
            violation[{"msg": msg}] {
              container := input.review.object.spec.initContainers[_]
              satisfied := [re_match("@[a-z0-9]+([+._-][a-z0-9]+)*:[a-zA-Z0-9=_-]+", container.image)]
              not all(satisfied)
              msg := sprintf("initContainer <%v> uses an image without a digest <%v>", [container.name, container.image])
            }
    
  11. Create a constraint called container-image-must-have-digest:

    kubectl apply -f library/general/imagedigests/samples/container-image-must-have-digest/constraint.yaml
    

    The following is the constraint manifest:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sImageDigests
    metadata:
      name: container-image-must-have-digest
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]
        namespaces:
          - "default"
    

    This constraint only applies to the default namespace.

  12. Follow the controller log:

    kubectl logs deployment/gatekeeper-securitycenter-controller-manager \
        --namespace gatekeeper-securitycenter --follow
    

    After a few minutes, you see a log entry with the message create finding. This message means that the gatekeeper-securitycenter controller created a finding.

    To stop following the log, press Ctrl+C.

  13. To verify that the controller can set the finding state to INACTIVE when a violation is no longer reported by Policy Controller or OPA Gatekeeper, delete the Pod called opa-disallowed in the default namespace:

    kubectl delete pod opa-disallowed --namespace default
    
  14. Follow the controller log:

    kubectl logs deployment/gatekeeper-securitycenter-controller-manager \
        --namespace gatekeeper-securitycenter --follow
    

    After a few minutes, you see a log entry with the message updating finding state and the attribute "state":"INACTIVE". This message means that the controller set the finding state to inactive.

    To stop following the log, press Ctrl+C.

Viewing findings

You can view Security Command Center findings in the terminal and in the Google Cloud Console.

  1. In Cloud Shell, use the gcloud tool to list findings for your organization and source:

    gcloud scc findings list $ORGANIZATION_ID \
        --source $(basename $SOURCE_NAME) \
        --format json
    

    You use the basename command to get the numeric source ID from the full source name.

    The output looks similar to the following:

    [
      {
        "finding": {
          "category": "K8sAllowedRepos",
          "createTime": "2020-11-25T06:58:47.213Z",
          "eventTime": "2020-11-25T06:58:20Z",
          "externalUri": "https://API_SERVER/apis/constraints.gatekeeper.sh/v1beta1/k8sallowedrepos/repo-is-openpolicyagent",
          "name": "organizations/ORGANIZATION_ID/sources/SOURCE_ID/findings/FINDING_ID",
          "parent": "organizations/ORGANIZATION_ID/sources/SOURCE_ID",
          "resourceName": "https://API_SERVER/api/v1/namespaces/default/pods/nginx-disallowed",
          "securityMarks": {
            "name": "organizations/ORGANIZATION_ID/sources/SOURCE_ID/findings/FINDING_ID/securityMarks"
          },
          "sourceProperties": {
            "Cluster": "cluster-name",
            "ConstraintName": "repo-is-openpolicyagent",
            "ConstraintSelfLink": "https://API_SERVER/apis/constraints.gatekeeper.sh/v1beta1/k8sallowedrepos/repo-is-openpolicyagent",
            "ConstraintTemplateSelfLink": "https://API_SERVER/apis/templates.gatekeeper.sh/v1beta1/constrainttemplates/k8sallowedrepos",
            "ConstraintTemplateUID": "e35b1c39-15f7-4a7a-afae-1637b44e81b2",
            "ConstraintUID": "b904dddb-0a23-4f4f-81bb-0103de838d3e",
            "Explanation": "container <nginx> has an invalid image repo <nginx>, allowed repos are [\"openpolicyagent\"]",
            "ProjectId": "",
            "ResourceAPIGroup": "",
            "ResourceAPIVersion": "v1",
            "ResourceKind": "Pod",
            "ResourceName": "nginx-disallowed",
            "ResourceNamespace": "default",
            "ResourceSelfLink": "https://API_SERVER/api/v1/namespaces/default/pods/nginx-disallowed",
            "ResourceStatusSelfLink": "",
            "ResourceUID": "8ddd752f-e620-43ea-b966-4ae2ae507c67",
            "ScannerName": "GATEKEEPER"
          },
          "state": "ACTIVE"
        },
        "resource": {
          "name": "https://API_SERVER/api/v1/namespaces/default/pods/nginx-disallowed"
        }
      },
      {
        "finding": {
          "category": "K8sImageDigests",
          [...]
      }
    ]
    

    In this output:

    • API_SERVER is the IP address or hostname of your GKE cluster API server
    • ORGANIZATION_ID is your Google Cloud organization ID
    • SOURCE_ID is your Security Command Center source ID
    • FINDING_ID is the finding ID

    To learn what the finding attributes mean, see the Finding resource in the Security Command Center API.

  2. To view the findings in the Cloud Console, go to the Findings tab of Security Command Center.

    Go to Findings

  3. Select your organization and click Select.

  4. Click View by Source type.

  5. In the Source type list, click Gatekeeper. If Gatekeeper isn't in the Source type list, clear any filters in the list of findings.

  6. In the list of findings, click a finding to see the finding attributes and source properties.

    If a resource no longer causes a violation because of a change to the resource or the policy, the controller sets the finding state to inactive. It can take a few minutes for this change to be visible in Security Command Center.

    By default, Security Command Center shows active findings. To see inactive findings, click Show Only Active Findings to turn off that setting.
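You can also slice the JSON from the gcloud command in this section with jq, in the same way as in earlier steps. This sketch runs against a saved sample; replace the sample variable with the real gcloud scc findings list output:

```shell
# Stand-in for `gcloud scc findings list ... --format json` output.
findings='[
  {"finding": {"category": "K8sAllowedRepos", "state": "ACTIVE"}},
  {"finding": {"category": "K8sImageDigests", "state": "INACTIVE"}}
]'

# Print the category of each active finding.
echo "$findings" | jq -r '.[] | .finding | select(.state == "ACTIVE") | .category'
# → K8sAllowedRepos
```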

Troubleshooting

  • If Policy Controller or OPA Gatekeeper don't report violations in the status field of the constraint objects, use Cloud Shell to view logs of the audit controller:

    kubectl logs deployment/gatekeeper-audit --namespace gatekeeper-system
    
  • If the gatekeeper-securitycenter controller doesn't create findings in Security Command Center, you can view logs of the controller manager:

    kubectl logs deployment/gatekeeper-securitycenter-controller-manager \
        --namespace gatekeeper-securitycenter
    
  • If the gatekeeper-securitycenter command-line tool reports errors, you can increase the verbosity of the log output by setting the DEBUG environment variable to true before running the gatekeeper-securitycenter command:

    export DEBUG=true
    

If you run into other problems with this tutorial, we recommend that you review the following documents:

Automating the setup

For future deployments, you can automate the steps in this tutorial by following the instructions in the gatekeeper-securitycenter GitHub repository.

Cleaning up

To avoid incurring further charges to your Google Cloud account for the resources used in this tutorial, delete the individual resources.

Delete the individual resources

  1. In Cloud Shell, delete the GKE cluster:

    gcloud container clusters delete gatekeeper-securitycenter-tutorial \
        --zone us-central1-f --async --quiet
    
  2. Delete the gatekeeper-library files:

    rm -rf ~/gatekeeper-library
    
  3. Delete the IAM policy bindings:

    GOOGLE_CLOUD_PROJECT=$(gcloud config get-value core/project)
    
    ORGANIZATION_ID=$(gcloud projects get-ancestors $GOOGLE_CLOUD_PROJECT \
        --format json | jq -r '.[] | select (.type=="organization") | .id')
    
    SOURCE_NAME=$(./gatekeeper-securitycenter sources list \
        --organization "$ORGANIZATION_ID" \
        --impersonate-service-account "securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        | jq -r ".[] | select (.display_name==\"Gatekeeper\") | .name")
    
    ./gatekeeper-securitycenter sources remove-iam-policy-binding \
        --source $SOURCE_NAME \
        --member "serviceAccount:gatekeeper-securitycenter@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role roles/securitycenter.findingsEditor \
        --impersonate-service-account securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
    
    gcloud iam service-accounts remove-iam-policy-binding \
        gatekeeper-securitycenter@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com \
        --member "serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[gatekeeper-securitycenter/gatekeeper-securitycenter-controller]" \
        --role roles/iam.workloadIdentityUser
    
    gcloud iam service-accounts remove-iam-policy-binding \
        gatekeeper-securitycenter@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com \
        --member "user:$(gcloud config get-value account)" \
        --role roles/iam.serviceAccountTokenCreator
    
    gcloud organizations remove-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:gatekeeper-securitycenter@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role roles/serviceusage.serviceUsageConsumer
    
    gcloud iam service-accounts remove-iam-policy-binding \
        securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com \
        --member "user:$(gcloud config get-value account)" \
        --role roles/iam.serviceAccountTokenCreator
    
    gcloud organizations remove-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role roles/serviceusage.serviceUsageConsumer
    
    gcloud organizations remove-iam-policy-binding $ORGANIZATION_ID \
        --member "serviceAccount:securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role roles/securitycenter.sourcesAdmin
    
  4. Delete the Google service accounts:

    gcloud iam service-accounts delete --quiet \
        gatekeeper-securitycenter@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
    
    gcloud iam service-accounts delete --quiet \
        securitycenter-sources-admin@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
    

What's next