Use the constraint template library

This page shows you how to define Policy Controller constraints by using the pre-existing constraint templates provided by Google.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce, and who use templating for declarative configuration. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.

Policy Controller lets you enforce policy for a Kubernetes cluster by defining one or more constraint objects. After a constraint is installed, requests to the API server are checked against the constraint and are rejected if they don't comply. Pre-existing non-compliant resources are reported at audit time.

Every constraint is backed by a constraint template that defines the schema and logic of the constraint. Constraint templates can be sourced from Google and third parties, or you can write your own. For more information about creating new templates, see Write a constraint template.

Before you begin

Examine the constraint template library

When you define a constraint, you specify the constraint template that it extends. A library of common constraint templates developed by Google is installed by default, and many organizations don't need to create custom constraint templates directly in Rego. Constraint templates provided by Google have the label configmanagement.gke.io/configmanagement.

To list the constraint templates in the library, use the following command:

kubectl get constrainttemplates \
    -l="configmanagement.gke.io/configmanagement=config-management"

To describe a constraint template and check its required parameters, use the following command:

kubectl describe constrainttemplate CONSTRAINT_TEMPLATE_NAME
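
For example, assuming the library is installed, the following command describes the k8srequiredlabels template that is used later on this page:

kubectl describe constrainttemplate k8srequiredlabels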

You can also view all constraint templates in the library.

Define a constraint

You define a constraint by using YAML, and you don't need to understand or write Rego. Instead, a constraint invokes a constraint template and provides it with parameters specific to the constraint.

If you are using Config Sync with a hierarchical repository, we recommend that you create your constraints in the cluster/ directory.

Constraints have the following fields:

  • The lowercased kind matches the name of a constraint template.
  • The metadata.name is the name of the constraint.
  • The match field defines which objects the constraint applies to. All of the specified conditions must be met before an object is in scope for a constraint. match conditions are defined by the following sub-fields (see the sketch after this list):
    • kinds are the kinds of resources the constraint applies to, determined by two fields: apiGroups is a list of Kubernetes API groups to match, and kinds is a list of kinds to match. "*" matches everything. The kinds condition is satisfied if at least one apiGroups entry and one kinds entry match.
    • scope accepts *, Cluster, or Namespaced, which determines whether cluster-scoped or namespace-scoped resources are selected (defaults to *).
    • namespaces is a list of namespace names the object can belong to. The object must belong to at least one of these namespaces. Namespace resources are treated as if they belong to themselves.
    • excludedNamespaces is a list of namespaces that the object cannot belong to.
    • labelSelector is a Kubernetes label selector that the object must satisfy.
    • namespaceSelector is a label selector on the namespace the object belongs to. If the namespace does not satisfy the selector, the object won't match. Namespace resources are treated as if they belong to themselves.
  • The parameters field defines the arguments for the constraint, based on what the constraint template expects.
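
The following sketch is illustrative only (the constraint name, namespaces, and label values are hypothetical); it shows how the match sub-fields combine to scope a K8sRequiredLabels constraint to Deployments:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels              # lowercased kind matches the template name
metadata:
  name: deployments-must-have-owner  # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: ["apps"]          # both the API group and the kind must match
        kinds: ["Deployment"]
    scope: "Namespaced"              # only namespace-scoped resources
    namespaces: ["prod", "staging"]  # hypothetical namespaces
    excludedNamespaces: ["kube-system"]
    labelSelector:                   # label the object itself must carry
      matchLabels:
        team: payments
  parameters:
    labels:
      - key: "owner"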

The following constraint, called ns-must-have-geo, invokes a constraint template called K8sRequiredLabels, which is included in the constraint template library provided by Google. The constraint defines parameters that the constraint template uses to evaluate whether namespaces have the geo label set to some value.

# ns-must-have-geo.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"

To create the constraint, use kubectl apply -f:

kubectl apply -f ns-must-have-geo.yaml

Audit a constraint

If the constraint is configured and installed correctly, its status.byPod[].enforced field is set to true, regardless of whether the constraint is configured to enforce or only audit violations.
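
For example, assuming the ns-must-have-geo constraint from the previous section, you can check this field with a command like the following:

kubectl get k8srequiredlabels ns-must-have-geo \
    -o jsonpath='{.status.byPod[*].enforced}'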

Constraints are enforced by default, and a violation of a constraint prevents a given cluster operation. You can set a constraint's spec.enforcementAction to dryrun to report violations in the status.violations field without preventing the operation.
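
For example, the following variant of the ns-must-have-geo constraint reports violations of the geo label requirement without blocking requests:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  enforcementAction: dryrun   # report violations in status.violations instead of rejecting requests
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"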

To learn more about auditing, see Audit using constraints.

Caveats when syncing constraints

If you store your constraints in a centralized source of truth, such as a Git repository, and sync them with Config Sync or another GitOps-style tool, keep the following caveats in mind.

Eventual consistency

You can commit constraints to a source of truth like a Git repository, and can limit their effects using ClusterSelectors or NamespaceSelectors. Because syncing is eventually consistent, keep the following caveats in mind:

  • If a cluster operation triggers a constraint whose NamespaceSelector refers to a namespace that hasn't been synced, the constraint is enforced and the operation is prevented. In other words, a missing namespace "fails closed."
  • If you change the labels of a namespace, the cache may contain outdated data for a brief time.

Minimize the need to rename a namespace or change its labels, and test constraints that impact a renamed or relabeled namespace to ensure they work as expected.

Configure Policy Controller for referential constraints

Before you can enable referential constraints, you must create a configuration that tells Policy Controller what kinds of objects to watch, such as namespaces.

Save the following YAML manifest to a file, and apply it with kubectl. The manifest configures Policy Controller to watch namespaces and Ingresses. Create an entry with group, version, and kind under spec.sync.syncOnly, with the values for each type of object you want to watch.

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"

Enable referential constraints

A referential constraint references another object in its definition. For example, you could create a constraint that requires Ingress objects in a cluster to have unique hostnames. The constraint is referential if its constraint template contains the string data.inventory in its Rego.
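
For example, the constraint template library includes a K8sUniqueIngressHost template whose Rego consults data.inventory. Assuming that template is installed and Ingresses are synced as described in the previous section, a constraint similar to the following sketch would require Ingress hostnames to be unique across the cluster:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sUniqueIngressHost
metadata:
  name: unique-ingress-host
spec:
  match:
    kinds:
      - apiGroups: ["networking.k8s.io"]
        kinds: ["Ingress"]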

Referential constraints are enabled by default if you install Policy Controller using the Google Cloud console. If you install Policy Controller using the Google Cloud CLI, you can choose whether to enable referential constraints when you Install Policy Controller. Referential constraints are only guaranteed to be eventually consistent, and this creates risks:

  • On an overloaded API server, the contents of Policy Controller's cache may become stale, causing a referential constraint to "fail open", meaning that the enforcement action appears to be working when it isn't. For example, you might be able to create Ingresses with duplicate hostnames too quickly for the admission controller to detect the duplicates.

  • The order in which constraints are installed and the order in which the cache is updated are both random.

You can update an existing cluster to allow referential constraints.

Console

To enable referential constraints, complete the following steps:

  1. In the Google Cloud console, go to the GKE Enterprise Policy page under the Posture Management section.

    Go to Policy

  2. Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
  3. Expand the Edit Policy Controller configuration menu.
  4. Select the Enable Constraint Templates that reference objects other than the object currently being evaluated checkbox.
  5. Select Save changes.

gcloud Policy Controller

To enable support for referential constraints, run the following command:

gcloud container fleet policycontroller update \
    --memberships=MEMBERSHIP_NAME \
    --referential-rules

Replace MEMBERSHIP_NAME with the membership name of the registered cluster to enable referential rules on. You can specify multiple memberships separated by a comma.

gcloud ConfigManagement

To enable support for referential constraints, set the policyController.referentialRulesEnabled field to true in your config-management.yaml file:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    referentialRulesEnabled: true

Disable referential constraints

When you disable referential constraints, any constraint templates that use referential rules are removed from the cluster, along with any constraints that use those templates.

Console

Referential constraints are enabled by default when you install Policy Controller with the Google Cloud console. To disable referential constraints, complete the following steps:

  1. In the Google Cloud console, go to the GKE Enterprise Policy page under the Posture Management section.

    Go to Policy

  2. Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
  3. Expand the Edit Policy Controller configuration menu.
  4. Clear the Enable Constraint Templates that reference objects other than the object currently being evaluated checkbox.
  5. Select Save changes.

gcloud Policy Controller

To disable support for referential constraints, run the following command:

gcloud container fleet policycontroller update \
    --memberships=MEMBERSHIP_NAME \
    --no-referential-rules

Replace MEMBERSHIP_NAME with the membership name of the registered cluster to disable referential rules on. You can specify multiple memberships separated by a comma.

gcloud ConfigManagement

To disable referential constraints on a cluster, set policyController.referentialRulesEnabled to false in your config-management.yaml file:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    referentialRulesEnabled: false

List all constraints

To list all constraints installed on a cluster, use the following command:

kubectl get constraint

You can also see an overview of your applied constraints in the Google Cloud console. For more information, see Policy Controller metrics.

Remove a constraint

To find all constraints that use a constraint template, use the following command to list all objects with the same kind as the constraint template's metadata.name:

kubectl get CONSTRAINT_TEMPLATE_NAME
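
For example, to list all constraints that use the K8sRequiredLabels template from earlier on this page:

kubectl get k8srequiredlabels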

To remove a constraint, specify its kind and name:

kubectl delete CONSTRAINT_TEMPLATE_NAME CONSTRAINT_NAME
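
For example, to remove the ns-must-have-geo constraint created earlier on this page:

kubectl delete k8srequiredlabels ns-must-have-geo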

When you remove a constraint, it stops being enforced as soon as the API server marks the constraint as deleted.

Remove all constraint templates

Console

To disable the constraint template library, complete the following steps:

  1. In the Google Cloud console, go to the GKE Enterprise Policy page under the Posture Management section.

    Go to Policy

  2. Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
  3. In the Add/Edit policy bundles menu, toggle the template library and all policy bundles off.
  4. Select Save changes.

gcloud Policy Controller

To disable the constraint template library, run the following command:

gcloud container fleet policycontroller content templates disable \
    --memberships=MEMBERSHIP_NAME

Replace MEMBERSHIP_NAME with the membership name of the registered cluster to disable the constraint template library on. You can specify multiple memberships separated by a comma.

gcloud ConfigManagement

Set spec.policyController.templateLibraryInstalled to false in your config-management.yaml file. This prevents Policy Controller from automatically reinstalling the constraint template library.
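
For example, following the same pattern as the earlier config-management.yaml examples:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    templateLibraryInstalled: false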

To remove all constraint templates and all constraints, use the following command:

kubectl delete constrainttemplate --all

Restore the constraint template library

Console

To enable the constraint template library, complete the following steps:

  1. In the Google Cloud console, go to the GKE Enterprise Policy page under the Posture Management section.

    Go to Policy

  2. Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
  3. In the Add/Edit policy bundles menu, toggle the template library on. You can also enable any or all of the policy bundles.
  4. Select Save changes.

gcloud Policy Controller

To restore the constraint template library, run the following command:

gcloud container fleet policycontroller content templates enable \
    --memberships=MEMBERSHIP_NAME

Replace MEMBERSHIP_NAME with the membership name of the registered cluster to enable the constraint template library on. You can specify multiple memberships separated by a comma.

gcloud ConfigManagement

If you disabled the constraint template library or uninstalled all constraint templates, you can restore it by setting spec.policyController.templateLibraryInstalled to true in the Policy Controller config.

To restart the Operator Pod, use the following command:

kubectl delete pod -n config-management-system -l k8s-app=config-management-operator

What's next