Creating constraints and constraint templates

This topic shows how to define Anthos Policy Controller constraints and how to create custom constraint templates.

Overview

Policy Controller policies are described using the OPA Constraint Framework, and written in Rego. A policy can evaluate any field of a Kubernetes object.

Writing policies using Rego is a specialized skill. For this reason, a library of common constraint templates is installed by default. Most users can invoke these constraint templates when creating constraints. If you have specialized needs, you can create your own constraint templates.

Constraint templates allow you to separate a policy's logic from its specific requirements, for reuse and delegation. You can create constraints using constraint templates developed by third parties, such as open source projects, software vendors, or regulatory experts.

Using the constraint template library

When you define a constraint, you specify the constraint template it extends. A library of common constraint templates developed by Google is installed by default, and many organizations do not need to create custom constraint templates directly in Rego. Constraint templates provided by Google have the configmanagement.gke.io/configmanagement label set to config-management. To list them, use the following command:

kubectl get constrainttemplates \
  -l="configmanagement.gke.io/configmanagement=config-management"

To describe a constraint template and check its required parameters:

kubectl describe constrainttemplate [CONSTRAINT-TEMPLATE-NAME]
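
For example, to inspect the k8srequiredlabels template from the library, which is used later in this topic:

kubectl describe constrainttemplate k8srequiredlabels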

You can view all constraint templates in the library.

Defining a constraint

You define a constraint using YAML, and you do not need to understand or write Rego. Instead, a constraint invokes a constraint template and provides it with parameters specific to the constraint.

  • The lowercased kind matches the metadata.name of a constraint template.
  • The metadata.name is the name of the constraint.
  • The spec.match field defines which objects the constraint applies to.
  • The spec.parameters field defines the arguments for the constraint, based on what the constraint template expects.

The following constraint, called ns-must-have-geo, invokes a constraint template called K8sRequiredLabels, which is included in the constraint template library provided by Google. The constraint defines parameters that the constraint template uses to evaluate whether Namespaces have the geo label set to some value.

# ns-must-have-geo.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"

To create the constraint, apply it using kubectl apply -f:

kubectl apply -f ns-must-have-geo.yaml
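
To verify that the constraint was created, describe it by its kind and name; the same command is shown with example output later in this topic:

kubectl describe K8sRequiredLabels ns-must-have-geo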

Configuring Gatekeeper for auditing, testing, or referential constraints

Before you can audit or test a constraint, or before you can enable referential constraints, you must create a Config that tells Gatekeeper what kinds of objects to watch, such as Namespaces.

Save the following YAML manifest to a file, and apply it to the cluster using kubectl apply -f [FILENAME]. The manifest configures Gatekeeper to watch Namespaces and Ingresses. Create an entry with group, version, and kind under spec.sync.syncOnly, with the values for each type of object you want to watch.

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"

Auditing a constraint

If the constraint is configured and installed correctly, its status.byPod[].enforced field is set to true, whether the constraint is configured to enforce or only to audit violations.

If a constraint is violated, the violation is included in the constraint's status.violations field. For example, the following status field includes four violations:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
status:
  auditTimestamp: "2019-05-11T01:46:13Z"
  byPod:
  - enforced: true
    id: gatekeeper-controller-manager-0
  violations:
  - enforcementAction: deny
    kind: Namespace
    message: 'you must provide labels: {"geo"}'
    name: default
  - enforcementAction: deny
    kind: Namespace
    message: 'you must provide labels: {"geo"}'
    name: gatekeeper-system
  - enforcementAction: deny
    kind: Namespace
    message: 'you must provide labels: {"geo"}'
    name: kube-public
  - enforcementAction: deny
    kind: Namespace
    message: 'you must provide labels: {"geo"}'
    name: kube-system

Testing a constraint

Constraints are enforced by default, and a violation of a constraint prevents the offending cluster operation. You can set a constraint's spec.enforcementAction to dryrun to report violations in the status.violations field without preventing the operation. Testing constraints this way helps prevent disruptions caused by an incorrectly configured constraint.

For example:

# ns-must-have-geo.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
  enforcementAction: dryrun

To see violations of a given constraint, whether it is enforced or not, view its status field.
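
For example, assuming the ns-must-have-geo constraint from this topic, you can print its status with:

kubectl get k8srequiredlabels ns-must-have-geo -o yaml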

Adding a custom message to a constraint

You can design a constraint template that allows the constraint to set a custom message when a violation occurs. See the included k8srequiredlabels constraint template for a Rego example.

For example, the following constraint specifies that each Namespace must have a geo label, and also provides a custom message:

# ns-must-have-geo-custom-message.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    message: "All namespaces must have a `geo` label set to your team's location"
    labels:
      - key: geo

If the constraint is installed correctly, its status.byPod[].enforced field is set to true.

kubectl describe K8sRequiredLabels ns-must-have-geo
Name:         ns-must-have-geo
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sRequiredLabels","metadata":{"annotations":{},"name":"ns-must-have-geo"},"spec...
API Version:  constraints.gatekeeper.sh/v1beta1
Kind:         K8sRequiredLabels
Metadata:
  Creation Timestamp:  2019-09-20T22:46:55Z
  Finalizers:
    finalizers.gatekeeper.sh/constraint
  Generation:        5
  Resource Version:  70678
  Self Link:         /apis/constraints.gatekeeper.sh/v1beta1/k8srequiredlabels/ns-must-have-geo
  UID:               8bed4803-dbf8-11e9-9217-42010a8001d3
Spec:
  Match:
    Kinds:
      API Groups:

      Kinds:
        Namespace
  Parameters:
    Labels:
      Key:    geo
    Message:  All namespaces must have a `geo` label set to your team's location
Status:
  Audit Timestamp:  2019-09-20T22:48:07Z
  By Pod:
    Enforced:  true
    Id:        gatekeeper-controller-manager-0
Events:        <none>

Caveats when syncing constraints

Keep the following caveats in mind when syncing constraints.

Eventual consistency

You can commit constraints to the repo, and can limit their effects using ClusterSelectors or NamespaceSelectors. Because syncing is eventually consistent, keep the following caveats in mind:

  • If a cluster operation triggers a constraint whose NamespaceSelector refers to a Namespace that hasn't been synced, the constraint is enforced and the operation is prevented. In other words, a missing Namespace "fails closed."
  • If you change the labels of a Namespace, the cache may contain outdated data for a brief time.

Minimize the need to rename a Namespace or change its labels, and test constraints that impact a renamed or relabeled Namespace to ensure they work as expected.

Enabling referential constraints

A referential constraint references another object in its definition. For example, you could create a constraint that requires Ingress objects in a cluster to have unique hostnames. The constraint is referential if its constraint template contains the string data.inventory in its Rego.
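
As an illustration only, not the library's exact template, a Rego rule that enforces unique Ingress hostnames might look up previously synced objects through data.inventory like this:

violation[{"msg": msg}] {
  input.review.kind.kind == "Ingress"
  host := input.review.object.spec.rules[_].host
  # data.inventory holds the objects Gatekeeper has been configured to watch
  other := data.inventory.namespace[other_ns][api_version]["Ingress"][other_name]
  other.spec.rules[_].host == host
  # skip the object under review itself
  [other_ns, other_name] != [input.review.object.metadata.namespace, input.review.object.metadata.name]
  msg := sprintf("Ingress host %v is already used by %v/%v", [host, other_ns, other_name])
}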

Referential constraints are disabled by default in Policy Controller, though they are enabled by default in Gatekeeper (the open source project). Referential constraints are only guaranteed to be eventually consistent, and this creates risks:

  • On an overloaded API server, the contents of Gatekeeper's cache may become stale, causing a referential constraint to "fail open", meaning that the enforcement action appears to be working when it isn't. For example, you can create Ingresses with duplicate hostnames too quickly to allow the admission controller to detect the duplicates.
  • The order in which constraints are installed and the order in which the cache is updated are both random.

If you understand these risks and still want to enable support for referential constraints, set spec.policyController.referentialRulesEnabled to true in the Operator object:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    referentialRulesEnabled: true
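
After editing the manifest, apply it to the cluster; for example, assuming you saved it as config-management.yaml (the filename is arbitrary):

kubectl apply -f config-management.yaml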

Removing a constraint

To find all constraints using a constraint template, list all objects with the same kind as the constraint template's metadata.name:

kubectl get [CONSTRAINT-TEMPLATE-NAME]

To remove a constraint, specify its kind and name:

kubectl delete [CONSTRAINT-TEMPLATE-NAME] [CONSTRAINT-NAME]
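
For example, to remove the ns-must-have-geo constraint created earlier in this topic:

kubectl delete k8srequiredlabels ns-must-have-geo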

If you also want to delete the constraint template the constraint was using, make a note of the constraint's kind first; the constraint template's name is the lowercase form of that kind.

When you remove a constraint, it stops being enforced as soon as the API server marks the constraint as deleted.

Defining a constraint template

A constraint template allows you to broadly define how a constraint works without defining its specific values. To use a computer-science analogy, constraint templates are similar to functions, and constraints are similar to function calls.

A constraint template has several important fields:

  • Its kind field is ConstraintTemplate.
  • Its spec.crd field defines template values for constraints derived from the constraint template, including the constraint kind and the schema for the constraint's parameters. This provides type safety and helps prevent misconfiguration of derived constraints.
  • The kind defined in its spec.crd.spec.names field determines the kind of constraints that are created using the template; the lowercased form of that kind matches the constraint template's metadata.name.
  • Its targets field defines the policy itself, expressed in Rego. This includes the definition of the policy and what happens if it is violated.

For example, the k8srequiredlabels constraint template in the Gatekeeper project repository, which is included in the constraint template library, defines what happens if an object doesn't have a label. The type of object and the label it must have are specified in the parameters of constraints that use this constraint template.

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }

Before you can write constraints using a constraint template, the constraint template must be installed on the cluster. Save the template to a file and apply it to the cluster using kubectl apply -f [MANIFEST].
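
For example, assuming you saved the k8srequiredlabels template above as k8srequiredlabels-template.yaml (the filename is arbitrary):

kubectl apply -f k8srequiredlabels-template.yaml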

Until you use a constraint template in a constraint, it has no effect.

You can read more about the specification and format for constraint templates.

Removing a constraint template

First, verify that no constraints you want to preserve are using the constraint template:

kubectl get [TEMPLATE-NAME]

In case of a naming conflict between the constraint template's name and a different object in the cluster, you can use the following command instead:

kubectl get [TEMPLATE-NAME].constraints.gatekeeper.sh

Remove the constraint template:

kubectl delete constrainttemplate [CONSTRAINT-TEMPLATE-NAME]
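
For example, to remove the k8srequiredlabels template (if the template came from the library installed by the Operator, the Operator reinstalls it automatically unless you disable the library as described in the next section):

kubectl delete constrainttemplate k8srequiredlabels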

When you remove a constraint template, you can no longer create constraints that reference it.

Removing all constraint templates

In the Operator config, set spec.policyController.templateLibraryInstalled to false. This prevents the Operator from automatically reinstalling the constraint template library.
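
For example, a minimal sketch of the Operator object with the library disabled, based on the ConfigManagement manifest shown earlier in this topic:

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  policyController:
    enabled: true
    templateLibraryInstalled: false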

To remove all constraint templates and all constraints:

kubectl delete constrainttemplate --all

Restoring the constraint template library

If you disabled the constraint template library or uninstalled all constraint templates, you can restore it by setting spec.policyController.templateLibraryInstalled to true in the Operator config.
