This topic shows how to define Policy Controller constraints.
Policy Controller allows you to enforce policy for a Kubernetes cluster by defining one or more constraint objects. Once a constraint is installed, requests to the API server are checked against the constraint and will be rejected if they do not comply. Pre-existing non-compliant resources will be reported at audit time.
Every constraint is backed by a constraint template that defines the schema and logic of the constraint. Constraint templates can be sourced from Google, third parties, or you can write your own. See Writing a constraint template for more information about creating new templates.
Before you begin
- You must install Policy Controller before continuing.
- The constraint template library must be enabled before you can use the example constraints in this topic.
Using the constraint template library
When you define a constraint, you specify the constraint template
it extends. A library of common constraint templates developed by Google is
installed by default, and many organizations do not need to create custom
constraint templates directly in Rego. Constraint templates provided by Google
have the label
configmanagement.gke.io/configmanagement. To list them, use the following command:
kubectl get constrainttemplates -l="configmanagement.gke.io/configmanagement=config-management"
To describe a constraint template and check its required parameters:
kubectl describe constrainttemplate [CONSTRAINT-TEMPLATE-NAME]
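For example, to inspect the K8sRequiredLabels template used later in this topic (constraint template objects are named with the lowercased kind):
kubectl describe constrainttemplate k8srequiredlabels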
You can view all constraint templates in the library.
Defining a constraint
You define a constraint using YAML, and you do not need to understand or write Rego. Instead, a constraint invokes a constraint template and provides it with parameters specific to the constraint.
- The lowercased kind matches the name of a constraint template.
- metadata.name is the name of the constraint.
- The match field defines which objects the constraint applies to. All conditions specified must be matched before an object is in scope for a constraint. match conditions are defined by the following sub-fields:
  - kinds are the kinds of resources the constraint applies to, determined by two fields: apiGroups is a list of Kubernetes API groups that will match, and kinds is a list of kinds that will match. "*" matches everything. If at least one kinds entry matches, the kinds condition is satisfied.
  - namespaces is a list of namespace names the object can belong to. The object must belong to at least one of these namespaces. Namespace resources are treated as if they belong to themselves.
  - excludedNamespaces is a list of namespaces that the object cannot belong to.
  - labelSelector is a Kubernetes label selector that the object must satisfy.
  - namespaceSelector is a label selector on the namespace the object belongs to. If the namespace does not satisfy the selector, the object does not match. Namespace resources are treated as if they belong to themselves.
- The parameters field defines the arguments for the constraint, based on what the constraint template expects.
The following constraint, called ns-must-have-geo, invokes a constraint template, K8sRequiredLabels, which is included in the constraint template library provided by Google. The constraint defines parameters that the constraint template uses to evaluate whether namespaces have the geo label set to some value:
# ns-must-have-geo.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
To create the constraint, apply it using
kubectl apply -f:
kubectl apply -f ns-must-have-geo.yaml
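With the constraint in place, requests to create namespaces without a geo label are rejected at admission time, and pre-existing namespaces without the label are reported as violations at audit time. As an illustration, a namespace like the following would satisfy the constraint; the name and label value are placeholders, since the constraint only requires that the geo key exists:
apiVersion: v1
kind: Namespace
metadata:
  name: analytics        # example name, chosen for illustration
  labels:
    geo: us-east1        # any value works; the constraint only requires the "geo" key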
Auditing a constraint
If the constraint is configured and installed correctly, its status.byPod.enforced field is set to true, whether the constraint is configured to enforce the policy or only to test it.
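You can check this field by viewing the constraint object directly. For example, for the ns-must-have-geo constraint created earlier:
kubectl get k8srequiredlabels ns-must-have-geo -o yaml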
Constraints are enforced by default, and a violation of a constraint prevents a given cluster operation. You can set a constraint's spec.enforcementAction field to dryrun to report violations in the status.violations field without preventing the operation.
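For example, the following sketch shows the ns-must-have-geo constraint from earlier with enforcement set to dry run; only the enforcementAction field differs from the original example:
# ns-must-have-geo.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  enforcementAction: dryrun   # report violations in status.violations without blocking requests
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"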
To learn more about auditing, see Auditing using constraints.
Caveats when syncing constraints
Keep the following caveats in mind when syncing constraints.
- If a cluster operation triggers a constraint whose NamespaceSelector refers to a namespace that hasn't been synced, the constraint is enforced and the operation is prevented. In other words, a missing namespace "fails closed."
- If you change the labels of a namespace, the cache may contain outdated data for a brief time.
Minimize the need to rename a namespace or change its labels, and test constraints that impact a renamed or relabeled namespace to ensure they work as expected.
Configure Policy Controller for referential constraints
Before you can enable referential constraints, you must create a Config that tells Policy Controller what kinds of objects to watch, such as namespaces.
Save the following YAML manifest to a file, and apply it with kubectl apply -f. The manifest configures Policy Controller to watch Namespaces and Ingresses. Create an entry with group, version, and kind under spec.sync.syncOnly with the values for each type of object you want to watch:
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"
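Assuming you saved the manifest to a file named config.yaml (a filename chosen here for illustration), apply it:
kubectl apply -f config.yaml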
Enabling referential constraints
A referential constraint references another object in its definition. For
example, you could create a constraint that requires Ingress objects in a
cluster to have unique hostnames. The constraint is referential if its constraint
template contains the string
data.inventory in its Rego.
Referential constraints are disabled by default in Policy Controller. Referential constraints are only guaranteed to be eventually consistent, and this creates risks:
- On an overloaded API server, the contents of Policy Controller's cache may become stale, causing a referential constraint to "fail open", meaning that the enforcement action appears to be working when it isn't. For example, you can create Ingresses with duplicate hostnames too quickly to allow the admission controller to detect the duplicates.
- The order in which constraints are installed and the order in which the cache is updated are both random.
If you understand these risks and still want to enable support for referential constraints, set policyController.referentialRulesEnabled to true in the ConfigManagement object:
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    referentialRulesEnabled: true
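As a sketch of a referential constraint, the following assumes the constraint template library's K8sUniqueIngressHost template is installed; it requires every Ingress in the cluster to use a unique hostname and relies on the Ingress data synced by the Config shown above:
# unique-ingress-host.yaml (illustrative)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sUniqueIngressHost
metadata:
  name: unique-ingress-host
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]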
Listing all constraints
To list all constraints installed on a cluster, use the following command:
kubectl get constraint
Removing a constraint
To find all constraints using a constraint template, list all objects with the same kind as the constraint template's name:
kubectl get [CONSTRAINT-TEMPLATE-NAME]
To remove a constraint, specify its kind and name:
kubectl delete [CONSTRAINT-TEMPLATE-NAME] [CONSTRAINT-NAME]
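For example, to remove the ns-must-have-geo constraint defined earlier in this topic:
kubectl delete k8srequiredlabels ns-must-have-geo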
If you want to delete the constraint template the constraint was using, make a note of the constraint's kind.
When you remove a constraint, it stops being enforced as soon as the API server marks the constraint as deleted.
Removing all constraint templates
First, set spec.policyController.templateLibraryInstalled to false in the ConfigManagement object. This prevents the Operator from automatically reinstalling the library.
To remove all constraint templates and all constraints:
kubectl delete constrainttemplate --all
Restoring the constraint template library
If you disabled the constraint template library or uninstalled all constraint templates, you can restore it by setting spec.policyController.templateLibraryInstalled to true in the ConfigManagement object.
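A minimal sketch of the relevant fields, assuming the same ConfigManagement object shown earlier in this topic and that spec.policyController.templateLibraryInstalled controls installation of the library:
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    templateLibraryInstalled: true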