This page shows you how to define Policy Controller constraints.
Policy Controller lets you enforce policy for a Kubernetes cluster by defining one or more constraint objects. After a constraint is installed, requests to the API server are checked against the constraint and are rejected if they do not comply. Pre-existing non-compliant resources are reported at audit time.
Every constraint is backed by a constraint template that defines the schema and logic of the constraint. Constraint templates can be sourced from Google and third parties, or you can write your own. For more information about creating new templates, see Writing a constraint template.
Before you begin
- Install Policy Controller.
- Enable the constraint template library so that you can use the example constraints in this topic.
Using the constraint template library
When you define a constraint, you specify the constraint template that it extends. A library of common constraint templates developed by Google is installed by default, and many organizations do not need to create custom constraint templates directly in Rego. Constraint templates provided by Google have the label configmanagement.gke.io/configmanagement.
To list the constraint templates installed from the library, use the following command:
kubectl get constrainttemplates \
  -l="configmanagement.gke.io/configmanagement=config-management"
To describe a constraint template and check its required parameters, use the following command:
kubectl describe constrainttemplate CONSTRAINT_TEMPLATE_NAME
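For example, to inspect the K8sRequiredLabels template used later on this page (assuming the template library is installed):
kubectl describe constrainttemplate k8srequiredlabels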
You can also view all constraint templates in the library.
Defining a constraint
You define a constraint by using YAML, and you do not need to understand or write Rego. Instead, a constraint invokes a constraint template and provides it with parameters specific to the constraint.
If you are using a structured repo, we recommend that you create your constraints in the cluster/ directory.
Constraints have the following fields:
- The lowercased kind matches the name of a constraint template.
- The metadata.name is the name of the constraint.
- The match field defines which objects the constraint applies to. All conditions specified must be matched before an object is in scope for a constraint. match conditions are defined by the following sub-fields (see the example match block after this list):
  - kinds are the kinds of resources the constraint applies to, determined by two fields: apiGroups is a list of Kubernetes API groups that will match and kinds is a list of kinds that will match. "*" matches everything. If at least one apiGroup and one kind entry match, the kinds condition is satisfied.
  - scope accepts *, Cluster, or Namespaced, which determines if cluster-scoped and/or namespace-scoped resources are selected (defaults to *).
  - namespaces is a list of namespace names the object can belong to. The object must belong to at least one of these namespaces. Namespace resources are treated as if they belong to themselves.
  - excludedNamespaces is a list of namespaces that the object cannot belong to.
  - labelSelector is a Kubernetes label selector that the object must satisfy.
  - namespaceSelector is a label selector on the namespace the object belongs to. If the namespace does not satisfy the selector, the object does not match. Namespace resources are treated as if they belong to themselves.
- The parameters field defines the arguments for the constraint, based on what the constraint template expects.
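As an illustration, the following hypothetical match block combines several of these sub-fields; the API group, kind, namespace, and label values are examples only:
# Example match block (illustrative values only)
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
    scope: Namespaced
    excludedNamespaces: ["kube-system"]
    labelSelector:
      matchLabels:
        team: "payments"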
The following constraint, called ns-must-have-geo, invokes a constraint template called K8sRequiredLabels, which is included in the constraint template library provided by Google. The constraint defines parameters that the constraint template uses to evaluate whether namespaces have the geo label set to some value.
# ns-must-have-geo.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
To create the constraint, use kubectl apply -f:
kubectl apply -f ns-must-have-geo.yaml
Auditing a constraint
If the constraint is configured and installed correctly, its status.byPod[].enforced field is set to true, regardless of whether the constraint is configured to enforce or only test the constraint.
Constraints are enforced by default, and a violation of a constraint prevents a given cluster operation. You can set a constraint's spec.enforcementAction to dryrun to report violations in the status.violations field without preventing the operation.
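For example, here is a sketch of the ns-must-have-geo constraint from earlier with enforcement switched to dry run; only the spec.enforcementAction field is added:
# ns-must-have-geo.yaml, audit-only variant
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
You can then inspect the status.byPod[].enforced field and any recorded violations with:
kubectl get k8srequiredlabels ns-must-have-geo -o yaml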
To learn more about auditing, see Auditing using constraints.
Caveats when syncing constraints
Keep the following caveats in mind when syncing constraints.
Eventual consistency
You can commit constraints to the repo, and can limit their effects using ClusterSelectors or NamespaceSelectors. Because syncing is eventually consistent, keep the following caveats in mind:
- If a cluster operation triggers a constraint whose NamespaceSelector refers to a namespace that hasn't been synced, the constraint is enforced and the operation is prevented. In other words, a missing namespace "fails closed."
- If you change the labels of a namespace, the cache may contain outdated data for a brief time.
Minimize the need to rename a namespace or change its labels, and test constraints that impact a renamed or relabeled namespace to ensure they work as expected.
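For example, before relying on a constraint whose NamespaceSelector targets a relabeled namespace, you can check the namespace's current labels (NAMESPACE_NAME is a placeholder):
kubectl get namespace NAMESPACE_NAME --show-labels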
Configure Policy Controller for referential constraints
Before you can enable referential constraints, you must create a Config that tells Policy Controller what kinds of objects to watch, such as namespaces.
Save the following YAML manifest to a file, and apply it with kubectl. The manifest configures Policy Controller to watch namespaces and Ingresses. Create an entry with group, version, and kind under spec.sync.syncOnly, with the values for each type of object you want to watch.
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"
Enabling referential constraints
A referential constraint references another object in its definition. For example, you could create a constraint that requires Ingress objects in a cluster to have unique hostnames. The constraint is referential if its constraint template contains the string data.inventory in its Rego.
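As a sketch, assuming the K8sUniqueIngressHost template from the constraint template library is installed and referential constraints are enabled, a constraint for the unique-hostname example could look like the following:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sUniqueIngressHost
metadata:
  name: unique-ingress-host
spec:
  match:
    kinds:
      - apiGroups: ["extensions", "networking.k8s.io"]
        kinds: ["Ingress"]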
Referential constraints are disabled by default in Policy Controller. Referential constraints are only guaranteed to be eventually consistent, and this creates risks:
- On an overloaded API server, the contents of Policy Controller's cache may become stale, causing a referential constraint to "fail open", meaning that the enforcement action appears to be working when it isn't. For example, you can create Ingresses with duplicate hostnames too quickly to allow the admission controller to detect the duplicates.
- The order in which constraints are installed and the order in which the cache is updated are both random.
If you understand these risks and still want to enable support for referential constraints, you can enable referential constraints in the Google Cloud Console. To learn more, see Installing Policy Controller.
You can also set policyController.referentialRulesEnabled to true in your config-management.yaml file:
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
  namespace: config-management-system
spec:
  clusterName: my-cluster
  channel: dev
  policyController:
    enabled: true
    referentialRulesEnabled: true
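To apply the updated configuration, reapply the file (this assumes you manage the file directly with kubectl):
kubectl apply -f config-management.yaml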
Listing all constraints
To list all constraints installed on a cluster, use the following command:
kubectl get constraint
Removing a constraint
To find all constraints that use a constraint template, use the following command to list all objects with the same kind as the constraint template's metadata.name:
kubectl get CONSTRAINT_TEMPLATE_NAME
To remove a constraint, specify its kind and name:
kubectl delete CONSTRAINT_TEMPLATE_NAME CONSTRAINT_NAME
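For example, to remove the ns-must-have-geo constraint defined earlier, whose kind is K8sRequiredLabels:
kubectl delete k8srequiredlabels ns-must-have-geo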
If you want to delete the constraint template that the constraint was using, make a note of the constraint's kind.
When you remove a constraint, it stops being enforced as soon as the API server marks the constraint as deleted.
Removing all constraint templates
Set spec.policyController.templateLibraryInstalled to false. This prevents Anthos Config Management from automatically reinstalling the library.
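For example, building on the config-management.yaml shown earlier, the relevant excerpt would look like this:
# Excerpt from config-management.yaml
spec:
  policyController:
    enabled: true
    templateLibraryInstalled: false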
To remove all constraint templates and all constraints, use the following command:
kubectl delete constrainttemplate --all
Restoring the constraint template library
If you disabled the constraint template library or uninstalled all constraint
templates, you can restore it by setting
spec.policyController.templateLibraryInstalled
to true
in the
Anthos Config Management config.
Troubleshooting
Error creating a constraint template
If you see an error that mentions a disallowed ref, confirm that you enabled referential constraints. For example, if you use data.inventory in a constraint template without enabling referential constraints first, the error is similar to the following:
admission webhook "validation.gatekeeper.sh" denied the request: check refs failed on module {templates["admission.k8s.gatekeeper.sh"]["MyTemplate"]}: disallowed ref data.inventory...
What's next
- Learn more about Policy Controller.
- Install Policy Controller.
- Use the constraint template library.
- Learn how to use constraints instead of PodSecurityPolicies.