This topic shows how to write a custom constraint template and use it to extend Policy Controller.
Overview
Constraint templates can be used to extend Policy Controller. If you cannot find a pre-written template that suits your needs, you can write your own.
Policy Controller policies are described using the OPA Constraint Framework, and written in Rego. A policy can evaluate any field of a Kubernetes object.
Writing policies using Rego is a specialized skill. For this reason, a library of common constraint templates is installed by default. Most users can invoke these constraint templates when creating constraints. If you have specialized needs, you can create your own constraint templates.
Constraint templates allow you to separate a policy's logic from its specific requirements, for reuse and delegation. You can create constraints using constraint templates developed by third parties, such as open source projects, software vendors, or regulatory experts.
Before you begin
- You must have an Anthos entitlement to install Policy Controller using Anthos Config Management.
- You need a cluster with Anthos Config Management already installed.
- Set up Anthos Config Management.
Example Constraint Template
Below is an example constraint template that denies all resources whose name matches a value provided by the creator of the constraint. The rest of this page will discuss the contents of the template, highlighting important concepts along the way.
Constraint Template
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyname
spec:
  crd:
    spec:
      names:
        kind: K8sDenyName
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            invalidName:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenynames
        violation[{"msg": msg}] {
          input.review.object.metadata.name == input.parameters.invalidName
          msg := sprintf("The name %v is not allowed", [input.parameters.invalidName])
        }
Constraint
Here is an example constraint a user might implement to deny all resources named "policy-violation":
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyName
metadata:
  name: no-policy-violation
spec:
  parameters:
    invalidName: "policy-violation"
Parts of a Constraint Template
Constraint templates have two important pieces:
- The schema of the constraint that you want users to create. The schema of a constraint template is stored in the crd field.
- The Rego source code that is executed when the constraint is evaluated. The Rego source code for a template is stored in the targets field.
The CRD Field
The CRD field is a blueprint for creating the Kubernetes Custom Resource Definition that defines the constraint resource for the Kubernetes API server. You only need to populate the following fields:
- spec.crd.spec.names.kind is the Kind of the constraint. When lowercased, the value of this field must be equal to metadata.name.
- spec.crd.spec.validation.openAPIV3Schema is the schema for the spec.parameters field of the constraint resource (the rest of the constraint's schema is defined automatically by Anthos Config Management). It follows the same conventions as it would in a regular CRD resource. Its definition is documented in the Kubernetes API documentation.
Prefixing the constraint template's kind with "K8s" is a convention that avoids collisions with other kinds of constraint templates, such as Forseti templates that target GCP resources.
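For example, a hypothetical variant of the template above that rejects a list of names rather than a single name could declare an array parameter. Only the crd portion is shown, and the K8sDenyNames kind and invalidNames parameter are illustrative, not part of the default library:

spec:
  crd:
    spec:
      names:
        kind: K8sDenyNames   # hypothetical kind, for illustration only
      validation:
        openAPIV3Schema:
          properties:
            invalidNames:    # hypothetical parameter: a list of forbidden names
              type: array
              items:
                type: string

A constraint of kind K8sDenyNames could then set spec.parameters.invalidNames to a list of strings, which the template's Rego would read from input.parameters.invalidNames.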
The Rego Source Code
Location
The Rego source code is stored under the spec.targets field, where targets is an array of objects of the format {"target": "admission.k8s.gatekeeper.sh", "rego": <REGO SOURCE CODE>, "libs": <LIST OF REGO LIBRARIES>}. Currently, only one entry in targets is allowed.

- target tells Anthos Config Management what system we are looking at (in this case, Kubernetes).
- rego is the source code for the constraint.
- libs is an optional list of libraries of Rego code that will be made available to the constraint template. It is meant to make it easier to use shared libraries and is out-of-scope for this tutorial.
Source Code
Let's take a look at the Rego for the above constraint:
package k8sdenynames
violation[{"msg": msg}] {
  input.review.object.metadata.name == input.parameters.invalidName
  msg := sprintf("The name %v is not allowed", [input.parameters.invalidName])
}
There are a few items to note here:
- package k8sdenynames is required by OPA (Rego's runtime). The value is ignored.
- The Rego rule that Policy Controller invokes to see if there are any violations is called violation. If this rule has matches, a violation of the constraint has occurred.
- The violation rule has the signature violation[{"msg": "violation message for the user"}], where the value of "msg" is the violation message that will be returned to the user.
- The parameters provided to the constraint are made available under the input.parameters keyword.
- The request-under-test is stored under the input.review keyword.

input.review has the following fields:
- uid is the unique ID for this particular request; not available during audit.
- kind is the kind information for the object-under-test. It has the format:
  - kind: the resource kind
  - group: the resource group
  - version: the resource version
- name is the resource name. It may be empty if the user is relying on the API server to generate the name on a CREATE request.
- namespace is the resource namespace (not provided for cluster-scoped resources).
- operation is the operation requested (e.g. CREATE or UPDATE); not available during audit.
- userInfo is the requesting user's information; not available during audit.
  - username is the user making the request
  - uid is the user's UID
  - groups is a list of groups the user is a member of
  - extra is any extra user information provided by Kubernetes
- object is the object the user is attempting to modify/create.
- oldObject is the original state of the object; only available on UPDATE operations.
- dryRun is whether this request was invoked with kubectl --dry-run; not available during audit.
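As a sketch of how these fields can be combined, the following hypothetical rule (not part of the template above; the package name and message text are illustrative) flags Deployments created in the default namespace:

package k8sdenydefaultnamespace
violation[{"msg": msg}] {
  # Hypothetical example: the object-under-test is a Deployment...
  input.review.kind.kind == "Deployment"
  # ...being admitted into the "default" namespace.
  input.review.namespace == "default"
  msg := sprintf("Deployment %v may not run in the default namespace", [input.review.object.metadata.name])
}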
Writing Referential Constraint Templates
Referential constraint templates are templates that allow the user to constrain one object with respect to other objects. An example of this might be "don't allow a pod to be created before a matching ingress is known to exist". Another example might be "do not allow two services to have the same hostname".
Policy Controller allows you to write referential constraints by watching the API server for a user-provided set of resources. When a resource is modified, Policy Controller caches it locally so that it can be easily referenced by Rego source code. Policy Controller makes this cache available under the data.inventory keyword.
Cluster-scoped resources are cached in the following location:
data.inventory.cluster[<groupVersion>][<kind>][<name>]
For instance, a Node named my-favorite-node could be found under:
data.inventory.cluster["v1"]["Node"]["my-favorite-node"]
Namespace-scoped resources are cached here:
data.inventory.namespace[<namespace>][<groupVersion>][<kind>][<name>]
For example, a ConfigMap named production-variables in the shipping-prod namespace could be found under:
data.inventory.namespace["shipping-prod"]["v1"]["ConfigMap"]["production-variables"]
The full contents of the object are stored at this cache location and can be referenced in your Rego however you see fit.
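For illustration, here is a sketch of a referential rule in that spirit: it rejects an Ingress whose host is already claimed by another cached Ingress. It assumes Ingress (networking.k8s.io/v1) resources are among the user-provided set of resources being watched and cached; the package name and message text are illustrative:

package k8suniqueingresshost

# Two objects refer to the same Ingress if they share a namespace and name.
identical(obj, review) {
  obj.metadata.namespace == review.object.metadata.namespace
  obj.metadata.name == review.object.metadata.name
}

violation[{"msg": msg}] {
  # Host requested by the Ingress under review.
  host := input.review.object.spec.rules[_].host
  # Any cached Ingress, in any namespace, that already claims the same host.
  other := data.inventory.namespace[ns]["networking.k8s.io/v1"]["Ingress"][name]
  other.spec.rules[_].host == host
  not identical(other, input.review)
  msg := sprintf("Ingress host %v conflicts with existing Ingress %v/%v", [host, ns, name])
}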
More Information on Rego
The information above covers the features unique to Policy Controller that make it easy to write constraints on Kubernetes resources in Rego. A full tutorial on how to write Rego is out of scope for this guide. However, the Open Policy Agent website has documentation on the syntax and features of the Rego language itself.
Installing Your Constraint Template
Once you've created your constraint template, apply it with kubectl apply and Policy Controller will take care of ingesting it. Be sure to check the status field of your constraint template to make sure there were no errors instantiating it. On successful ingestion, the status field should show created: true, and the observedGeneration noted in the status field should equal the metadata.generation field.
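For example, for the k8sdenyname template from this guide you could inspect the ingestion status with a command like:

kubectl get constrainttemplate k8sdenyname -o yaml

and then confirm that created: true and a matching observedGeneration appear under the status field.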
Once the template is ingested, you can apply constraints for it as described in Creating constraints.
Removing a Constraint Template
First, verify that no constraints you want to preserve are using the constraint template:
kubectl get [TEMPLATE-NAME]
In case of a naming conflict between the constraint template's name and a different object in the cluster, you can use the following command instead:
kubectl get [TEMPLATE-NAME].constraints.gatekeeper.sh
Remove the constraint template:
kubectl delete constrainttemplate [CONSTRAINT-TEMPLATE-NAME]
When you remove a constraint template, you can no longer create constraints that reference it.
What's next?
- Learn more about Policy Controller
- Install Policy Controller
- Use the constraint template library
- Learn how to use constraints instead of PodSecurityPolicies