Configuring Kubernetes objects

This topic demonstrates how to create configs, the files Anthos Config Management reads from Git and applies to your clusters automatically.

Before you begin

  • Make sure you understand the structure of the repo. The location of a config within the repo determines which clusters and namespaces it is applied to. This is especially important for the namespaces/ directory, because its subdirectories can inherit configs from their parent abstract namespace directories.

  • You need a basic understanding of YAML or JSON syntax, because configs are written in one of these two formats. All the examples in this documentation use YAML because it is easier for people to read; for comparison, a JSON equivalent of a simple config appears after this list.

  • Different types of Kubernetes objects have different configurable options. It is helpful to understand how you would achieve your desired configuration manually before writing a config for that type of object.

  • We created a canonical example repo to illustrate how Anthos Config Management works. Examples in this topic are taken from that repo, so you may find it helpful to have the repo open in a browser or clone it to your local system.
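
For reference, the audit namespace config shown later in this topic could be written equivalently in JSON:

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "audit"
  }
}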

Creating a config

When you create a config, you need to decide the best location in the repo and the fields to include.

Location in the repo

The location of a config in the repo is one factor that determines which clusters it applies to. The following list summarizes where each type of config belongs; a sketch of an example layout follows it.

  • Configs for cluster-scoped objects, except for namespaces, are stored in the cluster/ directory of the repo.
  • Configs for namespaces and namespace-scoped objects are stored in the namespaces/ directory of the repo.
  • Configs for Anthos Config Management components are stored in the system/ directory of the repo.
  • The config for the Config Management Operator is not stored directly in the repo and is not synced.
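
For example, a repo containing the configs shown later in this topic might be laid out as follows. This is only a sketch; the filenames are illustrative, and Anthos Config Management does not require particular filenames:

system/
cluster/
  namespace-reader-clusterrole.yaml
  namespace-reader-clusterrolebinding.yaml
namespaces/
  audit/
    namespace.yaml
  shipping-app-backend/
    default-deny-all-traffic.yaml
    shipping-dev/
      namespace.yaml
    shipping-prod/
      namespace.yaml
    shipping-staging/
      namespace.yaml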

Contents of the config

Configs use an additive approach, similar to kubectl apply. When you create a new object, you must include all of its required fields. When you update an existing object, you only need to supply the fields you want to change, along with enough information to identify the object, such as its apiVersion, kind, and metadata.name.

The config, when applied, must result in a valid Kubernetes object.
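
For example, the following sketch updates only the replica count of an existing managed Deployment. The name my-app is hypothetical; apiVersion, kind, and metadata.name are still required so that the applied result is a valid object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3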

Configuring existing Kubernetes objects

You can create a config for an existing Kubernetes object, such as a namespace that already existed in your cluster before you installed Anthos Config Management. However, the config is ignored unless the object has the annotation configmanagement.gke.io/managed: enabled. For an existing object, you need to apply the annotation manually, as shown below.
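
For example, to bring a pre-existing namespace named audit under management, you could annotate it manually (a sketch; substitute your own object type and name):

kubectl annotate namespace audit configmanagement.gke.io/managed=enabled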

For namespaces specifically, Anthos Config Management does apply configs that create new objects within an unannotated namespace, and applies the configmanagement.gke.io/managed: enabled annotation to those objects. However, Anthos Config Management refuses to modify or remove any unannotated cluster-scoped object from a cluster. This is illustrated in the diagram in Working with configs over time.

Configuring CustomResourceDefinitions

Anthos Config Management allows you to sync CustomResourceDefinitions (CRDs) the same way you would sync any other resource. There are a few things to keep in mind when syncing CRDs.

  • CRDs must be placed in the cluster/ directory, even when they declare a namespace-scoped Custom Resource, because the CRD itself is a cluster-scoped object. See the sketch after this list.

  • Updates to CRDs and their corresponding CustomResources do not occur in any predictable order. If you modify a CRD and its corresponding CustomResources in the same commit, there is no guarantee that the CRD update occurs before the CustomResource updates. This can cause the syncer logs to report a transient error until both the CRD and the CustomResource are present in the cluster.

  • Anthos Config Management does not allow removal of a CRD if any CustomResource in the repo still depends on it. To remove a CRD, you also need to remove its CustomResources, preferably in the same commit.

  • You can sync a CustomResource without syncing its CRD, as long as you can guarantee that the CRD already exists in the cluster.
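
As an illustration, the following hypothetical CRD declares a namespace-scoped CronTab resource. The CRD config itself belongs in cluster/, while configs for CronTab objects belong under namespaces/:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string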

Example configs

The following example configs are all taken from the example repo, and should get you started writing your own configs. This list is not exhaustive; you can configure any type of Kubernetes object using Anthos Config Management.

Namespace config

This config creates a namespace called audit.

apiVersion: v1
kind: Namespace
metadata:
  name: audit

When you create a namespace config, you can also add labels or annotations to the namespace. Labels are required when using a NamespaceSelector, sketched below.
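
For reference, a NamespaceSelector that selects namespaces labeled env: prod looks roughly like this (a sketch of the general shape; see the NamespaceSelector documentation for the exact fields):

kind: NamespaceSelector
apiVersion: configmanagement.gke.io/v1
metadata:
  name: select-prod
spec:
  selector:
    matchLabels:
      env: prod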

The following example config creates a namespace called shipping-prod if it does not already exist. The namespace has the label env: prod and the annotation audit: "true". If someone manually modifies any of this metadata, Anthos Config Management quickly reverts it to the values in the config.

apiVersion: v1
kind: Namespace
metadata:
  name: shipping-prod
  labels:
    env: prod
  annotations:
    audit: "true"

For more information about working with namespaces, see Configuring namespaces and namespace-scoped objects.

ClusterRole config

This config creates a ClusterRole called namespace-reader, which provides the ability to read (get, watch, and list) all namespace objects in the cluster. A ClusterRole config is often used together with a ClusterRoleBinding config.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list"]

ClusterRoleBinding config

This config creates a ClusterRoleBinding called namespace-readers, which grants user cheryl@foo-corp.com the namespace-reader ClusterRole.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-readers
subjects:
- kind: User
  name: cheryl@foo-corp.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-reader
  apiGroup: rbac.authorization.k8s.io

ClusterRoleBindings are cluster-scoped, and cannot be placed in namespace directories or abstract namespaces.
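
By contrast, a RoleBinding is namespace-scoped and can live in a namespace directory or abstract namespace. As a sketch, the following grants the same user the built-in view ClusterRole, scoped to whichever namespaces inherit the config:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: viewers
subjects:
- kind: User
  name: cheryl@foo-corp.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io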

PodSecurityPolicy config

This example creates a PodSecurityPolicy called psp, which disallows running privileged containers, and allows containers to run as any valid user on the node.

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

PodSecurityPolicies are cluster-scoped, and cannot be placed in namespace directories or abstract namespaces.

NetworkPolicy config

This example creates a NetworkPolicy called default-deny-all-traffic.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-traffic
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

NetworkPolicies are namespace-scoped, and can only be placed in namespace directories or abstract namespaces.

When you apply the above NetworkPolicy to a single namespace, it isolates any Pods in that namespace from ingress and egress traffic.

When you apply the same NetworkPolicy to multiple namespaces by placing it in an abstract namespace with descendant namespaces, each of those namespaces inherits the NetworkPolicy. In the example repo, shipping-app-backend is an abstract namespace that contains configs for the shipping-dev, shipping-prod, and shipping-staging namespaces. If you add the example NetworkPolicy above to the shipping-app-backend directory, each of those namespaces inherits it, so each of their Pods is isolated from ingress and egress traffic.

You can use namespace inheritance to enforce a least-privilege approach to security. For example, if the previous NetworkPolicy example is applied to shipping-app-backend and the following NetworkPolicy is added to the shipping-dev namespace directory, ingress traffic is allowed only to Pods in that namespace with the app: nginx label. The shipping-prod and shipping-staging namespaces are not affected by this NetworkPolicy.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-nginx-ingress
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - {}

ResourceQuota config

This example creates a ResourceQuota called quota, which sets a hard limit of 1 Pod, 100m of CPU (one-tenth of a CPU), and 100 mebibytes (100Mi) of memory.

kind: ResourceQuota
apiVersion: v1
metadata:
  name: quota
spec:
  hard:
    pods: "1"
    cpu: "100m"
    memory: 100Mi

If creating a new object would violate an existing ResourceQuota, Kubernetes rejects the request, and keeps rejecting it until creating the object would no longer violate the quota. For example, with the quota above in place, a request to create a second Pod in the namespace is rejected until the first Pod is deleted.

What's next?