Using PodSecurityPolicies

This page explains how to use PodSecurityPolicies in Kubernetes Engine.

Overview

A PodSecurityPolicy is a cluster-level resource that defines a set of conditions Pods must meet to be accepted by the cluster. The PodSecurityPolicy admission controller validates requests to create and update Pods against those conditions; when a request does not meet them, the request is rejected and an error is returned.

To use PodSecurityPolicy, you must first create and define policies that new and updated Pods must meet. Then, you must enable the PodSecurityPolicy admission controller, which validates requests to create and update Pods against the defined policies.

When multiple PodSecurityPolicies are available, the admission controller uses the first policy that successfully validates the Pod. Policies are ordered alphabetically by name, and the controller prefers non-mutating policies (policies that don't change the Pod) over mutating ones.

PodSecurityPolicy is available in Kubernetes Engine clusters running Kubernetes version 1.8.6 or later.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • Set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • Update your gcloud components to the latest version:
    gcloud components update

Defining PodSecurityPolicies

You need to define PodSecurityPolicy resources in your cluster before the PodSecurityPolicy controller can validate and accept Pods into the cluster.

PodSecurityPolicies specify a list of restrictions, requirements, and defaults for Pods created under the policy. Examples include restricting the use of privileged containers, hostPath volumes, and host networking, or defaulting all containers to run with a seccomp profile. The PodSecurityPolicy admission controller validates requests against available PodSecurityPolicies.

The following example PodSecurityPolicy, my-psp.yaml, prevents the creation of privileged Pods. The policy also sets several other control aspects, such as allowing access to all available volume types:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
spec:
  privileged: false  # Prevents creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

The PodSecurityPolicy specification can secure numerous control aspects. The control aspects specified in this example—seLinux, supplementalGroups, runAsUser, and fsGroup—are all set to RunAsAny, indicating that any valid values for these fields can be used with this policy.
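
If you want tighter controls, you can replace RunAsAny with more restrictive rules. The following sketch is a hypothetical policy (the name my-restricted-psp is illustrative, not part of the example above) that requires containers to run as a non-root user, restricts supplementalGroups and fsGroup to a specific ID range, and allows only a few volume types:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-restricted-psp  # Hypothetical name, for illustration only
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot  # Containers must not run as UID 0
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'secret'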

For more information about PodSecurityPolicies and their control aspects, refer to What is a Pod Security Policy? and the policy reference in the Kubernetes documentation.

You create this resource using the kubectl command-line tool:

kubectl apply -f my-psp.yaml
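
You can confirm that the policy was created by listing it, for example:

kubectl get podsecuritypolicy my-psp

The output lists the policy together with a summary of its settings, such as whether privileged containers are allowed.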

For more examples of configuring PodSecurityPolicies, refer to Example on the PodSecurityPolicy page of the Kubernetes documentation.

Authorizing policies

You use role-based access control (RBAC) to create a Role or ClusterRole that grants the desired service accounts access to PodSecurityPolicies. A ClusterRole grants cluster-wide permissions, and a Role grants permissions within a namespace that you define.

For example, the following ClusterRole, my-clusterrole.yaml, grants access to the my-psp PodSecurityPolicy, as indicated by verb: use:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-clusterrole
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp
  verbs:
  - use

Create the ClusterRole by running the following command:

kubectl apply -f my-clusterrole.yaml
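
If you prefer to scope the grant to a single namespace instead of the whole cluster, a namespaced Role works the same way. A minimal sketch, assuming a namespace named my-namespace (the name my-role is illustrative):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role  # Hypothetical name, for illustration only
  namespace: my-namespace
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp
  verbs:
  - use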

After creating a Role (or ClusterRole), you associate it with the desired service accounts by creating a RoleBinding (or ClusterRoleBinding) resource.

The following RoleBinding, my-rolebinding.yaml, binds the ClusterRole, my-clusterrole, to the service accounts in a specific namespace, my-namespace:

# Bind the ClusterRole to the desired set of service accounts.
# Policies should typically be bound to service accounts in a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
# Example: All service accounts in my-namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
# Example: A specific service account in my-namespace
- kind: ServiceAccount # Omit apiGroup
  name: default
  namespace: my-namespace

In this RoleBinding:

  • The subjects field specifies to which accounts the ClusterRole is bound.
  • The first subject is a Group, system:serviceaccounts, which encompasses all service accounts in the cluster. Because the RoleBinding is created in my-namespace, the grant applies only to Pods in that namespace.
  • The second subject is an individual ServiceAccount, default, the default service account in my-namespace.

Create the RoleBinding by running the following command:

kubectl apply -f my-rolebinding.yaml
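
To check that the binding has the intended effect, you can query the authorization layer directly. For example, the following command (a sketch) asks whether the default service account in my-namespace is allowed to use the my-psp policy:

kubectl auth can-i use podsecuritypolicy/my-psp --as=system:serviceaccount:my-namespace:default --namespace=my-namespace

The command prints yes when the binding grants access (impersonating another account with --as requires that your own credentials allow impersonation).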

For more information about RBAC, refer to Using RBAC Authorization.

Enabling PodSecurityPolicy controller

To use the PodSecurityPolicy admission controller, you must create a new cluster or update an existing cluster with the --enable-pod-security-policy flag.

To create a new cluster with PodSecurityPolicy, run the following command:

gcloud beta container clusters create [CLUSTER_NAME] --enable-pod-security-policy

To update an existing cluster:

gcloud beta container clusters update [CLUSTER_NAME] --enable-pod-security-policy
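
After the controller is enabled, requests that violate your policies are rejected. For example, with the my-psp policy shown earlier, a Pod that asks for privileged mode, such as the following sketch (the names privileged-pod and nginx are illustrative), should be rejected by the admission controller:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod  # Illustrative name
spec:
  containers:
  - name: nginx         # Illustrative container name and image
    image: nginx
    securityContext:
      privileged: true  # Violates privileged: false in my-psp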

Disabling PodSecurityPolicy controller

You disable the PodSecurityPolicy controller by running the following command:

gcloud beta container clusters update [CLUSTER_NAME] --no-enable-pod-security-policy

Disabling the controller stops the cluster from validating and defaulting new and updated Pods against your existing policies, but it does not delete the PodSecurityPolicy resources or their RBAC bindings.
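
If you want to confirm that your policies are still present after disabling the controller, you can list them:

kubectl get podsecuritypolicies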
