This page explains how to use PodSecurityPolicies in Google Kubernetes Engine.
A PodSecurityPolicy is an admission controller resource you create that validates requests to create and update Pods on your cluster. The PodSecurityPolicy defines a set of conditions that Pods must meet to be accepted by the cluster; when a request to create or update a Pod does not meet the conditions in the PodSecurityPolicy, that request is rejected and an error is returned.
To use PodSecurityPolicy, you must first create and define policies that new and updated Pods must meet. Then, you must enable the PodSecurityPolicy admission controller, which validates requests to create and update Pods against the defined policies.
When multiple PodSecurityPolicies are available, the admission controller uses the first policy that successfully validates. Policies are ordered alphabetically, and the controller prefers non-mutating policies (policies that don't change the Pod) over mutating policies.
PodSecurityPolicy is available in GKE clusters running Kubernetes version 1.8.6 or later.
Before you begin
To prepare for this task, perform the following steps:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
- Set your default project ID:
gcloud config set project [PROJECT_ID]
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone [COMPUTE_ZONE]
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region [COMPUTE_REGION]
- Update gcloud to the latest version:
gcloud components update
- Ensure that you understand how to use role-based access control in GKE.
Defining PodSecurityPolicies
You need to define PodSecurityPolicy resources in your cluster before the PodSecurityPolicy admission controller can validate and accept Pods into the cluster.
PodSecurityPolicies specify a list of restrictions, requirements, and defaults for Pods created under the policy. Examples include restricting the use of privileged containers, hostPath volumes, and host networking, or defaulting all containers to run with a seccomp profile. The PodSecurityPolicy admission controller validates requests against the available policies.
The following example PodSecurityPolicy, my-psp.yaml, prevents the creation of privileged Pods. The policy leaves several other control aspects permissive, such as allowing access to all available volume types:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
spec:
  privileged: false  # Prevents creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
The PodSecurityPolicy specification can secure numerous control aspects. The control aspects specified in this example, seLinux, supplementalGroups, runAsUser, and fsGroup, are all set to RunAsAny, indicating that any valid values for these fields can be used with this policy.
You create this resource using the
kubectl command-line tool:
kubectl apply -f my-psp.yaml
For more examples of configuring PodSecurityPolicies, refer to Example on the PodSecurityPolicy page of the Kubernetes documentation.
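The my-psp.yaml policy above is deliberately permissive apart from the privileged flag. As a sketch of a tighter configuration (the policy name my-restricted-psp and the specific field choices here are illustrative, not taken from this page), a policy might also block host namespaces, privilege escalation, and most volume types:

```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-restricted-psp   # Illustrative name
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false        # Block host networking
  hostPID: false            # Block host PID namespace
  hostIPC: false            # Block host IPC namespace
  runAsUser:
    rule: MustRunAsNonRoot  # Containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                  # Allow only common, non-host volume types
  - 'configMap'
  - 'secret'
  - 'emptyDir'
  - 'persistentVolumeClaim'
```

As with my-psp.yaml, you would create this resource with kubectl apply.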
Authorizing policies
Accounts with the cluster-admin role can use role-based access control (RBAC) to create a Role or ClusterRole that grants the desired service accounts access to PodSecurityPolicies. A ClusterRole grants cluster-wide permissions, and a Role grants permissions within a namespace that you define.
For example, the following ClusterRole, my-clusterrole.yaml, grants access to the my-psp PodSecurityPolicy, as indicated by the resourceNames field and the use verb:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-clusterrole
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp
  verbs:
  - use
Create the ClusterRole by running the following command:
kubectl apply -f my-clusterrole.yaml
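If the grant only needs to exist within a single namespace, a namespaced Role can carry the same rule instead of a ClusterRole. This is a sketch; the name my-role is illustrative, and the namespace my-namespace matches the RoleBinding example below:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role            # Illustrative name
  namespace: my-namespace  # A Role is scoped to one namespace
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp
  verbs:
  - use
```

A Role can only be referenced by RoleBindings in its own namespace, which keeps the grant scoped even if a binding is created elsewhere by mistake.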
The following RoleBinding, my-rolebinding.yaml, binds the ClusterRole, my-clusterrole, to service accounts in a specific namespace, my-namespace:
# Bind the ClusterRole to the desired set of service accounts.
# Policies should typically be bound to service accounts in a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
# Example: All service accounts in my-namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
# Example: A specific service account in my-namespace
- kind: ServiceAccount # Omit apiGroup
  name: default
  namespace: my-namespace
In this RoleBinding:
- The subjects field specifies the accounts to which the ClusterRole is bound.
- The first subject is a Group, system:serviceaccounts, which encompasses all service accounts in the cluster.
- The second subject is an individual ServiceAccount, default, which specifies the default service account in the namespace.
Create the RoleBinding by running the following command:
kubectl apply -f my-rolebinding.yaml
For more information about RBAC, refer to Using RBAC Authorization.
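If you instead want the policy to be usable by service accounts in every namespace, a ClusterRoleBinding can reference the same ClusterRole. The following is a sketch (the binding name is illustrative); note that binding system:serviceaccounts cluster-wide is usually broader than you want:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-clusterrolebinding   # Illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts  # All service accounts in all namespaces
```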
Enabling PodSecurityPolicy controller
To use the PodSecurityPolicy admission controller, you must create a new cluster or update an existing cluster with the --enable-pod-security-policy flag.
To create a new cluster with PodSecurityPolicy, run the following command:
gcloud beta container clusters create [CLUSTER_NAME] --enable-pod-security-policy
To update an existing cluster:
gcloud beta container clusters update [CLUSTER_NAME] --enable-pod-security-policy
Disabling PodSecurityPolicy controller
You disable the PodSecurityPolicy controller by running the following command:
gcloud beta container clusters update [CLUSTER_NAME] --no-enable-pod-security-policy
Disabling the controller causes the cluster to stop validating and defaulting Pods against the existing policies, but does not delete the policies. Bindings are also not deleted.
Working with NetworkPolicy
If you are using a NetworkPolicy and you have a Pod that is subject to a PodSecurityPolicy, create an RBAC Role or ClusterRole that has permission to use the PodSecurityPolicy. Then, bind the Role or ClusterRole to the Pod's service account. Granting permissions to user accounts is not sufficient in this case. For more information, see Authorizing policies.
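For instance, if the Pod runs as a dedicated service account (the names my-app-sa and my-app-psp-binding below are illustrative), a RoleBinding that grants it use of the my-clusterrole ClusterRole shown earlier might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-psp-binding   # Illustrative name
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole       # Grants use of the my-psp PodSecurityPolicy
subjects:
- kind: ServiceAccount       # The Pod's service account, not a user account
  name: my-app-sa            # Illustrative name
  namespace: my-namespace
```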