A new version of GKE on AWS was released on November 2. See the release notes for more information.

Security

This page describes the security features included in GKE on AWS, including each layer of its infrastructure, and how you can configure security features to suit your needs.

Overview

GKE on AWS offers several features to help secure your workloads, including the contents of your container image, the container runtime, the cluster network, and access to the cluster API server.

It's best to take a layered approach to protecting your clusters and workloads. You can apply the principle of least privilege to the level of access you provide to your users and workloads. You might need to make tradeoffs to allow the right level of flexibility and security.

Shared responsibilities

When you use GKE on AWS, you agree to take on certain responsibilities for your clusters. For more information, see Anthos clusters shared responsibilities.

Authentication and authorization

You authenticate to a GKE on AWS user cluster through one of the following methods:

To configure more granular access to Kubernetes resources at the cluster level or within Kubernetes namespaces, you use Kubernetes role-based access control (RBAC). RBAC lets you create detailed policies that define which operations and resources users and service accounts can access. With RBAC, you can control access for any authenticated identity.
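For example, an RBAC policy that grants read-only access to Pods in a single namespace can be expressed as a namespaced Role plus a RoleBinding. The namespace, Role name, and user name below are illustrative placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team        # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
- kind: User
  name: jane@example.com     # hypothetical identity from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The Role defines what is allowed; the RoleBinding attaches that permission to a specific identity. Use a ClusterRole and ClusterRoleBinding instead when the permission must span all namespaces.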

To further simplify and streamline your authentication and authorization strategy for Kubernetes Engine, GKE on AWS disables legacy attribute-based access control (ABAC).

For more information, see Preparing a Kubernetes Engine Environment for Production.

Encryption

By default, GKE on AWS uses the AWS Key Management Service (KMS) to encrypt data at rest in etcd, EBS volumes, Kubernetes Secrets, and control plane components.

To encrypt sensitive data in your GKE on AWS clusters, you can use:

Kubernetes Secrets

Kubernetes Secrets resources store sensitive data, such as passwords, OAuth tokens, and SSH keys, in your clusters. Storing sensitive data in Secrets is more secure than storing it in plaintext ConfigMaps or in Pod specifications. Using Secrets gives you control over how sensitive data is used, and reduces the risk of exposing the data to unauthorized users.
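As a minimal sketch, a Secret can be declared in a manifest like the following; the name, keys, and values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical Secret name
type: Opaque
stringData:                  # stringData accepts plain text; Kubernetes stores it base64-encoded
  username: app-user         # placeholder value
  password: replace-me       # placeholder value; never commit real credentials
```

A Pod can then consume the Secret through an environment variable (`valueFrom.secretKeyRef`) or a mounted volume, rather than embedding the value in the Pod specification.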

HashiCorp Vault

GKE on AWS supports storing Secrets in HashiCorp Vault. For more information, see Using HashiCorp Vault on GKE on AWS.
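As one illustration, if you run the Vault Agent Sidecar Injector in your cluster, Pods can request secrets through annotations. This sketch assumes the injector is installed and that the Vault role and secret path shown exist in your Vault configuration; both names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vault-demo
  annotations:
    # Read by the Vault Agent Sidecar Injector, if deployed in the cluster.
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "demo-role"                               # hypothetical Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db"  # hypothetical secret path
spec:
  containers:
  - name: app
    image: nginx   # illustrative image
```

The injector adds a sidecar that authenticates to Vault and writes the secret to a shared volume, so the application never handles Vault credentials directly.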

Control plane security

The control plane components include the management service and the user cluster's Kubernetes API server, scheduler, controllers, and the etcd database. In GKE on AWS, local administrators manage the control plane components.

In GKE on AWS, the control plane components run on AWS. You can protect the GKE on AWS API server by using AWS security groups and network ACLs.

All communication in GKE on AWS is over Transport Layer Security (TLS) channels governed by the following certificate authorities (CAs):

  • The etcd CA secures communication from the API server to the etcd replicas and also traffic between etcd replicas. This CA is self-signed.
  • The user cluster CA secures communication between the API server and all internal Kubernetes API clients (kubelets, controllers, schedulers). This CA is KMS encrypted.
  • The management service CA is AWS KMS encrypted. It is created when you run anthos-gke init and stored in your Terraform workspace. When you use terraform apply to create the management service, the CA key is passed as AWS EC2 user data and decrypted by AWS KMS when the cluster starts.

For the management service, control plane keys are stored on control plane nodes. For user clusters, the keys are stored as Kubernetes Secrets in the management service's control plane.

Cluster authentication in GKE on AWS is handled by certificates and service account bearer tokens. As the administrator, you authenticate to the management service's control plane using the administrative certificate, which you use for initial role binding creation or for emergency purposes.
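An initial role binding typically grants a cluster-wide role to your everyday identity so that you can stop using the administrative certificate for routine work. A sketch, with a placeholder user name, might look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: initial-admin-binding    # hypothetical name
subjects:
- kind: User
  name: admin@example.com        # hypothetical identity from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin            # built-in superuser ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

After this binding exists, reserve the administrative certificate for emergencies, in line with the principle of least privilege described earlier.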

Certificate rotation is handled in the following ways:

  • For the API server, control planes, and nodes, certificates are created or rotated at each upgrade.
  • CAs can be rotated manually.

Node security

GKE on AWS deploys your workloads onto node pools of AWS EC2 instances. The following sections explain how to use the node-level security features in GKE on AWS.

Ubuntu

GKE on AWS uses an optimized version of Ubuntu as the operating system on which to run the Kubernetes control plane and nodes. Ubuntu includes a rich set of modern security features, and GKE on AWS implements several security-enhancing features for clusters, including:

  • Optimized package set.
  • Google Cloud-tailored Linux kernel.
  • Limited user accounts and disabled root login.

Additional security guides are available for Ubuntu.

Node upgrades

You should upgrade your nodes on a regular basis. From time to time, security issues in the container runtime, Kubernetes itself, or the node operating system might require you to upgrade your nodes more urgently. When you upgrade your user cluster, each node's software is upgraded to its latest version. Additionally, upgrading nodes rotates encryption credentials.

Securing your workloads

Kubernetes allows users to quickly provision, scale, and update container-based workloads. This section describes tactics that you can use to limit the side effects of running containers on the cluster and on Google Cloud services.

Limiting Pod container process privileges

Limiting the privileges of containerized processes is important for your cluster's security. You can set security-related options with the Security Context of Pods and containers. These settings let you change security attributes of your processes, such as:

  • User and group running the process.
  • Available Linux capabilities.
  • Privilege escalation.
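A minimal sketch of these settings in a Pod specification follows; the image, user ID, and group ID are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  securityContext:
    runAsUser: 1000          # run all containers as a non-root user
    runAsGroup: 3000         # primary group for the processes
  containers:
  - name: app
    image: nginx             # illustrative image
    securityContext:
      allowPrivilegeEscalation: false   # block setuid binaries from gaining privileges
      capabilities:
        drop: ["ALL"]                   # drop every Linux capability not explicitly needed
```

Container-level settings override the Pod-level `securityContext` for that container, so you can tighten individual containers beyond the Pod default.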

The default GKE on AWS node operating system, Ubuntu, applies the default Docker AppArmor security policies to all containers started by Kubernetes. You can view the profile's template on GitHub. Among other things, the profile denies the following abilities to containers:

  • Writing to files directly in /proc/.
  • Writing to files in /proc/ that are not in a process ID directory (/proc/<pid>/).
  • Writing to files in /proc/sys other than /proc/sys/kernel/shm*.
  • Mounting file systems.
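To confirm that a container keeps the runtime's default AppArmor profile, you can request it explicitly. This sketch uses the per-container annotation key that Kubernetes has historically used for AppArmor (newer Kubernetes versions also expose this through the container `securityContext`); the Pod and container names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # Applies the container runtime's default AppArmor profile
    # to the container named "app".
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
  - name: app
    image: nginx   # illustrative image
```

Replacing `runtime/default` with `localhost/<profile_name>` would instead apply a custom profile loaded on the node, but the profile must already exist there or the Pod is rejected.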

What's next