Security

This page describes the security features included in each layer of the GKE on VMware infrastructure and how you can configure them to suit your needs.

Overview

GKE on VMware offers several features to help secure your workloads, including the contents of your container image, the container runtime, the cluster network, and access to the cluster API server.

It's best to take a layered approach to protecting your clusters and workloads. You can apply the principle of least privilege to the level of access you provide to your users and workloads. You might need to make tradeoffs to strike the right balance between flexibility and security.

Authentication and Authorization

You authenticate to GKE on VMware clusters using OpenID Connect (OIDC) or a Kubernetes Service Account token via the Cloud console.

To configure more granular access to Kubernetes resources at the cluster level or within Kubernetes namespaces, you use Kubernetes Role-Based Access Control (RBAC). RBAC allows you to create detailed policies that define which operations and resources users and service accounts are allowed to access. With RBAC, you can control access for any validated identity that your identity provider supplies.
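For example, the following is a minimal sketch of an RBAC policy; the namespace, group name, and resource list are hypothetical values that you would replace with your own:

  # Role granting read-only access to Pods in the hypothetical "dev" namespace.
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: dev
    name: pod-reader
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  ---
  # RoleBinding that grants the Role to a hypothetical "dev-team" group
  # supplied by your identity provider.
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    namespace: dev
    name: pod-reader-binding
  subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io

After you apply these manifests with kubectl, the API server authorizes requests from members of the group against the rules in the Role.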

To further simplify and streamline your authentication and authorization strategy for Kubernetes Engine, GKE on VMware disables legacy Attribute-Based Access Control (ABAC).

Control plane security

The control plane components include the Kubernetes API server, scheduler, controllers, and the etcd database where your Kubernetes configuration is persisted. In Google Kubernetes Engine, Google manages and maintains the Kubernetes control plane components; in GKE on VMware, local administrators manage them.

In GKE on VMware, the control plane components run within your corporate network. You can protect GKE on VMware's API server by using your existing corporate network policies and firewalls. You can also assign a private IP address to the API server and limit access to the private address.

All communication in GKE on VMware is over TLS channels, governed by three certificate authorities (CAs): etcd, cluster, and org.

  • The etcd CA secures communication from the API server to the etcd replicas and also traffic between etcd replicas. This CA is self-signed.
  • The cluster CA secures communication between the API server and all internal Kubernetes API clients (kubelets, controllers, schedulers). This CA is self-signed.
  • The org CA is an external CA used to serve the Kubernetes API to external users. You manage this CA.

For admin control planes, keys are stored on the control plane node. For user clusters, the keys are stored as Kubernetes Secrets in the admin control plane. The API server is configured with a user-provided certificate signed by the org CA. The API server uses Server Name Indication (SNI) to determine whether to use the key signed by the cluster CA or the key signed by the org CA.

Cluster authentication in GKE on VMware is handled by certificates and service account bearer tokens. As the administrator, you authenticate to the control plane using OIDC, or with the administrative certificate (which you use for initial role binding creation, or for emergency purposes).

Certificate rotation is handled in the following ways:

  • For the API server, control planes, and nodes, certificates are created or rotated at each upgrade.
  • CAs can be rotated rarely or on demand.

Node security

GKE on VMware deploys your workloads on VMware virtual machines, which are attached to your clusters as nodes. The following sections show you how to use the node-level security features available to you in GKE on VMware.

Ubuntu

GKE on VMware uses an optimized version of Ubuntu as the operating system on which to run the Kubernetes control plane and nodes. Ubuntu includes a rich set of modern security features, and GKE on VMware implements several security-enhancing features for clusters, including:

  • Images are preconfigured to meet PCI DSS, NIST Baseline High, and DoD Cloud Computing SRG Impact Level 2 standards.
  • Optimized package set.
  • Google Cloud-tailored Linux kernel.
  • Optional automatic OS security updates.
  • Limited user accounts and disabled root login.

Additional security guides are available for Ubuntu.

Node upgrades

You should upgrade your nodes on a regular basis. From time to time, security issues in the container runtime, Kubernetes itself, or the node operating system might require you to upgrade your nodes more urgently. When you upgrade your cluster, each node's software is upgraded to the latest version.

Securing your workloads

Kubernetes allows users to quickly provision, scale, and update container-based workloads. This section describes tactics that administrators and users can use to limit the ability of the running containers to affect other containers in the cluster, the hosts on which they run, and the GCP services enabled in their project.

Limiting Pod container process privileges

Limiting the privileges of containerized processes is important for the overall security of your cluster. Kubernetes lets you set security-related options through the security context on both Pods and containers. These settings let you control security aspects of your processes, such as:

  • User and group to run as.
  • Available Linux capabilities.
  • Privilege escalation.
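
As an illustration, the following Pod manifest is a minimal sketch that applies these settings through the Pod-level and container-level security context; the image name and the user and group IDs are placeholder values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-app
  spec:
    securityContext:
      runAsUser: 1000        # run as a non-root user
      runAsGroup: 3000       # primary group for container processes
      fsGroup: 2000          # group applied to mounted volumes
    containers:
    - name: app
      image: example.com/app:1.0          # placeholder image
      securityContext:
        allowPrivilegeEscalation: false   # block setuid-style privilege escalation
        capabilities:
          drop: ["ALL"]                   # drop all Linux capabilities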

The default GKE on VMware node operating system, Ubuntu, applies the default Docker AppArmor security policies to all containers started by Kubernetes. You can view the profile's template on GitHub. Among other things, the profile denies the following abilities to containers:

  • Writing to files directly in /proc/.
  • Writing to files that are not in a process ID directory (/proc/<pid>).
  • Writing to files in /proc/sys other than /proc/sys/kernel/shm*.
  • Mounting filesystems.
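
If a specific workload needs a different profile, upstream Kubernetes lets you select one per container through a Pod annotation. The following sketch assumes a custom profile named k8s-apparmor-example has already been loaded on the node; the profile name, Pod, and image are hypothetical:

  apiVersion: v1
  kind: Pod
  metadata:
    name: apparmor-demo
    annotations:
      # Confine the "app" container with a profile loaded on the node.
      # Use "runtime/default" to keep the default Docker profile instead.
      container.apparmor.security.beta.kubernetes.io/app: localhost/k8s-apparmor-example
  spec:
    containers:
    - name: app
      image: example.com/app:1.0   # placeholder image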

Audit logging

Kubernetes audit logging provides a way for administrators to retain, query, process, and alert on events that occur in your GKE on VMware environments. Administrators can use the logged information for forensic analysis, real-time alerting, or to catalog how a fleet of Kubernetes Engine clusters is being used and by whom.

By default, GKE on VMware logs admin activity. You can optionally also log data access events, depending on the types of operations you are interested in inspecting.
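
For reference, upstream Kubernetes expresses this distinction with an audit policy. The sketch below is not the GKE on VMware configuration file; it only illustrates, in the upstream audit.k8s.io format, how read ("data access") and write ("admin activity") operations can be logged at different levels of detail:

  apiVersion: audit.k8s.io/v1
  kind: Policy
  rules:
  # Record only metadata for Secrets so sensitive values never enter the log.
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets"]
  # Record read (data access) operations at the Metadata level.
  - level: Metadata
    verbs: ["get", "list", "watch"]
  # Record everything else (writes, or admin activity) with request bodies.
  - level: Request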

The Connect agent only talks to the local API server running on-premises, and each cluster should have its own set of audit logs. All actions that users perform from the UI through Connect are logged by that cluster.

Encryption

If your GKE on VMware clusters and workloads securely connect to Google Cloud services through Cloud VPN, you can use Cloud Key Management Service (Cloud KMS) for key management. Cloud KMS is a cloud-hosted key management service that lets you manage cryptographic keys for your services. You can generate, use, rotate, and destroy AES256, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 cryptographic keys. Cloud KMS is integrated with Identity and Access Management (IAM) and Cloud Audit Logging so that you can manage permissions on individual keys and monitor how they are used. Use Cloud KMS to protect Secrets and other sensitive data that you need to store. Alternatively, you can use one of the following:

  • Kubernetes Secrets
  • HashiCorp Vault
  • Thales Luna network HSM
  • Google Cloud Hardware Security Module (HSM)

Kubernetes Secrets

Kubernetes Secrets resources store sensitive data, such as passwords, OAuth tokens, and SSH keys, in your clusters. Storing sensitive data in Secrets is more secure than storing it in plaintext ConfigMaps or in Pod specifications. Using Secrets gives you control over how sensitive data is used, and reduces the risk of exposing the data to unauthorized users.
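
For example, the following is a minimal sketch of a Secret and a Pod that consumes it as an environment variable; the names and credential values are placeholders:

  apiVersion: v1
  kind: Secret
  metadata:
    name: db-credentials          # placeholder name
  type: Opaque
  stringData:                     # stringData accepts unencoded values
    username: app-user            # placeholder value
    password: example-password    # placeholder value; never commit real secrets
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: db-client
  spec:
    containers:
    - name: client
      image: example.com/client:1.0   # placeholder image
      env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password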

HashiCorp Vault

Hardware Security Module