GKE on AWS architecture

Overview

GKE on AWS is hybrid cloud software that extends Google Kubernetes Engine (GKE) to Amazon Web Services (AWS).

GKE on AWS uses standard AWS resources such as Elastic Compute Cloud (EC2), Elastic Block Store (EBS), and Elastic Load Balancing (ELB). Most AWS resources that GKE on AWS creates have names that start with gke-.

Architecture

GKE on AWS has two components:

  1. The management service, an environment that installs and updates your user clusters. It uses the AWS API to provision resources.
  2. User clusters, where you run your workloads.

This topic describes the purpose and structure of your Anthos management service and user clusters.

[Diagram: Architecture of a GKE on AWS installation, showing the management service and AWSClusters, each containing a control plane and AWSNodePools]

Management service

The management service provisions and manages the components of your GKE on AWS installation. For example, you create user clusters with the management service, which calls the AWS API to provision the underlying resources.

You can create your management service in a dedicated AWS VPC or an existing AWS VPC.

You need a management service in every AWS Virtual Private Cloud (VPC) where you run GKE on AWS. The management service is installed in one AWS Availability Zone. You only need one management service per VPC; a management service can manage multiple user clusters.

The primary component of the management service is the Cluster Operator. The Cluster Operator is a Kubernetes Operator that creates and manages your AWSClusters and AWSNodePools. The Cluster Operator stores configuration in an etcd database with storage persisted on an AWS EBS volume.
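As a concrete illustration, the management service is described declaratively in a configuration file that the anthos-gke tool consumes. The sketch below is a hedged example: the AWSManagementService kind matches the product's naming, but the individual fields and values (region, apiVersion, CIDR block) are illustrative assumptions, not a definitive schema.

```yaml
# Illustrative sketch only: field names and values are assumptions,
# not the authoritative AWSManagementService schema.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSManagementService
metadata:
  name: management
spec:
  region: us-east-1        # AWS region hosting the management service
  dedicatedVPC:
    vpcCIDRBlock: 10.0.0.0/16
```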

Installing and configuring your management service

This section describes the tools you can use to manage your management service.

The anthos-gke tool

You create and manage your clusters with the anthos-gke command-line tool. For more information, see The anthos-gke tool.
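The typical lifecycle runs through a handful of anthos-gke subcommands. The sketch below illustrates the workflow; treat the exact subcommand names and behavior as a sketch to verify against the anthos-gke reference rather than a verbatim transcript.

```shell
# Illustrative workflow; check `anthos-gke --help` for exact syntax.
anthos-gke aws management init             # generate configuration files
anthos-gke aws management apply            # provision the management service in AWS
anthos-gke aws management get-credentials  # fetch a kubeconfig for the management service
```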

Connect

With Connect, you can view and sign in to your GKE on AWS clusters and your GKE clusters on Google Cloud from the same interface in the Google Cloud console. All of your resources appear in a single dashboard, giving you visibility into workloads across multiple Kubernetes clusters.

User clusters

A user cluster includes two components, both of which are Kubernetes custom resources hosted by the management service:

  1. A control plane, defined by an AWSCluster resource.
  2. One or more AWSNodePools.

AWSCluster

An AWSCluster runs in a single VPC.

When you install a management service into a dedicated VPC, GKE on AWS creates control plane replicas in every zone you specify in dedicatedVPC.availabilityZones. When you install a management service into existing infrastructure, GKE on AWS creates an AWSCluster with three control plane replicas across the same availability zones. Each replica belongs to its own AWS Auto Scaling group, which restarts its instance if it is terminated.
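For example, a dedicated-VPC configuration that names three zones yields three control plane replicas, one per zone. In this hedged sketch, only the dedicatedVPC.availabilityZones field comes from this page; the surrounding structure is illustrative.

```yaml
# Sketch: only dedicatedVPC.availabilityZones is documented above;
# the surrounding structure is illustrative.
spec:
  dedicatedVPC:
    availabilityZones:
    - us-east-1a
    - us-east-1b
    - us-east-1c   # one control plane replica is created per listed zone
```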

The management service places the control plane replicas in a private subnet behind an AWS Network Load Balancer (NLB) and communicates with the control plane through the NLB.

To create control planes across multiple AWS availability zones, see High availability user clusters.

Each control plane replica stores its configuration in a local etcd database. These databases are replicated in a stacked high availability topology.

One control plane manages one or more AWSNodePools.

AWSNodePool

AWSNodePools function like GKE node pools on Google Cloud: a node pool is a group of nodes within a cluster that all share the same configuration. Node pools are declared with the AWSNodePool resource and contain one or more nodes. Each node pool belongs to its own AWS Auto Scaling group, which restarts instances if they are terminated.
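A minimal AWSNodePool might look like the following sketch. The resource kind comes from this page, while the field names (clusterName, instanceType, node counts) are illustrative assumptions rather than the exact schema.

```yaml
# Illustrative sketch; field names are assumptions, not the exact schema.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: cluster-0-pool-0
spec:
  clusterName: cluster-0   # the AWSCluster this pool belongs to
  region: us-east-1
  instanceType: m5.large   # every node in the pool shares this configuration
  minNodeCount: 3
  maxNodeCount: 5          # backed by one AWS Auto Scaling group
```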

Troubleshooting

You can troubleshoot your GKE on AWS installation by viewing Kubernetes Events from your AWSCluster and AWSNodePools. For more information, see the Troubleshooting guide.
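Assuming your kubeconfig points at the management service, you can pull these Events with standard kubectl commands like the sketch below; the resource name "cluster-0" is illustrative.

```shell
# Inspect the custom resources and their recent Events
# ("cluster-0" is an illustrative name).
kubectl get awsclusters,awsnodepools
kubectl describe awscluster cluster-0
kubectl get events --sort-by=.lastTimestamp
```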

What's next