GKE on AWS is a managed service that helps you provision, operate, and scale Kubernetes clusters in your AWS account.
This page is for Admins, architects, and Operators who want to define IT solutions and system architecture in accordance with company strategy and requirements. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Resource management
GKE on AWS uses AWS APIs to provision the resources needed by your cluster, including virtual machines, managed disks, Auto Scaling groups, security groups, and load balancers.
You can create, describe, and delete clusters with the Google Cloud CLI or GKE Multi-Cloud API.
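As a sketch of that lifecycle, the following gcloud CLI commands create, inspect, and delete a cluster. The cluster name, Google Cloud location, and AWS region are placeholders, and the create command needs additional environment-specific flags (VPC, subnets, and IAM settings) that are omitted here; see the gcloud reference for the full set.

```shell
# Create a GKE on AWS cluster (illustrative; additional required flags
# such as VPC, subnet, and IAM role settings are omitted).
gcloud container aws clusters create my-cluster \
    --location=us-west1 \
    --aws-region=us-east-1

# Describe the cluster's current state and configuration.
gcloud container aws clusters describe my-cluster \
    --location=us-west1

# Delete the cluster and the AWS resources it provisioned.
gcloud container aws clusters delete my-cluster \
    --location=us-west1
```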
Authenticating to AWS
When you set up GKE on AWS, you create an AWS IAM role in your AWS account with the required permissions. You also create a service account in your Google Cloud project to establish a trust relationship for AWS IAM identity federation. For more information, see Authentication overview.
Resources on Google Cloud
GKE on AWS uses a Google Cloud project to store cluster configuration information on Google Cloud.
Fleets and Connect
GKE on AWS registers each cluster with a Fleet when it is created. Connect enables access to cluster and workload management features from Google Cloud. A cluster's Fleet membership name is the same as its cluster name.
You can enable features such as Config Management and Cloud Service Mesh within your Fleet.
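To confirm that a cluster has been registered, you can list the Fleet memberships in your project; PROJECT_ID is a placeholder, and the GKE on AWS cluster appears under its cluster name.

```shell
# List Fleet memberships in the project; registered GKE on AWS clusters
# are listed under their cluster names.
gcloud container fleet memberships list --project=PROJECT_ID
```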
Cluster architecture
GKE on AWS provisions clusters using private subnets inside your AWS Virtual Private Cloud. Each cluster consists of the following components:
- Control plane: The Kubernetes control plane uses a high-availability architecture with three replicas. Each replica runs all Kubernetes components, including kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. Each etcd instance stores data in an EBS volume and uses a network interface to communicate with other etcd instances. A standard load balancer balances traffic to the Kubernetes API endpoint, kube-apiserver.
- Node pools: A node pool is a group of Kubernetes worker nodes with the same configuration, including instance type, disk configuration, and instance profile. All nodes in a node pool run on the same subnet. For high availability, you can provision multiple node pools across different subnets in the same AWS region.
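Node pools are also managed through the gcloud CLI. The following sketch adds an autoscaling node pool to an existing cluster; all values are placeholders, and environment-specific flags such as the subnet and IAM instance profile are omitted.

```shell
# Add a node pool to an existing cluster (illustrative; subnet and
# IAM instance profile flags required by your environment are omitted).
gcloud container aws node-pools create my-node-pool \
    --cluster=my-cluster \
    --location=us-west1 \
    --min-nodes=1 \
    --max-nodes=3
```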
The following diagram shows a sample VPC, node pool, and control plane structure, including the network address translation (NAT) gateway and load balancer. You create this infrastructure by following the instructions in Create an AWS VPC and the Quickstart.
What's next
- Complete the Prerequisites.