This topic describes how to customize the configuration of a GKE on AWS user cluster.
You might want to create a custom user cluster for the following reasons:
- Creating another cluster for a staging or test environment.
- Adding node pools with different machine types.
- Creating a cluster in specific AWS availability zones (AZs).
Before you begin
Before you start using GKE on AWS, make sure you have performed the following tasks:
- Complete the Prerequisites.
- Install a management service.
- If you want to create a cluster without using terraform output example_cluster, have private AWS subnets for your control plane. Each subnet should belong to a different AZ in the same AWS region. Route tables must be configured to allow traffic between private subnets, and each subnet must have access to a NAT gateway.
- Have your AWS Virtual Private Cloud (VPC) ID. A VPC ID looks like vpc-012345678abcde. You can find your VPC ID on the AWS Console.
To connect to your GKE on AWS resources, perform the following steps. The steps differ depending on whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when creating your management service.
Existing VPC
If you have a direct or VPN connection to an existing VPC, omit the line env HTTPS_PROXY=http://localhost:8118 from commands in this topic.
Dedicated VPC
When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.
To connect to your management service, perform the following steps:
Change to the directory with your GKE on AWS configuration. You created this directory when Installing the management service.
cd anthos-aws
To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards to localhost:8118. To open a tunnel to the bastion host, run the following command:
./bastion-tunnel.sh -N
Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process by using Control+C or closing the window.
Open a new terminal and change into your anthos-aws directory.
cd anthos-aws
Check that you're able to connect to the management service with kubectl.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info
The output includes the URL for the management service API server.
Selecting a control plane instance size
GKE on AWS doesn't support resizing control plane instances. Before creating your user cluster, select the instance size of your control planes. Control plane sizes depend on the number of nodes in your cluster. The following table contains recommended control plane instance sizes for various cluster sizes.
Cluster size (nodes) | Control plane instance type
---|---
1 – 10 | m5.large
11 – 100 | m5.xlarge
101 – 200 | m5.2xlarge
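As a sketch, the recommendations in the table above can be expressed as a small shell helper. The function name is hypothetical and the thresholds simply restate the table:

```shell
# Hypothetical helper: map a planned cluster size (in nodes) to the
# recommended control plane instance type from the table above.
control_plane_instance_type() {
  local nodes=$1
  if [ "$nodes" -le 10 ]; then
    echo "m5.large"
  elif [ "$nodes" -le 100 ]; then
    echo "m5.xlarge"
  else
    echo "m5.2xlarge"
  fi
}

control_plane_instance_type 50   # prints m5.xlarge
```

Because control plane instances can't be resized later, size for the node count you expect the cluster to grow to, not its initial size.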
Creating a new cluster with a custom configuration
You can use terraform output example_cluster to create a configuration for one user cluster per management cluster. If you want to create additional clusters, you need to apply a custom configuration.
In this example, you create a cluster manually from the AWSCluster and AWSNodePool CRDs.
Change to the directory with your GKE on AWS configuration. You created this directory when Installing the management service.
cd anthos-aws
From your anthos-aws directory, use anthos-gke to switch context to your management service.
cd anthos-aws
anthos-gke aws management get-credentials
Open a text editor and copy the following AWSCluster definition into a file named custom-cluster.yaml.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: CLUSTER_NAME
spec:
  region: AWS_REGION
  networking:
    vpcID: VPC_ID
    podAddressCIDRBlocks: POD_ADDRESS_CIDR_BLOCKS
    serviceAddressCIDRBlocks: SERVICE_ADDRESS_CIDR_BLOCKS
    serviceLoadBalancerSubnetIDs: SERVICE_LOAD_BALANCER_SUBNETS
  controlPlane:
    version: CLUSTER_VERSION # Latest version is 1.25.5-gke.2100
    instanceType: AWS_INSTANCE_TYPE
    keyName: SSH_KEY_NAME
    subnetIDs:
    - CONTROL_PLANE_SUBNET_IDS
    securityGroupIDs:
    - CONTROL_PLANE_SECURITY_GROUPS
    iamInstanceProfile: CONTROL_PLANE_IAM_ROLE
    rootVolume:
      sizeGiB: ROOT_VOLUME_SIZE
      volumeType: ROOT_VOLUME_TYPE # Optional
      iops: ROOT_VOLUME_IOPS # Optional
      kmsKeyARN: ROOT_VOLUME_KEY # Optional
    etcd:
      mainVolume:
        sizeGiB: ETCD_VOLUME_SIZE
        volumeType: ETCD_VOLUME_TYPE # Optional
        iops: ETCD_VOLUME_IOPS # Optional
        kmsKeyARN: ETCD_VOLUME_KEY # Optional
    databaseEncryption:
      kmsKeyARN: ARN_OF_KMS_KEY
    hub: # Optional
      membershipName: ANTHOS_CONNECT_NAME
    cloudOperations: # Optional
      projectID: YOUR_PROJECT
      location: GCP_REGION
      enableLogging: ENABLE_LOGGING
      enableMonitoring: ENABLE_MONITORING
    workloadIdentity: # Optional
      oidcDiscoveryGCSBucket: WORKLOAD_IDENTITY_BUCKET
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- AWS_REGION: the AWS region where your cluster runs.
- VPC_ID: the ID of the VPC where your cluster runs.
- POD_ADDRESS_CIDR_BLOCKS: the range of IPv4 addresses used by the cluster's pods. Currently only a single range is supported. The range must not overlap with any subnets reachable from your network. It is safe to use the same range across multiple different AWSCluster objects. For example, 10.2.0.0/16.
- SERVICE_ADDRESS_CIDR_BLOCKS: the range of IPv4 addresses used by the cluster's services. Currently only a single range is supported. The range must not overlap with any subnets reachable from your network. It is safe to use the same range across multiple different AWSCluster objects. For example, 10.1.0.0/16.
- SERVICE_LOAD_BALANCER_SUBNETS: the subnet IDs where GKE on AWS can create public or private load balancers.
- CLUSTER_VERSION: a Kubernetes version supported by GKE on AWS. The most recent version is 1.25.5-gke.2100.
- AWS_INSTANCE_TYPE: a supported EC2 instance type.
- SSH_KEY_NAME: the name of an AWS EC2 key pair.
- CONTROL_PLANE_SUBNET_IDS: the subnet IDs in the AZs where your control plane instances run.
- CONTROL_PLANE_SECURITY_GROUPS: a securityGroupID created during the management service installation. You can customize this by adding any securityGroupIDs required to connect to the control plane.
- CONTROL_PLANE_IAM_ROLE: the name of the AWS EC2 instance profile assigned to control plane replicas.
- ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of your control plane root volumes.
- ROOT_VOLUME_TYPE: the EBS volume type. For example, gp3.
- ROOT_VOLUME_IOPS: the amount of provisioned IO operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).
- ROOT_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane instance root volumes.
- ETCD_VOLUME_SIZE: the size of volumes used by etcd.
- ETCD_VOLUME_TYPE: the EBS volume type. For example, gp3.
- ETCD_VOLUME_IOPS: the amount of provisioned IO operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).
- ETCD_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane etcd data volumes.
- ARN_OF_KMS_KEY: the AWS KMS key used to encrypt cluster Secrets.
- ANTHOS_CONNECT_NAME: the Connect membership name used to register your cluster. The membership name must be unique. For example, projects/YOUR_PROJECT/locations/global/memberships/CLUSTER_NAME, where YOUR_PROJECT is your Google Cloud project and CLUSTER_NAME is a unique name in your project. This field is optional.
- YOUR_PROJECT: your project ID.
- GCP_REGION: the Google Cloud region where you want to store logs. Choose a region that is near the AWS region. For more information, see Global Locations - Regions & Zones. For example, us-central1.
- ENABLE_LOGGING: true or false, whether Cloud Logging is enabled on control plane nodes.
- ENABLE_MONITORING: true or false, whether Cloud Monitoring is enabled on control plane nodes.
- WORKLOAD_IDENTITY_BUCKET: the Cloud Storage bucket name containing your workload identity discovery information. This field is optional.
Create one or more AWSNodePools for your cluster. Open a text editor and copy the following AWSNodePool definition into a file named custom-nodepools.yaml.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: NODE_POOL_NAME
spec:
  clusterName: AWSCLUSTER_NAME
  version: CLUSTER_VERSION # latest version is 1.25.5-gke.2100
  region: AWS_REGION
  subnetID: AWS_SUBNET_ID
  minNodeCount: MINIMUM_NODE_COUNT
  maxNodeCount: MAXIMUM_NODE_COUNT
  maxPodsPerNode: MAXIMUM_PODS_PER_NODE_COUNT
  instanceType: AWS_NODE_TYPE
  keyName: KMS_KEY_PAIR_NAME
  iamInstanceProfile: NODE_IAM_PROFILE
  proxySecretName: PROXY_SECRET_NAME
  rootVolume:
    sizeGiB: ROOT_VOLUME_SIZE
    volumeType: VOLUME_TYPE # Optional
    iops: IOPS # Optional
    kmsKeyARN: NODE_VOLUME_KEY # Optional
Replace the following:
- NODE_POOL_NAME: a unique name for your AWSNodePool.
- AWSCLUSTER_NAME: your AWSCluster's name. For example, staging-cluster.
- CLUSTER_VERSION: a supported GKE on AWS Kubernetes version.
- AWS_REGION: the same AWS region as your AWSCluster.
- AWS_SUBNET_ID: an AWS subnet in the same region as your AWSCluster.
- MINIMUM_NODE_COUNT: the minimum number of nodes in the node pool. See Scaling user clusters for more information.
- MAXIMUM_NODE_COUNT: the maximum number of nodes in the node pool.
- MAXIMUM_PODS_PER_NODE_COUNT: the maximum number of pods that GKE on AWS can allocate to a node.
- AWS_NODE_TYPE: an AWS EC2 instance type.
- KMS_KEY_PAIR_NAME: the AWS EC2 key pair assigned to each node pool worker.
- NODE_IAM_PROFILE: the name of the AWS EC2 instance profile assigned to nodes in the pool.
- ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of your nodes' root volumes.
- VOLUME_TYPE: the node's AWS EBS volume type. For example, gp3.
- IOPS: the amount of provisioned IO operations per second (IOPS) for volumes. Only valid when volumeType is gp3.
- NODE_VOLUME_KEY: the ARN of the AWS KMS key used to encrypt the volume. For more information, see Using a customer managed CMK to encrypt volumes.
Apply the manifests to your management service.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f custom-cluster.yaml
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f custom-nodepools.yaml
Create a kubeconfig
While your user cluster starts, you can create a kubeconfig
context for your
new user cluster. You use the context to authenticate to a user or management
cluster.
Use anthos-gke aws clusters get-credentials to generate a kubeconfig for your user cluster in ~/.kube/config.
env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials CLUSTER_NAME
Replace CLUSTER_NAME with your cluster's name. For example, cluster-0.
Use kubectl to authenticate to your new user cluster.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info
If your cluster is ready, the output includes the URLs for Kubernetes components within your cluster.
Viewing your cluster's status
The management service provisions AWS resources when you apply an AWSCluster or AWSNodePool.
From your anthos-aws directory, use anthos-gke to switch context to your management service.
cd anthos-aws
anthos-gke aws management get-credentials
To list your clusters, use kubectl get AWSClusters.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get AWSClusters
The output includes each cluster's name, state, age, version, and endpoint.
For example, the following output includes only one AWSCluster named cluster-0:
NAME        STATE          AGE     VERSION           ENDPOINT
cluster-0   Provisioning   2m41s   1.25.5-gke.2100   gke-xyz.elb.us-east-1.amazonaws.com
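If you want to script against this output, one hedged approach (the helper name is made up for this sketch) is to pull the STATE column out of the table with awk:

```shell
# Hypothetical helper: read `kubectl get AWSClusters` output on stdin
# and print the STATE column for the named cluster.
cluster_state() {
  awk -v name="$1" '$1 == name { print $2 }'
}

# Usage:
#   env HTTPS_PROXY=http://localhost:8118 \
#     kubectl get AWSClusters | cluster_state cluster-0
```

You could, for example, loop on this helper until the state is no longer Provisioning before creating a kubeconfig.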
View your cluster's events
To see recent Kubernetes Events from your user cluster, use kubectl get events.
From your anthos-aws directory, use anthos-gke to switch context to your management service.
cd anthos-aws
anthos-gke aws management get-credentials
Run kubectl get events.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get events
The output includes information, warnings, and errors from your management service.
Deleting a user cluster
To delete a user cluster, perform the following steps:
From your anthos-aws directory, use anthos-gke to switch context to your management service.
cd anthos-aws
anthos-gke aws management get-credentials
Use kubectl delete to delete the manifest containing your user clusters.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f CLUSTER_FILE
Replace CLUSTER_FILE with the name of the manifest containing your AWSCluster and AWSNodePool objects. For example, cluster-0.yaml.
Deleting all user clusters
To delete all of your user clusters, perform the following steps:
From your anthos-aws directory, use anthos-gke to switch context to your management service.
cd anthos-aws
anthos-gke aws management get-credentials
Use kubectl delete to delete your AWSNodePools and AWSClusters from your management service.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete AWSNodePool --all
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete AWSCluster --all
For more information, see Uninstalling GKE on AWS.
What's next
Configure your identity provider with the GKE Identity Service.
Launch your first workload on GKE on AWS.
Create an externally facing deployment using a load balancer or Ingress.
Read the specifications for the AWSCluster and AWSNodePool Custom Resource Definitions.