
Creating a custom user cluster

This topic describes how to customize the configuration of an Anthos clusters on AWS (GKE on AWS) user cluster.

You might want to create a custom user cluster for the following reasons:

  • Creating another cluster for a staging or test environment.
  • Adding node pools with different machine types.
  • Creating a cluster in specific AWS availability zones (AZs).

Before you begin

Before you start using Anthos clusters on AWS, make sure you have performed the following tasks:

  • Install a management service.

  • If you want to create a cluster without using terraform output example_cluster, have private AWS subnets for your control plane. Each subnet should belong to a different AZ in the same AWS region. Route tables must be configured to allow traffic between private subnets, and each subnet must have access to a NAT gateway.

  • Have your AWS Virtual Private Cloud (VPC) ID. A VPC ID looks like vpc-012345678abcde. You can find your VPC ID on the AWS Console.

To connect to your Anthos clusters on AWS resources, perform the following steps. Choose the steps that apply depending on whether you have an existing AWS VPC (or a direct connection to your VPC) or created a dedicated VPC when creating your management service.

Existing VPC

If you have a direct or VPN connection to an existing VPC, omit the line env HTTP_PROXY=http://localhost:8118 from commands in this topic.

Dedicated VPC

When you create a management service in a dedicated VPC, Anthos clusters on AWS includes a bastion host in a public subnet.

To connect to your management service, perform the following steps:

  1. Change to the directory with your Anthos clusters on AWS configuration. You created this directory when Installing the management service.

    cd anthos-aws

  2. To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards to localhost:8118.

    To open a tunnel to the bastion host, run the following command:

    ./bastion-tunnel.sh -N
    

    Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process by pressing Control+C or by closing the window.

  3. Open a new terminal and change into your anthos-aws directory.

    cd anthos-aws
  4. Check that you're able to connect to the cluster with kubectl.

    env HTTP_PROXY=http://localhost:8118 \
    kubectl cluster-info
    

    The output includes the URL for the management service API server.

Selecting a control plane instance size

Anthos clusters on AWS doesn't support resizing control plane instances. Before creating your user cluster, select the instance size of your control planes. Control plane sizes depend on the number of nodes in your cluster. The following table contains recommended control plane instance sizes for various cluster sizes.

Cluster size (nodes)   Control plane instance type
1 – 10                 m5.large
11 – 100               m5.xlarge
101 – 200              m5.2xlarge
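If you select instance sizes in provisioning scripts, the table above reduces to a simple lookup. The following is a minimal Python sketch; the function name and the hard error outside the documented 1–200 node range are illustrative choices, not part of the product:

```python
def recommended_instance_type(node_count: int) -> str:
    """Return the recommended control plane instance type for a
    cluster of the given size, mirroring the sizing table above."""
    if not 1 <= node_count <= 200:
        raise ValueError("sizing table covers clusters of 1-200 nodes")
    if node_count <= 10:
        return "m5.large"
    if node_count <= 100:
        return "m5.xlarge"
    return "m5.2xlarge"
```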

Creating a new cluster with a custom configuration

You can use terraform output example_cluster to create the configuration for one user cluster per management service. To create additional clusters, you must apply a custom configuration.

In this example, you create a cluster manually from AWSCluster and AWSNodePool CRDs.

  1. Change to the directory with your Anthos clusters on AWS configuration. You created this directory when Installing the management service.

    cd anthos-aws

  2. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  3. Open a text editor and copy the following AWSCluster definition into a file named custom-cluster.yaml.

    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSCluster
    metadata:
      name: CLUSTER_NAME
    spec:
      region: AWS_REGION
      networking:
        vpcID: VPC_ID
        podAddressCIDRBlocks: POD_ADDRESS_CIDR_BLOCKS
        serviceAddressCIDRBlocks: SERVICE_ADDRESS_CIDR_BLOCKS
        serviceLoadBalancerSubnetIDs: SERVICE_LOAD_BALANCER_SUBNETS
      controlPlane:
        version:  CLUSTER_VERSION # Latest version is 1.19.8-gke.1000
        instanceType: AWS_INSTANCE_TYPE
        keyName: SSH_KEY_NAME
        subnetIDs:
        - CONTROL_PLANE_SUBNET_IDS
        securityGroupIDs:
        - CONTROL_PLANE_SECURITY_GROUPS
        iamInstanceProfile: CONTROL_PLANE_IAM_PROFILE
        rootVolume:
          sizeGiB: ROOT_VOLUME_SIZE
          volumeType: ROOT_VOLUME_TYPE # Optional
          iops: ROOT_VOLUME_IOPS # Optional
          kmsKeyARN: ROOT_VOLUME_KEY # Optional
        etcd:
          mainVolume:
            sizeGiB: ETCD_VOLUME_SIZE
            volumeType: ETCD_VOLUME_TYPE # Optional
            iops: ETCD_VOLUME_IOPS # Optional
            kmsKeyARN: ETCD_VOLUME_KEY # Optional
        databaseEncryption:
          kmsKeyARN: ARN_OF_KMS_KEY
        hub: # Optional
          membershipName: ANTHOS_CONNECT_NAME
        workloadIdentity: # Optional
          oidcDiscoveryGCSBucket: WORKLOAD_IDENTITY_BUCKET
    

    Replace the following:

    • CLUSTER_NAME: the name of your cluster.
    • AWS_REGION: the AWS region where your cluster runs.

    • VPC_ID: the ID of the VPC where your cluster runs.

    • POD_ADDRESS_CIDR_BLOCKS: the CIDR range of IPv4 addresses used by the cluster's Pods. The range must be within your VPC CIDR address range, but not part of a subnet. For example, 10.2.0.0/16.

    • SERVICE_ADDRESS_CIDR_BLOCKS: the CIDR range of IPv4 addresses used by the cluster's Services. The range must be within your VPC CIDR address range, but not part of a subnet. For example, 10.1.0.0/16.

    • SERVICE_LOAD_BALANCER_SUBNETS: the subnet IDs where Anthos clusters on AWS can create public or private load balancers.

    • CLUSTER_VERSION: a Kubernetes version supported by Anthos clusters on AWS. The most recent version is 1.19.8-gke.1000.

    • AWS_INSTANCE_TYPE: a supported EC2 instance type.

    • SSH_KEY_NAME: an AWS EC2 key pair.

    • CONTROL_PLANE_SUBNET_IDS: the subnet IDs in the AZs where your control plane instances run.

    • CONTROL_PLANE_SECURITY_GROUPS: a securityGroupID created during the management service installation. You can customize this by adding any securityGroupIDs required to connect to the control plane.

    • CONTROL_PLANE_IAM_PROFILE: name of the AWS EC2 instance profile assigned to control plane replicas.

    • ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of your control plane root volumes.

    • ROOT_VOLUME_TYPE: the EBS volume type. For example, gp3.

    • ROOT_VOLUME_IOPS: the number of provisioned I/O operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).

    • ROOT_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane instance root volumes.

    • ETCD_VOLUME_SIZE: the size of volumes used by etcd.

    • ETCD_VOLUME_TYPE: the EBS volume type. For example, gp3.

    • ETCD_VOLUME_IOPS: the number of provisioned I/O operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).

    • ETCD_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane etcd data volumes.

    • ARN_OF_KMS_KEY: the AWS KMS key used to encrypt cluster Secrets.

    • ANTHOS_CONNECT_NAME: the Connect membership name used to register your cluster. The membership name must be unique. For example, projects/YOUR_PROJECT/locations/global/memberships/CLUSTER_NAME, where YOUR_PROJECT is your Google Cloud project and CLUSTER_NAME is a unique name in your project. This field is optional.

    • WORKLOAD_IDENTITY_BUCKET: the Cloud Storage bucket name containing your workload identity discovery information. This field is optional.
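The Pod and Service CIDR constraints above (each range must sit inside the VPC CIDR but must not overlap any subnet) can be checked before you apply the manifest. The following is a sketch using Python's standard ipaddress module; the function name and the example VPC CIDR in the usage note are illustrative assumptions:

```python
import ipaddress

def check_cluster_cidrs(vpc_cidr, subnet_cidrs, pod_cidr, service_cidr):
    """Check that the Pod and Service CIDR blocks fall inside the VPC
    CIDR but do not overlap any existing subnet, as described above.
    Returns a list of human-readable problem descriptions."""
    vpc = ipaddress.ip_network(vpc_cidr)
    problems = []
    for name, block in (("pod", pod_cidr), ("service", service_cidr)):
        net = ipaddress.ip_network(block)
        if not net.subnet_of(vpc):
            problems.append(f"{name} CIDR {block} is outside VPC {vpc_cidr}")
        for sub in subnet_cidrs:
            if net.overlaps(ipaddress.ip_network(sub)):
                problems.append(f"{name} CIDR {block} overlaps subnet {sub}")
    return problems
```

For example, with a hypothetical 10.0.0.0/8 VPC and the example ranges from this topic, check_cluster_cidrs("10.0.0.0/8", ["10.0.1.0/24"], "10.2.0.0/16", "10.1.0.0/16") returns an empty list.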

  4. Create one or more AWSNodePools for your cluster. Open a text editor and copy the following AWSNodePool definition into a file named custom-nodepools.yaml.

    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSNodePool
    metadata:
      name: NODE_POOL_NAME
    spec:
      clusterName: AWSCLUSTER_NAME
      version:  CLUSTER_VERSION # latest version is 1.19.8-gke.1000
      region: AWS_REGION
      subnetID: AWS_SUBNET_ID
      minNodeCount: MINIMUM_NODE_COUNT
      maxNodeCount: MAXIMUM_NODE_COUNT
      maxPodsPerNode: MAXIMUM_PODS_PER_NODE_COUNT
      instanceType: AWS_NODE_TYPE
      keyName: KMS_KEY_PAIR_NAME
      iamInstanceProfile: NODE_IAM_PROFILE
      rootVolume:
        sizeGiB: ROOT_VOLUME_SIZE
        volumeType: VOLUME_TYPE # Optional
        iops: IOPS # Optional
        kmsKeyARN: NODE_VOLUME_KEY # Optional 
    

    Replace the following:

    • NODE_POOL_NAME: a unique name for your AWSNodePool.
    • AWSCLUSTER_NAME: your AWSCluster's name. For example, staging-cluster.
    • CLUSTER_VERSION: a supported Anthos clusters on AWS Kubernetes version.
    • AWS_REGION: the same AWS region as your AWSCluster.
    • AWS_SUBNET_ID: an AWS subnet in the same region as your AWSCluster.
    • MINIMUM_NODE_COUNT: the minimum number of nodes in the node pool. See Scaling user clusters for more information.
    • MAXIMUM_NODE_COUNT: the maximum number of nodes in the node pool.
    • MAXIMUM_PODS_PER_NODE_COUNT: the maximum number of pods that Anthos clusters on AWS can allocate to a node.
    • AWS_NODE_TYPE: an AWS EC2 instance type.
    • KMS_KEY_PAIR_NAME: the AWS EC2 key pair assigned to each node pool worker.
    • NODE_IAM_PROFILE: the name of the AWS EC2 instance profile assigned to nodes in the pool.
    • ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of each node's root volume.
    • VOLUME_TYPE: the node's AWS EBS volume type. For example, gp3.
    • IOPS: the number of provisioned I/O operations per second (IOPS) for volumes. Only valid when volumeType is gp3.
    • NODE_VOLUME_KEY: the ARN of the AWS KMS key used to encrypt the volume. For more information, see Using a customer managed CMK to encrypt volumes.
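Several of the node pool fields above have simple consistency requirements: minNodeCount must not exceed maxNodeCount, and clusterName must reference an existing AWSCluster. The following is a small pre-flight sketch; the function name and the 110-Pod upper bound are illustrative assumptions, not official product limits:

```python
def check_node_pool(pool: dict) -> list:
    """Run basic sanity checks on the AWSNodePool spec fields
    described above. The specific bounds are illustrative."""
    problems = []
    spec = pool.get("spec", {})
    if spec.get("minNodeCount", 0) > spec.get("maxNodeCount", 0):
        problems.append("minNodeCount exceeds maxNodeCount")
    if not 1 <= spec.get("maxPodsPerNode", 110) <= 110:
        problems.append("maxPodsPerNode should be between 1 and 110")
    if not spec.get("clusterName"):
        problems.append("clusterName must reference an existing AWSCluster")
    return problems
```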
  5. Apply the manifests to your management service.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl apply -f custom-cluster.yaml
    env HTTP_PROXY=http://localhost:8118 \
      kubectl apply -f custom-nodepools.yaml
    
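If you create many clusters, filling the ALL_CAPS placeholders by hand becomes error-prone. A templating sketch like the following can render a manifest before you apply it with kubectl; the truncated manifest, the placeholder names, and the example values are illustrative:

```python
from string import Template

# Fill the ALL_CAPS placeholders in a manifest before applying it.
# The template below is a truncated fragment of the AWSCluster
# example above; extend it with the remaining fields as needed.
manifest_template = Template("""\
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: $CLUSTER_NAME
spec:
  region: $AWS_REGION
  networking:
    vpcID: $VPC_ID
""")

rendered = manifest_template.substitute(
    CLUSTER_NAME="staging-cluster",
    AWS_REGION="us-east-1",
    VPC_ID="vpc-012345678abcde",
)
print(rendered)
```

You could then write rendered to custom-cluster.yaml and apply it as in the step above.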

Create a kubeconfig

While your user cluster starts, you can create a kubeconfig context for your new user cluster. You use the context to authenticate to a user or management cluster.

  1. Use anthos-gke aws clusters get-credentials to generate a kubeconfig for your user cluster in ~/.kube/config.

    env HTTP_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME
    

    Replace CLUSTER_NAME with your cluster's name. For example, cluster-0.

  2. Use kubectl to authenticate to your new user cluster.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl cluster-info
    

    If your cluster is ready, the output includes the URLs for Kubernetes components within your cluster.

Viewing your cluster's status

The management service provisions AWS resources when you apply an AWSCluster or AWSNodePool.

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. To list your clusters, use kubectl get AWSClusters.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl get AWSClusters
    

The output includes each cluster's name, state, age, version, and endpoint.

For example, the following output includes only one AWSCluster named cluster-0:

NAME        STATE          AGE     VERSION         ENDPOINT
cluster-0   Provisioning   2m41s   1.19.8-gke.1000   gke-xyz.elb.us-east-1.amazonaws.com
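If you poll cluster state from a script, the tabular kubectl output shown above is straightforward to parse. The following is a minimal sketch; the function name is an illustrative choice, and production scripts would more robustly use kubectl's -o json output instead:

```python
def parse_awsclusters(output: str) -> dict:
    """Parse the tabular output of `kubectl get AWSClusters` (as shown
    above) into a mapping of cluster name -> state."""
    lines = output.strip().splitlines()
    state_col = lines[0].split().index("STATE")
    return {row.split()[0]: row.split()[state_col] for row in lines[1:]}

sample = """\
NAME        STATE          AGE     VERSION           ENDPOINT
cluster-0   Provisioning   2m41s   1.19.8-gke.1000   gke-xyz.elb.us-east-1.amazonaws.com
"""
```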

View your cluster's events

To see recent Kubernetes Events from your user cluster, use kubectl get events.

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Run kubectl get events.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl get events
    

The output includes informational messages, warnings, and errors related to your management service.

Deleting a user cluster

To delete a user cluster, perform the following steps:

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Use kubectl delete to delete the manifest containing your user clusters.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl delete -f CLUSTER_FILE
    

    Replace CLUSTER_FILE with the name of the manifest containing your AWSCluster and AWSNodePool objects. For example, cluster-0.yaml.

Deleting all user clusters

To delete all of your user clusters, perform the following steps:

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Use kubectl delete to delete your AWSNodePools and AWSClusters from your management service.

    env HTTP_PROXY=http://localhost:8118 \
      kubectl delete AWSNodePool --all
    env HTTP_PROXY=http://localhost:8118 \
      kubectl delete AWSCluster --all
    

For more information, see Uninstalling Anthos clusters on AWS.

What's next