Creating a user cluster

A GKE on AWS user cluster hosts your Kubernetes workloads. This topic shows you how to create a basic user cluster. If you would like to further configure a user cluster, see Creating a custom user cluster.

You provision a GKE on AWS user cluster with the AWSCluster and AWSNodePool custom resources.
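
The following sketch shows the general shape of these two resources. It is illustrative only: the field names and values here are assumptions for orientation, and the Terraform-generated manifest described later in this topic is the authoritative source for your configuration.

    # Illustrative sketch only -- not a schema reference. Generate the real
    # manifest with Terraform as described under "Creating a user cluster".
    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSCluster
    metadata:
      name: cluster-0
    spec:
      region: us-east-1                 # example region
      controlPlane:
        version: 1.25.5-gke.2100        # GKE on AWS version
        instanceType: m5.large          # see "Selecting a control plane instance size"
      # ...networking, IAM, and volume settings come from the generated manifest...
    ---
    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSNodePool
    metadata:
      name: cluster-0-pool-0
    spec:
      clusterName: cluster-0            # ties this node pool to the AWSCluster above
      version: 1.25.5-gke.2100
      instanceType: m5.large
      minNodeCount: 3                   # assumed pool size
      maxNodeCount: 5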

Before you begin

To create a user cluster, you first need to install a management service.

To connect to your GKE on AWS resources, perform the following steps. Choose the option that matches your environment: an existing AWS VPC (or a direct connection to your VPC), or a dedicated VPC that was created when you installed your management service.

Existing VPC

If you have a direct connection or VPN connection to an existing VPC, omit the line env HTTPS_PROXY=http://localhost:8118 from the commands in this topic.
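
For example, where this topic shows a proxied command, you run the kubectl command directly over your existing connection:

    kubectl cluster-info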

Dedicated VPC

When you create a management service in a dedicated VPC, GKE on AWS includes a bastion host in a public subnet.

To connect to your management service, perform the following steps:

  1. Change to the directory with your GKE on AWS configuration. You created this directory when Installing the management service.

    cd anthos-aws

  2. To open the tunnel, run the bastion-tunnel.sh script. The tunnel forwards to localhost:8118.

    To open a tunnel to the bastion host, run the following command:

    ./bastion-tunnel.sh -N
    

    Messages from the SSH tunnel appear in this window. When you are ready to close the connection, stop the process by pressing Control+C or by closing the window.

  3. Open a new terminal and change into your anthos-aws directory.

    cd anthos-aws
  4. Check that you're able to connect to the management service with kubectl.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl cluster-info
    

    The output includes the URL for the management service API server.

Selecting a control plane instance size

GKE on AWS doesn't support resizing control plane instances. Before creating your user cluster, select the instance size for your control plane. The recommended size depends on the number of nodes in your cluster. The following table lists the recommended control plane instance sizes for various cluster sizes.

Cluster size (nodes)    Control plane instance type
1 – 10                  m5.large
11 – 100                m5.xlarge
101 – 200               m5.2xlarge

Creating a user cluster

In this example, you use Terraform to generate a configuration for a basic cluster. Then, you apply the configuration to your management service with kubectl apply.

  1. Open your terminal and, if necessary, connect to your bastion host.

  2. Change directory to the folder you created when Installing the management service.
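
    If you used the anthos-aws directory from the previous steps, run:

    cd anthos-aws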

  3. Use Terraform to generate a manifest that configures an example cluster, and save it to a YAML file. Choose your version of Terraform, then run the command for your version:

    Terraform 0.12, 0.13

    terraform output cluster_example > cluster-0.yaml
    

    Terraform 0.14.3+

    terraform output -raw cluster_example > cluster-0.yaml
    

    For more information on the contents of this file, see the AWSCluster and AWSNodePool documentation.

  4. Open the file in a text editor and edit it if necessary. By default, GKE on AWS creates node pools in each availability zone specified in anthos-gke.yaml. You can change the size and number of node pools to match your desired cluster, and you can make additional configuration changes. For example, you might change the instance types of your GKE on AWS nodes, or enable logging and monitoring on your control plane nodes.
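
    For example, the following is a minimal sketch of a node pool edit, assuming field names like those in the generated manifest (adjust to match your actual cluster-0.yaml):

    # In the AWSNodePool section of cluster-0.yaml (illustrative field names):
    spec:
      instanceType: m5.xlarge   # assumed change to the node instance type
      minNodeCount: 3           # assumed node pool sizing
      maxNodeCount: 10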

  5. Apply the file to your management service.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f cluster-0.yaml
    

Creating a kubeconfig

While your user cluster starts, you can create a kubeconfig context for your new user cluster. You use the context to authenticate to a user or management cluster.

  1. Use anthos-gke aws clusters get-credentials to generate a kubeconfig for your user cluster in ~/.kube/config.

    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME
    

    Replace CLUSTER_NAME with your cluster's name. For example, cluster-0.
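
    For example, for the cluster-0 cluster used in this topic:

    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials cluster-0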

  2. Use kubectl to authenticate to your new user cluster.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl cluster-info
    

    If your cluster is ready, the output includes the URLs for Kubernetes components within your cluster.

Viewing your cluster's status

The management service provisions AWS resources when you apply an AWSCluster or AWSNodePool.

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. To list your clusters, use kubectl get AWSClusters.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get AWSClusters
    

    The output includes each cluster's name, state, age, version, and endpoint.

    For example, the following output includes only one AWSCluster named cluster-0:

    NAME        STATE          AGE     VERSION         ENDPOINT
    cluster-0   Provisioning   2m41s   1.25.5-gke.2100   gke-xyz.elb.us-east-1.amazonaws.com
    

Viewing your cluster's events

To see recent Kubernetes Events from your user cluster, use kubectl get events.

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Run kubectl get events.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get events
    

The output includes informational messages, warnings, and errors related to your management service.
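
To review only problems, you can optionally narrow the output with a standard kubectl field selector, for example:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get events --field-selector type=Warning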

Deleting a user cluster

To delete a user cluster, perform the following steps:

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Use kubectl delete to delete the user clusters defined in your manifest.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl delete -f CLUSTER_FILE
    

    Replace CLUSTER_FILE with the name of the manifest containing your AWSCluster and AWSNodePool objects. For example, cluster-0.yaml.
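
    For example, for the cluster-0.yaml manifest generated earlier in this topic:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl delete -f cluster-0.yaml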

Deleting all user clusters

To delete all of your user clusters, perform the following steps:

  1. From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials

  2. Use kubectl delete to delete your AWSNodePools and AWSClusters from your management service.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl delete AWSNodePool --all
    env HTTPS_PROXY=http://localhost:8118 \
      kubectl delete AWSCluster --all
    

For more information, see Uninstalling GKE on AWS.

What's next