Deploy an application on GKE on AWS

This page provides quickstart instructions to create a cluster and node pool, then deploy a sample application using GKE on AWS.

This page is for IT administrators and Operators who want to set up, monitor, and manage cloud infrastructure. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.

Quickstart your cluster with Terraform

You can use Terraform to create a cluster and node pool. Before creating your cluster, the Terraform scripts also prepare your AWS VPC.

You can learn more about Terraform in an AWS environment in the Terraform cluster reference and Terraform node pool reference.
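
Whichever reference you start from, you apply the configuration with the standard Terraform workflow. The following commands are a generic sketch and assume that your .tf files, based on the references above, are in the current working directory:

terraform init
terraform plan
terraform apply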

After you create a VPC and cluster with Terraform, skip to Deploy an application to the cluster to deploy a sample application.

Quickstart your cluster without Terraform

If you prefer to prepare your AWS VPC and create a cluster and node pool without Terraform, follow these instructions.

Before you begin

Before creating a cluster, you must complete the prerequisites. In particular, you must provide the following resources:

  • An AWS VPC where the cluster will run.
  • Three AWS subnets for the three control plane replicas. Each must be in a different AWS Availability Zone.
  • The AWS IAM role that GKE on AWS assumes when managing your cluster. This requires a specific set of IAM permissions.
  • AWS KMS symmetric customer managed keys (CMKs) for at-rest encryption of cluster data (etcd) and cluster configuration.
  • The AWS IAM instance profile for each control plane replica. This requires a specific set of IAM permissions.
  • An EC2 SSH key pair (optional) if you need SSH access to the EC2 instances that run each control plane replica.

It is your responsibility to create and manage these resources, which can be shared between all your GKE on AWS clusters. All other underlying cluster-scoped AWS resources are managed by GKE on AWS.

Set default settings for the gcloud CLI

Use the gcloud CLI to configure default settings for your project and Google Cloud region.

Your project has a project ID as a unique identifier. When you create a project, you can use the automatically generated project ID or you can create your own.

Your Google Cloud region is the location from which your clusters are managed, for example, us-west1. See Management regions for more details.

After you configure these default settings, you don't need to include them in your Google Cloud CLI commands. You can still specify them, or override the defaults, by passing the --project and --location flags to individual commands.

When you create GKE on AWS resources after configuring your default project and location, the resources are automatically created in that project and location.

To set defaults, follow these steps:

  1. Set the default project:

    gcloud config set project PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

  2. Set the default management location:

    gcloud config set container_aws/location GOOGLE_CLOUD_LOCATION
    

    Replace GOOGLE_CLOUD_LOCATION with your location, such as us-west1.
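
Optionally, you can read the values back with gcloud config get-value to confirm that they were applied:

gcloud config get-value project
gcloud config get-value container_aws/location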

Select CIDR ranges for your cluster

Kubernetes requires two CIDR ranges to be provided for the cluster. These CIDR ranges should be chosen so that they do not overlap with CIDR ranges used by your VPC subnets. They should be large enough for the maximum expected size of your cluster.

  • Pod address CIDR range: When a new Pod is created, it is allocated an IP address from this range. Example range: 192.168.208.0/20

  • Service address CIDR range: When a new Service is created, it is allocated an IP address from this range. Example range: 192.168.224.0/20
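
If you want to confirm that your chosen ranges don't overlap with your VPC, you can list the CIDR blocks already used by the VPC's subnets with the AWS CLI. This check is optional; replace VPC_ID with your VPC's ID:

aws ec2 describe-subnets \
    --filters "Name=vpc-id,Values=VPC_ID" \
    --query "Subnets[].CidrBlock" \
    --output text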

Create a cluster

Use the following command to create a cluster with GKE on AWS. For more information about this command, including its optional parameters, see the gcloud container aws clusters create reference page.

gcloud container aws clusters create aws-cluster-0 \
    --cluster-version 1.31.1-gke.1800 \
    --aws-region AWS_REGION \
    --fleet-project FLEET_PROJECT_ID \
    --vpc-id VPC_ID \
    --subnet-ids CONTROL_PLANE_SUBNET_1,CONTROL_PLANE_SUBNET_2,CONTROL_PLANE_SUBNET_3 \
    --pod-address-cidr-blocks POD_CIDR_BLOCK \
    --service-address-cidr-blocks SERVICE_CIDR_BLOCK \
    --role-arn API_ROLE_ARN \
    --iam-instance-profile CONTROL_PLANE_PROFILE \
    --database-encryption-kms-key-arn DB_KMS_KEY_ARN \
    --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN \
    --tags "google:gkemulticloud:cluster=aws-cluster-0"

Replace the following:

  • AWS_REGION: the AWS region to create the cluster in.
  • FLEET_PROJECT_ID: the Fleet host project where the cluster will be registered
  • VPC_ID: the ID of the AWS VPC for this cluster that you set up in the Create your VPC prerequisite step
  • CONTROL_PLANE_SUBNET_1, CONTROL_PLANE_SUBNET_2, CONTROL_PLANE_SUBNET_3: the subnet IDs for your cluster's three control plane instances that you created in the Create private subnets prerequisite step
  • POD_CIDR_BLOCK: the CIDR address range for your cluster's pods
  • SERVICE_CIDR_BLOCK: the CIDR address range for your cluster's services
  • API_ROLE_ARN: the ARN of the IAM role for the GKE Multi-Cloud service that you created in the Create GKE Multi-Cloud API role prerequisite step
  • CONTROL_PLANE_PROFILE: the name of the IAM instance profile associated with the cluster that you chose in the Create control plane role prerequisite step
  • DB_KMS_KEY_ARN: the Amazon Resource Name (ARN) of one of the AWS KMS keys that you created in the Create an AWS KMS key prerequisite step
  • CONFIG_KMS_KEY_ARN: the Amazon Resource Name (ARN) of the other of the AWS KMS keys that you created in the Create an AWS KMS key prerequisite step

If present, the --tags parameter applies the given AWS tag to all the underlying AWS resources managed by GKE on AWS. This example tags your control plane nodes with the name of the cluster they belong to.
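
The --cluster-version shown in this example might not be available in your management region. To list the Kubernetes versions that you can currently use, run gcloud container aws get-server-config:

gcloud container aws get-server-config --location GOOGLE_CLOUD_LOCATION

Replace GOOGLE_CLOUD_LOCATION with your Google Cloud management region, such as us-west1.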

Create a node pool

Use the following command to create a node pool:

gcloud container aws node-pools create pool-0 \
    --cluster aws-cluster-0 \
    --node-version 1.31.1-gke.1800 \
    --min-nodes 1 \
    --max-nodes 5 \
    --max-pods-per-node 110 \
    --root-volume-size 50 \
    --subnet-id NODEPOOL_SUBNET_ID \
    --iam-instance-profile NODEPOOL_PROFILE \
    --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN \
    --ssh-ec2-key-pair EC2_KEY_PAIR \
    --tags "google:gkemulticloud:cluster=aws-cluster-0"

Replace the following:

  • NODEPOOL_SUBNET_ID: the ID of one of the private subnets that you created in the Create private subnets prerequisite step
  • NODEPOOL_PROFILE: the IAM instance profile name for the EC2 instances in the node pool that you chose in the Create a node pool IAM role prerequisite step
  • CONFIG_KMS_KEY_ARN: the Amazon Resource Name (ARN) of the AWS KMS key to encrypt user data
  • EC2_KEY_PAIR (optional): the name of the EC2 key pair for SSH access that you created in the Create SSH key pair prerequisite step
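
Node pool creation can take several minutes. To check the node pool's provisioning state, you can optionally run gcloud container aws node-pools describe:

gcloud container aws node-pools describe pool-0 --cluster aws-cluster-0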

View your cluster status

After you create a cluster and node pool, you can view the cluster's status with the Google Cloud CLI or the Google Cloud console. Choose the tool you're using and follow the corresponding steps:

gcloud

Use the gcloud container aws clusters describe command to get details about your cluster:

gcloud container aws clusters describe CLUSTER_NAME \
    --location GOOGLE_CLOUD_LOCATION

Replace the following:

  • CLUSTER_NAME: your cluster's name
  • GOOGLE_CLOUD_LOCATION: the name of the Google Cloud location that manages the cluster
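
If you only want the cluster's lifecycle state, you can narrow the output with a standard gcloud format expression. This variation assumes the field is named state, as in the GKE Multi-Cloud API:

gcloud container aws clusters describe CLUSTER_NAME \
    --location GOOGLE_CLOUD_LOCATION \
    --format "value(state)"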

Google Cloud console

  1. In the Google Cloud console, go to the Google Kubernetes Engine clusters overview page.

    Go to GKE clusters

  2. Your clusters are listed by their name and location.

  3. Click the cluster's name. A panel with information on the cluster, including its status and enabled features, appears.

Get authentication credentials for the cluster

After creating your cluster, you need to get authentication credentials to interact with the cluster:

gcloud container aws clusters get-credentials aws-cluster-0

This command configures kubectl to access your cluster through the Connect gateway. You need at least one node pool to use the Connect gateway, because it relies on the Connect agent, which runs as a Deployment in the cluster.
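
To verify that kubectl can reach the cluster through the Connect gateway, list the cluster's nodes:

kubectl get nodes

The nodes from the node pool that you created earlier should appear in the output.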

Deploy an application to the cluster

Now that you have created a cluster, you can deploy a containerized application to it. For this quickstart, you can deploy our example web application, hello-app.

You use Kubernetes objects to create and manage your cluster's resources. You use the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancers for accessing your application from the internet.

Create the Deployment

To run hello-app in your cluster, deploy the application by running the following command:

kubectl create deployment hello-server --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0

This Kubernetes command, kubectl create deployment, creates a Deployment named hello-server. The Deployment's Pod runs the hello-app container image.

In this command:

  • --image specifies a container image to deploy. In this case, the command pulls the example image from an Artifact Registry repository, us-docker.pkg.dev/google-samples/containers/gke/hello-app. The :1.0 indicates the specific image version to pull. If you don't specify a version, the image tagged with latest is used.
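
To confirm that the Deployment was created and that its Pod becomes ready, you can list it:

kubectl get deployment hello-server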

Expose the Deployment

After deploying the application, you need to expose it to the internet so that users can access it. You can expose your application by creating a Service, a Kubernetes resource that exposes your application to external traffic.

To expose your application, run the following kubectl expose command:

kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080

Passing the --type LoadBalancer flag creates an AWS load balancer for your container. The --port flag exposes public port 80 to the internet, and the --target-port flag routes the traffic to port 8080 of the application.

Load balancers are billed according to AWS load balancer pricing.
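
Provisioning the AWS load balancer can take a few minutes. If you want to wait for the external address from the command line, you can watch the Service until the EXTERNAL-IP column is populated (press Ctrl+C to stop watching):

kubectl get service hello-server --watch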

Inspect and view the application

  1. Inspect the running Pods by using kubectl get pods:

    kubectl get pods
    

    You should see one hello-server Pod running on your cluster.

  2. Inspect the hello-server Service by using kubectl get service:

    kubectl get service hello-server
    

    From this command's output, copy the Service's external IP address from the EXTERNAL-IP column.

  3. View the application from your web browser by using the external IP with the exposed port:

    http://EXTERNAL-IP
    

You have just deployed a containerized web application to GKE on AWS.
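
You can also request the application from the command line with curl. Replace EXTERNAL-IP with the address you copied in the previous step; the hello-app sample responds with a short greeting similar to "Hello, world! Version: 1.0.0":

curl http://EXTERNAL-IP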

Clean up

  1. Delete the application's Service and Deployment:

    kubectl delete service hello-server
    kubectl delete deployment hello-server
    
  2. Delete your node pool by running gcloud container aws node-pools delete:

    gcloud container aws node-pools delete pool-0 --cluster aws-cluster-0
    
  3. Delete your cluster by running gcloud container aws clusters delete:

    gcloud container aws clusters delete aws-cluster-0
    

What's next