Create a cluster and deploy a workload using Terraform
A Kubernetes cluster provides compute, storage, networking, and other services for applications, similar to a virtual data center. Apps and their associated services that run in Kubernetes are called workloads.
This tutorial lets you quickly see a running Google Kubernetes Engine cluster and sample workload, all set up using Terraform. You can then explore the workload in the Google Cloud console before going on to our more in-depth learning path, or before you start planning and creating your own production-ready cluster. This tutorial assumes that you are already familiar with Terraform.
If you'd prefer to set up your sample cluster and workload in the Google Cloud console, see Create a cluster in the Google Cloud console.
Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the GKE API.
- Make sure that you have the following role or roles on the project: roles/container.admin, roles/compute.networkAdmin, roles/iam.serviceAccountUser.
Check for the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.
- For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.
Grant the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- Click Grant access.
- In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
- In the Select a role list, select a role.
- To grant additional roles, click Add another role and add each additional role.
- Click Save.
Prepare the environment
In this tutorial you use Cloud Shell to manage resources hosted on Google Cloud. Cloud Shell is preinstalled with the software you need for this tutorial, including Terraform, kubectl, and the Google Cloud CLI.
Launch a Cloud Shell session from the Google Cloud console by clicking the Activate Cloud Shell icon. This launches a session in the bottom pane of the Google Cloud console.
The service credentials associated with the Cloud Shell virtual machine are automatic, so you don't have to set up or download a service account key.
Before you run commands, set your default project in the gcloud CLI using the following command:
gcloud config set project PROJECT_ID
Replace PROJECT_ID with your project ID.
Clone the GitHub repository:
git clone https://github.com/terraform-google-modules/terraform-docs-samples.git --single-branch
Change to the working directory:
cd terraform-docs-samples/gke/quickstart/autopilot
Review the Terraform files
The Google Cloud provider is a plugin that lets you manage and provision Google Cloud resources using Terraform. It serves as a bridge between Terraform configurations and Google Cloud APIs, letting you declaratively define infrastructure resources, such as virtual machines and networks.
The cluster and sample app for this tutorial are specified in two Terraform files that use the Google Cloud and Kubernetes providers.
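For reference, a minimal provider setup for a configuration like this one might look like the following. This is a sketch only: the sample repository declares its own provider versions and wires the Kubernetes provider to the cluster's endpoint and credentials, so refer to the files you clone for the authoritative setup.

# Sketch only; the cloned sample configures providers itself.
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "google" {
  # Replace PROJECT_ID with your project ID.
  project = "PROJECT_ID"
  region  = "us-central1"
}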
Review the cluster.tf file:
cat cluster.tf
The output shows the contents of the file.
This file describes the following resources:
- A VPC network with internal IPv6 enabled.
- A dual-stack subnetwork.
- A dual-stack Autopilot cluster located in us-central1.
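As a rough sketch, the resources listed above correspond to Terraform blocks along these lines. The resource names, CIDR range, and other arguments here are illustrative; the cluster.tf file you cloned is the authoritative version.

# Sketch only; see the cloned cluster.tf for the exact configuration.

# VPC network with internal IPv6 enabled.
resource "google_compute_network" "default" {
  name                     = "example-network"
  auto_create_subnetworks  = false
  enable_ula_internal_ipv6 = true
}

# Dual-stack subnetwork.
resource "google_compute_subnetwork" "default" {
  name             = "example-subnetwork"
  region           = "us-central1"
  network          = google_compute_network.default.id
  ip_cidr_range    = "10.0.0.0/16" # Illustrative range.
  stack_type       = "IPV4_IPV6"
  ipv6_access_type = "INTERNAL"    # Changed to "EXTERNAL" in the optional step below.
}

# Dual-stack Autopilot cluster in us-central1.
resource "google_container_cluster" "default" {
  name             = "example-autopilot-cluster"
  location         = "us-central1"
  enable_autopilot = true
  network          = google_compute_network.default.id
  subnetwork       = google_compute_subnetwork.default.id

  ip_allocation_policy {
    stack_type = "IPV4_IPV6"
  }
}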
Review the app.tf file:
cat app.tf
The output shows the contents of the file.
This file describes the following resources:
- A Deployment with a sample container image.
- A Service of type LoadBalancer. The Service exposes the Deployment on port 80.
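Sketched in the same way, the Deployment and Service correspond to Terraform blocks like the following. The image tag, labels, and container port are illustrative; the app.tf file you cloned is the authoritative version.

# Sketch only; see the cloned app.tf for the exact configuration.

# Deployment running a sample container image.
resource "kubernetes_deployment_v1" "default" {
  metadata {
    name = "example-hello-app-deployment"
  }

  spec {
    selector {
      match_labels = {
        app = "hello-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "hello-app"
        }
      }

      spec {
        container {
          name  = "hello-app-container"
          # Public sample image; the repository may reference a different tag.
          image = "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"

          port {
            container_port = 8080
          }
        }
      }
    }
  }
}

# LoadBalancer Service exposing the Deployment on port 80.
resource "kubernetes_service_v1" "default" {
  metadata {
    name = "example-hello-app-loadbalancer"
    annotations = {
      # Internal load balancer; removed in the optional step below.
      "networking.gke.io/load-balancer-type" = "Internal"
    }
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "hello-app"
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}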
(Optional) Expose the application to the internet
The Terraform files for the example describe an application with an internal IP address, which can only be accessed from the same Virtual Private Cloud (VPC) as the sample app. If you want to access the running demo app's web interface from the internet (for example, from your laptop), modify the Terraform files to create a public IP address instead before you create the cluster. You can do this using a text editor directly in Cloud Shell or by using the Cloud Shell Editor.
To expose the demo application to the internet:
In cluster.tf, change ipv6_access_type from INTERNAL to EXTERNAL:
ipv6_access_type = "EXTERNAL"
In app.tf, configure an external load balancer by removing the networking.gke.io/load-balancer-type annotation:
annotations = {
  "networking.gke.io/load-balancer-type" = "Internal" # Remove this line
}
Create a cluster and deploy an application
In Cloud Shell, run this command to verify that Terraform is available:
terraform
The output should be similar to the following:
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure
Initialize Terraform:
terraform init
Plan the Terraform configuration:
terraform plan
Apply the Terraform configuration:
terraform apply
When prompted, enter yes to confirm actions. This command might take several minutes to complete. The output is similar to the following:
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Verify the cluster is working
Do the following to confirm your cluster is running correctly:
Go to the Workloads page in the Google Cloud console:
Click the example-hello-app-deployment workload. The Pod details page displays. This page shows information about the Pod, such as annotations, containers running on the Pod, Services exposing the Pod, and metrics including CPU, Memory, and Disk usage.
Go to the Services & Ingress page in the Google Cloud console:
Click the example-hello-app-loadbalancer LoadBalancer Service. The Service details page displays. This page shows information about the Service, such as the Pods associated with the Service and the ports the Service uses.
In the External endpoints section, click the IPv4 link or the IPv6 link to view your Service in the browser. The output is similar to the following:
Hello, world!
Version: 2.0.0
Hostname: example-hello-app-deployment-5df979c4fb-kdwgr
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.
If you plan to take additional tutorials or to explore your sample further, wait until you're finished to perform this cleanup step.
In Cloud Shell, run the following command to delete the Terraform resources:
terraform destroy --auto-approve
Troubleshoot cleanup errors
If you see an error message similar to The network resource 'projects/PROJECT_ID/global/networks/example-network' is already being used by 'projects/PROJECT_ID/global/firewalls/example-network-yqjlfql57iydmsuzd4ot6n5v', do the following:
Delete the firewall rules:
gcloud compute firewall-rules list --filter="NETWORK:example-network" --format="table[no-heading](name)" | xargs gcloud --quiet compute firewall-rules delete
Re-run the Terraform command:
terraform destroy --auto-approve
What's next
Explore your cluster and workload in the Google Cloud console to learn about some of the key workload settings and resources that you deployed.
Learn more about setting up and using Terraform with GKE in Terraform support for GKE.
Try our more in-depth Learning path: Scalable apps.
Learn how to get started with real life cluster administration in our Cluster administration overview.