Deploy Anthos on GKE with Terraform Part 3: Enabling Cloud Resources Provisioning
In the previous two parts of the series (1, 2) we discussed how new features in the Terraform provider for GCP make it easier for platform administrators to extend their Terraform automation to add Anthos Config Management (ACM) features to their GKE clusters. Using familiar Terraform resource syntax, you can add a google_gke_hub_feature_membership resource with a configmanagement section to enable Config Sync for GitOps integration and a policy_controller section for policy validation.
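As a rough sketch, the feature membership described above might look like the following (the project ID, membership reference, repo URL, and version are placeholders, not values from our example repo):

```hcl
# Hypothetical sketch: enables Config Sync and Policy Controller on an
# existing fleet membership. Project, repo URL, and version are placeholders.
resource "google_gke_hub_feature_membership" "feature_member" {
  project    = "my-project-id"
  location   = "global"
  feature    = "configmanagement"
  membership = google_gke_hub_membership.membership.membership_id

  configmanagement {
    version = "1.12.0"

    config_sync {
      git {
        sync_repo   = "https://github.com/example/config-repo"
        sync_branch = "main"
        secret_type = "none"
      }
    }

    policy_controller {
      enabled                    = true
      template_library_installed = true
    }
  }
}
```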
So far the cluster in our example was only used to host a configuration consisting of Kubernetes native resources - a containerized WordPress application powered by an in-cluster MySQL database. We are getting all the advantages of Kubernetes: continuous reconciliation and drift correction, eventual consistency, order independence, and idempotence. We are also deriving benefits from the GitOps approach, using the repo as the source of truth and enabling a reviewable, version-controlled workflow.
Now let’s take it even further. We’ll demonstrate how the same model can be expanded to create and manage not just native Kubernetes resources (Kubernetes service accounts, pods, deployments) but also GCP cloud resources - cloud databases, storage buckets, VM instances, and many other GCP resources. Since Config Connector was launched in 2020, many Kubernetes shops have embraced its convenient way of managing GCP resources. Now that we have enabled Terraform support for Anthos features, combined with a Terraform configuration option to install Config Connector on the cluster, the full GitOps workflow and Kubernetes lifecycle spanning native and cloud resources can be enabled during cluster creation.
In our example, we are enabling Config Connector using the config_connector setting in the gke Terraform module. We also use the workload-identity module to create a GCP service account that Config Connector will use to make changes to GCP resources, and bind it to a Kubernetes service account in the cnrm-system namespace. You can grant whatever permissions are appropriate to the GCP service account - in our example we are giving it the owner role for simplicity.
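A hedged sketch of what this could look like, assuming the beta cluster submodule of the terraform-google-modules GKE module (the project ID, cluster name, and network values are placeholders):

```hcl
# Hypothetical sketch: a GKE cluster with the Config Connector addon enabled,
# plus a Workload Identity binding for the Config Connector controller.
module "gke" {
  source            = "terraform-google-modules/kubernetes-engine/google//modules/beta-public-cluster"
  project_id        = "my-project-id"
  name              = "acm-cluster"
  region            = "us-central1"
  network           = "default"
  subnetwork        = "default"
  ip_range_pods     = "pods-range"
  ip_range_services = "services-range"
  config_connector  = true
}

module "workload-identity" {
  source              = "terraform-google-modules/kubernetes-engine/google//modules/workload-identity"
  project_id          = "my-project-id"
  name                = "cnrmsa"
  namespace           = "cnrm-system"
  k8s_sa_name         = "cnrm-controller-manager"
  use_existing_k8s_sa = true
  annotate_k8s_sa     = false
  roles               = ["roles/owner"] # broad for simplicity; narrow in production
}
```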
Let’s review what changed in this part in the repo that is synchronized with our cluster via Config Sync. In the first part, we added a collection of configs, all native Kubernetes objects. These configs provisioned an in-cluster WordPress application with an in-cluster MySQL database. In the second part, we added a set of rules used by Policy Controller to audit our cluster. In this part, let’s start by setting up Config Connector to create a Cloud SQL database and other GCP resources. First, we are going to add a config representing an instance of the Config Connector addon. While the addon is enabled on the cluster, this config instance is required to activate it. It specifies settings such as mode (cluster or namespace) and the GCP service account, linking it to the cnrmsa account that we created above using the workload-identity module. We then add Config Connector resources for the database itself, as well as SQLUser, IAMPolicy, and IAMPolicyMember (see the complete example here). Overall, more than 130 resources are now supported by Config Connector, covering many of the most popular GCP configuration patterns.
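A sketch of what the activation config could look like (the project ID is a placeholder; the resource name is fixed by the Config Connector operator):

```yaml
# Hypothetical sketch: activates the Config Connector addon cluster-wide,
# pointing it at the GCP service account created by the workload-identity module.
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  # The operator accepts only this exact name.
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: cnrmsa@my-project-id.iam.gserviceaccount.com
```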
You will notice that this configuration is expanded and parameterized for our specific project. How did we specify the parameter values? While many tools can be used, including Helm and Kustomize (sometimes in combination), we recommend kpt, which fully embraces the principles of configuration-as-data. In this example we used the set-project-id kpt function to set the project ID on the config-root directory before submitting the change to Git.
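Assuming the set-project-id function image from the kpt functions catalog (the image tag and project ID below are placeholders), the invocation could look roughly like:

```shell
# Hypothetical sketch: rewrites project-id setters under config-root/
# with the set-project-id kpt function, then commits the result to Git.
kpt fn eval config-root \
  --image gcr.io/kpt-fn/set-project-id:v0.2 \
  -- project-id=my-project-id

git add config-root
git commit -m "Set project ID"
git push
```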
This repo provides a complete example of provisioning a cluster that is synchronized with a repo containing a WordPress configuration powered, this time, by a Cloud SQL MySQL database.
This was the third and final article of the three-part series that showcased Terraform support for ACM features and how it simplifies cluster provisioning for platform administrators.