Best practices for converting to Terraform
State
The state file stores information about the resources Terraform manages. By default, Terraform stores state locally on disk. If you store state remotely, you can allow for distributed collaboration, protect sensitive information, and run Terraform in continuous integration (CI).
After you convert your Deployment Manager template to Terraform and optionally import resources, we recommend that you follow the steps to store state remotely in Cloud Storage.
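A remote Cloud Storage backend is declared in a `terraform` block. The following is a minimal sketch; the bucket name and prefix are placeholders to replace with your own values, and the bucket should already exist (ideally with object versioning enabled):

```hcl
terraform {
  backend "gcs" {
    bucket = "BUCKET_NAME"       # placeholder: an existing Cloud Storage bucket
    prefix = "terraform/state"   # placeholder: path for state objects in the bucket
  }
}
```

After adding the backend block, running `terraform init` prompts you to migrate the existing local state into the bucket.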
Modules
If you want to reduce complexity, enforce consistency, and promote reusability of your configuration, you can use Terraform modules to encapsulate collections of resources.
To use modules, you can do either of the following:
Create a custom module from the resources exported by DM Convert. This gives you the most flexibility.
Use a published module from Google Cloud's collection of official modules or the Terraform registry.
For most use cases, we recommend that you use a published module.
Create a custom module
After you've converted your configuration, identify which resources you want to move into a module.
Move the configurations of those resources into a module directory, and convert required variables into parameters.
The following example shows how to move `google_bigquery_dataset` and `google_bigquery_table` into a module:

```hcl
# bq-module/main.tf
resource "google_bigquery_dataset" "bigquerydataset" {
  provider                    = google-beta
  default_table_expiration_ms = 36000000
  location                    = "us-west1"
  dataset_id                  = var.dataset_id
  project                     = var.project_id
}

resource "google_bigquery_table" "bigquerytable" {
  provider = google-beta
  labels = {
    data-source = "external"
    schema-type = "auto-junk"
  }
  dataset_id = var.dataset_id
  project    = var.project_id
  table_id   = var.table_id
  depends_on = [
    google_bigquery_dataset.bigquerydataset
  ]
}
```
```hcl
# bq-module/variables.tf
variable "project_id" {
  description = "Project ID"
  type        = string
}

variable "dataset_id" {
  description = "Dataset ID"
  type        = string
}

variable "table_id" {
  description = "Table ID"
  type        = string
}
```
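With both module files in place, the working directory looks roughly like the following sketch (`bq-module` is the module directory name used in this example):

```
.
├── main.tf            # exported root configuration that calls the module
└── bq-module/
    ├── main.tf        # the moved resource definitions
    └── variables.tf   # the module's input variables
```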
In the exported `main.tf` file, replace the original resource configuration with the module you created. The following example shows this replacement using the module created in the previous step:

```hcl
# main.tf
module "bq" {
  source     = "./bq-module"
  project_id = "PROJECT_ID"
  dataset_id = "bigquerydataset"
  table_id   = "bigquerytable"
}
```
To initialize the local module, run the following command:
```shell
terraform init
```
Move the Terraform state associated with the resources into the module instance.
To move the module from the example in the previous step, run the following command:
```shell
terraform state mv google_bigquery_dataset.bigquerydataset module.bq.google_bigquery_dataset.bigquerydataset
terraform state mv google_bigquery_table.bigquerytable module.bq.google_bigquery_table.bigquerytable
```
For this example, the output from the move is:
```
Move "google_bigquery_dataset.bigquerydataset" to "module.bq.google_bigquery_dataset.bigquerydataset"
Successfully moved 1 object(s).
Move "google_bigquery_table.bigquerytable" to "module.bq.google_bigquery_table.bigquerytable"
Successfully moved 1 object(s).
```
To validate that no resources have changed, run the following command:
```shell
terraform plan
```
The following is an example of the output you receive after you run the command:
```
No changes. Your infrastructure matches the configuration.
```
Use a published module
After you've converted your configuration, identify a published module and the resources that you want to move into it.
Identify configuration options for the module by reading the module's documentation.
Create an instance of the module configured to your current resource configuration.
For example, if you want to move `google_bigquery_dataset` and `google_bigquery_table` into the official BigQuery module, the following example shows what your module instance might look like:

```hcl
module "bq" {
  source  = "terraform-google-modules/bigquery/google"
  version = "~> 5.0"

  project_id          = "PROJECT_ID"
  dataset_id          = "bigquerydataset"
  location            = "us-west1"
  deletion_protection = true

  tables = [
    {
      table_id           = "bigquerytable",
      friendly_name      = "bigquerytable"
      time_partitioning  = null,
      range_partitioning = null,
      expiration_time    = null,
      clustering         = [],
      schema             = null,
      labels = {
        data-source = "external"
        schema-type = "auto-junk"
      },
    }
  ]
}
```
To initialize the local module, run the following command:
```shell
terraform init
```
Read the module's source code to identify the resource addresses within the upstream module, and construct the corresponding state move commands:
```shell
terraform state mv google_bigquery_dataset.bigquerydataset module.bq.google_bigquery_dataset.main
terraform state mv google_bigquery_table.bigquerytable 'module.bq.google_bigquery_table.main["bigquerytable"]'
```
To view any changes to the configuration, run the following command:
```shell
terraform plan
```
If the published module you selected has different default settings, or is configured differently from your original configuration, the command's output highlights those differences.
Actuation
We recommend that you use a continuous integration (CI) system, such as Cloud Build, Jenkins, or GitHub Actions, to automate running Terraform at scale. For more information, visit Managing infrastructure as code with Terraform, Cloud Build, and GitOps.
If you want to bootstrap creation of triggers and simplify authentication, you can choose to use the Cloud Build Workspace blueprint.
Structure
Each converted configuration from DM Convert is a single root configuration mapped to a single state file. We don't recommend setting up a single state file to hold a large number of resources. After you've converted your configuration, we recommend that you ensure that your new configuration follows the best practices for root modules.
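As one hypothetical way to apply those practices to a converted configuration, the root module can be kept small by delegating resources to local modules and splitting the configuration into a few focused files. The file names below are conventions, not requirements:

```
.
├── backend.tf      # remote state (backend) configuration
├── main.tf         # module instantiations only
├── variables.tf    # root input variables
├── outputs.tf      # root outputs
└── modules/
    └── bq-module/  # e.g. the BigQuery module from earlier on this page
```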