- Deployment Manager is an infrastructure deployment service built into GCP.
- Terraform is an open source tool that enables you to automate deployments for your experiments and operations.
This tutorial is the first in a three-part series that demonstrates the automated deployment of common networking resource patterns.
When you move to the public cloud, the first infrastructure tasks you tackle are planning and deploying your network resources. Getting your network up and running is part of any initial experimentation or proof of concept. These tutorials present some common networking configurations that you can use as a reference for your projects.
This tutorial uses a file-based approach for authentication that works with Deployment Manager and Terraform. You can reuse the resulting configuration, which allows you to focus on critical resource requirements for your applications.
The series has the following structure:
- This is the Overview. Follow this tutorial to set up an operating environment that includes authentication credentials for GCP and AWS. The other tutorials in this series depend on the authentication configuration outlined here. Alternate authentication methods also exist.
- Next, the Startup tutorial introduces Deployment Manager and Terraform. For comparison, you can run the simple deployment provided with each tool. You can also look through the configuration files to study different approaches and see what fits your requirements. You need to complete the Overview tutorial before you begin the Startup tutorial.
- Finally, Building a VPN Between GCP and AWS is an advanced tutorial. It does not depend on the Startup tutorial, but it does assume that you have completed the Overview tutorial. This advanced tutorial demonstrates how to:
    - Deploy a multi-cloud application or build a hybrid environment with connections to your on-premises infrastructure.
    - Deploy networking infrastructure and virtual machine (VM) instances in both GCP and Amazon Web Services (AWS).
    - Set up connections between both providers, enabling you to distribute your deployed resources to meet reliability and availability demands.
Why automated network deployments?
An automation strategy is important for your development and operations efficiency and to maximize your application quality. Automated network deployments offer the following advantages:
- They support many different applications and the phased environments (development, testing, staging, production) that contribute to a predictable process and reliable operations. Creating and disposing of copies of environments is straightforward when you use code-based configuration files.
- Code-based, automated deployments enable you to reuse common infrastructure patterns in your organization.
- Code-based deployments support a structured review process that helps align teams and avoid unexpected problems. With this process, you can collaborate with your cross-discipline counterparts, such as security operations, network operations, project administration, and quality teams.
Deployment Manager and Terraform perform complex dependency checking, enabling you to assess your environment and efficiently create resources in parallel whenever possible. In the other tutorials in this series, you use these tools to deploy more complex network structures.
Although this tutorial links to related tools, it aims to consolidate details, allowing you to construct useful working environments without the need to refer to multiple pages and websites.
- Deployment Manager. Using this service, you can write configuration files and templates with Jinja or Python that deploy resources such as Compute Engine instances, disks, Virtual Private Cloud (VPC) networks, Cloud Storage buckets, and more. For reference, some excellent examples for Deployment Manager are available in GitHub.
- Terraform. Using Terraform, you can design one set of configuration files to deploy infrastructure across many platform providers, including GCP, AWS, Microsoft Azure, Kubernetes, OpenStack, SoftLayer, and VMware. For reference, examples for Terraform are available in GitHub.
Objectives

- Create access credentials for automation in GCP and AWS.
- Create a functional environment for using Deployment Manager and Terraform.
- Generate a key pair needed to use Secure Shell (SSH) to communicate with your VM instances.
This overview tutorial uses no billable GCP resources. Other tutorials in this series deploy billable resources in GCP and AWS.
Before you begin
- In the Google Cloud Platform Console, create a GCP project named gcp-automated-networks. Your project has a name and a unique GCP project ID; make note of your GCP project ID. You can find it on the Project info panel of the Google Cloud Platform Console Home page.
- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- Enable the Compute Engine and Deployment Manager APIs.
- Start a Cloud Shell instance. You run all the terminal commands in this tutorial from Cloud Shell.
Cloud Shell can take several minutes to provision. After the process completes, you get this welcome message:
Welcome to Cloud Shell! Type "help" to get started.
The tutorials in this series assume you are familiar with GitHub and that you have access to the AWS Management Console.
In this tutorial, you build the deployment environment illustrated in the following diagram.
The preceding diagram illustrates the components used in this tutorial:
- GCP credentials.
- Deployment configuration files.
- SSH keys.
- AWS credentials.
- Your GCP project.
- VPC network.
Preparing your GCP working environment
In this section, you do the following:
- Clone the tutorial code.
- Choose your GCP region and zone.
Clone the tutorial code
In Cloud Shell, clone the tutorial code from GitHub:
git clone https://github.com/GoogleCloudPlatform/autonetdeploy-startup.git
Navigate to the tutorial directory:

cd autonetdeploy-startup
Verify region and zone
Certain cloud resources in this tutorial, including Compute Engine instances, require you to explicitly declare the intended placement region or zone, or both. For more details, see Regions and zones for GCP.
This tutorial requires only a single region. The following table shows the location values that are set in the tutorial files.
|Field name|GCP values|
|---|---|
|Location|The Dalles, Oregon, USA|
Preparing for AWS
In this section, choose your AWS region, which you need in the other tutorials in this series. For details about AWS regions, see Regions and Availability Zones for AWS.
- Sign in to the AWS Management Console.
- Navigate to the VPC Dashboard and select the Oregon region.
- In the EC2 Dashboard and VPC Dashboard, review the resources this tutorial uses.
Preparing Deployment Manager
Deployment Manager is preinstalled in Cloud Shell along
with the Cloud SDK. You
can see an overview of the execution syntax by running the command
gcloud deployment-manager --help.
Preparing Terraform

In this section, you download and install Terraform.
In Cloud Shell, run the following script:
This script downloads and unpacks the executable binary for the Terraform tool to the ~/terraform directory. The script output shows an export command to update your PATH. Run the export command to update your PATH environment variable.

Verify that Terraform is working:

terraform --help

Usage: terraform [--version] [--help] [command] [args] ...
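The effect of the install script can be sketched as follows. This is a minimal illustration that uses a stub binary in a temporary directory instead of the real download; the actual download URL and Terraform version come from the script itself.

```shell
# Simulate the layout the install script produces: an executable named
# "terraform" in its own directory (a temp dir stands in for ~/terraform).
TFDIR="$(mktemp -d)"
printf '#!/bin/sh\necho "Terraform v0.x.y (stub)"\n' > "$TFDIR/terraform"
chmod +x "$TFDIR/terraform"

# This is the kind of export command the script output tells you to run:
# it prepends the Terraform directory to your PATH.
export PATH="$TFDIR:$PATH"

# The shell now resolves "terraform" to the new binary.
terraform
```

With the real binary in place of the stub, running terraform --help prints the usage text shown above.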
Creating access credentials
In this section, you do the following:
- Download access credentials for GCP and AWS.
- Point your templates at these access credentials.
Deployment Manager and Terraform require access to your projects and environments in GCP and AWS. For Terraform, you can read more about the supported providers in the online docs. Although the Terraform GCP provider offers multiple ways to supply credentials, in this tutorial, you download credentials files from GCP and AWS. Downloading GCP credentials is a way of using service account authentication, which is a best practice.
Download Compute Engine default service account credentials
In Cloud Shell, which is a Linux environment, gcloud manages credentials files under the ~/.config/gcloud directory.
To set up your Compute Engine default service account credentials, follow these steps:
In the GCP Console, go to the Create Service Account Key page.
From the Service account dropdown, select Compute Engine default service account, and leave JSON selected as the key type.
Click Create, which downloads your credentials as a file named [PROJECT_ID]-[UNIQUE_ID].json.
To get this JSON file from your local machine into the Cloud Shell environment, in Cloud Shell click More more_vert and click Upload file.
Navigate to the JSON file you downloaded and click Open to upload. The file is placed in the home ($HOME) directory.

Use the ./gcp_set_credentials.sh script provided to create the ~/.config/gcloud/credentials_autonetdeploy.json file. This script also creates terraform/terraform.tfvars with a reference to the new credentials.

./gcp_set_credentials.sh ~/[PROJECT_ID]-[UNIQUE_ID].json
Created ~/.config/gcloud/credentials_autonetdeploy.json from ~/[PROJECT_ID]-[UNIQUE_ID].json. Updated gcp_credentials_file_path in ~/autonetdeploy-startup/terraform/terraform.tfvars.
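The script's behavior can be sketched roughly as follows. The file names and paths below are illustrative stand-ins, not the tutorial's actual script; the real script works with ~/.config/gcloud and your downloaded [PROJECT_ID]-[UNIQUE_ID].json file.

```shell
# Stand-in paths in a temp directory (illustrative only).
WORK="$(mktemp -d)"
SRC="$WORK/demo-project-abc123.json"
DEST="$WORK/credentials_autonetdeploy.json"
TFVARS="$WORK/terraform.tfvars"

# 1. Copy the downloaded key file to a well-known credentials path.
echo '{"type": "service_account"}' > "$SRC"
cp "$SRC" "$DEST"

# 2. Point the Terraform variables file at the new credentials path.
printf 'gcp_credentials_file_path = "%s"\n' "$DEST" > "$TFVARS"

echo "Created $DEST from $SRC."
```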
For Deployment Manager, add the service account credentials to the existing set of credentials managed by
gcloud. This is not required for Terraform, because the Terraform configuration reads the file directly from this location.
gcloud auth activate-service-account --key-file ~/.config/gcloud/credentials_autonetdeploy.json
Next, run the following command to retrieve the service account email address from the credentials file and display it:
cat ~/.config/gcloud/credentials_autonetdeploy.json | grep client_email
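To see what that grep step extracts, here is the same pattern run against a mock credentials file. The file contents below are fabricated, structure-only stand-ins, never real credentials.

```shell
# Write a mock service account key with the same JSON shape.
MOCK="$(mktemp)"
cat > "$MOCK" <<'EOF'
{
  "type": "service_account",
  "project_id": "my-project",
  "client_email": "1234567890-compute@developer.gserviceaccount.com"
}
EOF

# grep finds the client_email line; sed strips the JSON punctuation,
# leaving just the email address string.
grep client_email "$MOCK" | sed 's/.*: *"\(.*\)".*/\1/'
```

This prints 1234567890-compute@developer.gserviceaccount.com, the string you use to verify your authentication setup.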
Review your gcloud authentication setup, and verify that your service account email address is listed and active:
gcloud auth list
ACTIVE  ACCOUNT
*       ###-email@example.com
Download AWS access credentials
In Cloud Shell, AWS credentials files are stored under the ~/.aws directory.
In the AWS Management Console, click your name, and then click My Security Credentials.
Click your User name.
Click Security credentials.
Click Create Access Key.
Click Download .csv file to create an accessKeys.csv file on your local system.

In Cloud Shell, click More more_vert, and then click Upload file to upload your credentials file to the home ($HOME) directory. Navigate to the accessKeys.csv file you downloaded, and click Open to upload.
Use the script provided to create your AWS credentials file.
Setting your project
In this section, you point your deployment templates at your project.
GCP offers several ways to designate the GCP project to be used by the automation tools. For simplicity, instead of pulling the project ID from the environment, the GCP project is explicitly identified by a string variable in the template files.
Set your GCP project ID. Replace [YOUR_PROJECT_ID] with your GCP project ID.
gcloud config set project [YOUR_PROJECT_ID]
Updated property [core/project].
Use the provided script to update the project value in your configuration files for Deployment Manager and Terraform.
Updated project_id: gcp-automated-networks in ~/autonetdeploy-startup/deploymentmanager/autonetdeploy_config.yaml. Updated gcp_project_id in ~/autonetdeploy-startup/terraform/terraform.tfvars.
Review the two updated files to verify that your [PROJECT_ID] value has been inserted.
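For reference, after both scripts run, the Terraform variables file might look something like this. The values are illustrative; your paths and project ID will differ.

```hcl
# terraform/terraform.tfvars (illustrative values)
gcp_credentials_file_path = "/home/user/.config/gcloud/credentials_autonetdeploy.json"
gcp_project_id            = "gcp-automated-networks"
```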
Run the one-time terraform init command to install the Terraform provider for this deployment.
pushd ./terraform && terraform init && popd > /dev/null
Initializing provider plugins... - Checking for available provider plugins on https://releases.hashicorp.com... - Downloading plugin for provider "google" (0.1.3)...
Terraform has been successfully initialized!
Verify your credentials by running the terraform plan command. If you don't see any red error text, your authentication is working properly.
pushd ./terraform && terraform plan && popd > /dev/null
Refreshing Terraform state in-memory prior to plan... ... +google_compute_instance.gcp-vm ... Plan: 1 to add, 0 to change, 0 to destroy.
Using SSH keys for connecting to VM instances
In GCP, the GCP Console and gcloud tool work behind the scenes to manage SSH keys for you. Use the ssh command to communicate with your Compute Engine instances without generating or uploading any key files.
However, for your multi-cloud exercises, you do need a public/private key pair to connect with VM instances in Amazon Elastic Compute Cloud (EC2).
Generate a key pair
In Cloud Shell, use ssh-keygen to generate a new key pair. Replace [USERNAME] with your GCP login name. If you are not sure what your username is, use the output of the whoami command in Cloud Shell as your [USERNAME]. For these tutorials, it's okay to use an empty passphrase.
ssh-keygen -t rsa -f ~/.ssh/vm-ssh-key -C [USERNAME]
Generating public/private rsa key pair. ...
As a best practice, restrict access to your private key:
chmod 400 ~/.ssh/vm-ssh-key
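The key-generation steps can be exercised end to end with a throwaway key in a temporary directory. Here, demo-user is a placeholder comment, and -N "" supplies the empty passphrase non-interactively.

```shell
# Generate a throwaway RSA key pair with an empty passphrase (-N "")
# in a temp directory; -q suppresses the interactive output.
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -f "$KEYDIR/vm-ssh-key" -C demo-user -N "" -q

# Restrict the private key to owner-read-only, as the tutorial does.
chmod 400 "$KEYDIR/vm-ssh-key"

# Both halves of the pair now exist; the private key permissions are 400.
ls "$KEYDIR"
stat -c '%a' "$KEYDIR/vm-ssh-key"
```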
Import the public key to GCP
Import your public key using either of two methods:
In Cloud Shell, register your public key with GCP:
gcloud compute config-ssh --ssh-key-file=~/.ssh/vm-ssh-key
Updating project ssh metadata...done.
You can ignore the warning No host aliases were added... because the command also attempts to update Compute Engine VM instances, but no instances have been created yet.
In the GCP Console, open the Metadata page for your project.
Click SSH Keys. You might need to click Add SSH keys if no keys have been created, or you can click Edit and Add item to modify metadata.
Copy your public key data string and paste it into the field:

cat ~/.ssh/vm-ssh-key.pub

The string is in the form ssh-rsa [KEY_DATA] [USERNAME]. GCP expects you to enter the entire string. If you do not include the entire string, you see an error.
Now you can use the ssh command from Cloud Shell to verify access to your created VM instances. Be aware that you're using a project-wide key that can be used to access all VM instances in the project.
ssh -i ~/.ssh/vm-ssh-key [EXTERNAL_IP]
To debug, add the -v option to the ssh command.
Import the public key to AWS
You can reuse the public key file generated with GCP.
- To download the public key file from Cloud Shell, click More more_vert, and then click Download file. Enter the fully qualified path to the public key file: ~/.ssh/vm-ssh-key.pub.
In the AWS Management Console, in EC2 Dashboard, and under NETWORK & SECURITY, click Key Pairs.
Click Import Key Pair.
Click Choose File, navigate to the vm-ssh-key.pub file you downloaded, and click Open.
Verify that the contents are of the expected form:
ssh-rsa [KEY_DATA] [USERNAME].
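A quick way to sanity-check the form of a public key file before importing it is a grep pattern like the following. It is shown here against a fabricated key string; the key data below is not a real key.

```shell
# Write a fabricated public key line with the expected three-field shape.
PUBFILE="$(mktemp)"
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB demo-user" > "$PUBFILE"

# The pattern checks for: the ssh-rsa type, base64 key data, and a comment.
if grep -qE '^ssh-rsa [A-Za-z0-9+/=]+ .+$' "$PUBFILE"; then
  echo "key format OK"
else
  echo "unexpected key format"
fi
```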
You can now see the AWS entry for the vm-ssh-key value. You can reference this key in configuration settings, which enables you to use the ssh command for access.
When you use the ssh command to connect to AWS instances, AWS behaves differently from GCP. For AWS, you must provide a generic username supported by the AMI provider. In this tutorial, the AMI provider specifies ubuntu as the user.
ssh -i ~/.ssh/vm-ssh-key ubuntu@[AWS_INSTANCE_EXTERNAL_IP]
You now have an environment in which you can easily deploy resources to the cloud using automated tools. Use this environment to complete any of the automated network deployment tutorials in this series.
This tutorial generates no billable resources, so you don't need to clean up anything.
Try the other tutorials in this series.
Start the Automated Network Deployment: Startup tutorial, which walks you through using Deployment Manager and Terraform to deploy a VM instance in the default network on GCP.
Learn more about Deployment Manager and Terraform.
If you want to use multiple sets of authentication credentials, gcloud supports this through named configurations. Learn more about how to create named configurations with distinct authentication accounts, and switch between them using the gcloud config configurations commands.
Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.