Automated Network Deployment: Overview

This tutorial explains how to use Cloud Deployment Manager and Terraform by HashiCorp to automate how you create and manage Google Cloud Platform (GCP) resources.

  • Deployment Manager is an infrastructure deployment service built into GCP.
  • Terraform is an open source tool that enables you to automate deployments for your experiments and operations.

This tutorial is the first in a three-part series that demonstrates the automated deployment of common networking resource patterns.

When you move to the public cloud, the first infrastructure tasks you tackle are planning and deploying your network resources. Getting your network up and running is part of any initial experimentation or proof of concept. These tutorials present some common networking configurations that you can use as a reference for your projects.

This tutorial uses a file-based approach for authentication that works with Deployment Manager and Terraform. You can reuse the resulting configuration, which allows you to focus on critical resource requirements for your applications.

The series has the following structure:

  1. This is the Overview. Follow this tutorial to set up an operating environment that includes authentication credentials for your GCP project. The other tutorials in this series depend on the authentication configuration outlined here. Alternate authentication methods also exist using the gcloud command-line tool.
  2. Next, the Startup tutorial introduces Deployment Manager and Terraform. For comparison, you can run the simple deployment provided with each tool. You can also look through the configuration files to study different approaches and see what fits your requirements. You need to complete the Overview tutorial before you begin the Startup tutorial.
  3. Building a VPN Between GCP and AWS is an advanced tutorial. It does not depend on the Startup tutorial, but it does assume that you have completed the Overview tutorial. This advanced tutorial demonstrates how to:

    • Deploy a multi-cloud application or build a hybrid environment with connections to your on-premises infrastructure.
    • Deploy networking infrastructure and virtual machine (VM) instances in both GCP and Amazon Web Services (AWS).
    • Set up connections between both providers, enabling you to distribute your deployed resources to meet reliability and availability demands.

Why automated network deployments?

An automation strategy is important for your development and operations efficiency and to maximize your application quality. Automated network deployments offer the following advantages:

  • They support many different applications and the phased environments (development, testing, staging, production) that contribute to a predictable process and reliable operations. Making and disposing of copies of environments is straightforward when you use code-based configuration files.
  • Code-based, automated deployments enable you to reuse common infrastructure patterns in your organization.
  • Code-based deployments support a structured review process that helps align teams and avoid unexpected problems. With this process, you can collaborate with your cross-discipline counterparts, such as security operations, network operations, project administration, and quality teams.

Overview

Deployment Manager and Terraform perform complex dependency checking, enabling you to assess your environment and efficiently create resources in parallel whenever possible. In the other tutorials in this series, you use these tools to deploy more complex network structures.

Although this tutorial links to related tools, it aims to consolidate details, allowing you to construct useful working environments without the need to refer to multiple pages and websites.

Objectives

  • Create access credentials for automation in GCP and AWS.
  • Create a functional environment for using Deployment Manager and Terraform.
  • Generate a key pair needed to use Secure Shell (SSH) to communicate with your VM instances.

Costs

This overview tutorial uses no billable GCP resources. Other tutorials in this series deploy billable resources in GCP and AWS.

Before you begin

  1. In the Google Cloud Platform Console, create a GCP project named gcp-automated-networks.


    Your project has a name and a unique GCP project ID. Make note of your GCP project ID. You can find it on the Project info panel of the Google Cloud Platform Console Home.

    Project info panel that displays your project name and ID.

  2. Make sure that billing is enabled for your Google Cloud Platform project.


  3. Enable the Compute Engine and Deployment Manager APIs.


  4. Start a Cloud Shell instance. You run all the terminal commands in this tutorial from Cloud Shell.


    Cloud Shell can take several minutes to provision. After the process completes, you get this welcome message:

    Welcome to Cloud Shell! Type "help" to get started.

The tutorials in this series assume you are familiar with GitHub and that you have access to the AWS Management Console.

Deployment architecture

In this tutorial, you build the deployment environment illustrated in the following diagram.

Components of your deployment architecture.

The preceding diagram illustrates the components used in this tutorial:

  • GCP credentials.
  • Deployment configuration files.
  • SSH keys.
  • AWS credentials.
  • Your GCP project.
  • VPC network.

Preparing your GCP working environment

In this section, you do the following:

  • Clone the tutorial code.
  • Choose your GCP region and zone.

Clone the tutorial code

  1. In Cloud Shell, clone the tutorial code from GitHub:

    git clone https://github.com/GoogleCloudPlatform/autonetdeploy-startup.git
  2. Navigate to the tutorial directory:

    cd autonetdeploy-startup
    

Verify region and zone

Certain cloud resources in this tutorial, including Compute Engine instances, require you to explicitly declare the intended placement region or zone, or both. For more details, see Regions and zones for GCP.

This tutorial requires only a single region. The following table shows the values that are set in the tutorial files.

  Field name    GCP value
  Region name   us-west1
  Location      The Dalles, Oregon, USA
  Zone          us-west1-b
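
Zone names embed the name of their region, which is a quick way to confirm that the values you choose belong together. The following check is purely illustrative, using this tutorial's values:

```shell
# A zone name is its region name plus a letter suffix, for example
# us-west1-b is in us-west1. This check is illustrative only.
region="us-west1"
zone="us-west1-b"
case "$zone" in
  "$region"-*) echo "zone $zone is in region $region" ;;
  *)           echo "zone $zone is NOT in region $region" ;;
esac
```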

Preparing for AWS

In this section, you choose your AWS region, which you need in the other tutorials in this series. For details about AWS regions, see Regions and Availability Zones for AWS.

  1. Sign in to the AWS Management Console.
  2. Navigate to the VPC Dashboard and select the Oregon region.
  3. In the EC2 Dashboard and VPC Dashboard, review the resources this tutorial uses.

Preparing Deployment Manager

Deployment Manager is preinstalled in Cloud Shell along with the Cloud SDK. You can see an overview of the execution syntax by running the command gcloud deployment-manager --help.

Preparing Terraform

In this section, you download and install Terraform.

  1. In Cloud Shell, run the following script:

    ./get_terraform.sh
    

    This script downloads and unpacks the executable binary for the Terraform tool to the ~/terraform directory. The script output shows an export command to update your PATH.

  2. Run the export command to update your PATH.

  3. Verify that Terraform is working:

    terraform --help
    

    Resulting output:

    Usage: terraform [--version] [--help] [command] [args]
    ...
    

If you need help, see the topics Download Terraform and Install Terraform.

Creating access credentials

In this section, you do the following:

  • Download access credentials for GCP and AWS.
  • Point your templates at these access credentials.

Deployment Manager and Terraform require access to your projects and environments in GCP and AWS. For Terraform, you can read more about the supported providers in the online docs. Although the Terraform GCP provider offers multiple ways to supply credentials, in this tutorial, you download credentials files from GCP and AWS. Downloading GCP credentials is a way of using service account authentication, which is a best practice.
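
For context, the Terraform configuration in the repository reads the credentials path and project ID from terraform.tfvars. A provider block of roughly the following shape consumes those values; this is a hedged sketch, and the repository's actual files may differ:

```hcl
# Illustrative sketch only: a Google provider block that consumes the
# variables written to terraform.tfvars by the setup scripts.
provider "google" {
  credentials = "${file(var.gcp_credentials_file_path)}"
  project     = "${var.gcp_project_id}"
  region      = "us-west1"
}
```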

Download Compute Engine default service account credentials

In Cloud Shell, which is a Linux environment, gcloud manages credentials files under the ~/.config/gcloud directory.

To set up your Compute Engine default service account credentials, follow these steps:

  1. In the GCP Console, go to the Create Service Account Key page.


  2. From the Service account dropdown, select Compute Engine default service account, and leave JSON selected as the key type.

  3. Click Create, which downloads your credentials as a file named [PROJECT_ID]-[UNIQUE_ID].json.

  4. To get this JSON file from your local machine into the Cloud Shell environment, in Cloud Shell, click More, and then click Upload file.

  5. Navigate to the JSON file you downloaded and click Open to upload. The file is placed in the home (~) directory.

  6. Use the ./gcp_set_credentials.sh script provided to create the ~/.config/gcloud/credentials_autonetdeploy.json file. This script also creates terraform/terraform.tfvars with a reference to the new credentials.

    ./gcp_set_credentials.sh ~/[PROJECT_ID]-[UNIQUE_ID].json
    

    Resulting output:

    Created ~/.config/gcloud/credentials_autonetdeploy.json from ~/[PROJECT_ID]-[UNIQUE_ID].json.
    Updated gcp_credentials_file_path in ~/autonetdeploy-startup/terraform/terraform.tfvars.
    
  7. For Deployment Manager, add the service account credentials to the existing set of credentials managed by gcloud. This is not required for Terraform, because the Terraform configuration reads the file directly from this location.

    gcloud auth activate-service-account --key-file ~/.config/gcloud/credentials_autonetdeploy.json
    
  8. Next, use the cat and grep commands to retrieve the service account email address from the file and display the email string.

    cat ~/.config/gcloud/credentials_autonetdeploy.json | grep client_email
    
  9. Review your gcloud authentication setup, and verify that your service account email address is listed and active.

    gcloud auth list
    

    Resulting output:

    ACTIVE  ACCOUNT
    *       ###-compute@developer.gserviceaccount.com
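
The grep command in step 8 prints the entire JSON line. If you want only the address itself, a pattern like the following isolates the value. It is demonstrated here against a mock key file, because the real key file contains secrets; the real file at ~/.config/gcloud/credentials_autonetdeploy.json has the same shape:

```shell
# Demonstration against a mock service-account key file.
cat > /tmp/mock_credentials.json <<'EOF'
{
  "type": "service_account",
  "client_email": "123456-compute@developer.gserviceaccount.com"
}
EOF
# Match the client_email field, then strip everything but the quoted value.
grep -o '"client_email": *"[^"]*"' /tmp/mock_credentials.json \
  | sed 's/.*"\([^"]*\)"$/\1/'
```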
    

Download AWS access credentials

In Cloud Shell, AWS stores credentials files under ~/.aws.

  1. In the AWS Management Console, click your name, and then click My Security Credentials.

  2. Click Users.

  3. Click your User name.

  4. Click Security credentials.

  5. Click Create Access Key.

  6. Click Download .csv file to create an accessKeys.csv file on your local system.

  7. Click Close.

  8. In Cloud Shell, click More, and then click Upload file to upload your credentials file to the home (~) directory.

  9. Select the accessKeys.csv file you downloaded, and click Open to upload.

  10. Use the script provided to create the ~/.aws/credentials_autonetdeploy file:

    ./aws_set_credentials.sh ~/accessKeys.csv
    

    Resulting output:

    Created ~/.aws/credentials_autonetdeploy.
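
The aws_set_credentials.sh script is provided for you, but its job is simple enough to sketch: read the key ID and secret from the second line of accessKeys.csv and write them out in the INI format that AWS tooling reads. The following is an illustration with fake values, not the script's actual contents:

```shell
# Illustration with fake values; never commit or echo real AWS secrets.
cat > /tmp/accessKeys.csv <<'EOF'
Access key ID,Secret access key
AKIAFAKEKEYID,FakeSecretAccessKey123
EOF
values=$(tail -n 1 /tmp/accessKeys.csv)
key_id=${values%%,*}    # text before the first comma
secret=${values#*,}     # text after the first comma
cat > /tmp/credentials_demo <<EOF
[default]
aws_access_key_id = ${key_id}
aws_secret_access_key = ${secret}
EOF
cat /tmp/credentials_demo
```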

Setting your project

In this section, you point your deployment templates at your project.

GCP offers several ways to designate the GCP project to be used by the automation tools. For simplicity, instead of pulling the project ID from the environment, the GCP project is explicitly identified by a string variable in the template files.

  1. Set your GCP project ID. Replace [YOUR_PROJECT_ID] with your GCP project ID.

    gcloud config set project [YOUR_PROJECT_ID]

    Resulting output:

    Updated property [core/project].
    
  2. Use the provided script to update the project value in your configuration files for Deployment Manager and Terraform.

    ./gcp_set_project.sh
    

    Resulting output:

    Updated project_id: gcp-automated-networks in ~/autonetdeploy-startup/deploymentmanager/autonetdeploy_config.yaml.
    Updated gcp_project_id in ~/autonetdeploy-startup/terraform/terraform.tfvars.
    
  3. Review the two updated files to verify that your [PROJECT_ID] value has been inserted into deploymentmanager/autonetdeploy_config.yaml and terraform/terraform.tfvars.

  4. Run the one-time terraform init command to install the Terraform provider for this deployment.

    pushd ./terraform && terraform init && popd > /dev/null
    

    Resulting output:

    Initializing provider plugins...
    - Checking for available provider plugins on https://releases.hashicorp.com...
    - Downloading plugin for provider "google" (0.1.3)...
    Terraform has been successfully initialized!
  5. Verify your credentials by running the terraform plan command. If you don't see any red error text, your authentication is working properly.

    pushd ./terraform && terraform plan && popd > /dev/null
    

    Resulting output:

    Refreshing Terraform state in-memory prior to plan...
    ...
    +google_compute_instance.gcp-vm
    ...
    Plan: 1 to add, 0 to change, 0 to destroy.
    

Using SSH keys for connecting to VM instances

In GCP, the GCP Console and the gcloud tool manage SSH keys for you behind the scenes. You can use the ssh command to communicate with your Compute Engine instances without generating or uploading any key files.

However, for your multi-cloud exercises, you do need a public/private key pair to connect with VM instances in Amazon Elastic Compute Cloud (EC2).

Generate a key pair

  1. In Cloud Shell, use ssh-keygen to generate a new key pair. Replace [USERNAME] with your GCP login name. If you are unsure of your username, use the output of the whoami command in Cloud Shell as your [USERNAME]. For these tutorials, it's okay to use an empty passphrase.

    ssh-keygen -t rsa -f ~/.ssh/vm-ssh-key -C [USERNAME]
    

    Resulting output:

    Generating public/private rsa key pair.
    ...
    
  2. Restrict access to your private key. This is a best practice.

    chmod 400 ~/.ssh/vm-ssh-key
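
Mode 400 leaves the private key readable only by you; ssh ignores private keys that are readable by other users. A quick illustration on a throwaway file:

```shell
# Demonstrate the permissions that chmod 400 produces (owner read-only).
touch /tmp/demo-key
chmod 400 /tmp/demo-key
ls -l /tmp/demo-key | cut -c1-10
```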
    

Import the public key to GCP

You can import your public key in either of two ways: by registering it with the gcloud tool (step 1), or by adding it through the Metadata page in the GCP Console (steps 2 through 5).

  1. In Cloud Shell, register your public key with GCP:

    gcloud compute config-ssh --ssh-key-file=~/.ssh/vm-ssh-key
    

    Resulting output:

    Updating project ssh metadata...done.

    You can ignore the warning No host aliases were added... because the command also attempts to update Compute Engine VM instances, but no instances have been created yet.

  2. In the GCP Console, open the Metadata page for your project.


  3. Click SSH Keys. You might need to click Add SSH keys if no keys have been created, or you can click Edit and Add item to modify metadata.

  4. Copy the key data string by using the following command. The string is in the form ssh-rsa [KEY_DATA] [USERNAME]. GCP expects you to enter the entire string; if you omit any part of it, you see the error text Invalid key.

    cat ~/.ssh/vm-ssh-key.pub
    

    Resulting output:

    ssh-rsa [KEY_DATA] [USERNAME]
  5. Click Save.

    Now, you can use the ssh command from Cloud Shell to verify access to your created VM instances. Be aware that you're using a project-wide key that can be used to access all VM instances in the project.

    ssh -i ~/.ssh/vm-ssh-key [EXTERNAL_IP]

    To debug, add -v to your ssh command.
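
Before pasting key data into the Metadata page, you can sanity-check that it has the three fields GCP expects. The following sketch runs against a placeholder key string, since your own key data will differ:

```shell
# Check that a public key line has the form: ssh-rsa [KEY_DATA] [USERNAME].
pubkey='ssh-rsa AAAAB3NzaC1yc2EFAKEKEYDATA user1'
set -- $pubkey   # split on whitespace into $1 $2 $3
if [ "$#" -eq 3 ] && [ "$1" = "ssh-rsa" ]; then
  echo "format looks valid for user: $3"
else
  echo "Invalid key"
fi
```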

Import the public key to AWS

You can reuse the public key file generated with GCP.

  1. To download the public key file from Cloud Shell, click More, and then click Download file.
  2. Enter the fully qualified path (as before, replace [USERNAME]):

    /home/[USERNAME]/.ssh/vm-ssh-key.pub
    
  3. Click Download.

  4. In the AWS Management Console, in the EC2 Dashboard, under NETWORK & SECURITY, click Key Pairs.

  5. Click Import Key Pair.

  6. Click Choose File.

  7. Select vm-ssh-key.pub and click Open.

  8. Verify that the contents are of the expected form: ssh-rsa [KEY_DATA] [USERNAME].

  9. Click Import.

You can now see the AWS entry for the vm-ssh-key value. You can reference this key in configuration settings, which enables you to use the ssh command for access.

When you use the ssh command to connect to AWS instances, AWS behaves differently from GCP. For AWS, you must provide a generic username supported by the AMI provider. In this tutorial, the AMI provider expects ubuntu as the user.

ssh -i ~/.ssh/vm-ssh-key ubuntu@[AWS_INSTANCE_EXTERNAL_IP]

You now have an environment in which you can easily deploy resources to the cloud using automated tools. Use this environment to complete any of the automated network deployment tutorials in this series.

Cleaning up

This tutorial generates no billable resources, so you don't need to clean up anything.

What's next

  • Try the other tutorials in this series.

    Start the Automated Network Deployment: Startup tutorial, which walks you through using Deployment Manager and Terraform to deploy a VM instance in the default network on GCP.

  • Learn more about Deployment Manager and Terraform.

  • Learn more about advanced gcloud configuration setup.

    If you want to use multiple sets of authentication credentials, gcloud supports this through configurations. Learn more about how to create named configurations with distinct authentication accounts, and switch between them using gcloud.

  • Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
