Deploy the blueprint

Last reviewed 2023-12-20 UTC

This section describes the process that you can use to deploy the blueprint, its naming conventions, and alternatives to blueprint recommendations.

Bringing it all together

To deploy your own enterprise foundation in alignment with the best practices and recommendations from this blueprint, follow the high-level tasks summarized in this section. Deployment requires a combination of prerequisite setup steps, automated deployment through the terraform-example-foundation on GitHub, and additional steps that must be configured manually after the initial foundation deployment is complete.

Process steps

Prerequisites before deploying the foundation pipeline resources

Complete the following steps before you deploy the foundation pipeline:

To connect to an existing on-premises environment, prepare the following:

Steps to deploy the terraform-example-foundation from GitHub

Follow the README directions for each stage to deploy the terraform-example-foundation from GitHub:

Additional steps after IaC deployment

After you deploy the Terraform code, complete the following:

Additional administrative controls for customers with sensitive workloads

Google Cloud provides additional administrative controls that can help you meet your security and compliance requirements. However, some controls involve additional cost or operational trade-offs that might not be appropriate for every customer. These controls also require customized inputs for your specific requirements that can't be fully automated in the blueprint with a default value for all customers.

This section introduces security controls that you apply centrally to your foundation. This section isn't intended to be an exhaustive list of all the security controls that you can apply to specific workloads. For more information on Google's security products and solutions, see the Google Cloud security best practices center.

Evaluate whether the following controls are appropriate for your foundation based on your compliance requirements, risk appetite, and sensitivity of data.

Control | Description

Protect your resources with VPC Service Controls

VPC Service Controls lets you define security policies that prevent access to Google-managed services outside of a trusted perimeter, block access to data from untrusted locations, and mitigate data exfiltration risks. However, VPC Service Controls can cause existing services to break until you define exceptions to allow intended access patterns.

Evaluate whether the value of mitigating exfiltration risks justifies the increased complexity and operational overhead of adopting VPC Service Controls. The blueprint prepares restricted networks and optional variables to configure VPC Service Controls, but the perimeter isn't enabled until you take additional steps to design and enable it.
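If you decide to enable a perimeter, the Terraform configuration can be sketched as follows. This is a minimal illustration, not the blueprint's own module: the access policy number, project number, and restricted services shown here are hypothetical placeholders.

```hcl
# Minimal sketch of a service perimeter. The access policy number,
# project number, and restricted services are hypothetical examples.
resource "google_access_context_manager_service_perimeter" "restricted" {
  parent = "accessPolicies/123456789"
  name   = "accessPolicies/123456789/servicePerimeters/restricted"
  title  = "restricted"

  status {
    # Projects protected inside the perimeter.
    resources = ["projects/1111111111"]

    # Services that can be reached only from inside the perimeter.
    restricted_services = ["storage.googleapis.com", "bigquery.googleapis.com"]
  }
}
```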

Restrict resource locations

You might have regulatory requirements that cloud resources must be deployed only in approved geographical locations. The gcp.resourceLocations organization policy constraint enforces that resources can be deployed only in the list of locations that you define.
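As a sketch, the constraint can be set at the organization node with Terraform; the organization ID and value group below are hypothetical examples.

```hcl
# Hypothetical sketch: enforce gcp.resourceLocations at the organization node.
# Replace the organization ID and value group with your own.
resource "google_org_policy_policy" "resource_locations" {
  name   = "organizations/123456789012/policies/gcp.resourceLocations"
  parent = "organizations/123456789012"

  spec {
    rules {
      values {
        # Allow only resources in the EU value group.
        allowed_values = ["in:eu-locations"]
      }
    }
  }
}
```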

Enable Assured Workloads

Assured Workloads provides additional compliance controls that help you meet specific regulatory regimes. The blueprint provides optional variables in the deployment pipeline to enable Assured Workloads.

Enable data access logs

You might have a requirement to log all access to certain sensitive data or resources.

Evaluate where your workloads handle sensitive data that requires data access logs, and enable the logs for each service and environment working with sensitive data.
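For a given service and project, Data Access audit logs can be enabled with a sketch like the following; the project ID is a hypothetical example, and you would repeat this per service and environment that handles sensitive data.

```hcl
# Hypothetical sketch: enable Data Access audit logs for Cloud Storage
# in a single project; repeat per service and environment as needed.
resource "google_project_iam_audit_config" "storage" {
  project = "prj-p-example-a1b2"
  service = "storage.googleapis.com"

  audit_log_config {
    log_type = "DATA_READ"
  }
  audit_log_config {
    log_type = "DATA_WRITE"
  }
}
```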

Enable Access Approval

Access Approval ensures that Cloud Customer Care and engineering require your explicit approval whenever they need to access your customer content.

Evaluate the operational process required to review Access Approval requests to mitigate possible delays in resolving support incidents.
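Enrollment can be sketched at the organization level as follows; the organization ID and notification group are hypothetical examples.

```hcl
# Hypothetical sketch: enroll all supported services in Access Approval
# at the organization level and notify a security group.
resource "google_organization_access_approval_settings" "this" {
  organization_id     = "123456789012"
  notification_emails = ["grp-gcp-securityteam@example.com"]

  enrolled_services {
    cloud_product = "all"
  }
}
```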

Enable Key Access Justifications

Key Access Justifications lets you programmatically control whether Google can access your encryption keys, including for automated operations and for Customer Care to access your customer content.

Evaluate the cost and operational overhead associated with Key Access Justifications as well as its dependency on Cloud External Key Manager (Cloud EKM).

Disable Cloud Shell

Cloud Shell is an online development environment. This shell is hosted on a Google-managed server outside of your environment, and thus it isn't subject to the controls that you might have implemented on your own developer workstations.

If you want to strictly control which workstations a developer can use to access cloud resources, disable Cloud Shell. You might also evaluate Cloud Workstations for a configurable workstation option in your own environment.

Restrict access to the Google Cloud console

Google Cloud lets you restrict access to the Google Cloud console based on access level attributes like group membership, trusted IP address ranges, and device verification. Some attributes require an additional subscription to BeyondCorp Enterprise.

Evaluate the access patterns that you trust for user access to web-based applications such as the console as part of a larger zero trust deployment.

Naming conventions

We recommend that you have a standardized naming convention for your Google Cloud resources. The following table describes recommended conventions for resource names in the blueprint.

Resource | Naming convention

Folder

fldr-environment

environment is a description of the folder-level resources within the Google Cloud organization. For example, bootstrap, common, production, nonproduction, development, or network.

For example: fldr-production

Project ID

prj-environmentcode-description-randomid

  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). Shared VPC host projects use the environmentcode of the associated environment. Projects for networking resources that are shared across environments, like the interconnect project, use the net environment code.
  • description is additional information about the project. You can use short, human-readable abbreviations.
  • randomid is a randomized suffix to prevent collisions for resource names that must be globally unique and to make resource names harder for attackers to guess. The blueprint automatically adds a random four-character alphanumeric identifier.

For example: prj-c-logging-a1b2
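The convention can be sketched with Terraform locals and the random provider; the specific values here are illustrative, not the blueprint's own variables.

```hcl
# Illustrative sketch of the project ID convention; values are examples.
resource "random_id" "suffix" {
  byte_length = 2 # two bytes render as four hex characters, such as "a1b2"
}

locals {
  environment_code = "c"       # one of b, c, p, n, d, or net
  description      = "logging" # short, human-readable abbreviation
  project_id       = "prj-${local.environment_code}-${local.description}-${random_id.suffix.hex}"
}
```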

VPC network

vpc-environmentcode-vpctype-vpcconfig

  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net).
  • vpctype is one of shared, float, or peer.
  • vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not.

For example: vpc-p-shared-base

Subnet

sn-environmentcode-vpctype-vpcconfig-region{-description}

  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net).
  • vpctype is one of shared, float, or peer.
  • vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not.
  • region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast).
  • description is additional information about the subnet. You can use short, human-readable abbreviations.

For example: sn-p-shared-restricted-uswest1

Firewall policies

fw-firewalltype-scope-environmentcode{-description}

  • firewalltype is hierarchical or network.
  • scope is global or the Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast).
  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net) that owns the policy resource.
  • description is additional information about the hierarchical firewall policy. You can use short, human-readable abbreviations.

For example:

fw-hierarchical-global-c-01

fw-network-uswest1-p-shared-base

Cloud Router

cr-environmentcode-vpctype-vpcconfig-region{-description}

  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net).
  • vpctype is one of shared, float, or peer.
  • vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not.
  • region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast).
  • description is additional information about the Cloud Router. You can use short, human-readable abbreviations.

For example: cr-p-shared-base-useast1-cr1

Cloud Interconnect connection

ic-dc-colo

  • dc is the name of your data center to which a Cloud Interconnect is connected.
  • colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with.

For example: ic-mydatacenter-lgazone1

Cloud Interconnect VLAN attachment

vl-dc-colo-environmentcode-vpctype-vpcconfig-region{-description}

  • dc is the name of your data center to which a Cloud Interconnect is connected.
  • colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with.
  • environmentcode is a short form of the environment field (one of b, c, p, n, d, or net).
  • vpctype is one of shared, float, or peer.
  • vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not.
  • region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast).
  • description is additional information about the VLAN. You can use short, human-readable abbreviations.

For example: vl-mydatacenter-lgazone1-p-shared-base-useast1-cr1

Group

grp-gcp-description@example.com

Where description is additional information about the group. You can use short, human-readable abbreviations.

For example: grp-gcp-billingadmin@example.com

Custom role

rl-description

Where description is additional information about the role. You can use short, human-readable abbreviations.

For example: rl-customcomputeadmin

Service account

sa-description@projectid.iam.gserviceaccount.com

Where:

  • description is additional information about the service account. You can use short, human-readable abbreviations.
  • projectid is the globally unique project identifier.

For example: sa-terraform-net@prj-b-seed-a1b2.iam.gserviceaccount.com

Storage bucket

bkt-projectid-description

Where:

  • projectid is the globally unique project identifier.
  • description is additional information about the storage bucket. You can use short, human-readable abbreviations.

For example: bkt-prj-c-infra-pipeline-a1b2-app-artifacts

Alternatives to default recommendations

The best practices that are recommended in the blueprint might not work for every customer. You can customize any of the recommendations to meet your specific requirements. The following table introduces some of the common variations that you might require based on your existing technology stack and ways of working.

Decision area | Possible alternatives

Organization: The blueprint uses a single organization as the root node for all resources.

Decide a resource hierarchy for your Google Cloud landing zone introduces scenarios in which you might prefer multiple organizations, such as the following:

  • Your organization includes sub-companies that are likely to be sold in the future or that run as completely separate entities.
  • You want to experiment in a sandbox environment with no connectivity to your existing organization.

Folder structure: The blueprint has a simple folder structure, with workloads divided into production, nonproduction, and development folders at the top layer.

Decide a resource hierarchy for your Google Cloud landing zone introduces other approaches for structuring folders based on how you want to manage resources and inherit policies, such as:

  • Folders based on application environments
  • Folders based on regional entities or subsidiaries
  • Folders based on an accountability framework

Organization policies: The blueprint enforces all organization policy constraints at the organization node.

You might have different security policies or ways of working for different parts of the business. In this scenario, enforce organization policy constraints at a lower node in the resource hierarchy. Review the complete list of organization policy constraints to identify those that help meet your requirements.

Deployment pipeline tooling: The blueprint uses Cloud Build to run the automation pipeline.

You might prefer other products for your deployment pipeline, such as Terraform Enterprise, GitLab Runners, GitHub Actions, or Jenkins. The blueprint includes alternative directions for each product.

Code repository for deployment: The blueprint uses Cloud Source Repositories as the managed private Git repository.

Use your preferred version control system for managing code repositories, such as GitLab, GitHub, or Bitbucket.

If you use a private repository that is hosted in your on-premises environment, configure a private network path from your repository to your Google Cloud environment.

Identity provider: The blueprint assumes an on-premises Active Directory and federates identities to Cloud Identity using Google Cloud Directory Sync.

If you already use Google Workspace, you can use the Google identities that are already managed in Google Workspace.

If you don't have an existing identity provider, you might create and manage user identities directly in Cloud Identity.

If you have an existing identity provider, such as Okta, Ping, or Microsoft Entra ID (formerly Azure AD), you might manage user accounts in your existing identity provider and synchronize to Cloud Identity.

If you have data sovereignty or compliance requirements that prevent you from using Cloud Identity, and if you don't require managed Google user identities for other Google services such as Google Ads or Google Marketing Platform, then you might prefer workforce identity federation. In this scenario, be aware of limitations with supported services.

Multiple regions: The blueprint deploys regional resources into two different Google Cloud regions to help you design workloads with high availability and disaster recovery in mind.

If you have end users in other geographical locations, you might configure additional Google Cloud regions to create resources closer to those users with less latency.

If you have data sovereignty constraints or your availability needs can be met in a single region, you might configure only one Google Cloud region.

IP address allocation: The blueprint provides a set of IP address ranges.

You might need to change the specific IP address ranges that are used based on the IP address availability in your existing hybrid environment. If you modify the IP address ranges, use the blueprint as guidance for the number and size of ranges required, and review the valid IP address ranges for Google Cloud.

Hybrid networking: The blueprint uses Dedicated Interconnect across multiple physical sites and Google Cloud regions for maximum bandwidth and availability.

Depending on your requirements for cost, bandwidth, and reliability, you might configure Partner Interconnect or Cloud VPN instead.

If you need to start deploying resources with private connectivity before a Dedicated Interconnect can be completed, you might start with Cloud VPN and change to using Dedicated Interconnect later.

If you don't have an existing on-premises environment, you might not need hybrid networking at all.

VPC Service Controls perimeter: The blueprint recommends a single perimeter that includes all the service projects that are associated with a restricted VPC network. Projects that are associated with a base VPC network are not included inside the perimeter.

You might have a use case that requires multiple perimeters for an organization or you might decide not to use VPC Service Controls at all.

For more information, see Decide how to mitigate data exfiltration through Google APIs.

Secret Manager: The blueprint deploys a project for using Secret Manager in the common folder for organization-wide secrets, and a project in each environment folder for environment-specific secrets.

If you have a single team that is responsible for managing and auditing sensitive secrets across the organization, you might prefer to use only a single project for managing access to secrets.

If you let workload teams manage their own secrets, you might not use a centralized project for managing access to secrets, and instead let teams use their own instances of Secret Manager in workload projects.

Cloud KMS: The blueprint deploys a project for using Cloud KMS in the common folder for organization-wide keys, and a project for each environment folder for keys in each environment.

If you have a single team that is responsible for managing and auditing encryption keys across the organization, you might prefer to use only a single project for managing access to keys. A centralized approach can help meet compliance requirements like PCI key custodians.

If you let workload teams manage their own keys, you might not use a centralized project for managing access to keys, and instead let teams use their own instances of Cloud KMS in workload projects.

Aggregated log sinks: The blueprint configures a set of log sinks at the organization node so that a central security team can review audit logs from across the entire organization.

You might have different teams who are responsible for auditing different parts of the business, and these teams might require different logs to do their jobs. In this scenario, design multiple aggregated sinks at the appropriate folders and projects and create filters so that each team receives only the necessary logs, or design log views for granular access control to a common log bucket.
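A folder-scoped aggregated sink of this kind can be sketched as follows; the folder ID, filter, and log bucket destination are hypothetical examples.

```hcl
# Hypothetical sketch: a folder-scoped aggregated sink that routes only
# audit logs to a log bucket that a single team reviews.
resource "google_logging_folder_sink" "team_audit" {
  name             = "sk-team-audit"
  folder           = "folders/123456789012"
  include_children = true # also capture logs from child projects and folders
  filter           = "logName:\"cloudaudit.googleapis.com\""
  destination      = "logging.googleapis.com/projects/prj-c-logging-a1b2/locations/global/buckets/team-audit"
}
```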

Monitoring scoping projects: The blueprint configures a single monitoring scoping project for each environment.

You might configure more granular scoping projects that are managed by different teams, scoped to the set of projects that contain the applications that each team manages.

Granularity of infrastructure pipelines: The blueprint uses a model where each business unit has a separate infrastructure pipeline to manage their workload projects.

You might prefer a single infrastructure pipeline that is managed by a central team if you have a central team who is responsible for deploying all projects and infrastructure. This central team can accept pull requests from workload teams to review and approve before project creation, or the team can create the pull request themselves in response to a ticketing system.

You might prefer more granular pipelines if individual workload teams have the ability to customize their own pipelines and you want to design more granular privileged service accounts for the pipelines.

SIEM exports: The blueprint manages all security findings in Security Command Center.

Decide whether you will export security findings from Security Command Center to tools such as Chronicle or your existing SIEM, or whether teams will use the console to view and manage security findings. You might configure multiple exports with unique filters for different teams with different scopes and responsibilities.

DNS lookups for Google Cloud services from on-premises: The blueprint configures a unique Private Service Connect endpoint for each Shared VPC, which can help enable designs with multiple VPC Service Controls perimeters.

You might not require routing from an on-premises environment to Private Service Connect endpoints at this level of granularity if you don't require multiple VPC Service Controls perimeters.

Instead of mapping on-premises hosts to Private Service Connect endpoints by environment, you might simplify this design to use a single Private Service Connect endpoint with the appropriate API bundle, or use the generic endpoints for private.googleapis.com and restricted.googleapis.com.
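A single Private Service Connect endpoint for the all-apis bundle can be sketched as follows; the project, network, endpoint names, and internal IP address are hypothetical examples.

```hcl
# Hypothetical sketch: one Private Service Connect endpoint for the
# "all-apis" bundle; project, network, names, and address are examples.
resource "google_compute_global_address" "psc_ip" {
  name         = "psc-googleapis-ip"
  project      = "prj-net-dns"
  network      = "vpc-p-shared-base"
  address_type = "INTERNAL"
  purpose      = "PRIVATE_SERVICE_CONNECT"
  address      = "10.3.0.5"
}

resource "google_compute_global_forwarding_rule" "psc_endpoint" {
  name                  = "pscgoogapis" # PSC endpoint names are short and lowercase
  project               = "prj-net-dns"
  network               = "vpc-p-shared-base"
  ip_address            = google_compute_global_address.psc_ip.id
  target                = "all-apis"
  load_balancing_scheme = "" # must be empty for Private Service Connect
}
```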

What's next