Resource deployment

Resources in the example.com reference architecture can be grouped into one of two categories: foundation or workload components. Foundation resources need to be tightly secured, governed, and audited to help avoid exposing the enterprise to any security or compliance risks. Foundation resources include resources in the hierarchy such as organizational policies, folders, projects, APIs, identities (for example, service accounts), role bindings, custom role definitions, Shared VPC networks, subnets, routes (dynamic and static), firewall rules, and Dedicated Interconnect connections. Workload components include resources such as Cloud SQL databases and Google Kubernetes Engine clusters.

Resources within Google Cloud can be deployed from the Google Cloud console web interface, using the gcloud command-line tool, using API calls directly, or using an infrastructure as code (IaC) tool. For creating the foundation elements of your enterprise's environment, you should minimize the amount of manual configuration so that you limit the possibility of human error. In the example.com reference architecture, Terraform is used for IaC, deployed through a pipeline that's implemented using Jenkins.

The combination of Terraform and Jenkins allows the foundation elements of example.com to be deployed in a consistent and controllable manner. The consistency and controllability of this approach helps enable governance and policy controls across the example.com Google Cloud environment. The example.com reference pipeline architecture is designed to be consistent with most enterprises' security controls.
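
As a concrete illustration, a foundation resource such as an organization policy can be expressed as Terraform code along the lines of the following minimal sketch; the organization ID is a placeholder and the constraint is only an example.

    resource "google_organization_policy" "require_shielded_vm" {
      org_id     = "123456789012"                # placeholder organization ID
      constraint = "compute.requireShieldedVm"   # example constraint only

      boolean_policy {
        enforced = true
      }
    }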

CICD and seed projects

The example.com organization uses the Cloud Foundation Toolkit to create the basic resources necessary to stand up an IaC environment within Google Cloud. This process creates a bootstrap folder at the root of the organization that contains a CICD project and a seed Terraform project.

The CICD project is a tightly controlled project within the organization hierarchy that's used to host the Jenkins deployment pipeline. It also hosts a service account that's used to run the Jenkins agents' Compute Engine instances. The Jenkins service account can impersonate the Terraform service account, which is located in the seed project, and which has permissions to deploy the foundation structures within the organization. The CICD project is created through a scripted process. It has direct connectivity to the on-premises environment, separate from the connectivity described in Hub-and-spoke.

The seed project contains the Terraform state of the foundation infrastructure, a highly privileged service account that's able to create new infrastructure, and the encryption configuration that protects that state. When the CI/CD pipeline runs, it impersonates this service account. The CICD project and the seed project are kept separate because of separation of concerns: Terraform is the IaC tool and has its own requirements, while deploying that IaC is the responsibility of the CI/CD pipeline.
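
A minimal sketch of how this separation can surface in the Terraform configuration itself, assuming a hypothetical state bucket and service account in the seed project: the state lives in Cloud Storage, and the provider impersonates the privileged Terraform service account instead of using an exported key.

    terraform {
      backend "gcs" {
        bucket = "bkt-example-tfstate"   # hypothetical state bucket in the seed project
        prefix = "foundation/org"
      }
    }

    provider "google" {
      # The pipeline's own identity impersonates the privileged Terraform
      # service account in the seed project; no service account key is exported.
      impersonate_service_account = "sa-terraform-org@prj-b-seed.iam.gserviceaccount.com"   # hypothetical
    }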

Deployment pipeline architecture

Figure 2.5.1 shows the foundation deployment pipeline for example.com. Terraform code that defines the example.com infrastructure is stored in an on-premises Git repository. Code changes to the main branch of the repository trigger a webhook that in turn triggers a deployment of the Terraform code into the example.com organization.

The Jenkins pipeline is implemented using a horizontal distributed-build architecture. In this model, a Jenkins manager (master) handles HTTP requests and manages the build environment, while the execution of builds is delegated to the Jenkins agents in the CICD project.

The Jenkins manager is hosted on-premises, co-located with the source-control management system, while the Jenkins agents are hosted in the Google Cloud environment. Each Jenkins agent is a simple SSH server with Java and Docker installed; it needs access only to the manager's SSH public key. The Jenkins manager in turn acts as an SSH client: it uses the SSH private key to connect to the agent and deploys an executable Java archive (JAR) file that lets the manager instruct the agent to run the pipeline. The pipeline creates temporary containerized Terraform workers that deploy and modify the example.com infrastructure.
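
For illustration only, a Jenkins agent instance of this kind might be declared in Terraform roughly as follows; the project, subnet, image, SSH key, and service account values are all placeholders, and the startup tooling that installs Java and Docker is omitted.

    resource "google_compute_instance" "jenkins_agent" {
      name         = "jenkins-agent-01"    # placeholder
      project      = "prj-b-cicd"          # placeholder CICD project ID
      zone         = "us-central1-a"
      machine_type = "e2-medium"

      boot_disk {
        initialize_params {
          image = "debian-cloud/debian-11"
        }
      }

      network_interface {
        subnetwork = "sb-cicd-agents"      # placeholder subnet with connectivity to the on-premises manager
      }

      # The manager connects over SSH, so the agent needs only the manager's public key.
      metadata = {
        "ssh-keys" = "jenkins:ssh-rsa AAAA...placeholder... jenkins-manager"
      }

      service_account {
        email  = "sa-jenkins-agent@prj-b-cicd.iam.gserviceaccount.com"   # placeholder custom service account
        scopes = ["cloud-platform"]
      }
    }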

Figure 2.5.1 The example.com foundation deployment pipeline

Table 2.5.1 details the security and governance controls that are integrated with the pipeline.

Control Description
Pull request (PR) Code that's merged with the main branch needs to have an approved PR.
Policy checks Policy checks are enforced by the pipeline against the Terraform code using Terraform Validator.
Deployment approval An optional manual approval stage is included in the pipeline for deploying code.

Table 2.5.1 The example.com foundation pipeline security controls

Using this architecture in Google Cloud allows a clean authentication and authorization process for Google Cloud APIs. The Jenkins agent uses a custom service account that doesn't have permission to create new infrastructure and that instead impersonates the Terraform service account to deploy infrastructure. As you can see in Table 2.5.2, the Terraform service account in turn has the administrative roles that are needed in order to create organization policies, folders, projects, and other foundation components. Because these service accounts carry very sensitive permissions, access to the CICD and seed projects that host them is highly restricted.

Role title Role
Access Context Manager Admin roles/accesscontextmanager.policyAdmin
Billing Account User roles/billing.user
Compute Network Admin roles/compute.networkAdmin
Compute Shared VPC Admin roles/compute.xpnAdmin
Folder Admin roles/resourcemanager.folderAdmin
Logs Configuration Writer roles/logging.configurationWriter
Organization Administrator roles/resourcemanager.organizationAdmin
Organization Policy Administrator roles/orgpolicy.policyAdmin
Organization Viewer roles/resourcemanager.organizationViewer
Security Center Notification Configurations Editor roles/securitycenter.notificationConfigEditor
Service Account Admin roles/iam.serviceAccountAdmin
Security Admin roles/iam.securityAdmin

Table 2.5.2 Service account permissions for the foundation deployment pipeline

Running the agents in Compute Engine removes the need to download a service account JSON key, which would be required if the agents ran on-premises or in any other environment outside of Google Cloud. Within the example.com organization, only the Jenkins agents and select firecall accounts have permissions to deploy foundation components.
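
The impersonation relationship and the organization-level grants described in this section might be expressed in Terraform roughly as follows; the service account emails and organization ID are placeholders, and only one of the roles from Table 2.5.2 is shown.

    # Allow the Jenkins agent service account to impersonate the Terraform service account.
    resource "google_service_account_iam_member" "jenkins_impersonates_terraform" {
      service_account_id = "projects/prj-b-seed/serviceAccounts/sa-terraform-org@prj-b-seed.iam.gserviceaccount.com"   # placeholder
      role               = "roles/iam.serviceAccountTokenCreator"
      member             = "serviceAccount:sa-jenkins-agent@prj-b-cicd.iam.gserviceaccount.com"                        # placeholder
    }

    # Grant one of the organization-level roles from Table 2.5.2 to the Terraform service account.
    resource "google_organization_iam_member" "terraform_folder_admin" {
      org_id = "123456789012"   # placeholder organization ID
      role   = "roles/resourcemanager.folderAdmin"
      member = "serviceAccount:sa-terraform-org@prj-b-seed.iam.gserviceaccount.com"
    }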

Project deployment

Projects in the example.com organization are deployed through the deployment pipeline. This section describes project attributes that are assigned when the project is created.

Project labels

Project labels are key-value pairs that are included with the billing export into BigQuery, enabling enhanced analysis of billing and resource usage. Table 2.5.3 details the project labels that are added to each project in the example.com deployment; a Terraform sketch follows the table.

Label Description
business-code A 4-character code that describes which business unit owns the project. The code abcd is used for projects that are not explicitly tied to a business unit.
billing-code A code that's used to provide chargeback information.
primary-contact The primary email contact for the project.
secondary-contact The secondary email contact for the project.
environment A value that identifies the type of environment, such as nonprod or prod.

Table 2.5.3 Project labels for example.com
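
The following is a minimal, hypothetical sketch of a project definition that carries these labels and is linked to the primary billing account. All identifiers are placeholders, and because label values must be lowercase with no special characters, the contacts are recorded without the email domain.

    resource "google_project" "sample" {
      name            = "prj-bu1-d-sample"        # placeholder
      project_id      = "prj-bu1-d-sample"
      folder_id       = "folders/123456789012"    # placeholder development folder
      billing_account = "AAAAAA-BBBBBB-CCCCCC"    # placeholder primary billing account

      labels = {
        "business-code"     = "abcd"
        "billing-code"      = "1234"
        "primary-contact"   = "alice"
        "secondary-contact" = "bob"
        "environment"       = "dev"
      }
    }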

IAM permissions

IAM permissions are defined and created on a per-project basis as part of project deployment.

Google Cloud APIs

Google Cloud APIs are enabled on a per-project basis, and the pipeline has policy checks in place to ensure that only approved APIs can be enabled using an allow/deny list.
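
For example, the pipeline might enable an approved set of APIs on a project with resources like the following sketch; the project ID and the API list are placeholders.

    # Enable only approved APIs; the pipeline's policy checks reject anything else.
    resource "google_project_service" "enabled_apis" {
      for_each = toset([
        "compute.googleapis.com",
        "sqladmin.googleapis.com",
      ])

      project            = "prj-bu1-d-sample"   # placeholder project ID
      service            = each.value
      disable_on_destroy = false
    }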

Billing account

New projects are linked to the primary billing account. Chargeback to the appropriate business unit is enabled through the use of project labels, as described in Billing exports and chargeback.

Networking

Project networking structures such as VPC networks, subnets, firewalls, and routes are enabled through the deployment pipeline.
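
A simplified sketch of such structures in Terraform, using placeholder names, an arbitrary IP range, and an illustrative deny-all-egress rule:

    resource "google_compute_network" "shared_base" {
      name                    = "vpc-d-shared-base"   # placeholder
      project                 = "prj-d-shared-base"   # placeholder Shared VPC host project
      auto_create_subnetworks = false
      routing_mode            = "GLOBAL"
    }

    resource "google_compute_subnetwork" "shared_base_us_east1" {
      name          = "sb-d-shared-base-us-east1"
      project       = google_compute_network.shared_base.project
      region        = "us-east1"
      network       = google_compute_network.shared_base.id
      ip_cidr_range = "10.0.64.0/21"                  # placeholder range
    }

    resource "google_compute_firewall" "deny_all_egress" {
      name      = "fw-d-shared-base-deny-all-egress"
      project   = google_compute_network.shared_base.project
      network   = google_compute_network.shared_base.name
      direction = "EGRESS"
      priority  = 65530

      deny {
        protocol = "all"
      }

      destination_ranges = ["0.0.0.0/0"]
    }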

Project editor

There are two project editors associated with a project. One is the custom service account that's used by the deployment pipeline in the seed project. The other project editor is a firecall account that can be used if automation breaks down or in an emergency, as described in Privileged identities.
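
In Terraform, those two bindings might look like the following sketch; the project ID, service account email, and firecall identity are placeholders.

    # The deployment pipeline's service account is one project editor.
    resource "google_project_iam_member" "pipeline_editor" {
      project = "prj-bu1-d-sample"   # placeholder
      role    = "roles/editor"
      member  = "serviceAccount:sa-project-pipeline@prj-b-seed.iam.gserviceaccount.com"   # placeholder
    }

    # A firecall account is the other project editor, for use in an emergency.
    resource "google_project_iam_member" "firecall_editor" {
      project = "prj-bu1-d-sample"
      role    = "roles/editor"
      member  = "user:firecall-admin@example.com"   # placeholder firecall identity
    }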

Repository structure

The example.com code repository is distributed as a combined single repository to make it easy for you to fork, copy, and use. However, the code has been constructed in such a way that each step in the code is executed through a separate Jenkins job and repository. The top-level folders and the contents of each folder are shown in Table 2.5.4. You can find the code for example.com in the terraform-example-foundation GitHub repository.

Folder Description example.com components
0-bootstrap This is where initial projects and IAM permissions are deployed for subsequent IaC stages (1-4). bootstrap folder
  • seed project
  • CICD project
    • Jenkins pipeline
    • Terraform Validator
1-org This is for organization-wide concerns such as policy, log exports, IAM, and so on. organization policy
organization-wide IAM settings
common folder
  • base_network_hub
  • billing export project
  • log export project
  • interconnect project
  • org-wide secrets project
  • DNS project
  • restricted_network_hub
  • SCC notifications project
2-environments This is for modular creation of new top-level environments, including required projects and the top-level folder. dev folder
  • base Shared VPC host projects
  • restricted Shared VPC host projects
  • environment secrets projects
  • environment monitoring project
nonprod folder
  • (same projects as dev folder)
prod folder
  • (same projects as dev folder)
3-networks This is for modular creation and management of VPC networks. VPC networks
firewall rules
Cloud Router instances
routes
4-projects This is for creation of projects for different teams or business units, with an application workload focus. application projects

Table 2.5.4 The example.com repository structure

Foundation creation and branching strategy

To build out the example.com foundation, you start by creating a fork of the example.com repository and then create separate repositories for each of the folders in the example.com repository. Once you've created separate repositories, you manually deploy the code that's in the 0-bootstrap repository. The code in this repository creates the initial projects and the foundation deployment pipeline. After the code from the 0-bootstrap folder has run, the code in the 1-org folder runs through the foundation deployment pipeline to create organization-wide settings and the common folder.

After the code has been deployed in the 0-bootstrap and 1-org folders, the remainder of the foundation is built by deploying the code from the 2-environments, 3-networks, and 4-projects folders. These stages use a persistent branching strategy to deploy code through the foundation deployment pipeline to the appropriate environment.

As shown in Figure 2.5.2, the example.com organization uses three branches (development, non-production, and production) that reflect the corresponding environments. You should protect each branch through a PR process. Development happens on feature branches that branch off development.

Figure 2.5.2 The example.com deployment branching strategy

When a feature or bug fix is complete, you can open a PR that targets the development branch. Submitting the PR triggers the foundation pipeline to perform a plan and to validate the changes against all environments. After you've validated the changes to the code, you can merge the feature or bug fix into the development branch. The merge process triggers the foundation pipeline to apply the latest changes in the development branch on the development environment. After the changes have been validated in the development environment, changes can be promoted to non-production by opening a PR that targets the non-production branch and merging those changes. Similarly, changes can be promoted from non-production to production.

The foundation pipeline and workloads

The pipeline architecture that's laid out in the sections from CICD and seed projects through Repository structure deploys the foundation layer of the Google Cloud organization. You should not use this pipeline to deploy higher-level services or applications. Furthermore, as shown in Figure 2.5.3, access to Google Cloud through deployment pipelines is only one potential access pattern. You might need to evaluate the access pattern and the associated controls for each workload individually.

Figure 2.5.3 Access patterns for example.com

The example.com foundation provides you with an infrastructure pipeline, detailed in The infrastructure pipeline, that you can use to deploy infrastructure components such as a Cloud SQL instance or a Google Kubernetes Engine (GKE) cluster. The secured foundation also includes an example application pipeline, described in The application pipeline, that you can use to deploy containers to GKE clusters. The application pipeline is maintained in a separate repository from the secured foundation.

The infrastructure pipeline

The infrastructure pipeline that comes with the example.com foundation builds off the Cloud Build-based code of the foundation pipeline. The infrastructure pipeline manages the lifecycle of the infrastructure components independently of the foundation components. The service account that's associated with the infrastructure pipeline has a more limited set of permissions compared to the service account that's associated with the foundation pipeline.

When the foundation pipeline creates a project, it creates a service account that has a controlled set of permissions. The service account that's associated with the infrastructure pipeline is allowed to impersonate the project service account and to perform only those actions that are permitted to the project service account. This strategy allows you to create a clear separation of duties between the people who deploy foundation components and those who deploy infrastructure components.

The infrastructure pipeline is created by the foundation pipeline and is deployed in the prj-bu1-c-infra-pipeline and prj-bu2-c-infra-pipeline projects in the common folder, rather than in the CICD project in the bootstrap folder where the foundation pipeline is deployed. To deploy infrastructure components, you create a separate repository to define those components, such as 5-infrastructure. You can then deploy components to your various environments using the same branching strategy that the foundation pipeline uses.
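
For illustration, an infrastructure pipeline of this kind could define one Cloud Build trigger per persistent branch, along the lines of the following sketch; the project, repository, and build file names are placeholders.

    # One trigger per persistent branch; each applies the infrastructure code to its environment.
    resource "google_cloudbuild_trigger" "infra" {
      for_each = toset(["development", "non-production", "production"])

      project = "prj-bu1-c-infra-pipeline"    # placeholder
      name    = "tf-apply-${each.value}"

      trigger_template {
        repo_name   = "bu1-infrastructure"    # placeholder repository that holds the 5-infrastructure code
        branch_name = "^${each.value}$"
      }

      filename = "cloudbuild-tf-apply.yaml"   # placeholder build configuration
    }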

If your organization has multiple business units and you want each business unit to be able to deploy infrastructure components independently, as seen in Figure 2.5.4, you can create multiple infrastructure pipelines and repositories. Each of the infrastructure pipelines can have a service account with permission to impersonate only the service accounts of the projects that are associated with that business unit.

Figure 2.5.4 Multiple infrastructure pipelines

The application pipeline

The application pipeline that's used with the secured foundation is a Cloud Build-based implementation of the secured shift-left blueprint pipeline that enables you to deploy applications to a Kubernetes cluster. The application pipeline consists of a continuous integration (CI) pipeline and a continuous delivery (CD) pipeline, as shown in Figure 2.5.5.

Figure 2.5.5 Application deployment pipeline

The application pipeline uses immutable container images across your environments. This means that the same image is deployed across all environments and will not be modified while it's running. If you must update the application code or apply a patch, you build a new image and redeploy it. The use of immutable container images requires you to externalize your container configuration so that configuration information is read at runtime.

Continuous integration

The application CI pipeline starts when you commit your application code to the release branch; this commit triggers the Cloud Build pipeline. Cloud Build creates a container image and pushes it to Container Registry, which produces an image digest. When you build your application, you should follow best practices for building containers.

After the container image has been pushed to Container Registry, the image is analyzed using the Container Structure Test framework. This framework performs command tests, file existence tests, file content tests, and metadata tests. The container image then goes through vulnerability scanning against a vulnerability database that's maintained by Google Cloud. To help prevent compromised artifacts from being introduced, IAM policies ensure that only the Cloud Build service account can contribute to the repository.

After the container has successfully gone through vulnerability scanning, the application pipeline uses Binary Authorization to sign an image. Binary Authorization is a service on Google Cloud that provides software supply-chain security for container-based applications by using policies, rules, notes, attestations, attestors, and signers. At deployment time, the Binary Authorization policy enforcer ensures the provenance of the container before allowing the container to deploy.
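
A hedged sketch of how a policy that requires a signed attestation might be declared in Terraform; the project, note, and attestor names are placeholders, and the signer key configuration is omitted.

    resource "google_binary_authorization_attestor" "build_attestor" {
      name    = "build-attestor"        # placeholder
      project = "prj-bu1-c-app-cicd"    # placeholder CI/CD project

      attestation_authority_note {
        note_reference = "projects/prj-bu1-c-app-cicd/notes/build-attestor-note"   # placeholder Container Analysis note
        # Public key configuration for the signer is omitted from this sketch.
      }
    }

    resource "google_binary_authorization_policy" "policy" {
      project = "prj-bu1-c-app-cicd"

      default_admission_rule {
        evaluation_mode         = "REQUIRE_ATTESTATION"
        enforcement_mode        = "ENFORCED_BLOCK_AND_AUDIT_LOG"
        require_attestations_by = [google_binary_authorization_attestor.build_attestor.name]
      }
    }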

Continuous delivery

The application CD pipeline uses Cloud Build and Anthos Config Management to enable you to deploy container images to your development, non-production, and production environments. Deployments to your environments are controlled through a persistent branching strategy. To start your deployment, you submit a Kubernetes manifest into a "dry" repository on the development branch. The manifest contains the image digest of the container or containers that you want to deploy. The initial submission into the dry repository triggers Cloud Build to place the manifest into the "wet" repository.

The CD pipeline uses Anthos Config Management to monitor the wet repository and to deploy the container or containers that are specified in the manifest file or files to the GKE cluster that corresponds to the Git branch. Anthos Config Management synchronizes the state of your clusters with your Git repository by using Config Sync. If the Config Sync Operator fails to apply changes to a resource, the resource is left in the last known good state.

After you validate the container image in the development environment, you can promote changes to the non-production environment by opening a PR that targets the non-production branch and then merging the changes. Changes can be promoted from the non-production branch to the production branch in a similar way. In each promotion, the CD pipeline uses the Binary Authorization framework to create a new signed attestation that ensures the container image has the appropriate provenance. If you need to roll back, you can revert to the last known good commit.