This tutorial is the second part of a series that discusses building an automated continuous integration (CI) pipeline to build multi-architecture container images on Google Cloud.
In this tutorial, you implement a pipeline for building multi-architecture container images by using Cloud Build and Container Registry. This tutorial exemplifies the multi-architecture build strategy described in the Implementing a multi-architecture container images building pipeline section in part 1 of this series.
For example, suppose you maintain a fleet of Internet of Things (IoT) devices. As new requirements for your IoT solution emerge, you need new hardware devices. If the new devices have a different hardware architecture than your existing ones, you need to modify your build pipeline to support the new architecture.
This tutorial is intended for IT professionals who want to simplify and streamline complex pipelines for building container images, or to extend those pipelines to build multi-architecture images.
This tutorial assumes that you have a basic knowledge of the following:
- Terraform, for creating infrastructure on Google Cloud.
- Google Cloud CLI, for performing platform tasks on Google Cloud.
- Cloud Shell, for running commands in this tutorial. All the tools used in this tutorial are preinstalled in Cloud Shell.
- Cloud Build, for setting up a CI pipeline.
- Docker, as a container management platform.
- Container Registry, for storing the container images that the build process produces.
In this tutorial, you use Terraform to set up the resources that you need in order to provision and configure the pipeline for building container images.
Architecture
The following diagram illustrates the workflow for the pipeline that you create in this tutorial for building container images.
Changes made to the source code of the container image trigger Cloud Build to build a multi-architecture container image. When the build is completed, the multi-architecture container image is stored in Container Registry.
Objectives
- Use Terraform to provision the pipeline for building container images on Google Cloud.
- Modify the source code of the container image to trigger a new build.
- Inspect the container image that is stored in Container Registry.
Costs
In this document, you use the following billable components of Google Cloud:
To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Preparing your environment
In this tutorial, you run all commands in Cloud Shell.
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Clone the sample code repository:
cd "$HOME"
git clone https://github.com/GoogleCloudPlatform/solutions-build-multi-architecture-images-tutorial.git
Generate application default credentials:
gcloud auth application-default login --quiet
The output is similar to the following:
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?code_challenge=...
Enter verification code:
In a browser window, open the URL that is displayed in the output from generating the application default credentials (the preceding step).
Select Allow to continue.
Copy the code on the screen and enter it into Cloud Shell.
The output is similar to the following:
/tmp/tmp.xxxxxxxxxx/application_default_credentials.json
Note the path to the application_default_credentials.json file. You use this path to set an environment variable in the next section.
Setting environment variables
Before you can provision the necessary infrastructure for this tutorial, you need to initialize and export the following environment variables:
In Cloud Shell, create an environment variable that stores the Google Cloud service account name that Terraform uses to provision resources:
export TF_SERVICE_ACCOUNT_NAME=tf-service-account
Create an environment variable that stores the Google Cloud project ID that Terraform uses to store the state:
export TF_STATE_PROJECT=${DEVSHELL_PROJECT_ID}
Create an environment variable that stores the Cloud Storage bucket that Terraform uses to save the state files:
export TF_STATE_BUCKET=tf-state-bucket-${TF_STATE_PROJECT}
Create an environment variable that stores the Google Cloud project ID that contains the resources for the container image build pipeline:
export GOOGLE_CLOUD_PROJECT=${DEVSHELL_PROJECT_ID}
Create an environment variable that stores the path to the application default credentials file, which is the value that you noted in the preceding section:
export GOOGLE_APPLICATION_CREDENTIALS=PATH
Replace PATH with the path to the application_default_credentials.json file.
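Because Cloud Storage bucket names must be globally unique, the bucket name embeds the project ID. You can quickly confirm how the derived value is composed; the project ID below is an example value for illustration, not one from this tutorial:

```shell
# Example values; in the tutorial, TF_STATE_PROJECT comes from DEVSHELL_PROJECT_ID.
export TF_STATE_PROJECT=example-project-id
export TF_STATE_BUCKET=tf-state-bucket-${TF_STATE_PROJECT}

# Print the derived bucket name
echo "$TF_STATE_BUCKET"
```

With the example project ID, this prints `tf-state-bucket-example-project-id`, showing that the bucket name is unique as long as the project ID is.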
Provisioning the environment
You need to run the generate-tf-backend.sh shell script, which generates the Terraform backend configuration, the necessary Google Cloud service accounts, and the Cloud Storage bucket that stores information about the Terraform remote state.
In Cloud Shell, provision your build environment:
cd "$HOME"/solutions-build-multi-architecture-images-tutorial/
./generate-tf-backend.sh
The script is idempotent and safe to run multiple times.
After you run the script successfully for the first time, the output is similar to the following:
Generating the descriptor to hold backend data in terraform/backend.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-bucket-project-id"
    prefix = "terraform/state"
  }
}
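Idempotent here means that re-running the script neither fails nor duplicates resources. The pattern can be sketched as follows; this is an illustration only, with a local directory standing in for the Cloud Storage bucket, not the actual script contents:

```shell
# Sketch of the idempotent pattern: create the resource only if it does not
# already exist. A local directory stands in for the Cloud Storage bucket.
STATE_DIR="$(mktemp -d)/tf-state-demo"

create_state_store() {
  if [ ! -d "$STATE_DIR" ]; then
    mkdir "$STATE_DIR"
    echo "created"
  else
    echo "already exists"
  fi
}

create_state_store   # first run: creates the store
create_state_store   # second run: no-op, store is already there
```

The first call prints `created`; every later call prints `already exists` instead of failing, which is why the script is safe to run multiple times.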
Creating the build pipeline
The Terraform template file terraform/main.tf defines the resources that are created for this tutorial. By running Terraform with that descriptor, you create the following Google Cloud resources:
- A Cloud Source Repositories code repository to store the container image descriptor and the Cloud Build build configuration file.
- A Pub/Sub topic where Cloud Build publishes messages on each source code change.
- A Cloud Build build that builds the multi-architecture container image.
- A Container Registry repository to store container images.
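The exact resource definitions live in terraform/main.tf in the sample repository. As an illustrative sketch only (resource labels and the branch name here are assumptions, not the tutorial's actual configuration), a source repository plus a Cloud Build trigger that runs the build configuration file on each push might look like this:

```hcl
# Illustrative sketch; see terraform/main.tf in the sample repository for
# the actual resource definitions used by this tutorial.
resource "google_sourcerepo_repository" "cross_build" {
  name = "cross-build"
}

resource "google_cloudbuild_trigger" "build_image" {
  trigger_template {
    repo_name   = google_sourcerepo_repository.cross_build.name
    branch_name = "master"
  }

  # Build configuration file checked into the source repository
  filename = "build-docker-image-trigger.yaml"
}
```

Defining the trigger in Terraform alongside the repository keeps the whole pipeline reproducible from a single `terraform apply`.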
In Cloud Shell, do the following:
To initialize the Terraform working directory, run the terraform init command:
cd terraform
terraform init
(Optional) To review the changes that Terraform is going to apply, run the terraform plan command:
terraform plan
The output is a list of all actions that Terraform is expected to perform to provision resources in the Google Cloud environment. The summary of all actions is similar to the following:
Plan: 8 to add, 0 to change, 0 to destroy.
The total number of add actions is 8, with no changes and no deletions.
Run the terraform apply command to create the resources in your Google Cloud project:
terraform apply
To continue running the command, enter yes.
Pushing the source files to Cloud Source Repositories
In order for the build pipeline to execute the build, the Dockerfile and the Cloud Build configuration files need to be stored in a Cloud Source Repositories source code repository.
In Cloud Shell, clone the source repository:
cd "$HOME"
gcloud source repos clone cross-build
Copy the Dockerfile and the Cloud Build configuration file into the source code repository:
cp -r "$HOME"/solutions-build-multi-architecture-images-tutorial/terraform/cloud-build/. "$HOME"/cross-build
Commit and push the files in the source code repository:
cd "$HOME"/cross-build
git add .
git commit -m "Initial commit"
git push
Inspecting the results
While the Cloud Build job is running and after its completion, you can inspect the execution of each build step on the Cloud Build Build history page.
Cloud Build build
On the Build history page, you get an overview of the build steps, together with the time it took to run each step, as the following illustration shows.
If you open a build step, you see the output for that step. For example, the build details of the buildx inspect step in the preceding illustration show the different target platform architectures that the builder supports:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
The build details for the fourth step show the output from the build for each target architecture:
#8 0.268 I am running on linux/amd64, building for linux/amd64
#12 0.628 I am running on linux/amd64, building for linux/arm/v7
#10 0.279 I am running on linux/amd64, building for linux/arm/v6
#14 0.252 I am running on linux/amd64, building for linux/arm64
Image manifest in Container Registry
After the build has completed, you can inspect the image manifest on the Container Registry Images page in the Google Cloud console, as the following illustration shows.
If you open the test repository in the repository list, you see all the container image versions that belong to the test repository, as the following illustration shows.
You can open the image that is tagged latest to go to the Digest details page, which shows detailed information about the image, as the following illustration shows.
On the Digest details page, you can expand the Manifest section and verify that all the target architectures created by the build are listed in the manifest, as the following example shows:
{
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:839024acb1038509e3bc66f3744857840951d0d512be54fd6670ea1e8babdcb6",
      "size": 735,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:33489767c29efb805e446a61d91cc55e042d3cfadcd186d9a1c8698f2f12309d",
      "size": 735,
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:f1958815778ca8c83d324bad3fc68a9e3e9d5ea48b5bb27a8aca7d8da20cf8d4",
      "size": 735,
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    }
  ]
}
You can also view the image manifest directly from Cloud Shell.
In Cloud Shell, display the image manifest:
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect gcr.io/"${DEVSHELL_PROJECT_ID}"/test:latest
The output is the same as the manifest file that you can expand on the Digest details page.
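If you want to script this check, you can extract just the architectures from the manifest list. The following sketch filters a saved, abbreviated sample of a manifest with grep; in practice you would pipe the output of the docker manifest inspect command into the same filter (the file path and sample contents here are illustrative):

```shell
# Abbreviated sample of a manifest list saved locally for illustration.
cat > /tmp/manifest-sample.json <<'EOF'
{
  "manifests": [
    { "platform": { "architecture": "amd64", "os": "linux" } },
    { "platform": { "architecture": "arm64", "os": "linux" } },
    { "platform": { "architecture": "arm", "os": "linux", "variant": "v7" } }
  ]
}
EOF

# List each architecture that appears in the manifest list
grep -o '"architecture": "[^"]*"' /tmp/manifest-sample.json | cut -d'"' -f4
```

For the sample above, this prints `amd64`, `arm64`, and `arm`, one per line, giving you a quick scriptable view of which architectures a tag covers.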
Configuring continuous deployment from the build pipeline
To build the container image for the new hardware architecture, you modify the build configuration file by adding the new target architecture. After you commit and push the change to the source repository in Cloud Source Repositories, Cloud Build starts a new build. The build produces a new version of the multi-architecture container image, including the support for the newly added hardware architecture.
In Cloud Shell, add the new target platform to the build configuration file:
cd "$HOME"/cross-build
sed -i -e 's/linux\/arm\/v7/linux\/arm\/v7,linux\/386/g' build-docker-image-trigger.yaml
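To see exactly what the substitution does before committing, you can dry-run the same sed expression on a sample line. The line below is a hypothetical sketch of a platform list, not the actual contents of the trigger file:

```shell
# Hypothetical sample line; the real trigger file has a different structure.
echo "platforms: linux/amd64,linux/arm64,linux/arm/v7" > /tmp/platforms-sample.txt

# Apply the same substitution as in the tutorial step
sed -i -e 's/linux\/arm\/v7/linux\/arm\/v7,linux\/386/g' /tmp/platforms-sample.txt

cat /tmp/platforms-sample.txt
# platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386
```

The expression appends `linux/386` immediately after `linux/arm/v7`, leaving the existing platforms untouched.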
Commit and push the change to the source code repository:
git add .
git commit -m "add a new target platform"
git push
View the latest manifest to verify that the new target platform is part of the latest container image:
DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect gcr.io/${DEVSHELL_PROJECT_ID}/test:latest
Verify that the newly added target platform appears in the manifest file, similar to the following output:
{
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:bc80d063fccb4c370df9b505cbf4f8a814a366d99644de09ebee98af2ef0ff63",
      "size": 735,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:be10e4f01f529149815ebad7eb09edaa84ebef5b7d70d51f7d1acb5ceb1f61cd",
      "size": 735,
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:f6ba5d5d3bc1ea0177e669517ea15a0d4fb97c06c7eca338afa43734d87af779",
      "size": 735,
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:a3c34621cca10974026f8ad0782af78539cd7bb0ebfa0082a27b2c3ed4418ca0",
      "size": 735,
      "platform": {
        "architecture": "386",
        "os": "linux"
      }
    }
  ]
}
Clean up
The easiest way to eliminate billing is to delete the Google Cloud project you created for the tutorial. Alternatively, you can delete the individual resources.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources
If you want to keep the project that you used in this tutorial, perform the following steps to delete the resources that you created in this tutorial.
In Cloud Shell, delete the container images:
gcloud container images delete gcr.io/${DEVSHELL_PROJECT_ID}/test --quiet
Delete the resources you provisioned using Terraform:
cd "$HOME"/solutions-build-multi-architecture-images-tutorial/terraform
terraform destroy
Enter yes to confirm the deletion.
What's next
- Read Multi-architecture container images for IoT devices.
- Learn more about managing infrastructure as code with Terraform, Cloud Build, and GitOps.
- Get more information about DevOps on the DevOps page.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.