This tutorial describes how to set up and use a development, continuous integration (CI), and continuous delivery (CD) system using an integrated set of Google Cloud tools. You can use this system to develop and deploy applications to Google Kubernetes Engine (GKE).
This tutorial is intended for both software developers and operators, and you play both roles as you complete it. First you act as the operator to set up the CI/CD pipeline. The main components of this pipeline are Cloud Build, Artifact Registry, and Google Cloud Deploy.
Then you act as a developer to change an application using Cloud Code. When you act as a developer, you see the integrated experience that this pipeline provides.
Finally, you act as an operator and go through the steps to deploy an application into production.
This tutorial assumes you're familiar with running gcloud commands on Google Cloud and with deploying application containers to GKE.
The following are the key features of this integrated system:
Develop and deploy faster.
The development loop is efficient because you can validate changes in the developer workspace. Deployment is fast because the automated CI/CD system and increased parity across the environments let you detect more issues earlier, before changes reach production.
Benefit from increased parity across development, staging, and production.
The components of this system use a common set of Google Cloud tools.
Reuse configurations across the different environments.
This reuse is done with Skaffold, which allows a common configuration format for the different environments. It also allows developers and operators to update and use the same configuration.
Apply governance early in the workflow.
This system applies validation tests for governance not only in production, but also in the CI system and the development environment. Applying governance in the development environment allows problems to be found and fixed earlier.
Let opinionated tooling manage your software delivery.
Continuous delivery is fully managed, separating the stages of your CD pipeline from the details of rendering and deploying.
Architecture overview
The following diagram shows the resources used in this tutorial:
The three main components of this pipeline are:
Cloud Code as a development workspace
As part of this workspace, you can see changes in the development cluster, which runs on minikube. You run Cloud Code and the minikube cluster in Cloud Shell. Cloud Shell is an online development environment accessible from your browser. It has compute resources, memory, an integrated development environment (IDE), and Cloud Code installed.
Cloud Build to build and test the application—the "CI" part of the pipeline
This part of the pipeline includes the following actions:
- Cloud Build monitors changes to the source repository, using a Cloud Build trigger.
- When a change is committed into the main branch, the Cloud Build trigger does the following:
- Rebuilds the application container.
- Places build artifacts in a Cloud Storage bucket.
- Places the application container in Artifact Registry.
- Runs tests on the container.
- Calls Google Cloud Deploy to deploy the container to the staging environment. In this tutorial, the staging environment is a Google Kubernetes Engine cluster.
- If the build and tests are successful, you can then use Google Cloud Deploy to promote the container from staging to production.
Google Cloud Deploy to manage the deployment—the "CD" part of the pipeline
In this part of the pipeline, Google Cloud Deploy does the following:
- Registers a delivery pipeline and targets. The targets represent the staging and production clusters.
- Creates a Cloud Storage bucket and stores the Skaffold rendering source and rendered manifests in that bucket.
- Generates a new release for each source code change. In this tutorial, you make one change, so the pipeline generates one new release.
- Deploys the application to the production environment. For this deployment to production, an operator (or other designated person) manually approves the deployment. In this tutorial, the production environment is a Google Kubernetes Engine cluster.
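Google Cloud Deploy reads the delivery pipeline and targets from declarative YAML files in the repository's deploy/ directory, which you register later in this tutorial. The following is a simplified sketch of what those definitions can look like; the resource names mirror the ones used in this tutorial, but the exact files in the sample repository may differ:

apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: cicd-sample
description: Delivery pipeline for cicd-sample
serialPipeline:
  stages:
  - targetId: staging
  - targetId: prod
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: staging
description: Staging cluster
gke:
  cluster: projects/PROJECT_ID/locations/us-central1/clusters/staging
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: prod
description: Production cluster
requireApproval: true   # rollouts to this target wait for manual approval
gke:
  cluster: projects/PROJECT_ID/locations/us-central1/clusters/prod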
Skaffold, a command-line tool that facilitates continuous development for Kubernetes-native applications, underlies these components, allowing configuration to be shared among the development, staging, and production environments.
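For example, a Skaffold configuration can declare how the application image is built once and then use profiles to select environment-specific manifests. This is a hedged sketch that follows the tutorial's naming, not the exact skaffold.yaml from the sample repository; the manifest paths are illustrative:

apiVersion: skaffold/v2beta16
kind: Config
build:
  artifacts:
  - image: cicd-sample             # the application image built by the CI pipeline
deploy:
  kubectl:
    manifests:
    - kubernetes/dev/*.yaml        # manifests used in the minikube dev cluster
profiles:
- name: staging
  deploy:
    kubectl:
      manifests:
      - kubernetes/staging/*.yaml  # staging-specific manifests
- name: prod
  deploy:
    kubectl:
      manifests:
      - kubernetes/prod/*.yaml     # production-specific manifests

Because developers, the CI system, and Google Cloud Deploy all read this one file, build and deploy behavior stays consistent across environments.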
The application's source code is stored in GitHub. As part of this tutorial, you clone that repository into Cloud Source Repositories to connect it to the CI/CD pipeline.
This tutorial uses Google Cloud products for most of the components of the system, with Skaffold enabling the integration of the system. Because Skaffold is open source, you can use these principles to create a similar system using a combination of Google Cloud, in-house, and third-party components. The modularity of this solution means that you can adopt it incrementally as part of your development and deployment pipeline.
Objectives
Acting as an operator, you do the following:
- Set up the CI pipeline and CD pipeline. This setup includes the following:
- Set up the required permissions.
- Create the GKE clusters for the staging and production environments.
- Create a repository in Cloud Source Repositories for the source code.
- Create a repository in Artifact Registry for the application container.
- Create a Cloud Build trigger on the main GitHub repository.
- Create a Google Cloud Deploy delivery pipeline and targets. The targets are the staging and production environments.
- Start the CI/CD process to deploy to staging and then promote to production.
Acting as a developer, you make a change to the application. To do so, you do the following:
- Clone the repository to work with a pre-configured development environment.
- Make a change to the application within your developer workspace.
- Build and test the change. The tests include a validation test for governance.
- View and validate the change in a dev cluster. This cluster runs on minikube.
- Commit the change into the main repository.
Costs
This tutorial uses the following billable components of Google Cloud:
- Cloud Build
- Google Cloud Deploy
- Artifact Registry
- Google Kubernetes Engine
- Cloud Source Repositories
- Cloud Storage
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.
Before you begin
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- Enable the Artifact Registry, Cloud Build, Google Cloud Deploy, Cloud Source Repositories, Google Kubernetes Engine, Resource Manager, and Service Networking APIs. A gcloud command for enabling these APIs is shown at the end of this section.
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
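If you prefer the command line, you can enable the required APIs from Cloud Shell instead of the console. The service names below are the standard API identifiers for the products listed earlier:

gcloud services enable \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com \
    clouddeploy.googleapis.com \
    sourcerepo.googleapis.com \
    container.googleapis.com \
    cloudresourcemanager.googleapis.com \
    servicenetworking.googleapis.com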
Prepare your environment
In this section, you act as the application operator and do the following:
- Set up the required permissions.
- Create the GKE clusters for the staging and production environments.
- Clone the source repository.
- Create a repository in Cloud Source Repositories for the source code.
- Create a repository in Artifact Registry for the application container.
Set up permissions
In this section, you grant the permissions that are needed to set up the CI/CD pipeline.
If you're working in a new instance of Cloud Shell Editor, specify the project to use for this tutorial:
gcloud config set project PROJECT_ID
Replace PROJECT_ID with the ID of the project you selected or created for this tutorial.
If a dialog is displayed, click Authorize.
Grant the service accounts the required permissions:
Make sure the default Compute Engine service account has sufficient permissions to run jobs in Google Cloud Deploy and pull containers from Artifact Registry. Cloud Build and Google Cloud Deploy use this default service account.
This service account might already have the necessary permissions. This step ensures the necessary permissions are granted for projects that disable automatic role grants for default service accounts.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:$(gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)")-compute@developer.gserviceaccount.com \
    --role="roles/clouddeploy.jobRunner"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:$(gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)")-compute@developer.gserviceaccount.com \
    --role="roles/artifactregistry.reader"
Grant the Cloud Build service account privilege to invoke deployments with Google Cloud Deploy and to update the delivery pipeline and the target definitions:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:$(gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)")@cloudbuild.gserviceaccount.com \
    --role="roles/clouddeploy.operator"
See the clouddeploy.operator role for more details about this IAM role.
Grant the Cloud Build and Google Cloud Deploy service account privilege to deploy to GKE:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:$(gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)")-compute@developer.gserviceaccount.com \
    --role="roles/container.admin"
See the container.admin role for more details about this IAM role.
Grant the Cloud Build service account the permissions needed to invoke Google Cloud Deploy operations:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:$(gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)")@cloudbuild.gserviceaccount.com \
    --role="roles/iam.serviceAccountUser"
When Cloud Build invokes Google Cloud Deploy, it uses the default Compute Engine service account to create a release, which is why this permission is needed.
See the iam.serviceAccountUser role for more details about this IAM role.
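Optionally, you can confirm the role bindings by inspecting the project's IAM policy from the command line; this quick check isn't part of the tutorial's required steps:

gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:$(gcloud projects describe PROJECT_ID --format='value(projectNumber)')-compute@developer.gserviceaccount.com" \
    --format="table(bindings.role)"

The output should include roles/clouddeploy.jobRunner and roles/artifactregistry.reader for the default Compute Engine service account.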
You have now granted the permissions needed for the CI/CD pipeline.
Create the GKE clusters
In this section, you create the staging and production environments, which are both GKE clusters. (You don't need to set up the development cluster here, because it uses minikube.)
Create the staging and production GKE clusters:
gcloud container clusters create-auto staging \
    --region us-central1 \
    --project=$(gcloud config get-value project) \
    --async
gcloud container clusters create-auto prod \
    --region us-central1 \
    --project=$(gcloud config get-value project) \
    --async
The staging cluster is where you test changes to your code. After you verify that the deployment in staging did not negatively affect the application, you deploy to production.
Run the following command and ensure that the output shows STATUS: RUNNING for both the staging and production clusters:

gcloud container clusters list
Retrieve the credentials to your kubeconfig file for the staging and production clusters. You use these credentials to interact with the GKE clusters, for example to check whether an application is running properly.

gcloud container clusters get-credentials staging --region us-central1
gcloud container clusters get-credentials prod --region us-central1
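These credentials are stored as kubeconfig contexts whose names encode the project, region, and cluster, for example gke_PROJECT_ID_us-central1_staging. You use these context names later with the kubectl --context flag. To list them, run the following optional check:

kubectl config get-contexts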
You have now created the GKE clusters for the staging and production environments.
Open the IDE and clone the repository
To clone the repository and view the application in your development environment, do the following:
Click Confirm.
Cloud Shell Editor opens and clones the sample repository.
You can now view the application's code in your Cloud Shell Editor.
Specify the project to use for this tutorial:
gcloud config set project PROJECT_ID
If a dialog is displayed, click Authorize.
You now have the source code for the application in your development environment.
This source repository includes the Cloud Build and Google Cloud Deploy files needed for the CI/CD pipeline.
Create repositories for the source code and for the containers
In this section you set up a repository in Cloud Source Repositories for the source code, and a repository in Artifact Registry to store the containers built by the CI/CD pipeline.
Create a repository in Cloud Source Repositories to store the source code and link it with the CI/CD process:
gcloud source repos create cicd-sample
Ensure that the Google Cloud Deploy configurations target the correct project, configure the Cloud Source Repositories credential helper, and add the new repository as a remote:

sed -i s/project-id-placeholder/$(gcloud config get-value project)/g deploy/*
git config --global credential.https://source.developers.google.com.helper gcloud.sh
git remote add google https://source.developers.google.com/p/$(gcloud config get-value project)/r/cicd-sample
Push your source code to the repository:
git push --all google
Create an image repository in Artifact Registry:
gcloud artifacts repositories create cicd-sample-repo \
    --repository-format=Docker \
    --location us-central1
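Optionally, confirm that both repositories exist; this quick check isn't required by the tutorial:

gcloud source repos list
gcloud artifacts repositories list --location=us-central1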
You now have a repository for source code in Cloud Source Repositories and one for the application container in Artifact Registry. The Cloud Source Repositories repository allows you to clone the source code and connect it to the CI/CD pipeline.
Configure the CI/CD pipeline
In this section, you act as the application operator and configure the CI/CD pipeline. The pipeline uses Cloud Build for CI and Google Cloud Deploy for CD. The steps of the pipeline are defined in the Cloud Build trigger.
Create a Cloud Storage bucket for Cloud Build to store the artifacts.json file, which tracks the artifacts generated by Skaffold for each build:

gsutil mb gs://$(gcloud config get-value project)-gceme-artifacts/

Storing each build's artifacts.json file in a central place is a good practice because it provides traceability, which makes troubleshooting easier.

Review the cloudbuild.yaml file, which is already configured in the source repository that you cloned. This file defines the build steps that the Cloud Build trigger runs whenever there is a new push to the main branch of the source-code repository.
The steps for the CI/CD pipeline are defined in this file:
- Cloud Build uses Skaffold to build the application container.
- Cloud Build places the build's artifacts.json file in the Cloud Storage bucket.
- Cloud Build places the application container in Artifact Registry.
- Cloud Build runs tests on the application container.
- The gcloud deploy apply command registers the following files with the Google Cloud Deploy service: deploy/pipeline.yaml, which is the delivery pipeline, and deploy/staging.yaml and deploy/prod.yaml, which are the target files. When the files are registered, Google Cloud Deploy creates the pipeline and targets if they do not yet exist, or re-creates them if the configuration changed. The targets are the staging and production environments.
- Google Cloud Deploy creates a new release for the delivery pipeline. This release references the application container that was built and tested in the CI process.
- Google Cloud Deploy deploys the release to the staging environment.
The delivery pipeline and targets are managed by Google Cloud Deploy and are decoupled from the source code. This decoupling means that you don't need to update the delivery pipeline and target files when a change is made to the application's source code.
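The following is a simplified sketch of what such a cloudbuild.yaml can look like. It is not the exact file from the sample repository; the builder images, flags, and release naming shown here follow common Cloud Build and Skaffold conventions:

steps:
# Build the application container and record what was built in artifacts.json.
- name: gcr.io/k8s-skaffold/skaffold
  entrypoint: skaffold
  args: ['build', '--file-output=/workspace/artifacts.json']
# Run the tests defined in skaffold.yaml against the built container.
- name: gcr.io/k8s-skaffold/skaffold
  entrypoint: skaffold
  args: ['test', '--build-artifacts=/workspace/artifacts.json']
# Register the delivery pipeline and targets, then create a release in staging.
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: bash
  args:
  - -c
  - |
    gcloud deploy apply --file=deploy/pipeline.yaml --region=us-central1
    gcloud deploy apply --file=deploy/staging.yaml --region=us-central1
    gcloud deploy apply --file=deploy/prod.yaml --region=us-central1
    gcloud deploy releases create "rel-${SHORT_SHA}" \
      --delivery-pipeline=cicd-sample \
      --region=us-central1 \
      --build-artifacts=/workspace/artifacts.json
# Store the artifacts.json file in the central Cloud Storage bucket.
artifacts:
  objects:
    location: 'gs://$PROJECT_ID-gceme-artifacts/'
    paths: ['/workspace/artifacts.json']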
Create the Cloud Build trigger:
gcloud beta builds triggers create cloud-source-repositories \
    --name="cicd-sample-main" \
    --repo="cicd-sample" \
    --branch-pattern="main" \
    --build-config="cloudbuild.yaml"
This trigger tells Cloud Build to watch the source repository and to use the cloudbuild.yaml file to react to any changes to the repository. The trigger is invoked whenever there is a new push to the main branch.

Go to Cloud Build and notice that there are no builds for your application.
You have now set up the CI and CD pipelines, and created a trigger on the main branch of the repository.
Make a change to your application within your developer workspace
In this section, you act as the application developer.
As you develop your application, you make and verify iterative changes to the application using Cloud Code as your development workspace:
- Make a change to the application.
- Build and test the new code.
- Deploy the application to the minikube cluster and verify the user-facing changes.
- Submit the change to the main repository.
When this change is committed into the main repository, the Cloud Build trigger starts the CI/CD pipeline.
Build, test, and run the application
In this section, you build, test, deploy, and access your application.
Use the same instance of Cloud Shell Editor that you used in the preceding section. If you closed the editor, then in your browser open Cloud Shell Editor by navigating to ide.cloud.google.com.
In the terminal, start minikube:
minikube start
minikube sets up a local Kubernetes cluster in your Cloud Shell. This setup takes a few minutes to run. After it's completed, the minikube process runs in the background on the Cloud Shell instance.
In the pane at the bottom of Cloud Shell Editor, select Cloud Code.
In the thin panel that appears between the terminal and the editor, select Run on Kubernetes.
If you see a prompt that says Use current context (minikube) to run the app?, click Yes.

This command builds the source code and runs tests. This can take a few minutes. The tests include unit tests and a pre-configured validation step that checks the rules set for the deployment environment. This ensures that you are warned about deployment issues even while you're still working in your development environment.

The Output tab shows the Skaffold progress as it builds and deploys your application. Keep this tab open throughout this section.

When the build and tests finish, the Output tab says Update succeeded, and shows two URLs. As you build and test your app, Cloud Code streams back the logs and URLs in the Output tab. As you make changes and run tests in your development environment, you can see your development environment's version of the app and verify that it's working correctly.

The output also says Watching for changes..., which means that watch mode is enabled. While Cloud Code is in watch mode, the service detects any saved changes in your repository and automatically rebuilds and redeploys the app with the latest changes.

In the Cloud Code terminal, hold the pointer over the first URL in the output (http://localhost:8080). In the tool tip that appears, select Open Web Preview.

In the background, Cloud Code is automatically port-forwarding traffic to the cicd-sample service running on minikube.

In your browser, refresh the page.
The number next to Counter increases, showing that the app is responding to your refresh.
In your browser, keep this page open so that you can view the application as you make any changes in your local environment.
You have now built and tested your application in the development environment. You have deployed the application into the development cluster running on minikube, and viewed the user-facing behavior of the application.
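The validation step mentioned above is driven by Skaffold's test configuration, which Cloud Code runs in the development workspace and Cloud Build runs again in CI. As a hedged sketch only (the script and file names are illustrative, not the exact contents of the sample repository), such a test stanza can combine container structure tests with a custom governance check:

test:
- image: cicd-sample
  structureTests:
  - ./test/structure-test.yaml        # checks the contents of the built image
  custom:
  - command: ./test/validate-rules.sh # illustrative governance/policy check
    timeoutSeconds: 60

Because the same stanza runs locally and in CI, a governance violation surfaces in the developer workspace before the change is ever pushed.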
Make a change
In this section, you make a change to the application and view the change as the app runs in the development cluster.
In Cloud Shell Editor, open the index.html file.

Search for the string Sample App Info, and change it to sample app info, so that the title now uses lowercase letters.

The file is automatically saved, triggering a rebuild of the application container.
Cloud Code detects the change and redeploys it automatically. The Output tab shows Update initiated. This redeployment takes a few minutes to run.

This automatic redeploy feature is available for any application running on a Kubernetes cluster.
When the build is done, go to your browser where you have the app open and refresh the page.
When you refresh, notice that the text now uses lowercase letters.
This setup gives you automatic reloading for any architecture, with any components. When you use Cloud Code and minikube, anything that is running in Kubernetes has this hot code reloading functionality.
You can debug applications that are deployed to a Kubernetes cluster in Cloud Code. These steps aren't covered in this tutorial, but for details see Debugging a Kubernetes application.
Commit the code
Now that you've made a change to the application, commit the code:
Configure your git identity:
git config --global user.email "YOU@EXAMPLE.COM"
git config --global user.name "NAME"
Replace the following:
- YOU@EXAMPLE.COM with the email address connected to your GitHub account
- NAME with the name connected to your GitHub account
From the terminal, commit the code:
git add .
git commit -m "use lowercase for: sample app info"
You don't need to run the git push command here. That comes later.
Working in the development environment, you have now made a change to the application, built and tested the change, and verified the user-facing behavior of these changes. The tests in the development environment include governance checks, which let you fix issues that cause problems in the production environment.
In this tutorial, when you commit the code into the main repository, you don't go through a code review. However, a code review or change approval is a recommended process for software development.
For more information about change approval best practices, see streamlining change approval.
Deploy a change into production
In this section, you act as the application operator and do the following:
- Trigger the CI/CD pipeline, which deploys the release to the staging environment.
- Promote and approve the release to production.
Start the CI/CD pipeline and deploy into staging
In this section, you start the CI/CD pipeline by invoking the Cloud Build trigger. This trigger is invoked whenever a change is committed to the main repository. You can also initiate the CI system with a manual trigger.
In the Cloud Shell Editor, run the following command to trigger a build:
git push google
This build includes the change you made to cicd-sample.

Return to the Cloud Build dashboard and see that a build is created.
Click Running: cicd-sample - cicd-sample-main in the build log on the right, and look for the blue text denoting the start and end of each step.
Step 0 shows the output of the skaffold build and skaffold test instructions from the cloudbuild.yaml file. The build and test tasks in Step 0 (the CI part of the pipeline) passed, so the deployment tasks of Step 1 (the CD part of the pipeline) now run.

This step successfully finishes with the following message:
Created Google Cloud Deploy rollout ROLLOUT_NAME in target staging
Open the Google Cloud Deploy delivery pipelines page and click the cicd-sample delivery pipeline.

The application is deployed in staging, but not in production.
Verify that the application is working successfully in staging:
kubectl proxy --port 8001 --context gke_$(gcloud config get-value project)_us-central1_staging
This command sets up a kubectl proxy to access the application.
Access the application from Cloud Shell:
In Cloud Shell Editor, open a new terminal tab.
Send a request to localhost to increment a counter:

curl -s http://localhost:8001/api/v1/namespaces/default/services/cicd-sample:8080/proxy/ | grep -A 1 Counter
You can run this command multiple times and watch the counter value increment each time.
As you view the app, notice that the text that you changed is in the version of the application you deployed on staging.
Close this second tab.
In the first tab, press Control+C to stop the proxy.
You have now invoked the Cloud Build trigger to start the CI process, which includes building the application, deploying it to the staging environment, and running tests to verify the application is working in staging.
The CI process is successful when the code builds and tests pass in the staging environment. The success of the CI process then initiates the CD system in Google Cloud Deploy.
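You can also follow the CI run and the resulting releases from the terminal instead of the console; these optional checks use standard gcloud commands:

gcloud builds list --limit=3
gcloud deploy releases list --delivery-pipeline=cicd-sample --region=us-central1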
Promote the release to production
In this section, you promote the release from staging to production. The production target comes pre-configured to require approval, so you manually approve it.
For your own CI/CD pipeline, you might want to use a deployment strategy that launches the deployment gradually before you do a full deployment into production. Launching the deployment gradually can make it easier to detect issues and, if needed, to restore a previous release.
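For example, Google Cloud Deploy supports a canary strategy that is declared on a stage of the delivery pipeline. The following is a hedged sketch only; it follows the Cloud Deploy canary configuration format, is not part of this tutorial's sample repository, and uses illustrative Kubernetes Service and Deployment names:

serialPipeline:
  stages:
  - targetId: staging
  - targetId: prod
    strategy:
      canary:
        runtimeConfig:
          kubernetes:
            serviceNetworking:
              service: cicd-sample     # Kubernetes Service that receives traffic
              deployment: cicd-sample  # Deployment that the canary shifts traffic to
        canaryDeployment:
          percentages: [25, 50]        # phased traffic percentages before 100%
          verify: false

This tutorial keeps the default standard strategy, so the following steps promote the full release directly.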
To promote the release to production, do the following:
Open the Google Cloud Deploy delivery pipelines overview and select the cicd-sample pipeline.
Promote the deployment from staging to production. To do so:
In the pipeline diagram at the top of the page, click the blue Promote button in the staging box.
In the window that opens, click the Promote button at the bottom.
The deployment is not yet running in production. It's waiting for the required manual approval.
Manually approve the deployment:
In the pipeline visualization, click the Review button between the staging and production boxes.
In the window that opens, click the Review button.
In the next window, click Approve.
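If you prefer the command line, you can perform the same promotion and approval with gcloud. This is a sketch; replace RELEASE_NAME and ROLLOUT_NAME with the names shown in the console or by gcloud deploy releases list:

gcloud deploy releases promote \
    --release=RELEASE_NAME \
    --delivery-pipeline=cicd-sample \
    --region=us-central1

gcloud deploy rollouts approve ROLLOUT_NAME \
    --release=RELEASE_NAME \
    --delivery-pipeline=cicd-sample \
    --region=us-central1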
Return to the Google Cloud Deploy delivery pipelines overview and select the cicd-sample pipeline.
After the pipeline visualization shows the prod box as green (meaning a successful rollout), verify that the application is working in production by setting up a kubectl proxy that you use to access the application:
kubectl proxy --port 8002 --context gke_$(gcloud config get-value project)_us-central1_prod
Access the application from Cloud Shell:
In Cloud Shell Editor, open a new terminal tab.
Increment the counter:
curl -s http://localhost:8002/api/v1/namespaces/default/services/cicd-sample:8080/proxy/ | grep -A 1 Counter
You can run this command multiple times and watch the counter value increment each time.
Close this second terminal tab.
In the first tab, press Control+C to stop the proxy.
You've now promoted and approved the production deployment. The application with your recent change is now running in production.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Option 1: delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Option 2: delete the individual resources
Delete the Google Cloud Deploy pipeline:
gcloud deploy delivery-pipelines delete cicd-sample --region=us-central1 --force
Delete the Cloud Build trigger:
gcloud beta builds triggers delete cicd-sample-main
Delete the staging and production clusters:
gcloud container clusters delete staging --region us-central1
gcloud container clusters delete prod --region us-central1
Delete the repository in Cloud Source Repositories:
gcloud source repos delete cicd-sample
Delete the Cloud Storage buckets:
gsutil rm -r gs://$(gcloud config get-value project)-gceme-artifacts/
gsutil rm -r gs://$(gcloud config get-value project)_clouddeploy/
Delete the repository in Artifact Registry:
gcloud artifacts repositories delete cicd-sample-repo \
    --location us-central1
What's next
- To learn how to deploy into a private GKE instance, see Deploying to a private cluster on a Virtual Private Cloud network.
- For best practices about automating your deployments, see:
- DevOps tech: Deployment automation for how to implement, improve, and measure deployment automation.
- Automate your deployments, from the Architecture Framework.
- For more information about deployment strategies, see:
- Launch deployments gradually, from the Architecture Framework.
- Application deployment and testing strategies
- The tutorial Implementing deployment and testing strategies on GKE
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.