Migration to Google Cloud: Deploying your workloads

This document can help you plan and design the deployment phase of your migration to Google Cloud. After you've assessed your current environment, planned the migration to Google Cloud, and built your Google Cloud foundation, you can deploy your workloads.

This document is part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.

The following diagram illustrates the path of your migration journey.

Migration path with four phases.

The deployment phase is the third phase of your migration to Google Cloud. In this phase, you design and implement a deployment process for your workloads.

This document is useful if you're planning a migration to Google Cloud from an on-premises environment, from a private hosting environment, or from another cloud provider, or if you're evaluating the opportunity to migrate and want to explore what a migration might look like.

In this document, you review the different deployment process types, in order of increasing flexibility, automation, and complexity, along with criteria for choosing the approach that's right for you:

  1. Deploy manually.
  2. Deploy with configuration management (CM) tools.
  3. Deploy by using container orchestration tools.
  4. Deploy automatically.
  5. Deploy by applying the infrastructure as code pattern.

Before you deploy your workloads, plan and design your deployment phase. First, evaluate the different deployment process types that you could implement for your workloads. When you evaluate deployment process types, you can decide to start with a simple process and move to a more complex one in the future. This approach can lead to quicker results, but it can also introduce friction when you move to a more advanced process, because you have to absorb the technical debt that you accumulated while using the simpler process. For example, if you move from fully manual deployments to an automated solution, you might have to manage upgrades to both your deployment pipeline and your apps.

While it's possible to implement different types of deployment processes according to your workloads' needs, this approach can also increase the complexity of this phase. If you implement multiple types of deployment processes, you benefit from the added flexibility, but you might need expertise, tooling, and resources tailored to each process, which translates to more effort on your side.

Deploy manually

A fully manual deployment is backed by a provisioning, configuration, and deployment process that is completely non-automated. While there might be specifications and checklists for each step of the process, there is no automated check or enforcement of those specifications. A manual process is prone to human error, isn't repeatable, and its performance is limited by the human factor.

Fully manual deployment processes can be useful, for example, when you need to quickly instrument an experiment in a sandboxed environment. Setting up a structured, automated process for an experiment that lasts minutes can unnecessarily slow down your pace, especially in the early stages of your migration, when you might lack the necessary expertise in the tools and practices that let you build an automated process.

Although this limitation doesn't apply to Google Cloud, fully manual deployments might be your only option when you're dealing with bare metal environments that lack the necessary management APIs. In that case, you can't implement an automated process because the necessary interfaces don't exist. Similarly, if you have a legacy virtualized infrastructure that doesn't support any automation, you might be forced to implement a fully manual process.

We recommend that you avoid a fully manual deployment unless you have no other option.

You can implement a fully manual provisioning, configuration, and deployment process by using tools such as the Google Cloud console, Cloud Shell, the Cloud APIs, and the Google Cloud CLI.
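For example, a fully manual deployment of a single workload to a Compute Engine VM might consist of commands like the following, run by hand each time. The instance name, zone, and artifact path are illustrative, and every step depends on an operator typing it correctly:

```shell
# Provision a VM by hand (instance name, zone, and machine type are illustrative).
gcloud compute instances create my-workload-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium

# Copy the application artifact to the VM and start it manually.
gcloud compute scp ./app.tar.gz my-workload-vm:~ --zone=us-central1-a
gcloud compute ssh my-workload-vm --zone=us-central1-a \
    --command="tar -xzf app.tar.gz && ./start.sh"
```

Because nothing checks or enforces the order and correctness of these steps, repeating them reliably across environments is left entirely to the operator.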

Deploy with configuration management tools

CM tools let you configure an environment in a repeatable and controlled way. These tools include a set of plugins and modules that already implement common configuration operations, letting you focus on the end state that you want to achieve for your environment rather than implementing the logic to reach that end state. If the included set of operations isn't enough, CM tools often feature an extension system that you can use to develop your own modules. While such extensions are possible, try to use the predefined modules and plugins where applicable, to avoid extra development and maintenance burden.

You use CM tools when you need to configure environments. You can also use them to provision your infrastructure and to implement a deployment process for your workloads. A process backed by CM tools is better than a fully manual provisioning, configuration, and deployment process because it's repeatable, controlled, and auditable. However, there are several downsides, because CM tools aren't designed for provisioning or deployment tasks. They usually lack built-in features to implement elaborate provisioning logic, such as detecting and managing differences between the real-world state of your infrastructure and the wanted state, or rich deployment processes, such as zero-downtime deployments or blue-green deployments. You can implement the missing features by using the previously mentioned extension system, but these extensions require extra effort and increase the overall complexity of the deployment process, because you need the expertise to design, develop, and maintain a customized deployment solution.

You can implement this type of provisioning, configuration, and deployment process by using tools such as Ansible, Chef, Puppet, and SaltStack.
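For example, a minimal Ansible playbook describes the end state you want, and Ansible's built-in modules converge each host to that state. The host group and package name below are illustrative:

```yaml
# Illustrative playbook: you declare the end state, not the steps to reach it.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure NGINX is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure NGINX is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running the playbook a second time makes no changes if the hosts already match the declared state, which is what makes the process repeatable and auditable.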

Deploy by using container orchestration tools

If you've already invested, or plan to invest, in the containerization of your workloads, you can use a container orchestration tool to deploy them.

A container orchestration tool manages the infrastructure underpinning your environment, and it supports a wide range of deployment operations, along with building blocks that you can use to implement your own deployment logic when the built-in operations aren't enough. By using these tools, you can focus on composing the deployment logic that you need from the provided mechanisms, instead of implementing that logic yourself.

Container orchestration tools also provide abstractions that you can use to generalize your deployment processes to different underlying environments, so you don't have to design and implement multiple processes for each of your environments. For example, these tools usually include the logic for scaling and upgrading your deployments, so you don't have to implement them by yourself. You can even start leveraging these tools to implement your deployment processes in your current environment, and you can then port them to the target environment, because the implementation is largely the same, by design. By adopting these tools early, you gain the experience in administering containerized environments, and this experience is useful for your migration to Google Cloud.

You use a container orchestration tool if your workloads are already containerized, or if you can containerize them in the future and plan to invest in this effort. In the latter case, run a thorough analysis of each workload to do the following:

  • Ensure that containerizing the workload is possible.
  • Assess the potential benefits that you could gain by containerizing the workload.

If the potential pitfalls outweigh the benefits of containerization, you should only use a container orchestration tool if your teams are already committed to using them and if you don't want to manage heterogeneous environments.

For example, data warehouse solutions aren't typically deployed using container orchestration tools, because they aren't designed to run in ephemeral containers.

You can implement this deployment process by using tools such as Kubernetes, and managed services such as Google Kubernetes Engine (GKE) on Google Cloud. If you're interested in a serverless environment, you can use products such as App Engine flexible environment, Cloud Functions, and Cloud Run.
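For example, a minimal Kubernetes Deployment manifest sketches how the orchestrator takes over replication and rolling updates for you. The workload name and image path are illustrative:

```yaml
# Illustrative Deployment: the orchestrator handles scaling and
# rolling updates, so you don't implement that logic yourself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-workload
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-workload
    spec:
      containers:
        - name: app
          image: us-docker.pkg.dev/my-project/my-repo/my-workload:1.0.0
          ports:
            - containerPort: 8080
```

Because the same manifest applies to any conformant Kubernetes cluster, you can develop this deployment logic in your current environment and port it to the target environment largely unchanged.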

Deploy automatically

Regardless of the provisioning, configuration, deployment, and orchestration tools that you use in your environment, you can implement fully automated deployment processes to minimize human error and to consolidate, streamline, and standardize the processes across your organization. You can insert manual approval steps in the deployment process if needed, but every other step is automated.

The steps of a typical end-to-end deployment pipeline are as follows:

  1. Code review.
  2. Continuous integration (CI).
  3. Artifact production.
  4. Continuous deployment (CD), with optional manual approvals.

You can automate each of those steps independently from the others, so you can gradually migrate your current deployment processes toward an automated solution, or you can implement a new process directly in the target environment. For this process to be effective, you need testing and validation procedures at each step of the pipeline, not just during the code review step or the CI step.
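As a sketch of such a pipeline, a hypothetical Cloud Build configuration could run tests, build an artifact, and roll it out to a GKE cluster. The builder images follow Cloud Build conventions, but the repository, image path, cluster name, and zone are illustrative:

```yaml
steps:
  # Continuous integration: run the test suite.
  - name: 'golang:1.21'
    entrypoint: 'go'
    args: ['test', './...']

  # Artifact production: build and tag a container image.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t',
           'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']

  # Continuous deployment: roll out the new image to a GKE cluster.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'app=us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'

# Push the built image to the registry after all steps succeed.
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA'
```

Each step fails the build if its command fails, so a broken test or an unbuildable image stops the pipeline before anything reaches a runtime environment.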

For each change in your codebase, perform a thorough review to assess the quality of the change. Most source code management tools have first-class support for code reviews. They also often support the automatic creation and initialization of reviews based on the area of the source code that was modified, provided that you configured the teams responsible for each area of your codebase. As part of each review, you can also run automated checks on the source code, such as linters and static analyzers, to enforce consistency and quality standards across the codebase.

After you review and integrate a change into the codebase, the CI tool can automatically run tests, evaluate the results, and then notify you about any issues with the current build. You can add value to this step by following a test-driven development process to achieve complete test coverage of each workload's features.

For each successful build, you can automate the creation of deployment artifacts. Such artifacts represent a ready-to-deploy version of your workloads with the latest changes. As part of the artifact creation step, you can also perform an automated validation of the artifact itself. For example, you can run a vulnerability scan for known issues and approve the artifact for deployment only if no vulnerabilities are found.

Finally, you can automate the deployment of each approved artifact in the target environment. If you have multiple runtime environments, you can also implement unique deployment logic for each one, even adding manual approval steps, if needed. For example, you can automatically deploy new versions of your workloads in your development, quality assurance, and pre-production environments, while still requiring a manual review and approval from your production control team to deploy in your production environment.

While a fully automated end-to-end process is one of your best options if you need an automated, structured, streamlined, and auditable process, implementing it isn't a trivial task. Before choosing this kind of process, you should have a clear view of the expected benefits and the costs involved, and confirm that your teams' current level of knowledge and expertise is sufficient to implement a fully automated deployment process.

You can implement a fully automated process with tools such as SonarQube, Jenkins, Cloud Build, Container Registry, and Spinnaker.

Deploy by applying the infrastructure as code pattern

Infrastructure as code is a process where you treat the provisioning of the resources in a runtime environment in the same way that you handle the source code of your workloads. For example, you can manage the entire lifecycle of Google Cloud resources entirely with Cloud APIs, and codify the final state in your source code. You then implement a fully automated provisioning process for your infrastructure, similar to the one you implement for your workloads, complete with a comprehensive test suite.

A provisioning tool is designed to bootstrap your infrastructure and make it ready for configuration; it's not suited to completing complex configuration tasks. For this reason, after provisioning all the resources in your infrastructure, use a CM tool to configure those resources according to your requirements. While you can implement configuration tasks with provisioning tools and provisioning tasks with CM tools, each kind of tool is designed for a different purpose, and the two complement each other. Use the right tool for the job: provisioning tools to provision your infrastructure, and CM tools to configure it.

If you can manage the resources in your target environment entirely with APIs, as you can in Google Cloud, you should implement an infrastructure as code process. You gain full auditability and versioning for your entire cloud infrastructure, and you can even implement a continuous integration and continuous deployment (CI/CD) process to automatically apply changes to your infrastructure.
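For example, a minimal Terraform configuration codifies the end state of a Compute Engine instance in source code that you can review, version, and test like any other code. The resource name, zone, and image are illustrative:

```hcl
# Illustrative Terraform configuration: the desired end state of a
# Compute Engine instance, versioned alongside your application code.
resource "google_compute_instance" "my_workload" {
  name         = "my-workload-vm"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Before applying a change, the tool computes the difference between the real-world state of your infrastructure and this declared state, which is the provisioning logic that CM tools typically lack.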

On the other hand, if the target environment doesn't offer programmatic access to manage and configure resources, implementing an infrastructure as code deployment isn't possible. Also, you should check with your procurement department because provisioning and de-provisioning resources in a cloud environment can lead to differences in billing and expensing.

You can implement an infrastructure as code process with tools such as Terraform and managed services such as Deployment Manager. You can also use tools such as RSpec, Serverspec, and InSpec to implement test suites for your infrastructure.
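For example, a short InSpec control, with an illustrative package and port, asserts that a provisioned machine matches the state your infrastructure code declares:

```ruby
# Illustrative InSpec control: assert the provisioned state of a host.
control 'web-01' do
  title 'NGINX is installed and listening'

  describe package('nginx') do
    it { should be_installed }
  end

  describe port(80) do
    it { should be_listening }
  end
end
```

You can run controls like this as a validation step in your deployment pipeline, so infrastructure changes are tested before they're promoted, just like application changes.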


Now that you understand the different options, when to use them, and when to avoid them, and have some example tools to explore, the following summary can help you compare and contrast each option for your workloads and use cases.

Fully manual deployment
  • When to use it: When you need to quickly instrument an experiment in a sandboxed environment, or when you're dealing with bare metal environments or a legacy virtualized infrastructure that lack the necessary management APIs.
  • When to avoid it: Whenever there is a more manageable alternative.
  • Tools and services: N/A

Deployment with CM tools
  • When to use it: When you need a way to automate your manual deployments and you're already heavily invested in CM tools for the configuration of your environments.
  • When to avoid it: When the effort to overcome the limitations of CM tools for deployment tasks is too high.
  • Tools and services: Ansible, Chef, Puppet, SaltStack

Container orchestration
  • When to use it: If your workloads are already containerized, or if they can be containerized in the future and you plan to invest in this effort.
  • When to avoid it: When the potential pitfalls outweigh the benefits of containerization.
  • Tools and services: Kubernetes, GKE, App Engine flexible environment, Cloud Functions, Cloud Run

Deployment automation
  • When to use it: If you need an automated, structured, streamlined, and auditable process.
  • When to avoid it: If your teams lack the necessary skills and can't be trained, or if you can't afford the effort to implement a fully automated process.
  • Tools and services: SonarQube, Jenkins, Cloud Build, Container Registry, Spinnaker

Infrastructure as code
  • When to use it: When resources in your target environment can be managed entirely with APIs and programmatically.
  • When to avoid it: When the target environment doesn't offer programmatic access to manage and configure resources.
  • Tools and services: Terraform, Cloud Deployment Manager

There is no single best deployment process; the right choice depends entirely on your current situation, your level of expertise, and what you expect from the process.

Finding help

Google Cloud offers various options and resources for you to find the necessary help and support to best use Google Cloud services.

There are more resources to help migrate workloads to Google Cloud in the Google Cloud Migration Center.

For more information about these resources, see the finding help section of Migration to Google Cloud: Getting started.

What's next