Migrating VMs to containers with Migrate to Containers

Last reviewed 2021-10-21 UTC

This document is for cloud architects who are responsible for designing and implementing a plan to migrate virtual-machine-based workloads to containers. It provides guidance about using Migrate to Containers to migrate your virtual machines (VMs) from your source environment to containers running in Google Kubernetes Engine (GKE) or GKE Enterprise. Your source environment might be running in an on-premises environment, in a private hosting environment, or in another cloud.

This document provides an overview of Migrate to Containers. It also contains important points for you to consider when planning a VM migration to containers. It's part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.

Read this document if you're planning to migrate VMs running a compatible operating system (OS), such as Linux or Windows, from a supported source environment to a GKE or GKE Enterprise environment with Migrate to Containers. Supported source environments include Compute Engine, VMware vSphere, Microsoft Azure VM, and Amazon EC2.

Migrate to Containers lets you place existing VM-based workloads into GKE and GKE Enterprise containers, without:

  • Requiring access to source code
  • Rewriting your workloads
  • Manually containerizing your workloads

Migrating VM-based workloads with Migrate to Containers provides the following benefits:

  • A containerized environment, including:
    • High workload density
    • Rich orchestration mechanisms and policy management
    • Flexible service-to-service communication channels
  • The ability to use continuous integration and continuous deployment (CI/CD) pipelines
  • The ability to move away from unsupported OS versions
  • The ability to start decommissioning your VM-based environment

Migrating VM-based workloads to containers with Migrate to Containers is one of the possible steps in your workload modernization journey. Migrating VM-based workloads with Migrate to Containers helps you avoid the expensive rewrites needed to modernize those workloads. It doesn't transform them into workloads designed to run in a cloud environment, however.

Ideal migration candidates include the following:

  • Workloads where modernization through a complete rewrite is either impossible or too expensive
  • Workloads with unknown dependencies that might break if modified
  • Workloads that are maintained, but not actively developed
  • Workloads that aren't maintained anymore
  • Workloads without source code access

You can interact with Migrate to Containers in multiple ways. For example, it's accessible through the Google Cloud console. If you need to automate the migration process and integrate it with your existing toolchain, you can use the command-line interface and Migrate to Containers Kubernetes Custom Resource Definitions (CRDs).

For more information about Migrate to Containers interfaces, see APIs and reference | Migrate to Containers.
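
As an illustration of the CRD-based workflow, a migration can be declared as a Kubernetes custom resource and applied to a processing cluster. The following fragment is an illustrative sketch only: the `apiVersion`, `kind`, and field names shown are assumptions, so consult the Migrate to Containers CRD reference for the authoritative schema before using it.

```yaml
# Illustrative sketch only: the apiVersion, kind, and field names below
# are assumptions, not the authoritative Migrate to Containers schema.
apiVersion: anthos-migrate.cloud.google.com/v1beta2
kind: Migration
metadata:
  name: my-vm-migration        # hypothetical migration name
spec:
  migrationSource: my-source   # a previously configured migration source
  sourceId: my-vm-id           # ID of the VM in the source environment
  intent: Image                # produce a container image artifact
```

Declaring migrations as resources like this is what makes it possible to drive Migrate to Containers from existing GitOps or automation toolchains instead of the Google Cloud console.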

This document assumes that you've read and are familiar with the documents in the Migration to Google Cloud series.

Designing the migration to Google Cloud

To migrate your VMs from your source environment to containers running in Google Cloud, we recommend that you follow the framework described in the Migration to Google Cloud series.

The following diagram illustrates the path of your migration journey:

Migration path with four phases.

The framework illustrated in the preceding diagram has four phases:

  1. Assess. In this phase, you assess your source environment, assess the workloads that you want to migrate to Google Cloud, and assess which VMs support each workload.
  2. Plan. In this phase, you create the basic infrastructure for Migrate to Containers, such as provisioning the resource hierarchy and setting up network access.
  3. Deploy. In this phase, you migrate the VMs from the source environment to GKE or GKE Enterprise with Migrate to Containers.
  4. Optimize. In this phase, you begin to take advantage of the cloud technologies and capabilities.

Assessing the source environment and workloads

In the assessment phase, you gather information about your source environment and the VM-based workloads you want to migrate. Doing so helps you rightsize the resources that you need—both for the migration and for your target environment.

In the assessment phase, you do the following:

  1. Build a comprehensive inventory of your workloads.
  2. Catalog your applications according to their properties and dependencies.
  3. Train and educate your teams on Google Cloud.
  4. Build an experiment and a proof of concept on Google Cloud.
  5. Calculate the total cost of ownership (TCO) of the target environment.
  6. Choose the workloads that you want to migrate first.

The following sections rely on the information within Migration to Google Cloud: Assessing and discovering your workloads. However, they provide information that's specific to assessing the VM-based workloads that you want to migrate to containers with Migrate to Containers.

Build your inventories

To scope your migration, you must understand your current VM-based environment. To understand your environment, gather information about your workloads and their dependencies.

Building an inventory of your apps describes how to build an inventory of the workloads in your VM-based environment and their dependencies. Follow that guidance and build your inventories before proceeding with this document.

After you build an inventory of your workloads and their dependencies, you refine the inventory. Assess the aspects and features that are of interest to your organization when it migrates its VM-based workloads with Migrate to Containers.

To complete the inventory of your workloads, consider the following:

  • Source environment. Migrate to Containers supports migrating VMs from different source environments:

    • Compute Engine
    • VMware vSphere
    • Microsoft Azure VM
    • Amazon EC2

    To correctly set up Migrate to Containers so it can migrate your workloads, you must assess the source environment.

  • Operating system running in your VMs: Gather information about the operating systems and their licenses running in your VMs, and ensure that the operating systems are compatible with Migrate to Containers. If you're running an OS that Migrate to Containers doesn't support, consider upgrading to a supported version or changing your OS to an OS that Migrate to Containers supports.

  • Workloads in your VMs: Assess which workloads are deployed in each VM. Then map the dependencies between your workloads and between your workloads and external services. Next, gather information about the configuration sources of your workloads. For example, are you using environment variables, a distributed configuration system, or metadata servers to dynamically configure your workloads? Also, evaluate how your workloads send information to your logging system.

  • Migrate to Containers fit score: Assess whether your workloads are fit to migrate with Migrate to Containers. Migrate to Containers provides a fit assessment tool for Linux and Windows that you can run on your VMs to compute a fit score. A low fit score indicates that there are issues that you need to resolve before migrating your workloads. For example, if you enabled Security-Enhanced Linux (SELinux) on your VMs, you might need additional effort to mitigate this dependency before migrating them.

  • Network services: Gather information about the configuration of your network services, and how your VM-based workloads use these services. For example, assess how your workloads are using the Domain Name System (DNS), Multicast DNS, hosts files, and other service discovery mechanisms to determine the location of other workloads and services. Next, assess the hosts file of each VM for any customized entry that your workloads need. For more information about hosts files, see Verify and validate the generated resources and descriptors.

  • Hardware dependencies: Evaluate any type of hardware that you're using in your VM-based environment, such as high-performance storage devices, GPUs, TPUs, or network appliances.

  • Stateless and stateful workloads: Stateless workloads don't store state in the cluster or to persistent storage. Stateful workloads save data for later use. Because migrating stateful workloads is typically harder than migrating stateless workloads, assess which workloads are stateless and which are stateful.

  • Storage: For stateful workloads, make a list of your storage requirements. Here are some things to consider when making a list:

    • Storage system type (block volumes, file storage, or object storage)
    • Storage system size
    • Workload access to the storage system

      • For example, are your workloads using Network File System (NFS) or Server Message Block (SMB) to access files over a network?
      • For example, do your VMs run NFS or SMB servers?

        If your VMs run NFS servers in kernel mode, you need to spend additional effort to migrate those servers. You can migrate those servers to another runtime environment, such as Compute Engine or GKE. Or you can migrate the data to Filestore, which is a fully managed network-attached storage service.

    • Disk configuration: Assess the configuration of all the disks, data partitions, and volumes in your VMs, and the security and confidentiality features of each.
  • Data locality: Data locality affects the performance of stateful workloads. The distance and connectivity between your external systems and your environment affect latency. For each external data storage system, consider any performance and availability requirements that it must satisfy.
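
If the assessment surfaces customized hosts-file entries that a workload depends on, Kubernetes can reproduce them per Pod with the `hostAliases` field, which writes entries into the container's `/etc/hosts` file. The following is a minimal sketch; the host name, IP address, and image path are hypothetical placeholders.

```yaml
# Minimal sketch: reproduces a customized hosts-file entry for a
# migrated workload. The IP address, host name, and image path are
# hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: migrated-workload
spec:
  hostAliases:
    - ip: "10.0.0.5"          # hypothetical address of a dependency
      hostnames:
        - "db.internal"       # name the workload expects to resolve
  containers:
    - name: app
      image: gcr.io/my-project/migrated-workload:v1  # hypothetical image
```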
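
For workloads that access files over NFS, one option after migration is to expose the existing export to the cluster through a pre-provisioned PersistentVolume. The following sketch assumes a hypothetical server address, export path, and capacity; adjust them to your inventory findings.

```yaml
# Minimal sketch: exposes an existing NFS export to migrated workloads.
# The server address, export path, and capacity are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: legacy-nfs-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10         # hypothetical NFS server address
    path: /exports/data       # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-provisioned volume above
  resources:
    requests:
      storage: 100Gi
```

An empty `storageClassName` prevents dynamic provisioning, so the claim binds to the manually created volume.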

Complete the assessment

After building the inventories related to your environment and your VM-based workloads, complete the rest of the assessment phase activities documented in Migration to Google Cloud: Assessing and discovering your workloads. When you're done with that work, continue reading this document.

Planning and building your foundation

In the planning and building phase, you provision and configure the cloud infrastructure and services that support your workloads on Google Cloud:

  1. Build a resource hierarchy.
  2. Configure Identity and Access Management.
  3. Set up billing.
  4. Set up network connectivity.
  5. Harden your security.
  6. Set up monitoring and alerting.

For guidance about how to build the cloud infrastructure and services that support your workloads and their dependencies, see Migration to Google Cloud: Building your foundation. Follow those guidelines to build a foundation for your environments. When you are done with that work, continue reading this document.

After following the guidance in Migration to Google Cloud: Building your foundation, complete your foundation work by setting up Migrate to Containers:

  1. Confirm that your workloads and source environment meet Migrate to Containers prerequisites.
  2. Enable Migrate to Containers Cloud APIs.
  3. Provision the service accounts that Migrate to Containers uses to access resources in the target environment.
  4. If you're migrating your workloads to GKE or GKE Enterprise clusters on Google Cloud, set up Migrate to Virtual Machines.
  5. Configure a Migrate to Containers processing cluster. A Migrate to Containers processing cluster runs Migrate to Containers components during the migration.
  6. Install and configure Migrate to Containers in the processing cluster.

Setting up Migrate to Containers describes how to provision and configure Migrate to Containers and its dependencies. Follow that guidance to set up Migrate to Containers.

When you're done with the work described in this section, return to this document.

Migrating your VM-based workloads to containers

In the deployment phase, use the following milestones to guide you as you migrate the VMs from your source environment to containers running in GKE or GKE Enterprise:

  1. Generate and review migration plans.
  2. Generate container artifacts and deployment descriptors.
  3. Verify, validate, and customize the resource descriptors that Migrate to Containers generated for you.
  4. Deploy and validate the containerized workloads to GKE or GKE Enterprise.
  5. Uninstall Migrate to Containers.

For more information about the steps required to migrate VMs with Migrate to Containers, see Execute a migration.

Generate and review the migration plan

Create a Migrate to Containers migration plan for your VM-based workloads:

  1. Configure the source environments as Migrate to Containers migration sources. To migrate your VM-based workloads, Migrate to Containers needs information about the source environments where your VMs currently run. You gathered that information by performing the tasks described in the Build your inventories section within this document. For more information about configuring your source environments, see Adding a migration source (Linux) and Adding a migration source (Windows).
  2. Create migration plans. To specify which VM-based workloads you want to migrate from a source environment to a supported target environment, create a migration plan. For example, you can configure where you want to store your persistent data. For more information about creating and monitoring migration plans, see Creating a migration (Linux) and Creating a migration (Windows).
  3. Review and customize migration plans. After generating migration plans for each of the VM-based workloads that you want to migrate, we recommend that you review and customize each migration plan to ensure that it fits your requirements. For more information about customizing migration plans, see Customizing a migration plan (Linux) and Customizing a migration plan (Windows).
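
A common customization in step 3 is directing a workload's persistent data to a volume instead of baking it into the container image. The following fragment is an illustrative sketch only; the field names and the data path are assumptions, so follow the schema shown in the Customizing a migration plan pages when you edit a real plan.

```yaml
# Illustrative sketch of a migration plan customization; the field names
# and the data path below are assumptions, not the authoritative schema.
deployment:
  dataVolumes:
    - folders:
        - /var/lib/mysql      # hypothetical path holding persistent data
      pvc:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi   # hypothetical size for the extracted data
```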

When you're done with the work described in this section, return to this document.

Generate container artifacts and deployment descriptors

To generate the target container artifacts for your workloads, Migrate to Containers creates a container image containing the workload and data extracted from the VM in the migration plan. It then stores a copy of the container image in the configured container image repository. Migrate to Containers also generates the deployment descriptors that you can customize and use to deploy instances of the container images in the target environment.

For more information about generating container artifacts, see Executing a migration (Linux) and Executing a migration (Windows).

You can monitor the progress of generating and migrating container artifacts. For more information about monitoring a migration, see Monitoring a migration (Linux) and Monitoring a migration (Windows).

If you're generating container artifacts for Windows workloads, use the artifacts and the deployment descriptors that Migrate to Containers generated to build Windows container images for those workloads. For more information about building Windows container images for your workloads, see Building a Windows container image.

When you're done with the work described in this section, return to this document.

Verify and validate the generated resources and descriptors

After you generate container artifacts and deployment descriptors with Migrate to Containers, review and update those artifacts and descriptors to ensure that they meet your requirements. The amount of time needed to update your container artifacts and deployment descriptors depends on how many VM-based workloads you're migrating and how complex they are. For more information about reviewing container artifacts and deployment descriptors, see Reviewing generated deployment files (Linux) and Building a Windows container image. For example, consider the following aspects.

Naming resources and descriptors

Configuration and log resources and descriptors

Policy and profile resources and descriptors

  • Access Control Lists (ACLs) for Windows workloads: Because Windows containers don't support setting ACLs when building Windows container images, Migrate to Containers doesn't migrate customized Windows ACLs. If you need to customize the ACLs of your migrated Windows workloads, see Setting ACLs.
  • AppArmor profiles: Ensure that all the AppArmor profiles that your workloads require are available in the target environment; otherwise, the migrated workloads might not start.
  • Network policies: If you need to restrict access to the Pods running your workloads, you can use NetworkPolicies to control the traffic flow to and from your Pods.
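
For example, a NetworkPolicy like the following restricts inbound traffic to a migrated workload's Pods. The label names and port are hypothetical placeholders; adapt them to the labels in your generated deployment descriptors.

```yaml
# Allows ingress to the migrated workload's Pods only from Pods labeled
# role: frontend, on a single TCP port. Labels and port are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: migrated-workload-ingress
spec:
  podSelector:
    matchLabels:
      app: migrated-workload  # hypothetical label on the workload's Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend  # only these Pods may connect
      ports:
        - protocol: TCP
          port: 8080          # hypothetical service port
```

Note that NetworkPolicies are enforced only if the target cluster runs a network plugin that supports them.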

Other resources and descriptors

When you're done with the work described in this section, return to this document.

Deploy and validate the containerized workloads

When the deployment descriptors for your workloads are ready, perform the following steps:

  1. Deploy your migrated workloads in the target environment. For guidance about deploying your migrated Linux and Windows workloads, see Deploying a Linux workload to a target cluster and Deploying a Windows workload to a target cluster.
  2. Monitor your migrated workloads. After deploying your migrated Linux workloads and Windows workloads, you can gather information about how they are performing. For more information, see Monitoring migrated workloads (Linux) and Monitoring migrated workloads (Windows).

  3. Integrate your migrated workloads. After the workloads that you deployed in the target environment are working correctly, integrate their container artifact generation and deployment processes with your existing deployment processes and pipelines. If you don't have an automated deployment process in place and deploy your workloads manually, we recommend that you migrate from manual deployments to automated deployments.
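
To make the deployment step concrete: the generated deployment descriptor is a standard Kubernetes manifest that you apply to the target cluster. Conceptually it resembles the following sketch, where the names, labels, and image path are hypothetical placeholders and a real generated file contains additional workload-specific detail.

```yaml
# Conceptual sketch of a generated deployment descriptor. Names, labels,
# and the image path are hypothetical; real generated files contain more.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: migrated-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: migrated-workload
  template:
    metadata:
      labels:
        app: migrated-workload
    spec:
      containers:
        - name: migrated-workload
          image: gcr.io/my-project/migrated-workload:v1  # image that Migrate to Containers built
```

You typically deploy such a descriptor with `kubectl apply -f` against the target cluster, then verify the rollout before moving traffic to it.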

When you're done with the work described in this section, return to this document.

Uninstall Migrate to Containers

After you complete the migration of your workloads with Migrate to Containers, we recommend that you do the following:

  1. Ensure that you have references to all the artifacts that Migrate to Containers generated during the migration.
  2. Uninstall Migrate to Containers.

When you're done with the work described in this section, return to this document.

Optimizing your environment after migration

Optimizing your environment is the last phase of your migration. To optimize your GKE and GKE Enterprise environment, see Optimizing your environment.

What's next