Your migration journey to GKE or Anthos

This topic describes the recommended sequence of steps that you should take when migrating workloads to Google Kubernetes Engine (GKE) or Anthos.

At a high level, you need to go through a discovery and assessment phase in which you identify the workloads you have, the dependencies between them, and whether they can be migrated to the cloud.

Next, you engage in a migration planning phase, in which you break down your workload fleet into groups of workloads that are related and should migrate together (based on the outcome of the assessment) and then determine an order of the groups to be migrated.

In addition, you determine which workloads can be migrated to GKE and which are not suitable for GKE but can instead migrate to Compute Engine with Migrate for Compute Engine. You might elect to break the migration journey into two distinct phases, even for workloads that are suitable for GKE:

  1. Migrate workloads to Compute Engine with Migrate for Compute Engine.
  2. Migrate from Compute Engine to GKE with Migrate for Anthos.

This method makes sense, for instance, when you want to conduct a data-center migration, migrate all workloads into Compute Engine first, and only in a second stage selectively modernize suitable workloads to GKE. As shown in the diagram, once you select the desired path for a given workload, you go through landing-zone setup and migration phases, followed by an optional post-migration optimization phase.

The steps of a migration to either VMs or containers using Migrate for Anthos.


Discovery and assessment

Gather the information needed for a migration by understanding your applications and their dependencies. This information includes an inventory of:

  • The VMs whose workloads you want to migrate.
  • Your applications' required network ports and connections.
  • Dependencies across app tiers.
  • Service name resolution or DNS configuration.

Migrate for Anthos supplies tools that you run on a VM workload to determine the workload's fit for migration to a container.

Skills required for this step:

  • An IT analyst with knowledge of the application's topology and migration requirements.

Migration planning

Divide your applications into batches and translate the information collected during the discovery step into the Kubernetes model. Your application environments, topologies, namespaces, and policies are centralized in Kubernetes YAML configuration files.
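For example, a migration batch might map to a dedicated Kubernetes namespace. The following sketch is purely illustrative — the namespace name and labels are hypothetical, not output of any Migrate for Anthos tool:

```yaml
# Hypothetical example: one namespace per application batch,
# labeled so that policies can target a whole migration wave.
apiVersion: v1
kind: Namespace
metadata:
  name: billing-app          # illustrative application name
  labels:
    migration-batch: wave-1  # illustrative batch label
```

Grouping related workloads under one namespace keeps their network policies, quotas, and access controls in a single place as they migrate together.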


Skills required for this step:

  • Application migration engineer or analyst. This person needs beginner-level knowledge of the Kubernetes managed object model, GKE deployments, and YAML files.

Landing zone setup

In this step, you will:

  • Create or identify the GKE cluster to host your migrated workloads. The GKE processing clusters can be located in Google Cloud or on-premises.
  • Create VPC network rules and Kubernetes network policies for your applications.
  • Apply Kubernetes service definitions.
  • Make load-balancing choices.
  • Configure DNS.

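As a rough illustration of the network-policy and service-definition steps above, the following manifests restrict traffic between two application tiers and expose the backend inside the cluster. All names, labels, and ports are assumptions for the sketch, not values produced by any tool:

```yaml
# Hypothetical landing-zone objects: a NetworkPolicy that only admits
# frontend-to-backend traffic, plus a Service for in-cluster name
# resolution of the backend tier.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: billing-app
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: billing-app
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
```

The Service also gives migrated workloads a stable DNS name (`backend.billing-app.svc.cluster.local`), which is typically how VM-era hostname dependencies are re-mapped after migration.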

Skills required for this step:

  • A cluster administrator familiar with cluster deployment, Google Cloud networking, firewall rules, Identity and Access Management service accounts and roles, and launching deployments from the Google Cloud Marketplace.

Migration and validation

After your processing cluster, VPC network, and Migrate for Anthos components are ready to process workloads, you can start the Migrate for Anthos migration workflow for each source VM that you want to migrate. Before you do, assess the compatibility of the source workload and operating system with Migrate for Anthos.

The migration workflow is depicted in the following diagram:

Diagram showing overview of setup and migration steps.

The migration workflow contains the following five steps:

  1. Generate and review the migration plan -- Using the Migrate for Anthos migctl CLI for Linux workloads, or the migration script for Windows workloads, generate a migration plan, then review and update the plan with input from key stakeholders, such as the application owner, security admin, and storage admin.
  2. Generate artifacts -- Use the migration plan as input to the CLI to process the source VM and produce the relevant artifacts:
    • For Linux workloads -- a container image, a Dockerfile, reference deployment YAML files, and, for stateful workloads (if specified), a data volume.
    • For Windows workloads -- extracted application files in a ZIP archive and a Dockerfile. Note: You must build a container image from the Dockerfile and the extracted content before you can proceed to the next step.
  3. Test -- Before you deploy the workload for end-to-end, application-level validation in a test, staging, or production cluster of your choice, verify that the extracted container image and data volume (if applicable) function correctly when executed in a container. You can run a sanity test on the processing cluster, identify any issues or needed edits to the migration plan, then repeat step 2 and test again.
  4. Data sync -- When a stateful workload is being migrated (Linux only) and the workload continues to accumulate new data and state while running at the source, you may want to iterate through one or more data sync cycles before performing a final data sync with the source shut down. These data syncs reduce the final cutover downtime. Each data sync transfers only the data changed since the previous sync cycle.
  5. Deploy or integrate with CI/CD -- With the container artifacts now ready and validated, you may continue with deployment in a test, staging, or production cluster. Alternatively, you may use the artifacts to integrate with a build and deploy pipeline using an orchestration tool such as Cloud Build.
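The deployment YAML that Migrate for Anthos generates is specific to each workload; as a rough sketch of what the deploy step consumes, a migrated workload might be rolled out with a manifest along these lines. The names, labels, and image path are all hypothetical:

```yaml
# Hypothetical Deployment for a migrated workload. The image shown here
# stands in for the container image produced by the generate-artifacts
# step and pushed to a registry your cluster can pull from.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-backend
  namespace: billing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: billing-backend
        image: gcr.io/my-project/billing-backend:v1  # illustrative path
```

Because the output is an ordinary Deployment manifest, the same file can be applied directly with kubectl for validation or checked into a repository that a Cloud Build pipeline deploys from.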


Skills required for this step:

  • For workload migration:
    • Application owner, or Application migration analyst, with beginner knowledge of the Kubernetes managed object model, GKE deployments, and YAML editing.
  • OPTIONAL: For data storage migration to a Persistent Volume other than a Google Cloud persistent disk:
    • Storage administrator or GKE administrator, familiar with Kubernetes persistent volume management on GKE.
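When a migrated data volume is bound through the standard Kubernetes storage model, the workload references it with a PersistentVolumeClaim. The following fragment is an illustrative sketch only — the claim name, storage size, and access mode are assumptions:

```yaml
# Hypothetical PersistentVolumeClaim for a migrated data volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: billing-data
  namespace: billing-app
spec:
  accessModes:
  - ReadWriteOnce        # single-node read/write, typical for migrated disks
  resources:
    requests:
      storage: 50Gi      # illustrative size
```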

Operate and optimize

You can now leverage tools provided by Anthos and the larger Kubernetes ecosystem. In this step, you can add access policies, encryption, and authentication using Istio, and monitoring and logging with Cloud Logging and Cloud Monitoring, all through configuration changes rather than rebuilding your applications. You can also integrate with a CI/CD pipeline, using tools like Cloud Build, to implement day-2 maintenance procedures such as software package and version updates.
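For instance, encryption in transit between migrated services can be enforced purely through Istio configuration. A minimal sketch, assuming Istio is installed and the namespace name is hypothetical:

```yaml
# Hypothetical example: require strict mutual TLS for all workloads in
# the namespace, with no application code changes.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: billing-app
spec:
  mtls:
    mode: STRICT
```

This is the kind of day-2 improvement the configuration-over-rebuild approach enables: the containers produced by the migration run unchanged while the mesh layers on authentication and encryption around them.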