Migration journey for GKE and Anthos
This topic describes the recommended sequence of steps to take when migrating workloads to containers. It applies strictly to Linux system container migrations.
At a high level, you need to go through the following phases:
- Discovery phase, in which you identify your workloads, the dependencies between them, and whether they can be migrated to containers.
- Migration planning phase, in which you break your workload fleet into groups of related workloads that should migrate together (based on the outcome of the assessment), and then determine the order in which to migrate the groups.
- Landing zone setup phase, where you configure the deployment environment for the migrated containers.
- Migration and deployment phase, where you containerize your VM workloads and then deploy and test the containers.
- Operate and optimize phase, where you use tools provided by Anthos and the larger Kubernetes ecosystem with your containerized workloads.
The following figure shows these phases:
These phases are described in more detail below.
Discovery phase
In the discovery phase, you gather the information needed for a migration by understanding your applications and their dependencies.
This information includes an inventory of:
- The VMs whose workloads you want to migrate.
- Your applications' required network ports and connections.
- Dependencies across app tiers.
- Service name resolution or DNS configuration.
Make sure to assess the compatibility of the source workload and operating system with Migrate to Containers by following these guidelines.
Migrate to Containers supplies a tool that you run on a VM workload to determine the workload's fit for migration to a container. For more information, see Using the fit assessment tool.
Required skills for this phase:
- IT analyst with knowledge of the application's topology and the migration process.
Migration planning phase
In the migration planning phase, divide your applications into batches and translate the information collected during the discovery step into the Kubernetes model.
Your application environments, topologies, namespaces, and policies are centralized in Kubernetes YAML configuration files, each of which contains Kubernetes Custom Resource Definitions (CRDs).
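For example, an application environment identified during discovery might map to a dedicated Kubernetes namespace with labels that record its application and environment. A minimal sketch (the application and environment names are hypothetical):

```yaml
# Hypothetical example: a dedicated namespace for a migrated
# billing application's staging environment.
apiVersion: v1
kind: Namespace
metadata:
  name: billing-staging
  labels:
    app: billing
    env: staging
```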
Required skills for this phase:
- Application migration engineer or analyst. This person needs beginner-level knowledge of the Kubernetes managed object model, GKE deployments, and YAML files.
Landing zone setup phase
In the landing zone setup phase, you configure the deployment environment for the migrated containers.
In this step, you:
- Create or identify the GKE or Anthos cluster to host your migrated workloads. This deployment cluster can be a GKE or Anthos cluster on Google Cloud, an Anthos cluster on VMware, or an Anthos cluster on AWS (version 1.4 or later).
- Create VPC network rules and Kubernetes network policies for your applications.
- Apply Kubernetes service definitions.
- Make load-balancing choices.
- Configure DNS.
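As an illustration of the network policy and service definition steps, the following sketch restricts ingress to a migrated app tier and exposes it inside the cluster through a ClusterIP Service. All names, labels, and ports here are hypothetical:

```yaml
# Hypothetical example: allow only the web tier to reach the
# migrated app tier on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-app
  namespace: billing-staging
spec:
  podSelector:
    matchLabels:
      tier: app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: web
    ports:
    - protocol: TCP
      port: 8080
---
# Expose the app tier inside the cluster on port 80.
apiVersion: v1
kind: Service
metadata:
  name: billing-app
  namespace: billing-staging
spec:
  selector:
    tier: app
  ports:
  - port: 80
    targetPort: 8080
```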
Required skills for this phase:
- Cluster administrator, familiar with cluster deployment, Google Cloud networking, firewall rules, Identity and Access Management service accounts and roles, and launching deployments from the Google Cloud Marketplace.
Migration and deployment phase
You are now ready to migrate your VM workloads and deploy them as containers. The details of the migration and deployment phase are shown in the following diagram:
The migration and deployment workflow contains the following steps:
Configure the processing cluster: Create a GKE or Anthos processing cluster to run the Migrate to Containers components that perform the transformation from a source VM to the target container.
Add a migration source: Add the migration source that represents the source platform from which you will be migrating.
Create a migration plan: Create the migration plan that you then review and customize before executing the migration.
Review and customize the migration plan: Review and update the migration plan with input from key stakeholders, such as the application owner, security administrator, and storage administrator.
Generate artifacts: Use the migration plan as input to process the source VM and produce the relevant container artifacts:
- The generate artifacts step creates artifacts that can be used to deploy the migrated workload. The specific artifacts vary across workloads and the offered migration flows. For more information about generated artifacts, see the workload-specific documentation.
Deploy or integrate with CI/CD: With the container artifacts ready, you can continue with deployment to a test, staging, or production cluster. Alternatively, you can use the artifacts to integrate with a build-and-deploy pipeline using an orchestration tool such as Cloud Build.
Test: Verify that the extracted container image and data volume function correctly when executed in a container. You can run a sanity test on the processing cluster, identify any issues or edits needed in the migration plan, and then repeat the migration and test again.
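To make the deploy and test steps concrete, the generated artifacts typically include a deployment spec that you can apply to a test cluster. The following is only a minimal sketch; the image path and names are hypothetical, and the actual generated file depends on your workload and migration flow:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: billing-app
  template:
    metadata:
      labels:
        app: billing-app
    spec:
      containers:
      - name: billing-app
        # Hypothetical image path produced by the artifact
        # generation step and pushed to a container registry.
        image: gcr.io/my-project/billing-app:v1.0.0
```

You could apply such a spec with `kubectl apply -f` against the test cluster, run a sanity test, and revise the migration plan if needed before promoting the workload.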
Required skills for this phase:
- For workload migration:
  - Application owner or application migration analyst, with beginner-level knowledge of the Kubernetes managed object model, GKE deployments, and YAML editing.
- Optional: For data storage migration to a Persistent Volume other than a Google Cloud persistent disk:
  - Storage administrator or GKE administrator, familiar with Kubernetes persistent volume management on GKE.
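For data volumes migrated to a Persistent Volume, the workload typically binds to the volume through a PersistentVolumeClaim. A minimal sketch, with hypothetical names, size, and a placeholder storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: billing-data
  namespace: billing-staging
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Hypothetical storage class; substitute the class backing
  # your chosen Persistent Volume type.
  storageClassName: standard
```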
Operate and optimize phase
In the operate and optimize phase, you use tools provided by Anthos and the larger Kubernetes ecosystem.
In this phase, you can add access policies, encryption, and authentication using Istio, and add monitoring and logging with Cloud Logging and Cloud Monitoring, all by changing configuration rather than rebuilding your applications. You can also integrate with a CI/CD pipeline using tools like Cloud Build to implement day-2 maintenance procedures, such as software package and version updates.
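For example, with Istio installed you can require mutual TLS for all workloads in a migrated application's namespace purely through configuration. A sketch (the namespace name is hypothetical):

```yaml
# Require mTLS for every workload in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: billing-staging
spec:
  mtls:
    mode: STRICT
```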