Planning best practices

This topic offers advice for migrating applications to Google Kubernetes Engine (GKE) or GKE Enterprise, based on real-world application migrations performed with Migrate to Containers.

The Migration Center discovery client CLI (mcdc CLI) is a tool that you use to determine a VM workload's fit for migration to a container.

Migrate to Containers is recommended for certain types of Linux and Windows workloads, which are detailed below.

Good fit


Linux applications that are a good fit for migration using Migrate to Containers include the following application architectures:

  • Web and application servers
  • Business logic middleware (for example, Tomcat)
  • Multi-VM, multi-tier stacks (for example, LAMP)
  • Small and medium-sized databases (for example, MySQL and PostgreSQL)

In addition, applications best suited for migration with Migrate to Containers have the following operational characteristics:

  • Low duty-cycle and bursty workloads
  • Development, testing, and training lab environments
  • Always-on, low-load services

In general, most Linux workloads are compatible with migration, except for those workloads explicitly listed below under Not a good fit.


Windows applications that are a good fit for migration using Migrate to Containers are workloads that meet all of the following criteria:

  • IIS 7 or later, ASP.NET with .NET Framework 3.5 or later
  • Web and logic tiers
  • Windows Server 2008 R2 or later

Operating system support

Migrate to Containers is compatible with these VM operating systems.

Not a good fit


For Linux, applications and servers that are not a good fit for migration with Migrate to Containers include:

  • High performance and large in-memory databases
  • VMs with special kernel drivers (for example, kernel-mode NFS)
  • Dependencies on specific hardware
  • Software with licenses tied to certain hardware ID registration


For Windows, workloads without IIS 7 or later installed are not a fit for migration. Other types of applications that are not a fit for migration include:

  • Applications with dependencies on GPUs or TPUs
  • Low-level networking applications
  • Desktop, RDP, and VDI applications
  • Applications with bring-your-own-license (BYOL) requirements

DNS and network access rules

Before migrating to GKE, make sure that you understand the network resources and services that your migrated workloads use, and ensure that they are accessible and addressable from your Virtual Private Cloud (VPC).

Plan your Kubernetes service names

If your workloads depend on DNS to access services, you need to plan your Kubernetes namespace scheme, network policies, and Service names.

The deployment spec generated by the migration process contains a suggested headless Service of type "ClusterIP". Because the Service is headless, it has no single cluster IP and performs no load balancing, and it is reachable only from within the cluster. The Kubernetes endpoints controller modifies the DNS configuration to return records (addresses) that point directly to the Pods, which are labeled with "app": "app-name" in the deployment_spec.yaml.
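As an illustrative sketch only (the Service name, label value, and port are placeholders, not generated output), such a headless Service looks like this:

```yaml
# Hypothetical sketch of a suggested headless Service; the name,
# label value, and port are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: app-name
spec:
  clusterIP: None        # headless: cluster DNS returns the Pod IPs directly
  selector:
    app: app-name        # matches the "app" label in deployment_spec.yaml
  ports:
  - port: 80
    targetPort: 80
```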

Create and apply services for connections to pods and containers

After you migrate a workload, its original hostname no longer applies. To allow connections to your workloads, create and apply Service objects.

Identify and configure migrated names and IPs

GKE manages the /etc/hosts file. In Migrate to Containers, adapting the hosts file from the source VM to GKE is not yet automated. You must identify the names and IPs in the hosts file on the migrated VM and configure them as hostAliases in your deployment spec.
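For example, if the source VM's hosts file mapped internal names to fixed IPs, the equivalent hostAliases entries in the Pod template could look like the following sketch (the names and IPs here are illustrative, not taken from any real migration):

```yaml
# Illustrative fragment of a Deployment's Pod template: replace the
# IPs and hostnames with the entries found in the source VM's
# /etc/hosts file.
spec:
  template:
    spec:
      hostAliases:
      - ip: "10.0.0.12"
        hostnames:
        - "db.internal"
      - ip: "10.0.0.15"
        hostnames:
        - "cache.internal"
```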

Place dependent services in the same namespace

Services that are dependent on each other should be colocated in the same Kubernetes namespace and use short DNS names (for example app and db) to communicate. This configuration also helps to replicate multiple application environments, such as production, staging, and test.
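As an illustrative sketch (the names, namespace, and port are assumptions), a database Service colocated with the application lets the application reach it by the short name db, while Pods in other namespaces would need the qualified name db.&lt;namespace&gt;.svc.cluster.local:

```yaml
# Illustrative: a Service named "db" in the same namespace as the
# application. Duplicating the same manifests under another namespace
# (for example, production) clones the environment.
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: staging
spec:
  selector:
    app: db
  ports:
  - port: 5432
```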

Control access surface with GKE networking

GKE has sophisticated networking controls. Pods can be accessed from different networks, such as the public internet, VPC network, or internal cluster network. This offers the opportunity to further control and restrict the access surface to a workload without the added complexity of managing VLANs, filters, or routing rules.

For example, a typical three-tier application has frontend, application, and database tiers. When migrated to Google Cloud, the frontend service is configured with a LoadBalancer on the VPC network. The other tiers are not directly accessible from outside the GKE cluster. A network access policy ensures the application service is accessible only by the frontend pods from inside the internal cluster network. Another policy ensures the database pods are accessible only by the application pods.
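The policy restricting the database tier to the application tier could be sketched as follows; the tier labels and the port are assumptions for illustration:

```yaml
# Hypothetical sketch: allows ingress to database Pods only from
# application Pods. The label names and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-from-app
spec:
  podSelector:
    matchLabels:
      tier: database
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: application
    ports:
    - protocol: TCP
      port: 3306
```

A similar policy with `tier: frontend` in the `from` clause would restrict the application tier to the frontend Pods.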


Define NFS mounts as Persistent Volumes

When you create the migration plan, NFS client mounts on the source VM are automatically discovered and added to the generated plan.

While the mounts are added to the plan, they are disabled by default. That means PersistentVolume and PersistentVolumeClaim definitions are not included in the deployment_spec.yaml file when you generate migration artifacts. If you want Migrate to Containers to generate PersistentVolume and PersistentVolumeClaim definitions, you must first edit the migration plan to enable the NFS client mount.
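When an NFS mount is enabled, the generated definitions take roughly the following shape; the server, path, and capacity shown here are placeholders, not actual generated values:

```yaml
# Placeholder values throughout; the real definitions are generated
# from the enabled NFS mount entry in the migration plan.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.20
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-share-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 10Gi
```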

See Customizing NFS mounts for more information.

Kernel-mode NFS servers

VMs with NFS servers running in kernel mode cannot be migrated into GKE with Migrate to Containers; migrate these VMs to Compute Engine instead. Alternatively, you can use Filestore for a cloud-native NFS solution.

Migrating data from source NFS shares

If your source VM is using an NFS share mount, this data cannot be migrated automatically. Either mount the share on the migrated workload container by using an NFS persistent volume, or, if the source NFS share is remote, copy the data to another file share that provides lower-latency access to the cluster.

For data copy options, see the following:

  • Storage Transfer Service or Transfer Appliance.
  • Copying data with gcloud storage rsync (from the source file share to a Cloud Storage bucket, and then from the bucket to the file share in the cloud).
  • Third-party solutions, such as NetApp SnapMirror to NetApp Cloud Volumes.
  • OSS utilities such as rsync.

Ensure databases are accessible

If your application relies on a database, whether it runs locally on the VM or on an external machine, you must ensure that the database is still accessible from the new migrated Pod. You also need to validate that your network firewall policy allows access from the cluster to the database.
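One way to keep an external database addressable by a stable in-cluster name is an ExternalName Service; this is an illustrative option, not something the migration generates, and the hostname is a placeholder:

```yaml
# Illustrative: resolves the in-cluster name "db" to an external
# database host via a DNS CNAME record. The hostname is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: db.example.internal
```

The migrated Pod can then connect to the host `db`, and you can later repoint the Service at a migrated database without changing application configuration.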

To migrate the database to Google Cloud, we recommend using Database Migration Service.

Alternatively, you can run the database inside the cluster. For more information, see Plan your database deployments on GKE.

Ensure that injected metadata is available

If your applications rely on injected metadata (for example, environment variables), you must ensure that this metadata is available on GKE. If the same metadata injection method is not available, GKE offers ConfigMaps and Secrets.
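For example, environment variables previously injected on the VM can be carried over with a ConfigMap consumed by the workload; the names and values below are illustrative assumptions:

```yaml
# Illustrative ConfigMap plus a Pod that consumes it as
# environment variables via envFrom.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
  CACHE_HOST: "cache.internal"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: app-image   # placeholder image name
    envFrom:
    - configMapRef:
        name: app-config
```

Use a Secret instead of a ConfigMap for sensitive values such as credentials.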

Configure necessary services to start at runlevel 3

VMs migrated into GKE with Migrate to Containers are booted in the container at Linux runlevel 3, and the workloads do not reach runlevel 5. Certain services (for example, X11 or XDM, for remote GUI access using VNC) are configured by default to start only at runlevel 5. Configure any necessary services to start at runlevel 3.

Disable unneeded services

Migrate to Containers automatically disables hardware- or environment-specific services, and a predefined set of additional services running on VMs. These services are not required after you migrate your workloads to containers.

For example, Migrate to Containers automatically disables iptables, ip6tables, and firewalld. For a complete list of services disabled by Migrate to Containers, download the blocklist.yaml file.

Customizing disabled services

By default, Migrate to Containers disables the unneeded services listed above. You can also define your own custom list of one or more services to disable in a migrated container by customizing the migration plan to define a blocklist.
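As a hedged sketch, the field names below follow the systemServices form seen in Linux migration plans; the exact schema depends on your Migrate to Containers version, so confirm against your generated plan before editing, and treat the service names as illustrative:

```yaml
# Assumed migration-plan fragment; verify the schema in your own
# generated migration plan. Service names are illustrative.
spec:
  systemServices:
  - name: cups
    enabled: false    # add to the blocklist: disable in the container
  - name: sshd
    enabled: true     # keep this service running
```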

Maintain and update migrated VMs

Using the artifacts that you generate during migration, you can apply application and user-mode OS software updates and security patches, edit embedded configurations, add or replace files, and update the Migrate to Containers runtime software.

For more information, see Post-migration image updates.

What's next