Migrating a monolith VM - Discovery and assessment

Before you can migrate VM workloads with Migrate to Containers, you must first confirm that the workloads are a good fit for migration. In this tutorial, you learn how to quickly assess that fit using discovery tools. You also prepare for the migration phase by creating a processing cluster and installing Migrate to Containers on it.


At the end of this tutorial, you will have learned how to:

  • Determine the fit of your workload for migration by using the Linux discovery tool.
  • Create a processing cluster specific to your migration environment.
  • Install Migrate to Containers.

Before you begin

This tutorial is a follow-up to the Overview and setup tutorial. Before starting this tutorial, follow the instructions on that page to set up your project and deploy Bank of Anthos.

Use the discovery tools

In this section, you learn how to use the migration CLI tools to collect information on your candidate monolith VM and to assess whether that VM is a good fit for migration with Migrate to Containers.

  1. Still using Cloud Shell, create an SSH session to your VM. If prompted for a passphrase, leave it blank by pressing the Enter key.

    gcloud compute ssh ledgermonolith-service --tunnel-through-iap --project=PROJECT_ID
  2. Create a directory for the Linux discovery tool collection script and analysis tool.

    mkdir m2c && cd m2c
  3. Store the latest mFit version in an environment variable.

    MFIT_VERSION=$(curl -s https://mfit-release.storage.googleapis.com/latest)

  4. Download the collection script to the VM and make it executable.

    curl -O "https://mfit-release.storage.googleapis.com/${MFIT_VERSION}/mfit-linux-collect.sh"
    chmod +x mfit-linux-collect.sh
  5. Download the analysis tool, mfit, to the VM and make it executable.

    curl -O "https://mfit-release.storage.googleapis.com/${MFIT_VERSION}/mfit"
    chmod +x mfit
  6. Run the collection script on the VM.

    sudo ./mfit-linux-collect.sh

    The collection script generates a TAR archive named m4a-collect-ledgermonolith-service-TIMESTAMP.tar and saves it in the current directory. The timestamp is in the format YYYY-MM-DD-hh-mm.

  7. Run the analysis tool to import the archive, assess the VM, and generate a report.

    ./mfit report sample m4a-collect-ledgermonolith-service-TIMESTAMP.tar --format json > ledgermonolith-mfit-report.json

    The command saves a JSON file containing the fit assessment report named ledgermonolith-mfit-report.json in the current directory.
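    Before you exit the SSH session, you can optionally peek at the report from the command line. The sketch below assumes `jq` is available (it is preinstalled in Cloud Shell and on many distributions) and only lists the report's top-level keys, because the exact report schema varies by mFit version:

    ```shell
    REPORT=ledgermonolith-mfit-report.json
    if [ -f "$REPORT" ]; then
      # List the report's top-level keys; the exact schema varies by mFit version.
      jq 'keys' "$REPORT"
    else
      echo "report not found: $REPORT"
    fi
    ```

    This is read-only, so it is safe to run at any point after the report is generated.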

  8. Exit from the SSH session.

  9. To view the output of the migration discovery tool, first copy the resulting report from the VM to your Cloud Shell environment.

    gcloud compute scp --tunnel-through-iap \
      ledgermonolith-service:~/m2c/ledgermonolith-mfit-report.json ${HOME}/
  10. Download the analysis report to your local machine.

    cloudshell download ${HOME}/ledgermonolith-mfit-report.json
  11. Open the Migrate to Containers page in the Google Cloud console.

    Go to the Migrate to Containers page

  12. Click Open fit assessment report, then click Browse and select the JSON report you have just downloaded on your local machine.

  13. Click Open. The console processes the report and displays the results in a readable format. Your VM appears in the list of assessed VMs.

  14. Click the report name to open the report details.

    The fit result of the VM should say Excellent fit.

Create a processing cluster

In this section, you create the GKE cluster that serves as a processing cluster. The processing cluster is where you install Migrate to Containers and run the migration. You intentionally use a cluster separate from the one running Bank of Anthos so that its services are not disrupted. After the migration completes successfully, you can safely delete the processing cluster.

  1. Create a new Kubernetes cluster to use as a processing cluster.

    gcloud container clusters create migration-processing \
      --project=PROJECT_ID --zone=COMPUTE_ZONE --machine-type e2-standard-4 \
      --image-type cos_containerd --num-nodes 1 \
      --subnetwork default --scopes "https://www.googleapis.com/auth/cloud-platform" \
      --addons HorizontalPodAutoscaling,HttpLoadBalancing
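
    Optionally, you can confirm from the command line that the cluster is ready before moving on. This sketch uses the standard `gcloud container clusters describe` command; `PROJECT_ID` and `COMPUTE_ZONE` are the same placeholders as above, and the `command -v` guard simply makes the snippet a no-op on machines without the Google Cloud CLI:

    ```shell
    CLUSTER=migration-processing
    if command -v gcloud >/dev/null 2>&1; then
      # The status should read RUNNING once the cluster is ready.
      gcloud container clusters describe "$CLUSTER" \
          --project=PROJECT_ID --zone=COMPUTE_ZONE \
          --format="value(status)"
    else
      echo "gcloud not available in this environment"
    fi
    ```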
  2. Open the Migrate to Containers page in the Google Cloud console.

    Go to the Migrate to Containers page

  3. In the Processing clusters tab, click Add processing cluster.

  4. Select Linux as the workloads type then click Next.

  5. Select the cluster that you created in the previous steps, migration-processing, from the drop-down list, then click Next.

  6. Click Next, then Continue, then Deploy.

  7. Select Done after installation succeeds.
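
You can optionally double-check the installation from the command line as well. `migctl` is the Migrate to Containers CLI; assuming it is installed and your kubeconfig points at the processing cluster, `migctl doctor` validates the deployment. The guard makes the sketch a no-op where `migctl` is absent:

```shell
MIGCTL_BIN=$(command -v migctl || echo "")
if [ -n "$MIGCTL_BIN" ]; then
  # Validates the Migrate to Containers deployment on the current cluster.
  migctl doctor
else
  echo "migctl not available in this environment"
fi
```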

What's next

Now that you have learned how to use the migration discovery tools to assess your VM and created your processing cluster, you can move on to the next section of the tutorial, Migration and deployment.

If you end the tutorial here, don't forget to clean up your Google Cloud project and resources.

Clean up

To avoid unnecessary Google Cloud charges, you should delete the resources used for this tutorial when you're done with it. These resources are:

  • The boa-cluster GKE cluster
  • The migration-processing GKE cluster
  • The ledgermonolith-service Compute Engine VM
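
If you prefer to delete the resources individually rather than deleting the whole project, the following sketch uses the standard gcloud delete commands. `PROJECT_ID` and `COMPUTE_ZONE` are placeholders for your project ID and the zone(s) you used, and the guard makes the snippet a no-op on machines without the Google Cloud CLI:

```shell
PROJECT=PROJECT_ID   # replace with your project ID
ZONE=COMPUTE_ZONE    # replace with the zone(s) you used
if command -v gcloud >/dev/null 2>&1; then
  # --quiet skips the interactive confirmation prompts.
  gcloud container clusters delete boa-cluster \
      --project="$PROJECT" --zone="$ZONE" --quiet
  gcloud container clusters delete migration-processing \
      --project="$PROJECT" --zone="$ZONE" --quiet
  gcloud compute instances delete ledgermonolith-service \
      --project="$PROJECT" --zone="$ZONE" --quiet
else
  echo "gcloud not available in this environment"
fi
```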

You can either delete these resources manually, or use the following steps to delete your project, which also deletes all of its resources.

  • In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  • In the project list, select the project that you want to delete, and then click Delete.
  • In the dialog, type the project ID, and then click Shut down to delete the project.