Migrating a monolith VM - Discovery and assessment

Before you can migrate VM workloads using Migrate to Containers, you must first confirm that the workloads are suited for migration. In this tutorial, you learn how to quickly assess that fit using the discovery tools. You also prepare for the migration phase by creating a processing cluster onto which you install Migrate to Containers.

Objectives

In this tutorial, you learn how to:

  • Assess your workload for migration by using the Linux discovery tool.
  • Create a processing cluster specific to your migration environment.
  • Install Migrate to Containers.

Before you begin

This tutorial is a follow-up to the Overview and setup tutorial. Before starting this tutorial, follow the instructions on that page to set up your project and deploy Bank of Anthos.

Use the discovery tools

In this section, you learn how to use the migration CLI tools to collect information about your candidate monolith VM and to assess whether that VM is suited for migration with Migrate to Containers.

  1. Still in Cloud Shell, connect to your VM over SSH. If you're asked for a passphrase, leave it blank by pressing Enter.

    gcloud compute ssh ledgermonolith-service --tunnel-through-iap --project=PROJECT_ID
    
  2. Create a directory for the Linux discovery tool guest collection script and analysis tool.

    mkdir m2c && cd m2c
    
  3. Store the latest Migration Center discovery client CLI version in an environment variable.

    MCDC_VERSION=$(curl -s https://mcdc-release.storage.googleapis.com/latest)
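
    Optionally, confirm that the variable was populated with a version string before continuing:

    echo "${MCDC_VERSION}"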
    

  4. Download the guest collection script to the VM and make it executable.

    curl -O "https://mcdc-release.storage.googleapis.com/${MCDC_VERSION}/mcdc-linux-collect.sh"
    chmod +x mcdc-linux-collect.sh
    
  5. Download the mcdc CLI to the VM and make it executable.

    curl -O "https://mcdc-release.storage.googleapis.com/${MCDC_VERSION}/mcdc"
    chmod +x mcdc
    
  6. Run the guest collection script on the VM.

    sudo ./mcdc-linux-collect.sh
    

    The guest collection script generates a TAR archive named mcdc-collect-ledgermonolith-service-TIMESTAMP.tar and saves it in the current directory. The timestamp is in the format YYYY-MM-DD-hh-mm.
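
    If you're curious about what was collected, you can list the archive contents before importing it. Replace TIMESTAMP with the value from the generated file name:

    tar -tf mcdc-collect-ledgermonolith-service-TIMESTAMP.tar | head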

  7. Run the analysis tool to import the archive, assess the VM, and generate a report.

    ./mcdc report sample mcdc-collect-ledgermonolith-service-TIMESTAMP.tar --format json > ledgermonolith-mcdc-report.json
    

    The command saves a JSON file containing the offline assessment report named ledgermonolith-mcdc-report.json in the current directory.
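
    If jq is installed on the VM, you can optionally take a quick look at the report's top-level structure before moving on (the exact fields depend on the mcdc version):

    jq 'keys' ledgermonolith-mcdc-report.json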

  8. Exit from the SSH session.

    exit
    
  9. To view the output of the migration discovery tool, first copy the resulting report from the VM to your Cloud Shell environment.

    gcloud compute scp --tunnel-through-iap \
      ledgermonolith-service:~/m2c/ledgermonolith-mcdc-report.json ${HOME}/
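
    To confirm that the copy succeeded, you can check that the report file now exists in your Cloud Shell home directory:

    ls -lh ${HOME}/ledgermonolith-mcdc-report.json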
    
  10. Download the analysis report to your local machine.

    cloudshell download ${HOME}/ledgermonolith-mcdc-report.json
    
  11. Open the Migrate to Containers page in the Google Cloud console.

    Go to the Migrate to Containers page

  12. Click Open fit assessment report, then click Browse and select the JSON report that you just downloaded to your local machine.

  13. Click Open. The console processes the report and displays the results in a readable format. Notice that your VM appears in the list of assessed VMs.

  14. Click the report name to open the report details.

    The fit result of the VM should say Excellent fit.

Create a processing cluster

In the following steps, you create the GKE cluster that serves as a processing cluster. The processing cluster is where you install Migrate to Containers and execute the migration. You intentionally don't use the cluster where Bank of Anthos is running, so that the migration doesn't disrupt its services. After the migration completes successfully, you can safely delete the processing cluster.

  1. Create a new Kubernetes cluster to use as a processing cluster.

    gcloud container clusters create migration-processing \
      --project=PROJECT_ID --zone=COMPUTE_ZONE --machine-type e2-standard-4 \
      --image-type cos_containerd --num-nodes 1 \
      --subnetwork default --scopes "https://www.googleapis.com/auth/cloud-platform" \
      --addons HorizontalPodAutoscaling,HttpLoadBalancing
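
    Cluster creation can take several minutes. When the command finishes, you can optionally fetch credentials and confirm that the node is ready:

    gcloud container clusters get-credentials migration-processing \
      --zone=COMPUTE_ZONE --project=PROJECT_ID
    kubectl get nodes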
    
  2. Open the Migrate to Containers page in the Google Cloud console.

    Go to Migrate to Containers

  3. In the Processing clusters tab, click Add processing cluster.

  4. Select Linux as the workloads type, then click Next.

  5. Select the cluster that you created in the previous steps, migration-processing, from the drop-down list, then click Next.

  6. In the Configuration section, leave the default values as is and click Next.

  7. In the Service account section, check that Create a new service account is selected.

  8. In the Service account name field, enter tutorial-sa1.

  9. Click Continue, then click Deploy.

    Allow a few minutes for the processing cluster setup to complete.
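
    Optionally, you can watch the Migrate to Containers components come up from Cloud Shell. The namespace that the components run in depends on the product version, so a cluster-wide listing is the simplest check:

    kubectl get pods --all-namespaces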

Clean up

To avoid unnecessary Google Cloud charges, delete the resources used for this tutorial when you're done with them. These resources are:

  • The boa-cluster GKE cluster
  • The migration-processing GKE cluster
  • The ledgermonolith-service Compute Engine VM
  • The tutorial-sa1 service account

You can either delete these resources manually, or follow the steps below to delete your project, which removes all of its resources.

  • In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  • In the project list, select the project that you want to delete, and then click Delete.
  • In the dialog, type the project ID, and then click Shut down to delete the project.
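
Alternatively, you can delete the project from Cloud Shell with a single gcloud command. Replace PROJECT_ID with your project ID:

    gcloud projects delete PROJECT_ID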
What's next

Now that you have used the migration discovery tools to assess your VM and created your processing cluster, you can move on to the next section of the tutorial, Migration and deployment.

If you end the tutorial here, don't forget to clean up your Google Cloud project and resources.