Migrate for Anthos provides a self-service tool that you run on a Linux VM workload to determine the workload's fit for migration to a container.
The tool outputs a report describing the analysis results for the VM, including any issues that must be resolved before migration and an overall fit assessment, which is one of the following:
- Excellent fit.
- Good fit with some manual work.
- Needs minor work before migrating.
- Needs moderate work before migrating.
- Needs major work before migrating.
- No fit.
How the tool works
The Linux discovery tool operates in two distinct phases:
- Collect phase: A bash script named `m4a-fit-collect.sh` collects information about the Linux VM to be migrated and writes the collected data to a tar file. A copy of the data remains on the VM filesystem for later use during migration.
- Analysis phase: The `m4a-fit-analysis` tool parses the output of the collect phase, applies a set of rules, and creates a fit assessment along with a detailed report describing the tool's findings. You can view the report as an HTML file or as a JSON file.
You can run the collect tool and analysis tool on the same VM. However, if you
have multiple VMs, you can instead run the collect tool on each VM separately,
then upload the tar file from each VM to a single machine for analysis.
The `m4a-fit-analysis` tool can process multiple tar files at once, outputting
a fit assessment and analysis for each VM.
The following image shows the HTML output of the tool for the evaluation of a VM named "my-vm":
The table contains one row for each VM analyzed, including a link to details about the VM.
The Data collection date, Identified OS, M4A fit score, and Workload type columns contain summary information about the VM and the analysis results.
The Additional info column contains a link to details about each VM, including information such as listening ports, mount points, NFS mount points, and other information.
Each rule column shows the results from applying a rule to the VM. A value of:
- Not Detected means that rule detected no migration issue.
- Detected means the rule detected a migration issue for the VM. Click Detected to view details about the rule output.
The Linux discovery tool has the following prerequisites:
- The target VM being evaluated must be running so that applications, processes, and open ports are discoverable.
- The machine used to run the analysis tool, `m4a-fit-analysis`, must run a Linux kernel version later than 2.6.23.
- You must run the collect script as the root user.
Installing and running the tool
You must download the collection script and the analysis tool. You can either:
- Download both tools to a single VM.
- If you have multiple VMs, download the collection script to each workload VM, then upload the collected data to a central machine for analysis by the analysis tool.
To evaluate a VM:
Log in to your VM.
Create a directory for the collection script and analysis tool.
Download the collection script to the VM and make it executable:
chmod +x m4a-fit-collect.sh
Download the analysis tool to the VM and make it executable:
chmod +x m4a-fit-analysis
Run the collect script on the VM.
The script outputs a tar file named `m4a-collect-machinename-timestamp.tar` to the current directory and also writes a copy to the VM filesystem.
The timestamp is in the format `YYYY-MM-DD-hh-mm`. See Collect script operation for a description of the tar file format.
Note: If you installed the analysis tool on a central machine, upload the tar file to that machine for processing.
Run the analysis tool on the tar file.
The tool outputs two files to the current directory:
- An HTML file named `analysis-report-timestamp.html`. The timestamp is in the format `YYYY-MM-DD-hh-mm`. View this file in a browser to examine the report.
- A JSON file named `analysis-report-timestamp.json` containing a JSON format of the output. You can use the file as input to data visualization tools.
The output files contain information about the analysis, including the fit assessment. See Report file format for more information.
To run the analysis tool on multiple tar files, you can use the command:
$ ./m4a-fit-analysis tarFile1 tarFile2 tarFile3 ...
The tool outputs a single row to the output files for each input tar file. Within the report, you can identify each VM by its hostname, that is, the value returned by running the `hostname` command on the VM.
You can use the `--verbosity` option to control the output of the tool.

Open `analysis-report-timestamp.html` in a browser to view the report. See Analyze tool operation for a description of the file format.
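As a quick check of the naming conventions above, the expected output filenames can be reconstructed in shell. This is only a sketch of the documented `YYYY-MM-DD-hh-mm` naming pattern; the hostname and clock values are whatever your machine reports:

```shell
# Reconstruct the documented output filenames.
# Assumes the documented YYYY-MM-DD-hh-mm timestamp format.
machinename="$(hostname)"
timestamp="$(date +%Y-%m-%d-%H-%M)"

collect_tar="m4a-collect-${machinename}-${timestamp}.tar"
html_report="analysis-report-${timestamp}.html"
json_report="analysis-report-${timestamp}.json"

printf '%s\n%s\n%s\n' "$collect_tar" "$html_report" "$json_report"
```

Knowing the patterns makes it easy to glob for the files, for example `./m4a-fit-analysis m4a-collect-*.tar`.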
Collect script operation
The collect script runs a series of Linux commands to gather information about the source VM and also collects information from several files on the VM.
The following sections describe the operation of the script. You can also examine the script in a text editor to see more detailed information.
The script runs Linux commands to do the following:
- List all active listening ports
- List all running user processes
- List installed packages (Debian-based)
- List installed packages (RPM-based)
- Get SELinux status
- Get loaded kernel modules
- List running services (systemd-based)
- List running services (init.d/Upstart-based)
- List open handles to files and hardware devices
- List running Docker containers
- List IP addresses assigned to NICs
- Show NIC configurations and assigned IPs
- List block device attributes
- List block devices
The script also copies several files into the generated tar file, capturing the following information:
- The list of mounts to be mounted at startup
- Aliases for hosts and DNS data
- The name of the Linux distribution
- The configured network interfaces
- The current memory usage and total on the VM
- The currently mounted devices
- The list of NFS exports
- The WebSphere version (when installed at the default location)
- WebSphere info (when installed at the default location)
The script also searches several directories, to a depth of two, to locate the directories of installed utilities and software.
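The depth-limited search can be illustrated with `find`. The directory names below are throwaway examples created for the sketch, not the collect script's actual search roots:

```shell
# Illustrate a depth-two directory search with find.
# $root and its subdirectories are stand-ins for illustration only,
# not the real locations the collect script scans.
root="$(mktemp -d)"
mkdir -p "$root/opt/websphere/bin" "$root/usr/local/tomcat"

# Directories deeper than two levels (bin, tomcat) are not reported.
find "$root" -mindepth 1 -maxdepth 2 -type d | sort
```

The `-maxdepth 2` predicate is what caps the search depth; anything nested deeper is skipped.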
Collect tar file format
The `m4a-fit-collect.sh` script outputs a tar file named
`m4a-collect-machinename-timestamp.tar` to the current directory and also
writes a copy to the VM filesystem.
While not required, you can optionally expand the tar file by using the command:
tar xvf m4a-collect-machinename-timestamp.tar
The tar file has the following format:
collect.log   # Log output of the script
files         # Directory containing files with their full path from root. For example:
|- etc/fstab
|- etc/hostname
|- etc/network/interfaces
|- ...
commands      # Output of commands run by the script:
|- dpkg
|- netstat
|- ps
|- ...
found_paths   # Text file with the list of installation directories
machinename   # Text file with machine name
ostype        # Text file with operating system type (Linux)
timestamp     # Text file with collection timestamp
version       # Text file with version number of the script
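To show how individual fields can be read back without unpacking the archive, here is a sketch that builds a mock tar in the layout above (all field values are made up for the example) and extracts one member to stdout:

```shell
# Build a mock tar in the documented layout and read one field from it.
# All values here are illustrative, not real collector output.
workdir="$(mktemp -d)"
mkdir -p "$workdir/collect/files/etc" "$workdir/collect/commands"
echo "my-vm" > "$workdir/collect/machinename"
echo "Linux" > "$workdir/collect/ostype"
echo "2021-01-01-12-00" > "$workdir/collect/timestamp"

tarfile="$workdir/m4a-collect-my-vm-2021-01-01-12-00.tar"
tar -C "$workdir/collect" -cf "$tarfile" .

# -O extracts the named member to stdout instead of to disk.
tar -xOf "$tarfile" ./machinename   # prints: my-vm
```

The same `-O` trick works for `ostype`, `timestamp`, and `version`, which is handy when scripting over many collected archives.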
Analyze tool operation
The analyze tool examines the contents of the tar file from a VM, applies a set of rules, and outputs the following report files containing the fit assessment and analysis results:
- An HTML file named `analysis-report-timestamp.html`, written to the current directory. The timestamp is in the format `YYYY-MM-DD-hh-mm`.
- A JSON file named `analysis-report-timestamp.json` containing a JSON format of the output. You can use the JSON file as input to data visualization tools.
Calculating the fit assessment
A rule violation detected by the tool affects the final fit assessment; each rule has its own predefined severity. For example, suppose the tool detects that SELinux is enabled on the VM, corresponding to rule SEL01: SELinux enforced. That rule has a severity of "Needs moderate work before migrating", so the final fit assessment is "Needs moderate work before migrating".
If the tool detects multiple rule violations, only the rule with the highest severity determines the final fit assessment. For example, suppose two rule violations are detected:
- An incompatible file system is detected, rule IFS01: incompatible filesystem, with a severity of "No fit".
- SELinux enabled is detected, with a severity of "Needs moderate work before migrating".
The tool returns only the fit assessment associated with the more severe of the two rules. Therefore, it returns "No fit".
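The "highest severity wins" behavior can be sketched in shell. The numeric ranking below is an assumption made for illustration; the tool's internal ordering of severities is not published:

```shell
# Sketch: the most severe detected rule determines the final assessment.
# The numeric ranks are assumed for illustration only.
rank() {
  case "$1" in
    "Excellent fit")                        echo 0 ;;
    "Good fit with some manual work")       echo 1 ;;
    "Needs minor work before migrating")    echo 2 ;;
    "Needs moderate work before migrating") echo 3 ;;
    "Needs major work before migrating")    echo 4 ;;
    "No fit")                               echo 5 ;;
  esac
}

# The two violations from the example above.
assessment="Excellent fit"
for violation in "No fit" "Needs moderate work before migrating"; do
  if [ "$(rank "$violation")" -gt "$(rank "$assessment")" ]; then
    assessment="$violation"
  fi
done

echo "$assessment"   # prints: No fit
```

Because only the maximum matters, fixing the single most severe violation can immediately improve the overall assessment.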
HTML report file format
The HTML file contains a row describing each VM analyzed and a set of columns that show the results from applying each rule to the VMs. Each rule column displays either:
- Not Detected, meaning the rule detected no migration issue.
- Detected, meaning the rule detected a migration issue for the VM. Click Detected to view details about the rule output.
The HTML file contains the following columns:

| Column | Description |
|---|---|
| VM name | The name of the VM. |
| Data collection date | The timestamp of the analysis, in the form YYYY-MM-DD-hh-mm. |
| Identified OS | The operating system, which is always Linux. |
| M4A fit score | The fit assessment. See Calculating the fit assessment for information on interpreting this assessment. |
| Workload type | If detected, shows IBM WebSphere. |
| Additional info | Summary information about the VM, including the listening ports, mount points, NFS mount points, and other information. |
The tool applies rules such as the following:

| Rule description | Severity | Recommendation |
|---|---|---|
| NFS or CIFS volume mounts detected. | Good fit | See Mounting External Volumes for more on how to attach NFS/CIFS volumes to deployment YAML. |
| Incompatible file system detected (rule IFS01). | No fit | You cannot migrate workloads with an incompatible file system. |
| SELinux enforced (rule SEL01). SELinux does not work well in nested containers, so the recommendation is to disable it before migrating. | Needs moderate work | Disable SELinux or manually apply an … |
| Detected an NFS export file, and the NFS server kernel module is loaded. Two rules are applied, each returning a different severity. | See description | Migrate NFS servers to Cloud Filestore. |
| Binding to a specific network interface detected. If there are ports being listened to on a specific NIC (and not 0.0.0.0, *, or loopback), it usually means there is a multi-NIC setup. Three rules are applied, each returning a different severity. | See description | Update the VM to listen on any one NIC, because Migrate for Anthos supports only one NIC. |
| Running database applications detected that are not a good fit for migration. | Needs minor work | Consider migrating to Cloud SQL. |
| Docker detected. Nesting Docker inside containers is not supported. | | Consider using Migrate for Compute Engine or running the containers directly on GKE/Anthos. |
| Static hosts definitions detected in /etc/hosts. | Good fit | See Adding entries to Pod /etc/hosts with HostAliases for information on modifying your static hosts. |
| Open block device detected. | No fit | Not compatible with Migrate for Anthos. |
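For the SELinux rule above, the usual remediation is to switch SELinux off in its configuration file. The sketch below edits a mock copy of the file; on a real VM the file lives at /etc/selinux/config and a reboot applies the change:

```shell
# Flip SELINUX=enforcing to SELINUX=disabled in a mock config file.
# On a real system, edit /etc/selinux/config and reboot;
# the mktemp copy here is only for illustration.
cfg="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```

You can check the current mode with `getenforce` before and after rebooting to confirm the change took effect.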
Changes to the tool for Migrate for Anthos 1.7
For the Migrate for Anthos 1.7 release, we added new features and changed existing features of the tool. The following table describes these changes:
| Change | Description |
|---|---|
| Removed the fit score. | In the previous release, the fit score was in the range of 0 (no fit) to 10 (great fit). The score has been replaced by an assessment value as shown above. |
| Removed rule weights. | The weights of the rules have been replaced with an assessment result. |
| Replaced the CSV report format with an HTML file and a JSON file. | Use the HTML file to view the report in a browser, and use the JSON file as input to a data visualization tool. See HTML report file format for more. |
| Changed the location of the tar file created by `m4a-fit-collect.sh`. | As in the previous release, the script writes the tar file to the current directory, but it now also writes a copy to the VM filesystem for later use during migration. |
| Added a version file to the tar file created by `m4a-fit-collect.sh`. | The file contains the version of the script. |
| Added a new column for the detected Workload type. | If detected, shows IBM WebSphere. |
| Added new summary fields for each VM in the report. | These fields include the listening ports, mount points, NFS mount points, and other information. |