Customize migration plan for Linux VMs
Before executing a migration plan, you should review and optionally customize it. The details of your migration plan are used to extract the workload's container artifacts from the source VM, and also to generate Kubernetes deployment files that you can use to deploy the container image to other clusters, such as a production cluster.
This section describes the migration plan's contents and the kinds of customizations you might consider before you execute the migration and generate deployment artifacts.
Before you begin
This topic assumes that you've already created a migration and have the resulting migration plan file.
Edit the migration plan
You can edit the migration plan by using the migctl tool or the Google Cloud console.
migctl
You must download the migration plan before you can edit it:
Download the migration plan. The migration plan of Linux workloads is represented by LinuxMigrationCommonSpec:
migctl migration get my-migration
Edit the downloaded migration plan, my-migration.yaml, in a text editor.
When your edits are complete, save and upload the revised migration plan:
migctl migration update my-migration --main-config my-migration.yaml
Repeat these steps if more edits are necessary.
Console
Edit the migration plan in the Google Cloud console by using the YAML editor. The migration plan of Linux workloads is represented by the LinuxMigrationCommonSpec CRD:
Open the Migrate to Containers page in the Google Cloud console.
Click the Migrations tab to display a table containing the available migrations.
In the row for your desired migration, select the migration Name to open the Details tab.
Select the YAML tab.
Edit the migration plan as necessary.
When you are done editing, you can either:
Save the migration plan. You will then have to manually execute the migration to generate the migration artifacts. Use the procedure shown in Executing a migration.
Save and generate the artifacts. Execute the migration by using your edits to generate the migration artifacts. The process is the same as described in Executing a migration. You can then monitor the migration as described in Monitoring a migration.
CRD
You must download the migration plan, edit it, then apply it. The migration plan of Linux workloads is represented by the LinuxMigrationCommonSpec CRD:
Get the name of the AppXGenerateArtifactsFlow:
kubectl get migrations.anthos-migrate.cloud.google.com -n v2k-system -o jsonpath={.status.migrationPlanRef.name} my-migration
The name is returned in the form appx-generateartifactsflow-id.
Get the migration plan by name and write it to a file named my-plan.yaml:
kubectl -n v2k-system get appxgenerateartifactsflows.anthos-migrate.cloud.google.com -o jsonpath={.spec.appXGenerateArtifactsConfig} appx-generateartifactsflow-id > my-plan.yaml
Edit the migration plan as necessary.
Apply the file:
kubectl patch appxgenerateartifactsflows.anthos-migrate.cloud.google.com --type merge -n v2k-system --patch '{"spec": {"appXGenerateArtifactsConfig": '"$(jq -n --rawfile plan my-plan.yaml '$plan')"'}}' appx-generateartifactsflow-id
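The jq invocation above embeds the raw text of my-plan.yaml as a single JSON string under spec.appXGenerateArtifactsConfig. As a rough sketch of what that patch payload looks like (illustration only, not an official tool), the same document can be built in Python:

```python
import json

# Build the merge-patch body that the kubectl patch command above sends:
# the whole YAML plan becomes one JSON-encoded string value.
def build_patch(plan_yaml_text: str) -> str:
    patch = {"spec": {"appXGenerateArtifactsConfig": plan_yaml_text}}
    return json.dumps(patch)

plan = "filters:\n- '- *.swp'\n"
print(build_patch(plan))
```

This is why jq's --rawfile is used: it handles the JSON string escaping of newlines and quotes in the plan file.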
Specify content to exclude from the migration
By default, Migrate to Containers excludes typical VM content that isn't relevant in the context of GKE. You can customize that filter.
The filters field value lists rsync filter rules that specify which files to transfer and which to skip; excluded paths are not part of the container image. Prefixing a path with a minus sign marks it for exclusion. The list is processed in the order of the lines in the YAML, and exclusions and inclusions are evaluated accordingly.
For more information, see the INCLUDE/EXCLUDE PATTERN RULES section of the rsync manpage.
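A simplified model of the ordered, first-match-wins evaluation described above can be sketched as follows (this is an assumption-laden approximation using shell-style globbing; real rsync pattern matching has additional rules, so consult the manpage for exact behavior):

```python
import fnmatch

# Sketch: rules are checked in order and the first matching rule decides.
# A rule like "- /tmp/*" excludes; "+ pattern" would include.
def is_excluded(path: str, rules: list[str]) -> bool:
    for rule in rules:
        action, pattern = rule[0], rule[2:]
        # Match the full path, or just the basename for unanchored patterns.
        if fnmatch.fnmatch(path, pattern) or fnmatch.fnmatch(path.split("/")[-1], pattern):
            return action == "-"
    return False  # no rule matched: the path is transferred

rules = ["- *.swp", "- /tmp/*", "- /var/log/*.log*"]
print(is_excluded("/tmp/scratch", rules))  # True: matches "- /tmp/*"
print(is_excluded("/etc/hosts", rules))    # False: no rule matches
```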
Files that are too large to include in the Docker image are listed in the YAML file: files larger than 1 GB are flagged for your consideration. Docker images larger than 15 GB are at risk of failure during migration.
You can edit the YAML list to add or remove paths. See an example YAML below, which includes example exclusions, as well as notifications for large and sparse files. Follow the inline guidance to either:
- Exclude the detected folders by uncommenting them and placing them under the global filters section.
- Move the detected folders to a persistent volume by uncommenting them and placing them under the data folder section.
You may also consider excluding or moving the detected sparse files in the same manner.
global:
# Files and directories to exclude from the migration, in rsync format.
filters:
- "- *.swp"
- "- /etc/fstab"
- "- /boot/"
- "- /tmp/*"
- "- /var/log/*.log*"
- "- /var/log/*/*.log*"
- "- /var/cache/*"
## The data folders below are too large to be included in the docker image.
## Consider uncommenting and placing them under either the global filters
## or the data folder section.
# - '- /a' # (1.8GB, last access 2022-02-02 10:50:30, last modified 2020-02-02 10:50:30)
# - '- /a/c' # (1.8GB, last access 2022-02-02 10:50:30, last modified 2020-02-02 10:50:30)
## Sparse files will fail the run of a docker image. Consider excluding the below
## detected files and any other sparse files by uncommenting and placing them in
## the global filters section, or export them to a persistent volume by specifying
## them in the data folder section.
# - '- /a/b' # (1.8GB, last access 2022-02-02 10:50:30, last modified 2020-02-02 10:50:30)
# - '- /a/d' # (1.8GB, last access 2022-02-02 10:50:30, last modified 2020-02-02 10:50:30)
Set the name of the container image
The image field value defines the names and locations of two images created from a migrated VM. You can change these values if you prefer to use other names.
During migration, Migrate to Containers copies files and directories representing your migrating workload to (by default) Container Registry for use during migration. The migration process adapts the extracted workload to an image runnable on GKE.
Migrate to Containers preserves files and directories from the original VM (at the base path) in the registry. This image functions as a non-runnable base layer that includes the extracted workload files; it is combined with the Migrate to Containers runtime software layer to build an executable container image.
The use of separate layers simplifies later updates to the container image, because you can update the base layer or the Migrate to Containers software layer independently, as needed.
The base image isn't runnable, but it makes it possible for Migrate to Containers to rebuild the container from the original content when you upgrade Migrate to Containers.
The base and name field values specify images created in the registry:
- base -- Name of the image created from the VM files and directories copied from the source platform. This image is not runnable on GKE because it hasn't been adapted for deployment there.
- name -- Name of the runnable image used for the container. This image contains both the content from the source VM and the Migrate to Containers runtime, which makes it runnable.
image:
  # Review and set the name for extracted non-runnable base image,
  # if an image tag is not specified, a random tag is auto-generated when the image is built.
  base: "centos-mini-non-runnable-base"
  # Review and set the name for runnable container image,
  # if an image tag is not specified, a random tag is auto-generated when the image is built.
  name: "centos-mini"
By default, a tag corresponding to the timestamp of the migration is automatically applied to these values. This tag is in the form:
MM-DD-YYYY--hh:mm:ss
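As a hedged illustration, the documented tag format can be reproduced with a strftime pattern (the mapping to strftime directives is an assumption based on the format string above):

```python
from datetime import datetime

# Render a timestamp in the documented default tag form MM-DD-YYYY--hh:mm:ss.
def default_tag(now: datetime) -> str:
    return now.strftime("%m-%d-%Y--%H:%M:%S")

print(default_tag(datetime(2022, 2, 2, 10, 50, 30)))  # 02-02-2022--10:50:30
```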
To apply your own tag, overriding the default tag, edit the CRD and add it as shown below:
image:
  # Review and set the name for extracted non-runnable base image,
  # if an image tag is not specified, a random tag is auto-generated when the image is built.
  base: "centos-mini-non-runnable-base:tag"
  # Review and set the name for runnable container image,
  # if an image tag is not specified, a random tag is auto-generated when the image is built.
  name: "centos-mini:tag"
Customize the services list
By default, Migrate to Containers disables unneeded services on a VM when you migrate it to a container. These services can sometimes cause issues with the migrated container, or are not needed in a container context.
Along with the services automatically disabled by Migrate to Containers, you can optionally disable other services:
- Migrate to Containers automatically discovers services that you can optionally disable and lists them in the migration plan. These services, such as ssh or a web server, might not be required in your migrated workload, but it is up to you to make that decision. If necessary, edit the migration plan to disable these services.
- Migrate to Containers does not list all services running on the source VM. For example, it omits operating-system related services. You can optionally edit the migration plan to add your own list of services to disable in the migrated container.
The systemServices field specifies the list of services discovered by Migrate to Containers.
For example:
systemServices:
- enabled: true|false
  name: service-name
  probed: true|false
- enabled: true|false
  name: service-name
  probed: true|false
...
To disable a service, set enabled to false.
Migrate to Containers does not list all services running on the source VM, such as operating-system related services. You can also add additional services to the list.
For example, to disable service2 and the cron service:
systemServices:
- enabled: true
  name: service1
  probed: false
- enabled: false
  name: service2
  probed: false
- enabled: false
  name: cron
  probed: false
When you execute a migration to generate the migration artifacts, Migrate to Containers creates the blocklist.yaml file. This file lists the container services to disable based on your settings in the migration plan.
For example:
service_list:
- name: disabled-services
  services:
  # Because you used systemServices above to disable service2 and cron:
  - service2
  - cron
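The relationship between the migration plan and the generated blocklist can be sketched as follows (an illustrative reconstruction, not the actual Migrate to Containers implementation: every service with enabled set to false ends up in the disabled-services list):

```python
# Derive the disabled-services list from systemServices entries.
def disabled_services(system_services: list[dict]) -> list[str]:
    return [s["name"] for s in system_services if not s["enabled"]]

services = [
    {"name": "service1", "enabled": True, "probed": False},
    {"name": "service2", "enabled": False, "probed": False},
    {"name": "cron", "enabled": False, "probed": False},
]
print(disabled_services(services))  # ['service2', 'cron']
```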
To later modify the list of disabled services:
- Edit the list of services in the migration plan.
- Execute the migration to regenerate the migration artifacts, including an updated blocklist.yaml file, deployment_spec.yaml file, and Dockerfile.
Alternatively, after you execute a migration
to generate the migration artifacts, you can edit the blocklist.yaml
file directly,
and then rebuild and push the container image yourself. For example:
- Update the blocklist.yaml file.
- Rebuild and push the container image.
The way you rebuild and push the container image depends on your build environment. You can use:
- gcloud to rebuild the image and push it to the Container Registry, as described at Quickstart: Build.
- docker build as described at Build and run your image.
After you rebuild and push the new image, open the deployment_spec.yaml file in an editor to update the image location:
spec:
  containers:
  - image: new-image-location
For example, new-image-location could be my-new-image:v1.0 if you used gcloud to rebuild the image and push it to the Container Registry.
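If you prefer to script the update rather than edit the file by hand, a minimal sketch (not an official tool; it does plain text replacement rather than YAML-aware editing) might look like:

```python
# Replace the image reference value on the matching "image:" line
# of a deployment spec, without requiring a YAML library.
def set_image(spec_text: str, old: str, new: str) -> str:
    return spec_text.replace(f"image: {old}", f"image: {new}")

spec = "spec:\n  containers:\n  - image: old-image:v0.9\n    name: app\n"
print(set_image(spec, "old-image:v0.9", "my-new-image:v1.0"))
```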
Customize service endpoints
The migration plan includes the endpoints array that defines the ports and protocols used to create the Kubernetes Services provided by the migrated workload.
You can add, edit, or delete endpoint definitions to customize the migration plan.
For each Service endpoint, add the following definition to the migration plan:
endpoints:
- port: PORT_NUMBER
  protocol: PORT_PROTOCOL
  name: PORT_NAME
Where:
- PORT_NUMBER specifies the container port number to which requests to the service are routed.
- PORT_PROTOCOL specifies the port protocol, such as HTTP, HTTPS, or TCP. See Supported protocols for the complete list of protocols.
- PORT_NAME specifies the name used to access the Service endpoint. Migrate to Containers generates a unique PORT_NAME for each generated endpoint definition.
For example, Migrate to Containers detects the following endpoints:
endpoints:
- port: 80
  protocol: HTTP
  name: backend-server-nginx
- port: 6379
  protocol: TCP
  name: backend-server-redis
To set the value of the name
property, Migrate to Containers combines the source VM name,
backend-server
in this example, with the program name of the Service.
The generated names are compatible with Kubernetes naming conventions, and are unique
within the migration plan. For example, the migration plan above creates a Service
that targets Nginx on port 80 over HTTP.
For any duplicate names, Migrate to Containers appends a counter suffix. For example,
if Nginx is associated with two ports, it adds the -2
suffix to the name
in the second definition:
endpoints:
- port: 80
  protocol: HTTP
  name: backend-server-nginx
- port: 8080
  protocol: HTTPS
  name: backend-server-nginx-2
- port: 6379
  protocol: TCP
  name: backend-server-redis
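The naming scheme described above can be modeled as follows (an assumed reconstruction of the documented behavior, for illustration: combine the VM name with the program name and append a counter suffix starting at -2 for duplicates):

```python
# Generate unique endpoint names from a VM name and its programs.
def endpoint_names(vm_name: str, programs: list[str]) -> list[str]:
    counts: dict[str, int] = {}
    names = []
    for program in programs:
        base = f"{vm_name}-{program}"
        counts[base] = counts.get(base, 0) + 1
        names.append(base if counts[base] == 1 else f"{base}-{counts[base]}")
    return names

print(endpoint_names("backend-server", ["nginx", "nginx", "redis"]))
# ['backend-server-nginx', 'backend-server-nginx-2', 'backend-server-redis']
```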
When you execute a migration to generate the migration artifacts, Migrate to Containers
creates a Service
definition in the deployment_spec.yaml
file for each endpoint.
For example, shown below is a Service
definition in the deployment_spec.yaml
file:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: backend-server-nginx
spec:
  ports:
  - port: 80
    protocol: HTTP
    targetPort: 80
  selector:
    app: backend-server
status:
  loadBalancer: {}
Customize NFS mounts
Migrate to Containers includes NFS mounts in the generated migration plan.
This information is collected from the fstab
file and written to the
nfsMounts
array in the migration plan. You can add or edit
NFS mount point definitions to customize the migration plan.
When generating the migration plan, Migrate to Containers:
- Ignores NFS mounts for /sys and /dev.
- Ignores NFS mounts with a type other than nfs or nfs4.
Each NFS mount in the migration plan includes the server's IP address and local mount path in the form:
nfsMounts:
- mountPoint: MOUNT_POINT
  exportedDirectory: DIR_NAME
  nfsServer: IP
  mountOptions:
  - OPTION_1
  - OPTION_2
  enabled: false|true
Where:
- MOUNT_POINT specifies the mount point obtained from fstab.
- DIR_NAME specifies the name of the shared directory.
- IP specifies the IP address of the server hosting the mount point.
- OPTION_N specifies any options extracted from the fstab entry for the mount point.
For example, for the following entry in fstab:
<file system>       <mount point> <type> <options> <dump> <pass>
10.49.232.26:/vol1  /mnt/test     nfs    rw,hard   0      0
Migrate to Containers generates the following entry in the migration plan:
nfsMounts:
- mountPoint: /mnt/test
  exportedDirectory: /vol1
  nfsServer: 10.49.232.26
  mountOptions:
  - rw
  - hard
  enabled: false
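The mapping from an fstab line to an nfsMounts entry can be sketched as follows (an illustrative reconstruction, not the actual tool: server:exported-dir, mount point, type, and comma-separated options are split out, and non-NFS types are ignored as described above):

```python
# Convert one fstab line into an nfsMounts-style dict, or None if ignored.
def fstab_to_nfs_mount(line: str):
    fs, mount_point, fs_type, options = line.split()[:4]
    if fs_type not in ("nfs", "nfs4"):
        return None  # ignored, per the rules above
    server, exported_dir = fs.split(":", 1)
    return {
        "mountPoint": mount_point,
        "exportedDirectory": exported_dir,
        "nfsServer": server,
        "mountOptions": options.split(","),
        "enabled": False,
    }

print(fstab_to_nfs_mount("10.49.232.26:/vol1 /mnt/test nfs rw,hard 0 0"))
```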
To configure Migrate to Containers to process entries in the nfsMounts
array,
set enabled
to true
for the mountPoint
entry. You can enable one, some,
or all mountPoints
entries, edit the entries, or add your own entries.
When you execute a migration to generate the migration artifacts, Migrate to Containers
creates a volumes and volumeMounts
definition and a PersistentVolume and PersistentVolumeClaim
definition in the deployment_spec.yaml
file for each enabled NFS mount.
For example, shown below is a volumeMounts
definition in the deployment_spec.yaml
file:
spec:
  containers:
  - image: gcr.io/myimage-1/image-name
    name: some-app
    volumeMounts:
    - mountPath: /sys/fs/cgroup
      name: cgroups
    - mountPath: /mnt/test
      name: nfs-pv
Where the value of name
is a unique identifier generated by Migrate to Containers.
Shown below are example PersistentVolumeClaim and PersistentVolume definitions in the deployment_spec.yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
  storageClassName: ""
  volumeName: nfs-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  mountOptions:
  - rw
  - hard
  nfs:
    path: /vol1
    server: 10.49.232.26
Customize log data written to Cloud Logging
Typically a source VM writes information to one or more log files. As part of migrating the VM, you can configure the migrated workload to write that log information to Cloud Logging.
When you generate the migration plan, Migrate to Containers automatically searches
for log destination files on the source VM. Migrate to Containers then writes
information about those detected files to the logPaths
area of the migration plan:
deployment:
  ...
  logPaths:
  - appName: APP_NAME
    globs:
    - LOG_PATH
For example:
deployment:
  ...
  logPaths:
  - appName: tomcat
    globs:
    - /var/log/tomcat*/catalina.out
When you generate the migration artifacts, Migrate to Containers generates the logs.yaml file from the migration plan. This file contains the list of log files detected on the source VM. For example, from the logPaths definition above, logs.yaml contains:
logs:
  tomcat:
  - /var/log/tomcat*/catalina.out
In this example, when you deploy the migrated workload, log information written
to catalina.out
is automatically written to Cloud Logging.
Each entry appears on its own line in the log in the following form:
DATE TIME APP_NAME LOG_OUTPUT
The following example illustrates the form with an entry from Tomcat:
2019-09-22 12:43:08.681193976 +0000 UTC tomcat log-output
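The layout above can be reproduced with a small formatting helper (a sketch; the exact timestamp rendering, including nanosecond padding, is an assumption for illustration):

```python
from datetime import datetime, timezone

# Render a log line in the form: DATE TIME APP_NAME LOG_OUTPUT
def format_log_line(ts: datetime, app_name: str, output: str) -> str:
    # %f gives microseconds; pad with three zeros to mimic nanoseconds.
    return f"{ts:%Y-%m-%d %H:%M:%S.%f}000 +0000 UTC {app_name} {output}"

print(format_log_line(
    datetime(2019, 9, 22, 12, 43, 8, 681193, tzinfo=timezone.utc),
    "tomcat", "log-output"))
```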
To configure logging, you can either:
Edit the migration plan before you generate the migration artifacts to add, remove, or edit
logPaths
entries. When you generate the migration artifacts, these edits are reflected in thelogs.yaml
file.Edit
logs.yaml
after you generate the migration artifacts to add, remove, or editlogs
entries.
The advantage of editing the migration plan is that your edits are reflected in logs.yaml every time you generate the artifacts. If you edit logs.yaml directly and then regenerate the artifacts for any reason, you have to reapply your edits to logs.yaml.
Set Linux v2kServiceManager health probes
You can monitor the downtime and ready status of your managed containers by specifying probes in your Tomcat web server's migration plan. Health probe monitoring can help reduce the downtime of migrated containers and provide better monitoring.
Unknown health states can cause availability degradation, false-positive availability monitoring, and potential data loss. Without a health probe, the kubelet can only assume the health of a container and may send traffic to a container instance that is not ready, which can cause traffic loss. The kubelet may also fail to detect containers that are in a frozen state and will not restart them.
A health probe works by periodically running a small scripted check after the container starts. The check tests for the success conditions defined by the type of probe used, once every period; the period is defined in the migration plan by a periodSeconds field. You can also run or define these scripts manually.
To learn more about kubelet probes, see Configure Liveness, Readiness and Startup Probes in the Kubernetes documentation.
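The scheduling described above can be modeled in a few lines (a simplified sketch of kubelet behavior; real probes also involve timeouts and failure thresholds):

```python
# Model when an exec probe fires: wait initialDelaySeconds after the
# container starts, then run the probe command every periodSeconds.
def probe_times(initial_delay_seconds: int, period_seconds: int, horizon: int) -> list[int]:
    return list(range(initial_delay_seconds, horizon + 1, period_seconds))

# With initialDelaySeconds: 60 and periodSeconds: 5, the probe fires at
# t=60, 65, 70, ... seconds after container start.
print(probe_times(60, 5, 75))  # [60, 65, 70, 75]
```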
There are two types of probes available to configure. Both are defined in the probe-v1-core reference and share the same functionality as the corresponding fields of container-v1-core.
- Liveness probe - Liveness probes are used to know when to restart a container.
- Readiness probe - Readiness probes are used to know when a container is ready to start accepting traffic. To start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. A readiness probe may act similarly to a liveness probe, but a readiness probe in the specifications indicates that a Pod will start without receiving any traffic and only start receiving traffic after the probe succeeds.
After discovery, the probe configuration is added to the migration plan. The probes can be used in their default configuration as shown below. To disable probes, remove the probes section from the YAML.
v2kServiceManager: true
deployment:
probes:
livenessProbe:
exec:
command:
- gamma
- /probe
readinessProbe:
exec:
command:
- gamma
- /probe
initialDelaySeconds: 60
periodSeconds: 5
image:
# Disable system services that do not need to be executed at the migrated workload containers.
# Enable the 'probed' property to include system services in the container health checks.
systemServices:
- enabled: true
name: apache2
probed: true
- enabled: true
name: atd
probed: false
By default, all service probing is disabled. You must define which subset of services will be monitored.
There are four pre-defined ways to check a container using a probe. Each probe must define exactly one of these four mechanisms:
- exec - Executes a specified command inside the container. Execution is considered successful if the exit status code is 0.
- grpc - Performs a remote procedure call using gRPC. gRPC probes are an alpha feature.
- httpGet - Performs an HTTP GET request against the Pod's IP address on a specified port and path. The request is considered successful if the status code is greater than or equal to 200 and less than 400.
- tcpSocket - Performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open.
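For example, to switch a probe to the httpGet mechanism, you could replace its exec block with something like the following sketch (the path and port shown are placeholders you would adapt to your workload):

```yaml
readinessProbe:
  httpGet:
    path: /healthz   # placeholder health endpoint
    port: 8080       # placeholder container port
  initialDelaySeconds: 60
  periodSeconds: 5
```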
By default, a migration plan enables the exec
probing method. Use manual
configuration for your migration plan to enable another method.
To add external dependencies for the readiness probe, while using the default liveness probe, define an exec readiness probe and a script that implements the logic.
What's next
- Learn how to execute the migration.