Device listed in /etc/fstab fails to mount
By default, the system parses `/etc/fstab` and mounts all of the listed devices to the required mount points. If a device is not recognized or cannot be mounted, the workload pod does not reach a ready state.
For example, consider a source Linux VM in Amazon EC2 that has an ephemeral disk where persistence is not guaranteed. These disks are not streamed to the target, causing the container to fail when mounting them.
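An ephemeral disk typically appears in the VM's `/etc/fstab` as an ordinary mount entry. The following is a hypothetical example; the device name and mount point vary by instance type and image:

```
/dev/nvme1n1  /mnt/ephemeral  ext4  defaults  0  2
```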
If this happens, you might see messages such as the following:

```
Unable to locate resolve [X] fstab entries: …
Error: Failed mount -o defaults /dev/mapper/mpathe-part /rootfs/mnt/ephemeral
```
You can work around this by either:

- Editing `/etc/fstab` on the Linux VM to remove the device mount entry.
- Setting the `HC_INIT_SKIP_MOUNT_FAILURES` environment variable to configure the system to skip mount failures and continue.
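For the first workaround, commenting out the offending entry (rather than deleting it) lets you restore it later. A minimal sketch, assuming the device path from the example error above (`/dev/mapper/mpathe-part`); adjust the pattern and backup path to match your VM:

```shell
# Back up the current fstab before changing it (backup path is illustrative).
sudo cp /etc/fstab /etc/fstab.bak

# Comment out the ephemeral-disk entry so the system no longer
# tries to mount it.
sudo sed -i 's|^/dev/mapper/mpathe-part|# &|' /etc/fstab
```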
To set the `HC_INIT_SKIP_MOUNT_FAILURES` environment variable:
Create a configmap in the migration namespace, `v2k-system`, on the migration processing cluster. For example, define the configmap in a file named `jobs-config.yaml`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jobs-config
  namespace: v2k-system
data:
  environment: |
    HC_INIT_SKIP_MOUNT_FAILURES = true
```
Apply the configmap to the cluster:

```shell
kubectl apply -f jobs-config.yaml
```
To view the configmap, use the following command:

```shell
kubectl describe configmaps -n v2k-system
```
Edit the migration plan to add the configmap. The migration plan is the YAML file that you generated when you created the migration. See Customizing a migration plan for more information on editing the migration plan.
In the migration plan, edit the `configs` section to add the configmap:

```yaml
configs:
  jobsConfig:
    name: jobs-config
```
Save your edits and then execute the migration as described in Executing a migration.