Migration process stuck during a Compute Engine migration

During Compute Engine to Google Kubernetes Engine migrations, Migrate to Containers may fail to recognize the UUID of the source VM's boot disk. You can specify it manually:

  1. View the logs for the pod by using kubectl or Stackdriver.

  2. If you see the message [hcrunner] - Failed to find boot partition, continue with the following steps.

  3. Find the UUID for the boot disk printed in one of the messages, which will be a string of hexadecimal values. In the example below, the UUID is e823158e-f290-4f91-9c3d-6f33367ae0da.

    [util] - SHELL OUTPUT: {"name": "/dev/sdb1", "partflags": null, "parttype":
    "0x83", "uuid": "e823158e-f290-4f91-9c3d-6f33367ae0da",
    "fstype": "ext4"}
    
  4. Delete the existing workload using its YAML file:

    kubectl delete -f
    
  5. Open the YAML file in a text editor and find the section named env.

  6. Add the following environment variable, setting value to the UUID you found in step 3:

        - name: "HC_BOOTDEVICE_UUID"
          value: ""

  7. Recreate the workload from its updated YAML file:

    kubectl apply -f
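
  After steps 5 and 6, the container spec should look something like the following. The surrounding field names and the container name are assumptions based on a typical Kubernetes workload spec, and the UUID shown is the example value from step 3:

```yaml
spec:
  containers:
    - name: migrated-workload   # hypothetical container name
      env:
        - name: "HC_BOOTDEVICE_UUID"
          value: "e823158e-f290-4f91-9c3d-6f33367ae0da"
```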
    
  • If you see the message touch: cannot touch '/vlsdata/etc/fstab': No such file or directory, check the following:

    • Your CSI driver workloads have a status of OK in the console.
    • Your workload is in the same cluster as your Migrate to Containers deployment.
  • If you see one of the following messages, delete the workload's failing PersistentVolumeClaim and recreate it.

    • hcutil.Error: Failed mount -o rw None /vlsdata (32) (Output:mount: /vlsdata: special device None does not exist.)
    • [hcrunner] - [Errno 30] Read-only file system: '/vlsdata/rootdir/etc/dhcp/dhclient-up-hooks'
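
The UUID lookup in step 3 can also be scripted against a saved copy of the pod log. The following is a minimal sketch, assuming a log line in the SHELL OUTPUT format shown above; the sample line is hardcoded here for illustration, where in practice you would feed in real log output:

```shell
# Sample log line in the format shown in step 3 (hardcoded for illustration;
# in practice, pipe in saved pod log output instead).
line='[util] - SHELL OUTPUT: {"name": "/dev/sdb1", "partflags": null, "parttype": "0x83", "uuid": "e823158e-f290-4f91-9c3d-6f33367ae0da", "fstype": "ext4"}'

# Extract the value of the "uuid" field with a sed back-reference.
uuid=$(printf '%s\n' "$line" | sed -n 's/.*"uuid": "\([0-9a-f-]*\)".*/\1/p')
echo "$uuid"
```

The extracted value is what you paste into the HC_BOOTDEVICE_UUID environment variable in step 6.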