Migration process stuck during a Compute Engine migration
During Compute Engine to Google Kubernetes Engine migrations, Migrate to Containers
may fail to recognize the UUID of the source VM's disk. You can add it manually:
1.  Load the logs for the pod using kubectl or Stackdriver (example kubectl
    commands follow this procedure).

2.  If you see the message `[hcrunner] - Failed to find boot partition`, continue
    with the following steps.

    **Note:** `hcrunner` is only used in the Linux flow. If you're migrating a
    Windows workload and there is mention of `hcrunner` in the logs, then you
    need to set the OS type as Windows when creating the migration.

3.  Find the UUID for the boot disk printed in one of the messages, which will be
    a string of hexadecimal values. In the example below, the UUID is
    `e823158e-f290-4f91-9c3d-6f33367ae0da`.

        [util] - SHELL OUTPUT: {"name": "/dev/sdb1", "partflags": null, "parttype":
        "0x83", "uuid": "e823158e-f290-4f91-9c3d-6f33367ae0da", "fstype": "ext4"}

4.  Delete the existing workload using its YAML file:

        kubectl delete -f

5.  Open the YAML file in a text editor and find the section named `env`.

6.  Add the following (a filled-in example follows this procedure):

        - name: "HC_BOOTDEVICE_UUID"
          value: ""

If the migration reports a different error, check the following messages:

- If you see the message `touch: cannot touch '/vlsdata/etc/fstab': No such file or directory`, check the following:

  - Your CSI driver workloads have a status of OK in the console.
  - Your workload is in the same cluster as your Migrate to Containers deployment.

- If you see one of the following messages, delete the workload's failing `PersistentVolumeClaim` and recreate it (example commands follow this list):

  - `hcutil.Error: Failed mount -o rw None /vlsdata (32) (Output:mount: /vlsdata: special device None does not exist.`
  - `[hcrunner] - [Errno 30] Read-only file system: '/vlsdata/rootdir/etc/dhcp/dhclient-up-hooks`