[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Migration process stuck during a Compute Engine migration\n=========================================================\n\nDuring Compute Engine to Google Kubernetes Engine migrations, Migrate to Containers\nmay fail to recognize the UUID of the source VM's disk. You can add it manually:\n\n1. Load the logs for the pod using kubectl or Stackdriver.\n\n2. If you see the message `[hcrunner] - Failed to find boot partition`, continue\n with the following steps.\n\n | **Note:** `hcrunner` is only used in the Linux flow. If you're migrating a Windows workload and there is mention of `hcrunner` in the logs, then you need to set the OS type as Windows when creating the migration.\n3. Find the UUID for the boot disk printed in one of the messages, which will be\n a string of hexadecimal values. In the example below, the UUID is `e823158e-f290-4f91-9c3d-6f33367ae0da`.\n\n [util] - SHELL OUTPUT: {\"name\": \"/dev/sdb1\", \"partflags\": null, \"parttype\":\n \"0x83\", \"uuid\": \"\u003cstrong\u003ee823158e-f290-4f91-9c3d-6f33367ae0da\u003c/strong\u003e\",\n \"fstype\": \"ext4\"}\n\n4. Delete the existing workload using its YAML file:\n\n kubectl delete -f\n\n5. Open the YAML file in a text editor and find the section named `env`.\n\n6. Add the following:\n\n - name: \"HC_BOOTDEVICE_UUID\"\n value: \"\"\n\n- If you see the message `touch: cannot touch '/vlsdata/etc/fstab': No such file or directory`, check the following:\n\n - Your CSI driver workloads have a status of OK in the console.\n - Your workload is in the same cluster as your Migrate to Containers deployment.\n- If you see one of the following messages, delete the workload's failing `PersistentVolumeClaim` and recreate it.\n\n - `hcutil.Error: Failed mount -o rw None /vlsdata (32) (Output:mount: /vlsdata: special device None does not exist.`\n - `[hcrunner] - [Errno 30] Read-only file system: '/vlsdata/rootdir/etc/dhcp/dhclient-up-hooks`"]]