- CLUSTER_KUBECONFIG: the path of the cluster kubeconfig file (admin or user)

- ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file

- USER_CLUSTER_NAME: the name of the user cluster

In the output, if the `datastore` field is empty and the `storagePolicyName` field is non-empty, the cluster is using SPBM.
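If you want to script this check rather than read the JSON by eye, the same two fields can be tested with jq. The following is a minimal sketch based on the `onpremadmincluster` query used for this verification; it assumes jq is installed and that CLUSTER_KUBECONFIG is replaced with a real path:

```
# Minimal sketch: report whether the cluster appears to be using SPBM.
# Assumes the onpremadmincluster resource exists in kube-system and jq is installed.
SPEC=$(kubectl --kubeconfig CLUSTER_KUBECONFIG get onpremadmincluster \
    --namespace kube-system -ojson |
  jq '.items[0].spec.vCenter | {datastore: .datastore, storagePolicyName: .storagePolicyName}')

DATASTORE=$(echo "$SPEC" | jq -r '.datastore // ""')
POLICY=$(echo "$SPEC" | jq -r '.storagePolicyName // ""')

if [ -z "$DATASTORE" ] && [ -n "$POLICY" ]; then
  echo "SPBM in use (storage policy: $POLICY)"
else
  echo "Not using SPBM; datastore is set to: $DATASTORE"
fi
```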
The cluster must not use the vSphere in-tree volume plugin.

If your cluster was upgraded from an earlier version of Google Distributed Cloud, it might have PV/PVCs that were provisioned by the [vSphere in-tree volume plugin](/kubernetes-engine/distributed-cloud/vmware/docs/concepts/storage#kubernetes_in-tree_volume_plugins). These volumes might be in use by a control-plane node of a kubeception user cluster or by a workload that you created on a worker node.
List all StorageClasses and see which provisioners they use:

```
kubectl --kubeconfig CLUSTER_KUBECONFIG get storageclass
```

In the output, if the `PROVISIONER` column is `kubernetes.io/vsphere-volume`, then PVCs created with this StorageClass use the vSphere in-tree volume plugin. For the StatefulSets that use these PV/PVCs, [migrate them to the vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-statefulset-csi-migration-tool).
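If the cluster has many StorageClasses, it can be convenient to filter them by provisioner and to list the PVCs that still reference an in-tree class. The following is a minimal sketch, assuming jq is installed; the provisioner name `kubernetes.io/vsphere-volume` is the one described above, and CLUSTER_KUBECONFIG is a placeholder for your kubeconfig path:

```
# Minimal sketch: find StorageClasses backed by the in-tree vSphere plugin
# and list the PVCs that still reference them. Assumes jq is installed.
KUBECONFIG_PATH=CLUSTER_KUBECONFIG   # replace with the real kubeconfig path

# StorageClasses whose provisioner is the in-tree vSphere plugin.
IN_TREE_CLASSES=$(kubectl --kubeconfig "$KUBECONFIG_PATH" get storageclass -ojson |
  jq -r '.items[] | select(.provisioner == "kubernetes.io/vsphere-volume") | .metadata.name')

# PVCs in any namespace that use one of those StorageClasses.
for SC in $IN_TREE_CLASSES; do
  echo "PVCs using in-tree StorageClass: $SC"
  kubectl --kubeconfig "$KUBECONFIG_PATH" get pvc --all-namespaces -ojson |
    jq -r --arg sc "$SC" '.items[] | select(.spec.storageClassName == $sc) |
      "\(.metadata.namespace)/\(.metadata.name)"'
done
```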
Perform the storage migration

Google Distributed Cloud supports two categories of storage migration:

- Storage vMotion for VMs, which moves VM storage, including the attached vSphere CNS volumes used by Pods running on a node and the VMDKs used by these VM CNS volumes attached to the nodes

- CNS volume relocation, which moves specified vSphere CNS volumes to a compatible datastore without performing storage vMotion for VMs
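When deciding between the two categories, it can help to see which PersistentVolumes are vSphere CNS volumes in the first place. The following is a minimal sketch, assuming jq is installed and that the volumes were provisioned by the vSphere CSI driver (driver name `csi.vsphere.vmware.com`); in typical vSphere CSI deployments the `volumeHandle` field is the CNS volume ID:

```
# Minimal sketch: list PersistentVolumes provisioned by the vSphere CSI driver.
# Assumes jq is installed; csi.vsphere.vmware.com is the usual driver name.
kubectl --kubeconfig CLUSTER_KUBECONFIG get pv -ojson |
  jq -r '.items[]
    | select(.spec.csi != null and .spec.csi.driver == "csi.vsphere.vmware.com")
    | {name: .metadata.name, claim: .spec.claimRef.name, volumeHandle: .spec.csi.volumeHandle}'
```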
Perform storage vMotion for VMs

Migration involves steps that you do in your vSphere environment and commands that you run on your admin workstation:
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-08-31(UTC)"],[],[],null,["This document shows how to migrate disks from one\n[vSphere datastore](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-D5AB2BAD-C69A-4B8D-B468-25D86B8D39CE.html)\nto another vSphere datastore with\n[Storage Policy Based Management (SPBM)](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy).\n\n1.29: Generally available \n\n1.28: Preview \n\n1.16: Not available\n\nYou can migrate the following kinds of storage:\n\n- Storage for system components managed by Google Distributed Cloud, including:\n\n - Data disks (VMDK files) used by the control-plane nodes of admin clusters\n and Controlplane V2 user clusters\n\n - Boot disks (VMDK files) used by all admin cluster and user cluster nodes\n\n - vSphere Volumes represented by PV/PVCs in the admin cluster and used by the\n control-plane components of\n [kubeception](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/plan-ip-addresses-kubeception)\n user clusters\n\n- Storage for workloads that you deploy on user cluster worker nodes with PV/PVCs\n provisioned by the in-tree vSphere volume plugin or the\n [vSphere CSI driver](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/using-vsphere-csi-driver)\n\nPrerequisites for an admin cluster\n\n1. The admin cluster must have an HA control plane. If your admin cluster has a\n non-HA control plane,\n [migrate to HA](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/migrate-admin-cluster-ha)\n before you continue.\n\n2. Verify that the admin cluster has an HA control plane:\n\n ```\n kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get nodes\n ```\n\n Replace \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e with the path of the admin cluster\n kubeconfig file.\n\n In the output, make sure that you see three control-plane nodes. For example: \n\n ```\n admin-cp-1 Ready control-plane,master ...\n admin-cp-2 Ready control-plane,master ...\n admin-cp-3 Ready control-plane,master ...\n ```\n\nPrerequisites for all clusters (admin and user)\n\n1. The cluster must have node auto repair disabled. If node auto repair is\n enabled,\n [disable node auto repair](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/node-auto-repair#disabling_node_repair_and_health_checking_for_a_user_cluster).\n\n2. The cluster must use\n [Storage Policy Based Management (SPBM)](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-720298C6-ED3A-4E80-87E8-076FFF02655A.html).\n If your cluster doesn't use SPBM,\n [create a storage policy](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/configure-storage-policy)\n before you continue.\n\n3. 
1. In your vSphere environment, add your target datastores to your storage policy.

2. In your vSphere environment, migrate cluster VMs that use the old datastore to the new datastore. For instructions, see [Migrate a Virtual Machine to a New Compute Resource and Storage](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-23E67822-4559-4870-982A-BCE2BB88D491.html).
3. On your admin workstation, verify that the VMs have been successfully migrated to the new datastore.

   Get the Machine objects in the cluster:

   ```
   kubectl --kubeconfig CLUSTER_KUBECONFIG get machines --output yaml
   ```

   In the output, under `status.disks`, you can see the disks attached to the VMs. For example:

   ```
   status:
     addresses:
     - address: 172.16.20.2
       type: ExternalIP
     disks:
     - bootdisk: true
       datastore: pf-ds06
       filepath: me-xvz2ccv28bf9wdbx-2/me-xvz2ccv28bf9wdbx-2.vmdk
       uuid: 6000C29d-8edb-e742-babc-9c124013ba54
     - datastore: pf-ds06
       filepath: anthos/gke-admin-nc4rk/me/ci-bluecwang-head-2-data.vmdk
       uuid: 6000C29e-cb12-8ffd-1aed-27f0438bb9d9
   ```

   Verify that all the disks of all the machines in the cluster have been migrated to the target datastore.

4. On your admin workstation, run [`gkectl diagnose`](/kubernetes-engine/distributed-cloud/vmware/docs/troubleshooting/diagnose) to verify that the cluster is healthy.

Call CNS Relocation APIs for moving CNS volumes

If you only want to move CNS volumes provisioned by the vSphere CSI driver, you can follow the instructions in [Migrating Container Volumes in vSphere](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-536DEB75-84F5-48DC-A425-3BF703B8F54E.html). This might be simpler if you only have CNS volumes in the old datastore.

Update your storage policy if needed

In your vSphere environment, update the storage policy to exclude the old datastores. Otherwise, new volumes and re-created VMs might get assigned to an old datastore.
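As a final sanity check after the migration and any policy update, you might confirm that no Machine disks still reference the old datastore. The following is a minimal sketch, assuming jq is installed; OLD_DATASTORE is a placeholder for the datastore you migrated away from, and `status.disks[].datastore` is the field shown in the Machine output above:

```
# Minimal sketch: list any Machine disks that still live on the old datastore.
# OLD_DATASTORE is a placeholder; replace it with the datastore you migrated from.
kubectl --kubeconfig CLUSTER_KUBECONFIG get machines -ojson |
  jq -r --arg ds "OLD_DATASTORE" '.items[]
    | .metadata.name as $machine
    | (.status.disks // [])[]
    | select(.datastore == $ds)
    | "\($machine): \(.filepath)"'
```

Empty output means no disks remain on the old datastore.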