Migrate for Anthos containerizes your source VMs as StatefulSet Pods on Google Kubernetes Engine. It runs the migrated workloads as system containers, with the source VM disks transformed into Google Kubernetes Engine PersistentVolumes.
Copy the code examples provided here into configuration files of your own, replace the placeholders, and then apply the configuration to your own GKE cluster.
The YAML you'll use here includes fields specific to Migrate for Anthos. For more information about them, see YAML reference.
Before migrating VMs to GKE, be sure you've set up or verified the prerequisites described in Prerequisites for migrating VMware VMs to GKE.
You'll also need the following values to complete the steps in this topic.
A unique identifier for your VM on VMware. You can use one of the following values.
The VM name. If you're confident that each VM name is unique across your VMware deployment, the simple VM name will work. If VM names might be duplicated, use the VM ID as described below.
You can get the VM name from the vSphere web client, as shown in the following image.
The VM ID from vSphere (also called a MoRef). This is visible from the URL of the vSphere web client when selecting the VM.
You can also find the MoRef by using PowerCLI.
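For example, the MoRef can be looked up with a short PowerCLI command. This is a sketch; my-source-vm is a hypothetical VM name, and it assumes you have already opened a vCenter session with Connect-VIServer.

```powershell
# Sketch: look up the MoRef for a VM. Replace my-source-vm with your VM name.
# Assumes an existing vCenter session (Connect-VIServer).
Get-VM -Name "my-source-vm" | Select-Object Name, Id
# The Id has the form "VirtualMachine-vm-1234"; the "vm-1234" portion is the MoRef.
```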
The StorageClass name from your cluster's environment; for example, csi-disk-sc01. This was defined when you created your Migrate for Anthos deployment.
Verify that your VMware instance is available as a source for migration. You can do this using the StorageClass that was created when you created the cluster using Marketplace.
Run the following command:
kubectl describe storageclass storageclass-name
In the command's output, look in the Events list for warnings. If the connection is healthy, you'll see no events.
Configure storage for streaming data during migration
For more about portions of the YAML specific to Migrate for Anthos, be sure to see the YAML reference.
Create a YAML file and paste the following into it.
You can also configure logging to Stackdriver Logging here. For more, see Configuring logging to Stackdriver Logging.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: [PVC_NAME]
  annotations:
    # Replace vm-id with a unique identifier. See the prerequisites
    # earlier in this topic.
    anthos-migrate.gcr.io/vm-id: [VM_ID]
    anthos-migrate.gcr.io/vm-data-access-mode: "FullyCached"
    anthos-migrate.gcr.io/run-mode: "TestClone"
spec:
  accessModes:
    - ReadWriteOnce
  # Replace with your StorageClass name defined when adding Migrate for Anthos to your cluster.
  storageClassName: [STORAGE_CLASS_NAME]
  resources:
    requests:
      storage: 1Gi
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: [APPLICATION_NAME]
  namespace: default
spec:
  serviceName: [SERVICE_NAME]
  replicas: 1
  selector:
    matchLabels:
      app: [APPLICATION_NAME]
  template:
    metadata:
      labels:
        app: [APPLICATION_NAME]
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: streaming-disk
        # source-pvc needs to match the name of the PVC declared above.
        anthos-migrate.gcr.io/source-pvc: [PVC_NAME]
    spec:
      containers:
      - name: [APPLICATION_NAME]
        # The image for the Migrate for Anthos system container.
        image: anthos-migrate.gcr.io/v2k-run:v1.0.0
[VM_ID] is the VM's unique identifier. See the prerequisites earlier in this topic for more.
[STORAGE_CLASS_NAME] is the name from the environment configuration YAML you used when configuring your cluster.
[APPLICATION_NAME] is the name of your GKE workload.
If you would like to fully cache volumes on Google Cloud after starting them on GKE, change the value of the anthos-migrate.gcr.io/vm-data-access-mode annotation in the YAML above.
Save the file.
Apply the YAML to your cluster
kubectl apply -f deployment-yaml
Open the GKE Workloads page in the Google Cloud Console.
Check that your workload is running on the cluster.
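If you prefer the command line, the following sketch shows an equivalent check. It assumes my-app stands in for the [APPLICATION_NAME] you used in the YAML; these commands require a configured kubectl context for your cluster.

```shell
# Check the StatefulSet and its Pod from the CLI; replace my-app with
# the [APPLICATION_NAME] you used in the YAML.
kubectl get statefulset my-app
kubectl get pods -l app=my-app
# The Pod should eventually show STATUS "Running" and READY "1/1".
```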
Test your migrated VMs
Run tests that exercise the functionality on your VMs. The nature of your tests will depend on what your applications are designed to do. Your tests should verify that your applications work as they did before you migrated the VMs.
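As one illustration, if the migrated VM serves HTTP, an in-cluster smoke test might look like the following sketch. The Service name my-app-svc and port 80 are assumptions; substitute whatever your application actually exposes.

```shell
# Run a throwaway curl Pod inside the cluster and request the app's endpoint.
# my-app-svc is a hypothetical Service name; replace it with your own.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://my-app-svc:80/
# An HTTP 200 suggests the migrated application is responding as before.
```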
Once you've validated your migrated VMs with tests, you're ready to export your VM storage to a permanent location. Exporting storage enables your migrated VMs to run independently of their source VM disks.
Be sure to export storage before turning down or deleting your source VMs.
For more, see Exporting streaming PVs to permanent storage.
Execute bash commands on your container
You can access a container through a bash shell by using the kubectl exec command.
- Run kubectl describe pods to find the name of the Pod in your cluster that you want to connect to. In the following example, the command lists the suitecrm-0 Pod:
kubectl describe pods | grep Name
Name: suitecrm-0
- Execute shell commands using one of the following methods:
Use kubectl exec to open a bash command shell where you can execute commands.
kubectl exec -it pod-name -- /bin/bash
The following example gets a shell to the suitecrm-0 pod:
kubectl exec -it suitecrm-0 -- /bin/bash
Use kubectl exec to execute commands directly.
kubectl exec -it pod-name -- /bin/bash -c "command(s)"
The following example lists the root directory of the suitecrm-0 pod:
kubectl exec -it suitecrm-0 -- /bin/bash -c "ls /"
For more information, see the Kubernetes documentation.
Export your VM storage to a permanent location. For more, see Exporting streaming PVs to permanent storage.