This tutorial demonstrates an end-to-end migration with Migrate for Anthos.
In completing this tutorial, you will:
- Migrate a two-tiered LAMP stack application, with both application and database VMs, from VMware to Google Kubernetes Engine.
- Improve security by making the database accessible from the application container only and not from outside the cluster.
- Replace SSH access with authenticated shell access through kubectl exec.
- See container system logs through automatic Stackdriver integration.
The structure of this tutorial follows the phases in a Migration journey.
In this tutorial, you'll migrate VMs on VMware to GKE. For more on that process, see Migrating VMware VMs to GKE.
Discovery and Planning
The application running on VMware is accessible using the URL
https://app.mydomain.local. It consists of two VMs: application and
database. SSH is enabled on both for console access.
Creating a landing zone
A landing zone refers to the area configured to receive your migrated workloads. It includes:
A Migrate for Anthos deployment with:
- A connection between Google Cloud and your corporate network (Cloud VPN or Cloud Interconnect)
- A Virtual Private Cloud
- A Migrate for Compute Engine environment including a Migrate for Compute Engine Manager and Cloud Extension
- A GKE cluster with Migrate for Anthos components
You will need to configure:
- Services to expose your applications on that cluster.
In this section, you will start with a configured Migrate for Anthos environment and create your application's services.
Before you complete this tutorial:
- Install Migrate for Compute Engine on Google Cloud and your VMware environment.
- Create a GKE cluster and deploy Migrate for Anthos to it.
Make sure that you have credentials for your GKE cluster.
gcloud container clusters get-credentials [CLUSTER_NAME] --zone=[ZONE]
For a regional cluster, pass --region=[REGION] instead of --zone=[ZONE].
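For example, for a hypothetical zonal cluster named migration-cluster in us-central1-a (both names here are illustrative):
kubectl get-credentials example:
gcloud container clusters get-credentials migration-cluster --zone=us-central1-a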
In this section, you will:
- Create a service for the application layer with an internal load balancer (ILB) and expose HTTPS on port TCP/443.
- Create a service for the database layer on the GKE cluster network.
- Create a Kubernetes Network Policy to allow network access to the database from the application's pod only.
Kubernetes configurations are defined in YAML files and applied from the
command line using kubectl apply.
The YAML below defines:
- A Service of type LoadBalancer for the application pod (suitecrm-app), which opens port 443 externally.
- A ClusterIP Service for the database (suitecrm-db), open on port 3306.
- A NetworkPolicy on suitecrm-db which allows connections only from suitecrm-app.
kind: Service
apiVersion: v1
metadata:
  name: suitecrm-app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: suitecrm-app
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: suitecrm-db
spec:
  type: ClusterIP
  selector:
    app: suitecrm-db
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: suitecrm-db-restricted-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: suitecrm-db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: suitecrm-app
    ports:
    - protocol: TCP
      port: 3306
To apply the services configuration, run:
kubectl apply -f 1-anthos-migrate-landing-zone.yaml
Kubernetes creates the landing zone components. To check the configuration of your services, run:
kubectl get services
kubectl get networkpolicies
In the output, you will see the services and network policy, similar to the example below:
kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes     ClusterIP      10.51.240.1     <none>        443/TCP         30m
suitecrm-app   LoadBalancer   10.51.243.234   10.130.0.39   443:31831/TCP   30m
suitecrm-db    ClusterIP      10.51.243.234   <none>        3306/TCP        30m
kubectl get networkpolicies
NAME                            POD-SELECTOR      AGE
suitecrm-db-restricted-access   app=suitecrm-db   30m
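As a quick spot check of the network policy, you can try reaching the database port from a throwaway pod that does not carry the app: suitecrm-app label; the connection should time out. This is a sketch only, and assumes network policy enforcement is enabled on your cluster and uses a busybox test pod:
# This pod lacks the app=suitecrm-app label, so the NetworkPolicy should block it.
kubectl run np-test --rm -it --image=busybox --restart=Never -- \
    nc -z -w 5 suitecrm-db 3306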
Migrating the application
In this section, you will migrate VMs from VMware and launch them on GKE. After completing this, your application and database pods will be connected via the services you created in the Landing Zone step.
Before editing the YAML configuration below, you will need:
- The VM ID of your VMware VMs. To find that, see Migrating VMware to GKE.
- The name of the Storage Class you set when you created your Migrate for Anthos deployment.
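If you don't have the Storage Class name handy, you can list the classes defined on your cluster:
kubectl get storageclass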
Configuring your migration
In this section, you will create four Kubernetes resources for your migrated VMs:
- PersistentVolumeClaim objects to host storage from the migrated VMs.
- StatefulSet objects to host the application and database pods.
When you apply this configuration, Migrate for Anthos makes a Test Clone of the source VMs: it takes a snapshot of each VM and launches a container on GKE that streams its storage from that snapshot.
Edit the YAML file as noted below with your VM IDs and Storage Class names. For more information on Migrate for Anthos specific parameters, see the YAML reference.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-disk-suitecrm-app
  annotations:
    # Replace vm-id with your application's VM ID
    anthos-migrate.gcr.io/vm-id: "vm-1"
    anthos-migrate.gcr.io/vm-data-access-mode: "FullyCached"
    anthos-migrate.gcr.io/run-mode: "TestClone"
spec:
  accessModes:
  - ReadWriteOnce
  # Replace with your Storage Class name
  storageClassName: csi-disk-v2k-demo-env-sc
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-disk-suitecrm-db
  annotations:
    # Replace vm-id with your database's VM ID
    anthos-migrate.gcr.io/vm-id: "vm-2"
    anthos-migrate.gcr.io/vm-data-access-mode: "FullyCached"
    anthos-migrate.gcr.io/run-mode: "TestClone"
spec:
  accessModes:
  - ReadWriteOnce
  # Replace with your Storage Class name
  storageClassName: csi-disk-v2k-demo-env-sc
  resources:
    requests:
      storage: 1Gi
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: suitecrm-app
  namespace: default
spec:
  serviceName: "suitecrm-app-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-app
  template:
    metadata:
      labels:
        app: suitecrm-app
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: streaming-disk
        # source-pvc needs to match the name of the PVC declared above.
        anthos-migrate.gcr.io/source-pvc: csi-disk-suitecrm-app
    spec:
      containers:
      - name: suitecrm-app
        # The image for the Migrate for Anthos system container.
        image: anthos-migrate.gcr.io/v2k-run:v1.0.1
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: suitecrm-db
  namespace: default
spec:
  serviceName: "suitecrm-db-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-db
  template:
    metadata:
      labels:
        app: suitecrm-db
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: streaming-disk
        anthos-migrate.gcr.io/source-pvc: csi-disk-suitecrm-db
    spec:
      containers:
      - name: suitecrm-db
        image: anthos-migrate.gcr.io/v2k-run:v1.0.1
After editing the YAML, apply the configuration to your cluster.
kubectl apply -f 2-anthos-migrate-migrate-vms.yaml
You will get a notification that the StatefulSet objects for the application and database are created.
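You can watch the pods come up with kubectl; StatefulSet pods are named with an ordinal suffix, so expect names like suitecrm-app-0 and suitecrm-db-0:
kubectl get pods -w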
At this point, you can update your corporate DNS server to point
app.mydomain.local to the IP address of the application service ILB.
Retrieve the application service's external IP with kubectl get service.
kubectl get service suitecrm-app
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
suitecrm-app   LoadBalancer   10.51.243.234   10.130.0.39   443:31831/TCP   59m
When you update the configuration for your DNS with this IP address, clients
will be able to access it without reconfiguration.
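As an illustration only, the corresponding record in a BIND-style zone file for mydomain.local might look like the following, using the ILB address from the example output above:
; Illustrative A record pointing the application's hostname at the ILB
app    300    IN    A    10.130.0.39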
Executing bash commands on your container
You can access a container through a bash shell using the kubectl exec command.
- Run kubectl describe pods to find the name of the pod in your cluster that you want to connect to.
In the following example, the command lists the suitecrm-0 pod.
kubectl describe pods | grep Name
Name:  suitecrm-0
- Execute shell commands using one of the following methods:
Use kubectl exec to open a bash command shell where you can execute commands.
kubectl exec -it pod-name -- /bin/bash
The following example gets a shell to the suitecrm-0 pod:
kubectl exec -it suitecrm-0 -- /bin/bash
Use kubectl exec to execute commands directly.
kubectl exec -it pod-name -- /bin/bash -c "command(s)"
The following example lists the root directory of the suitecrm-0 pod:
kubectl exec -it suitecrm-0 -- /bin/bash -c "ls /"
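For example, assuming the application pod is named suitecrm-app-0 (following the StatefulSet naming convention) and that the migrated LAMP image includes nc, you could confirm that the database service is reachable from the application pod, which the network policy permits:
# Allowed by the NetworkPolicy because this pod carries the app=suitecrm-app label
kubectl exec -it suitecrm-app-0 -- /bin/bash -c "nc -z -w 5 suitecrm-db 3306 && echo reachable"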
For more information, see the Kubernetes documentation.
Viewing container logs
By default, Migrate for Anthos is configured to send logs from your containers to Stackdriver.
Once you enable Stackdriver, you can view workload logs in the Stackdriver user interface.
You can use Stackdriver to view logs for the following aspects of your migration:
- Logs written to stdout by processes launched by init. This is done by default.
- The content of /var/log/syslog.
- Optionally, application logs written to the file system.
For more on file system logs written to Stackdriver, see Configuring logging to Stackdriver Logging.
You can view system logs that are in Stackdriver from the GCP Console. To do so:
- Open GKE Workloads.
- Find your workload and click on its Name. The Deployment Details page appears.
- Find the row labeled Logs and click on Container logs.
This loads Stackdriver, showing logs for this workload only.
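You can also query the same logs from the command line. The following is a sketch with gcloud, assuming the application pod name from this tutorial; depending on your cluster's logging configuration, the resource type may be container rather than k8s_container:
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.pod_name="suitecrm-app-0"' \
    --limit=20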
Exporting storage
In this section, you will finalize the migration to GKE by moving your containers' streaming PVs to Compute Engine persistent disks using the Migrate for Anthos storage exporter.
Before performing an export, calculate the total size of the disks to be migrated with the source VMs. We recommend an additional margin of 15-25% to accommodate future growth. In this example, the application and database each have a 16 GB volume, so with a 25% margin you will configure the exported containers to use 20 GB volumes.
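To confirm the used and total sizes on the source volumes before choosing a target size, you can check from inside the running containers. This assumes df is available in the migrated images and that the pods follow the StatefulSet naming convention:
kubectl exec -it suitecrm-app-0 -- /bin/bash -c "df -h /"
kubectl exec -it suitecrm-db-0 -- /bin/bash -c "df -h /"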
Configure your YAML
The YAML below defines a new:
- StorageClass for zonal persistent SSD.
- PersistentVolumeClaim for both the application and database.
- ConfigMap to hold configuration for the storage exporter.
- Job to perform the storage export for the application and database.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-ssd-intree
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: none
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-pd-claim-suitecrm-app
spec:
  storageClassName: gce-pd-ssd-intree
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # Replace this with your target volume size
      storage: 20G
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-pd-claim-suitecrm-db
spec:
  storageClassName: gce-pd-ssd-intree
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # Replace this with your target volume size
      storage: 20G
---
# Storage Exporter configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter-sample-config
  namespace: default
data:
  config: |-
    appSpec:
      dataFilter:
      - "- *.swp"
      - "- /etc/fstab"
      - "- /boot/"
      - "- /tmp/"
---
# Storage Exporter Jobs
apiVersion: batch/v1
kind: Job
metadata:
  name: exporter-sample-suitecrm-app
  namespace: default
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        anthos-migrate.gcr.io/action: export
        anthos-migrate.gcr.io/source-type: streaming-disk
        # source-pvc is the PVC of your existing disk
        anthos-migrate.gcr.io/source-pvc: csi-disk-suitecrm-app
        # target-pvc is populated by running this job
        anthos-migrate.gcr.io/target-pvc: gce-pd-claim-suitecrm-app
        anthos-migrate.gcr.io/config: exporter-sample-config
    spec:
      restartPolicy: OnFailure
      containers:
      - name: exporter-sample
        image: anthos-migrate.gcr.io/v2k-export:v1.0.1
---
apiVersion: batch/v1
kind: Job
metadata:
  name: exporter-sample-suitecrm-db
  namespace: default
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        anthos-migrate.gcr.io/action: export
        anthos-migrate.gcr.io/source-type: streaming-disk
        anthos-migrate.gcr.io/source-pvc: csi-disk-suitecrm-db
        anthos-migrate.gcr.io/target-pvc: gce-pd-claim-suitecrm-db
        anthos-migrate.gcr.io/config: exporter-sample-config
    spec:
      restartPolicy: OnFailure
      containers:
      - name: exporter-sample
        image: anthos-migrate.gcr.io/v2k-export:v1.0.1
After configuring your YAML, delete the StatefulSet objects that are holding
onto the existing volumes. Don't worry, you will recreate them after the export completes.
kubectl delete statefulset suitecrm-app
kubectl delete statefulset suitecrm-db
Apply the YAML to run the export jobs.
kubectl apply -f 3-anthos-migrate-export-storage.yaml
GKE will run the jobs. To check on export progress, use kubectl get pod
to get the pod name for the exporter job, then kubectl logs to get logs for the exporter.
kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
exporter-sample-suitecrm-db-h8d3k   1/1     Running   0          51s
suitecrm-app-0                      1/1     Running   0          34m
In this example, the exporter pod's name is exporter-sample-suitecrm-db-h8d3k. Use
this name to get a log displaying the progress of the job. In particular, the
log shows the last file copied (usr/share/info/FileName/) and the number of
bytes copied (859MB of 981MB).
kubectl logs exporter-sample-suitecrm-db-h8d3k
...
D0923 15:53:10.00000 10 hclog.py:68] [util] - TAIL: 'usr/share/info/FileName/'
D0923 15:53:10.00000 10 hclog.py:68] [util] - PROGRESS: 859MB / 981MB
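You can also confirm that both export jobs ran to completion by checking the Job objects; completed jobs report full completions (for example, 1/1):
kubectl get jobs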
Once the export is complete, you can reconfigure the application pods to run using the exported PVs.
Running containers with exported storage
After this step, the migration is complete and the container is ready to run independently from the source VM and Migrate for Anthos components.
The YAML file below recreates the StatefulSet objects for the application and database
to use the exported storage.
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: suitecrm-app
  namespace: default
spec:
  serviceName: "suitecrm-app-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-app
  template:
    metadata:
      labels:
        app: suitecrm-app
      annotations:
        anthos-migrate.gcr.io/action: run
        # Setting source-type to exported boots the container from a PVC without
        # other Migrate for Anthos components.
        anthos-migrate.gcr.io/source-type: exported
        anthos-migrate.gcr.io/source-pvc: gce-pd-claim-suitecrm-app
    spec:
      containers:
      - name: suitecrm-app
        image: anthos-migrate.gcr.io/v2k-run:v1.0.1
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: suitecrm-db
  namespace: default
spec:
  serviceName: "suitecrm-db-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-db
  template:
    metadata:
      labels:
        app: suitecrm-db
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: exported
        anthos-migrate.gcr.io/source-pvc: gce-pd-claim-suitecrm-db
    spec:
      containers:
      - name: suitecrm-db
        image: anthos-migrate.gcr.io/v2k-run:v1.0.1
Apply the YAML to launch your containers:
kubectl apply -f 4-anthos-migrate-update-storage.yaml
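To verify that the recreated pods are running and bound to the exported disks, check the pods and the new claims; both gce-pd claims should show a status of Bound:
kubectl get pods
kubectl get pvc gce-pd-claim-suitecrm-app gce-pd-claim-suitecrm-db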
Finally, delete the PersistentVolumeClaim objects that used the Migrate for Anthos CSI driver
for both the application and database.
kubectl delete pvc csi-disk-suitecrm-app
kubectl delete pvc csi-disk-suitecrm-db
You have successfully migrated an application to GKE using Migrate for Anthos.