Migrating a monolith VM - Migration and deployment
With a processing cluster set up and Migrate to Containers installed, you are ready to perform the migration. First, you add a migration source for your processing cluster and generate a migration plan for your VM. After reviewing and customizing the plan to your needs, you can generate Kubernetes artifacts and deploy them onto the GKE cluster where the rest of your application is already running.
Objectives
At the end of this tutorial, you will have learned how to:
- Add a migration source.
- Create a migration plan from your VM workload.
- Review and customize the migration plan.
- Generate and deploy migration artifacts to your GKE cluster.
Before you begin
This tutorial is a follow-up of the Discovery and assessment tutorial. Before starting this tutorial, follow the instructions on that page to run the discovery tools on the monolith VM and create your processing cluster.
Stop the monolith VM
Before you perform the migration, you must stop the monolith VM to avoid unintentional disruption or data corruption which could occur if data is in movement during migration processes.
In the Google Cloud console, go to the VM instances page.
Select the checkbox at the left end of the row for the `ledgermonolith-service` VM.
At the top of the page, click Stop to stop the VM.
Wait for the VM to be fully stopped. This may take 1-2 minutes.
Migrate the VM
Before you can migrate the VM, you must create a migration source that represents the source platform (Compute Engine or VMware).
Add a source
Open the Migrate to Containers page in the Google Cloud console.
In the Sources & candidates tab, click Add source.
Under Select processing cluster, choose the migration-processing cluster from the dropdown list and click Next.
Specify the Name of the source as `ledgermonolith-source`.
Set the Source type to Compute Engine and click Next.
Ensure the right project is selected as the source project.
Create a service account that enables you to use Compute Engine as a migration source by selecting Create a new service account.
Click Continue and then Add source.
Create a migration
Open the Migrate to Containers page in the Google Cloud console.
In the Migrations tab, click Create migration.
Set the Migration name as `ledgermonolith-migration`.
Select the migration source you created in the previous step: `ledgermonolith-source`.
Set the Workload type to `Linux system container`.
Set the source Instance name to `ledgermonolith-service`.
Click Create migration. This may take 1-2 minutes.
When the migration completes, the Status column displays Migration plan generated.
In the table, click the Name of your migration, `ledgermonolith-migration`, to open the details page.
In the Data configuration tab, create a new volume to migrate the VM's PostgreSQL database, `/var/lib/postgresql`. The configuration should look like this:

```yaml
volumes:
  - deploymentPvcName: ledgermonolith-db
    folders:
      # Folders to include in the data volume, e.g. "/var/lib/postgresql"
      # Included folders contain data and state, and therefore are
      # automatically excluded from a generated container image
      - /var/lib/postgresql
    newPvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10G
```
This ensures that the database persists through the migration. Click Save.
In the Migration plan tab, under `deployment`, make sure that your service has the name `ledgermonolith-service`, port `8080`, and protocol `TCP`. The object should look like this:

```yaml
...
endpoints:
  - name: ledgermonolith-service
    port: 8080
    protocol: TCP
...
```
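If you download the migration plan as a file, you can sanity-check the endpoint values from the command line. A minimal sketch, using a here-doc as a stand-in for the plan text (with a real plan file, pipe the file in instead):

```shell
# Print the port declared for each endpoint in the plan snippet.
# The here-doc below is a stand-in for the downloaded migration plan.
cat <<'EOF' | awk '/port:/ {print $2}'
endpoints:
  - name: ledgermonolith-service
    port: 8080
    protocol: TCP
EOF
# → 8080
```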
Click Save and generate artifacts to begin the migration process. This process will take approximately 7-8 minutes.
The artifacts generated by Migrate to Containers for this VM are:
- A Docker image of the VM process.
- A StatefulSet and a Service to run the newly migrated process.
- A Namespace and a DaemonSet to hold the container runtime.
- A PersistentVolumeClaim and a PersistentVolume to hold the PostgreSQL database.
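Once you have downloaded the generated `deployment_spec.yaml` (a later step in this tutorial), you can quickly confirm which resource kinds it contains. A sketch using a here-doc as a stand-in for the real file:

```shell
# List the distinct Kubernetes resource kinds in a spec file.
# The here-doc stands in for deployment_spec.yaml; in practice run
# the same pipeline against the downloaded file.
cat <<'EOF' | grep '^kind:' | sort -u
kind: StatefulSet
kind: Service
kind: PersistentVolumeClaim
kind: PersistentVolume
EOF
```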
Deploy the migrated workload
In the previous section, you migrated your monolith VM into a set of Kubernetes resources that can be deployed in a cluster. You can now deploy these resources onto your Bank of Anthos cluster, reconfigure the application to talk to the right endpoint for your newly migrated ledger service, and verify that it all works.
Now that the migration artifacts have been generated, you can connect to the processing cluster and download the artifacts to your Cloud Shell environment.
```shell
gcloud container clusters get-credentials migration-processing --zone COMPUTE_ZONE --project PROJECT_ID
cd ${HOME}/bank-of-anthos/src/ledgermonolith/
migctl migration get-artifacts ledgermonolith-migration
```
Connect to the Bank of Anthos cluster and deploy the generated Kubernetes resources. Additionally, install a container runtime using `migctl` so that your cluster can run your newly migrated Pod.

```shell
gcloud container clusters get-credentials boa-cluster --zone COMPUTE_ZONE --project=PROJECT_ID
migctl setup install --runtime
kubectl apply -f ${HOME}/bank-of-anthos/src/ledgermonolith/deployment_spec.yaml
```
```
Fetching cluster endpoint and auth data.
kubeconfig entry generated for boa-cluster.
applying resources to the cluster
namespace/v2k-system created
daemonset.apps/runtime-deploy-node created
statefulset.apps/ledgermonolith-service created
service/ledgermonolith-service-java created
persistentvolumeclaim/data-pvc-0-4e1b2e0e-021f-422a-8319-6da201a960e5 created
persistentvolume/pvc-4d41e0f2-569e-415d-87d9-019490f18b1c created
```
Edit the ConfigMap containing the ledger hosts to point to your new Kubernetes Pod, rather than the ledger monolith VM, which is no longer in service.

```shell
sed -i 's/.c.PROJECT_ID.internal//g' ${HOME}/bank-of-anthos/src/ledgermonolith/config.yaml
kubectl apply -f ${HOME}/bank-of-anthos/src/ledgermonolith/config.yaml
```
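To see what the `sed` substitution does, here is the same edit applied to a hypothetical line from `config.yaml` (the variable name is illustrative, not taken from the actual file): it strips the internal Compute Engine DNS suffix so the hostname resolves to the in-cluster Service instead of the VM.

```shell
# Hypothetical config line before and after the substitution.
echo "LEDGER_ADDR: ledgermonolith-service.c.PROJECT_ID.internal:8080" \
  | sed 's/.c.PROJECT_ID.internal//g'
# → LEDGER_ADDR: ledgermonolith-service:8080
```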
Delete all Pods to recreate them with your new configuration.
```shell
kubectl delete pods --all
```
You can view the state of the Pods using the following command:
```shell
kubectl get pods
```
It may take a few minutes for all the Pods to be up and running.
```
NAME                           READY   STATUS    RESTARTS   AGE
accounts-db-0                  1/1     Running   0          5m43s
contacts-d5dcdc87c-jbrhf       1/1     Running   0          5m44s
frontend-5768bd978-xdvpl       1/1     Running   0          5m44s
ledgermonolith-service-0       1/1     Running   0          5m44s
loadgenerator-8485dfd-582xv    1/1     Running   0          5m44s
userservice-8477dfcb46-rzw7z   1/1     Running   0          5m43s
```
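If you want a quick scriptable check while waiting, you can count the Pods that are not yet `Running`. A sketch on sample output; in practice pipe `kubectl get pods --no-headers` into the same `awk` program instead of the here-doc:

```shell
# Count pods whose STATUS column (field 3) is not "Running".
# The here-doc is sample output standing in for the live command.
cat <<'EOF' | awk '$3 != "Running" {n++} END {print n+0}'
accounts-db-0 1/1 Running 0 5m43s
frontend-5768bd978-xdvpl 1/1 ContainerCreating 0 5m44s
ledgermonolith-service-0 1/1 Running 0 5m44s
EOF
# → 1
```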
Once all the Pods are in the `Running` state, you can find the `frontend` LoadBalancer's external IP address:

```shell
kubectl get service frontend
```
```
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   10.79.248.161   ##.##.##.##   80:31304/TCP   46m
```
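To capture only the IP in a script, you can either ask `kubectl` for it directly with `kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`, or parse the table output. A sketch of the parsing approach, on sample output with a documentation IP standing in for the real address:

```shell
# Print the EXTERNAL-IP column (field 4) of the data row.
# The here-doc is sample `kubectl get service frontend` output.
cat <<'EOF' | awk 'NR==2 {print $4}'
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   10.79.248.161   203.0.113.10  80:31304/TCP   46m
EOF
# → 203.0.113.10
```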
Open a browser and visit the web page at the external IP address found above (be sure to use HTTP, rather than HTTPS).
```
http://EXTERNAL_IP
```
You should be able to log in with the default credentials and see transactions. The transactions you see come from the ledger monolith, which has now been migrated to a Kubernetes container.
What's next
Now that you have learned how to create and customize a migration plan from your VM workload, as well as performed the migration of your VM to containerized artifacts, you can move on to the next section of the tutorial, Optimization.
If you end the tutorial here, don't forget to clean up your Google Cloud project and resources.
Clean up
To avoid unnecessary Google Cloud charges, you should delete the resources used for this tutorial as soon as you are done with it. These resources are:
- The `boa-cluster` GKE cluster
- The `migration-processing` GKE cluster
- The `ledgermonolith-service` Compute Engine VM
You can either delete these resources manually, or follow the steps below to delete your project, which deletes all of its resources as well.