Migrating a monolith VM - Optimization
Now that your workload has been migrated from a VM to a container, many possibilities open up for optimization and for leveraging modernization tooling and processes. Not only does modifying the source code of your workload and deploying it become much easier, but operations tools such as logging and monitoring can fully integrate with the workload out of the box.
Objectives
At the end of this tutorial, you will have learned how to:
- Explore the migration artifacts.
- Make modifications to the source code and the Dockerfile of the migrated workload.
- Leverage Cloud Operations to monitor and view the logs of the migrated workload.
- Further optimize the workload using modernization best practices.
Before you begin
This tutorial is a follow-up to the Migration and deployment tutorial. Before starting this tutorial, follow the instructions on that page to create and customize a migration plan for your VM, and to deploy the resulting containerized artifacts.
Explore the migration artifacts
In this section, you learn about some of the artifacts that were created during the migration process and what their roles are. You will then be able to modify these files to augment and update your workload in the future.
View the Dockerfile configuration.

```shell
cat ${HOME}/bank-of-anthos/src/ledgermonolith/Dockerfile
```

```dockerfile
FROM anthos-migrate.gcr.io/v2k-run-embedded:v1.9.2 as migrate-for-anthos-runtime
FROM gcr.io/my-project/ledgermonolith-service-non-runnable-base:11-24-2021--16-22-59 as source-content
COPY --from=migrate-for-anthos-runtime / /
ADD blocklist.yaml /.m4a/blocklist.yaml
ADD logs.yaml /code/config/logs/logsArtifact.yaml
ENTRYPOINT [ "/.v2k.go" ]
```
This file contains the steps necessary to generate the container image for this workload. This is where you can add and update libraries, make modifications to your source code, and add new files. An up-to-date reference for this configuration can be found here.
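For example, shipping an extra file with the image is a one-line append to the Dockerfile. The snippet below is a minimal sketch run against a throwaway copy; the `maintenance.html` file name and the stand-in Dockerfile contents are only illustrations, not part of the generated artifacts:

```shell
# Work on a scratch copy standing in for a generated Dockerfile.
tmpdir=$(mktemp -d)
printf 'FROM scratch as source-content\nENTRYPOINT [ "/.v2k.go" ]\n' > "$tmpdir/Dockerfile"

# Append an ADD instruction, e.g. to ship an extra static file with the image.
echo 'ADD maintenance.html /opt/monolith/maintenance.html' >> "$tmpdir/Dockerfile"

# Confirm the instruction landed at the end of the file.
tail -n 1 "$tmpdir/Dockerfile"
```

The same append pattern works on the real Dockerfile once you are satisfied with the change.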
View the deployment_spec.yaml file.

```shell
cat ${HOME}/bank-of-anthos/src/ledgermonolith/deployment_spec.yaml
```

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: null
  labels:
    app: ledgermonolith-service
    migrate-for-anthos-optimization: "true"
    migrate-for-anthos-version: v1.9.2
  name: ledgermonolith-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ledgermonolith-service
      migrate-for-anthos-optimization: "true"
      migrate-for-anthos-version: v1.9.2
  serviceName: ledgermonolith-service
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ledgermonolith-service
        migrate-for-anthos-optimization: "true"
        migrate-for-anthos-version: v1.9.2
    spec:
      containers:
      - image: gcr.io/my-project/ledgermonolith-service:11-24-2021--16-22-59
        imagePullPolicy: IfNotPresent
        name: ledgermonolith-service
        readinessProbe:
          exec:
            command:
            - /code/ready.sh
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /sys/fs/cgroup
          name: cgroups
        - mountPath: /var/lib/postgresql
          name: data-pvc-0-0954d1e7-698b-42f0-a668-cdbca31ff2da
          subPath: var/lib/postgresql
      volumes:
      - hostPath:
          path: /sys/fs/cgroup
          type: Directory
        name: cgroups
      - name: data-pvc-0-0954d1e7-698b-42f0-a668-cdbca31ff2da
        persistentVolumeClaim:
          claimName: data-pvc-0-0954d1e7-698b-42f0-a668-cdbca31ff2da
  updateStrategy: {}
```

(Output truncated.)
This file contains the Kubernetes resource definitions for the migrated workload. It defines a StatefulSet for the processes, a Service for the port, and a PersistentVolumeClaim and PersistentVolume pair holding the migrated database.

In this sample output, the Docker image set for the StatefulSet is `gcr.io/my-project/ledgermonolith-service:11-24-2021--16-22-59`, which was generated during the migration process and contains the workload from the original VM.
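Assuming the spec has already been applied as in the previous tutorial, the generated resources can be inspected with standard kubectl commands. This is a sketch; the resource names come from the sample output above and require a configured cluster connection:

```shell
# List the StatefulSet, Service, and storage objects created from the spec.
kubectl get statefulset ledgermonolith-service
kubectl get service ledgermonolith-service
kubectl get pvc,pv

# Show which image the StatefulSet currently runs.
kubectl get statefulset ledgermonolith-service \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```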
Modify the source code
Once comfortable with the migration artifacts, you learn in this section how to make modifications to the source code of the workload as well as the Dockerfile, tag and push a new container image, and deploy this updated workload onto your cluster.
Modify the main controller of the ledger to always return a static balance when a balance is queried. Run the following command, which replaces `Long balance = info.getBalance();` with `Long balance = 12345L;`.

```shell
sed -i 's/Long balance = info.getBalance();/Long balance = 12345L;/g' \
  ${HOME}/bank-of-anthos/src/ledgermonolith/src/main/java/anthos/samples/bankofanthos/ledgermonolith/LedgerMonolithController.java
```
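You can confirm with grep that the substitution took effect before rebuilding. The sketch below demonstrates the same sed pattern on a stand-in file, since the exact controller source may differ between versions of the sample:

```shell
# Demonstrate the substitution on a stand-in Java snippet.
tmpfile=$(mktemp)
echo 'Long balance = info.getBalance();' > "$tmpfile"

sed -i 's/Long balance = info.getBalance();/Long balance = 12345L;/g' "$tmpfile"

# The line should now carry the static balance.
grep 'Long balance' "$tmpfile"
```

Note that `sed -i` with no backup suffix is GNU sed syntax, as found in Cloud Shell and most Linux environments.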
Navigate to the root of the service's source code and build the Java artifact.

```shell
cd ${HOME}/bank-of-anthos/src/ledgermonolith/
mvn package
```
Add a COPY command to the Dockerfile to copy the newly built Java artifact into the container image. The source path is relative to the build context, which is the service's directory.

```shell
echo "COPY target/ledgermonolith-1.0.jar /opt/monolith/ledgermonolith.jar" >> ${HOME}/bank-of-anthos/src/ledgermonolith/Dockerfile
```
Build and push the container image.

```shell
docker build . -t gcr.io/$PROJECT_ID/ledgermonolith-service:static-balance --no-cache
docker push gcr.io/$PROJECT_ID/ledgermonolith-service:static-balance
```
Change the image of the StatefulSet to the newly pushed image, then apply the updated spec. Double quotes are required so that `$PROJECT_ID` is expanded.

```shell
sed -i "s|image:.*|image: gcr.io/$PROJECT_ID/ledgermonolith-service:static-balance|g" \
  ${HOME}/bank-of-anthos/src/ledgermonolith/deployment_spec.yaml
kubectl apply -f ${HOME}/bank-of-anthos/src/ledgermonolith/deployment_spec.yaml
```
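You can also block until the StatefulSet has finished rolling out the new image. This is a sketch, assuming the workload runs in the default namespace:

```shell
# Wait for the StatefulSet to finish rolling out the new image.
kubectl rollout status statefulset/ledgermonolith-service --timeout=180s

# Optionally confirm the image tag now in use.
kubectl get statefulset ledgermonolith-service \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```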
You can view the state of the Pods using the following command:

```shell
kubectl get pods
```

It may take a few seconds for the `ledgermonolith-service` Pod to be up and running.

```
NAME                           READY   STATUS    RESTARTS   AGE
accounts-db-0                  1/1     Running   0          3m53s
contacts-d5dcdc87c-jbrhf       1/1     Running   0          3m53s
frontend-5768bd978-xdvpl       1/1     Running   0          3m53s
ledgermonolith-service-0       1/1     Running   0          1m11s
loadgenerator-8485dfd-582xv    1/1     Running   0          3m53s
userservice-8477dfcb46-rzw7z   1/1     Running   0          3m53s
```
Once all the Pods are set to `Running`, you can find the `frontend` LoadBalancer's external IP address.

```shell
kubectl get service frontend
```

```
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
frontend   LoadBalancer   10.79.248.161   ##.##.##.##    80:31304/TCP   46m
```
Open a browser and visit the web page at the external IP address found above (be sure to use HTTP rather than HTTPS).

```
http://EXTERNAL_IP
```

You should be able to log in with the default credentials and see transactions. Notice that the balance now shows $123.45, as expected from the source code change.
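The same reachability check can be scripted from Cloud Shell with curl. This is a sketch, assuming the frontend serves its login page over plain HTTP on the LoadBalancer IP:

```shell
# Look up the frontend's external IP from the Service status.
EXTERNAL_IP=$(kubectl get service frontend \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# A 200 response code indicates the frontend is serving traffic.
curl -s -o /dev/null -w '%{http_code}\n' "http://${EXTERNAL_IP}/"
```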
Monitor the container
In this section, you learn how migrating your workloads to containers facilitates observability tasks such as browsing logs and monitoring your application and services.
Open up the Cloud Console, browse to the GKE product, and click on Workloads.
Find the `ledgermonolith-service` workload and click it to view its current and historical status. On this page, you can view memory and CPU usage, as well as events and specifications.

Click the Logs tab to view the workload's historical logs. You can also view this information using Cloud Operations for GKE.
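The same logs are queryable from the command line through Cloud Logging. This is a sketch, assuming the default GKE logging integration is enabled on the cluster:

```shell
# Read recent container logs for the migrated workload from Cloud Logging.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.container_name="ledgermonolith-service"' \
  --limit=20 \
  --format='value(timestamp, textPayload)'
```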
Further optimization
From here, there are many optimization processes and activities that can enhance the container experience. Below are a few examples of such activities, which are outside the scope of this tutorial.
Add security and identity policies. Setting up policies allows you to restrict access to the various workloads and configuration not only externally, but also internally between services. This includes policies such as role-based access control, and ingress and egress access.
Integrate with a continuous integration and deployment pipeline. By integrating your workloads in a continuous integration and deployment pipeline, you accelerate the speed of developing and testing features by automating the process of building and deployment.
Decouple the migrated workload into microservices. Currently, the migrated `ledgermonolith-service` workload is composed of three logical processes and a database. These could be split into multiple microservices, which lets you set up more fine-grained scaling and policies targeting specific services and processes, thus reducing friction when developing and iterating.

Configure a service mesh. Implementing a service mesh across your workloads provides features like traffic management, mutual authentication, and observability. This can be done within a single cluster or across multiple clusters.
Enable auto-scaling and rolling updates. By setting up auto-scaling and rolling updates, Kubernetes enables workloads to be fault-tolerant and highly available by scheduling multiple replicas of the workloads and upgrading them one at a time in a resilient manner. Auto-scaling includes horizontally scaling nodes and Pods, as well as vertically scaling allocated resources.
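As one concrete example of the auto-scaling item above, a Horizontal Pod Autoscaler can be attached to a workload with a single command. This sketch targets the stateless `frontend` Deployment, a better autoscaling candidate than the stateful monolith, and assumes the Deployment declares CPU resource requests:

```shell
# Scale between 1 and 5 replicas, targeting 60% average CPU utilization.
kubectl autoscale deployment frontend --min=1 --max=5 --cpu-percent=60

# Inspect the autoscaler's current status.
kubectl get hpa frontend
```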
Summary
You started this series of tutorials with a live application composed of multiple services, some living in a GKE cluster and some living on a VM in Compute Engine. In only a few easy steps, and without any code changes or difficult refactoring, you migrated a monolithic service along with its database from a VM to the GKE cluster, reducing compute costs and making development easier. Finally, you learned how to quickly iterate on your source code and apply modernization best practices.
Clean up
To avoid unnecessary Google Cloud charges, you should delete the resources used for this tutorial as soon as you are done with it. These resources are:

- The `boa-cluster` GKE cluster
- The `migration-processing` GKE cluster
- The `ledgermonolith-service` Compute Engine VM
You can either delete these resources manually, or follow the steps below to delete your project, which will also get rid of all resources.
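If you prefer to delete the individual resources rather than the whole project, the following sketch shows the corresponding gcloud commands. The zone flags are assumptions and must be adjusted to match where you created the resources:

```shell
# Delete the two GKE clusters (zone is an assumption; adjust to yours).
gcloud container clusters delete boa-cluster --zone=us-central1-b --quiet
gcloud container clusters delete migration-processing --zone=us-central1-b --quiet

# Delete the source VM.
gcloud compute instances delete ledgermonolith-service --zone=us-central1-b --quiet
```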