Migrating apps to containers? Why Migrate to Containers is your best bet
Director of Engineering, Google Cloud
Cloud Migration Team, Google Cloud
Most of us know that there is real value to be had in modernizing workloads, and there are plenty of customer success stories to showcase that. But, even though the value in modernizing workloads to Kubernetes has been well documented, there are still plenty of businesses that haven’t been able to make the jump.
Reluctant businesses say that manually modernizing traditional workloads running on VMs into containers is a complex, time-consuming, and costly project. For instance, some proposals to refactor a single small-to-medium application run $100,000 or more. Multiply that by 500 applications and that's a $50,000,000 project, to say nothing of how long it might take. Moreover, for some workloads (e.g., those from third parties or ISVs) there is no access to the source code, precluding manual containerization altogether.
As a result, these challenges become blockers for many enterprises in their data center migration, especially for customers that don’t want to simply lift and shift their important workloads. However, there’s an alternative. By leveraging automated containerization technologies and the right solution partners, you can cut the time and cost of a modernization project by as much as 90%, while enjoying most of the benefits that come with manual refactoring.
Given that, tools like Migrate to Containers offer a uniquely smart, efficient way to modernize traditional applications off of virtual machines and into native containers. Our automated approach extracts the critical application elements from a VM so you can easily insert those elements into containers running on Google Kubernetes Engine (GKE), without artifacts like guest OS layers that VMs need but that containers don’t.
For example, Migrate to Containers automatically generates a container image, a Dockerfile for day-2 image updates and application revisions, Kubernetes deployment YAMLs and (where relevant) a persistent data volume onto which the application data files and persistent state are copied. This automated, intelligent extraction is significantly faster and easier than manually modernizing the app, especially when source code or deep application rebuild knowledge is unavailable. That’s why using Migrate to Containers is one of the most scalable approaches to modernize applications with Kubernetes orchestration, image-based container management and DevOps automation.
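To make the output concrete, here is a sketch of what a generated deployment spec might look like. This is illustrative only: the actual files and field values depend on the source VM, and the names (myapp, myapp-data, the image path) are hypothetical.

```yaml
# Illustrative sketch of a generated deployment artifact; names and
# values are hypothetical and will differ for a real migrated VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        # Container image built from the application elements
        # extracted from the VM (no guest OS layers).
        image: gcr.io/my-project/myapp:v1
        volumeMounts:
        # Persistent data files and state copied from the VM land
        # on a data volume mounted into the container.
        - mountPath: /var/lib/myapp
          name: myapp-data
      volumes:
      - name: myapp-data
        persistentVolumeClaim:
          claimName: myapp-data
```

Because a Dockerfile is generated alongside artifacts like this one, day-2 image updates and application revisions follow the same image-based workflow as any other containerized app.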
One of our customers, British newspaper The Telegraph, used Migrate to Containers to accelerate its modernization and avoid the blockers we mentioned above. Here’s what Andrew Gregory, Systems Engineer Manager and Amit Lalani, Sr. Systems Engineer, had to say about the effort:
"The Telegraph was running a legacy content management system (CMS) in another public cloud on several instances. Upgrading the actual system or migrating the content to our main Website CMS was problematic, but we wanted to migrate it from the public cloud it was on. With the help of our partners at Claranet and Google engineers, Migrate to Containers delivered results quickly and efficiently. This legacy (but very important) system is now safely in GKE and joins its more modern counterparts, and is already seeing significant savings on infrastructure and reduced day-to-day operational costs.”
As The Telegraph’s experience shows, any means of accelerating and enabling the modernization of enterprise workloads is of high business value to our customers. Migrate to Containers can accelerate and simplify the transition from VMs to GKE and Anthos by automating the containerization and “kubernetization” of the workloads. While manual refactoring typically takes many weeks or months, Migrate to Containers can deliver containerization in hours or days. And once you’ve done so, you’ll start seeing immediate benefits in terms of infrastructure efficiency, operational productivity, and developer experience.
Similarly, when you are ready to migrate existing applications to the cloud, Migrate to Containers makes that process simple and fast. According to a commissioned study conducted by Forrester Consulting, New Technology Projection: The Total Economic Impact™ of Anthos (2019), a composite organization is projected to "accelerate app migration and modernization by 58% to 75%".
Let’s take a deeper look at some of the benefits you can expect from modernizing your VM-based workloads into containers on Kubernetes with Migrate to Containers.
Internal Google studies have shown that converting VMs to containers in Kubernetes can yield 30% to 65% savings on what you’re currently paying for your infrastructure, by means of:
Higher utilization and density - Leveraging automatic bin-packing and auto-scaling, Kubernetes places containers optimally on nodes based on their required resources, scaling as needed without impairing availability. In addition, unlike VMs, all containers on a node share a single copy of the operating system rather than each requiring its own OS image and vCPU, resulting in a much smaller memory and CPU footprint. This means more workloads running on fewer compute resources.
Shortened provisioning - Workloads are ready sooner and with less effort, so you pay less to run the same workloads.
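Bin-packing and auto-scaling both work off the resource requests you declare. As a minimal sketch (the names and values here are hypothetical, not output of Migrate to Containers), a Deployment with requests set lets the scheduler pack containers densely, and a HorizontalPodAutoscaler adds or removes replicas with demand:

```yaml
# Hypothetical example: declared requests let the Kubernetes scheduler
# bin-pack many such containers onto each node; limits cap usage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/my-project/myapp:v1
        resources:
          requests:
            cpu: 250m      # scheduler packs pods based on these requests
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
---
# Horizontal Pod Autoscaler: scale replicas to hold ~70% CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The key design point is that density comes from requests, not guesswork: the scheduler only needs accurate per-container requests to pack nodes tightly while auto-scaling absorbs load spikes.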
Empowering your IT team to do more in less time also yields roughly 20% to 55% cost savings through reduced overall IT management and administration, for example:
Simplified OS management - In Anthos, the node and its operating system are managed for you, so you don’t need to manage or be responsible for kernel security patches and upgrades.
Configuration encapsulation - By leveraging declarative specifications (infrastructure as code), you can simplify and automate your deployments and more easily perform maintenance tasks like rollbacks and upgrades. This all leads to a faster, more agile IT lifecycle.
Reduced downtime - Kubernetes features like self-healing and dynamic scaling reduce incidents and make desired-state management easier.
Unified management - By modernizing legacy workloads into containers, DevOps engineers can use the same methods to manage all workloads, both cloud-native and cloud-“naturalized,” making it faster and easier for IT to manage a hybrid landscape.
Environment parity - Improved visibility and monitoring make finding and fixing problems less toilsome.
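Self-healing and desired-state management in the list above both come from declarative specs. A minimal sketch (the app name, health endpoint, and thresholds are hypothetical): a liveness probe tells the kubelet to restart an unresponsive container automatically, while the replica count is a desired state Kubernetes continuously reconciles.

```yaml
# Hypothetical example of self-healing via a liveness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3          # desired state; failed pods are replaced automatically
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/my-project/myapp:v2
        livenessProbe:
          httpGet:
            path: /healthz   # assumes the app exposes a health endpoint
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
```

Rollbacks benefit from the same declarative model: because each revision of the spec is recorded, a bad upgrade can be reverted with a single `kubectl rollout undo deployment/myapp` rather than a manual rebuild.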
When you’ve got a better and more agile IT environment, your developers can do more with less, usually resulting in cost savings from developer efficiency and reduced infrastructure. Apps that have been converted into containers benefit from:
Layering efficiency - The ability to use Docker images and layers (which Migrate to Containers extracts as part of the container artifacts).
Developer velocity - You can finally “write once run everywhere,” and combine automated CI/CD pipelines with on-demand, repeatable test deployments using declarative models and Kubernetes orchestration.
Faster lifecycle - Get products to market quicker, yielding additional revenue and competitive market advantages, on top of savings.
In short, modernizing your VMs into containers running on Kubernetes has benefits across infrastructure, operations, and development. Although modernization may seem intimidating at first, Migrate to Containers helps make this process fast and painless. You can read more about it here, watch a quick video on using Migrate to Containers on Linux or Windows workloads, or try it yourself using Qwiklabs.
And if you’re interested in talking to someone about using Migrate to Containers please fill out this form (mention “Migrate to Containers” in the ‘Your Project’ field) and someone will contact you directly.