
Driving change: How Geotab is modernizing applications with Google Cloud

September 15, 2020
Patrick McClafferty

Associate Vice President, DevOps, Geotab

Editor’s note: Google Cloud is an “ideal platform” for running and modernizing applications, according to a recent IDC InfoBrief. Today, we hear from Patrick McClafferty, Associate Vice President, DevOps at Geotab, a leading provider of fleet management hardware and software. Read on to hear how the company is keeping ahead of change and increased demand, and reducing licensing costs, by modernizing approximately 1,600 production servers to containers and open source.

As a global leader in IoT and connected transportation, our core objective at Geotab is to help businesses better manage their fleets by connecting their vehicles to the internet and providing access to data-driven, actionable insights. The Geotab solution equips our customers with the tools they need to improve fleet productivity, optimization, regulatory compliance, safety, and sustainability, helping them move their businesses toward the future.

At Geotab, we are facing tremendous change, both current and impending. As the adoption of electric vehicles (EVs) becomes more prevalent, we are proactively creating new solutions to address emerging business needs around EV-specific concerns such as battery degradation, charging, and temperature. Additionally, in the future, we anticipate that many OEMs will install their own factory-fit telematics hardware. As a result, we are actively integrating with companies like Volvo, Mack Trucks, General Motors, and Ford to help expand customers’ fleet management capabilities through access to the Geotab platform and the Geotab Marketplace without the use of third-party hardware.

These types of industry trends mean inevitable change for the MyGeotab platform, the web-based fleet management software available to all Geotab customers. Originally hosted on-premises at co-location facilities in Canada and the United States, Geotab started exploring public cloud options in 2015. We eventually chose Google Cloud as our primary cloud provider, as we found it to be the most stable of the providers we tried, with the least unscheduled downtime. Google Cloud’s live migration feature, which moves VM instances to a new host without requiring a reboot, was especially important in allowing our workloads to run reliably in the cloud. We also wanted a partner willing to work collaboratively with our team and help us meet our goals on our modernization journey. Google Cloud was the best choice for today and tomorrow.

As Geotab began its migration from on-prem to Google Cloud, software licensing costs began to climb quickly. MyGeotab is written primarily in a single language and built on a single development framework, and most of our production systems, both customer facing and internal, run on the same underlying platform. Additionally, the majority of our development environments run on that same platform, whether provisioned in the cloud or on local hardware such as laptops.

Geotab’s migration to the cloud also coincided with a rapid acceleration in our growth. When we started the migration from on-premises, we had approximately 120 physical servers running a single VM per host. Currently, we run over 1,600 VM instances on Compute Engine. Since licensing costs scale linearly with the number of cores we run, this growth significantly increased our total licensing costs.

Setting a new course

As Geotab grew into Google Cloud, we took advantage of various services and features to improve our systems and operations. We use Cloud Load Balancing (both TCP and HTTP/HTTPS) across a large number of our systems. Billing labels and reporting give us a clearer understanding of our costs and how to manage them. In addition, we use persistent disk snapshots and Cloud Storage to manage our backups and DR operations.
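
To illustrate one piece of this setup, here is a minimal Python sketch of the snapshot-based backups described above, using the google-cloud-compute client library; the project, zone, and resource names are hypothetical placeholders, not Geotab’s actual configuration.

```python
# Minimal sketch: back up a zonal persistent disk by creating a snapshot.
# Assumes google-cloud-compute is installed and default credentials are set.
from google.cloud import compute_v1


def snapshot_disk(project: str, zone: str, disk: str, snapshot_name: str) -> None:
    """Create a snapshot of a persistent disk and wait for it to complete."""
    client = compute_v1.DisksClient()
    snapshot = compute_v1.Snapshot(name=snapshot_name)
    operation = client.create_snapshot(
        project=project, zone=zone, disk=disk, snapshot_resource=snapshot
    )
    operation.result()  # block until the snapshot operation completes


if __name__ == "__main__":
    # Hypothetical names for illustration only.
    snapshot_disk("example-project", "us-central1-a", "example-db-disk", "example-db-backup-001")
```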

We also decided to take on a long-term project to modernize MyGeotab, moving from proprietary platforms to open source and containers. Migrating a production workload of this size to an entirely new platform can be a daunting task, but it can be tackled piece by piece without significant disruption to customers or end users. The main goal of this long-term project was to move towards a more cost-efficient business model while keeping our customers as the top priority.

We began the transition with database migration, moving our application from a proprietary relational database to Postgres, as it offered the largest licensing savings for the least disruption. First, we developed custom migration code that allowed us to migrate a single customer’s databases to and from Postgres.
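
As a rough illustration of the per-customer copy idea (not Geotab’s actual tooling), the sketch below streams rows from a source database into Postgres in batches using psycopg2; the connection handles, table, and column names are hypothetical.

```python
# Minimal sketch: batch-copy one table from a source database into Postgres.
# src_conn can be any DB-API connection to the source system (hypothetical).
import psycopg2
from psycopg2.extras import execute_values


def copy_table(src_conn, dst_dsn: str, table: str, columns: list[str], batch: int = 5000) -> None:
    """Stream rows out of the source and insert them into Postgres in batches."""
    col_list = ", ".join(columns)
    with psycopg2.connect(dst_dsn) as dst, dst.cursor() as ins:
        src = src_conn.cursor()
        src.execute(f"SELECT {col_list} FROM {table}")
        while rows := src.fetchmany(batch):
            execute_values(ins, f"INSERT INTO {table} ({col_list}) VALUES %s", rows)
```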

Eventually, we added the ability to stream data for a single device from our gateway servers to multiple customer portals simultaneously. This allowed us to migrate a backup of our larger customers’ databases to Postgres and simultaneously push data to both Postgres and the origin server. This process helped optimize MyGeotab for Postgres for our larger customers while their production systems continued to run on the established non-OSS version. By mid-2016, we had migrated the last of our customers and ran 100% of our MyGeotab environment on Postgres, significantly reducing our licensing costs.
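
Conceptually, the dual-write step looks something like the sketch below: a single gateway record is fanned out to every registered portal target so Postgres and the origin server stay in step during migration. The Target interface and its behavior are hypothetical illustrations, not Geotab’s actual streaming code.

```python
# Minimal sketch: fan one device record out to multiple portal targets.
from typing import Protocol


class Target(Protocol):
    """Anything that can accept a device record (hypothetical interface)."""
    def write(self, device_id: str, record: dict) -> None: ...


def fan_out(targets: list[Target], device_id: str, record: dict) -> None:
    """Push one gateway record to every target; one failure shouldn't block the rest."""
    for target in targets:
        try:
            target.write(device_id, record)
        except Exception as exc:
            # A real pipeline would retry or queue the failed write.
            print(f"write to {target!r} failed: {exc}")
```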

We also made the decision to migrate MyGeotab to .NET Core to give us the option of running on a wider variety of platforms. Again, our primary motivation for this decision was the potential long-term cost savings.

More recently, we shifted our focus to containerizing our application and building out the necessary support tools and framework to enable running on Linux. We began by migrating test and internal development environments to work through any potential bugs and performance issues before they hit production. The motivation for running our Linux workloads in containers was to simplify workload deployments and reduce custom VM image sprawl. Previously, Geotab created custom immutable VM images for each unique server type in the company, and every month, we re-provisioned every instance in Compute Engine with an updated version of the image. As our Compute Engine footprint grew, the number of unique “golden images” expanded over time. 

To help manage this sprawl, we made adjustments when developing our Linux strategy. Now, all stand-alone Compute Engine Linux instances run the same base image, with Docker containers deployed on the VM to run each unique workload. We did the necessary work to containerize our database and application environments for MyGeotab as part of our migration strategy. We also use vulnerability reporting in Container Registry and Binary Authorization to sign these images as part of our CI/CD process. Currently, these containers are deployed on stand-alone VMs without orchestration tools such as Kubernetes. Over the long term, our strategy is to break software components and functionality out of the main application and run them in Google Kubernetes Engine (GKE) clusters. However, we’re still in the early stages.
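
As a sketch of how a base-image VM might launch its assigned workload (our actual deployment tooling is custom), the example below uses the Docker SDK for Python; the image path and workload name are hypothetical.

```python
# Minimal sketch: pull and run one workload container on a base-image VM.
# Assumes the docker Python package is installed and the daemon is running.
import docker


def run_workload(image: str, name: str, env: dict[str, str]) -> None:
    """Pull the workload image and run it detached, restarting on failure."""
    client = docker.from_env()
    client.images.pull(image)
    client.containers.run(
        image,
        name=name,
        environment=env,
        detach=True,
        restart_policy={"Name": "always"},
    )


if __name__ == "__main__":
    # Hypothetical image path for illustration only.
    run_workload("gcr.io/example-project/example-worker:stable", "example-worker", {"ROLE": "worker"})
```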

Nearing our destination

While our first wave of migrations is still underway, we have already saved significantly on licensing fees. For the first wave, we targeted smaller customers with fewer than 1,000 Geotab devices, as we anticipated that larger customers would take longer to migrate due to the longer downtime that must be scheduled for their migrations. Once all customers have migrated, we estimate that our monthly cost savings from reduced licensing and infrastructure costs will double, cutting our software licensing fees on the MyGeotab platform by over 50%.

Every journey comes with its challenges; the biggest in this transition has been on the database side. Each step in migrating to Postgres on Linux has presented its own set of challenges and required us to develop custom implementation tools. Then there’s the question of how to minimize downtime during large database moves. Many of our larger customers run databases that are tens of terabytes in size, and in some cases migrating them can take over 24 hours. To mitigate this, we developed custom tools that let us perform the bulk of a large migration while a customer is up and running, then bring the customer down only at the last stage to copy over any deltas from the initial move. This approach allows us to migrate some of our largest customers with only a few hours of actual downtime.
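
The two-phase approach can be summarized in Python, reusing the batched copy idea from the earlier sketch; the maintenance-mode hooks and the updated_at watermark column are hypothetical assumptions, not our actual migration tools.

```python
# Minimal sketch: long bulk copy while the customer is live, then a short
# outage to copy only the rows changed since the copy began.
from datetime import datetime, timezone
from typing import Optional


def enter_maintenance_mode() -> None:
    ...  # hypothetical hook: briefly take the customer portal offline


def exit_maintenance_mode() -> None:
    ...  # hypothetical hook: bring the portal back up on the new database


def copy_rows(table: str, since: Optional[datetime] = None) -> None:
    ...  # batched copy as in the earlier sketch, filtered on updated_at > since


def migrate_with_minimal_downtime(table: str) -> None:
    watermark = datetime.now(timezone.utc)
    copy_rows(table)                 # phase 1: can take hours; customer stays online
    enter_maintenance_mode()         # phase 2: brief scheduled outage
    try:
        copy_rows(table, watermark)  # copy only the delta written since phase 1
    finally:
        exit_maintenance_mode()
```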

Challenges aside, the journey has paid off. We now run more systems with less downtime and our customers have access to new features. Given this transition, we are better positioned to update MyGeotab so it remains relevant in a constantly evolving landscape. Best of all, we’re doing all of this while enjoying tangible benefits in terms of licensing cost savings. We look forward to continuing on this journey with Google Cloud.
