
Adopting cloud, with new inventions along the way, charges up HSBC

September 26, 2019
Srinivas Vaddadi

Delivery Head, Data Services Engineering, HSBC

Editor’s note: We’re hearing today from HSBC, the huge global financial institution. They worked closely with Google Cloud engineers to move their legacy data warehouse to BigQuery, using custom-built tools and an automation-first approach that’s allowed them to make huge leaps in data analytics capabilities and ensure high-fidelity data.

At HSBC, we serve 39 million customers, in-person and online, from consumers to businesses, in 66 countries. We maintain data centers in 21 countries, with more than 94,000 servers. With an on-premises infrastructure supporting our business, we kept running into capacity challenges, which really became an innovation blocker and ultimately a business constraint. Our teams wanted to do more with data to create better products and services, but the technology tools we had weren’t letting us grow and explore. And that data was growing continually. Just one of our data warehouses had grown 300% from 2014 to 2018.

We had a huge amount of data, but what was the point of having all of it if we couldn't get insights and business value from it? We wanted to serve our customers flexibly, in the ways that work best for them.

We knew moving to cloud would let us store and process more data, but as a global bank, we were moving complex systems that also needed to be secure. It was a team effort to create the project scope and strategy up front, and it paid off in the end. Our cloud migration now enables us to use an agile, DevOps mindset, so we can fail fast and deliver smaller workloads, with automation built in along the way. This migration also helped us eliminate technical debt and build a data platform that lets us focus on innovation, not managing infrastructure. Along the way, we invented new technology and built processes that we can use as we continue migrating.

Planning for a cloud move
We chose cloud migration because we knew we needed cloud capabilities for our business to really reach its digital potential. We picked Google Cloud, specifically BigQuery, because it's super fast over both small and large datasets, and because we could interact with it through both a SQL interface and Connected Sheets. We had to move our data and its schema into the cloud without manually managing every detail or missing the timelines we had set. Our data warehouse was huge, complex, and mission-critical, and it didn't fit neatly into existing reference architectures. We needed to plan ahead and automate to make the migration efficient, and to ensure we could simplify data and processes along the way.

The first legacy data warehouse we migrated had been built over a period of 15 years and held 30 years' worth of data: millions of transactions totaling 180 TB. It ran 6,500 extract, transform, load (ETL) jobs and more than 2,500 reports, drawing data from about 100 sources. Cloud migration usually involves a choice between re-engineering and lift-and-shift, but we decided on a different strategy for ours: move and improve. This allowed us to take full advantage of BigQuery's capabilities, including its capacity and elasticity, to help solve our essential problem of capacity constraints.

Taking the first steps to cloud
We started creating our cloud strategy through a mapping exercise, which also helped start the change management process among internal teams. We chose architecture decision records as our migration approach, basing those on technical user journeys, which we mapped out using an agile board. User journeys included things like “change data capture,” “product event handling,” or “slowly changing dimensions.” 
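To make one of those journeys concrete: "slowly changing dimensions" is about keeping history when an attribute such as a customer's address changes. Below is a minimal, generic sketch of a Type 2 change expressed in BigQuery SQL; the dataset, table, and column names are invented for illustration and are not our production schema.

-- Hypothetical Type 2 slowly-changing-dimension load for a customer dimension.
-- Step 1: close out the current row for any customer whose address changed in today's feed.
MERGE `analytics.dim_customer` AS dim
USING `staging.customer_feed` AS src
ON dim.customer_id = src.customer_id AND dim.is_current = TRUE
WHEN MATCHED AND dim.address != src.address THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_DATE();

-- Step 2: insert a new current row for changed customers and for customers never seen before.
-- Changed customers no longer have a current row (it was closed above), so they qualify here.
INSERT INTO `analytics.dim_customer`
  (customer_id, address, valid_from, valid_to, is_current)
SELECT src.customer_id, src.address, CURRENT_DATE(), DATE '9999-12-31', TRUE
FROM `staging.customer_feed` AS src
LEFT JOIN `analytics.dim_customer` AS dim
  ON dim.customer_id = src.customer_id AND dim.is_current = TRUE
WHERE dim.customer_id IS NULL;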

These are typical data warehouse topics that have to be addressed in any migration, and we had others more specific to the financial services industry, too. For example, we needed to make sure the data warehouse would provide a consistent, golden source of data as of a specific point in time. We considered business impacts as well, so we prioritized moving archival and historical data first to immediately take load off the old system. We also worked to establish metrics early on and introduce new concepts, like managing queries and quotas rather than managing hardware, so that data warehouse users would be prepared for the shift to cloud.
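BigQuery's time travel feature is one building block for that kind of point-in-time view: any table can be queried as of a recent timestamp. The query below is a generic illustration rather than a description of our implementation, and the table name is hypothetical.

-- Illustrative only: read a table exactly as it stood at a fixed point in time.
-- Time travel covers roughly the last seven days; older point-in-time views are
-- usually kept as dated snapshots or partitions instead.
SELECT account_id, balance
FROM `risk.positions` FOR SYSTEM_TIME AS OF TIMESTAMP '2019-09-25 17:00:00 UTC'
WHERE business_date = '2019-09-25';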

To simplify as we went, we examined what we currently had stored in our data warehouse to see what was and wasn't being used. We worked with stakeholders to assess reports and identified more than 600 that weren't being used and could be deprecated. We also examined how we could simplify our ETL jobs to remove the technical debt added by previous migrations, giving our production support teams a bit more sleep at night.
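Once tables and reports are on BigQuery, the same kind of usage audit can be run straight from query history. This is a hedged sketch using INFORMATION_SCHEMA job metadata; the region qualifier and dataset names are examples, not ours.

-- Illustrative: when was each table last referenced by any query in this project?
-- Tables that never appear, or only appear long ago, are deprecation candidates.
SELECT
  ref.dataset_id,
  ref.table_id,
  MAX(j.creation_time) AS last_queried
FROM `region-eu`.INFORMATION_SCHEMA.JOBS_BY_PROJECT AS j,
  UNNEST(j.referenced_tables) AS ref
WHERE j.job_type = 'QUERY'
GROUP BY ref.dataset_id, ref.table_id
ORDER BY last_queried;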

We used a three-step migration strategy for our data: first, migrating the schema to BigQuery; second, migrating the reporting load to BigQuery, adding metadata tagging and performing reconciliation; and third, moving historical data by converting all of the SQL scripts into BigQuery-compliant scripts.
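To give a flavor of step one, the end result of schema conversion is ordinary BigQuery DDL. The definition below is a hypothetical example of the kind of partitioned, clustered table a legacy table might map to, not an actual HSBC table.

-- Hypothetical target table produced by schema conversion. Partitioning and
-- clustering take the place of the indexing and physical tuning in the old warehouse.
CREATE TABLE IF NOT EXISTS `core_banking.transactions`
(
  transaction_id  STRING  NOT NULL,
  account_id      STRING  NOT NULL,
  booking_date    DATE    NOT NULL,
  amount          NUMERIC,
  currency        STRING,
  channel         STRING
)
PARTITION BY booking_date
CLUSTER BY account_id
OPTIONS (description = 'Illustrative example of a migrated fact table');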

Creating new tools for migration automation
In keeping with our automation mantra, we invented multiple accelerators to speed up migration. We developed these to meet the timelines we’d set, and to eliminate human error. 

The schema parser and the data reconciliation tool helped us migrate our data layer onto BigQuery. The SQL parser helped migrate the data access layer onto Google Cloud Platform (GCP) without our having to individually migrate 3,500 SQL instances that had no data lineage or documentation, and it helped us prioritize workloads. The data lineage tool identified components across layers to find dependencies, which was essential for finding and eliminating integration issues during the planning stage and for identifying application owners during the migration. Finally, the data reconciliation tool reconciled any discrepancies between the data source and the cloud data target.
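To illustrate the kind of check a reconciliation tool automates (our actual tool does considerably more), comparing a row count and an order-independent fingerprint per business date flags any load date that differs between a landed copy of the source extract and the migrated target. All names below are invented, and the sketch assumes both sides share the same column layout.

-- Illustrative reconciliation query: compare per-date row counts and checksums.
WITH source AS (
  SELECT business_date,
         COUNT(*) AS row_count,
         BIT_XOR(FARM_FINGERPRINT(TO_JSON_STRING(t))) AS checksum
  FROM `landing.transactions_extract` AS t
  GROUP BY business_date
),
target AS (
  SELECT business_date,
         COUNT(*) AS row_count,
         BIT_XOR(FARM_FINGERPRINT(TO_JSON_STRING(t))) AS checksum
  FROM `core_banking.transactions` AS t
  GROUP BY business_date
)
SELECT
  COALESCE(s.business_date, g.business_date) AS business_date,
  s.row_count AS source_rows,
  g.row_count AS target_rows
FROM source AS s
FULL OUTER JOIN target AS g ON s.business_date = g.business_date
WHERE s.row_count IS NULL OR g.row_count IS NULL
   OR s.row_count != g.row_count
   OR s.checksum != g.checksum;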

Building a cloud future
We used this first migration in our UK data center as a template, so we now have a tailored process and custom tools that we’re confident using going forward. Our careful approach has paid off for our teams and our customers. We’re enjoying better development and testing procedures. We’ve created an onboarding path for applications, we have a single source of truth in our data warehouse, and we use authorized views for secure data access. The flexibility and scalable capacity of BigQuery means that users can explore data without constraints and our customers get the information they need, faster. 
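The authorized-view pattern itself is simple to sketch. The names below are hypothetical, and the final authorization step is done through the source dataset's access settings rather than in SQL.

-- Illustrative authorized view: it lives in a separate dataset from the raw tables,
-- exposes only the columns consumers need, and is authorized to read the raw dataset,
-- so consumers never receive direct access to the underlying data.
CREATE VIEW `shared_views.account_balances` AS
SELECT account_id, booking_date, balance
FROM `core_banking.account_positions`;

-- Consumers are granted read access to the view's dataset only.
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `shared_views`
TO "group:reporting-analysts@example.com";

-- The view must still be added as an authorized view on the core_banking dataset
-- (via the console, the bq command-line tool, or the API) so that it can read the
-- underlying tables on behalf of its users.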

Learn more about BigQuery and about HSBC.
