A CIO’s guide to cloud success: decouple to shift your business into high gear
Jennifer Lin
Vice President of Product Management, Google Cloud
They say 80% of success is showing up—but unfortunately for enterprises moving to the cloud, this doesn’t always hold up.
A recent McKinsey survey, for example, found that despite migrating to the cloud, many enterprises are nonetheless “falling short of their IT agility expectations.” Because CTOs and CIOs are struggling to increase IT agility, many organizations are unable to achieve their larger business goals; indeed, McKinsey notes that 95% of CIOs say the majority of the C-suite’s overall goals depend on them.
The disconnect between moving to the cloud and successful digital transformation can be traced back to the way most organizations adopt cloud: renting pooled resources from cloud vendors or investing in SaaS subscriptions. By adopting cloud in this cookie-cutter way, an enterprise basically keeps doing what it’s always done—perhaps just a little faster and a little more efficiently.
But we’re entering a new age. Cloud services are increasingly about intelligence, automation, and velocity—not just the economies of scale offered by big providers renting out their infrastructure. As McKinsey notes, enterprises sometimes stumble because they use the cloud for scale, but do not take advantage of the agility and velocity benefits it provides.
At its core, achieving velocity and agility isn’t about where an application is hosted so much as how fast, freely, and efficiently enterprises can launch and adjust strategies, whether creating ways to interact with customers on new technology platforms, quickly adding requested features to apps or monetizing data. This in turn relies on decoupling the dependencies between different systems and minimizing the amount of manual coordination that enterprise IT typically has to perform. The result is more loosely-coupled distributed systems that are far better equipped for today’s dynamic technology landscape.
This concept of decoupling, and how it can accelerate business results, drives much of what we do at Google—and it has strongly informed how we built Anthos, our open source-based multi-cloud platform that lets enterprises not only run apps anywhere, but also achieve the elusive IT agility and velocity they crave.
Decoupling = agility: shift your development into high gear
Migrating to the cloud does not, by default, transform an enterprise because digital transformation isn’t about the cloud itself. Rather, it’s about changing the way software is built and the consequent explosion in new business strategies that software can support—from selling products via voice assistants, to exposing proprietary data and functionality to partners at scale, to automating IT administration and security operations that used to require manual oversight.
Specifically, modern software development eschews ‘monolithic’ application architectures whose design makes it difficult to update or reuse functionality without impacting the entire application. Instead, developers increasingly build applications by assembling small, reusable, independently deployable microservices.
This shift not only makes software easier to reuse, combine, and modify (which can help an enterprise be more responsive to changing business needs), but also lets developers work in small parallel teams rather than large groups (which helps them create and deploy applications much faster). What’s more, microservices exposed as APIs can help developers leverage resources from a range of providers spread across many different clouds, giving them the tools to create richer applications and connected experiences.
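To make that contrast concrete, here is a minimal sketch of one such small, independently deployable microservice, written in Go. The service name, route, port, and data are hypothetical placeholders for illustration only, not part of any particular product:

```go
// inventory.go: a minimal, independently deployable microservice sketch.
// It owns one narrow piece of functionality and exposes it as an API,
// so other teams can consume it without sharing a codebase or a release cycle.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Item is the single resource this service knows about.
type Item struct {
	SKU   string `json:"sku"`
	Stock int    `json:"stock"`
}

func main() {
	// In a monolith, this lookup would be one function among thousands,
	// redeployed with everything else. Here it ships, scales, and fails on its own.
	http.HandleFunc("/inventory", func(w http.ResponseWriter, r *http.Request) {
		item := Item{SKU: r.URL.Query().Get("sku"), Stock: 42} // placeholder data
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(item)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A service like this can be packaged in its own container image and then versioned, deployed, and scaled independently of the rest of the application.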
This decoupling of services from an application, and of developers from one another, is often done via containers. By abstracting applications and libraries from the underlying operating system and hardware, containers make it easier for one team of developers to focus on its work without worrying about what any of the teams with which they’re collaborating are doing.
Containers also represent another important form of decoupling that can dramatically change the relationship among an IT department, servers, and maintenance. Thanks to containers, for example, many applications can reside on the same server without impacting one another, which reduces the need for application-specific hardware deployments. Containers can also be ported from one machine to another, opening opportunities for developers to create applications on-premises and scale them via the cloud, or to move applications from one cloud to another based on changing needs. This abstraction from the hardware they run on is one reason containers are often referred to as “cloud-native.”
This overview only scratches the surface, but the point is, by decoupling functionality and creating new architectures built around loosely-coupled distributed systems, enterprises can empower their developers to work faster in smaller, parallel teams and unlock the IT agility through which modern, software-driven business strategies are executed.
But doesn’t decoupling increase complexity?
Containers and distributed systems offer many advantages, but adoption isn’t as simple as flipping a switch.
Decomposing monolithic applications into hundreds of smaller services can increase an enterprise’s agility, but orchestrating all those services can be tremendously complicated, as can authenticating their users and protecting against threats. When millions of microservices are communicating with one another, it is simply impossible to put a human being in the middle of those processes; automated solutions are required. Many enterprises consequently struggle not only with governance across these distributed environments, but also with identifying the right solutions to put in place.
Moreover, not everything within a large enterprise will evolve at the same pace. Running containers in the cloud can help an enterprise focus on building great applications while handing off infrastructure management to a vendor. In fact, teams in almost every large enterprise are already operating this way—but other teams accustomed to legacy approaches may require a more incremental transition.
Additionally, enterprises may have a variety of reasons, whether strategic or regulatory, for keeping data on-prem—but they may still want ways to apply cloud-based analytics and machine learning services to that data and otherwise merge the cloud with their on-prem assets. Assembling the orchestration, management, and monitoring solutions for such deployments has historically been difficult.
Another significant challenge is that though containers are intrinsically portable, the various public clouds provide different platforms, which can make moving containers—let alone giving developers and administrators consistent experiences—quite difficult. Many open-source options are not the panacea they once seemed because the open-source version of a solution and the managed deployment sold by a cloud provider may be meaningfully different. These challenges can be particularly vexing because enterprises want the flexibility to change cloud vendors, utilize multiple clouds, and otherwise avoid lock-in.
Helping enterprises to enjoy the benefits of distributed systems while avoiding these challenges shaped our development of Anthos.
Anthos: Agility minus the complexity
Google runs multiple web services with billions of users and is an enormously complex organization whose IT systems connect tens of thousands of employees, contractors, and partners. No surprise, then, that we’ve spent a lot of time solving the puzzle of distributed systems and their dynamic, loosely-coupled components. For example, we open-sourced Kubernetes, the de facto standard for container orchestration, and Istio, a leading service mesh for managing microservices; both are major components of Anthos, and both are based on our internal best practices.
Istio provides systematic, centralized management for microservices and enables what is arguably the most important form of decoupling: policies from services. Developers supported by Istio are free to write code without encoding policies into their microservices, allowing administrators to change policies in a controlled rollout without redeploying individual services. This automates away the expensive, time-consuming coordination and bureaucracy traditionally required for IT governance and helps accelerate developer velocity.
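As a rough illustration of the pattern (not Istio’s actual mechanism, which enforces declarative policy resources in sidecar proxies outside the application process), the sketch below keeps the business logic free of any policy, while an access rule supplied from outside the code is enforced by a separate wrapper layer; the rule can then change without touching or redeploying the handler. All names here are hypothetical:

```go
// Conceptual sketch only: policy enforced outside the business logic,
// so operators can change the rule without redeploying the service.
package main

import (
	"log"
	"net/http"
	"os"
)

// ordersHandler is pure business logic: it contains no authentication,
// authorization, or quota rules.
func ordersHandler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("order accepted\n"))
}

// withPolicy wraps any handler with an externally supplied rule.
// In a service mesh, this enforcement lives in a sidecar proxy driven by
// declarative policy objects, rather than in the application at all.
func withPolicy(allowedCaller string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("X-Caller-Id") != allowedCaller {
			http.Error(w, "denied by policy", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// The rule comes from the environment (hypothetical variable name),
	// standing in for policy pushed by a control plane.
	allowed := os.Getenv("ALLOWED_CALLER")
	http.Handle("/orders", withPolicy(allowed, http.HandlerFunc(ordersHandler)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```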
Recognizing that enterprises demand choice and openness, we launched Anthos with hybrid support, and multi-cloud functionality is coming soon. All options offer simplified management via single-pane-of-glass views, policy-driven controls, and a consistent experience across all environments, whether on Google Cloud Platform, in a corporate data center with Anthos deployed on VMware, or, after our coming update, in a third-party cloud such as Azure or AWS. And because Anthos is software-based, on-prem deployments don’t require stack refreshes: enterprises can utilize existing hardware investments while developers and administrators get a consistent experience, regardless of where workloads are located or whose hardware they run on.
We’re already seeing fantastic momentum with customers using Anthos. For example, KeyBank, a superregional bank that’s been in business for almost 200 years, is adopting Anthos after using containers and Kubernetes for several years for customer-facing applications.
“The speed of innovation and competitive advantage of a container-based approach is unlike any technology we’ve used before,” said KeyBank’s CTO Keith Silvestri and Director of DevOps Practices Chris McFee in a recent blog post, adding that the technologies also helped the bank spin up infrastructure on demand when traffic spiked, such as during Black Friday or Cyber Monday.
KeyBank chose Anthos to bring this agility and “burstability” to the rest of its IT operations, including internal-facing applications, while staying as close as possible to the open-source version of Kubernetes. “We deploy Anthos locally on our familiar and high-performance Cisco HyperFlex hyperconverged infrastructure,” Silvestri and McFee noted. “We manage the containerized workloads as if they’re all running in GCP, from the single source of truth, our GCP console.”
Anthos includes much more—such as Migrate for Anthos, which automatically migrates virtual machines into containers in Google Kubernetes Engine (GKE), and an ecosystem of more than 40 hardware and software partners. But as the preceding attests, at the highest level, the platform helps enterprises balance developer agility, operational efficiency, and platform governance by facilitating the decoupling central to successful digital transformation:
- Infrastructure is decoupled from the applications
- Teams are decoupled from one another
- Development is decoupled from operations
- Security is decoupled from development and operations
Successful decoupling minimizes the need for manual coordination, cuts costs, reduces complexity, and significantly increases developer velocity, operational efficiency, and business productivity. Done well, it provides a framework, an implementation, and an operating model that ensure consistency across an open, hybrid, and multi-cloud future—a future Anthos has been built to serve.
Check out McKinsey’s report “Unlocking Business Acceleration in a Hybrid Cloud World” for more about how hybrid technologies can accelerate digital transformation, and tune in to our “Cloud OnAir with Anthos” session to learn even more about how Anthos is helping enterprises digitally transform—including special appearances by KeyBank and OpenText!