Some businesses find it necessary to operate in multiple
clouds. Data gravity is real, and if you have a global
customer base, you’ll inevitably find yourself serving
customers that want to minimize latency and networking fees
by keeping their compute close to where their data actually
lives. In such cases, multicloud can expand your addressable
market; you’ll wind up supporting managed services and data
on other providers anyway. Other companies value multicloud
as a risk mitigation strategy. In either situation, the key
is adopting multicloud correctly, and industry-standard
solutions can help.
First: make sure you can reuse as many of your existing
workflows as possible. By workflows, we mean the task
automation that creates the dataflow architecture between the
database and compute. This is where open source is
important; if you select a database that isn't open, you'll
have a hard time implementing that dataflow architecture and
automation. You'll do better to go with something that
speaks the Postgres protocol, or a managed Postgres
database, available on any cloud provider.
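To make that concrete, here's a minimal sketch of why the open protocol matters: a standard Postgres driver talks to any Postgres-compatible endpoint, self-hosted or managed, on any cloud. The host, database, and table names below are hypothetical placeholders.

```python
import os

import psycopg2  # standard open source Postgres driver

# Hypothetical connection details; moving providers means changing the
# endpoint, not the code, because the wire protocol is the same everywhere.
conn = psycopg2.connect(
    host=os.environ.get("PGHOST", "pg.example.internal"),
    port=5432,
    dbname="orders",
    user="app",
    password=os.environ["PGPASSWORD"],  # in practice, pull from a secret manager
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, status FROM orders WHERE status = %s", ("open",))
    for row in cur.fetchall():
        print(row)
conn.close()
```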
Second: on the compute side, Kubernetes saves substantial
time on deployments and automation, so when you’re picking a
set of technologies, be sure they work across the
boundaries established by your provider.
Boundaries could be different regions or zones, different
projects or accounts, and even on-premises or cloud. Don’t
waste time building and maintaining separate infrastructures
for each cloud provider (e.g., one on AWS, one on Google
Cloud, and one on Azure); your engineers will drown trying to keep them
on par with each other. If you build once and deploy across
multiple clouds, when you need to make updates, you can do
so centrally and consistently. Compute stacks like
Kubernetes lend a huge advantage to customers that are
serious about doing multicloud in a way that’s efficient and
doesn’t require reinventing the wheel every time you want to
onboard a new cloud provider.
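As a sketch of what "build once, deploy across multiple clouds" can look like in practice, the official Python Kubernetes client can apply one Deployment definition to clusters on different providers just by switching kubeconfig contexts. The context names and image below are hypothetical.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts, one per provider.
CONTEXTS = ["gke-prod", "eks-prod", "aks-prod"]


def make_deployment() -> client.V1Deployment:
    """One provider-agnostic Deployment definition, built once."""
    container = client.V1Container(
        name="web",
        image="registry.example.com/web:1.4.2",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        ),
    )


for ctx in CONTEXTS:
    # Same manifest on every provider; only the context changes.
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=make_deployment())
```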
Third: risk management. Having the ability
to run your stack in another environment will help mitigate
the risk of your cloud provider going down or starting to
compete with your business. To comply with regulations,
organizations will pick multiple providers to ensure
business continuity. For example, if you lose operations in
one region, you won't experience any downtime with a backup
running elsewhere.
The multicloud migrations that tend to work well are those
that leverage open standards. Take Kubernetes, for example,
which offers a provider-agnostic API for running
applications, for configuring and deploying them, and for
integrating things like security policies, networking, and
more. Think of Kubernetes as a multicloud operating system;
once it’s your layer of abstraction, you usually can hide
the differences between most major cloud providers.
When you decide to use Kubernetes, you have choices. You
can certainly run it yourself; Kubernetes is an open source
project, so you can download it and spend years integrating
it into your cloud provider or preferred environment.
But if you decide this isn’t the best use of your time, you
can use a managed Kubernetes offering. If you're on AWS,
that means Amazon Elastic Kubernetes Service (EKS); on
Azure, Azure Kubernetes Service (AKS); and on Google Cloud,
Google Kubernetes Engine (GKE). All those options give you a
common Kubernetes API, so when your team builds out its
tooling and workflows, you can reuse them across providers.
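Because all three expose the same API, tooling written once runs everywhere. A minimal sketch, assuming you keep one kubeconfig context per provider (the context names are hypothetical):

```python
from kubernetes import client, config

# Hypothetical contexts pointing at EKS, AKS, and GKE clusters.
for ctx in ["eks-prod", "aks-prod", "gke-prod"]:
    api_client = config.new_client_from_config(context=ctx)
    version = client.VersionApi(api_client).get_code()  # identical call on every provider
    nodes = client.CoreV1Api(api_client).list_node().items
    print(f"{ctx}: Kubernetes {version.git_version}, {len(nodes)} nodes")
```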
But not all managed service offerings are created equal.
Kubernetes is only as good as the infrastructure it runs on,
and GKE fills the gaps as a mature, fully managed Kubernetes
orchestration service. It offers fully integrated IaaS
ranging from provisioning of Tau VMs, which offer 42% better
price-performance than comparable general-purpose
offerings,1 and autoscaling across multiple zones and
upgrades, to creating and managing GPUs, TPUs for machine
learning, storage volumes, and security credentials on
demand. All you have to do is put your application in a
container and choose a system based on your needs.
What if you’ve chosen AWS as your cloud provider for VMs?
Do you need to stick with EKS? At a high level, Kubernetes
is equal across all cloud providers; you'll end up with the
same Kubernetes API. But beneath that API is a cluster, worker
nodes, security policies, the whole nine yards—and this is
where GKE stands out.
For example, if you still need those other clusters, you
can connect them to Google Cloud, which will give you a
single place to manage, view, troubleshoot, and debug them
all, while also centrally managing things like credentials.
GKE is best-of-breed
Kubernetes because of its end-to-end manageability, not just
because of its control plane or its multi-region or
multi-zone high availability. GKE can also leverage global
load balancers using the centrally managed
Multi Cluster Ingress across
multiple clusters and multiple regions.
What if you want the Kubernetes API but not the
responsibility of provisioning, scaling, and upgrading
clusters? For the majority of workloads, GKE Autopilot
abstracts away the cluster's underlying infrastructure,
including nodes and node pools, and you pay only for the
workload. GKE Autopilot is all about giving you the standard
Kubernetes API with strong security defaults, so you can
focus on your workloads and not the cluster.
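As a hedged sketch of how little there is to specify, here's cluster creation with the google-cloud-container client: with Autopilot enabled, there are no nodes or node pools to define. The project and region are placeholders.

```python
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()

# Hypothetical project and region. Note what's absent: no node pools,
# machine types, or zone lists -- Autopilot provisions and scales nodes.
operation = gke.create_cluster(
    request=container_v1.CreateClusterRequest(
        parent="projects/my-project/locations/us-central1",
        cluster=container_v1.Cluster(
            name="autopilot-demo",
            autopilot=container_v1.Autopilot(enabled=True),
        ),
    )
)
print("Creating cluster:", operation.name)
```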
1 Results are based on estimated
SPECrate®2017_int_base run on production VMs of two other
leading cloud vendors and pre-production Google Cloud Tau
VMs using vendor-recommended compilers. SPECrate is a
trademark of the Standard Performance Evaluation
Corporation. More information available at www.spec.org.
Kubernetes moved organizations from VMs to a new set of
abstractions that allow you to automate operations and focus
on your applications. But for more specific workloads (e.g.,
web and mobile apps, REST API back ends, data processing,
workflow automation), you can simplify even further and
optimize your deployment by leveraging the serverless model.
Maybe you’re using AWS Lambda, a popular serverless
platform that lets you write functions as a service and
connect them to all kinds of events. But because you end up
connecting to a database and handling security concerns,
these functions tend to grow in complexity, some ending up
bigger than normal applications. So what happens when you
have an application that outgrows the simplicity of a
function as a service, or an existing application that you
want to run in serverless fashion?
Unlike a traditional serverless platform that requires you
to rewrite your applications, Cloud Run offers an approach
that helps you reuse your existing containerized application
investments. Even though GKE is a managed service, you still
have to make some key decisions: what zones to run in, where
to store logs, how to manage traffic between different
versions of your application, how to register domain names,
and how to manage SSL certificates.
Cloud Run eliminates all those decisions. It also lets you
run more traditional workloads: if your applications have to
be always on, you can avoid cold starts by disabling scaling
to zero altogether, and Cloud Run supports other traditional
requirements like NFS, WebSockets, and VPC integration. And
like most serverless platforms, Cloud Run is opinionated,
offering features like built-in traffic management and
autoscaling.
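The contract is simple: give Cloud Run a container that serves HTTP on the port named in the PORT environment variable. A minimal sketch (the app itself is illustrative); once containerized, you'd deploy it with gcloud run deploy, and a minimum instance count is what disables scaling to zero.

```python
import os

from flask import Flask  # any HTTP framework works; Flask keeps the sketch short

app = Flask(__name__)


@app.route("/")
def index():
    return "Hello from Cloud Run"


if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```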