
3 keys to multicloud success you’ll find in Anthos 1.7

April 22, 2021
Jeff Reed

VP, Product Management, Cloud Security

Most organizations choose to work with multiple cloud providers, for a host of different reasons. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. And well you should! It’s completely reasonable to use capabilities from multiple cloud providers to achieve your desired business outcomes.

Beyond simply letting you run apps on-prem and in different clouds, we’ve noticed that successful multicloud implementations share characteristics that enable higher-level benefits for both developers and operators. To do multicloud right, you need to:

  • Establish a strong “anchor” to a single cloud provider 

  • Create a consistent operator experience

  • Standardize software deployment for developers 

We recently released Anthos 1.7, our run-anywhere Kubernetes platform that’s connected to Google Cloud, delivering an array of capabilities that make multicloud more accessible and sustainable. Let’s take a look at how our latest Anthos release tracks to a successful multicloud deployment. 

1. Create an anchor in the cloud

Your cloud journey should be anchored to a single cloud. Is that controversial? At Google Cloud, we think that instead of dragging your current state to the desired location, you bring characteristics of your desired state to your current location. And instead of re-creating foundational behaviors in each cloud, you anchor on a single cloud, and use those practices everywhere else.

Let’s be specific. Cloud Logging is our scalable, high-performing service for infrastructure and application logs. In addition to sending logs from on-premises Anthos environments, you can now send logs and metrics from Anthos on AWS to Cloud Logging and Cloud Monitoring. Use one powerful logging system that all your environments feed into, and retire your on-prem logging infrastructure.

When all your clusters are attached to Google Cloud, you can also simplify management. With the new Connect gateway, you can interact with any cluster, anywhere, all from Google Cloud. Deploy workloads to a cluster on-prem. Read logs from a workload running inside an AWS VPC. By using Google Cloud and Anthos as your multicloud anchor, you can centralize activities and reduce the toil of per-cloud management.

Letting the public cloud manage more things allows you to focus on what matters: your software. In this release, we enabled a preview of our managed control plane for Anthos Service Mesh on Google Cloud. This gives you an Istio-powered mesh with the data plane in your cluster, but with us scaling, patching, and operating the control plane itself. 

You can even use this for your virtual machine workloads. Take advantage of the cloud’s innovation and add your Compute Engine workloads into Anthos Service Mesh. The reality is that most enterprise compute resources are still in VMs, and many will remain there for a long time to come. This way, all of your VM-based workloads can have the same mesh functionality as your container-based workloads—even if the VMs run in a Managed Instance Group (MIG). You can also use Anthos Service Mesh to apply updates for Common Vulnerabilities and Exposures (CVEs), for better lifecycle management.
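
To give a flavor of what that looks like in practice, here’s a minimal sketch of the upstream Istio WorkloadGroup resource that mesh VM registration builds on. The name, namespace, service account, and network below are placeholders, and ASM’s own VM onboarding tooling may generate this for you.

```yaml
# Hypothetical WorkloadGroup describing a group of VMs (for example, a MIG)
# that should join the mesh alongside your container workloads.
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: checkout-vms              # placeholder workload name
  namespace: checkout             # placeholder namespace
spec:
  metadata:
    labels:
      app: checkout               # label that in-mesh Services use to select these VMs
  template:
    serviceAccount: checkout-sa   # Kubernetes service account the VM workloads run as
    network: vm-network           # placeholder network name for the VM side of the mesh
```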

2. Create a consistent experience for operators

No multicloud solution can eliminate all per-cloud management for operations teams. There will always be some level of direct management of each cloud. Can we reduce the amount so that operations teams don’t waste so much time with bespoke configurations? Yes, we can. 

Anthos normalizes a significant portion of your operational effort, regardless of where your Kubernetes cluster resides. And we’re working to bring more and more consistency to the Anthos experience on each of its target platforms. This helps operators learn something once, and apply it everywhere.

In Anthos 1.7, we delivered Windows container support for vSphere environments, as well as support for our own Container-Optimized OS. That brings Anthos to parity with what we offer in GKE on Google Cloud. We also made the vSphere CSI driver generally available, giving on-prem clusters the same experience with storage volumes that Google Cloud customers get.

Then there’s Anthos Config Management (ACM), which delivers a powerful, declarative way to define desired state and keep your environment in that state. That means defining and deploying security policies, reference data, and required agents with source-controlled configuration files. And in Anthos 1.7, we’re extending ACM to a wider range of supported cluster types beyond GKE, including EKS 1.19, AKS 1.19, OpenShift 4.6, KIND 0.10, and Rancher 1.2.4. Whether you’re deploying GKE clusters with Anthos, or attaching your existing Kubernetes clusters running in other environments, components like ACM and Connect gateway give you a consistent operational experience.
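
As a rough illustration, pointing a registered cluster at a config repo with ACM looks something like the following. The repo URL, branch, and directory are placeholders, and the exact fields can vary by ACM version.

```yaml
# Hypothetical ConfigManagement resource applied to each cluster, telling ACM
# which Git repo holds the desired state to sync and keep enforced.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  sourceFormat: unstructured      # or "hierarchy" for the structured repo layout
  git:
    syncRepo: https://github.com/your-org/anthos-config   # placeholder repo
    syncBranch: main                                       # placeholder branch
    secretType: none                                       # use ssh or a token for private repos
    policyDir: "clusters/all"                              # placeholder directory in the repo
```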

3. Establish a secure, familiar deployment target for developers

From what I’ve observed, the main beneficiaries of multicloud are developers—and by extension, the end users of the software those developers create. With multicloud, developers can use the best services from each cloud and run each workload in the right place. 

The hard part? Creating some level of repeatability across all these environments. How a developer deploys to a hypervisor or container environment on-prem is very different from how they deploy to an app-centric platform in the cloud. There are different requirements for how to package up the software, different deployment tools, and different handoffs or automated integrations to expose the application for use. Can we normalize it a bit? Indeed we can, by creating a consistent dev experience for the inner loop, and a standard deployment API for every environment.

To that end, the Google Cloud Code team has added extensions to your favorite IDEs to make it easier to build YAML for use in any Anthos environment. Create standard Kubernetes deployment manifests, a Cloud Build definition, or even a configuration that represents a first-party cloud managed service. And with local emulators for things like Kubernetes and Cloud Run, you can build and test locally before packaging up your software for deployment to Anthos.
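
For reference, the kind of “standard Kubernetes deployment manifest” in question is nothing exotic; here’s a bare-bones sketch with a placeholder name and image.

```yaml
# Minimal Deployment manifest of the sort Cloud Code helps scaffold.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                 # placeholder application name
  labels:
    app: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/YOUR_PROJECT/hello-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```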

Speaking of builds, with the new Connect gateway, you can create Cloud Build definitions that deploy to any Anthos-connected cluster. Cloud Build is a powerful service for packaging and deploying software, and the ability to use it to deploy anywhere is a big deal.
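
As a sketch, a cloudbuild.yaml along those lines might look like this. The membership name and manifest path are placeholders, and the exact gcloud command group for fetching Connect gateway credentials can differ by gcloud version, so treat the commands as an assumption to verify against your setup.

```yaml
# Hypothetical Cloud Build definition that deploys through the Connect gateway
# to a registered Anthos cluster, wherever that cluster runs.
steps:
- name: gcr.io/google.com/cloudsdktool/cloud-sdk   # assumed to bundle gcloud and kubectl
  entrypoint: bash
  args:
  - -c
  - |
    # Fetch a Connect gateway kubeconfig for the registered cluster
    # (membership name is a placeholder; command group may vary by gcloud version).
    gcloud container hub memberships get-credentials my-onprem-cluster
    # Apply the manifests in the repo to that cluster.
    kubectl apply -f k8s/
```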

There’s more. How should developers securely access cloud services from their apps? You don’t want something unique for each environment. In Google Cloud, Workload Identity maps Kubernetes service accounts to IAM service accounts so that you never need to stash credentials in the environment. With Anthos 1.7, we’ve made our Workload Identity capability available on-premises and in AWS. Just build your apps, and at runtime they can securely talk to managed services with appropriate permissions.
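
To make that concrete, here’s what the mapping looks like on the Kubernetes side with GKE-style Workload Identity. The names and project are placeholders, and the Google service account also needs an IAM policy binding (roles/iam.workloadIdentityUser) allowing this Kubernetes service account to act as it; Anthos environments outside Google Cloud use fleet Workload Identity, where the details differ slightly.

```yaml
# Kubernetes service account your pods run as, annotated so workloads using it
# act as the mapped Google service account at runtime, with no stored keys.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa                   # placeholder Kubernetes service account
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@YOUR_PROJECT.iam.gserviceaccount.com   # placeholder GSA
```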

Don’t just take our word for it

Multicloud is an idea whose time has come, and the new features and capabilities that we’re building into Anthos are rapidly translating into industry recognition and successful customer deployments.

When it comes to analyst firms, Forrester recently named Google Cloud a “Leader” in Multicloud Container Development Platforms, citing Anthos’ automated cluster lifecycle operations, control plane management, logging, and policy-driven security features.

When it comes to customers, we’re working with global enterprises across a number of industries that want to modernize their application portfolios to gain agility and drive cost savings. Here are three recent Anthos customers:

Major League Baseball uses Anthos to run applications like Statcast that need to run in the ballpark for best performance and low latency. Anthos on bare metal also makes it easier for them to swap out a server in the event of a hardware failure. 

PKO Bank Polski, the largest bank in Poland, uses Anthos to scale its services up dynamically when peaks occur unexpectedly. Marcin Dzienniak, PKO’s Chief Product Owner of the Cloud Center of Excellence, said “using Anthos, we'll be able to speed up our development and deliver new services faster.” 

Finally, the Wellcome Sanger Institute, one of the world’s leading centers for genomic science, uses Anthos to improve the stability of its research IT infrastructure. Deploying Anthos was a quick and easy process: the team had JupyterHub, an open-source research collaboration tool, up and running in just five days, including all notebooks and secure researcher access.

With the launch of Anthos 1.7, we hope to continue delivering exceptional experiences for even more Anthos customers.

Next steps

Download the Forrester Total Economic Impact study today to hear directly from enterprise engineering leaders and dive deep into the economic impact Anthos can deliver your organization. For a complete guide to using Anthos clusters on AWS, including cluster setup and administration, refer to setting up Anthos on other public clouds. To learn more about Anthos on bare metal, read about one Developer Advocate’s experience getting hands-on with Anthos on bare metal and then, to try it yourself, check out the Anthos Developer Sandbox.
