
Welcome to the service mesh era: Introducing a new Istio blog post series

January 22, 2019
Megan O'Keefe

Staff Developer Advocate

Adopting a microservices architecture brings a host of benefits, including increased autonomy, flexibility, and modularity. But the process of decoupling a single-tier monolithic application into smaller services introduces new obstacles: How do you know what's running? How do you roll out new versions of your services? How do you secure and monitor all those containers?   

To address these challenges, you can use a service mesh: software that helps you orchestrate, secure, and collect telemetry across distributed applications. A service mesh transparently oversees and monitors all traffic for your application, typically through a set of network proxies that sit alongside each microservice. Adopting a service mesh allows you to decouple your application from the network, and in turn, allows your operations and development teams to work independently.

Alongside IBM, Lyft, and others, Google launched Istio in 2017 as an open-source service mesh solution. Built on the high-performance Envoy proxy, Istio provides a configurable overlay on your microservices running in Kubernetes. It supports end-to-end encryption between services, granular traffic and authorization policies, and unified metrics, all without any changes to your application code.
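How can Istio add all of this without touching your code? It injects an Envoy sidecar proxy alongside each of your workloads, and the easiest way to turn that on is a namespace label. Here is a minimal sketch using Istio's standard injection toggle; the payments namespace is hypothetical:

```yaml
# Minimal sketch: opt a namespace into Istio's automatic Envoy
# sidecar injection. "payments" is a hypothetical namespace name;
# the istio-injection label is Istio's standard injection toggle.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
```

Once the label is set, every new pod in that namespace gets an Envoy proxy that intercepts its inbound and outbound traffic, which is what lets the mesh add encryption, policy, and telemetry transparently.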

Istio's architecture is based on trusted service mesh software used internally at Google for years. And in much the same way we brought Kubernetes into the world, we wanted to make this exciting technology available to as many users as possible. To that end, we recently announced the beta availability of Istio on GKE, an important milestone in our quest to deliver a managed, mature service mesh that you can deploy with one click. You also heard from us about our vision for a service mesh that spans both cloud and on-prem environments.

To kick off 2019, we thought we'd take a step back and dive deep into how you can use Istio right now, in production. This is the first post in a practical blog series on Istio and service mesh, where we will cover all kinds of user perspectives, from developers and cluster operators to security administrators and SREs. Through real use cases, we will shed light on the "what" and "how" of service mesh and, most importantly, on how Istio can help you deliver immediate business value to your customers.

To start, let's explore why Istio matters in the context of other ongoing shifts in the cloud-native ecosystem: towards abstraction from infrastructure, towards automation, and towards a hybrid cloud environment.

Automate everything  

The world of modern software moves quickly. Increasingly, organizations are looking for ways to automate the development process from source code to release, in order to address business demands and increase velocity in a competitive landscape. Continuous delivery is a pipeline-based approach for automating application deployments, and represents a key pillar in DevOps best practices.

Istio's declarative, CRD-based configuration model integrates seamlessly with continuous delivery systems, allowing you to incorporate Istio resources into your deployment pipelines. For example, you can configure your pipeline to automatically deploy Istio VirtualService resources to manage traffic for a canary deployment. Doing so lets you leverage Istio's powerful features, from granular traffic management to in-flight chaos testing, with zero manual intervention. With its declarative configuration model, Istio can also work with modern GitOps workflows, where source control serves as the central source of truth for your infrastructure and application configuration.
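To make that concrete, here is a minimal sketch of the kind of VirtualService a pipeline might apply during a canary rollout, splitting traffic 90/10 between two versions. The reviews service and its v1/v2 subsets are hypothetical, and the subsets are defined by a companion DestinationRule:

```yaml
# Minimal sketch of a weighted canary rollout. "reviews" and the
# v1/v2 subsets are hypothetical; the weights steer traffic shares.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1        # stable version keeps 90% of traffic
      weight: 90
    - destination:
        host: reviews
        subset: v2        # canary version receives 10%
      weight: 10
---
# The DestinationRule maps each subset to pods by their version label.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

A pipeline can promote or roll back the canary simply by committing a new weight split, which fits naturally with the GitOps model described above.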

Serverless, with Istio  

Serverless computing, meanwhile, transforms source code into running workloads that execute only when called. Adopting a serverless pattern can help organizations reduce infrastructure costs, while allowing developers to focus on writing features and delivering business value.

Serverless platforms work well because they decouple code and infrastructure. But most of the time, organizations aren't only running serverless workloads; they also have stateful applications, including microservices apps on Kubernetes infrastructure. To address this, several Kubernetes-based serverless platforms have emerged in the open-source community. These platforms allow Kubernetes users to deploy both serverless functions and traditional Kubernetes applications onto the same cluster.

Last year, we released Knative, a new project that provides a common set of building blocks for running serverless applications on Kubernetes. Knative includes components for serving requests, handling event triggers, and building containerized functions from source code. Knative Serving is built on Istio, and brings Istio's telemetry aggregation and security-by-default to serverless functions.
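To illustrate, here is a minimal sketch of a Knative Service; the name and container image are hypothetical, and the manifest uses the stable serving.knative.dev/v1 API shape (the resource has evolved since Knative's earliest releases):

```yaml
# Minimal sketch of a Knative Service. The image is hypothetical;
# from this one resource, Knative Serving creates a routable,
# autoscaled revision of the application.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/example/hello:latest   # hypothetical container image
        env:
        - name: TARGET
          value: "service mesh era"
```

Because requests to this service flow through the mesh, the same Istio telemetry and security behavior applies to it as to any other workload on the cluster.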

Knative aims to become the standard across Kubernetes-based serverless platforms. Further, the ability to treat serverless functions as services in the same way you treat traditional containers will help provide much-needed uniformity between the serverless and Kubernetes worlds. This standardization will allow you to use the same Istio traffic rules, authorization policies, and metrics pipelines across all your workloads.
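As one concrete illustration, here is a sketch of an authorization policy written with the Istio RBAC resources of the 1.0/1.1 era (later Istio versions replaced these with AuthorizationPolicy). It reads the same whether the hello service behind it is a Knative function or a plain Kubernetes Deployment; all names are hypothetical, and Istio RBAC must be enabled for the policy to take effect:

```yaml
# Sketch: allow only the frontend service account to issue GETs to
# the hypothetical "hello" service, regardless of how it is deployed.
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: hello-viewer
  namespace: default
spec:
  rules:
  - services: ["hello.default.svc.cluster.local"]
    methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-hello-viewer
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/frontend"   # hypothetical caller identity
  roleRef:
    kind: ServiceRole
    name: hello-viewer
```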

Build once, run anywhere

As Kubernetes matures, users are increasingly adopting more complex cluster configurations. Today, you might have several clusters, not one. And those clusters might span hybrid environments, whether in the public cloud, in multiple clouds, or on-prem. You might also have microservices that have to talk to single-tier applications running in virtual machines, or service endpoints to manage and secure, or functions to spin up across clusters.

Driven by the need for lower latency, security, and cost savings, the era of multi-cloud is upon us, introducing the need for tools that span both cloud and on-prem environments.

Released with 1.0, Istio Multicluster is a feature that allows you to manage a cross-cluster service mesh using a single Istio control plane, so you can take advantage of Istio's features even with a complex, multicluster mesh topology. With Istio Multicluster, you can use the same security roles across clusters, aggregate metrics, and route traffic to a new version of an application. The multicluster story gets easier in 1.1, as the new Galley component helps synchronize service registries between clusters.

Cloud Services Platform is another example of the push towards interoperable environments, combining solutions including Google Kubernetes Engine, GKE On-Prem, and Istio, towards the ultimate goal of creating a seamless Kubernetes experience across environments.

What's next?

Subsequent posts in this series will cover Istio's key features: traffic management, authentication, security, observability, IT administration, and infrastructure environments. Whether you're just getting started with Istio, or working to move Istio into your production environment, we hope this blog post series will have something relevant and actionable for you.

We're excited to have you along for the ride on our service mesh journey. Stay tuned!
