Technical overview

Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-prem environments. This page provides an overview of each layer of the Anthos infrastructure and shows how you can leverage its features.

The following diagram shows the Anthos components and their interactions in a typical enterprise environment:

[Diagram: Anthos components]

Anthos is composed of multiple products and features. Below is a table of each component and its availability:

Core Anthos Components     | Cloud                                                   | On-Premises
---------------------------+---------------------------------------------------------+-------------------------------------------
GKE                        | GKE                                                     | GKE On-Prem (1.0)
Multicluster Management    | Yes                                                     | Yes
Configuration Management   | Anthos Config Management (1.0)                          | Anthos Config Management (1.0)
Migration                  | Migrate for Anthos (Beta)                               | N/A
Service Mesh               | Anthos Service Mesh (Beta); Traffic Director            | Istio OSS (1.1.13)
Logging & Monitoring       | Stackdriver Logging, Stackdriver Monitoring, alerting   | Stackdriver for system components
Marketplace                | Kubernetes Applications in GCP Marketplace              | Kubernetes Applications in GCP Marketplace

Computing environment

The primary computing environment for Anthos relies on Google Kubernetes Engine (GKE) and GKE On-Prem to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters.
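For example, creating a conformant cluster on the cloud side is a single gcloud command away. The following is a minimal sketch; the cluster name and zone are placeholders:

    # Create a three-node GKE cluster (name and zone are hypothetical).
    gcloud container clusters create example-cluster \
        --zone us-central1-a \
        --num-nodes 3

    # Fetch credentials so that kubectl can talk to the new cluster.
    gcloud container clusters get-credentials example-cluster \
        --zone us-central1-a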

Kubernetes has two main parts: the control plane and the node components. With GKE, Google Cloud Platform hosts the control plane, and the Kubernetes API server is the only control-plane component accessible to customers. GKE manages the node components in the customer's project using instances in Compute Engine. With GKE On-Prem, all components are hosted in the customer's on-prem virtualization environment.

With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.

Networking environment

To take advantage of the full functionality provided by Anthos, your GKE and GKE On-Prem clusters need IP connectivity among them. Additionally, your GKE On-Prem environment must be able to reach Google's API endpoints, specifically Stackdriver for monitoring and alerting, and GKE Hub for registering clusters with the Google Cloud Platform Console.

You can connect your on-prem and GCP environments in various ways. The easiest way to get started is by implementing a site-to-site VPN between the environments using Cloud VPN. If you have more stringent latency and throughput requirements, you can choose between Dedicated and Partner Cloud Interconnect. For more information on choosing an interconnect type, see Cloud VPN and other hybrid connectivity solutions.
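As an illustration, a Classic Cloud VPN tunnel can be brought up with a handful of gcloud commands. This is a minimal sketch only; the names, peer address, and shared secret are placeholders, and the forwarding rules and static routes that a complete setup also needs are omitted:

    # Create a VPN gateway in the region hosting your GKE clusters
    # (all names and addresses below are hypothetical).
    gcloud compute target-vpn-gateways create on-prem-gw \
        --region us-central1 --network default

    # Create the tunnel to the on-prem peer gateway.
    gcloud compute vpn-tunnels create on-prem-tunnel \
        --region us-central1 \
        --target-vpn-gateway on-prem-gw \
        --peer-address 203.0.113.1 \
        --shared-secret example-shared-secret \
        --ike-version 2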

Kubernetes allows users to provision Layer 4 and Layer 7 load balancers. GKE uses Network Load Balancing for Layer 4 and HTTP(S) Load Balancing for Layer 7. Both are managed services and do not require any additional configuration or provisioning on your part. In contrast, GKE On-Prem uses an on-prem load balancing appliance.
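For example, exposing a workload through a Layer 4 load balancer only requires declaring a Service of type LoadBalancer; on GKE, this provisions a Network Load Balancer with no further setup. A minimal sketch, with hypothetical names and ports:

    # On GKE, type LoadBalancer provisions a Network Load Balancer (L4).
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend        # hypothetical workload
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
      - port: 80            # port exposed by the load balancer
        targetPort: 8080    # container port receiving the traffic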

Microservice architecture support

In Kubernetes, services are composed of many Pods, which execute containers. In a microservices architecture, a single application may consist of numerous services, and each service may have multiple versions deployed concurrently.

With a monolithic application, these network concerns don't arise, because communication happens through function calls isolated within the monolith. In a microservices architecture, service-to-service communication occurs over the network.

Networks can be unreliable and insecure, so services must be able to identify and deal with network idiosyncrasies. For instance, if Service A calls Service B and there is a network outage, what should Service A do when it doesn't get a response? Should it retry the call? If so, how often? And how can Service A verify that the response it eventually receives really comes from Service B?

You can deal with these types of network concerns in a number of ways. For example:

  • You can use client libraries inside your application to mitigate some of the network's inconsistencies. Managing multiple library versions in a polyglot environment requires diligence, rigor, and—sometimes—duplicated effort.

  • You can use Istio, an open-source implementation of the service mesh model. Istio uses sidecar proxies to enhance network security, reliability, and visibility. With a service mesh like Istio, these functions are abstracted away from the application's primary container and implemented in a common out-of-process proxy delivered as a separate container in the same Pod.

    Istio's features include:

    • Fine-grained control of traffic with rich routing rules for HTTP, gRPC, WebSocket, and TCP traffic (see the routing sketch after this list).

    • Request resiliency features such as retries, failovers, circuit breakers, and fault injection.

    • A pluggable policy layer and configuration API supporting access control and rate limiting.

    • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.

    • Secure service-to-service communication with authentication and authorization based on service accounts.

    • Support for deployment patterns such as A/B testing and canary rollouts through percentage-based traffic splitting.
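To make the routing and resiliency points concrete, here is a minimal sketch of an Istio VirtualService (API version as of Istio 1.1). The service name and subsets are hypothetical and assume a DestinationRule, not shown, that defines the v1 and v2 subsets:

    # Send 90% of traffic to subset v1 and 10% to a canary subset v2,
    # retrying failed requests up to three times.
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews              # hypothetical in-mesh service
      http:
      - retries:
          attempts: 3
          perTryTimeout: 2s
        route:
        - destination:
            host: reviews
            subset: v1       # defined in a DestinationRule (not shown)
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10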

Managed service mesh

Anthos Service Mesh manages your Istio environment and provides you with many features along with all of Istio's functionality:

  • Service metrics and logs for all traffic within your mesh's GKE cluster are automatically ingested to GCP.

  • In-depth telemetry in the Anthos Service Mesh Dashboard lets you dig deep into your metrics and logs, filtering and slicing your data on a wide variety of attributes.

  • Service-to-service relationships at a glance: understand who connects to each service and the services it depends on.

  • Quickly see the communication security posture not only of your service, but its relationships to other services.

  • Dig deeper into your service metrics and combine them with other GCP metrics using Stackdriver.

  • Gain clear and simple insight into the health of your service with service level objectives (SLOs), which allow you to easily define and alert on your own standards of service health.

Centralized config management

[Diagram: Anthos Config Management architecture]

Spanning multiple environments adds complexity in terms of resource management and consistency. Anthos provides a unified model for computing, networking, and even service management across clouds and datacenters.

Configuration as code is one common approach to managing this complexity. Anthos provides configuration as code via Anthos Config Management, which deploys the Anthos Config Management Operator to your GKE or GKE On-Prem clusters, allowing you to monitor and apply any configuration changes detected in a Git repo.

This approach leverages core Kubernetes concepts, such as Namespaces, labels, and annotations to determine how and where to apply the config changes to all of your Kubernetes clusters, no matter where they reside. The repo provides a versioned, secured, and controlled single source of truth for all of your Kubernetes configurations. Any YAML or JSON that can be applied with kubectl commands can be managed with the Anthos Config Management Operator and applied to any Kubernetes cluster using Anthos Config Management.
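For instance, an ordinary Namespace manifest like the following, with a hypothetical name and label, can be committed to the repo, and the Operator keeps every enrolled cluster in sync with it:

    # A plain Kubernetes manifest; nothing Anthos-specific is required.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: payments        # hypothetical Namespace
      labels:
        env: prod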

Anthos Config Management has the following benefits for your Kubernetes Engine clusters:

  • Single source of truth, control, and management

    • Enables the use of code reviews, validation, and rollback workflows.

    • Avoids shadow ops, where Kubernetes clusters drift out of sync due to manual changes.

    • Enables the use of CI/CD pipelines for automated testing and rollout.

  • One-step deployment across all clusters

    • Anthos Config Management turns a single Git commit into multiple kubectl commands across all clusters.

    • Roll back by simply reverting the change in Git. The reversion is then automatically deployed at scale.

  • Rich inheritance model for applying changes

    • Using Namespaces, you can create configuration for all clusters, some clusters, some Namespaces, or even custom resources.

    • Using Namespace inheritance, you can create a layered Namespace model that allows for configuration inheritance across the repo folder structure.
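A minimal sketch of such a repo layout follows. The top-level directories reflect the Anthos Config Management convention, while everything under namespaces/ is hypothetical:

    config-root/
    ├── system/                  # Operator configuration
    ├── clusterregistry/         # cluster and cluster-selector records
    ├── cluster/                 # cluster-scoped objects, such as RBAC
    └── namespaces/
        ├── quota.yaml           # inherited by every descendant Namespace
        └── payments/
            └── namespace.yaml   # declares the payments Namespace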

Consolidated logging and monitoring

Access to logs for applications and infrastructure components is critical for running and maintaining production infrastructure. Stackdriver Logging provides a unified place to store and analyze logs. Logs generated by your cluster's internal components are sent automatically to Stackdriver Logging. Workloads running inside your clusters have logs automatically enriched with relevant labels like the pod name and cluster that generated them. Once labeled, logs are easier to explore through advanced queries. In addition, audit logs allow you to capture and analyze the interactions that your applications and human users are having with your control components.
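For example, with these labels in place, a short advanced filter isolates one workload's error logs; the cluster and Namespace names below are hypothetical:

    # Stackdriver Logging advanced filter: container errors from the
    # "payments" Namespace of the cluster "example-cluster".
    resource.type="k8s_container"
    resource.labels.cluster_name="example-cluster"
    resource.labels.namespace_name="payments"
    severity>=ERROR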

Another key element for operating your Kubernetes environments is the ability to store metrics that provide visibility into your system's behavior and health. Stackdriver Kubernetes Engine Monitoring provides an automatic and idiomatic integration that stores your application's critical metrics for use in debugging, alerting, and post-incident analysis.

GKE users can choose to enable Stackdriver Kubernetes Engine Monitoring; for GKE On-Prem users, it is enabled by default.

Unified user interface

The GCP Console provides you with a secure, unified user interface.

GKE Dashboard

As the number of Kubernetes clusters in an organization increases, it can be difficult to understand what is running across each environment. The GKE Dashboard provides a secure console to view the state of your Kubernetes clusters and Kubernetes workloads. With GKE Connect, you will be able to register your GKE On-Prem clusters securely with GCP and view the same information that is available for your GKE clusters through the GKE Dashboard.

Service Mesh Dashboard

Anthos Service Mesh provides observability into the health and performance of your services. Using the Stackdriver adapter, Anthos Service Mesh collects and aggregates data about each service request and response, which means that service developers don't have to instrument their code to collect telemetry data or manually set up dashboards and charts. Anthos Service Mesh automatically uploads metrics and logs to Stackdriver for all traffic within your cluster. This detailed telemetry enables operators to observe service behavior, and empowers them to troubleshoot, maintain, and optimize their applications.

On the Service Mesh Dashboard, you can:

  • Get an overview of all services in your mesh, with critical service-level metrics on three of the four golden signals of monitoring: latency, traffic, and errors.

  • Define, review, and set alerts against service level objectives (SLOs), which summarize your service's user-visible performance.

  • View metric charts for individual services and deeply analyze them with filtering and breakdowns, including by response code, protocol, destination Pod, traffic source, and more.

  • Get detailed information about each service's endpoints, see how traffic flows between services, and examine what performance looks like for each communication edge.

  • Explore a service topology graph visualization that shows your mesh's services and their relationships.

Third party application marketplace

The Kubernetes ecosystem is continually expanding and creating a wealth of functionality that can be enabled on top of your existing clusters. For easy installation and management of third-party applications, you can use GCP Marketplace, which can deploy to your Anthos clusters no matter where they are running. GCP Marketplace solutions have direct integration with your existing GCP billing and are supported directly by the software vendor.

In the marketplace solution catalog, you'll find:

  • Storage solutions
  • Databases
  • Continuous integration and delivery tools
  • Monitoring solutions
  • Security and compliance tools

Benefits

Anthos is a platform of distinct services that work in concert to deliver value across the entire enterprise. This section describes how it can be used by the various roles in an organization, and the benefits each group will see.

Anthos for development

For the developer, Anthos provides a state-of-the-art container management platform based on Kubernetes. Developers can use this platform to quickly and easily build and deploy existing container-based applications and microservices-based architectures.

Key benefits include:

  • Git-compliant management and CI/CD workflows for configuration as well as code using Anthos Config Management.

  • Code-free instrumentation using Istio and Stackdriver to provide uniform observability.

  • Code-free protection of services using mTLS and throttling.

  • Support for GCP Marketplace to quickly and easily drop off-the-shelf products into clusters.

Anthos for migration

For migration, Anthos includes Migrate for Anthos, which allows you to orchestrate migrations using Kubernetes in GKE.

Read about the benefits of migrating to containers with Migrate for Anthos.

Anthos for operations

For Operations, Anthos provides centralized, efficient, and templatized deployment and management of clusters, allowing the operations team to quickly provision and manage infrastructure that is compliant with corporate standards.

Key benefits include:

  • Single command deployment of new clusters with GKE and GKE On-Prem (gkectl); see the sketch after this list.

  • Centralized configuration management and compliance with configuration-as-code and Anthos Config Management.

  • Simplified deployment and rollback with Git check-ins and Anthos Config Management.

  • Single pane of glass visibility across all clusters from infrastructure through to application performance and topology with Stackdriver.
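The on-prem half of the first point looks roughly like the following minimal sketch, where the configuration file is a placeholder prepared in advance:

    # Validate the prepared configuration, then create the cluster.
    gkectl check-config --config create-config.yaml
    gkectl create cluster --config create-config.yaml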

Anthos for security

For Security, Anthos provides the ability to enforce security standards on clusters, deployed applications, and even the configuration management workflow using a configuration-as-code approach and centralized management.

Key benefits include:

  • Centralized, auditable, and securable workflow using Git-compliant configuration repos with Anthos Config Management.

  • Compliance enforcement of cluster configurations using Namespaces and inherited config with Anthos Config Management.

  • Code-free securing of microservices using Istio, providing in-cluster mTLS and certificate management (see the sketch after this list).

  • Built-in services protection using Istio throttling and routing.
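As an illustration of the mTLS point, a Namespace can be locked to mutual TLS with an Istio authentication Policy (API version as of Istio 1.1; the Namespace is hypothetical). Mesh clients also need a DestinationRule with ISTIO_MUTUAL TLS mode, omitted here:

    # Require mTLS for all workloads in the "payments" Namespace.
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: default
      namespace: payments
    spec:
      peers:
      - mtls: {}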
