Google Distributed Cloud, the software-only component of the Google Distributed Cloud (GDC) portfolio, brings Google Kubernetes Engine (GKE) to on-premises data centers. With Google Distributed Cloud, you can create and manage Kubernetes clusters on hardware in your own data center while using Google Cloud features.
Why Google Distributed Cloud?
Google Distributed Cloud takes advantage of your existing enterprise infrastructure, and helps you modernize applications throughout their lifecycle.
Bring your own node
Google Distributed Cloud lets you deploy applications directly on your own hardware infrastructure, so it can deliver the best performance and flexibility. You have direct control over application scale, security, and network latency. You also get the benefits of containerized applications through Google Kubernetes Engine (GKE) and GKE Enterprise components.
Improved performance and lowered cost
Google Distributed Cloud manages application deployment and health across existing corporate data centers for more efficient operation. You can run Google Distributed Cloud at the edge of the network, so analytics applications can run at full performance.
Because Google Distributed Cloud runs on physical machines instead of virtual machines, you can also manage application containers on a wide variety of performance-optimized hardware types, such as GPUs. Google Distributed Cloud also allows for direct application access to hardware.
Compatible security
Because you control your node environment, you can optimize your network, hardware, and applications to meet your specific requirements. As a result, you can directly control system security without having to worry about compatibility with virtual machines and guest operating systems.
Monitored application deployment
Google Distributed Cloud provides advanced monitoring of the health and performance of your environment. You can more easily adjust the scale of applications while maintaining reliability despite fluctuations in workload and network traffic.
You manage monitoring, logging, and analysis of clusters and workloads through Connect.
Network latency and flexibility
Because you manage your network requirements, your network can be optimized for low latency. This network optimization can be crucial for performance in commercial or finance analytics and other enterprise or network edge applications.
Highly available
Google Distributed Cloud includes support for multiple control plane nodes in a cluster. If a control plane node goes down, you can still administer your environment.
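High availability is typically configured by listing multiple control plane node addresses in the cluster's configuration file. A minimal sketch of that section, assuming the `baremetal.cluster.gke.io` KRM field names; the IP addresses are placeholders:

```yaml
# Illustrative sketch of an HA control plane in a cluster config.
# With three control plane nodes, the cluster can tolerate the
# loss of one node and remain administrable.
controlPlane:
  nodePoolSpec:
    nodes:
    - address: 10.200.0.2   # control plane node 1
    - address: 10.200.0.3   # control plane node 2
    - address: 10.200.0.4   # control plane node 3
```

An odd number of control plane nodes is the usual choice so that etcd can maintain quorum after a node failure.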
Secure design and control
Your infrastructure security can be customized for your own needs, with minimal connections to outside resources. Most importantly, there is no additional VM complexity when deploying security systems, and you maintain complete OS control when interacting with existing security systems.
Google Distributed Cloud works with lightweight secure connections to Google APIs. You can manage clusters and applications from a central location with Connect and Cloud Monitoring. This centralization also helps keep your deployments running smoothly, and lets Google Cloud Support troubleshoot issues more effectively.
Preflight checks on installation
Google Distributed Cloud runs on open source and enterprise Linux systems and on minimal hardware infrastructure, so it's flexible in your environment. Google Distributed Cloud also includes various preflight checks to help ensure successful configurations and installations.
Application deployment and load balancing
Google Distributed Cloud includes Layer 4 and Layer 7 load balancing mechanisms at cluster creation.
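Load balancing is specified in the cluster configuration at creation time. The following is a simplified sketch of a `loadBalancer` section; the field names follow the cluster configuration file format, but the mode, VIPs, and address range are illustrative placeholders:

```yaml
# Illustrative loadBalancer section of a cluster config (values are placeholders).
loadBalancer:
  mode: bundled                    # cluster-hosted load balancing, rather than external
  vips:
    controlPlaneVIP: 10.200.0.50   # Layer 4 VIP for the Kubernetes API server
    ingressVIP: 10.200.0.51        # VIP for Layer 7 ingress traffic
  addressPools:
  - name: pool1
    addresses:
    - 10.200.0.51-10.200.0.70      # range available to LoadBalancer Services
```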
Improved etcd reliability
To monitor the size of etcd databases and defragment them, Google Distributed Cloud control planes include an etcddefrag Pod. The etcddefrag Pod reclaims storage from large etcd databases and recovers etcd when disk space is exceeded.
How it works
Google Distributed Cloud extends Google Kubernetes Engine (GKE) to let you create GKE clusters on your own Linux servers on your own premises. You manage these clusters in Google Cloud along with regular Google Kubernetes Engine clusters and clusters in other environments as part of a fleet.
GKE runs in Google Cloud where the Kubernetes control plane and network infrastructure are managed by Google Cloud. Because Google Distributed Cloud runs in your data center, you must install some admin and control plane software in addition to the GKE software. The software that runs in your data center is downloaded as part of the installation and upgrade processes.
The following diagram shows the simplified result of a completed installation.
Key components
The following components make up a Google Distributed Cloud installation:
The admin cluster consists of one or more control plane nodes. Each node is a physical machine running a supported Linux operating system. The standard deployment consists of an admin cluster that manages the lifecycle of one or more user clusters through Kubernetes Resource Management (KRM). Each node in the installation is a machine with its own IP address.
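Because clusters are managed through KRM, an admin cluster is itself described as a Kubernetes resource. A simplified sketch, assuming the `baremetal.cluster.gke.io/v1` API; the names, version, and addresses are placeholders:

```yaml
# Simplified sketch of an admin cluster defined as a KRM resource
# (illustrative values throughout).
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: admin-cluster
  namespace: cluster-admin-cluster
spec:
  type: admin                      # admin clusters manage user cluster lifecycles
  anthosBareMetalVersion: "1.16"
  gkeConnect:
    projectID: my-project          # Google Cloud project for fleet registration
  controlPlane:
    nodePoolSpec:
      nodes:
      - address: 10.200.0.2        # each node is a machine with its own IP address
```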
A user cluster is where the workloads that implement your applications run, like in GKE on Google Cloud. Each user cluster consists of at least one control plane node and one worker node.
The admin workstation is typically a separate machine that includes the tools and cluster artifacts, such as configuration files. Cluster creators and developers use these tools and artifacts to manage Google Distributed Cloud, with appropriate permissions:
- Running bmctl from the admin workstation lets you create and update clusters and perform some other administrative tasks
- Running kubectl from the admin workstation lets you interact with your admin and user clusters, including deploying and managing workloads
The Google Cloud console provides a web interface for your Google Cloud project, including Google Distributed Cloud. You can perform a subset of administrative tasks, like creating new user clusters, from the Google Cloud console as an alternative to logging into the admin workstation.
The Google Cloud console and related clients, such as Terraform or Google Cloud CLI, use the Anthos On-Prem API to operate on your on-premises clusters.
Cluster administrators and developers use kubectl to access cluster control planes through virtual IP addresses (VIPs) specified as part of cluster configuration. Application users and developers use Service and Ingress VIPs to access and expose workloads, respectively.
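A workload is typically exposed on a Service VIP through a standard Kubernetes Service of type LoadBalancer, which receives an address from the cluster load balancer's address pool. A generic Kubernetes sketch; the app name and ports are illustrative:

```yaml
# Expose a workload on a Service VIP (illustrative names and ports).
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: LoadBalancer    # assigned a VIP from the load balancer's address pool
  selector:
    app: hello-app      # routes traffic to Pods with this label
  ports:
  - port: 80            # port clients connect to on the VIP
    targetPort: 8080    # port the Pods listen on
```

Workloads that share the cluster's Ingress VIP instead define standard Ingress resources routed through the Layer 7 load balancer.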
Connecting to the fleet
All Google Distributed Cloud user clusters (and optionally admin clusters) are members of a fleet: a logical grouping of Kubernetes clusters. Fleets let your organization extend management from individual clusters to entire groups of clusters, and can help your teams adopt best practices similar to those used at Google. You can view and manage fleet clusters together in the Google Cloud console, and fleet-enabled GKE Enterprise features help you manage, govern, and operate your workloads at scale. For a complete list of available fleet features for on-premises environments, see GKE Enterprise deployment options.
Cluster connections to Google Cloud are managed by a Connect Agent, which is deployed as part of the Google Distributed Cloud installation process. You can learn more about how this agent works in the Connect Agent overview.
Fleet membership is also used to manage Google Distributed Cloud pricing, as described in the next section.
For a deeper discussion of GKE Enterprise features and how they work together, see the GKE Enterprise technical overview.
Purchasing Google Distributed Cloud
Enabling the GKE Enterprise platform lets you use all GKE Enterprise features, including Google Distributed Cloud, for a single per-vCPU charge for fleet clusters. You enable the platform by enabling the GKE Enterprise API in your Google Cloud project.
For full pricing information, including pay-as-you-go and subscription options and how to contact sales, see Anthos pricing.
Versions
To learn about Google Distributed Cloud versions, see Version Support Policy.
Installing Google Distributed Cloud
Because Google Distributed Cloud runs in your own infrastructure, it's highly configurable. Once you select the deployment model that meets your organizational and use case needs, you can choose from a range of supported load balancing modes, IP addressing options, security features, connectivity options, and more. Setting up Google Distributed Cloud involves making decisions before and during installation. This documentation set includes guides to help your team decide on the right features and options. To ensure that your installation meets the needs of your organization, consult with your networking and application teams.