Google Distributed Cloud overview

Google Distributed Cloud (GDC) is software that brings Google Kubernetes Engine (GKE) to on-premises data centers. Google Distributed Cloud is part of GKE Enterprise: Google's modern application platform, with tools and features that help you manage, govern, and operate containerized workloads at enterprise scale, including in on-premises environments. With Google Distributed Cloud, you can create, manage, and upgrade Kubernetes clusters on your own premises while using Google Cloud features.

Google Distributed Cloud runs on your premises in a vSphere environment. vSphere is VMware's virtualization platform. The two main components of vSphere are ESXi and vCenter Server.

This page provides an overview of how Google Distributed Cloud works, giving you the background you need before going on to a minimal or production installation.

How it works

Google Distributed Cloud extends Google Kubernetes Engine (GKE) to let you create GKE clusters in a vSphere environment on your own premises, and manage them in Google Cloud along with regular Google Kubernetes Engine clusters and clusters in other environments as part of a fleet.

Because Google Distributed Cloud runs in your data center rather than on Google Cloud (where the Kubernetes control plane and network infrastructure are managed by Google Cloud), it requires you to install some admin and control plane software in addition to the GKE software itself. The software that runs in your data center is downloaded as part of the installation and upgrade processes.

The following diagram shows the simplified result of a completed installation.

Diagram of an admin cluster and a user cluster
Google Distributed Cloud architecture with one user cluster

Key components

The following components make up a Google Distributed Cloud installation:

  • A user cluster is where the workloads that implement your applications run, just as in GKE on Google Cloud. Each node in a user cluster is called a worker node.

  • The admin cluster is where the Kubernetes control planes for the admin cluster itself and its associated user clusters run, as well as any add-ons such as Prometheus or Grafana. Updates to user clusters are managed through the admin cluster. A single admin cluster can manage multiple user clusters.

  • The admin workstation is a separate VM that includes the tools cluster creators and developers need to manage Google Distributed Cloud, with appropriate permissions:

    • Running gkectl from the admin workstation lets you create and update clusters and perform other administrative tasks.
    • Running kubectl from the admin workstation lets you interact with your admin and user clusters, including deploying and managing workloads.
  • The Google Cloud console provides a web interface for your Google Cloud project, including Google Distributed Cloud. You can perform a subset of Google Distributed Cloud administrative tasks from the Google Cloud console as an alternative to logging into the admin workstation, including creating new user clusters.

  • Cluster admins and developers using kubectl access the control planes in the admin cluster through virtual IP addresses (VIPs) that you configure as part of setup. Users and developers calling workloads in your user clusters use Service and Ingress VIPs. Each node in the installation also has its own IP address. You can learn more about IP planning for Google Distributed Cloud in Plan your IP addresses.
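For example, the control plane and ingress VIPs for a user cluster are specified in its configuration file. The following fragment is an illustrative sketch: the addresses are placeholders, and the exact schema depends on your Google Distributed Cloud version.

```yaml
# Fragment of a user cluster configuration file (addresses are examples).
loadBalancer:
  vips:
    # VIP for the user cluster's Kubernetes API server.
    controlPlaneVIP: "203.0.113.10"
    # VIP for the cluster's ingress traffic.
    ingressVIP: "203.0.113.11"
```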

Connecting to the fleet

All Google Distributed Cloud user clusters (and optionally admin clusters) are members of a fleet: a logical grouping of Kubernetes clusters. Fleets let your organization uplevel management from individual clusters to entire groups of clusters, and can help your teams adopt similar best practices to those used at Google. You can view and manage fleet clusters together in the Google Cloud console, and use fleet-enabled GKE Enterprise features to help you manage, govern, and operate your workloads at scale. You can see a complete list of available fleet features for on-premises environments in GKE Enterprise deployment options.

Each fleet cluster's connection to Google Cloud is managed by a Connect Agent, which is deployed as part of the Google Distributed Cloud installation process. You can learn more about how this agent works in the Connect Agent overview.
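For example, you can check a cluster's fleet membership and the health of its Connect Agent with commands like the following. The project ID and kubeconfig path are placeholders for your own values:

```shell
# List the fleet memberships registered in your Google Cloud project.
gcloud container fleet memberships list --project=my-project

# The Connect Agent runs in the gke-connect namespace of each
# registered cluster; check that its Pods are healthy.
kubectl get pods -n gke-connect --kubeconfig USER_CLUSTER_KUBECONFIG
```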

Fleet membership is also used to manage Google Distributed Cloud pricing, as described in the next section.

For a deeper discussion of GKE Enterprise features and how they work together, see the GKE Enterprise technical overview.

Purchasing Google Distributed Cloud

Enabling the GKE Enterprise platform lets you use all GKE Enterprise features, including Google Distributed Cloud, for a single per-vCPU charge for fleet clusters. You enable the platform by enabling the GKE Enterprise API in your Google Cloud project.
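As a sketch, assuming a project ID of my-project, enabling the platform might look like the following. Note that enabling the API opts the project into GKE Enterprise billing, so review the pricing information first:

```shell
# Enable the GKE Enterprise (Anthos) API for the project.
gcloud services enable anthos.googleapis.com --project=my-project
```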

For full pricing information, including pay-as-you-go and subscription options and how to contact sales, see Anthos pricing.

Versions

To learn about Google Distributed Cloud versions, see Version history.

Installing Google Distributed Cloud

Because Google Distributed Cloud runs in your own infrastructure, it is highly configurable to meet your particular organizational and use case needs: you can choose from a range of supported load balancing modes, vSphere configurations, IP addressing options, security features, connectivity options, and more. This means that setting up Google Distributed Cloud involves making decisions before and during installation in consultation with your networking, vSphere, and application teams to ensure that your installation meets your needs. This documentation set includes guides to help your team make these decisions.

However, if you just need to see Google Distributed Cloud in action, we also provide a simple installation path for a small test installation where we've made many of these choices for you, letting you get a workload up and running quickly.

In each case, the installation process is as follows:

  1. Plan your installation. Minimally this includes ensuring you can meet the resource and vSphere requirements for Google Distributed Cloud, as well as planning your IP addresses.
  2. Set up your on-premises environment to support Google Distributed Cloud, including setting up vSphere inventory objects and your connection to Google.
  3. Set up Google Cloud resources, including the Google Cloud project you will use when setting up and managing Google Distributed Cloud.
  4. Create an admin workstation with the resources and tools you need to create clusters.
  5. Create an admin cluster to host the Kubernetes control plane for your admin and user clusters and to manage and update user clusters.
  6. Create user clusters to run your actual workloads.
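At the command line, steps 4 through 6 above might look like the following sketch. The configuration file names are placeholders, and exact commands and flags depend on your Google Distributed Cloud version:

```shell
# 4. Create the admin workstation from a machine with the gkeadm tool.
gkeadm create admin-workstation --config admin-ws-config.yaml

# 5. From the admin workstation, validate the configuration, prepare
#    your vSphere environment, and create the admin cluster.
gkectl check-config --config admin-cluster.yaml
gkectl prepare --config admin-cluster.yaml
gkectl create admin --config admin-cluster.yaml

# 6. Create a user cluster, managed through the admin cluster's kubeconfig.
gkectl create cluster --config user-cluster.yaml --kubeconfig kubeconfig
```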

What's next?