GKE On-Prem is hybrid cloud software that brings Google Kubernetes Engine (GKE) to on-premises data centers. With GKE On-Prem, you can create, manage, and upgrade Kubernetes clusters in your on-prem environment and connect them to Google Cloud console.
Learning about Kubernetes
GKE On-Prem and GKE are built on top of Kubernetes, an open-source, extensible platform for managing containerized applications. Kubernetes orchestrates clusters, which are sets of machines (also called "nodes") that run containerized applications.
Getting GKE On-Prem
GKE On-Prem is included as a core component of GKE Enterprise.
Preparing for GKE On-Prem
You install GKE On-Prem on a VMware vSphere 6.5 cluster running on hardware in your on-premises environment. For layer 4 (L4) load balancing, you can use F5 BIG-IP, GKE On-Prem's integrated load balancer, or configure another L4 load balancer.
Preparation includes setting up your Google Cloud project and downloading the necessary command line interface (CLI) tools, including Terraform v0.11. It also includes using Terraform to create an admin workstation virtual machine (VM) in vSphere.
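For instance, the project setup step is typically done with the gcloud CLI. The project ID and the API list below are illustrative assumptions; the installation guide specifies exactly which APIs your version needs.
```
# A minimal sketch of the Google Cloud project setup (project ID is a placeholder).
gcloud auth login
gcloud config set project my-onprem-project

# Enable APIs that GKE On-Prem relies on (illustrative subset).
gcloud services enable \
    container.googleapis.com \
    gkeconnect.googleapis.com \
    cloudresourcemanager.googleapis.com
```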
About the admin workstation
The admin workstation is the vSphere VM from which cluster administrators install and interact with GKE On-Prem.
If you're a cluster admin, you use Terraform to create the admin workstation in vSphere. To create the admin workstation, you download three files (a Terraform sketch follows this list):
- Open Virtual Appliance (OVA) file. This is a versioned VM image that provides a "VM template" for Terraform.
- Terraform configuration file (TF).
- Terraform configuration variables file (TFVARS).
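Here is a minimal sketch of the Terraform step, assuming the downloaded TF and TFVARS files are in the current directory; the variables file name is illustrative.
```
# Initialize Terraform in the directory containing the downloaded files.
terraform init

# Preview, then create, the admin workstation VM in vSphere.
terraform plan -var-file=admin-ws.tfvars
terraform apply -var-file=admin-ws.tfvars
```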
The admin workstation includes the full bundle (described in the next section) along with the tools you run during installation, such as gkectl.
About the bundle
GKE On-Prem's bundle is a versioned TGZ archive that contains all of the components needed to create and upgrade GKE On-Prem clusters.
There are two types of bundles:
- Full bundle: The full bundle, gke-onprem-vsphere-[VERSION]-full.tgz, is included with the admin workstation. You can find it at /var/lib/gke/bundles (see the listing sketch after this list). The full bundle is used for installing GKE On-Prem for the first time. It's a large file that includes:
  - a TAR file with container images of all cluster components deployed to clusters.
  - YAML files of those cluster components.
  - GKE On-Prem's node image.
- Upgrade bundle: The upgrade bundle, gke-onprem-vsphere-[VERSION].tgz, is provided for upgrading clusters. You can also find it in Downloads. It contains only YAML files of the cluster components installed during installation.
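As a quick check, you can list the preinstalled full bundle on the admin workstation; the version string is a placeholder.
```
# On the admin workstation, the full bundle ships at this path.
ls -lh /var/lib/gke/bundles/
# gke-onprem-vsphere-[VERSION]-full.tgz
```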
How installing GKE On-Prem works
Here is a high-level summary of steps taken during an installation (a condensed command sketch follows the list):
- You SSH into your admin workstation.
- You run gkectl create-config to generate a GKE On-Prem configuration file. The configuration file declares a specification for installing GKE On-Prem. You modify it to suit your needs.
- You run gkectl check-config to validate that the modified configuration file can be used for an installation.
- You run gkectl prepare to move GKE On-Prem's OS image to vSphere and mark it as a template for VMs. If you configure a private Docker registry, this command also pushes GKE On-Prem's container images to the registry.
- You run gkectl create cluster --config, passing in the configuration file, to declaratively install GKE On-Prem.
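The same flow, condensed into the commands run on the admin workstation. The configuration file name config.yaml is used here for illustration, and the exact flags can vary between GKE On-Prem versions.
```
# Generate the configuration file (config.yaml in this sketch).
gkectl create-config

# Edit config.yaml for your environment, then validate it.
gkectl check-config --config config.yaml

# Upload the node OS image to vSphere (and push container images
# to a private Docker registry, if you configured one).
gkectl prepare --config config.yaml

# Declaratively create the clusters described in the configuration file.
gkectl create cluster --config config.yaml
```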
At the end of a successful installation, you should have the following in vSphere:
- an admin workstation.
- an admin cluster.
- a user cluster.
Architecture
Figure: GKE On-Prem architecture with one user control plane.
In GKE On-Prem, there are admin clusters and user clusters.
Admin cluster
The admin cluster is the base layer of GKE On-Prem. It runs the following GKE On-Prem components:
- Admin control plane: The admin control plane handles all gkectl and Kubernetes API calls to and from GKE On-Prem, and all calls to vSphere APIs.
- User control planes: A user cluster's control plane, which routes API requests to that cluster's nodes. Each cluster you create has its own control plane that runs in the admin cluster.
- Add-on VMs: VMs that run the admin cluster's add-ons, like Grafana, Prometheus, Istio components, and Stackdriver.
Note that user control planes are managed by the admin cluster. They run on nodes in the admin cluster, not in the user clusters themselves (see the sketch after the following list). To manage user control planes, the admin cluster needs to:
- Manage the machines that run the user cluster control planes.
- Create, update, and delete the control plane components.
- Expose the Kubernetes API server to the user cluster.
- Manage cluster certificates.
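A minimal sketch of how this looks from the admin workstation, assuming kubeconfig files named admin-kubeconfig and user-kubeconfig (names are illustrative): the machines backing a user control plane appear in the admin cluster, not in the user cluster.
```
# Nodes of the admin cluster include the machines that run user control planes.
kubectl --kubeconfig admin-kubeconfig get nodes

# Nodes of the user cluster are only its worker nodes.
kubectl --kubeconfig user-kubeconfig get nodes
```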
User cluster
User clusters are where you deploy and run your containerized workloads and services.
CLI tools
When you install GKE On-Prem, you download the following CLI tools to your local workstation or laptop:
- govc
- terraform
- gkectl
- kubectl (included in the Google Cloud CLI)
- gcloud (the Google Cloud CLI)
govc
govc is the CLI to vSphere. You use govc when you create the admin workstation.
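For example, with the standard GOVC_* environment variables pointing at your vCenter (all values below are placeholders), you can verify connectivity and browse the inventory:
```
# Point govc at your vCenter server.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='example-password'
export GOVC_DATACENTER='my-datacenter'

# Verify connectivity and list VMs in the datacenter.
govc about
govc ls /my-datacenter/vm
```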
terraform
terraform is the CLI to HashiCorp Terraform. You use terraform to create the admin workstation.
gkectl
gkectl is the primary CLI to GKE On-Prem. You use gkectl for many cluster administration tasks (a brief example follows the list), including:
- Cluster creation and management.
- Diagnosing and troubleshooting issues.
- Capturing and exporting cluster logs.
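A brief example (the file names are illustrative, and the diagnose subcommands are an assumption whose availability depends on your GKE On-Prem version):
```
# Validate a modified configuration file before installing or upgrading.
gkectl check-config --config config.yaml

# Diagnose cluster health and capture logs for troubleshooting
# (subcommand availability varies by version).
gkectl diagnose cluster --kubeconfig admin-kubeconfig
gkectl diagnose snapshot --kubeconfig admin-kubeconfig
```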
kubectl
kubectl is the CLI to Kubernetes. You use kubectl to interact with Kubernetes, for tasks including the following (see the example below):
- Deploying, managing, and deleting containerized workloads running in clusters.
- Managing, editing, and deleting Kubernetes resources.
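A brief example, deploying a sample workload to a user cluster (the kubeconfig name is illustrative):
```
# Deploy a containerized workload to a user cluster.
kubectl --kubeconfig user-kubeconfig create deployment hello-web \
    --image=gcr.io/google-samples/hello-app:1.0

# Inspect, then clean up, the resulting Kubernetes resources.
kubectl --kubeconfig user-kubeconfig get deployments
kubectl --kubeconfig user-kubeconfig delete deployment hello-web
```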
gcloud
gcloud is the CLI to Google Cloud. You use the gcloud CLI for several purposes, including the following (see the example below):
- Authenticating against your Google Cloud project.
- Creating service accounts and their private keys.
- Binding Identity and Access Management roles to accounts.
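A brief example (the project ID, service account name, and role are illustrative; the installation guide lists the exact accounts and roles your version requires):
```
# Create a service account and download a private key for it.
gcloud iam service-accounts create connect-register-sa \
    --project=my-onprem-project
gcloud iam service-accounts keys create connect-register-key.json \
    --iam-account=connect-register-sa@my-onprem-project.iam.gserviceaccount.com

# Bind an Identity and Access Management role to the service account.
gcloud projects add-iam-policy-binding my-onprem-project \
    --member="serviceAccount:connect-register-sa@my-onprem-project.iam.gserviceaccount.com" \
    --role="roles/gkehub.connect"
```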
Registering clusters with Google Cloud console
When you create GKE On-Prem user clusters, you can choose to enable Connect to automatically register them with Google Cloud console. Connect enables you to view and sign in to your on-premises and in-cloud Kubernetes clusters from the same Google Cloud user interface.
Enabling Connect creates a Connect Agent in each user cluster. The Connect Agent is a Deployment that establishes a long-lived, encrypted connection from on-premises clusters to Google Cloud.
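One way to confirm the agent after registration (the kubeconfig name is illustrative, and the gke-connect namespace is an assumption that can vary by version):
```
# The Connect Agent runs as a Deployment inside the user cluster.
kubectl --kubeconfig user-kubeconfig get deployments --namespace gke-connect
```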
The Connect Agent's container image is pulled from a Container Registry repository hosted at gcr.io. If your user cluster doesn't or can't have a connection to gcr.io, you need to use a private Docker registry to connect it to Google Cloud console.
Versioning
To learn all about how versioning works, see Versions.