GKE on Azure architecture
GKE on Azure is a managed service that helps you provision, operate, and scale Kubernetes clusters in your Azure account.
This page is for Admins, architects, and Operators who want to define IT solutions and system architecture in accordance with company strategy and requirements. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Resource management
GKE on Azure uses Azure APIs to provision the resources needed by your cluster, including virtual machines, managed disks, virtual machine scale sets, network security groups, and load balancers.
You can create, describe, and delete clusters with the Google Cloud CLI or GKE Multi-Cloud API.
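For example, a cluster's lifecycle can be managed entirely from the Google Cloud CLI. The following is a minimal sketch: the names, CIDR ranges, and Azure resource IDs are placeholders, and the exact set of required flags depends on your environment and gcloud version.

```sh
# Create a cluster in your Azure virtual network (placeholder values throughout).
gcloud container azure clusters create my-cluster \
    --location us-west1 \
    --azure-region eastus \
    --resource-group-id "/subscriptions/SUBSCRIPTION_ID/resourceGroups/CLUSTER_RG" \
    --vnet-id "/subscriptions/SUBSCRIPTION_ID/resourceGroups/VNET_RG/providers/Microsoft.Network/virtualNetworks/VNET_NAME" \
    --subnet-id "/subscriptions/SUBSCRIPTION_ID/resourceGroups/VNET_RG/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/SUBNET_NAME" \
    --pod-address-cidr-blocks "10.200.0.0/16" \
    --service-address-cidr-blocks "10.32.0.0/24" \
    --cluster-version CLUSTER_VERSION \
    --ssh-public-key "SSH_PUBLIC_KEY"

# Inspect or remove the cluster.
gcloud container azure clusters describe my-cluster --location us-west1
gcloud container azure clusters delete my-cluster --location us-west1
```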
Authenticating to Azure
When you set up GKE on Azure, you create an Azure Active Directory (Azure AD) application and service principal with the required permissions. You also create a client certificate that the GKE Multi-Cloud API uses to authenticate as the application's service principal. For more information about Azure AD and service principals, see Application and service principal objects in Azure Active Directory.
For more information, see Authentication overview.
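As an illustration only, the Azure-side identity objects described above could be created with the Azure CLI along the following lines. The display name, role, and subscription ID are placeholders; the exact permissions that GKE on Azure requires are listed in the Prerequisites, and the client certificate itself is managed through the GKE Multi-Cloud API rather than by these commands.

```sh
# Create an Azure AD application and capture its application (client) ID.
APP_ID=$(az ad app create --display-name "gke-on-azure-api" --query appId --output tsv)

# Create the service principal for that application.
az ad sp create --id "${APP_ID}"

# Grant the service principal a role over the subscription used for the cluster.
# ROLE_NAME and SUBSCRIPTION_ID are placeholders; use the permissions required by GKE on Azure.
az role assignment create \
    --assignee "${APP_ID}" \
    --role "ROLE_NAME" \
    --scope "/subscriptions/SUBSCRIPTION_ID"
```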
Resources on Google Cloud
GKE on Azure uses a Google Cloud project to store cluster configuration information on Google Cloud.
Fleets and Connect
GKE on Azure registers each cluster with a Fleet when it is created. Connect enables access to cluster and workload management features from Google Cloud. A cluster's Fleet membership name is the same as its cluster name.
You can enable features such as Config Management and Cloud Service Mesh within your Fleet.
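Because every cluster is registered with a fleet, you can check its membership from the Google Cloud CLI. A minimal sketch, assuming a cluster named my-cluster and a fleet host project PROJECT_ID (the exact flags can vary with your gcloud version):

```sh
# List all fleet memberships in the fleet host project.
gcloud container fleet memberships list --project PROJECT_ID

# Show details for one membership; the membership name matches the cluster name.
gcloud container fleet memberships describe my-cluster --project PROJECT_ID

# Enable a fleet feature, for example Config Management (may require the gcloud beta component).
gcloud beta container fleet config-management enable --project PROJECT_ID
```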
Cluster architecture
GKE on Azure provisions clusters using private subnets inside your Azure Virtual Network. Each cluster consists of the following components:
- Control plane: The Kubernetes control plane uses a high-availability architecture with three replicas. Each replica runs all Kubernetes components, including `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, and `etcd`. Each `etcd` instance stores data in an Azure Disk volume and uses a network interface to communicate with the other `etcd` instances. A Standard Load Balancer balances traffic to the Kubernetes API endpoint, `kube-apiserver`. You can create a control plane in multiple zones or in a single zone. For more information, see Create a cluster.
- Node pools: A node pool is a group of Kubernetes worker nodes with the same configuration, including instance type, disk configuration, and instance profile. All nodes in a node pool run on the same subnet. For high availability, you can provision multiple node pools across different subnets in the same Azure region (see the sketch after this list).
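The sketch below shows how an additional node pool might be created in a specific subnet with the Google Cloud CLI. All values are placeholders, and the required flags (for example, the SSH key and node version) depend on your environment and gcloud version.

```sh
# Create a node pool on a dedicated subnet of the cluster's virtual network.
gcloud container azure node-pools create pool-b \
    --cluster my-cluster \
    --location us-west1 \
    --subnet-id "/subscriptions/SUBSCRIPTION_ID/resourceGroups/VNET_RG/providers/Microsoft.Network/virtualNetworks/VNET_NAME/subnets/NODE_SUBNET" \
    --vm-size Standard_DS2_v2 \
    --node-version NODE_VERSION \
    --ssh-public-key "SSH_PUBLIC_KEY" \
    --min-nodes 1 \
    --max-nodes 3
```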
What's next
- Complete the Prerequisites.