GKE on Azure architecture

GKE on Azure is a managed service that helps you provision, operate, and scale Kubernetes clusters in your Azure account.

Diagram: Architecture of GKE on Azure, showing the Google Cloud service and clusters that contain a control plane and node pools.

Resource management

GKE on Azure uses Azure APIs to provision the resources that your cluster needs, including virtual machines, managed disks, virtual machine scale sets, network security groups, and load balancers.

You can create, describe, and delete clusters with the Google Cloud CLI or the GKE Multi-Cloud API.
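For example, the following gcloud commands list, inspect, and delete a cluster. PROJECT_ID, GOOGLE_CLOUD_LOCATION, and CLUSTER_NAME are placeholders, and the exact flags that are available depend on your gcloud CLI version.

    # List the GKE on Azure clusters managed from a Google Cloud location.
    gcloud container azure clusters list \
        --project=PROJECT_ID \
        --location=GOOGLE_CLOUD_LOCATION

    # Show one cluster's configuration, including its control plane version,
    # Azure region, and networking settings.
    gcloud container azure clusters describe CLUSTER_NAME \
        --location=GOOGLE_CLOUD_LOCATION

    # Delete the cluster. The GKE Multi-Cloud API then removes the Azure
    # resources that it provisioned for the cluster.
    gcloud container azure clusters delete CLUSTER_NAME \
        --location=GOOGLE_CLOUD_LOCATION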

Authenticating to Azure

When you set up GKE on Azure, you create an Azure Active Directory (Azure AD) application and service principal with the required permissions. You also create a client certificate that the GKE Multi-Cloud API uses to authenticate as the application's service principal.
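As a sketch of this setup flow, the following gcloud commands create an AzureClient resource that represents your Azure AD application and retrieve its client certificate so that you can upload the certificate to the application. CLIENT_NAME, TENANT_ID, and APPLICATION_ID are placeholders, and the flags shown may differ between gcloud CLI versions.

    # Create an AzureClient resource for the Azure AD application and tenant.
    gcloud container azure clients create CLIENT_NAME \
        --location=GOOGLE_CLOUD_LOCATION \
        --tenant-id=TENANT_ID \
        --application-id=APPLICATION_ID

    # Retrieve the client certificate. Upload it to the Azure AD application
    # so that the GKE Multi-Cloud API can authenticate as the application's
    # service principal.
    gcloud container azure clients get-public-cert CLIENT_NAME \
        --location=GOOGLE_CLOUD_LOCATION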

For more information about Azure AD and service principals, see Application and service principal objects in Azure Active Directory.

For more information, see Authentication overview.

Resources on Google Cloud

GKE on Azure stores cluster configuration information in a Google Cloud project.

Fleets and Connect

GKE on Azure registers each cluster with a Fleet when the cluster is created. Connect enables access to cluster and workload management features from Google Cloud. A cluster's Fleet membership name is the same as its cluster name.
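For example, you can view the Fleet membership that GKE on Azure creates for a cluster with the following gcloud commands; PROJECT_ID and CLUSTER_NAME are placeholders.

    # List the Fleet memberships in the project that hosts the Fleet.
    gcloud container fleet memberships list --project=PROJECT_ID

    # Describe a single membership. The membership name matches the
    # cluster name.
    gcloud container fleet memberships describe CLUSTER_NAME \
        --project=PROJECT_ID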

You can enable features such as Config Management and Anthos Service Mesh within your Fleet.
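For example, the following gcloud commands enable these features at the Fleet level. PROJECT_ID is a placeholder, and each feature requires additional configuration that is described in its own documentation.

    # Enable Config Management (Config Sync and Policy Controller) for the Fleet.
    gcloud container fleet config-management enable --project=PROJECT_ID

    # Enable Anthos Service Mesh for the Fleet.
    gcloud container fleet mesh enable --project=PROJECT_ID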

Cluster architecture

GKE on Azure provisions clusters using private subnets inside your Azure Virtual Network. Each cluster consists of the following components:

  • Control plane: The Kubernetes control plane uses a high-availability architecture with three replicas. Each replica runs all Kubernetes components, including kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. Each etcd instance stores its data in an Azure Disk volume and uses a network interface to communicate with the other etcd instances. An Azure Standard Load Balancer distributes traffic to the Kubernetes API endpoint, kube-apiserver.

    You can create a control plane in multiple zones, or in a single zone. For more information, see Create a cluster.

  • Node pools: A node pool is a group of Kubernetes worker nodes with the same configuration, including instance type, disk configuration, and instance profile. All nodes in a node pool run on the same subnet. For high availability, you can provision multiple node pools across different subnets in the same Azure region.
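As an illustration of how node pools are configured, the following sketch creates a node pool in an existing cluster. All values are placeholders, and flag names may vary by gcloud CLI version.

    # Create a node pool of identically configured worker nodes in an
    # existing subnet of the cluster's virtual network.
    gcloud container azure node-pools create NODE_POOL_NAME \
        --cluster=CLUSTER_NAME \
        --location=GOOGLE_CLOUD_LOCATION \
        --subnet-id=SUBNET_ID \
        --vm-size=VM_SIZE \
        --min-nodes=1 \
        --max-nodes=3 \
        --ssh-public-key="SSH_PUBLIC_KEY" \
        --node-version=NODE_VERSION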

What's next