This page shows you how to tell Google Kubernetes Engine (GKE) to run your Pods on nodes in specific Google Cloud zones using zonal topology. This type of placement is useful in situations such as the following:
- Pods must access data that's stored in a zonal Compute Engine persistent disk.
- Pods must run alongside other zonal resources such as Cloud SQL instances.
You can also use zonal placement with topology-aware traffic routing to reduce latency between clients and workloads. For details about topology-aware traffic routing, see Topology aware routing.
Using zonal topology to control Pod placement is an advanced Kubernetes mechanism that you should only use if your situation requires that Pods run in specific zones. In most production environments, we recommend that you use regional resources, which is the GKE default, when possible.
Zonal placement methods
Zonal topology is built into Kubernetes with the topology.kubernetes.io/zone: ZONE node label. To tell GKE to place a Pod in a specific zone, use one of the following methods:
- nodeAffinity: Specify a nodeAffinity rule in your Pod specification for one or more Google Cloud zones. This method is more flexible than a nodeSelector because it lets you place Pods in multiple zones.
- nodeSelector: Specify a nodeSelector in your Pod specification for a single Google Cloud zone.
Considerations
Zonal Pod placement using zonal topology has the following considerations:
- The cluster must be in the same Google Cloud region as the requested zones.
- In Standard clusters, you must use node auto-provisioning or create node pools with nodes in the requested zones. Autopilot clusters automatically manage this process for you.
- Standard clusters must be regional clusters.
Pricing
Zonal topology is a Kubernetes scheduling capability and is offered at no extra cost in GKE.
For pricing details, see GKE pricing.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
- Ensure that you have an existing GKE cluster in the same Google Cloud region as the zones in which you want to place your Pods. To create a new cluster, see Create an Autopilot cluster.
Place Pods in multiple zones using nodeAffinity
Kubernetes nodeAffinity provides a flexible scheduling control mechanism that
supports multiple label selectors and logical operators. Use nodeAffinity if you
want to let Pods run in one of a set of zones (for example, in either
us-central1-a
or us-central1-f
).
Save the following manifest as multi-zone-affinity.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-multi-zone
  template:
    metadata:
      labels:
        app: nginx-multi-zone
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-central1-a
                - us-central1-f
```

This manifest creates a Deployment with three replicas and places the Pods in us-central1-a or us-central1-f based on node availability. Ensure that your cluster is in the us-central1 region. If your cluster is in a different region, change the zones in the values field of the manifest to valid zones in your cluster region.

Create the Deployment:

```shell
kubectl create -f multi-zone-affinity.yaml
```
GKE creates the Pods in nodes in one of the specified zones. Multiple Pods might run on the same node. You can optionally use Pod anti-affinity to tell GKE to place each Pod on a separate node.
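As an illustrative sketch of that option, the Deployment's Pod template could add a podAntiAffinity rule such as the following fragment. This is not a complete manifest; it reuses the app: nginx-multi-zone label from the example above and assumes it goes under spec.template.spec.affinity:

```yaml
# Illustrative fragment: asks the scheduler to place each replica on a
# different node by forbidding co-location of Pods that share the
# app: nginx-multi-zone label on the same hostname.
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: nginx-multi-zone
    topologyKey: kubernetes.io/hostname
```

With a required rule like this, replicas that cannot find a free node remain Pending; a preferredDuringSchedulingIgnoredDuringExecution rule is a softer alternative.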
Place Pods in a single zone using a nodeSelector
To place Pods in a single zone, use a nodeSelector in the Pod specification. A
nodeSelector is equivalent to a requiredDuringSchedulingIgnoredDuringExecution
nodeAffinity rule that has a single zone specified.
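To illustrate the equivalence, the following hypothetical Pod spec fragments select the same set of nodes (the zone value is an example):

```yaml
# Using nodeSelector:
nodeSelector:
  topology.kubernetes.io/zone: "us-central1-a"
# The equivalent nodeAffinity rule with a single zone:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-central1-a
```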
Save the following manifest as single-zone-selector.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-singlezone
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-singlezone
  template:
    metadata:
      labels:
        app: nginx-singlezone
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: "us-central1-a"
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

This manifest tells GKE to place all replicas in the Deployment in the us-central1-a zone.

Create the Deployment:

```shell
kubectl create -f single-zone-selector.yaml
```
Verify Pod placement
To verify Pod placement, list the Pods and check the node labels. Multiple Pods might run in a single node, so you might not see Pods spread across multiple zones if you used nodeAffinity.
List your Pods:

```shell
kubectl get pods -o wide
```

The output is a list of running Pods and the corresponding GKE node.

Describe the nodes:

```shell
kubectl describe node NODE_NAME | grep "topology.kubernetes.io/zone"
```

Replace NODE_NAME with the name of the node. The output is similar to the following:

```
topology.kubernetes.io/zone: us-central1-a
```
If you want GKE to spread your Pods evenly across multiple zones for improved failover across multiple failure domains, use topologySpreadConstraints.
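As a sketch of that approach, a Pod template could declare a spread constraint over the zone label like the following fragment (the app: nginx-multi-zone label is reused from the earlier example for illustration; this goes under spec.template.spec):

```yaml
# Illustrative fragment: spread matching replicas evenly across zones,
# allowing at most a difference of 1 Pod (maxSkew) between any two zones.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: nginx-multi-zone
```

Setting whenUnsatisfiable to ScheduleAnyway makes the constraint a soft preference instead of a hard requirement.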
What's next
- Separate GKE workloads from each other
- Keep network traffic in the same topology as the node
- Spread Pods across multiple failure domains