Automatically created firewall rules

This page describes the firewall rules that Google Kubernetes Engine (GKE) creates automatically in Google Cloud.

In addition to the GKE-specific rules listed on this page, default Google Cloud projects include several pre-populated firewall rules.
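If you want to see those pre-populated rules alongside the GKE-specific ones, you can list them with gcloud. A minimal sketch, assuming your project still contains the default network:

    # List the pre-populated rules on the default network, such as
    # default-allow-internal and default-allow-ssh.
    gcloud compute firewall-rules list \
        --filter="network:default"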

Firewall rules

GKE creates firewall rules automatically when creating the following resources:

  • GKE clusters
  • GKE Services
  • GKE Ingresses

The priority of all automatically created firewall rules is 1000, the default value for firewall rules. If you want more control over firewall behavior, you can create firewall rules with a higher priority (a lower priority number). Firewall rules with a higher priority are applied before the automatically created firewall rules.
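For example, the following sketch creates a deny rule at priority 900, which is evaluated before the automatically created rules because 900 is a lower number, and therefore a higher priority, than 1000. The network name, node tag, and source range are all placeholders:

    # Hypothetical rule: block SSH to cluster nodes from one example range.
    # Priority 900 takes precedence over the automatic rules at priority 1000.
    gcloud compute firewall-rules create deny-ssh-example \
        --network=my-network \
        --direction=INGRESS \
        --action=DENY \
        --rules=tcp:22 \
        --priority=900 \
        --source-ranges=203.0.113.0/24 \
        --target-tags=gke-example-cluster-1a2b3c4d-node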

If you expand a subnet used by a cluster, you must manually update the automatically created gke-[cluster-name]-[cluster-hash]-vms firewall rule for that cluster to include the expanded IP address range.
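A sketch of that update with gcloud, assuming the rule is named gke-example-cluster-1a2b3c4d-vms and the subnet was expanded to 10.64.0.0/14 (both values are placeholders):

    # Check the rule's current source ranges.
    gcloud compute firewall-rules describe gke-example-cluster-1a2b3c4d-vms \
        --format="value(sourceRanges)"

    # Overwrite the source ranges with the expanded range. Note that
    # --source-ranges replaces the whole list, so include every range
    # the rule still needs.
    gcloud compute firewall-rules update gke-example-cluster-1a2b3c4d-vms \
        --source-ranges=10.64.0.0/14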

GKE cluster firewall rules

GKE creates the following ingress firewall rules when creating a cluster:

gke-[cluster-name]-[cluster-hash]-master
  Purpose: For private clusters only. Permits the control plane to access the Kubelet and metrics-server on cluster nodes.
  Source: Master CIDR (/28)
  Destination: Node tag
  Protocol and ports: TCP: 443 (metrics-server) and TCP: 10250 (Kubelet)

gke-[cluster-name]-[cluster-hash]-ssh
  Purpose: For public clusters only. Permits the control plane to access the Kubelet and metrics-server on cluster nodes.
  Source: For zonal clusters, the master public IP address, which is the same value as the cluster endpoint. For regional clusters, the master public IP addresses, which are not the same as the cluster endpoints.
  Destination: Node tag
  Protocol and ports: TCP: 22

gke-[cluster-name]-[cluster-hash]-vms
  Purpose: Permits the agents on a node, such as system daemons and the Kubelet, to communicate with Pods on the node, as required by the Kubernetes networking model. Permits Pods in the host network of a node to communicate with all Pods on all nodes without NAT. Permits other VMs in the VPC to communicate with nodes.
  Source: Node CIDR: 10.128.0.0/9 for auto mode networks, or the cluster subnet for custom mode networks.
  Destination: Node tag
  Protocol and ports: TCP: 1-65535, UDP: 1-65535, ICMP

gke-[cluster-name]-[cluster-hash]-all
  Purpose: Permits traffic between all Pods in a cluster, as required by the Kubernetes networking model.
  Source: Pod CIDR. For clusters with discontiguous multi-Pod CIDR enabled, all Pod CIDR blocks used by the cluster.
  Destination: Node tag
  Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
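To confirm which of these rules exist for a particular cluster, you can filter by the name prefix. A sketch, with example-cluster as a placeholder cluster name:

    gcloud compute firewall-rules list \
        --filter="name~^gke-example-cluster" \
        --format="table(name, sourceRanges.list(), targetTags.list())"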

GKE Service firewall rules

GKE creates the following ingress firewall rules when creating a Service:

k8s-fw-[loadbalancer-hash]
  Purpose: Permits ingress traffic to reach a Service.
  Source: Specified in the Service manifest; defaults to 0.0.0.0/0 (any source).
  Destination: Node tag
  Protocol and ports: TCP and UDP on the ports specified in the Service manifest

k8s-[cluster-id]-node-http-hc
  Purpose: Permits health checks of a network load balancer Service when externalTrafficPolicy is set to Cluster.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP: 10256

k8s-[loadbalancer-hash]-http-hc
  Purpose: Permits health checks of a network load balancer Service when externalTrafficPolicy is set to Local.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP on the health check node port (healthCheckNodePort)

k8s-[cluster-id]-node-hc
  Purpose: Permits health checks of an internal TCP/UDP load balancer Service when externalTrafficPolicy is set to Cluster.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP: 10256

[loadbalancer-hash]-hc
  Purpose: Permits health checks of an internal TCP/UDP load balancer Service when externalTrafficPolicy is set to Local.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP on the health check node port (healthCheckNodePort)

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]
  Purpose: Permits ingress traffic to reach a Service when internal load balancer subsetting is enabled.
  Source: Specified in the Service manifest; defaults to 0.0.0.0/0 (any source).
  Destination: Node tag
  Protocol and ports: TCP and UDP on the ports specified in the Service manifest

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]-fw
  Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Local and internal load balancer subsetting is enabled.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP on the health check node port (healthCheckNodePort)

k8s2-[cluster-id]-l4-shared-hc
  Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Cluster and internal load balancer subsetting is enabled.
  Source:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • 209.85.152.0/22
    • 209.85.204.0/22
  Destination: Node tag
  Protocol and ports: TCP: 10256

gke-[cluster-name]-[cluster-hash]-mcsd
  Purpose: Permits the control plane to access the Kubelet and metrics-server on cluster nodes for Multi-cluster Services. This rule has a priority of 900.
  Source: Health check IP addresses
  Destination: Node tag
  Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
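One way to map these rules back to the Services they belong to is the rule description: for k8s-fw- rules, the Service controller records the Service's namespace and name there (an observed implementation detail, not a documented guarantee). A sketch:

    gcloud compute firewall-rules list \
        --filter="name~^k8s" \
        --format="table(name, description, sourceRanges.list())"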

GKE Ingress firewall rules

GKE creates the following ingress firewall rules when creating an Ingress:

k8s-fw-l7-[random-hash]
  Purpose: Permits health checks of a NodePort Service or network endpoint group (NEG). The Ingress controller creates this rule when the first Ingress resource is created, and it can update the rule when more Ingress resources are created.
  Source: For GKE v1.17.13-gke.2600 or later:
    • 130.211.0.0/22
    • 35.191.0.0/16
    • User-defined proxy-only subnet ranges (for internal HTTP(S) load balancers)
  Destination: Node tag
  Protocol and ports: TCP: 30000-32767, TCP: 80 (for internal HTTP(S) load balancers), TCP: all container target ports (for NEGs)
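Because the rule's suffix is a random hash, the simplest way to inspect it is a name-prefix filter. A sketch:

    gcloud compute firewall-rules list \
        --filter="name~^k8s-fw-l7-" \
        --format="table(name, sourceRanges.list(), allowed[].map().firewall_rule().list())"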
