Automatically created firewall rules


This page describes the firewall rules that Google Kubernetes Engine (GKE) creates automatically in Google Cloud.

In addition to the GKE-specific rules listed on this page, default Google Cloud projects include several pre-populated firewall rules.

Firewall rules

GKE creates firewall rules automatically when creating the following resources:

  • GKE clusters
  • GKE Services
  • GKE Ingresses

The priority of all automatically created firewall rules is 1000, which is the default value for firewall rules. If you want more control over firewall behavior, you can create firewall rules with a higher priority (a lower priority number). Firewall rules with a higher priority are applied before automatically created firewall rules.
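As a sketch, a manually created rule at priority 900 is evaluated before the priority-1000 rules that GKE creates. The rule name, network, and source range below are placeholders:

```shell
# Hypothetical example: deny SSH to nodes at priority 900. A lower priority
# number means higher precedence, so this rule is evaluated before GKE's
# automatically created rules at priority 1000.
# NETWORK and the rule name are placeholders for your own values.
gcloud compute firewall-rules create deny-node-ssh \
    --network=NETWORK \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 \
    --priority=900
```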

GKE cluster firewall rules

GKE creates the following ingress firewall rules when creating a cluster:

Name: gke-[cluster-name]-[cluster-hash]-master
Purpose: For private clusters only. Permits the control plane to access the kubelet and metrics-server on cluster nodes.
Source: Control plane IP address range (/28)
Target (defines the destination): Node tag
Protocol and ports: TCP: 443 (metrics-server) and TCP: 10250 (kubelet)

Name: gke-[cluster-name]-[cluster-hash]-ssh
Purpose: For public clusters only. Permits the control plane to access the kubelet and metrics-server on cluster nodes.
Source: For zonal clusters, the control plane public IP address. This is the same value as the cluster endpoint. For regional clusters, the control plane public IP addresses. These values are not the same as the cluster endpoints.
Target (defines the destination): Node tag
Protocol and ports: TCP: 22

Name: gke-[cluster-name]-[cluster-hash]-vms
Purpose: Used for intra-cluster communication required by the Kubernetes networking model. Allows software running on nodes to send packets, with sources matching node IP addresses, to destination Pod IP and node IP addresses in the cluster. For example, traffic allowed by this rule includes:
  • Packets sent from system daemons, such as the kubelet, to node and Pod IP address destinations of the cluster.
  • Packets sent from software running in Pods with hostNetwork: true to node and Pod IP address destinations of the cluster.
Source: The node IP address range or a superset of this node IP address range:
  • For auto mode VPC networks, GKE uses the 10.128.0.0/9 CIDR because that range includes all current and future subnet primary IPv4 address ranges for the automatically created subnetworks.
  • For custom mode VPC networks, GKE uses the primary IPv4 address range of the cluster's subnet.
  GKE does not update the source IPv4 range of this firewall rule if you expand the primary IPv4 range of the cluster's subnet. You must manually create the necessary ingress firewall rule if you expand the primary IPv4 range of the cluster's subnet.
Target (defines the destination): Node tag
Protocol and ports: TCP: 1-65535, UDP: 1-65535, ICMP

Name: gke-[cluster-name]-[cluster-hash]-all
Purpose: Permits traffic between all Pods on a cluster, as required by the Kubernetes networking model.
Source: Pod CIDR. For clusters with discontiguous multi-Pod CIDR enabled, all Pod CIDR blocks used by the cluster.
Target (defines the destination): Node tag
Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH

Name: gke-[cluster-hash]-ipv6-all
Purpose: For dual-stack network clusters only. Permits traffic between nodes and Pods on a cluster.
Source: Same IP address range as allocated in subnetIpv6CidrBlock.
Target (defines the destination): Node tag
Protocol and ports: TCP, UDP, SCTP, ICMP for IPv6, ESP, AH
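To inspect the rules that GKE created for a particular cluster, you can filter on the gke- name prefix. This is a sketch; CLUSTER_NAME is a placeholder for your cluster's name:

```shell
# List automatically created firewall rules for a cluster by matching the
# gke-[cluster-name] prefix, showing source ranges and node target tags.
gcloud compute firewall-rules list \
    --filter="name~^gke-CLUSTER_NAME" \
    --format="table(name,direction,sourceRanges.list(),targetTags.list())"
```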

GKE Service firewall rules

GKE creates the following ingress firewall rules when creating a Service:

Name: k8s-fw-[loadbalancer-hash]
Purpose: Permits ingress traffic to reach a Service.
Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist.
Target (defines the destination): Node tag
Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

Name: k8s-[cluster-id]-node-http-hc
Purpose: Permits health checks of a network load balancer Service when externalTrafficPolicy is set to Cluster.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP: 10256

Name: k8s-[loadbalancer-hash]-http-hc
Purpose: Permits health checks of a network load balancer Service when externalTrafficPolicy is set to Local.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port number 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

Name: k8s-[cluster-id]-node-hc
Purpose: Permits health checks of an internal TCP/UDP load balancer Service when externalTrafficPolicy is set to Cluster.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP: 10256

Name: [loadbalancer-hash]-hc
Purpose: Permits health checks of an internal TCP/UDP load balancer Service when externalTrafficPolicy is set to Local.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port number 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

Name: k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]
Purpose: Permits ingress traffic to reach a Service when one of the following is enabled:
  • GKE subsetting.
  • Backend service-based external network load balancer.
Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist.
Target (defines the destination): Node tag
Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

Name: k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]-fw
Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Local and any of the following are enabled:
  • GKE subsetting.
  • Backend service-based external network load balancer.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port number 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

Name: k8s2-[cluster-id]-l4-shared-hc-fw
Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Cluster and any of the following are enabled:
  • GKE subsetting.
  • Backend service-based external network load balancer.
Source:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • 209.85.152.0/22
  • 209.85.204.0/22
Target (defines the destination): Node tag
Protocol and ports: TCP: 10256

Name: gke-[cluster-name]-[cluster-hash]-mcsd
Purpose: Permits the control plane to access the kubelet and metrics-server on cluster nodes for Multi-cluster Services. This rule has a priority of 900.
Source: Health check IP addresses
Target (defines the destination): Node tag
Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
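The Service fields referenced in the rules above are set in the Service manifest. The following hypothetical manifest (the Service name, selector, and allowlist range are placeholders) shows spec.loadBalancerSourceRanges, which scopes the source of the ingress allow rule, and externalTrafficPolicy: Local, which makes health checks target the port in spec.healthCheckNodePort:

```shell
# Hypothetical Service manifest. loadBalancerSourceRanges restricts the
# source of the automatically created ingress allow rule (instead of the
# 0.0.0.0/0 default), and externalTrafficPolicy: Local switches health
# checks to the node port in spec.healthCheckNodePort.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder allowlist range
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
EOF
```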

GKE Ingress firewall rules

GKE creates the following ingress firewall rule when creating an Ingress:

Name: k8s-fw-l7-[random-hash]
Purpose: Permits health checks of a NodePort Service or network endpoint group (NEG). The Ingress controller creates this rule when the first Ingress resource is created. The Ingress controller can update this rule if more Ingress resources are created.
Source: For GKE v1.17.13-gke.2600 or later:
  • 130.211.0.0/22
  • 35.191.0.0/16
  • User-defined proxy-only subnet ranges (for internal HTTP(S) load balancers)
Target (defines the destination): Node tag
Protocol and ports: TCP: 30000-32767, TCP: 80 (for internal HTTP(S) load balancers), TCP: all container target ports (for NEGs)

Shared VPC

When a cluster uses a Shared VPC network, the Ingress controller cannot use the GKE service account in the service project to create and update ingress allow firewall rules in the host project. You can grant the GKE service account in a service project permissions to create and manage the firewall resources. For more information, see Shared VPC.
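One way to grant these permissions (a sketch; HOST_PROJECT_ID and SERVICE_PROJECT_NUMBER are placeholders) is to give the service project's GKE service account a firewall-management role, such as Compute Security Admin, on the host project:

```shell
# Grant the service project's GKE service account permission to manage
# firewall rules in the Shared VPC host project. HOST_PROJECT_ID and
# SERVICE_PROJECT_NUMBER are placeholders for your own values.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
    --role="roles/compute.securityAdmin"
```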

Required firewall rule for expanded subnet

If you expand the primary IPv4 range of the cluster's subnet, GKE does not automatically update the source range of the gke-[cluster-name]-[cluster-hash]-vms firewall rule. Because nodes in the cluster can receive IPv4 addresses from the expanded portion of the subnet's primary IPv4 range, you must manually create a firewall rule to allow communication between nodes of the cluster.

The ingress firewall rule that you create must allow TCP and ICMP packets from the expanded primary subnet IPv4 source range, and it must apply to at least all nodes in the cluster.

To create an ingress firewall rule that applies only to the cluster's nodes, set the firewall rule's target to the same target tag used by the cluster's automatically created gke-[cluster-name]-[cluster-hash]-vms firewall rule.
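The steps above can be sketched as follows; the rule name, network, expanded range, and node target tag are placeholders for your own values:

```shell
# Hypothetical example: allow TCP and ICMP traffic from an expanded subnet
# primary range to the cluster's nodes. Set NODE_TARGET_TAG to the target
# tag used by the automatically created
# gke-[cluster-name]-[cluster-hash]-vms rule.
gcloud compute firewall-rules create allow-expanded-subnet-nodes \
    --network=NETWORK \
    --direction=INGRESS \
    --allow=tcp,icmp \
    --source-ranges=EXPANDED_PRIMARY_RANGE \
    --target-tags=NODE_TARGET_TAG
```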

What's next