Automatically created firewall rules


This page describes the firewall rules that Google Kubernetes Engine (GKE) creates automatically in Google Cloud.

In addition to the GKE-specific rules listed on this page, Google Cloud projects include pre-populated firewall rules by default. Because GKE clusters are typically deployed within a VPC network, these rules grant essential network access for GKE clusters. Together, these rules are sufficient for basic cluster operation, but you might need to create additional rules depending on your specific needs.

Firewall rules

GKE creates firewall rules automatically when creating the following resources:

  • GKE clusters
  • GKE Services
  • GKE Gateways and HTTPRoutes
  • GKE Ingresses

Unless otherwise specified, the priority for all automatically created firewall rules is 1000, which is the default value for firewall rules. If you want more control over firewall behavior, you can create firewall rules with a higher priority (a lower numeric priority value). Firewall rules with a higher priority take precedence over the automatically created firewall rules.
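
For example, a rule created at priority 900 is evaluated before the priority-1000 rules that GKE creates. The following is a minimal sketch using the google-cloud-compute Python client; the project, network, rule name, and node tag values are hypothetical placeholders.

```python
# Minimal sketch: create an ingress rule at priority 900 so that it takes
# precedence over GKE's automatically created priority-1000 rules.
# Requires: pip install google-cloud-compute
from google.cloud import compute_v1

project_id = "my-project"                   # placeholder project ID
network = "my-vpc"                          # placeholder VPC network name
node_tag = "gke-my-cluster-1a2b3c4d-node"   # hypothetical node tag

rule = compute_v1.Firewall()
rule.name = "deny-kubelet-read-only-port"   # hypothetical rule name
rule.network = f"projects/{project_id}/global/networks/{network}"
rule.direction = "INGRESS"
rule.priority = 900                         # lower number = higher priority
rule.source_ranges = ["0.0.0.0/0"]
rule.target_tags = [node_tag]
rule.denied = [compute_v1.Denied(I_p_protocol="tcp", ports=["10255"])]

operation = compute_v1.FirewallsClient().insert(
    project=project_id, firewall_resource=rule
)
operation.result()  # wait for the operation to complete
```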

GKE cluster firewall rules

GKE creates the following ingress firewall rules when creating a cluster. In each of the following entries, the target defines the destination of the rule.

gke-[cluster-name]-[cluster-hash]-master
  • Purpose: For private Autopilot and Standard clusters only. Permits the control plane to access the kubelet and metrics-server on cluster nodes.
  • Source: Control plane IP address range (/28)
  • Target: Node tag
  • Protocol and ports: TCP: 443 (metrics-server) and TCP: 10250 (kubelet)
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-vms
  • Purpose: Used for intra-cluster communication required by the Kubernetes networking model. Allows software running on nodes to send packets, with sources matching node IP addresses, to destination Pod IP and node IP addresses in the cluster. For example, traffic allowed by this rule includes:
      • Packets sent from system daemons, such as the kubelet, to node and Pod IP address destinations of the cluster.
      • Packets sent from software running in Pods with hostNetwork:true to node and Pod IP address destinations of the cluster.
  • Source: The node IP address range or a superset of this node IP address range:
      • For auto mode VPC networks, GKE uses the 10.128.0.0/9 CIDR because that range includes all current and future subnet primary IPv4 address ranges for the automatically created subnetworks.
      • For custom mode VPC networks, GKE uses the primary IPv4 address range of the cluster's subnet.
    GKE does not update the source IPv4 range of this firewall rule if you expand the primary IPv4 range of the cluster's subnet; in that case, you must create the necessary ingress firewall rule manually (see Required firewall rule for expanded subnet).
  • Target: Node tag
  • Protocol and ports: TCP: 1-65535, UDP: 1-65535, ICMP
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-all
  • Purpose: Permits traffic between all Pods on a cluster, as required by the Kubernetes networking model.
  • Source: Pod CIDR. For clusters with discontiguous multi-Pod CIDR enabled, all Pod CIDR blocks used by the cluster.
  • Target: Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
  • Priority: 1000

gke-[cluster-hash]-ipv6-all
  • Purpose: For dual-stack network clusters only. Permits traffic between nodes and Pods on a cluster.
  • Source: The IP address range allocated in subnetIpv6CidrBlock.
  • Target: Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP for IPv6, ESP, AH
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-inkubelet
  • Purpose: Allows access to port 10255 (the kubelet read-only port) from internal Pod CIDRs and node CIDRs in new GKE clusters running version 1.23.6 or later. Clusters running versions later than 1.26.4-gke.500 use the authenticated kubelet port (10250) instead. Do not add firewall rules that block port 10250 within the cluster.
  • Source: Internal Pod CIDRs and node CIDRs
  • Target: Node tag
  • Protocol and ports: TCP: 10255
  • Priority: 999

gke-[cluster-name]-[cluster-hash]-exkubelet
  • Purpose: Denies public access to port 10255 in new GKE clusters running version 1.23.6 or later.
  • Source: 0.0.0.0/0
  • Target: Node tag
  • Protocol and ports: TCP: 10255
  • Priority: 1000
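
To review which of these rules exist for a particular cluster, you can list the firewall rules in the cluster's project and match the gke-[cluster-name]- name prefix. The following is a minimal sketch with the google-cloud-compute Python client, assuming placeholder project and cluster names (note that the gke-[cluster-hash]-ipv6-all rule does not include the cluster name in its prefix).

```python
# Minimal sketch: list the firewall rules whose names start with the
# gke-[cluster-name]- prefix that GKE uses for cluster rules.
from google.cloud import compute_v1

project_id = "my-project"      # placeholder project ID
cluster_name = "my-cluster"    # placeholder cluster name

client = compute_v1.FirewallsClient()
for rule in client.list(project=project_id):
    if rule.name.startswith(f"gke-{cluster_name}-"):
        allowed = [f"{a.I_p_protocol}:{list(a.ports)}" for a in rule.allowed]
        print(rule.name, rule.priority, list(rule.source_ranges), allowed)
```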

GKE Service firewall rules

GKE creates the following ingress firewall rules when creating a Service:

k8s-fw-[loadbalancer-hash]
  • Purpose: Permits ingress traffic to reach a Service.
  • Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist. (A sketch of the relevant Service manifest fields follows this table.)
  • Target: LoadBalancer virtual IP address
  • Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

k8s-[cluster-id]-node-http-hc
  • Purpose: Permits health checks of an external passthrough Network Load Balancer Service when externalTrafficPolicy is set to Cluster.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: LoadBalancer virtual IP address
  • Protocol and ports: TCP: 10256

k8s-[loadbalancer-hash]-http-hc
  • Purpose: Permits health checks of an external passthrough Network Load Balancer Service when externalTrafficPolicy is set to Local.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: Node tag
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s-[cluster-id]-node-hc
  • Purpose: Permits health checks of an internal passthrough Network Load Balancer Service when externalTrafficPolicy is set to Cluster.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: Node tag
  • Protocol and ports: TCP: 10256

[loadbalancer-hash]-hc
  • Purpose: Permits health checks of an internal passthrough Network Load Balancer Service when externalTrafficPolicy is set to Local.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: Node tag
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]
  • Purpose: Permits ingress traffic to reach a Service when GKE subsetting or a backend service-based external passthrough Network Load Balancer is enabled.
  • Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist.
  • Target: LoadBalancer virtual IP address
  • Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]-fw
  • Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Local and GKE subsetting or a backend service-based external passthrough Network Load Balancer is enabled.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: LoadBalancer virtual IP address
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s2-[cluster-id]-l4-shared-hc-fw
  • Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Cluster and GKE subsetting or a backend service-based external passthrough Network Load Balancer is enabled.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target: Node tag
  • Protocol and ports: TCP: 10256

gke-[cluster-name]-[cluster-hash]-mcsd
  • Purpose: Permits the control plane to access the kubelet and metrics-server on cluster nodes for Multi-cluster Services. This rule has a priority of 900.
  • Source: Health check IP addresses
  • Target: Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
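
Several of these rules are derived from fields in the Service manifest, such as spec.loadBalancerSourceRanges, spec.externalTrafficPolicy, and spec.healthCheckNodePort. The following sketch uses the official Kubernetes Python client to create a LoadBalancer Service that sets those fields; the Service name, selector, and CIDR range are hypothetical.

```python
# Minimal sketch: a LoadBalancer Service whose fields feed the automatically
# created firewall rules (load balancer source ranges and health checks).
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a Pod

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="example-lb"),      # hypothetical name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "example"},                       # hypothetical selector
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        # Becomes the source range of the ingress allow rule; if omitted,
        # the rule defaults to 0.0.0.0/0.
        load_balancer_source_ranges=["203.0.113.0/24"],
        # With "Local", health checks use spec.healthCheckNodePort
        # (auto-assigned here) instead of the shared port 10256.
        external_traffic_policy="Local",
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```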

GKE Gateway firewall rules

GKE creates the following firewall rules when creating Gateway and HTTPRoute resources:

gkegw1-l7-[network]-[region/global] and gkemcg1-l7-[network]-[region/global]
  • Purpose: Permits health checks of a network endpoint group (NEG). The Gateway controller creates this rule when the first Gateway resource is created, and can update it if more Gateway resources are created.
  • Target: Node tag
  • Protocol and ports: TCP: all container target ports (for NEGs)

GKE Ingress firewall rules

GKE creates the following ingress firewall rule when creating an Ingress resource:

k8s-fw-l7-[random-hash]
  • Purpose: Permits health checks of a NodePort Service or network endpoint group (NEG). The Ingress controller creates this rule when the first Ingress resource is created, and can update it if more Ingress resources are created.
  • Source: For GKE v1.17.13-gke.2600 or later: 130.211.0.0/22, 35.191.0.0/16, and user-defined proxy-only subnet ranges (for internal Application Load Balancers)
  • Target: Node tag
  • Protocol and ports: TCP: 30000-32767, TCP: 80 (for internal Application Load Balancers), TCP: all container target ports (for NEGs)

Shared VPC

When a cluster uses a Shared VPC network, the Ingress controller cannot use the GKE service account in the service project to create and update ingress allow firewall rules in the host project. You can grant the GKE service account in the service project permission to create and manage the firewall resources. For more information, see Shared VPC.

Required firewall rule for expanded subnet

If you expand the primary IPv4 range of the cluster's subnet, GKE does not automatically update the source range of the gke-[cluster-name]-[cluster-hash]-vms firewall rule. Because nodes in the cluster can receive IPv4 addresses from the expanded portion of the subnet's primary IPv4 range, you must manually create a firewall rule to allow communication between nodes of the cluster.

The ingress firewall rule that you create must allow TCP and ICMP packets from the expanded primary subnet IPv4 source range, and it must apply at least to all nodes in the cluster.

To create an ingress firewall rule that applies only to the cluster's nodes, set the firewall rule's target to the same target tag used by the cluster's automatically created gke-[cluster-name]-[cluster-hash]-vms firewall rule.
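
As a minimal sketch of such a rule, the following uses the google-cloud-compute Python client to allow TCP and ICMP from the expanded primary range, targeting the same node tag as the gke-[cluster-name]-[cluster-hash]-vms rule. The project, network, range, rule name, and tag values are hypothetical placeholders that you must replace with your own values.

```python
# Minimal sketch: allow traffic from the expanded primary subnet IPv4 range
# to the cluster's nodes, mirroring what the ...-vms rule does for the
# original range.
from google.cloud import compute_v1

project_id = "my-project"                     # placeholder project ID
network = "my-vpc"                            # placeholder VPC network name
expanded_range = "10.64.0.0/14"               # placeholder: expanded primary range
node_tag = "gke-my-cluster-1a2b3c4d-node"     # same tag as the ...-vms rule

rule = compute_v1.Firewall()
rule.name = "allow-expanded-subnet-to-nodes"  # hypothetical rule name
rule.network = f"projects/{project_id}/global/networks/{network}"
rule.direction = "INGRESS"
rule.source_ranges = [expanded_range]
rule.target_tags = [node_tag]
rule.allowed = [
    compute_v1.Allowed(I_p_protocol="tcp", ports=["1-65535"]),
    compute_v1.Allowed(I_p_protocol="icmp"),
    # Optionally add UDP 1-65535 to mirror the automatically created rule.
]

operation = compute_v1.FirewallsClient().insert(
    project=project_id, firewall_resource=rule
)
operation.result()  # wait for the operation to complete
```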

Firewall rule logging

Firewall rule logging is disabled by default. To enable logging for a firewall rule, use the --enable-logging flag when you create or update the rule.
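
Programmatically, the equivalent is to patch the rule's log configuration. The following is a sketch with the google-cloud-compute Python client; it assumes the compute_v1 FirewallLogConfig type and uses a hypothetical rule name.

```python
# Minimal sketch: enable logging on an existing firewall rule by patching
# only its log_config field.
from google.cloud import compute_v1

project_id = "my-project"                     # placeholder project ID
rule_name = "allow-expanded-subnet-to-nodes"  # hypothetical rule name

patch_body = compute_v1.Firewall()
patch_body.log_config = compute_v1.FirewallLogConfig(enable=True)

operation = compute_v1.FirewallsClient().patch(
    project=project_id, firewall=rule_name, firewall_resource=patch_body
)
operation.result()  # wait for the operation to complete
```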

What's next