Setting up NodeLocal DNSCache

This page explains how to configure NodeLocal DNSCache on a Google Kubernetes Engine (GKE) cluster. NodeLocal DNSCache improves DNS lookup latency, makes DNS lookup times more consistent, and reduces the number of DNS queries to kube-dns by running a DNS cache on each cluster node.

For an overview of how service discovery and managed DNS works on GKE, see Service discovery and DNS.

Overview

NodeLocal DNSCache is an optional GKE add-on that you can run in addition to kube-dns. NodeLocal DNSCache is implemented as a DaemonSet that runs a DNS cache on each node in your cluster. When a Pod makes a DNS request, the request goes to the DNS cache running on the same node as the Pod. If the cache can't resolve the DNS request, the cache forwards the request to:

  • Cloud DNS for external hostname queries. These queries are forwarded to Cloud DNS by the local metadata server running on the same node as the originating Pod.
  • kube-dns for all other DNS queries. The node-local-dns Pods use the kube-dns-upstream Service to reach kube-dns Pods.

[Diagram: the path of a DNS request, as described in the previous paragraph]

Pods do not need to be modified to use NodeLocal DNSCache. NodeLocal DNSCache consumes compute resources on each node of your cluster.
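The forwarding behavior described above can be sketched as a small helper function. This is a simplified illustration, not GKE code: it assumes the default cluster domain cluster.local and ignores stub domains and upstream nameservers configured in the kube-dns ConfigMap.

```shell
# Simplified sketch of where the node-local cache forwards a cache miss.
# Not actual GKE code; assumes the default cluster domain "cluster.local".
forward_target() {
  case "$1" in
    *.cluster.local|*.cluster.local.|cluster.local)
      echo "kube-dns" ;;      # cluster-internal names go to kube-dns
    *)
      echo "Cloud DNS" ;;     # external hostnames go to Cloud DNS
  esac
}

forward_target kubernetes.default.svc.cluster.local   # prints "kube-dns"
forward_target example.com                            # prints "Cloud DNS"
```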

Benefits of NodeLocal DNSCache

  • Reduced average DNS lookup time
  • Connections from Pods to their local cache don't create conntrack table entries. This prevents dropped and rejected connections caused by conntrack table exhaustion and race conditions.

NodeLocal DNSCache has the following requirements and characteristics:

  • NodeLocal DNSCache is not supported with Windows Server node pools.
  • NodeLocal DNSCache requires GKE version 1.15 or higher.
  • Connections between the local DNS cache and kube-dns use TCP instead of UDP for improved reliability.
  • DNS queries for external URLs (URLs that don't refer to cluster resources) are forwarded directly to the local Cloud DNS metadata server, bypassing kube-dns.
  • The local DNS caches automatically pick up stub domains and upstream nameservers that are specified in the kube-dns ConfigMap.

  • DNS records are cached for:

    • The record's TTL, or 30 seconds if the TTL is more than 30 seconds.
    • 5 seconds if the DNS response is NXDOMAIN.
  • NodeLocal DNSCache Pods listen on ports 53, 9253, and 8080 on the nodes. Running any other hostNetwork Pod that uses these ports, or configuring hostPorts with these ports, causes NodeLocal DNSCache to fail and results in DNS errors.

  • You can use NodeLocal DNSCache with Cloud DNS for GKE.
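The caching durations listed above can be expressed as a tiny helper. This is an illustration only, assuming the rules as stated; the function name is made up:

```shell
# Illustrative sketch of NodeLocal DNSCache's cache durations (not GKE code).
cache_seconds() {
  rcode="$1"   # DNS response code, e.g. NOERROR or NXDOMAIN
  ttl="$2"     # TTL of the record, in seconds
  if [ "$rcode" = "NXDOMAIN" ]; then
    echo 5                    # NXDOMAIN responses are cached for 5 seconds
  elif [ "$ttl" -gt 30 ]; then
    echo 30                   # TTLs longer than 30 seconds are capped at 30
  else
    echo "$ttl"               # shorter TTLs are honored as-is
  fi
}

cache_seconds NOERROR 300    # prints "30"
cache_seconds NOERROR 10     # prints "10"
cache_seconds NXDOMAIN 3600  # prints "5"
```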

Enabling NodeLocal DNSCache

You can enable NodeLocal DNSCache in an existing cluster or when creating a new cluster. Enabling NodeLocal DNSCache in an existing cluster is a disruptive process: all cluster nodes running GKE 1.15 or higher are recreated. Nodes are recreated following the GKE node upgrade process.


Enabling NodeLocal DNSCache in a new cluster

To enable NodeLocal DNSCache in a new cluster, use the --addons NodeLocalDNS flag:

gcloud container clusters create cluster-name \
  --zone compute-zone \
  --cluster-version cluster-version \
  --addons NodeLocalDNS

Replace the following:

  • cluster-name: the name of your new cluster.
  • compute-zone: the zone for your cluster.
  • cluster-version: the version for your cluster (1.15 or higher).

Enabling NodeLocal DNSCache in an existing cluster

To enable NodeLocal DNSCache in an existing cluster, use the --update-addons=NodeLocalDNS=ENABLED flag:

gcloud container clusters update cluster-name \
  --update-addons=NodeLocalDNS=ENABLED

Replace cluster-name with the name of your cluster.


You can use the Google Cloud Console to enable NodeLocal DNSCache when creating a new cluster.

  1. Go to the Google Kubernetes Engine page in the Cloud Console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. For Name, enter cluster-name.

  4. For Zone, select us-central1-a.

  5. For Number of nodes, enter 1.

  6. From the navigation pane, under Cluster, click Networking.

  7. Under Advanced networking options, select the Enable NodeLocal DNSCache checkbox.

  8. Click Create.

Verifying that NodeLocal DNSCache is enabled

You can verify that NodeLocal DNSCache is running by listing the node-local-dns Pods. There should be a node-local-dns Pod running on each node running GKE version 1.15 or higher.

kubectl get pods -n kube-system -o wide | grep node-local-dns

Disabling NodeLocal DNSCache

You can disable NodeLocal DNSCache by using the --update-addons=NodeLocalDNS=DISABLED flag:

gcloud container clusters update cluster-name \
  --update-addons=NodeLocalDNS=DISABLED

Troubleshooting NodeLocal DNSCache

See Debugging DNS Resolution for general information about diagnosing Kubernetes DNS issues.

Validating Pod configuration

To verify that a Pod is using NodeLocal DNSCache, check /etc/resolv.conf on the Pod to see if the Pod is configured to use the correct nameserver:

kubectl exec -it pod-name -- cat /etc/resolv.conf | grep nameserver

The nameserver IP should match the IP address output by:

kubectl get svc -n kube-system kube-dns -o jsonpath="{.spec.clusterIP}"

If the nameserver IP address configured in /etc/resolv.conf doesn't match, modify the Pod's DNS configuration to use the correct nameserver IP address.

Network policy with NodeLocal DNSCache

When using NetworkPolicy with the NodeLocalDNS add-on, additional rules are needed to permit node-local-dns Pods to send and receive DNS queries. Use an ipBlock rule in your NetworkPolicy to allow communication between node-local-dns Pods and kube-dns:

spec:
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
    to:
    - ipBlock:
        cidr: kube-dns-cluster-ip/32
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Replace kube-dns-cluster-ip with the IP address of the kube-dns Service, obtained using:

kubectl get svc -n kube-system kube-dns -o jsonpath="{.spec.clusterIP}"

This example uses an ipBlock rule because node-local-dns Pods run with hostNetwork: true. A podSelector with matchLabels would not match these Pods.

Known issues

DNS timeout in ClusterFirstWithHostNet dnsPolicy when using NodeLocal DNSCache and GKE Dataplane V2

On clusters that use GKE Dataplane V2 and NodeLocal DNSCache, Pods with hostNetwork set to true and dnsPolicy set to ClusterFirstWithHostNet cannot reach cluster DNS backends. DNS logs might contain entries similar to the following:

connection timed out; no servers could be reached

The output indicates that the DNS requests cannot reach the backend servers.

A workaround is to set the dnsPolicy and dnsConfig fields for hostNetwork Pods:

 dnsPolicy: "None"
 dnsConfig:
   nameservers:
   - KUBE_DNS_UPSTREAM
   searches:
   - cluster.local
   - svc.cluster.local
   - NAMESPACE.svc.cluster.local
   - c.PROJECT_ID.internal
   - google.internal
   options:
   - name: ndots
     value: "5"

Replace the following:

  • NAMESPACE: the namespace of the hostNetwork pod.
  • PROJECT_ID: the ID of your Google Cloud project.
  • KUBE_DNS_UPSTREAM: the ClusterIP of the upstream kube-dns service. You can get this value using the following command:

    kubectl get svc -n kube-system kube-dns-upstream -o jsonpath="{.spec.clusterIP}"

DNS requests from the Pod can now reach kube-dns and bypass NodeLocal DNSCache.
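For reference, a complete hostNetwork Pod manifest with this workaround applied might look like the following. This is a hypothetical example: the Pod name, container name, and image are placeholders, and NAMESPACE, PROJECT_ID, and KUBE_DNS_UPSTREAM are replaced as described above.

```yaml
# Hypothetical example of a hostNetwork Pod using the workaround.
# The Pod name, container, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  hostNetwork: true
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - KUBE_DNS_UPSTREAM
    searches:
    - cluster.local
    - svc.cluster.local
    - NAMESPACE.svc.cluster.local
    - c.PROJECT_ID.internal
    - google.internal
    options:
    - name: ndots
      value: "5"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```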

NodeLocal DNSCache timeout errors

On clusters with NodeLocal DNSCache enabled, the logs might contain entries similar to the following:

[ERROR] plugin/errors: 2 <hostname> A: read tcp <node IP: port>-><kubedns IP>:53: i/o timeout

The output includes the IP address of the kube-dns-upstream ClusterIP Service. In this example, the response to a DNS request was not received from kube-dns within 2 seconds. This could be due to one of the following reasons:

  • An underlying network connectivity problem.
  • A known issue with dnsmasq handling TCP connections.

node-local-dns Pods access kube-dns using TCP for improved reliability. When handling connections from multiple source IPs, dnsmasq prioritizes existing connections over new ones. As a result, on a cluster with a high rate of DNS queries per second (QPS), node-local-dns Pods on newly created nodes can experience higher DNS latency. This can also occur on clusters with cluster autoscaler enabled, because the autoscaler dynamically changes the number of nodes.

This issue is resolved in the following GKE versions:

  • 1.19.7-gke.1500 and later
  • 1.18.16-gke.1200 and later
  • 1.17.17-gke.5400 and later

One workaround is to increase the number of kube-dns replicas by tuning the autoscaling parameters.

Scaling up kube-dns

You can use a lower value for nodesPerReplica to ensure that more kube-dns Pods are created as cluster nodes scale up. We highly recommend setting an explicit max value to ensure that the GKE control plane virtual machine (VM) is not overwhelmed by a large number of kube-dns Pods watching the Kubernetes API.

You can set max to the number of nodes in the cluster. If the cluster has more than 500 nodes, set max to 500.

For Standard clusters, you can modify the number of kube-dns replicas by editing the kube-dns-autoscaler ConfigMap. This configuration is not supported in Autopilot clusters.

kubectl edit configmap kube-dns-autoscaler --namespace=kube-system

The output is similar to the following:

linear: '{"coresPerReplica":256, "nodesPerReplica":16,"preventSinglePointFailure":true}'

The number of kube-dns replicas is calculated using the following formula:

replicas = min( max( ceil( cores / coresPerReplica ), ceil( nodes / nodesPerReplica ) ), max )

To scale up, change nodesPerReplica to a smaller value and include a max value.

linear: '{"coresPerReplica":256, "nodesPerReplica":8,"max": 15,"preventSinglePointFailure":true}'

This configuration creates one kube-dns Pod for every 8 nodes in the cluster. A 24-node cluster has 3 replicas and a 40-node cluster has 5. If the cluster grows beyond 120 nodes, the number of kube-dns replicas does not grow beyond the max value of 15.
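The calculation above can be sketched as a quick script. The parameter values (nodesPerReplica=8, coresPerReplica=256, max=15) are taken from the example configuration; the helper name is illustrative.

```shell
# Sketch of the kube-dns-autoscaler "linear" mode calculation, using the
# example parameters: nodesPerReplica=8, coresPerReplica=256, max=15.
replicas_for() {
  nodes="$1"; cores="$2"
  by_nodes=$(( (nodes + 7) / 8 ))       # ceil(nodes / nodesPerReplica)
  by_cores=$(( (cores + 255) / 256 ))   # ceil(cores / coresPerReplica)
  r=$by_nodes
  if [ "$by_cores" -gt "$r" ]; then r=$by_cores; fi   # take the larger value
  if [ "$r" -gt 15 ]; then r=15; fi                   # cap at max
  echo "$r"
}

replicas_for 24 96     # 24 nodes, 96 cores -> prints "3"
replicas_for 40 160    # 40 nodes, 160 cores -> prints "5"
replicas_for 200 800   # capped at max -> prints "15"
```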

What's next