Setting up a custom kube-dns Deployment


This page explains how to run a custom Deployment of kube-dns for Google Kubernetes Engine (GKE) Standard mode clusters. This page doesn't apply to GKE Autopilot, where Google manages kube-dns.

Overview

To configure kube-dns CPU, memory, and other parameters, you must run a custom Deployment and disable the Deployment that GKE provides. You can also use the instructions on this page to run a custom Deployment of CoreDNS or any other DNS provider that follows the Kubernetes DNS specification.
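
To see the default settings that you are replacing, you can first inspect the kube-dns Deployment that GKE manages:

kubectl get deployment kube-dns --namespace=kube-system --output=yaml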

To learn more about how GKE implements service discovery, see Service discovery and DNS.

Creating a custom Deployment

  1. Create a Deployment manifest for kube-dns, CoreDNS, or another DNS provider.

    The following sample kube-dns manifest passes the -q flag to dnsmasq to log the results of queries. Save the manifest as custom-kube-dns.yaml.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: DNS_DEPLOYMENT_NAME
      namespace: kube-system
      annotations:
        deployment.kubernetes.io/revision: "1"
      labels:
        k8s-app: kube-dns
    spec:
      selector:
        matchLabels:
          k8s-app: kube-dns
      strategy:
        rollingUpdate:
          maxSurge: 10%
          maxUnavailable: 0
        type: RollingUpdate
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          containers:
          - name: kubedns
            image: registry.k8s.io/dns/k8s-dns-kube-dns:1.17.3
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            livenessProbe:
              httpGet:
                path: /healthcheck/kubedns
                port: 10054
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /readiness
                port: 8081
                scheme: HTTP
              initialDelaySeconds: 3
              timeoutSeconds: 5
            args:
            - --domain=cluster.local.
            - --dns-port=10053
            - --config-dir=/kube-dns-config
            - --v=2
            env:
            - name: PROMETHEUS_PORT
              value: "10055"
            ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
            - containerPort: 10055
              name: metrics
              protocol: TCP
            volumeMounts:
            - name: kube-dns-config
              mountPath: /kube-dns-config
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 1001
          - name: dnsmasq
            image: registry.k8s.io/dns/k8s-dns-dnsmasq-nanny:1.17.3
            livenessProbe:
              httpGet:
                path: /healthcheck/dnsmasq
                port: 10054
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            args:
            - -v=2
            - -logtostderr
            - -configDir=/etc/k8s/dns/dnsmasq-nanny
            - -restartDnsmasq=true
            - --
            - -k
            - -q
            - --cache-size=1000
            - --no-negcache
            - --dns-forward-max=1500
            - --log-facility=-
            - --server=/cluster.local/127.0.0.1#10053
            - --server=/in-addr.arpa/127.0.0.1#10053
            - --server=/ip6.arpa/127.0.0.1#10053
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            resources:
              requests:
                cpu: 150m
                memory: 20Mi
            volumeMounts:
            - name: kube-dns-config
              mountPath: /etc/k8s/dns/dnsmasq-nanny
            securityContext:
              capabilities:
                drop:
                - all
                add:
                - NET_BIND_SERVICE
                - SETGID
          - name: sidecar
            image: registry.k8s.io/dns/k8s-dns-sidecar:1.17.3
            livenessProbe:
              httpGet:
                path: /metrics
                port: 10054
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            args:
            - --v=2
            - --logtostderr
            - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
            - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
            ports:
            - containerPort: 10054
              name: metrics
              protocol: TCP
            resources:
              requests:
                memory: 20Mi
                cpu: 10m
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsUser: 1001
              runAsGroup: 1001
          dnsPolicy: Default
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccountName: kube-dns
          terminationGracePeriodSeconds: 30
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          volumes:
          - configMap:
              defaultMode: 420
              name: kube-dns
              optional: true
            name: kube-dns-config
    

    Replace DNS_DEPLOYMENT_NAME with a name for the custom kube-dns Deployment, such as custom-kube-dns.
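
    The resource settings in this manifest match the kube-dns defaults. Because the point of a custom Deployment is to tune such parameters, you can adjust the resources block of the kubedns container before you apply the manifest. The following values are illustrative only, not a recommendation:

    resources:
      limits:
        memory: 250Mi
      requests:
        cpu: 200m
        memory: 100Mi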

  2. Apply the manifest to the cluster:

    kubectl create -f custom-kube-dns.yaml
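
    To validate the manifest without creating anything, you can optionally do a client-side dry run first:

    kubectl create --dry-run=client -f custom-kube-dns.yaml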
    
  3. Verify that the Pods are running:

    kubectl get pods -n kube-system -l=k8s-app=kube-dns
    

    The output is similar to the following, showing the custom kube-dns Pods:

    NAME                               READY   STATUS    RESTARTS   AGE
    custom-kube-dns-5685645b44-kzs8w   3/3     Running   0          22h
    

    The new Deployment has the same selector as kube-dns, so Pods in the cluster continue to use the existing kube-dns Service IP address to reach the Pods that run the custom kube-dns Deployment.
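
    To confirm that DNS resolution works through the kube-dns Service, you can run a throwaway Pod and look up a cluster-internal name. busybox is used here only as a convenient test image:

    kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local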

    After this step completes, you must follow the steps in the next section to scale down kube-dns.

Scaling down kube-dns

Disable the kube-dns Deployment that GKE manages by scaling both the kube-dns-autoscaler and kube-dns Deployments to zero. Scale down the autoscaler first so that it doesn't restore the kube-dns replicas:

kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
kubectl scale deployment --replicas=0 kube-dns --namespace=kube-system
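
To verify that both managed Deployments are scaled down, check that the READY column shows 0/0 for each:

kubectl get deployment kube-dns kube-dns-autoscaler --namespace=kube-system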

Creating a custom autoscaler

If your custom DNS provider requires autoscaling, you must configure and deploy a separate autoscaler, because the kube-dns-autoscaler that GKE provides only scales the default kube-dns Deployment on the cluster. You also need a custom ClusterRole that grants the autoscaler permission to modify the scale of your custom kube-dns Deployment.

  1. Create a manifest containing the ClusterRoleBinding, ClusterRole, and Deployment for the autoscaler, and save it as custom-dns-autoscaler.yaml:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:custom-dns-autoscaler
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:custom-dns-autoscaler
    subjects:
    - kind: ServiceAccount
      name: kube-dns-autoscaler
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:custom-dns-autoscaler
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    - apiGroups:
      - apps
      resourceNames:
      - DNS_DEPLOYMENT_NAME
      resources:
      - deployments/scale
      verbs:
      - get
      - update
    - apiGroups:
      - ""
      resources:
      - configmaps
      verbs:
      - get
      - create
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: custom-dns-autoscaler
      namespace: kube-system
      labels:
        k8s-app: custom-dns-autoscaler
    spec:
      selector:
        matchLabels:
          k8s-app: custom-dns-autoscaler
      template:
        metadata:
          labels:
            k8s-app: custom-dns-autoscaler
        spec:
          priorityClassName: system-cluster-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
            supplementalGroups: [ 65534 ]
            fsGroup: 65534
          nodeSelector:
            kubernetes.io/os: linux
          containers:
          - name: autoscaler
            image: registry.k8s.io/cluster-proportional-autoscaler-amd64:1.7.1
            resources:
              requests:
                cpu: 20m
                memory: 10Mi
            command:
              - /cluster-proportional-autoscaler
              - --namespace=kube-system
              - --configmap=custom-dns-autoscaler
              - --target=Deployment/DNS_DEPLOYMENT_NAME
              - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
              - --logtostderr=true
              - --v=2
          tolerations:
          - key: "CriticalAddonsOnly"
            operator: "Exists"
          serviceAccountName: kube-dns-autoscaler
    
    Replace DNS_DEPLOYMENT_NAME with the name of the custom kube-dns Deployment that you created earlier.

  2. Apply the manifest to the cluster:

    kubectl create -f custom-dns-autoscaler.yaml
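
    With the default linear parameters in this manifest, the autoscaler sets the DNS replica count to max(ceil(cores/256), ceil(nodes/16)), and preventSinglePointFailure keeps at least two replicas on multi-node clusters. If the custom-dns-autoscaler ConfigMap doesn't exist, the autoscaler creates it from --default-params, which is why the ClusterRole grants create on configmaps. To tune scaling behavior later, edit that ConfigMap:

    kubectl edit configmap custom-dns-autoscaler --namespace=kube-system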
    
