DaemonSet

This page describes Kubernetes DaemonSet objects and their use in Google Kubernetes Engine.

What is a DaemonSet?

Like other workload objects, a DaemonSet manages groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed.

DaemonSets use a Pod template, which contains a specification for their Pods. The Pod specification determines how each Pod should look: which applications run inside its containers, which volumes it mounts, its labels and selectors, and more.

DaemonSet Pods are subject to the same rules of priority as any other Pod. DaemonSet Pods respect taints and tolerations; however, DaemonSet Pods have some implicit tolerations.
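
For reference, the DaemonSet controller automatically adds tolerations such as the following to DaemonSet Pods so that they are scheduled onto, and not evicted from, nodes with certain conditions. This is a sketch of what those tolerations look like in a Pod specification; the exact set depends on your Kubernetes version:

tolerations:
# Keep DaemonSet Pods running when the node reports problems.
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
# Allow scheduling onto nodes under resource pressure or marked unschedulable.
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/unschedulable
  operator: Exists
  effect: NoSchedule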

Usage patterns

DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluent-bit, and node monitoring daemons like collectd.

For example, you could have one DaemonSet for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but give them different configurations for different hardware types and resource needs.
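
For example, the following sketch shows one of two DaemonSets that run the same log collection daemon with different resource requests; each uses a nodeSelector so that GKE schedules its Pods only on nodes carrying a matching label. The DaemonSet name, the hardware-type node label, and the image tag are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit-highmem
spec:
  selector:
    matchLabels:
      name: fluent-bit-highmem
  template:
    metadata:
      labels:
        name: fluent-bit-highmem
    spec:
      # Only schedule Pods onto nodes that carry this label.
      nodeSelector:
        hardware-type: high-memory
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2.0
        resources:
          requests:
            cpu: 100m
            memory: 200Mi

A second DaemonSet could select a different hardware-type value and set different requests and limits for the same daemon.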

DaemonSets on GKE Autopilot

GKE administers nodes in clusters that you create using the Autopilot mode of operation. You cannot manually add, remove, or modify the nodes or the underlying Compute Engine virtual machines (VMs). However, the Kubernetes node object is still visible, and Autopilot supports DaemonSets as your workloads.

GKE Autopilot limits some administrative functions that affect all workload Pods, including Pods managed by DaemonSets. DaemonSets that perform administrative functions on nodes using elevated privileges, such as the privileged security context, won't run on Autopilot clusters unless explicitly allowed by GKE.
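
For reference, a container requests the privileged security context with a field like the following in its Pod specification; on Autopilot, a DaemonSet whose containers set this field won't run unless GKE explicitly allows it. The container name and image below are illustrative:

containers:
- name: node-agent                              # illustrative container name
  image: registry.example.com/node-agent:1.0    # illustrative image
  securityContext:
    privileged: true    # requests elevated access to the node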

For more information on the limits enforced by Autopilot, see Workload limitations and restrictions. You can use DaemonSets with workloads that meet the restrictions set by Autopilot, as well as DaemonSets from some Google Cloud partners.

Best practices for DaemonSets on Autopilot

GKE uses the total size of your deployed workloads to determine the size of the nodes that Autopilot provisions for the cluster. If you add or resize a DaemonSet after Autopilot provisions a node, GKE won't resize existing nodes to accommodate the new total workload size. DaemonSets with resource requests larger than the allocatable capacity of existing nodes, after accounting for system pods, also won't get scheduled on those nodes.
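
To check whether existing nodes have room for a DaemonSet's requests, you can inspect each node's allocatable capacity and the resources that its Pods already request, for example:

# List each node's allocatable CPU and memory.
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory

# Show allocatable capacity and current resource requests for one node.
kubectl describe node node-name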

We recommend the following best practices when deploying DaemonSets on Autopilot:

  • Deploy DaemonSets before any other workloads.
  • Set a higher PriorityClass on DaemonSets than on regular Pods. The higher PriorityClass lets GKE evict lower-priority Pods to make room for DaemonSet Pods if the node has capacity for them. This helps to ensure that the DaemonSet is present on each node.
  • To ensure DaemonSet presence on your nodes, keep the CPU resource requests for DaemonSets to 250m or less. DaemonSets with larger requests, especially requests larger than those of your regular Pods, might not fit on every node.

Creating DaemonSets

You can create a DaemonSet using kubectl apply or kubectl create.

Create DaemonSets with a higher priority than regular Pods to more consistently schedule a DaemonSet Pod on every node.

The following is an example of a DaemonSet manifest file with a PriorityClass assigned:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: ds-priority
value: 1000000
preemptionPolicy: PreemptLowerPriority
globalDefault: false
description: "DaemonSet services."
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prometheus-exporter
spec:
  selector:
    matchLabels:
      name: prometheus-exporter
  template:
    metadata:
      labels:
        name: prometheus-exporter
    spec:
      priorityClassName: ds-priority
      containers:
      - name: prometheus-exporter
        image: us-docker.pkg.dev/google-samples/containers/gke/prometheus-dummy-exporter:v0.2.0
        command: ["./prometheus-dummy-exporter"]
        args:
        - --metric-name=custom_prometheus
        - --metric-value=40
        - --port=8080
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi

In this example:

  • A PriorityClass named ds-priority is created.
  • The ds-priority PriorityClass has a PreemptLowerPriority preemption policy. GKE evicts lower-priority Pods to make room on nodes for the DaemonSet Pods.
  • A DaemonSet named prometheus-exporter is created, with Pods named prometheus-exporter.
  • A DaemonSet Pod is present on all nodes.
  • The Pod is assigned the ds-priority PriorityClass.
  • The Pod's container pulls the prometheus-dummy-exporter image at version v0.2.0.
  • The container requests 100m of CPU and 200Mi of memory, and limits its memory usage to 200Mi.

In sum, the Pod specification contains the following instructions:

  • Label the Pods as prometheus-exporter.
  • Schedule a Pod on every node. Alternatively, use a nodeSelector to select the labeled nodes on which GKE should schedule Pods.
  • Evict lower-priority Pods on destination nodes to schedule the prometheus-exporter Pod.
  • Run the prometheus-dummy-exporter image at version v0.2.0.
  • Request some memory and CPU resources.
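
To create the PriorityClass and the DaemonSet, you might save the manifest to a file and apply it, then confirm that a DaemonSet Pod is running on every node. The filename below is illustrative:

kubectl apply -f daemonset.yaml

# Confirm the DaemonSet and check that its Pods are spread across nodes.
kubectl get daemonset prometheus-exporter
kubectl get pods -l name=prometheus-exporter -o wide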

For more information about DaemonSet configurations, refer to the DaemonSet API reference.

Updating DaemonSets

You can update a DaemonSet by changing its Pod specification, resource requests and limits, labels, and annotations.
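
For example, you could edit and re-apply the manifest file, or edit the live object directly; the names below reuse the earlier prometheus-exporter example:

# Re-apply an edited manifest file.
kubectl apply -f daemonset.yaml

# Or edit the DaemonSet object in place.
kubectl edit daemonset prometheus-exporter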

To decide how to handle updates, DaemonSets use an update strategy defined in spec: updateStrategy. There are two strategies, OnDelete and RollingUpdate:

  • OnDelete does not automatically delete and recreate DaemonSet Pods when the object's configuration is changed. Instead, Pods must be manually deleted to cause the controller to create new Pods that reflect your changes.
  • RollingUpdate automatically deletes and recreates DaemonSet Pods. With this strategy, valid changes automatically trigger a rollout. This is the default update strategy for DaemonSets.
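
For example, the following excerpt from a DaemonSet manifest is a sketch of an explicit RollingUpdate strategy; because a DaemonSet runs one Pod per node, maxUnavailable: 1 replaces Pods on at most one node at a time:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Replace DaemonSet Pods on at most one node at a time.
      maxUnavailable: 1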

You can monitor an update rollout by running the following command:

kubectl rollout status ds daemonset-name

For more information about updating DaemonSets, refer to Perform a Rolling Update on a DaemonSet in the Kubernetes documentation.

What's next