This page describes Kubernetes DaemonSet objects and their use in Kubernetes Engine.
What is a DaemonSet?
Like other workload objects, DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed.
DaemonSets use a Pod template, which contains a specification for its Pods. The Pod specification determines how each Pod should look: what applications should run inside its containers, which volumes it should mount, its labels and selectors, and more.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons.
For example, you could have DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
The following is an example of a DaemonSet manifest file:
```yaml
apiVersion: apps/v1 # For Kubernetes versions before 1.9, use apps/v1beta2
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd # Label selector that determines which Pods belong to the DaemonSet
  template:
    metadata:
      labels:
        name: fluentd # Pod template's label selector
    spec:
      nodeSelector:
        type: prod # Node label selector that determines on which nodes the Pod should be scheduled
                   # In this case, Pods are only scheduled to nodes bearing the label "type: prod"
      containers:
      - name: fluentd
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
```
In this example:
- A DaemonSet named `fluentd` is created, as indicated by the `metadata: name` field.
- The DaemonSet's Pods are labelled `name: fluentd`.
- A node label selector (`type: prod`) declares on which labelled nodes the DaemonSet schedules its Pods.
- The Pod's container pulls the `fluentd-elasticsearch` image at version `1.20`. The container image is hosted by Container Registry.
- The container requests 100m of CPU and 200Mi of memory, and limits its total memory usage to 200Mi.
In sum, the Pod specification contains the following instructions:
- Label the Pod as `fluentd`.
- Use the node label selector `type: prod` to schedule the Pod to matching nodes, and do not schedule it on nodes that do not bear the label. (Alternatively, omit the `nodeSelector` field to schedule on all nodes.)
- Request some memory and CPU resources.
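To illustrate the multiple-configuration pattern mentioned earlier, you could run a second DaemonSet for the same daemon that targets different hardware. The sketch below is hypothetical: the `fluentd-high-mem` name, the `hardware: high-memory` node label, and the resource values are illustrative assumptions, not values from this page.

```yaml
# Hypothetical variant: the same fluentd daemon, scheduled to a different
# set of nodes with larger resource limits.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-high-mem   # hypothetical name
spec:
  selector:
    matchLabels:
      name: fluentd-high-mem
  template:
    metadata:
      labels:
        name: fluentd-high-mem
    spec:
      nodeSelector:
        hardware: high-memory   # hypothetical node label for the second hardware type
      containers:
      - name: fluentd
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 500Mi      # illustrative: higher limit for high-memory nodes
          requests:
            cpu: 100m
            memory: 500Mi
```

Because each DaemonSet has its own label selector and `nodeSelector`, the two objects manage disjoint sets of Pods even though they run the same daemon.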
For more information about DaemonSet configurations, refer to the DaemonSet API reference.
You can update a DaemonSet by changing its Pod specification, resource requests and limits, labels, and annotations.
To decide how to handle updates, DaemonSets use an update strategy defined in `spec: updateStrategy`. There are two strategies, `OnDelete` and `RollingUpdate`:
- `OnDelete` does not automatically delete and recreate DaemonSet Pods when the object's configuration is changed. Instead, Pods must be manually deleted to cause the controller to create new Pods that reflect your changes.
- `RollingUpdate` automatically deletes and recreates DaemonSet Pods. With this strategy, valid changes automatically trigger a rollout. This is the default update strategy for DaemonSets.
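The update strategy is set on the DaemonSet's `spec`. For example, a `RollingUpdate` strategy can bound how many Pods are unavailable at once during a rollout; the `maxUnavailable: 1` value below is an illustrative choice, not a requirement:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # illustrative: at most one node's Pod is down at a time during the rollout
```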
Update rollouts can be monitored by running the following command:
```
kubectl rollout status ds [DAEMONSET_NAME]
```
For more information about updating DaemonSets, refer to Perform a Rolling Update on a DaemonSet in the Kubernetes documentation.