Kubernetes DaemonSet — Explained
In this series we have been looking at Kubernetes resources and how to troubleshoot Kubernetes issues with live demos. As part of that, in this article we will take a look at the DaemonSet in Kubernetes.
What is a Kubernetes DaemonSet?
A DaemonSet ensures that some or all of the nodes in a Kubernetes cluster run a copy of a pod. In other words, it runs a copy of the desired pod on every eligible node.
Whenever a new node is added to the cluster, the DaemonSet controller adds a pod to that node. Similarly, when a node is removed, the controller ensures that the pods associated with that node are garbage collected.
DaemonSets are an integral part of a Kubernetes cluster, making it easy for administrators to run services (pods) across all nodes or a subset of them.
Where can we use DaemonSets?
DaemonSets help improve the performance of a Kubernetes cluster by distributing maintenance tasks and support services across all nodes. They are also useful for jobs that need to run on every node, such as metrics monitoring or log collection. Here are some example use cases:
- To run a logs-collection daemon on every node, such as Fluentd or Logstash.
- To run a cluster storage daemon, such as glusterd or ceph, on every node.
- To run a daemon that rotates and cleans up log files.
- To run a daemon on each node that detects node problems and reports them to the API server, such as node-problem-detector.
- To run a node-monitoring daemon on every node, such as Prometheus Node Exporter or collectd.
- To cache data on every node; for example, if a large file needs to be accessible to users, you can mount a cloud volume and expose it on each node.
Depending on your requirements, you can set up one or more DaemonSets for a single type of daemon, with different flags or different memory and CPU requests, to support multiple configurations and hardware types.
How to create a DaemonSet
Here is a sample DaemonSet manifest.
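The manifest below is a minimal sketch based on the fluentd-elasticsearch DaemonSet example from the Kubernetes documentation; the namespace, image tag, and resource values are illustrative and can be adjusted for your cluster.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            memory: 200Mi
        volumeMounts:
        # Mount the node's log directory so the agent can read host logs
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```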
As you can see in the manifest above, apiVersion, kind, and metadata are required fields in every Kubernetes manifest. The DaemonSet-specific fields come under the spec section, where the following two fields are mandatory:
- template. This is a required field that specifies a pod template for the DaemonSet to use. Along with the required fields for containers, this template requires appropriate labels (.spec.template.metadata.labels). Note that the pod template of a DaemonSet must have a restartPolicy equal to Always, which is the default when it is not specified.
- selector. The selector for the pods managed by the DaemonSet. This value must match a label specified in the pod template. It is fixed and cannot be changed after the DaemonSet is created; changing it would orphan the pods the DaemonSet had already created. Kubernetes offers two ways to match, matchLabels and matchExpressions, which can be combined to build complex selectors.
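As a sketch, the two selector styles look like this (the label key and values are illustrative; when both styles are used together, all conditions must be satisfied):

```yaml
spec:
  selector:
    # Equality-based match: pods must carry this exact label
    matchLabels:
      name: fluentd-elasticsearch
    # Set-based match: more expressive operators (In, NotIn, Exists, ...)
    matchExpressions:
    - key: name
      operator: In
      values: ["fluentd-elasticsearch"]
```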
Other optional fields
- template.spec.nodeSelector — Restricts scheduling to the subset of nodes whose labels match the specified selector.
- template.spec.affinity — Configures affinity rules so that the pod runs only on nodes that match them.
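The two optional fields above can be sketched as follows; the disktype=ssd node label is a hypothetical example, and the affinity form expresses the same constraint as the nodeSelector using the more flexible node-affinity syntax:

```yaml
spec:
  template:
    spec:
      # Simple form: schedule only on nodes labeled disktype=ssd
      nodeSelector:
        disktype: ssd
      # Equivalent, more expressive form using node affinity
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
```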
Creating a DaemonSet
Now let’s go ahead and create a sample DaemonSet. Here we will use a “fluentd-elasticsearch” image that will run on every node in the Kubernetes cluster; each pod then collects logs and ships the data to Elasticsearch.
Continue reading at https://foxutech.com/kubernetes-daemonset/