We recently looked at the Kubernetes DaemonSet, and in this post we will see how to troubleshoot a DaemonSet when something goes wrong with it.
What is a Kubernetes DaemonSet?
The DaemonSet feature is used to ensure that all (or some) of the nodes in your Kubernetes cluster run a copy of a particular pod. In essence, it runs one copy of the desired pod on every eligible node.
Whenever a new node is added to the Kubernetes cluster, a new pod is added to that node. Similarly, when a node is removed, the DaemonSet controller ensures that the pod associated with that node is garbage collected.
DaemonSets are an integral part of a Kubernetes cluster, making it easy for administrators to run services (pods) across all nodes or a subset of them.
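For example, many clusters already run core components as DaemonSets, so a quick way to see one in action is to list the DaemonSets in the kube-system namespace (what you find there depends on how your cluster was built; kube-proxy and CNI agents are commonly deployed this way):
# kubectl get daemonsets -n kube-system
# kubectl get pods -n kube-system -o wide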
How do DaemonSets Work?
A DaemonSet is an active Kubernetes object managed by a controller. You declare the desired state, indicating that a particular pod should exist on all nodes. The reconciliation control loop compares this desired state to the currently observed state: if a monitored node does not have a matching pod, the DaemonSet controller creates one for you.
This automated process covers both existing nodes and all newly created nodes. Pods created by the DaemonSet controller are ignored by the Kubernetes scheduler and exist for as long as the node itself does.
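You can observe this reconciliation through the DaemonSet's status fields, which report how many pods are desired versus currently scheduled and ready, for example:
# kubectl get daemonset <daemonset-name> -n <namespace>
# kubectl describe daemonset <daemonset-name> -n <namespace>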
By default, a DaemonSet creates pods on every node. If desired, you can use a node selector to limit the nodes it targets: the DaemonSet controller only creates pods on nodes that match the nodeSelector field defined in the YAML file.
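As a minimal sketch of this, assume you want the DaemonSet to run only on nodes labeled ssd=true (a purely illustrative label). You would first label the target nodes and then add a matching nodeSelector under the pod template's spec:
# kubectl label nodes <node-name> ssd=true

spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"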
Create a DaemonSet
To create a DaemonSet, you need to define a YAML manifest file and apply it to the cluster using kubectl apply. The DaemonSet YAML file specifies the pod template that should be used to run a pod on each node. It can also specify conditions, such as tolerations, that determine which nodes the DaemonSet pods can be scheduled on.
Here is an example of a DaemonSet manifest file, taken from the Kubernetes documentation.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
First, let’s create the DaemonSet using the kubectl create command and retrieve the DaemonSet and pod information as follows:
# kubectl create -f daemonset-example.yaml
# kubectl get daemonset
# kubectl get pod -o wide
As you can see from the output of the above commands, our DaemonSet has been deployed successfully. Depending on the nodes available in the cluster, it scales automatically to match the number of nodes, or the subset of nodes selected in the configuration.
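On a healthy cluster, the kubectl get daemonset output looks roughly like this (the figures below are illustrative for a hypothetical three-node cluster; your counts will reflect your own nodes):
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   3         3         3       3            3           <none>          1m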
Troubleshooting DaemonSet
A DaemonSet is unhealthy if it does not have one pod running on every node it targets. Use the following steps to diagnose and resolve the most common DaemonSet issues.
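Before going through the detailed steps, a quick first pass is to compare the desired and ready pod counts, check the DaemonSet's events, and inspect the pods it manages. The commands below assume the fluentd-elasticsearch example from above; substitute your own DaemonSet name, label, and namespace:
# kubectl get daemonset fluentd-elasticsearch -n kube-system
# kubectl describe daemonset fluentd-elasticsearch -n kube-system
# kubectl get pods -n kube-system -l name=fluentd-elasticsearch -o wide
# kubectl describe pod <pod-name> -n kube-system
# kubectl logs <pod-name> -n kube-system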
Continue Reading on How to Troubleshoot DaemonSet? — FoxuTech