Kubernetes Taints and Tolerations — Explained

FoxuTech
Jan 12, 2023

Orchestrators like Kubernetes have abstracted servers away, and now you can manage your whole infrastructure in a multi-tenant, heterogeneous Kubernetes cluster. You package your application in containers, and then Kubernetes takes care of availability, scaling, monitoring, and more across various nodes featuring specialized hardware or logical isolation.

Kubernetes has a lot of options and flexibility depending on what you need from it. One such piece of functionality is the concept of taints and tolerations, which helps you achieve selective scheduling.

In this article, we will look at taints and tolerations with some examples.

Why Use Taints and Tolerations?

Taints and tolerations help prevent your pods from being scheduled onto undesirable nodes. Suppose you want only pods with a particular label — say, your UI pods — to run on a specific node. Taints and tolerations make this possible.

What is Scheduling?

In Kubernetes, scheduling isn’t about timing, but about matching pods to nodes. When we create a pod, the scheduler in the control plane looks at the nodes and verifies available resources and other conditions before assigning the pod to a node.
If the verification succeeds, the pod is scheduled on the node. If the conditions aren’t satisfied, the pod is put in a Pending state, which you may occasionally run into.

You can run kubectl describe pod <pod-name> and scroll down to Events to find the precise reason for the Pending state, or you can run kubectl get events. If there is a resource issue, the events will tell you which one — it could be CPU, memory, or sometimes storage.

If another node with adequate resources had been present, the scheduler would have scheduled the pod to it. Pods go into the Pending state for various reasons; we need to identify the cause from the events or logs and take the necessary actions.
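As a concrete sketch (the pod name my-app is a hypothetical placeholder; these commands assume access to a running cluster):

```shell
# Show the pod's events, including the scheduler's reason for not placing it
kubectl describe pod my-app

# Or list recent scheduling failures across the current namespace
kubectl get events --field-selector reason=FailedScheduling
```

The second command filters events down to FailedScheduling, which is the event reason the scheduler records when it cannot place a pod.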

How Do Taints Work?

The scheduler looks at the nodes, and if there are taints that the pods can’t tolerate, it doesn’t schedule the pod to that node.

Similar to labels, there can be many taints on your nodes, and the cluster’s scheduler will schedule a pod to your nodes only if it tolerates all of the taints.

You can add a taint to your nodes with the following command:

# kubectl taint nodes <node name> <taint key>=<taint value>:<taint effect>

Here, <node name> is the name of the node that you want to taint, and the taint itself is described with a key-value pair plus an effect: <taint key> is the key, <taint value> is the value, and <taint effect> determines how the taint is enforced.

The taints can produce three different outcomes depending on your taint-effect choice.

Taint effects define what will happen to pods if they don’t tolerate the taints. The three taint effects are:

  • NoSchedule: A strong effect where the system lets pods already running on the node continue, but new pods that don’t tolerate the taint will not be scheduled there.
  • PreferNoSchedule: A soft effect where the system will try to avoid placing a pod that does not tolerate the taint on the node.
  • NoExecute: A strong effect where all previously scheduled pods are evicted, and new pods that don’t tolerate the taint will not be scheduled.

The three taint effects can be seen here:

# kubectl taint nodes node1 key1=value1:NoSchedule
# kubectl taint nodes node1 key1=value1:NoExecute
# kubectl taint nodes node1 key2=value1:PreferNoSchedule
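To confirm that the taints were applied — and to remove one later — you can run the following (continuing with node1 from the commands above; a cluster is assumed):

```shell
# List the taints currently applied to node1
kubectl describe node node1 | grep -i taints

# Remove a taint by repeating the taint command with a trailing minus sign
kubectl taint nodes node1 key1=value1:NoSchedule-
```

The trailing `-` is kubectl’s syntax for removing a previously applied taint, mirroring how labels are removed.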

How Do Tolerations Work?

Taints prevent pods from being scheduled onto nodes with the set key-value property — but how do you schedule a pod onto these tainted nodes?
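In short, you add a tolerations section to the pod spec that matches the node’s taint. As a minimal sketch (the pod name ui-pod and the nginx image are placeholders; the key, value, and effect must match the taint set on the node, e.g. key1=value1:NoSchedule from the commands above):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ui-pod
spec:
  containers:
  - name: ui
    image: nginx
  # This toleration allows the pod to be scheduled onto nodes
  # tainted with key1=value1:NoSchedule
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
EOF
```

Note that a toleration only permits scheduling onto the tainted node — it does not force the pod there. To pin the pod to that node you would combine tolerations with node affinity or a node selector.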

Read further on https://foxutech.com/kubernetes-taints-and-tolerations-explained/
