You can constrain a pod to only be able to run on particular nodes or to prefer to run on particular nodes. There are several ways to do this, and they all use label selectors to make the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.), but there are some circumstances where you may want more control over where a pod lands, e.g. to ensure that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone.
You can find all the files for these examples in our docs repo here.
nodeSelector is the simplest form of constraint.
nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible
to run on a node, the node must have each of the indicated key-value pairs as labels (it can have
additional labels as well). The most common usage is one key-value pair.
Let’s walk through an example of how to use nodeSelector.
This example assumes that you have a basic understanding of Kubernetes pods and that you have turned up a Kubernetes cluster.
Run kubectl get nodes to get the names of your cluster’s nodes. Pick out the one that you want to add a label to.
Then, to add a label to the node you’ve chosen, run
kubectl label nodes <node-name> <label-key>=<label-value>. For example, if my node name is ‘kubernetes-foo-node-1.c.a-robinson.internal’ and my desired label is ‘disktype=ssd’, then I can run
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd.
If this fails with an “invalid command” error, you’re likely using an older version of kubectl that doesn’t have the
label command. In that case, see the previous version of this guide for instructions on how to manually set labels on a node.
Also, note that label keys must be in the form of DNS labels (as described in the identifiers doc), meaning that they are not allowed to contain any upper-case letters.
You can verify that it worked by re-running
kubectl get nodes --show-labels and checking that the node now has a label.
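For the example node name and label used above, those two steps look like this:

```shell
# Attach the disktype=ssd label to the chosen node
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd

# Verify that the label is now present on the node
kubectl get nodes --show-labels
```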
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
```
Then add a nodeSelector like so:
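Building on the pod config above and the disktype=ssd label from the walkthrough, the result might look like this (saved here as pod.yaml, the file name used by the command below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # only schedule onto nodes labeled disktype=ssd
```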
When you then run
kubectl create -f pod.yaml, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running
kubectl get pods -o wide and looking at the “NODE” that the pod was assigned to.
In addition to labels you attach yourself, nodes come pre-populated with a standard set of labels. As of Kubernetes v1.4 these labels are:

* kubernetes.io/hostname
* failure-domain.beta.kubernetes.io/zone
* failure-domain.beta.kubernetes.io/region
* beta.kubernetes.io/instance-type
* beta.kubernetes.io/os
* beta.kubernetes.io/arch
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity
feature, currently in alpha, greatly expands the types of constraints you can express. The key enhancements are:

1. the language is more expressive (not just “AND of exact match”)
2. you can indicate that the rule is “soft”/“preference” rather than a hard requirement, so if the scheduler can’t satisfy it, the pod will still be scheduled
3. you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located
The affinity feature consists of two types of affinity, “node affinity” and “inter-pod affinity/anti-affinity.”
Node affinity is like the existing
nodeSelector (but with the first two benefits listed above),
while inter-pod affinity/anti-affinity constrains against pod labels rather than node labels, as
described in the third item above, in addition to having the first and second properties listed above.
nodeSelector continues to work as usual, but will eventually be deprecated, as node affinity can express everything that
nodeSelector can express.
Node affinity was introduced in Kubernetes 1.2.
Node affinity is conceptually similar to
nodeSelector – it allows you to constrain which nodes your
pod is eligible to schedule on, based on labels on the node.
There are currently two types of node affinity, called requiredDuringSchedulingIgnoredDuringExecution and
preferredDuringSchedulingIgnoredDuringExecution. You can think of them as “hard” and “soft” respectively,
in the sense that the former specifies rules that must be met for a pod to schedule onto a node (just like
nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler
will try to enforce but will not guarantee. The “IgnoredDuringExecution” part of the names means that, similar to how
nodeSelector works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
met, the pod will still continue to run on the node. In the future we plan to offer
requiredDuringSchedulingRequiredDuringExecution which will be just like requiredDuringSchedulingIgnoredDuringExecution
except that it will evict pods from nodes that cease to satisfy the pods’ node affinity requirements.
Thus an example of
requiredDuringSchedulingIgnoredDuringExecution would be “only run the pod on nodes with Intel CPUs”
and an example of
preferredDuringSchedulingIgnoredDuringExecution would be “try to run this set of pods in availability
zone XYZ, but if it’s not possible, then allow some to run elsewhere”.
Node affinity is currently expressed using an annotation on Pod. Once the feature goes to GA it will use a field of Pod.
Here’s an example of a pod that uses node affinity:
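A sketch of what such a pod might look like, assuming the alpha scheduler.alpha.kubernetes.io/affinity annotation key; the pod name and container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
  annotations:
    # In alpha, node affinity is expressed as JSON inside this annotation
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "kubernetes.io/e2e-az-name",
                    "operator": "In",
                    "values": ["e2e-az1", "e2e-az2"]
                  }
                ]
              }
            ]
          },
          "preferredDuringSchedulingIgnoredDuringExecution": [
            {
              "weight": 1,
              "preference": {
                "matchExpressions": [
                  {
                    "key": "another-annotation-key",
                    "operator": "In",
                    "values": ["another-annotation-value"]
                  }
                ]
              }
            }
          ]
        }
      }
spec:
  containers:
  - name: with-node-affinity
    image: nginx
```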
This node affinity rule says the pod can only be placed on a node with a label whose key is
kubernetes.io/e2e-az-name and whose value is either e2e-az1 or e2e-az2. In addition,
among nodes that meet those criteria, nodes with a label whose key is
another-annotation-key and whose value is another-annotation-value should be preferred.
You can see the operator In being used in the example. The new node affinity syntax supports the following operators: In, NotIn, Exists, DoesNotExist, Gt, Lt.
There is no explicit “node anti-affinity” concept, but NotIn and DoesNotExist give that behavior.
If you specify both nodeSelector and nodeAffinity, both must be satisfied for the pod
to be scheduled onto a candidate node.
For more information on node affinity, see the design doc here.
Inter-pod affinity and anti-affinity were introduced in Kubernetes 1.4.
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to schedule on based on
labels on pods that are already running on the node rather than based on labels on nodes. The rules are of the form “this pod should (or, in the case of
anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y.” Y is expressed
as a LabelSelector with an associated list of namespaces (or “all” namespaces); unlike nodes, because pods are namespaced
(and therefore the labels on pods are implicitly namespaced),
a label selector over pod labels must specify which namespaces the selector should apply to. Conceptually X is a topology domain
like node, rack, cloud provider zone, cloud provider region, etc. You express it using a
topologyKey which is the
key for the node label that the system uses to denote such a topology domain, e.g. see the label keys listed above
in the section “Interlude: built-in node labels.”
As with node affinity, there are currently two types of pod affinity and anti-affinity, called
requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution which denote “hard” vs. “soft” requirements.
See the description in the node affinity section earlier.
An example of
requiredDuringSchedulingIgnoredDuringExecution affinity would be “co-locate the pods of service A and service B
in the same zone, since they communicate a lot with each other”
and an example of
preferredDuringSchedulingIgnoredDuringExecution anti-affinity would be “spread the pods from this service across zones”
(a hard requirement wouldn’t make sense, since you probably have more pods than zones).
Inter-pod affinity and anti-affinity are currently expressed using an annotation on Pod. Once the feature goes to GA it will use a field.
Here’s an example of a pod that uses pod affinity:
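A sketch of what such a pod might look like, again assuming the alpha scheduler.alpha.kubernetes.io/affinity annotation; the pod name and container image are placeholders, and the topologyKeys are chosen to match the description that follows (zone-level affinity, node-level anti-affinity):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
  annotations:
    # In alpha, pod affinity/anti-affinity is expressed as JSON inside this annotation
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "podAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [
            {
              "labelSelector": {
                "matchExpressions": [
                  {"key": "security", "operator": "In", "values": ["S1"]}
                ]
              },
              "topologyKey": "failure-domain.beta.kubernetes.io/zone"
            }
          ]
        },
        "podAntiAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [
            {
              "labelSelector": {
                "matchExpressions": [
                  {"key": "security", "operator": "In", "values": ["S2"]}
                ]
              },
              "topologyKey": "kubernetes.io/hostname"
            }
          ]
        }
      }
spec:
  containers:
  - name: with-pod-affinity
    image: nginx
```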
The affinity annotation on this pod defines one pod affinity rule and one pod anti-affinity rule. Both
must be satisfied for the pod to schedule onto a node. The
pod affinity rule says that the pod can schedule onto a node only if that node is in the same zone
as at least one already-running pod that has a label with key “security” and value “S1”. (More precisely, the pod is eligible to run
on node N if node N has a label with key
failure-domain.beta.kubernetes.io/zone and some value V
such that there is at least one node in the cluster with key failure-domain.beta.kubernetes.io/zone and
value V that is running a pod that has a label with key “security” and value “S1”.) The pod anti-affinity
rule says that the pod cannot schedule onto a node if that node is already running a pod with label
having key “security” and value “S2”. (If the topologyKey were failure-domain.beta.kubernetes.io/zone,
it would mean that the pod cannot schedule onto a node if that node is in the same zone as a pod with
label having key “security” and value “S2”.) See the design doc
for many more examples of pod affinity and anti-affinity, both the requiredDuringSchedulingIgnoredDuringExecution
flavor and the preferredDuringSchedulingIgnoredDuringExecution flavor.
As with node affinity, the legal operators for pod affinity and anti-affinity are In, NotIn, Exists, and DoesNotExist.
In principle, the topologyKey can be any legal label key. However,
for performance reasons, only a limited set of topology keys are allowed;
they are specified in the --failure-domains command-line argument to the scheduler. By default the allowed topology keys are kubernetes.io/hostname, failure-domain.beta.kubernetes.io/zone, and failure-domain.beta.kubernetes.io/region.
In addition to labelSelector and topologyKey, you can optionally specify a list
of namespaces which the labelSelector should match against (this goes at the same level of the definition as labelSelector and topologyKey).
If omitted, it defaults to the namespace of the pod where the affinity/anti-affinity definition appears.
If defined but empty, it means “all namespaces.”
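For illustration, a sketch of a podAffinity term carrying an explicit namespaces list (team-a and team-b are made-up namespace names); note that namespaces sits at the same level as labelSelector and topologyKey:

```json
{
  "podAffinity": {
    "requiredDuringSchedulingIgnoredDuringExecution": [
      {
        "labelSelector": {
          "matchExpressions": [
            {"key": "security", "operator": "In", "values": ["S1"]}
          ]
        },
        "namespaces": ["team-a", "team-b"],
        "topologyKey": "failure-domain.beta.kubernetes.io/zone"
      }
    ]
  }
}
```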
All matchExpressions associated with
requiredDuringSchedulingIgnoredDuringExecution affinity and anti-affinity
must be satisfied for the pod to schedule onto a node.
For more information on inter-pod affinity/anti-affinity, see the design doc here.