Pod Affinity and Anti-Affinity: Co-locating and Spreading Workloads

Pod Affinity and Anti-Affinity#

Node affinity controls which nodes a pod can run on. Pod affinity and anti-affinity go further – they control whether a pod should run near or away from other specific pods. This is how you co-locate a frontend with its cache for low latency, or spread database replicas across failure domains for high availability.
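
As a preview of the second use case, here is a minimal sketch of a StatefulSet that keeps each database replica in a separate zone with required anti-affinity; the postgres names, image, and the app: postgres label are placeholders:

```yaml
# Sketch: spread database replicas across zones with required
# pod anti-affinity. All names and labels are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: postgres
              # No two replicas may share a zone.
              topologyKey: topology.kubernetes.io/zone
      containers:
        - name: postgres
          image: postgres:16
```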

Pod Affinity: Schedule Near Other Pods#

Pod affinity tells the scheduler “place this pod in the same topology domain as pods matching a label selector.” The topology domain is defined by topologyKey, which names a node label: all nodes sharing the same value for that label form one domain. With kubernetes.io/hostname the domain is a single node, with topology.kubernetes.io/zone it is a zone, and any other node label works the same way.
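
For example, a Deployment that pins each frontend pod onto a node already running its cache might look like this minimal sketch; the app: redis label, names, and image are assumptions for illustration:

```yaml
# Sketch: schedule each frontend pod onto a node that already runs
# a pod labeled app: redis. Labels, names, and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: redis
              # "Same domain" here means the same node; use
              # topology.kubernetes.io/zone for same-zone placement.
              topologyKey: kubernetes.io/hostname
      containers:
        - name: frontend
          image: nginx:1.27
```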

Pod Topology Spread Constraints: Even Distribution Across Failure Domains

Pod Topology Spread Constraints#

Pod anti-affinity gives you binary control: either a pod avoids another pod’s topology domain or it does not. But it does not give you even distribution. If you have 6 replicas and 3 zones, anti-affinity cannot express “put exactly 2 in each zone.” Topology spread constraints solve this by letting you specify the maximum allowed imbalance between any two topology domains.
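
A minimal sketch of that in a pod spec, assuming the standard topology.kubernetes.io/zone node label and a placeholder app: web selector (the individual fields are defined in the next section):

```yaml
# Sketch: 6 replicas, 3 zones, at most 1 pod of difference between
# any two zones, i.e. exactly 2 per zone. Names and labels are
# illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1  # max allowed pod-count imbalance between zones
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.27
```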

How Topology Spread Works#

A topology spread constraint defines: