---
title: "Network Policies: Namespace Isolation and Pod-to-Pod Rules"
description: "How to use Kubernetes NetworkPolicy to implement default-deny, allow specific traffic between pods, permit DNS, and control egress."
url: https://agent-zone.ai/knowledge/kubernetes/network-policies/
section: knowledge
date: 2026-02-22
categories: ["kubernetes"]
tags: ["network-policies","security","namespaces","cni","calico"]
skills: ["network-policy-design","namespace-isolation","cluster-security"]
tools: ["kubectl","calico","cilium"]
levels: ["intermediate"]
word_count: 766
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/network-policies/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/network-policies/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Network+Policies%3A+Namespace+Isolation+and+Pod-to-Pod+Rules
---


# Network Policies: Namespace Isolation and Pod-to-Pod Rules

By default, every pod in a Kubernetes cluster can talk to every other pod. Network policies let you restrict that. They are namespace-scoped resources that select pods by label and define allowed ingress and egress rules.
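A minimal skeleton shows the moving parts; the names below are placeholders, and each field is covered in detail in the sections that follow:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy           # placeholder name
  namespace: my-namespace        # a policy only affects pods in its own namespace
spec:
  podSelector:                   # which pods the policy applies to
    matchLabels:
      app: my-app
  policyTypes:                   # which directions the policy restricts
    - Ingress
    - Egress
  ingress:                       # allow-list for incoming traffic
    - from:
        - podSelector:
            matchLabels:
              app: my-client
  egress:                        # allow-list for outgoing traffic
    - to:
        - podSelector:
            matchLabels:
              app: my-dependency
```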

## Critical Prerequisite: CNI Support

Network policies are only enforced if your CNI plugin supports them. Calico, Cilium, and Weave all support network policies. **Flannel does not.** If you are running Flannel, you can create NetworkPolicy resources without errors, but they will have absolutely no effect. This is a silent failure that wastes hours of debugging.

Check your CNI:

```bash
kubectl get pods -n kube-system | grep -E 'calico|cilium|flannel|weave'
```

In minikube, start with a network-policy-supporting CNI:

```bash
minikube start --cni=calico
```

## Default Deny: The Foundation

Without any NetworkPolicy, all traffic is allowed. The moment a NetworkPolicy selects a pod, traffic in the directions listed under its `policyTypes` (ingress, egress, or both) is denied for that pod unless some policy explicitly allows it; traffic in a direction no policy covers remains open. This is the key mental model: policies are additive allow-lists on top of an implicit deny.

### Default Deny All Ingress in a Namespace

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector matches ALL pods in the namespace
  policyTypes:
    - Ingress
```

After applying this, no pod in the `production` namespace accepts any incoming traffic unless another NetworkPolicy explicitly allows it.
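One way to sanity-check the deny (assuming the manifest above is saved as `default-deny-ingress.yaml` and the `api-backend` Service on port 8080 from the later examples exists) is to call it from a throwaway pod in the same namespace; the request should now time out:

```bash
kubectl apply -f default-deny-ingress.yaml

# With default-deny-ingress in place, this should hang and time out
kubectl run policy-test --rm -it --restart=Never --image=busybox -n production -- \
  wget -qO- --timeout=3 http://api-backend:8080/healthz
```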

### Default Deny All Egress in a Namespace

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
```

This blocks all outgoing traffic from pods in the namespace, including DNS. You almost certainly need to pair this with a DNS allow rule.
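You can see the breakage directly. With only the egress deny applied, even a simple name lookup from a throwaway busybox pod times out:

```bash
# DNS lookups fail once default-deny-egress is applied without a DNS allow rule
kubectl run dns-test --rm -it --restart=Never --image=busybox -n production -- \
  nslookup kubernetes.default
# Expect a lookup timeout rather than an answer
```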

### Default Deny Both

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

## Allow DNS: The One Everyone Forgets

When you set a default deny on egress, DNS resolution breaks immediately. Every pod needs to reach CoreDNS on UDP/TCP port 53 in the `kube-system` namespace. Apply this alongside any egress deny:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

The label `kubernetes.io/metadata.name` is set automatically on every namespace from Kubernetes 1.21 onward. On older versions, you need to label `kube-system` manually: `kubectl label namespace kube-system kubernetes.io/metadata.name=kube-system`.

## Allow Specific Pod-to-Pod Traffic

Allow the `web-frontend` pods to talk to `api-backend` pods on port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
```

This policy selects `api-backend` pods and allows ingress from `web-frontend` pods in the same namespace.
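A quick way to verify both sides of the rule (assuming a `web-frontend` Deployment whose image ships `wget`, and an `api-backend` Service on port 8080; adjust names to your setup):

```bash
# Allowed: from a web-frontend pod the call should succeed
kubectl exec -it deploy/web-frontend -n production -- \
  wget -qO- --timeout=3 http://api-backend:8080/healthz

# Denied: a pod without the web-frontend label should hang and time out
kubectl run not-frontend --rm -it --restart=Never --image=busybox -n production -- \
  wget -qO- --timeout=3 http://api-backend:8080/healthz
```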

## Cross-Namespace Access

Allow pods in the `monitoring` namespace to scrape metrics from pods in `production`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 9090
```

To combine namespace and pod selectors (pods with a specific label in a specific namespace), put both selectors in the **same `from` entry**:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring
        podSelector:
          matchLabels:
            app: prometheus
```

If you put them as separate list items under `from`, they act as an OR: any pod in the monitoring namespace, plus any pod labeled `app: prometheus` in the policy's own namespace (`production`), gets access. This is a common and dangerous mistake.
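For contrast, this is the broader OR form; the extra `-` makes `podSelector` a second, independent `from` entry:

```yaml
ingress:
  - from:
      - namespaceSelector:       # entry 1: any pod in the monitoring namespace
          matchLabels:
            kubernetes.io/metadata.name: monitoring
      - podSelector:             # entry 2: any app=prometheus pod in this namespace
          matchLabels:
            app: prometheus
```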

## Egress Rules

Allow `api-backend` pods to reach an external database on port 5432:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.5.0/24    # database subnet
      ports:
        - protocol: TCP
          port: 5432
```

For internal database access within the cluster, use a `podSelector` or `namespaceSelector` instead of `ipBlock`.
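For example, if the database ran in-cluster in a hypothetical `databases` namespace with pods labeled `app: postgres`, the egress rule would target those labels instead of an IP range:

```yaml
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: databases
        podSelector:
          matchLabels:
            app: postgres
    ports:
      - protocol: TCP
        port: 5432
```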

## Debugging Network Policies

**List all policies in a namespace:**

```bash
kubectl get networkpolicies -n production
```

**Inspect a specific policy:**

```bash
kubectl describe networkpolicy allow-frontend-to-api -n production
```

**Test connectivity between pods:**

```bash
# From the frontend pod, try reaching the api-backend service
kubectl exec -it <frontend-pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz

# If it hangs and then times out, a network policy is likely blocking it
# If the connection is refused immediately, the Service or pod is not running
```

**Check which policies apply to a specific pod:**

```bash
# Note: this only matches policies whose podSelector explicitly targets app=api-backend.
# Policies with an empty podSelector (such as default-deny) select every pod but will not appear here.
kubectl get networkpolicies -n production -o json | \
  jq '.items[] | select(.spec.podSelector.matchLabels.app == "api-backend") | .metadata.name'
```

**Common failure mode:** You apply a default-deny-egress policy and everything breaks because pods cannot resolve DNS. Always deploy the DNS allow policy at the same time as any egress deny policy. If you suspect DNS is the issue, test with an IP address instead of a hostname to confirm.
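For example (the ClusterIP below is a placeholder; look up the real one with `kubectl get svc -n production`):

```bash
# By hostname: hangs and times out if DNS egress is blocked
kubectl exec -it <pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz

# By ClusterIP: succeeds if only DNS is blocked, still times out if all egress is blocked
kubectl exec -it <pod> -n production -- wget -qO- --timeout=3 http://10.96.45.12:8080/healthz
```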

