---
title: "Deploying Nginx on Kubernetes"
description: "How to run nginx as a simple web server, reverse proxy, and Ingress controller on Kubernetes, with practical configurations for SSL termination, rate limiting, and custom error pages."
url: https://agent-zone.ai/knowledge/kubernetes/nginx-on-kubernetes/
section: knowledge
date: 2026-02-22
categories: ["kubernetes"]
tags: ["nginx","ingress","reverse-proxy","tls","web-server"]
skills: ["nginx-deployment","ingress-controller-setup","reverse-proxy-configuration"]
tools: ["kubectl","helm","openssl"]
levels: ["intermediate"]
word_count: 778
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/nginx-on-kubernetes/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/nginx-on-kubernetes/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Deploying+Nginx+on+Kubernetes
---


# Deploying Nginx on Kubernetes

Nginx shows up in Kubernetes in two completely different roles. First, as a regular Deployment serving static content or acting as a reverse proxy for your application. Second, as an Ingress controller that watches Ingress resources and dynamically reconfigures itself. These are different deployments with different images and different configuration models. Knowing when to use which saves you from over-engineering or under-engineering your setup.

## Nginx as a Web Server (Deployment + Service + ConfigMap)

For serving static files or acting as a reverse proxy in front of your application pods, deploy nginx as a standard Deployment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 80;
      server_name _;

      location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
      }

      location /api/ {
        proxy_pass http://api-service:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d/
        - name: static-files
          mountPath: /usr/share/nginx/html
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 128Mi
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 5
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
      - name: static-files
        # placeholder -- populate via an initContainer or bake files into a custom image,
        # or the probes above will fail against an empty document root
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

Mount your custom `nginx.conf` or `default.conf` via ConfigMap. The key decision: mount into `/etc/nginx/conf.d/` for server blocks, or replace `/etc/nginx/nginx.conf` entirely if you need to change worker processes or other global settings. If you do replace `nginx.conf`, make sure it still contains `include /etc/nginx/conf.d/*.conf;` -- otherwise the server blocks mounted from your ConfigMap will be silently ignored.
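If you replace the main config entirely, a minimal `nginx.conf` that keeps the ConfigMap server blocks in play looks roughly like this (worker and timeout values are illustrative, not recommendations):

```nginx
user  nginx;
worker_processes  auto;

events {
  worker_connections  1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;
  sendfile      on;
  keepalive_timeout  65;

  # without this include, server blocks mounted into conf.d are ignored
  include /etc/nginx/conf.d/*.conf;
}
```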

After you update the ConfigMap, kubelet eventually syncs the new file into the mounted volume, but nginx only reads its configuration at startup, so running pods keep serving with the old config. Either restart the Deployment or add a checksum annotation to the pod template to trigger a rollout:

```bash
kubectl rollout restart deployment/nginx
```
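If you template the manifests with Helm, a common pattern is to hash the ConfigMap into a pod annotation, so any config change alters the pod template and triggers a rollout automatically (a sketch; the template path is an assumption about your chart layout):

```yaml
# deployment.yaml (Helm template)
spec:
  template:
    metadata:
      annotations:
        # changes whenever the ConfigMap changes, forcing a rollout
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```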

## Nginx as an Ingress Controller

The nginx Ingress controller is a fundamentally different deployment. It runs the community `ingress-nginx` controller, which watches Ingress resources across the cluster and dynamically generates nginx configuration.

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.replicaCount=2 \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=256Mi
```

The chart creates a Service of type LoadBalancer by default. On cloud providers this gets an external IP automatically; on bare metal or minikube there is no load balancer implementation, so the external IP stays `<pending>` -- switch to `--set controller.service.type=NodePort` instead. The controller automatically creates an IngressClass named `nginx`.

Verify the installation:

```bash
kubectl get ingressclass
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```

### SSL Termination

With the Ingress controller, TLS is handled via Ingress resources and Secrets:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app
            port:
              number: 80
```

Create the TLS secret from your certificate and key:

```bash
kubectl create secret tls app-tls --cert=tls.crt --key=tls.key -n default
```
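If you do not have a certificate yet, a throwaway self-signed pair is enough for testing (the CN is an example hostname; browsers will warn on it):

```shell
# self-signed cert valid for 365 days -- testing only, not production
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -days 365 -subj "/CN=app.example.com"
```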

For automatic certificates, pair with cert-manager using the `cert-manager.io/cluster-issuer` annotation.
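A sketch of the cert-manager wiring (the issuer name `letsencrypt-prod` is an assumption -- it must match a ClusterIssuer that exists in your cluster): cert-manager sees the annotation, obtains a certificate for the listed hosts, and writes it into the Secret named by `secretName`, which the Ingress controller then picks up.

```yaml
metadata:
  annotations:
    # hypothetical issuer name -- must match an existing ClusterIssuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls   # created and renewed by cert-manager
```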

### Rate Limiting

Apply rate limiting per Ingress resource using annotations:

```yaml
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/limit-rpm: "300"
  nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
  nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"
```

This limits each client IP to 10 requests per second with a burst of 50 (5x multiplier). The whitelist excludes internal IPs from rate limiting. Rate limiting state is per-controller-pod, so with multiple replicas behind a LoadBalancer, the effective limit is roughly the per-pod limit multiplied by the number of replicas.
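Under the hood, the controller renders these annotations into standard nginx `limit_req` directives; conceptually (zone names and sizes here are simplified -- the controller generates its own), the output looks roughly like:

```nginx
# illustrative only -- actual zone names are generated by the controller
limit_req_zone $binary_remote_addr zone=app_rps:5m rate=10r/s;

server {
  location / {
    limit_req zone=app_rps burst=50 nodelay;
  }
}
```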

### Custom Error Pages

Serve custom error pages by deploying a default backend:

```bash
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.config.custom-http-errors="404\,503" \
  --set defaultBackend.enabled=true \
  --set defaultBackend.image.repository=your-registry/custom-errors \
  --set defaultBackend.image.tag=latest
```

Or per-Ingress, point errors to a custom backend:

```yaml
annotations:
  nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
  nginx.ingress.kubernetes.io/default-backend: custom-error-svc
```

The custom error service must return appropriate content based on the `X-Code` header that nginx-ingress forwards.
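The error backend can itself be a plain nginx serving one page per status code (a sketch; the filenames are assumptions):

```nginx
server {
  listen 8080;

  location / {
    root /usr/share/nginx/html/errors;
    # ingress-nginx forwards X-Code (and X-Format) to the default backend;
    # serve 404.html, 503.html, etc., with a generic fallback page
    try_files /$http_x_code.html /error.html =404;
  }
}
```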

## When to Use Which

**Use a plain nginx Deployment** when you need a dedicated reverse proxy or static file server for a single application. You control the full `nginx.conf`, can add Lua modules, and it scales independently of other services. This is the right choice for an application-specific sidecar or frontend server.

**Use the nginx Ingress controller** when you need cluster-wide HTTP routing across multiple services. It provides a single entry point, shared TLS termination, and consistent annotations across all your Ingress resources. Do not deploy one per application -- deploy one (or two for HA) per cluster and let all services share it.

**Do not mix them up.** The `nginx` Docker image does not process Ingress resources. The Ingress controller uses `registry.k8s.io/ingress-nginx/controller`, a purpose-built binary that watches the Kubernetes API. Conversely, do not use the Ingress controller just to reverse-proxy one service -- a Deployment with a ConfigMap is lighter.

