Minikube Application Deployment Patterns: Production-Ready Manifests for Four Common Workloads

Choosing the Right Workload Type#

Every application fits one of four deployment patterns. Choosing the wrong one creates problems that are hard to fix later: a database deployed as a Deployment without persistent storage loses its data on reschedule, and a batch job deployed as a Deployment wastes resources running 24/7.

| Pattern | Kubernetes Resource | Use When |
| --- | --- | --- |
| Stateless web app | Deployment + Service + Ingress | HTTP APIs, frontends, microservices |
| Stateful app | StatefulSet + Headless Service + PVC | Databases, caches with persistence, message brokers |
| Background worker | Deployment (no Service) | Queue consumers, event processors, stream readers |
| Batch processing | CronJob | Scheduled reports, data cleanup, periodic syncs |

Pattern 1: Stateless Web App#

A web API with no persistent state that can be scaled horizontally. Any pod can handle any request.
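
A minimal sketch of the pattern, assuming a placeholder nginx image and a hypothetical hostname (web.example.local); swap in your own image, container port, and Ingress host:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 250m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80

On Minikube, the Ingress only routes traffic after the ingress addon is enabled (minikube addons enable ingress) and the hostname resolves to the cluster, for example via an /etc/hosts entry pointing at the output of minikube ip.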

Running Redis on Kubernetes#

Redis on Kubernetes ranges from dead simple (single pod for caching) to operationally complex (Redis Cluster with persistence). The right choice depends on whether you need data durability, high availability, or just a fast throwaway cache.

Single-Instance Redis with Persistence#

For development or small workloads, a single Redis Deployment with a PVC is enough:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
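        # AOF persistence on, memory capped at 256MB, least-recently-used eviction across all keys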
        command: ["redis-server", "--appendonly", "yes", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data
        resources:
          requests:
            cpu: 100m
            memory: 300Mi
          limits:
            cpu: 500m
            memory: 350Mi
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

Set the Redis memory cap (--maxmemory) lower than the container memory limit. If Redis is allowed to fill the entire 350Mi container limit, the kernel OOM-kills the process during background saves, when Redis forks and its memory usage can temporarily approach double because of copy-on-write. A safe ratio: set maxmemory to 60-75% of the container memory limit; the manifest above uses 256mb against a 350Mi limit, roughly 73%.
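
The same ratio scales with the container size. The fragment below is an illustrative container-spec excerpt, not part of the manifest above, and the 512Mi / 384mb values are hypothetical: 384mb is 75% of a 512Mi limit.

        # hypothetical sizing: maxmemory set to 75% of the container memory limit
        command: ["redis-server", "--appendonly", "yes", "--maxmemory", "384mb", "--maxmemory-policy", "allkeys-lru"]
        resources:
          limits:
            cpu: 500m
            memory: 512Mi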

Redis on Kubernetes: Deployment Patterns, Operators, and Production Configuration#

Running Redis on Kubernetes requires more thought than deploying a stateless application. Redis is stateful and memory-sensitive, and its clustering model makes assumptions about network identity that conflict with Kubernetes defaults. This guide covers the deployment options from simplest to most complex, the configuration details that matter in production, and the mistakes that cause outages.

Deployment Options#

There are three main approaches to deploying Redis on Kubernetes, each with different tradeoffs.