---
title: "Multiple Temporal Servers on Minikube: Multi-Cluster Setup"
description: "Deploy two independent Temporal Server instances on minikube using profiles, with cross-cluster Docker network bridging for local multi-cluster development and testing."
url: https://agent-zone.ai/knowledge/workflow-orchestration/temporal-multi-cluster-minikube/
section: knowledge
date: 2026-02-22
categories: ["workflow-orchestration"]
tags: ["temporal","multi-cluster","minikube","kubernetes","helm","networking","advanced-deployment"]
skills: ["multi-cluster-temporal-deployment","minikube-profiles","cross-cluster-networking"]
tools: ["temporal","minikube","helm","kubectl","docker"]
levels: ["advanced"]
word_count: 1692
formats:
  json: https://agent-zone.ai/knowledge/workflow-orchestration/temporal-multi-cluster-minikube/index.json
  html: https://agent-zone.ai/knowledge/workflow-orchestration/temporal-multi-cluster-minikube/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Multiple+Temporal+Servers+on+Minikube%3A+Multi-Cluster+Setup
---


# Multiple Temporal Servers on Minikube

Running two independent Temporal Server instances locally lets you develop and test cross-cluster patterns -- worker bridges, namespace replication, and multi-region failover -- without cloud infrastructure. This article walks through deploying two Temporal clusters on minikube using profiles and connecting them over Docker networking.

All configuration files and Makefile targets reference the companion repository at [github.com/statherm/temporal-examples](https://github.com/statherm/temporal-examples) in the `multi-cluster/` directory.

## Why Multiple Clusters?

A single Temporal cluster handles most use cases. You need multiple clusters when:

**Isolation boundaries.** Different teams or environments (dev, staging, prod) run on separate clusters so a bad deployment or runaway workflow in one cannot affect the others. Namespaces provide logical isolation; separate clusters provide physical isolation.

**Regional deployment.** Workflows that must run close to their data or users. A cluster in us-east processes US customer workflows; a cluster in eu-west processes EU customer workflows. Cross-cluster communication bridges these regions when needed.

**Blast radius reduction.** If one Temporal cluster's database fails, only the workflows on that cluster are affected. Other clusters continue operating independently.

**Compliance boundaries.** Data residency requirements may mandate that certain workflow histories never leave a specific geographic or network boundary. Separate clusters enforce this at the infrastructure level.

## Architecture Overview

The local setup runs two completely independent Temporal stacks:

```
┌─────────────────────────────────┐   ┌─────────────────────────────────┐
│ minikube: temporal-cluster-a    │   │ minikube: temporal-cluster-b    │
│                                 │   │                                 │
│  ┌────────────┐ ┌────────────┐  │   │  ┌────────────┐ ┌────────────┐  │
│  │ PostgreSQL │ │ Temporal   │  │   │  │ PostgreSQL │ │ Temporal   │  │
│  │            │ │ Server     │  │   │  │            │ │ Server     │  │
│  │ Port: 5432 │ │ gRPC: 7233 │  │   │  │ Port: 5432 │ │ gRPC: 7233 │  │
│  └────────────┘ │ Web:  8080 │  │   │  └────────────┘ │ Web:  8080 │  │
│                 └────────────┘  │   │                 └────────────┘  │
│                                 │   │                                 │
│  Host ports: 7233, 8080         │   │  Host ports: 7234, 8081         │
└─────────────────────────────────┘   └─────────────────────────────────┘
  Docker network:                       Docker network:
  temporal-cluster-a                    temporal-cluster-b
          │                                     │
          └──────── docker network connect ─────┘
```

Each cluster has its own PostgreSQL instance, its own Temporal Server, and its own minikube profile. They share nothing except the Docker daemon and, optionally, a bridged network.

## Resource Planning

Running two Temporal clusters with PostgreSQL is resource-intensive. Plan against these figures:

| Resource | Minimum | Recommended |
|---|---|---|
| CPU cores | 8 | 12 |
| RAM | 16 GB | 24 GB |
| Disk | 40 GB | 60 GB |

Both profiles run on the host's Docker daemon, so the minikube base image is shared and the second cluster starts faster. Workload images are not shared, though: each profile runs its own container runtime inside its node container, so expect the Temporal and PostgreSQL images to be pulled once per cluster, and budget disk for both copies.

Check your available resources before starting:

```bash
# macOS
sysctl -n hw.ncpu
sysctl -n hw.memsize | awk '{print $0/1073741824 " GB"}'

# Linux
nproc
free -h | grep Mem | awk '{print $2}'
```

## Setting Up Cluster A

Create the first minikube profile with enough resources for Temporal and PostgreSQL:

```bash
minikube start \
  --profile=temporal-cluster-a \
  --cpus=4 \
  --memory=8192 \
  --driver=docker \
  --kubernetes-version=v1.28.3
```

Switch to the profile and install the infrastructure:

```bash
# Set kubectl context
minikube profile temporal-cluster-a

# Add Helm repos
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add temporal https://charts.temporal.io
helm repo update

# Install PostgreSQL
helm install postgresql bitnami/postgresql \
  --set auth.postgresPassword=temporal \
  --set auth.database=temporal \
  --set primary.persistence.size=8Gi \
  --wait

# Wait for PostgreSQL to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=postgresql --timeout=120s

# Create the visibility database (the chart already created `temporal`
# via auth.database)
kubectl exec -it postgresql-0 -- env PGPASSWORD=temporal psql -U postgres -c "CREATE DATABASE temporal_visibility;"

# Install Temporal Server
helm install temporal temporal/temporal \
  --set server.replicaCount=1 \
  --set cassandra.enabled=false \
  --set mysql.enabled=false \
  --set postgresql.enabled=false \
  --set schema.setup.enabled=false \
  --set schema.update.enabled=false \
  --set server.config.persistence.default.driver=sql \
  --set server.config.persistence.default.sql.driver=postgres12 \
  --set server.config.persistence.default.sql.host=postgresql \
  --set server.config.persistence.default.sql.port=5432 \
  --set server.config.persistence.default.sql.database=temporal \
  --set server.config.persistence.default.sql.user=postgres \
  --set server.config.persistence.default.sql.password=temporal \
  --set server.config.persistence.visibility.driver=sql \
  --set server.config.persistence.visibility.sql.driver=postgres12 \
  --set server.config.persistence.visibility.sql.host=postgresql \
  --set server.config.persistence.visibility.sql.port=5432 \
  --set server.config.persistence.visibility.sql.database=temporal_visibility \
  --set server.config.persistence.visibility.sql.user=postgres \
  --set server.config.persistence.visibility.sql.password=temporal \
  --wait --timeout=300s
```
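
One caveat: with `schema.setup.enabled=false` and `schema.update.enabled=false`, the chart skips its schema jobs, so the server pods cannot become ready until the schema exists in both databases -- if the install's `--wait` times out, this is the usual cause. Either set those two flags to `true` and let the chart run the jobs, or initialize the schema manually. A minimal sketch of the manual route, assuming the chart's `temporal-admintools` pod and the default schema paths shipped in its image (verify both against your chart version):

```bash
# Run temporal-sql-tool from the admin-tools pod; the SQL_* env vars
# supply the connection, the schema directories are the image defaults
kubectl exec deploy/temporal-admintools -- bash -c '
  export SQL_PLUGIN=postgres12 SQL_HOST=postgresql SQL_PORT=5432 \
         SQL_USER=postgres SQL_PASSWORD=temporal

  temporal-sql-tool --db temporal setup-schema -v 0.0
  temporal-sql-tool --db temporal update-schema \
    -d /etc/temporal/schema/postgresql/v12/temporal/versioned

  temporal-sql-tool --db temporal_visibility setup-schema -v 0.0
  temporal-sql-tool --db temporal_visibility update-schema \
    -d /etc/temporal/schema/postgresql/v12/visibility/versioned
'
```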

Set up port forwarding for Cluster A:

```bash
# gRPC (for Temporal clients and workers)
kubectl port-forward svc/temporal-frontend 7233:7233 &

# Web UI
kubectl port-forward svc/temporal-web 8080:8080 &
```
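
These forwards die whenever the target pod restarts. For longer sessions, a disposable retry loop keeps the gRPC forward alive (a simple sketch, not a supervised service):

```bash
# Re-establish the forward whenever it exits (e.g., after a pod restart)
while true; do
  kubectl port-forward svc/temporal-frontend 7233:7233
  sleep 2
done &
```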

Verify the cluster is healthy:

```bash
temporal operator cluster health --address localhost:7233
temporal operator namespace list --address localhost:7233
```

Access the Web UI at `http://localhost:8080`.

## Setting Up Cluster B

The process is identical but uses a different profile name and different host ports:

```bash
minikube start \
  --profile=temporal-cluster-b \
  --cpus=4 \
  --memory=8192 \
  --driver=docker \
  --kubernetes-version=v1.28.3
```

```bash
minikube profile temporal-cluster-b

# Install PostgreSQL (same commands as Cluster A)
helm install postgresql bitnami/postgresql \
  --set auth.postgresPassword=temporal \
  --set auth.database=temporal \
  --set primary.persistence.size=8Gi \
  --wait

kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=postgresql --timeout=120s
kubectl exec -it postgresql-0 -- env PGPASSWORD=temporal psql -U postgres -c "CREATE DATABASE temporal_visibility;"

# Install Temporal Server (same Helm values as Cluster A; the schema
# caveat above applies here too)
helm install temporal temporal/temporal \
  --set server.replicaCount=1 \
  --set cassandra.enabled=false \
  --set mysql.enabled=false \
  --set postgresql.enabled=false \
  --set schema.setup.enabled=false \
  --set schema.update.enabled=false \
  --set server.config.persistence.default.driver=sql \
  --set server.config.persistence.default.sql.driver=postgres12 \
  --set server.config.persistence.default.sql.host=postgresql \
  --set server.config.persistence.default.sql.port=5432 \
  --set server.config.persistence.default.sql.database=temporal \
  --set server.config.persistence.default.sql.user=postgres \
  --set server.config.persistence.default.sql.password=temporal \
  --set server.config.persistence.visibility.driver=sql \
  --set server.config.persistence.visibility.sql.driver=postgres12 \
  --set server.config.persistence.visibility.sql.host=postgresql \
  --set server.config.persistence.visibility.sql.port=5432 \
  --set server.config.persistence.visibility.sql.database=temporal_visibility \
  --set server.config.persistence.visibility.sql.user=postgres \
  --set server.config.persistence.visibility.sql.password=temporal \
  --wait --timeout=300s
```

Port forwarding for Cluster B uses different host ports:

```bash
# gRPC on 7234 (not 7233)
kubectl port-forward svc/temporal-frontend 7234:7233 &

# Web UI on 8081 (not 8080)
kubectl port-forward svc/temporal-web 8081:8080 &
```

Verify Cluster B:

```bash
temporal operator cluster health --address localhost:7234
temporal operator namespace list --address localhost:7234
```

Access Cluster B's Web UI at `http://localhost:8081`.

## Docker Network Bridging

When using the Docker driver, each minikube profile creates its own Docker network, named after the profile. By default these networks are isolated -- pods in Cluster A cannot reach pods in Cluster B. For cross-cluster communication, you need to bridge them.

First, identify the Docker networks:

```bash
docker network ls | grep temporal-cluster
# Output:
# abc123  temporal-cluster-a  bridge  local
# def456  temporal-cluster-b  bridge  local
```

Connect each minikube container to the other cluster's network:

```bash
# Get the minikube container names
CLUSTER_A_CONTAINER=$(docker ps --filter "name=temporal-cluster-a" --format "{{.Names}}")
CLUSTER_B_CONTAINER=$(docker ps --filter "name=temporal-cluster-b" --format "{{.Names}}")

# Connect Cluster A's container to Cluster B's network
docker network connect temporal-cluster-b "$CLUSTER_A_CONTAINER"

# Connect Cluster B's container to Cluster A's network
docker network connect temporal-cluster-a "$CLUSTER_B_CONTAINER"
```
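
To confirm the bridge, inspect each network -- both minikube node containers should appear:

```bash
# Each network should now list both node containers
docker network inspect temporal-cluster-a \
  --format '{{range .Containers}}{{.Name}} {{end}}'
docker network inspect temporal-cluster-b \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```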

After bridging, pods in either cluster can reach the other cluster's minikube node IP. Find the node IPs:

```bash
# Cluster A's IP
minikube ip --profile temporal-cluster-a

# Cluster B's IP
minikube ip --profile temporal-cluster-b
```

To reach Cluster B's Temporal frontend from within Cluster A, dial `<cluster-b-ip>:<nodeport>`. ClusterIP services are not routable across the bridge, so expose the frontend on a NodePort first.
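
A sketch of that route, exposing Cluster B's frontend on a NodePort and computing the address Cluster A's pods would dial (service and context names follow this article's setup):

```bash
# Expose Cluster B's frontend outside its ClusterIP network
kubectl --context=temporal-cluster-b expose service temporal-frontend \
  --type=NodePort --name=temporal-frontend-nodeport --port=7233

# Assemble the address reachable from Cluster A after bridging
NODE_PORT=$(kubectl --context=temporal-cluster-b get svc temporal-frontend-nodeport \
  -o jsonpath='{.spec.ports[0].nodePort}')
CLUSTER_B_IP=$(minikube ip --profile temporal-cluster-b)
echo "Cluster B frontend: ${CLUSTER_B_IP}:${NODE_PORT}"
```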

### DNS Considerations

Kubernetes DNS is cluster-local. A pod in Cluster A cannot resolve `temporal-frontend.default.svc.cluster.local` for Cluster B. Use IP addresses or set up CoreDNS stub zones pointing to the other cluster's DNS server. For local development, IP addresses are simpler.

## Verifying Both Clusters

Run a quick health check across both clusters:

```bash
#!/bin/bash
echo "=== Cluster A ==="
temporal operator cluster health --address localhost:7233
temporal operator namespace list --address localhost:7233

echo ""
echo "=== Cluster B ==="
temporal operator cluster health --address localhost:7234
temporal operator namespace list --address localhost:7234
```

Both should report `SERVING` status and show the `default` namespace.

## Makefile Targets

The companion repository provides Makefile targets for managing both clusters:

```makefile
PROFILE_A := temporal-cluster-a
PROFILE_B := temporal-cluster-b

.PHONY: cluster-a-up cluster-b-up clusters-up clusters-down clusters-status clusters-pause clusters-resume

cluster-a-up:
	minikube start --profile=$(PROFILE_A) --cpus=4 --memory=8192 --driver=docker
	minikube profile $(PROFILE_A)
	$(MAKE) _install-temporal

cluster-b-up:
	minikube start --profile=$(PROFILE_B) --cpus=4 --memory=8192 --driver=docker
	minikube profile $(PROFILE_B)
	$(MAKE) _install-temporal

clusters-up: cluster-a-up cluster-b-up
	$(MAKE) _bridge-networks

clusters-down:
	minikube delete --profile=$(PROFILE_A)
	minikube delete --profile=$(PROFILE_B)

clusters-status:
	@echo "=== Profiles ==="
	@minikube profile list
	@echo ""
	@echo "=== Cluster A Pods ==="
	@kubectl --context=$(PROFILE_A) get pods
	@echo ""
	@echo "=== Cluster B Pods ==="
	@kubectl --context=$(PROFILE_B) get pods

clusters-pause:
	minikube stop --profile=$(PROFILE_A)
	minikube stop --profile=$(PROFILE_B)

clusters-resume:
	minikube start --profile=$(PROFILE_A)
	minikube start --profile=$(PROFILE_B)
	$(MAKE) _bridge-networks

_install-temporal:
	helm repo add bitnami https://charts.bitnami.com/bitnami --force-update
	helm repo add temporal https://charts.temporal.io --force-update
	helm install postgresql bitnami/postgresql \
		--set auth.postgresPassword=temporal \
		--set auth.database=temporal \
		--set primary.persistence.size=8Gi --wait
	kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=postgresql --timeout=120s
	kubectl exec postgresql-0 -- env PGPASSWORD=temporal psql -U postgres -c "CREATE DATABASE temporal_visibility;"
	helm install temporal temporal/temporal -f helm/temporal-values.yaml --wait --timeout=300s

# "-" tells make to ignore "already connected" errors when re-bridging
_bridge-networks:
	-docker network connect $(PROFILE_B) $$(docker ps --filter "name=$(PROFILE_A)" --format "{{.Names}}")
	-docker network connect $(PROFILE_A) $$(docker ps --filter "name=$(PROFILE_B)" --format "{{.Names}}")
```

Run `make clusters-up` to bring up both clusters with networking in one command. Run `make clusters-pause` when you are done for the day -- this is far faster than `make clusters-down` and preserves all state.

## Resource Management Tips

Two Temporal clusters consume significant resources. Manage them carefully:

**Pause when not in use.** `minikube stop --profile=temporal-cluster-a` shuts the node down without deleting anything; resume with `minikube start --profile=temporal-cluster-a`. This is the single most effective way to reclaim resources.

**Monitor disk usage.** Each profile's Docker volumes accumulate over time. Check usage with:

```bash
docker system df
minikube ssh --profile=temporal-cluster-a -- df -h /
```

**Delete vs stop.** `minikube delete` removes everything -- containers, volumes, configuration. Use it only when you want a fresh start. `minikube stop` preserves all state for later resume.

**Load images into each profile.** Each profile runs its own container runtime inside its node container, so a locally built image must be loaded into every profile that needs it -- loading it into Cluster A does not make it visible to Cluster B.
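
A sketch, using a hypothetical locally built worker image:

```bash
# my-worker:dev is a placeholder; load it into each profile separately
minikube image load my-worker:dev --profile=temporal-cluster-a
minikube image load my-worker:dev --profile=temporal-cluster-b
```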

## Troubleshooting

### Insufficient Resources

If pods stay in `Pending` state, check resource availability:

```bash
kubectl describe node | grep -A 5 "Allocated resources"
```

Reduce Temporal's resource requests in the Helm values if needed. The companion repository's `helm/temporal-values.yaml` uses minimal resource requests suitable for local development.
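
For example, one way to lower the server-wide requests in place (the `server.resources` path follows the chart's values layout; confirm with `helm show values temporal/temporal` for your chart version):

```bash
# Illustrative values; tune to what your host can spare
helm upgrade temporal temporal/temporal --reuse-values \
  --set server.resources.requests.cpu=100m \
  --set server.resources.requests.memory=512Mi \
  --wait
```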

### Docker Network Conflicts

If `docker network connect` fails with "already connected", the bridge is already in place and the error is harmless. A genuine subnet conflict cannot be fixed by reconnecting -- recreate one profile with a non-overlapping `--subnet` on `minikube start`. For a stale connection (for example, after a stop/start cycle), disconnect and reconnect:

```bash
docker network disconnect temporal-cluster-b "$CLUSTER_A_CONTAINER"
docker network connect temporal-cluster-b "$CLUSTER_A_CONTAINER"
```

### Port Collisions

If port forwarding fails with "address already in use", find and kill the existing forwarder:

```bash
lsof -ti:7233 | xargs kill -9   # Cluster A gRPC
lsof -ti:8080 | xargs kill -9   # Cluster A Web UI
lsof -ti:7234 | xargs kill -9   # Cluster B gRPC
lsof -ti:8081 | xargs kill -9   # Cluster B Web UI
```

### Profile Confusion

The most common mistake is running commands against the wrong cluster. Always verify your current context:

```bash
minikube profile list
kubectl config current-context
```

Use explicit `--profile` and `--context` flags rather than relying on the default context.
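
Two throwaway shell helpers make the target explicit in every command (a sketch; the names `kca` and `kcb` are invented here):

```bash
# Always name the cluster you are targeting
kca() { kubectl --context=temporal-cluster-a "$@"; }
kcb() { kubectl --context=temporal-cluster-b "$@"; }

kca get pods    # Cluster A
kcb get pods    # Cluster B
```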

## Next Steps

With two clusters running, you are ready to build cross-cluster communication patterns. [Cross-Cluster Communication: Architecture and Patterns](../temporal-cross-cluster-communication/) covers the approaches -- namespace replication, worker bridges, and workflow-level coordination. [Building a Worker Bridge](../temporal-cross-cluster-worker-bridge/) implements the bridge pattern on this infrastructure.

For background on single-cluster HA deployment, see [Temporal High Availability](../temporal-ha-cluster/). For minikube fundamentals, see [Minikube Setup](../temporal-minikube-setup/) and [Minikube Multi-Cluster Profiles](../../kubernetes/minikube-multi-cluster-profiles/).

