{"page":{"agent_metadata":{"content_type":"guide","outputs":["multi-region-architecture-selection","service-mesh-federation-setup","cross-cluster-networking","multi-region-gitops-configuration"],"prerequisites":["kubernetes-basics","networking-concepts","service-mesh-basics","gitops-concepts"]},"categories":["kubernetes"],"content_plain":"Multi-Region Kubernetes# Running Kubernetes in a single region is a single point of failure at the infrastructure level. Region outages are rare but real \u0026ndash; AWS us-east-1 has gone down multiple times, taking entire companies offline. Multi-region Kubernetes addresses this, but it introduces complexity in networking, state management, and deployment coordination that you must handle deliberately.\nIndependent Clusters with Shared GitOps# The simplest multi-region pattern: run completely independent clusters in each region, deploy the same applications to all of them using GitOps, and route traffic with DNS or a global load balancer.\n+------------------+ | Global DNS / GLB | +--------+---------+ / | \\ +------+--+ +--+------+ +-+--------+ | us-east | | eu-west | | ap-south | | Cluster | | Cluster | | Cluster | +----------+ +---------+ +----------+ \\ | / +--------+-------+ | Git Repository | | (single source) | +-----------------+ArgoCD ApplicationSets deploy the same workloads across clusters:\napiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: web-app namespace: argocd spec: generators: - clusters: selector: matchLabels: env: production template: metadata: name: \u0026#39;web-app-{{name}}\u0026#39; spec: project: default source: repoURL: https://github.com/org/apps path: web-app/overlays/{{metadata.labels.region}} targetRevision: main destination: server: \u0026#39;{{server}}\u0026#39; namespace: web-appEach cluster gets region-specific overlays (Kustomize) for things like replica counts, resource limits, and region-specific config. 
The base manifests are identical.\nAdvantages: simplicity, no cross-cluster networking required, each cluster is fully independent. Disadvantages: no cross-cluster service discovery, no automatic failover at the service level, data must be replicated separately.\nService Mesh Federation# Istio Multi-Cluster# Istio supports two multi-cluster models. Primary-remote has one cluster running the control plane and others connecting to it. Multi-primary has independent control planes that share service discovery.\nMulti-primary is the recommended model for multi-region \u0026ndash; each cluster remains self-sufficient if the others go down.\n# Install Istio on cluster 1 (us-east) with multi-cluster enabled istioctl install --context=us-east -f - \u0026lt;\u0026lt;EOF apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: global: meshID: production-mesh multiCluster: clusterName: us-east network: network-east EOF # Create remote secret so clusters can discover each other\u0026#39;s services istioctl create-remote-secret --context=us-east --name=us-east | \\ kubectl apply --context=eu-west -f - istioctl create-remote-secret --context=eu-west --name=eu-west | \\ kubectl apply --context=us-east -f -\nAfter federation, a service in us-east can call a service in eu-west transparently. Istio handles cross-cluster load balancing, mTLS, and failover. The east-west gateway carries traffic between clusters.\nCost to be aware of: inter-region traffic charges apply. A chatty service calling another region on every request will generate significant egress bills.\nCilium ClusterMesh# Cilium connects multiple clusters at the network layer using its eBPF-based dataplane. 
Services in one cluster become reachable from another without application changes.\n# Enable ClusterMesh on both clusters cilium clustermesh enable --context us-east --service-type LoadBalancer cilium clustermesh enable --context eu-west --service-type LoadBalancer # Connect the clusters cilium clustermesh connect --context us-east --destination-context eu-west\nVerify connectivity:\ncilium clustermesh status --context us-east # Cluster: eu-west - Connected\nTo make a service available cross-cluster, annotate it:\napiVersion: v1 kind: Service metadata: name: api-gateway annotations: service.cilium.io/global: \u0026#34;true\u0026#34; service.cilium.io/shared: \u0026#34;true\u0026#34; spec: selector: app: api-gateway ports: - port: 8080\nCilium ClusterMesh is lighter than a full service mesh. It provides cross-cluster service discovery and load balancing without the sidecar overhead. It does not provide the traffic management features (retries, circuit breaking, canary routing) that Istio offers.\nCross-Cluster Networking with Submariner# Submariner creates encrypted tunnels between cluster networks, allowing pods in one cluster to directly reach pods and services in another. It works with any CNI.\n# Install the broker on a management cluster subctl deploy-broker --context management # Join each workload cluster to the broker subctl join broker-info.subm --context us-east --clusterid us-east subctl join broker-info.subm --context eu-west --clusterid eu-west\nSubmariner handles pod CIDR and service CIDR routing between clusters. It uses the Lighthouse component for cross-cluster DNS \u0026ndash; services get \u0026lt;service\u0026gt;.\u0026lt;namespace\u0026gt;.svc.clusterset.local DNS names.\n# From us-east cluster, resolve a service in eu-west nslookup api.production.svc.clusterset.local\nThe main limitation: Submariner requires non-overlapping pod and service CIDRs across clusters. If both clusters use 10.96.0.0/12 for services, you must re-IP one of them before connecting. 
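A simple allocation that avoids the overlap (illustrative values):\nus-east: pods 10.0.0.0/16, services 10.96.0.0/16\neu-west: pods 10.1.0.0/16, services 10.97.0.0/16\nap-south: pods 10.2.0.0/16, services 10.98.0.0/16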
Plan your CIDR allocation before deploying clusters, or enable Submariner\u0026#39;s Globalnet add-on, which works around overlapping CIDRs by translating traffic to per-cluster virtual global IPs.\nAdmiralty Multi-Cluster Scheduling# Admiralty takes a different approach: instead of connecting networks, it schedules pods across clusters. A pod created in the source cluster gets placed in a target cluster based on scheduling policies.\napiVersion: multicluster.admiralty.io/v1alpha1 kind: ClusterTarget metadata: name: eu-west spec: kubeconfigSecret: name: eu-west-kubeconfig --- apiVersion: multicluster.admiralty.io/v1alpha1 kind: Target metadata: name: eu-west namespace: production spec: clusterTarget: eu-west\nAnnotate pods that can be scheduled remotely:\nannotations: multicluster.admiralty.io/elect: \u0026#34;\u0026#34;\nAdmiralty creates a proxy pod locally and the real pod in the remote cluster. The proxy pod reports the real pod\u0026rsquo;s status. This is useful for burst capacity \u0026ndash; overflow to another cluster when the local one is full.\nLiqo for Resource Sharing# Liqo virtualizes remote clusters as local nodes. After peering, a remote cluster appears as a virtual node in kubectl get nodes. 
The scheduler can place pods there transparently.\n# Peer us-east with eu-west liqoctl peer --remoteurl https://eu-west-api:6443 \\ --remotekubeconfig eu-west.kubeconfig\nkubectl get nodes # NAME STATUS ROLES # node-1 Ready worker # node-2 Ready worker # liqo-eu-west Ready agent # Virtual node representing eu-west cluster\nUse node affinity or topology spread constraints to control placement:\ntopologySpreadConstraints: - maxSkew: 1 topologyKey: topology.liqo.io/type whenUnsatisfiable: DoNotSchedule\nWhen to Use Multi-Cluster vs Single Large Cluster# Use a single cluster when: you are in one region, your team is small, you do not have regulatory requirements forcing separation, and you can tolerate region-level outages.\nUse multi-cluster when: you need cross-region redundancy, teams need blast radius isolation, compliance requires data residency, or you are running at a scale where a single cluster\u0026rsquo;s control plane becomes a bottleneck (approximately 5000+ nodes).\nThe hidden cost of multi-cluster: every cluster needs its own monitoring, alerting, certificate management, secrets rotation, and upgrade cycle. Two clusters is not twice the work \u0026ndash; it is closer to three times, because you also need the coordination layer.\nTraffic Routing Between Clusters# Global traffic distribution requires an external layer. Options by cloud provider:\nAWS: Route53 with health checks and latency-based routing to ALBs in each region\nGCP: Multi Cluster Ingress (MultiClusterIngress and MultiClusterService resources) distributing to GKE clusters\nAzure: Azure Front Door or Traffic Manager pointing to AKS ingress endpoints\nMulti-cloud: Cloudflare Load Balancing or NS1 with health checks per endpoint\nThe routing layer must health-check each cluster independently. If a cluster\u0026rsquo;s ingress becomes unhealthy, traffic shifts to healthy clusters within the DNS TTL window. 
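As one concrete sketch using Route53 (the zone ID, health check ID, and IP below are placeholders):\naws route53 change-resource-record-sets \\\n --hosted-zone-id Z123EXAMPLE \\\n --change-batch \u0026#39;{\u0026#34;Changes\u0026#34;: [{\u0026#34;Action\u0026#34;: \u0026#34;UPSERT\u0026#34;, \u0026#34;ResourceRecordSet\u0026#34;: {\u0026#34;Name\u0026#34;: \u0026#34;app.example.com\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;A\u0026#34;, \u0026#34;SetIdentifier\u0026#34;: \u0026#34;us-east\u0026#34;, \u0026#34;Region\u0026#34;: \u0026#34;us-east-1\u0026#34;, \u0026#34;TTL\u0026#34;: 60, \u0026#34;HealthCheckId\u0026#34;: \u0026#34;REPLACE-WITH-HEALTH-CHECK-ID\u0026#34;, \u0026#34;ResourceRecords\u0026#34;: [{\u0026#34;Value\u0026#34;: \u0026#34;203.0.113.10\u0026#34;}]}}]}\u0026#39;\nRepeat with a SetIdentifier, Region, and health check for each other cluster; Route53 then answers queries with the lowest-latency healthy record.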
Use low TTLs (30-60 seconds) for faster failover, but be aware that some clients cache DNS aggressively.\nConfig Management Across Clusters# Each cluster needs region-specific configuration while sharing a common base. Kustomize overlays are the standard approach:\nclusters/\n base/ # Shared across all clusters\n  kustomization.yaml\n  deployment.yaml\n  service.yaml\n overlays/\n  us-east/\n   kustomization.yaml # patches: replica count, region env var\n  eu-west/\n   kustomization.yaml\n  ap-south/\n   kustomization.yaml\nSecrets should not be in Git. Use External Secrets Operator to pull from a regional secret store (AWS Secrets Manager per region, or a single Vault cluster with region-specific paths). This way, each cluster gets region-appropriate credentials without cross-region secret store dependencies.\n","date":"2026-02-22","description":"Patterns for running Kubernetes across regions: independent clusters with shared GitOps, Istio multi-cluster, Cilium ClusterMesh, Submariner, Admiralty scheduling, and Liqo resource sharing.","lastmod":"2026-02-22","levels":["intermediate","advanced"],"reading_time_minutes":6,"section":"knowledge","skills":["multi-region-architecture","service-mesh-federation","cross-cluster-networking","multi-cluster-gitops"],"tags":["multi-region","multi-cluster","istio","cilium","submariner","admiralty","liqo","gitops","service-mesh"],"title":"Multi-Region Kubernetes: Service Mesh Federation, Cross-Cluster Networking, and GitOps","tools":["kubectl","istioctl","cilium","argocd","helm","subctl"],"url":"https://agent-zone.ai/knowledge/kubernetes/multi-region-kubernetes/","word_count":1141}}