---
title: "Cloud Multi-Region Architecture: AWS, GCP, and Azure Patterns with Terraform"
description: "Multi-region deployment patterns for each major cloud: AWS with Route53 and Aurora Global, GCP with Multi Cluster Ingress and Cloud Spanner, Azure with Front Door and Cosmos DB, with real Terraform snippets and cost breakdowns."
url: https://agent-zone.ai/knowledge/kubernetes/cloud-multi-region-patterns/
section: knowledge
date: 2026-02-22
categories: ["kubernetes"]
tags: ["multi-region","aws","gcp","azure","terraform","route53","aurora","spanner","cosmos-db","vpc-peering","transit-gateway"]
skills: ["multi-region-aws","multi-region-gcp","multi-region-azure","cross-region-networking","cost-estimation"]
tools: ["terraform","kubectl","aws","gcloud","az"]
levels: ["intermediate","advanced"]
word_count: 1613
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/cloud-multi-region-patterns/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/cloud-multi-region-patterns/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Cloud+Multi-Region+Architecture%3A+AWS%2C+GCP%2C+and+Azure+Patterns+with+Terraform
---


# Cloud Multi-Region Architecture Patterns

Multi-region is not just running clusters in two places. It is the networking between them, the data replication strategy, the traffic routing, and the cost of keeping it all running. Each cloud provider has different primitives and different pricing models. Here is how to build it on each.

The three pillars: a Kubernetes cluster per region for compute, a global traffic routing layer to direct users to the nearest healthy region, and a multi-region database for state. Get any one wrong and multi-region gives you complexity without resilience.

## AWS: Route53 + ALB + EKS + Aurora Global

EKS cluster per region, ALB in front of each, Route53 for global routing, Aurora Global Database for cross-region data. AWS has no Anycast global HTTP load balancer comparable to GCP's (Global Accelerator provides static Anycast IPs, but at layer 4, without HTTP routing) -- Route53 DNS routing remains the primary mechanism, so failover speed is bounded by DNS TTLs and client-side caching.

### Terraform: EKS per Region

```hcl
module "eks_us_east" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "production-us-east-1"
  cluster_version = "1.31"
  vpc_id          = module.vpc_us_east.vpc_id
  subnet_ids      = module.vpc_us_east.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["m7g.xlarge"]
      min_size       = 3
      max_size       = 10
      desired_size   = 3
    }
  }
}

module "eks_eu_west" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  providers = { aws = aws.eu_west }

  cluster_name    = "production-eu-west-1"
  cluster_version = "1.31"
  vpc_id          = module.vpc_eu_west.vpc_id
  subnet_ids      = module.vpc_eu_west.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["m7g.xlarge"]
      min_size       = 3
      max_size       = 10
      desired_size   = 3
    }
  }
}
```
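The `providers = { aws = aws.eu_west }` line assumes an aliased provider is defined at the root module; a minimal sketch:

```hcl
# Default provider handles us-east-1; the alias covers eu-west-1.
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "eu_west"
  region = "eu-west-1"
}
```

The `vpc_eu_west` module referenced above needs the same `providers` wiring as the EKS module.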

### Route53 Latency-Based Routing

Route53 latency routing sends users to the lowest-latency region. The critical setting is `evaluate_target_health: true` -- without it, Route53 continues routing to a dead ALB.

```hcl
resource "aws_route53_record" "api_us_east" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "api.example.com"
  type           = "A"
  set_identifier = "us-east-1"

  alias {
    name                   = module.alb_us_east.dns_name
    zone_id                = module.alb_us_east.zone_id
    evaluate_target_health = true
  }

  latency_routing_policy {
    region = "us-east-1"
  }
}

resource "aws_route53_record" "api_eu_west" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "api.example.com"
  type           = "A"
  set_identifier = "eu-west-1"

  alias {
    name                   = module.alb_eu_west.dns_name
    zone_id                = module.alb_eu_west.zone_id
    evaluate_target_health = true
  }

  latency_routing_policy {
    region = "eu-west-1"
  }
}

# Standalone health check, attachable to records via health_check_id;
# the alias records above already rely on evaluate_target_health.
resource "aws_route53_health_check" "us_east" {
  fqdn              = module.alb_us_east.dns_name
  port              = 443
  type              = "HTTPS"
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 10
}
```
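To see why DNS bounds failover speed, a back-of-the-envelope model -- the 60-second TTL is an assumed client-side cache value, not something configured above:

```python
def worst_case_failover_seconds(failure_threshold: int,
                                request_interval: int,
                                dns_ttl: int) -> int:
    """Rough upper bound: Route53 must observe `failure_threshold`
    consecutive failed probes before pulling the record, then clients
    keep using the stale answer until their cached TTL expires."""
    detection = failure_threshold * request_interval
    return detection + dns_ttl

# Values from the health check above, plus an assumed 60s record TTL.
print(worst_case_failover_seconds(3, 10, 60))  # 90
```

Ninety seconds is optimistic: resolvers and operating systems that ignore TTLs can stretch this much further.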

### Aurora Global Database

Aurora Global uses storage-level replication -- faster than logical replication. Writes go to the primary region only.

```hcl
resource "aws_rds_global_cluster" "main" {
  global_cluster_identifier = "production-global"
  engine                    = "aurora-postgresql"
  engine_version            = "16.4"
  storage_encrypted         = true
}

resource "aws_rds_cluster" "primary" {
  cluster_identifier        = "production-us-east"
  global_cluster_identifier = aws_rds_global_cluster.main.id
  engine                    = "aurora-postgresql"
  engine_version            = "16.4"
  master_username           = "dbadmin" # "admin" is reserved in PostgreSQL
  master_password           = var.db_password
  db_subnet_group_name      = aws_db_subnet_group.us_east.name
}

resource "aws_rds_cluster" "secondary" {
  provider                  = aws.eu_west
  cluster_identifier        = "production-eu-west"
  global_cluster_identifier = aws_rds_global_cluster.main.id
  engine                    = "aurora-postgresql"
  engine_version            = "16.4"
  db_subnet_group_name      = aws_db_subnet_group.eu_west.name

  depends_on = [aws_rds_cluster.primary]
}
```

Aurora Global replicates with under 1 second lag. Failover promotes the secondary to primary in under a minute. The secondary is read-only until promoted -- your application must handle read/write splitting.
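That splitting can be as simple as choosing a connection string by statement type. A sketch with placeholder endpoint names -- substitute your cluster's actual writer and reader endpoints:

```python
# Placeholder endpoints, not real Aurora hostnames.
WRITER = "production-us-east.cluster-XXXX.us-east-1.rds.amazonaws.com"
READERS = {
    "us-east-1": "production-us-east.cluster-ro-XXXX.us-east-1.rds.amazonaws.com",
    "eu-west-1": "production-eu-west.cluster-ro-XXXX.eu-west-1.rds.amazonaws.com",
}

def endpoint_for(statement: str, region: str) -> str:
    """Route SELECTs to the local regional reader; everything else
    must go to the primary region's writer endpoint."""
    if statement.lstrip().upper().startswith("SELECT"):
        return READERS.get(region, WRITER)
    return WRITER
```

Libraries like SQLAlchemy or pgbouncer-based routing can do this more robustly, but the decision logic is the same.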

## GCP: Multi Cluster Ingress + GKE + Cloud Spanner

GCP has the strongest native multi-region story. Its global load balancer uses Anycast IPs -- failover does not depend on DNS propagation. GKE Fleet ties clusters together, and Spanner provides globally consistent reads and writes without read-replica limitations.

### Terraform: GKE Clusters with Fleet Membership

Fleet membership enables Multi Cluster Ingress. Each cluster registers with the fleet, and a config cluster manages cross-cluster ingress resources.

```hcl
resource "google_container_cluster" "us_central" {
  name     = "production-us-central1"
  location = "us-central1"

  fleet {
    project = var.project_id
  }

  node_config {
    machine_type = "e2-standard-4"
  }

  initial_node_count = 3
}

resource "google_container_cluster" "eu_west" {
  name     = "production-europe-west1"
  location = "europe-west1"

  fleet {
    project = var.project_id
  }

  node_config {
    machine_type = "e2-standard-4"
  }

  initial_node_count = 3
}
```

### Multi Cluster Ingress

MCI uses a config cluster to define ingress resources that route to services across all fleet clusters:

```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    networking.gke.io/static-ip: "34.120.x.x"
spec:
  template:
    spec:
      backend:
        serviceName: api-multiclusterservice
        servicePort: 8080
---
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: api-multiclusterservice
  namespace: production
spec:
  template:
    spec:
      selector:
        app: api
      ports:
      - port: 8080
  clusters:
  - link: "us-central1/production-us-central1"
  - link: "europe-west1/production-europe-west1"
```
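These resources only reconcile once the `multiclusteringress` fleet feature is enabled and pointed at the config cluster. A Terraform sketch, assuming the `fleet` blocks above created the memberships (verify the `fleet[0].membership` attribute against your provider version):

```hcl
# Enables Multi Cluster Ingress; us-central1 acts as the config cluster.
resource "google_gke_hub_feature" "mci" {
  name     = "multiclusteringress"
  location = "global"

  spec {
    multiclusteringress {
      config_membership = google_container_cluster.us_central.fleet[0].membership
    }
  }
}
```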

GCP's global load balancer automatically routes to the nearest healthy cluster. No DNS TTL delays -- routing changes happen at the load balancer level within seconds.

### Cloud Spanner

```hcl
resource "google_spanner_instance" "main" {
  name         = "production"
  config       = "nam-eur-asia1"  # Multi-region: US, Europe, Asia
  display_name = "Production"
  num_nodes    = 3
}

resource "google_spanner_database" "app" {
  instance = google_spanner_instance.main.name
  name     = "app"
}
```

Spanner provides globally consistent reads and writes -- no read/write splitting needed. The cost is the catch: $0.90/node/hour is the *regional* rate (roughly $1,944/month for 3 nodes), and multi-region configurations such as `nam-eur-asia1` are billed at a multiple of that, before storage and operations.

## Azure: Front Door + AKS + Cosmos DB

Azure Front Door operates at layer 7 with Anycast, similar to GCP's global LB -- SSL offloading, WAF, and automatic failover built in. For non-HTTP traffic, Traffic Manager provides DNS-based routing. Cosmos DB offers tunable consistency from eventual to strong, with three levels in between.

### Terraform: AKS per Region

```hcl
resource "azurerm_kubernetes_cluster" "east_us" {
  name                = "production-eastus"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.east_us.name
  dns_prefix          = "prod-eastus"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
  }

  identity { type = "SystemAssigned" }
}

resource "azurerm_kubernetes_cluster" "west_eu" {
  name                = "production-westeurope"
  location            = "westeurope"
  resource_group_name = azurerm_resource_group.west_eu.name
  dns_prefix          = "prod-westeu"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
  }

  identity { type = "SystemAssigned" }
}
```

### Azure Front Door

```hcl
resource "azurerm_cdn_frontdoor_profile" "main" {
  name                = "production-fd"
  resource_group_name = azurerm_resource_group.global.name
  sku_name            = "Premium_AzureFrontDoor"
}

resource "azurerm_cdn_frontdoor_origin_group" "api" {
  name                     = "api-origins"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.main.id

  load_balancing {
    sample_size                 = 4
    successful_samples_required = 3
    latency_in_milliseconds     = 50
  }

  health_probe {
    path                = "/healthz"
    protocol            = "Https"
    interval_in_seconds = 10
  }
}

# host_name must be a public hostname serving application traffic (the
# ingress controller's DNS name, here a hypothetical variable) -- the AKS
# `fqdn` attribute is the Kubernetes API server, not an app endpoint.
resource "azurerm_cdn_frontdoor_origin" "east_us" {
  name                           = "eastus"
  cdn_frontdoor_origin_group_id  = azurerm_cdn_frontdoor_origin_group.api.id
  host_name                      = var.east_us_ingress_host
  certificate_name_check_enabled = true
  priority                       = 1
  weight                         = 50
}

resource "azurerm_cdn_frontdoor_origin" "west_eu" {
  name                           = "westeurope"
  cdn_frontdoor_origin_group_id  = azurerm_cdn_frontdoor_origin_group.api.id
  host_name                      = var.west_eu_ingress_host
  certificate_name_check_enabled = true
  priority                       = 1
  weight                         = 50
}
```
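The profile, origin group, and origins alone serve no traffic -- Front Door also needs an endpoint and a route tying them together. A sketch (names are illustrative; check attribute names against your azurerm provider version):

```hcl
resource "azurerm_cdn_frontdoor_endpoint" "api" {
  name                     = "production-api"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.main.id
}

resource "azurerm_cdn_frontdoor_route" "api" {
  name                          = "api-route"
  cdn_frontdoor_endpoint_id     = azurerm_cdn_frontdoor_endpoint.api.id
  cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.api.id
  cdn_frontdoor_origin_ids = [
    azurerm_cdn_frontdoor_origin.east_us.id,
    azurerm_cdn_frontdoor_origin.west_eu.id,
  ]

  supported_protocols    = ["Http", "Https"]
  patterns_to_match      = ["/*"]
  forwarding_protocol    = "HttpsOnly"
  link_to_default_domain = true
}
```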

### Cosmos DB Multi-Region

```hcl
resource "azurerm_cosmosdb_account" "main" {
  name                = "production-cosmos"
  location            = "eastus"
  resource_group_name = azurerm_resource_group.global.name
  offer_type          = "Standard"
  kind                = "GlobalDocumentDB"

  consistency_policy {
    consistency_level = "BoundedStaleness"
    max_staleness_prefix    = 100000
    max_interval_in_seconds = 300
  }

  geo_location {
    location          = "eastus"
    failover_priority = 0
  }

  geo_location {
    location          = "westeurope"
    failover_priority = 1
  }

  # Renamed to automatic_failover_enabled in azurerm 4.x.
  enable_automatic_failover = true
}
```

Cosmos DB multi-region writes must be enabled explicitly (`enable_multiple_write_locations` in the azurerm provider). With single-region write, the secondary is read-only and fails over automatically. Multi-region writes add conflict-resolution complexity but eliminate the cross-region hop for writes originating near the secondary.

## Cross-Region Networking

Your clusters need to reach shared services -- databases, caches, message queues -- that may live in a different region's VPC. The approach differs between clouds.

### AWS Transit Gateway

AWS VPCs are region-scoped. Transit Gateway acts as a regional hub for VPC connectivity. For cross-region, peer two Transit Gateways together.

```hcl
resource "aws_ec2_transit_gateway" "main" {
  description = "Cross-region transit"
}

# A second gateway, aws_ec2_transit_gateway.eu_west, is defined the
# same way under the aws.eu_west provider alias.
resource "aws_ec2_transit_gateway_peering_attachment" "cross_region" {
  peer_region             = "eu-west-1"
  peer_transit_gateway_id = aws_ec2_transit_gateway.eu_west.id
  transit_gateway_id      = aws_ec2_transit_gateway.main.id
}

# The peer region must accept the attachment before routes can be added.
resource "aws_ec2_transit_gateway_peering_attachment_accepter" "accept" {
  provider                      = aws.eu_west
  transit_gateway_attachment_id = aws_ec2_transit_gateway_peering_attachment.cross_region.id
}
```

Transit Gateway costs roughly $0.02/GB for data processed plus $0.05/hour per attachment (us-east-1 list prices -- check your region). For high-throughput cross-region traffic this adds up: a workload pushing 1 TB/month cross-region accrues about $20 in data-processing fees alone, on top of standard inter-region data transfer charges.

### GCP: Global VPCs

GCP VPCs are global by default -- subnets in different regions within the same VPC communicate without peering. A pod in us-central1 can reach a database in europe-west1 using internal IPs. Cross-project networking uses Shared VPC (recommended) or VPC peering (simpler but limited to 25 networks).
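The global-VPC model shows up directly in Terraform: one network, a subnet per region, and no peering resources. A sketch with illustrative CIDRs:

```hcl
resource "google_compute_network" "main" {
  name                    = "production"
  auto_create_subnetworks = false
}

# Regional subnets in the same global network route to each
# other over internal IPs with no peering configuration.
resource "google_compute_subnetwork" "us_central" {
  name          = "production-us-central1"
  region        = "us-central1"
  network       = google_compute_network.main.id
  ip_cidr_range = "10.0.0.0/20"
}

resource "google_compute_subnetwork" "eu_west" {
  name          = "production-europe-west1"
  region        = "europe-west1"
  network       = google_compute_network.main.id
  ip_cidr_range = "10.0.16.0/20"
}
```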

### Azure VNet Peering

Azure VNets are region-scoped. Global VNet peering connects VNets across regions. Peering is non-transitive -- if A peers with B and B with C, A cannot reach C. For hub-and-spoke, use Azure Virtual WAN or a hub VNet with peering to each spoke.

```hcl
# Peering must be created in both directions to pass traffic.
resource "azurerm_virtual_network_peering" "east_to_west" {
  name                      = "east-to-west"
  resource_group_name       = azurerm_resource_group.east_us.name
  virtual_network_name      = azurerm_virtual_network.east_us.name
  remote_virtual_network_id = azurerm_virtual_network.west_eu.id
}

resource "azurerm_virtual_network_peering" "west_to_east" {
  name                      = "west-to-east"
  resource_group_name       = azurerm_resource_group.west_eu.name
  virtual_network_name      = azurerm_virtual_network.west_eu.name
  remote_virtual_network_id = azurerm_virtual_network.east_us.id
}
```

## Choosing Between Providers for Multi-Region

**Best global load balancing:** GCP. Its Anycast-based global LB provides instant failover without DNS TTL limitations. Azure Front Door is close. AWS Route53 relies on DNS, which means failover is limited by client-side DNS caching.

**Simplest cross-region networking:** GCP. Global VPCs eliminate the peering and Transit Gateway complexity of AWS and Azure.

**Most flexible multi-region database:** Azure Cosmos DB, with five consistency levels from eventual to strong. Aurora is simpler if you only need PostgreSQL or MySQL. Spanner is the strongest if you need globally consistent transactions.

**Best for hybrid and multi-cloud:** AWS, due to the broadest ecosystem of third-party tooling and cross-cloud connectivity options (Direct Connect, Transit Gateway with third-party VPN).

## Cost Breakdown: Typical Multi-Region Setup

For a mid-size workload (3 nodes per region, 2 regions):

| Component | AWS (monthly) | GCP (monthly) | Azure (monthly) |
|---|---|---|---|
| Kubernetes clusters (2x) | $900 (EKS + m7g.xlarge x6) | $800 (GKE + e2-standard-4 x6) | $850 (AKS + D4s_v5 x6) |
| Global load balancer | $20 (Route53) | $25 (Cloud LB) | $35 (Front Door) |
| Multi-region database | $1,200 (Aurora Global 2x db.r6g.large) | $1,944 (Spanner 3-node at the regional rate; multi-region configs bill several times higher) | $800 (Cosmos DB 1000 RU/s x2) |
| Cross-region data transfer (500 GB) | $45 | $40 | $44 |
| **Total** | **~$2,165** | **~$2,809** | **~$1,729** |

These are baseline costs. Production workloads with higher traffic, larger node pools, and more database throughput will cost significantly more. The database is consistently the largest line item -- often 50-60% of the total multi-region bill.

The hidden cost is cross-region data transfer. Every cloud charges for egress between regions, typically $0.01-0.02/GB within the same continent and $0.05-0.09/GB intercontinental. A chatty microservice architecture where services in one region frequently call services in another will accumulate transfer costs quickly. Design your architecture so that reads are served locally from regional replicas and only writes cross regions.
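The transfer math is worth sketching explicitly. The rates below are illustrative midpoints of the per-GB ranges above, not a quote from any provider's price sheet:

```python
def monthly_transfer_cost(gb_per_month: float,
                          intra_continent_fraction: float,
                          intra_rate: float = 0.02,
                          inter_rate: float = 0.08) -> float:
    """Price cross-region traffic: the same-continent share at the
    lower per-GB rate, the intercontinental share at the higher one."""
    intra = gb_per_month * intra_continent_fraction * intra_rate
    inter = gb_per_month * (1 - intra_continent_fraction) * inter_rate
    return round(intra + inter, 2)

# 500 GB/month of cross-region traffic, 80% staying within one continent.
print(monthly_transfer_cost(500, 0.8))  # 16.0
```

Plug in your own traffic profile before trusting any table -- the intercontinental fraction dominates the bill far more than total volume does.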

Before committing to multi-region, verify that your availability requirements justify the 2-3x cost increase over single-region. Many workloads achieve sufficient resilience with multi-AZ deployments within a single region at a fraction of the cost.

