Cloud Behavioral Divergence Guide: Where AWS, Azure, and GCP Actually Differ

Cloud Behavioral Divergence Guide#

Running the “same” workload on AWS, Azure, and GCP does not produce the same behavior. The Kubernetes API is portable, application containers are portable, and SQL queries are portable. Everything else – identity, networking, storage, load balancing, DNS, and managed service behavior – diverges in ways that matter for production reliability.

This guide documents the specific divergence points with practical examples. Use it when translating infrastructure from one cloud to another, when debugging behavior that differs between environments, or when assessing migration risk.

Cloud Cost Optimization

The Cost Optimization Hierarchy#

Cloud cost optimization follows a hierarchy of impact. Work from the top down – fine-tuning commitment discounts matters far less than shutting down resources nobody uses.

  1. Eliminate waste – turn off unused resources, delete orphaned storage
  2. Right-size – match instance sizes to actual usage
  3. Use commitment discounts – reserved instances, savings plans, CUDs
  4. Shift to spot/preemptible – for fault-tolerant workloads
  5. Optimize storage and network – tiering, transfer patterns, caching
  6. Architect for cost – serverless, auto-scaling, multi-region strategy

Eliminating Waste#

The fastest cost reduction comes from finding resources that serve no purpose. Every cloud provider accumulates these: instances left running after a test, snapshots from decommissioned servers, load balancers with no backends, unattached disks.
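
The specifics differ by provider, but as a sketch, two of the most common leaks can be surfaced with the AWS CLI (the Azure and GCP CLIs have equivalent queries; adapt the filters to your account):

# Unattached EBS volumes ("available" status means no instance is using them)
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,SizeGiB:Size,Created:CreateTime}' \
  --output table

# Snapshots you own, oldest first; deletion candidates if the source volume is gone
aws ec2 describe-snapshots \
  --owner-ids self \
  --query 'sort_by(Snapshots, &StartTime)[].{ID:SnapshotId,Started:StartTime,Volume:VolumeId}' \
  --output table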

Cloud Migration Strategies: The 7 Rs Framework

Cloud Migration Strategies#

A company does not “migrate to the cloud” – it migrates dozens or hundreds of applications, each with different characteristics, dependencies, and risk profiles. The 7 Rs framework provides vocabulary for per-workload decisions, but selecting the right R requires understanding the application, its dependencies, and the organization’s tolerance for change.

The 7 Rs#

Rehost (Lift and Shift)#

Move the application to cloud infrastructure with minimal changes. A VM on-premises becomes an EC2 instance. OS, application code, and configuration remain the same.

Cloud Vendor Product Matrix: Comparing Cloudflare, AWS, Azure, and GCP

Cloud Vendor Product Matrix#

Choosing between cloud vendors requires mapping equivalent services across providers. AWS has 200+ services. Azure has 200+. GCP has 100+. Cloudflare has 20+ but they are tightly integrated and edge-native. This article maps the services that matter for most applications – compute, serverless, databases, storage, networking, and observability – across all four vendors, with notes on pricing, availability, and portability for each.

How to Use This Matrix#

Each section maps equivalent products across vendors, then provides:

Cloud-Native vs Portable Infrastructure: A Decision Framework

Cloud-Native vs Portable Infrastructure#

Every infrastructure decision sits on a spectrum between portability and fidelity. On one end, you have generic Kubernetes running on minikube or kind – it works everywhere, costs nothing, and captures the behavior of the Kubernetes API itself. On the other end, you have cloud-native managed services – EKS with IRSA and ALB Ingress Controller, GKE with Workload Identity and Cloud Load Balancing, AKS with Azure AD Pod Identity and Azure Load Balancer. These capture the behavior of the actual platform your workloads will run on.
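
As a concrete sketch of that coupling on the cloud-native end, an EKS ServiceAccount using IRSA looks roughly like this (the account ID, role name, and app name are placeholders). The manifest only means something on a cluster with an EKS OIDC provider and the referenced IAM role, so it does not port to GKE, AKS, or a local cluster:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    # EKS-specific: binds pods using this ServiceAccount to an AWS IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role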

Cloudflare Workers as a Full-Stack Platform: Workers, D1, KV, R2, and Pages

Cloudflare Workers as a Full-Stack Platform#

Cloudflare started as a CDN and DDoS protection service. It is now a complete development platform. Workers provide serverless compute at 330+ edge locations. D1 provides a serverless SQLite database. KV provides a globally distributed key-value store. R2 provides S3-compatible object storage with zero egress fees. Pages provides static site hosting with git-integrated deploys. Durable Objects provide stateful, single-threaded coordination primitives. Queues provide async message processing between Workers.
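
As a rough sketch of how these pieces are provisioned, the Wrangler CLI drives all of them (the resource names here are illustrative, and subcommand syntax shifts between Wrangler versions):

# Create a D1 database and an R2 bucket
wrangler d1 create my-db
wrangler r2 bucket create my-assets

# Deploy the Worker described by the local Wrangler configuration
wrangler deploy

# Deploy a static build to Pages
wrangler pages deploy ./dist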

Cluster Autoscaling: HPA, Cluster Autoscaler, and KEDA

Cluster Autoscaling#

Kubernetes autoscaling operates at two levels: pod-level (HPA adds or removes pod replicas) and node-level (Cluster Autoscaler adds or removes nodes). Getting them to work together requires understanding how each makes decisions.

Horizontal Pod Autoscaler (HPA)#

HPA adjusts the replica count of a Deployment, StatefulSet, or ReplicaSet based on observed metrics. The metrics-server must be running in your cluster for CPU and memory metrics.
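
A quick check that metrics-server is installed and producing data (the deployment name and namespace assume a standard install):

kubectl -n kube-system get deployment metrics-server
kubectl top pods --all-namespaces | head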

Basic HPA on CPU#

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

This scales my-app between 2 and 10 replicas, targeting 70% average CPU utilization across all pods. The HPA checks metrics every 15 seconds (default) and computes the desired replica count as:

desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)
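
For example, if the current 4 replicas average 90% CPU against the 70% target, the desired count is ceil(4 × 90 / 70) = ceil(5.14) = 6, so the HPA scales the Deployment to 6 replicas (capped at maxReplicas).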

CockroachDB Day-2 Operations

Adding and Removing Nodes#

Adding a node: start a new cockroach process with --join pointing to existing nodes. CockroachDB automatically rebalances ranges to the new node.

cockroach start --insecure --store=node4-data \
  --advertise-addr=node4:26257 \
  --join=node1:26257,node2:26257,node3:26257

Watch rebalancing in the DB Console under Metrics > Replication, or query directly:

SELECT node_id, range_count, lease_count FROM crdb_internal.kv_store_status;

Decommissioning a node moves all range replicas off before shutdown, preventing under-replication:

cockroach node decommission 4 --insecure --host=node1:26257

# Monitor progress
cockroach node status --insecure --host=node1:26257 --decommission

Do not simply kill a node. Without decommissioning, CockroachDB treats it as a failure and waits 5 minutes before re-replicating. On Kubernetes with the operator, scale by changing spec.nodes in the CrdbCluster resource.
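
A sketch of the Kubernetes path, assuming the official operator and a CrdbCluster named cockroachdb in the operator's default namespace (adjust both to your install); the operator handles decommissioning when the node count shrinks:

kubectl patch crdbcluster cockroachdb \
  --namespace cockroach-operator-system \
  --type merge \
  --patch '{"spec":{"nodes":4}}'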

CockroachDB Debugging and Troubleshooting

Node Liveness Issues#

Every node must renew its liveness record every 4.5 seconds. A node that fails to renew is marked suspect; if it stays non-live past the store-dead timeout (5 minutes by default), it is marked dead and its ranges are re-replicated elsewhere.

cockroach node status --insecure --host=localhost:26257

Look at the is_live column. If a node shows false, check the following in order:

Process crashed. Check cockroach-data/logs/ for fatal or panic entries. OOM kills are the most common cause – check dmesg | grep -i oom on the host.

Network partition. The node runs but cannot reach peers. If cockroach node status succeeds locally but fails from other nodes, the problem is network-level (firewalls, security groups, DNS).
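
A few commands help separate the two cases (the hostnames, log path, and port follow the defaults used elsewhere in this guide):

# On the affected host: look for a crash or an OOM kill
grep -iE 'fatal|panic' cockroach-data/logs/cockroach.log | tail -n 20
dmesg -T | grep -i oom

# From a healthy peer: confirm the suspect node's RPC/SQL port is reachable
nc -zv node4 26257

# Compare what the cluster looks like when queried through the suspect node
cockroach node status --insecure --host=node4:26257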

CockroachDB Setup and Architecture

Architecture: What CockroachDB Actually Does Under the Hood#

CockroachDB is a distributed SQL database that stores data across multiple nodes while presenting a single logical database to clients. Understanding three concepts is essential before deploying it.

Ranges. All data is stored in key-value pairs, sorted by key. CockroachDB splits this sorted keyspace into contiguous chunks called ranges, each targeting 512 MiB by default. Every SQL table, index, and system table maps to one or more ranges. When a range grows beyond the threshold, it splits automatically.
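
To see ranges directly, a quick sketch against a hypothetical users table (the exact output columns differ between CockroachDB versions):

-- List the ranges backing a table, including their boundaries and lease holders
SHOW RANGES FROM TABLE users;

-- How many ranges each store currently holds
SELECT node_id, range_count FROM crdb_internal.kv_store_status;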