ArgoCD Multi-Cluster Management: Hub-Spoke Patterns, Cluster Registration, and Fleet Operations

ArgoCD Multi-Cluster Management#

A single ArgoCD instance can manage deployments across dozens of Kubernetes clusters. This is one of ArgoCD’s strongest features and the standard approach for organizations with multiple environments, regions, or cloud providers.

Hub-Spoke Architecture#

The standard multi-cluster pattern runs ArgoCD on one “hub” cluster that deploys to multiple “spoke” clusters:

Hub Cluster (management)
├── ArgoCD control plane
├── Application/ApplicationSet definitions
├── RBAC policies
└── Cluster credentials (Secrets)
    │
    ├──→ Spoke Cluster: dev (us-east-1)
    ├──→ Spoke Cluster: staging (us-west-2)
    ├──→ Spoke Cluster: prod-us (us-east-1)
    ├──→ Spoke Cluster: prod-eu (eu-west-1)
    └──→ Spoke Cluster: prod-apac (ap-southeast-1)

ArgoCD on the hub cluster connects to each spoke cluster’s API server to apply manifests and check health. The spoke clusters do not need ArgoCD installed.
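
Registration is just a Secret in the argocd namespace on the hub; the argocd cluster add CLI command creates it for you from a kubeconfig context. A minimal sketch of what that Secret looks like, assuming a hypothetical prod-eu spoke reached with a service account bearer token (the server URL and credentials are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: prod-eu-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as a cluster credential
type: Opaque
stringData:
  name: prod-eu
  server: https://prod-eu.example.com:6443    # spoke API server endpoint (placeholder)
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded CA certificate>"
      }
    }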

ArgoCD Patterns: App of Apps, ApplicationSets, Multi-Environment Management, and Source Strategies

ArgoCD Patterns#

Once ArgoCD is running and you have a few applications deployed, you hit a scaling problem: managing dozens or hundreds of Application resources by hand is unsustainable. These patterns solve that.

App of Apps#

The App of Apps pattern uses one ArgoCD Application to manage other Application resources. You create a “root” application that points to a directory containing Application YAML files. When ArgoCD syncs the root app, it creates all the child applications.
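
A sketch of a root Application, assuming the child Application manifests live in an apps/ directory of a hypothetical platform-config repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config.git   # hypothetical repo
    targetRevision: main
    path: apps                        # directory containing child Application YAML
  destination:
    server: https://kubernetes.default.svc   # child Applications are created on the hub itself
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                     # delete child Applications removed from Git
      selfHeal: true

Deleting a child Application's YAML from the apps/ directory then removes it on the next sync, which is why prune is usually enabled on the root app.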

ArgoCD Setup and Basics: Installation, CLI, First Application, and Sync Policies

ArgoCD Setup and Basics#

ArgoCD is a declarative GitOps continuous delivery tool for Kubernetes. It watches Git repositories and ensures your cluster state matches what is declared in those repos. When a manifest changes in Git, or the live cluster drifts away from what Git declares, ArgoCD detects the difference and either flags the application as out of sync or automatically applies the change.

Installation with Plain Manifests#

The fastest path to a running ArgoCD instance:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This installs the full ArgoCD stack: API server, repo server, application controller, Redis, and Dex (for SSO). For a minimal install without the UI, API server, and Dex, use core-install.yaml instead; namespace-install.yaml installs the full stack but scopes ArgoCD's RBAC to a single namespace rather than the whole cluster.
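
Once the pods are up, grab the generated admin password and port-forward to the UI to confirm the install works. A quick verification sketch, assuming the default argocd namespace:

kubectl wait --for=condition=Available deployment/argocd-server -n argocd --timeout=120s
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo
kubectl port-forward svc/argocd-server -n argocd 8080:443    # UI now reachable at https://localhost:8080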

ArgoCD with Terraform and Crossplane: Managing Infrastructure Alongside Applications

ArgoCD with Terraform and Crossplane#

Applications need infrastructure – databases, queues, caches, object storage, DNS records, certificates. In a GitOps workflow managed by ArgoCD, there are two approaches to provisioning that infrastructure: Crossplane (Kubernetes-native) and Terraform (external). Each has different strengths and integration patterns with ArgoCD.

Crossplane: Infrastructure as Kubernetes CRDs#

Crossplane extends Kubernetes with CRDs that represent cloud resources. An RDS instance becomes a YAML manifest. A GCS bucket becomes a YAML manifest. ArgoCD manages these manifests exactly like it manages Deployments and Services.
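
A minimal sketch of one such manifest, assuming the Upbound AWS S3 provider is installed (the exact apiVersion and kind vary with the provider and its version), declaring a hypothetical artifacts bucket:

apiVersion: s3.aws.upbound.io/v1beta1     # assumption: Upbound provider-aws-s3; other providers use different API groups
kind: Bucket
metadata:
  name: example-artifacts-bucket          # hypothetical bucket name
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default                         # ProviderConfig holding the AWS credentials

ArgoCD syncs this from Git like any other manifest; Crossplane's provider then reconciles the actual bucket in AWS.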

Building a Kubernetes Deployment Pipeline: From Code Push to Production

Building a Kubernetes Deployment Pipeline: From Code Push to Production#

A deployment pipeline connects a code commit to a running container in your cluster. This operational sequence walks through building one end-to-end, with decision points at each phase and verification steps to confirm the pipeline works before moving on.

Phase 1 – Source Control and CI#

Step 1: Repository Structure#

Every deployable service needs three things alongside its application code: a Dockerfile, deployment manifests, and a CI pipeline definition.
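
One possible layout, with illustrative rather than prescriptive names (GitHub Actions assumed for CI):

my-service/
├── src/                        # application code
├── Dockerfile                  # how the image is built
├── deploy/
│   ├── deployment.yaml         # Kubernetes Deployment
│   ├── service.yaml            # Service exposing the pods
│   └── kustomization.yaml      # optional Kustomize entry point
└── .github/workflows/ci.yaml   # CI pipeline definition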

cert-manager and external-dns: Automatic TLS and DNS on Kubernetes

cert-manager and external-dns#

These two controllers solve the two most tedious parts of exposing services on Kubernetes: obtaining TLS certificates and creating DNS records. Together, they mean that creating an Ingress resource automatically provisions a DNS record pointing to your cluster and a valid TLS certificate for the hostname.

cert-manager#

cert-manager watches for Certificate resources and Ingress annotations, then obtains and renews TLS certificates automatically.

Installation#

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

The crds.enabled=true flag installs the CRDs as part of the Helm release. Verify with kubectl get pods -n cert-manager – you should see cert-manager, cert-manager-cainjector, and cert-manager-webhook all Running.
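
With the controllers running, the next step is an issuer telling cert-manager where certificates come from. A sketch of a Let's Encrypt ClusterIssuer using the HTTP-01 solver – the email address and ingress class are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com                   # placeholder contact address for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod-account-key     # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx          # assumes an nginx ingress controller and a recent cert-manager

An Ingress then requests a certificate by carrying the annotation cert-manager.io/cluster-issuer: letsencrypt-prod and a tls section naming the secret where the certificate should land.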

Change Management for Infrastructure


Why Change Management Matters#

Most production incidents trace back to a change. Code deployments, configuration updates, infrastructure modifications, database migrations – each introduces risk. Change management reduces that risk through structure, visibility, and accountability. The goal is not to prevent change but to make change safe, visible, and reversible.

Change Request Process#

Every infrastructure change flows through a structured request. The formality scales with risk, but the basic elements remain constant.

Chaos Engineering: From First Experiment to Mature Practice


Why Break Things on Purpose#

Production systems fail in ways that testing environments never reveal. A database connection pool exhaustion under load, a cascading timeout across three services, a DNS cache that masks a routing change until it expires – these failures only surface when real conditions collide in ways nobody predicted. Chaos engineering is the discipline of deliberately injecting failures into a system to discover weaknesses before they cause outages.

Choosing a CNI Plugin: Calico vs Cilium vs Flannel vs Cloud-Native CNI

Choosing a CNI Plugin#

The Container Network Interface (CNI) plugin is one of the most consequential infrastructure decisions in a Kubernetes cluster. It determines how pods get IP addresses, how traffic flows between them, whether network policies are enforced, and what observability you get into network behavior. Changing CNI after deployment is painful – it typically requires draining and rebuilding nodes, or rebuilding the cluster entirely. Choose carefully up front.

Choosing an Autoscaling Strategy: HPA vs VPA vs KEDA vs Karpenter/Cluster Autoscaler

Choosing an Autoscaling Strategy#

Kubernetes autoscaling operates at two distinct layers: pod-level scaling changes how many pods run or how large they are, while node-level scaling changes how many nodes exist in the cluster to host those pods. Getting the right combination of tools at each layer is the key to a system that responds to demand without wasting resources.

The Two Scaling Layers#

Understanding which layer a tool operates on prevents the most common misconfiguration – expecting pod-level scaling to solve node-level capacity problems, or vice versa.
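
As a concrete instance of the pod layer, here is a minimal HorizontalPodAutoscaler scaling a hypothetical api Deployment on CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU across pods exceeds 70%

Whether those extra pods can actually be scheduled is the node layer's problem, which is where Cluster Autoscaler or Karpenter comes in.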