Advanced Terraform State Management

Remote Backends#

Every team beyond a single developer needs remote state. The three major backends:

S3 + DynamoDB (AWS):

terraform {
  backend "s3" {
    bucket         = "myorg-tfstate"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

Azure Blob Storage:

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "myorgtfstate"
    container_name       = "tfstate"
    key                  = "prod/network/terraform.tfstate"
  }
}

Google Cloud Storage:

terraform {
  backend "gcs" {
    bucket = "myorg-tfstate"
    prefix = "prod/network"
  }
}

All three support state locking: the S3 backend locks through DynamoDB, azurerm uses blob leases, and gcs writes a lock file alongside the state object. Always enable encryption at rest and restrict access to the state store with the provider's IAM or RBAC controls.
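
The bucket and lock table behind the S3 example have to exist before the first terraform init. A minimal bootstrap sketch with the AWS CLI (bucket and table names match the example above; adjust region and naming to your environment):

# State bucket: versioned and encrypted (regions other than us-east-1 also need --create-bucket-configuration)
aws s3api create-bucket --bucket myorg-tfstate --region us-east-1
aws s3api put-bucket-versioning --bucket myorg-tfstate \
  --versioning-configuration Status=Enabled
aws s3api put-bucket-encryption --bucket myorg-tfstate \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Lock table: the S3 backend expects a table with a string hash key named LockID
aws dynamodb create-table --table-name terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST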

Agent-Oriented Terraform: Linear Patterns for Machine-Managed Infrastructure

Agent-Oriented Terraform#

Most Terraform code is written by humans for humans. It favors abstraction, DRY principles, and deep module nesting — patterns that make sense when a human maintains a mental model of the codebase. Agents do not maintain mental models. They read code fresh each time, trace references to resolve dependencies, and reason about the full resource graph in a single context window.

The patterns that make Terraform elegant for humans make it expensive for agents. Deep module nesting multiplies the files an agent must read. Variable threading through three layers of modules hides dependencies behind indirection. Complex for_each over maps of objects creates resources whose names and counts cannot be read off the code; they only materialize at plan time. The agent spends most of its context on navigation, not comprehension.

AKS Setup and Configuration: Clusters, Node Pools, and Networking

AKS Setup and Configuration#

Azure Kubernetes Service handles the control plane for you; on the Free tier the control plane itself costs nothing, while the Standard tier adds a small per-cluster fee in exchange for an SLA. What you configure is node pools, networking, identity, and add-ons. Getting these right at cluster creation matters because several choices (networking model, managed identity) cannot be changed later without rebuilding the cluster.

Creating a Cluster with az CLI#

The minimal command that produces a production-usable cluster:

az aks create \
  --resource-group myapp-rg \
  --name myapp-aks \
  --location eastus2 \
  --node-count 3 \
  --node-vm-size Standard_D4s_v5 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --vnet-subnet-id /subscriptions/<sub>/resourceGroups/myapp-rg/providers/Microsoft.Network/virtualNetworks/myapp-vnet/subnets/aks-subnet \
  --enable-managed-identity \
  --enable-aad \
  --aad-admin-group-object-ids <admin-group-id> \
  --generate-ssh-keys \
  --tier standard

Key flags: --network-plugin azure --network-plugin-mode overlay gives you Azure CNI Overlay, which avoids the IP exhaustion problems of classic Azure CNI. --tier standard enables the financially-backed SLA and uptime guarantees (the free tier has no SLA). --enable-aad integrates Entra ID (formerly Azure AD) for authentication.
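
A quick post-create sanity check (resource group and cluster name match the example above; the --query path may differ slightly across CLI versions):

az aks get-credentials --resource-group myapp-rg --name myapp-aks
kubectl get nodes -o wide

# Confirm the cluster really came up with CNI Overlay
az aks show --resource-group myapp-rg --name myapp-aks \
  --query "networkProfile.networkPluginMode" -o tsv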

ArgoCD with Terraform and Crossplane: Managing Infrastructure Alongside Applications

ArgoCD with Terraform and Crossplane#

Applications need infrastructure – databases, queues, caches, object storage, DNS records, certificates. In a GitOps workflow managed by ArgoCD, there are two approaches to provisioning that infrastructure: Crossplane (Kubernetes-native) and Terraform (external). Each has different strengths and integration patterns with ArgoCD.

Crossplane: Infrastructure as Kubernetes CRDs#

Crossplane extends Kubernetes with CRDs that represent cloud resources. An RDS instance becomes a YAML manifest. A GCS bucket becomes a YAML manifest. ArgoCD manages these manifests exactly like it manages Deployments and Services.
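
Before ArgoCD can sync those manifests, Crossplane itself and a cloud provider package have to be installed in the cluster. A minimal bootstrap sketch using the project's published Helm chart (the provider installation step varies by cloud and provider family, so it is omitted here):

helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane crossplane-stable/crossplane \
  --namespace crossplane-system --create-namespace

# Verify the Crossplane controllers are running
kubectl get pods -n crossplane-system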

Cloud-Native vs Portable Infrastructure: A Decision Framework

Cloud-Native vs Portable Infrastructure#

Every infrastructure decision sits on a spectrum between portability and fidelity. On one end, you have generic Kubernetes running on minikube or kind – it works everywhere, costs nothing, and captures the behavior of the Kubernetes API itself. On the other end, you have cloud-native managed services – EKS with IRSA and ALB Ingress Controller, GKE with Workload Identity and Cloud Load Balancing, AKS with Azure AD Pod Identity and Azure Load Balancer. These capture the behavior of the actual platform your workloads will run on.

Converting kubectl Manifests to Terraform: From Manual Applies to Infrastructure as Code

Converting kubectl Manifests to Terraform#

You have a working Kubernetes setup built with kubectl apply -f. It works, but there is no state tracking, no dependency graph, and no way to reliably reproduce it. Terraform fixes all three problems.

Step 1: Export Existing Resources#

Start by extracting what you have. For each resource type, export the YAML:

kubectl get deployment,service,configmap,ingress -n my-app -o yaml > exported.yaml

For a single resource with cleaner output:
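
One possible approach (the deployment name is illustrative, and kubectl-neat is an optional third-party plugin for stripping server-populated fields such as status and managedFields):

kubectl get deployment my-app -n my-app -o yaml > deployment.yaml

# With the kubectl-neat plugin installed, drop the noise the server added
kubectl get deployment my-app -n my-app -o yaml | kubectl neat > deployment.yaml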

Devcontainer Sandbox Templates: Zero-Cost Validation Environments for Infrastructure Development

Devcontainer Sandbox Templates#

Devcontainers provide disposable, reproducible development environments that run in a container. You define the tools, extensions, and configuration in a .devcontainer/ directory, and any compatible host – GitHub Codespaces, Gitpod, VS Code with Docker, or the devcontainer CLI – builds and launches the environment from that definition.

For infrastructure validation, devcontainers solve a specific problem: giving every developer and every CI run the exact same set of tools at the exact same versions, without requiring them to install anything on their local machine. A Kubernetes devcontainer includes kind, kubectl, helm, and kustomize at pinned versions. A Terraform devcontainer includes terraform, tflint, checkov, and cloud CLIs. The environment is ready to use the moment it starts.
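
As a sketch of the workflow (assuming a .devcontainer/ definition in the repo and the reference devcontainer CLI installed from npm), the same definition can be exercised locally or in CI:

npm install -g @devcontainers/cli

# Build the image and start the container defined in .devcontainer/
devcontainer up --workspace-folder .

# Run the pinned tools inside the environment
devcontainer exec --workspace-folder . terraform version
devcontainer exec --workspace-folder . kind version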

Diagnosing Common Terraform Problems

Stuck State Lock#

A CI job was cancelled, a laptop lost network, or a process crashed mid-apply. Terraform refuses to run:

Error acquiring the state lock
Lock Info:
  ID:        f8e7d6c5-b4a3-2109-8765-43210fedcba9
  Operation: OperationTypeApply
  Who:       deploy@ci-runner
  Created:   2026-02-20 09:15:22 +0000 UTC

Verify the lock holder is truly dead. Check CI job status, then:

terraform force-unlock f8e7d6c5-b4a3-2109-8765-43210fedcba9

If the lock was from a crashed apply, the state may be partially updated. Run terraform plan immediately after unlocking to see how far the interrupted run got and what Terraform now wants to change.
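
For transient contention, where another run legitimately holds the lock, waiting is safer than force-unlocking; plan and apply accept a -lock-timeout flag:

# Wait up to five minutes for the lock instead of failing immediately
terraform plan -lock-timeout=5m
terraform apply -lock-timeout=5m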

EKS Setup and Configuration

EKS Setup and Configuration#

Amazon EKS runs the Kubernetes control plane for you – managed etcd, API server, and controller manager across multiple AZs. You are responsible for the worker nodes, networking configuration, and add-ons.

Cluster Creation Methods#

eksctl is the fastest path for a working cluster. It creates the VPC, subnets, NAT gateway, IAM roles, node groups, and kubeconfig in one command:

eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --version 1.31 \
  --nodegroup-name workers \
  --node-type m6i.large \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 10 \
  --managed

For repeatable setups, use a ClusterConfig file:
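
A minimal sketch of such a file, mirroring the flags above (field names follow the eksctl ClusterConfig schema; values are illustrative):

cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
  version: "1.31"
managedNodeGroups:
  - name: workers
    instanceType: m6i.large
    desiredCapacity: 3
    minSize: 2
    maxSize: 10
EOF

eksctl create cluster -f cluster.yaml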

Ephemeral Cloud Clusters: Create, Validate, Destroy Sequences for EKS, GKE, and AKS

Ephemeral Cloud Clusters#

Ephemeral clusters exist for one purpose: validate something, then disappear. They are not staging environments, not shared dev clusters, not long-lived resources that someone forgets to turn off. The operational model is strict – create, validate, destroy – and the entire sequence must be automated so that destruction cannot be forgotten.

The cost of getting this wrong is real. A three-node EKS cluster left running over a weekend costs roughly $15. Left running for a month, $200. Multiply by the number of developers or CI pipelines that create clusters, and forgotten ephemeral infrastructure becomes a significant budget line item. Every template in this article includes auto-destroy mechanisms to prevent this.