ArgoCD Sync Waves, Resource Hooks, and Sync Options

ArgoCD Sync Waves, Resource Hooks, and Sync Options#

ArgoCD sync is not just “apply all manifests at once.” You can control the order resources are created, run pre- and post-deployment tasks, restrict when syncs can happen, and tune how resources are applied. This is where ArgoCD moves from basic GitOps to production-grade deployment orchestration.

Sync Waves#

Sync waves control the order in which resources are applied. Every resource has a wave number (default 0). Resources in lower waves are applied and must be healthy before higher waves begin.
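
A minimal sketch of how waves are assigned, using the argocd.argoproj.io/sync-wave annotation (the value must be a quoted string). The Job and Deployment names and images below are illustrative:

```yaml
# Wave 0: applied first and must become healthy before wave 1 starts.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                      # illustrative name
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app-migrations:1.4.2   # placeholder image
---
# Wave 1: applied only after everything in wave 0 is healthy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                             # illustrative name
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.4.2               # placeholder image
```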

ArgoCD with Terraform and Crossplane: Managing Infrastructure Alongside Applications

ArgoCD with Terraform and Crossplane#

Applications need infrastructure – databases, queues, caches, object storage, DNS records, certificates. In a GitOps workflow managed by ArgoCD, there are two approaches to provisioning that infrastructure: Crossplane (Kubernetes-native) and Terraform (external). Each has different strengths and integration patterns with ArgoCD.

Crossplane: Infrastructure as Kubernetes CRDs#

Crossplane extends Kubernetes with CRDs that represent cloud resources. An RDS instance becomes a YAML manifest. A GCS bucket becomes a YAML manifest. ArgoCD manages these manifests exactly like it manages Deployments and Services.
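
As a sketch of what that looks like, assuming the Upbound AWS provider is installed and a ProviderConfig named default exists (the exact API group and fields vary by provider and version), an S3 bucket is declared as an ordinary manifest and committed to the repo ArgoCD watches:

```yaml
apiVersion: s3.aws.upbound.io/v1beta1   # provider-specific; varies by provider release
kind: Bucket
metadata:
  name: app-artifacts-bucket            # illustrative name
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default                       # assumes a ProviderConfig named "default"
```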

Artifact Management: Repository Selection, Container Lifecycle, Retention, and Promotion Workflows

Artifact Management#

Every CI/CD pipeline produces artifacts: container images, compiled binaries, library packages, Helm charts. Where these artifacts live, how long they are retained, how they are promoted between environments, and how they are scanned for vulnerabilities are decisions that affect security, cost, and operational reliability. Choosing the wrong artifact repository or neglecting lifecycle management creates accumulating storage costs and security blind spots.

Repository Options#

JFrog Artifactory#

Artifactory is the most comprehensive option. It supports virtually every package format: Docker images, Maven/Gradle JARs, npm packages, PyPI wheels, Helm charts, Go modules, NuGet packages, and generic binaries. It serves both as a universal artifact store and as a caching proxy for upstream registries.

Automating Operational Runbooks

The Manual-to-Automated Progression#

Not every runbook should be automated, and automation does not happen in a single jump. The progression builds confidence at each stage.

Level 0 – Tribal Knowledge: The procedure exists only in someone’s head. Invisible risk.

Level 1 – Documented Runbook: Step-by-step instructions a human follows, including commands, expected outputs, and decision points. Every runbook starts here.

Level 2 – Scripted Runbook: Manual steps encoded in a script that a human triggers and monitors. The script handles the tedious parts; the human handles the judgment calls (see the sketch after this list).
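
A minimal Level 2 sketch in Python, assuming a hypothetical disk-cleanup runbook: the script measures usage and finds deletion candidates (the tedious parts) but pauses for a human decision before deleting anything. Paths, thresholds, and file patterns are illustrative.

```python
#!/usr/bin/env python3
"""Level 2 scripted runbook (sketch): clean up an oversized log directory."""
import shutil
from pathlib import Path

LOG_DIR = Path("/var/log/app")      # illustrative path
THRESHOLD_PERCENT = 85              # illustrative threshold

def disk_usage_percent(path: Path) -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def main() -> None:
    usage = disk_usage_percent(LOG_DIR)
    print(f"Disk usage for {LOG_DIR}: {usage:.1f}%")
    if usage < THRESHOLD_PERCENT:
        print("Below threshold, nothing to do.")
        return

    # Tedious part: find rotated log files, oldest first.
    candidates = sorted(LOG_DIR.glob("*.log.*"), key=lambda p: p.stat().st_mtime)
    for path in candidates:
        print(f"  candidate: {path} ({path.stat().st_size / 1e6:.1f} MB)")

    # Judgment call stays with the human who triggered the script.
    if input("Delete these files? [y/N] ").strip().lower() == "y":
        for path in candidates:
            path.unlink()
        print(f"Deleted {len(candidates)} files.")
    else:
        print("Aborted; no files deleted.")

if __name__ == "__main__":
    main()
```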

AWS Fundamentals for Agents

IAM: Identity and Access Management#

IAM controls who can do what in your AWS account. Everything in AWS is an API call, and IAM decides which API calls are allowed. There are three concepts an agent must understand: users, roles, and policies.

Users are long-lived identities for humans or service accounts. Roles are temporary identities that can be assumed by users, services, or other AWS accounts. Policies are JSON documents that define permissions. Roles are always preferred over users for programmatic access because assuming a role returns short-lived credentials from STS (Security Token Service) rather than long-lived access keys.
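
A minimal sketch with boto3 showing both ideas: a policy is just a JSON document, and assuming a role goes through STS and returns short-lived credentials. The role ARN, session name, and bucket are placeholders.

```python
import json
import boto3

# A policy is a JSON document: effect, actions, resources.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # placeholder bucket
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}
print(json.dumps(read_only_policy, indent=2))

# Assuming a role goes through STS and yields short-lived credentials.
sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/deploy-role",  # placeholder role ARN
    RoleSessionName="agent-session",
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Subsequent API calls use the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```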

AWS Lambda and Serverless Function Patterns

AWS Lambda and Serverless Function Patterns#

Lambda runs your code without you provisioning or managing servers. You upload a function, configure a trigger, and AWS handles scaling, patching, and availability. The execution model is simple: an event arrives, Lambda invokes your handler, your handler returns a response. Everything in between – concurrency, retries, scaling from zero to thousands of instances – is managed for you.
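
The contract is a single handler function: Lambda passes the event and a context object, and whatever the handler returns becomes the response. A minimal Python sketch, assuming an API Gateway proxy integration for the response shape:

```python
import json

def handler(event, context):
    """Invoked by Lambda for each event; the event shape depends on the trigger."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # For an API Gateway proxy integration, the return value must include
    # a statusCode and a string body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```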

That simplicity hides real complexity. Cold starts, timeout limits, memory-to-CPU coupling, VPC attachment latency, and event source mapping behavior all require deliberate design. This article covers the patterns that matter in practice.

AWS Terraform Patterns: IAM, Networking, EKS, RDS, and Common Gotchas

AWS Terraform Patterns#

AWS is the most common Terraform target and the most complex. It has more services, more configuration options, and more subtle gotchas than Azure or GCP. This article covers the AWS-specific patterns that agents need to write correct, secure Terraform — with emphasis on the mistakes that cause real production issues.

IAM: The Foundation of Everything#

Every AWS resource that does anything needs IAM permissions. The two patterns agents must know are service roles (letting AWS services act on your behalf) and IRSA, IAM Roles for Service Accounts (letting Kubernetes pods assume IAM roles).
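
A sketch of the service-role pattern in Terraform: the trust policy says which service may assume the role, and permissions are attached separately. Names are illustrative; IRSA has the same shape but trusts the cluster's OIDC provider and scopes the trust to a specific service account.

```hcl
# Service role: the trust policy defines which AWS service may assume the role.
resource "aws_iam_role" "lambda_exec" {
  name = "app-lambda-exec" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# Permissions are granted separately, via attached policies.
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```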

Azure Fundamentals for Agents

Resource Groups and Subscriptions#

Azure organizes resources in a hierarchy: Management Groups > Subscriptions > Resource Groups > Resources. An agent interacts most frequently with resource groups and the resources inside them.

A resource group is a logical container. Every Azure resource belongs to exactly one resource group. A resource group itself has a location (where its metadata is stored), but it can contain resources from any region. Resource groups serve as the deployment boundary, the access control boundary, and the lifecycle boundary – deleting a resource group deletes everything inside it.

Azure Terraform Patterns: Resource Groups, AKS, Managed Identity, and Common Gotchas

Azure Terraform Patterns#

Azure’s Terraform provider (azurerm) has its own idioms, naming conventions, and gotchas that differ significantly from AWS. The biggest differences: everything lives in a Resource Group, identity management uses Managed Identity (not IAM roles), and many services require explicit Private DNS Zone configuration for private networking.

Resource Groups: Azure’s Organizational Unit#

Every Azure resource belongs to a Resource Group. This is the first thing you create and the last thing you delete.
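
A minimal sketch (names and region are illustrative): the resource group is declared first, and other resources reference its name and location, which also gives Terraform the correct ordering when creating and destroying.

```hcl
resource "azurerm_resource_group" "app" {
  name     = "rg-app-prod" # illustrative name
  location = "westeurope"
}

# Other resources reference the group; the references also make Terraform
# create the group first and destroy it last.
resource "azurerm_storage_account" "app" {
  name                     = "stappprod001" # globally unique, lowercase, 3-24 chars
  resource_group_name      = azurerm_resource_group.app.name
  location                 = azurerm_resource_group.app.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```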

Blameless Post-Mortem Practices: Incident Timelines, Root Cause Analysis, and Organizational Learning

What a Post-Mortem Is and Is Not#

A post-mortem is a structured analysis of an incident conducted after the incident is resolved. Its purpose is to understand what happened, why it happened, and what changes will prevent it from happening again. It is not a blame assignment exercise. It is not a performance review. It is not a formality to check a compliance box.

The output of a good post-mortem is a set of concrete action items that improve the system. Not the humans – the system. If your post-mortem concludes with “engineer X should have been more careful,” you have failed at the process. Humans make mistakes. Systems should be designed so that human mistakes do not cause outages, and when they do, the blast radius is contained.