CircleCI Pipeline Patterns: Orbs, Executors, Workspaces, Parallelism, and Approval Workflows

CircleCI Pipeline Patterns#

CircleCI pipelines are defined in .circleci/config.yml. The configuration model uses workflows to orchestrate jobs, jobs to define execution units, and steps to define commands within a job. Every job runs inside an executor – a Docker container, Linux VM, macOS VM, or Windows VM.

Config Structure and Executors#

A minimal config defines a job and a workflow:

version: 2.1

executors:
  go-executor:
    docker:
      - image: cimg/go:1.22
    resource_class: medium
    working_directory: ~/project

jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - run:
          name: Build application
          command: go build -o myapp ./cmd/myapp

workflows:
  main:
    jobs:
      - build

Named executors let you reuse environment definitions across jobs. The resource_class controls CPU and memory – small (1 vCPU/2GB), medium (2 vCPU/4GB), large (4 vCPU/8GB), xlarge (8 vCPU/16GB). Choose the smallest class that avoids OOM kills to keep costs down.

Buildkite Pipeline Patterns: Dynamic Pipelines, Agents, Plugins, and Parallel Builds

Buildkite Pipeline Patterns#

Buildkite splits CI/CD into two parts: a hosted web service that manages pipelines, builds, and the UI, and self-hosted agents that execute the actual work. This architecture means your code, secrets, and build artifacts never touch Buildkite’s infrastructure. The agents run on your machines – EC2 instances, Kubernetes pods, bare metal, laptops.
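
A minimal pipeline committed as .buildkite/pipeline.yml shows the split in practice – the web service stores and schedules the steps, and whichever agent you run picks them up. A sketch, with an illustrative queue tag:

steps:
  - label: "build"
    command: go build -o myapp ./cmd/myapp
    agents:
      queue: "default"   # routes to agents started with --tags "queue=default"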

Why Teams Choose Buildkite#

The comparison usually comes down to Jenkins and GitHub Actions.

Over Jenkins: Buildkite eliminates the Jenkins controller as a single point of failure. There is no plugin compatibility hell, no Groovy DSL, no Java memory tuning. Agents are stateless binaries that poll for work. Scaling is adding more agents. Jenkins requires careful capacity planning of the controller itself.

Azure DevOps Pipelines: YAML Pipelines, Templates, Service Connections, and AKS Integration

Azure DevOps Pipelines#

Azure DevOps Pipelines uses YAML files stored in your repository to define build and deployment workflows. The pipeline model has three levels: stages contain jobs, jobs contain steps. This hierarchy maps directly to how you think about CI/CD – build stage, test stage, deploy-to-staging stage, deploy-to-production stage – with each stage containing one or more parallel jobs.

Pipeline Structure#

A complete pipeline in azure-pipelines.yml:

trigger:
  branches:
    include:
      - main
      - release/*
  paths:
    exclude:
      - docs/**
      - README.md

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: common-vars
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - task: GoTool@0
            inputs:
              version: '1.22'
          - script: |
              go build -o $(Build.ArtifactStagingDirectory)/myapp ./cmd/myapp
            displayName: 'Build binary'
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop

  - stage: Test
    dependsOn: Build
    jobs:
      - job: UnitTests
        steps:
          - task: GoTool@0
            inputs:
              version: '1.22'
          - script: go test ./... -v -coverprofile=coverage.out
            displayName: 'Run tests'
          - script: |
              # coverage.out is Go's native profile; PublishCodeCoverageResults
              # expects Cobertura or JaCoCo XML, so convert first (gocover-cobertura
              # is one option; go install drops the binary in ~/go/bin by default)
              go install github.com/boumenot/gocover-cobertura@latest
              ~/go/bin/gocover-cobertura < coverage.out > coverage.xml
            displayName: 'Convert coverage to Cobertura'
          - task: PublishCodeCoverageResults@2
            inputs:
              summaryFileLocation: coverage.xml

  - stage: DeployStaging
    dependsOn: Test
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToStaging
        environment: staging
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "Deploying to staging"

trigger controls which branches and paths start the pipeline. dependsOn creates stage ordering. condition adds logic – succeeded() checks that the previous stage passed, and you can combine it with variable checks to restrict certain stages to specific branches.

AWS CodePipeline and CodeBuild: Pipeline Structure, ECR Integration, ECS/EKS Deployments, and Cross-Account Patterns

AWS CodePipeline and CodeBuild#

AWS CodePipeline orchestrates CI/CD workflows as a series of stages. CodeBuild executes the actual build and test commands. Together they provide a fully managed pipeline that integrates natively with S3, ECR, ECS, EKS, Lambda, and CloudFormation. No servers to manage, no agents to maintain – but the trade-off is less flexibility than self-hosted systems and tighter coupling to the AWS ecosystem.

Pipeline Structure#

A CodePipeline has stages, and each stage has actions. Actions can run in parallel or sequentially within a stage. The most common pattern is Source -> Build -> Deploy.
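
What that shape looks like in CloudFormation YAML, as a minimal sketch rather than a production template – the role, bucket, connection ARN, repository, and ECS names below are all placeholders:

Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn        # assumes a PipelineRole defined elsewhere
      ArtifactStore:
        Type: S3
        Location: my-artifact-bucket
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: "1"
              Configuration:
                ConnectionArn: !Ref SourceConnectionArn   # CodeStar Connections ARN, passed as a parameter
                FullRepositoryId: my-org/myapp
                BranchName: main
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Build
          Actions:
            - Name: CodeBuild
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: "1"
              Configuration:
                ProjectName: myapp-build
              InputArtifacts:
                - Name: SourceOutput
              OutputArtifacts:
                - Name: BuildOutput
        - Name: Deploy
          Actions:
            - Name: DeployToECS
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: ECS
                Version: "1"
              Configuration:        # the ECS action reads imagedefinitions.json from the input artifact
                ClusterName: my-cluster
                ServiceName: myapp
              InputArtifacts:
                - Name: BuildOutput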

Self-Hosted CI Runners at Scale: GitHub Actions Runner Controller, GitLab Runners on K8s, and Autoscaling

Self-Hosted CI Runners at Scale#

GitHub-hosted and GitLab SaaS runners work until they do not. You hit limits when you need private network access to deploy to internal infrastructure or specific hardware like GPUs or ARM64 machines, when compliance requirements prohibit running code on shared infrastructure, or when hosted runner minutes are burning thousands of dollars per month.

Self-hosted runners solve these problems but introduce operational complexity: you now own runner provisioning, scaling, security, image updates, and cost management.
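
One concrete sketch of that ownership, assuming the community actions-runner-controller with its summerwind CRDs (newer ARC releases use the gha-runner-scale-set Helm chart instead); the organization and label are placeholders:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: org-runners
  namespace: actions-runner-system
spec:
  replicas: 3                       # static pool; pair with a HorizontalRunnerAutoscaler to scale on demand
  template:
    spec:
      organization: my-org          # placeholder GitHub organization
      labels:
        - self-hosted-linux         # workflows target this via runs-on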

CI/CD Anti-Patterns and Migration Strategies: From Snowflakes to Scalable Pipelines

CI/CD Anti-Patterns and Migration Strategies#

CI/CD pipelines accumulate technical debt faster than application code. Nobody refactors a Jenkinsfile. Nobody reviews pipeline YAML with the same rigor as production code. Over time, pipelines become slow, fragile, inconsistent, and actively hostile to developer productivity. Recognizing the anti-patterns is the first step. Migrating to better tooling is often the second.

Anti-Pattern: Snowflake Pipelines#

Every repository has a unique pipeline that someone wrote three years ago and nobody fully understands. Repository A uses Makefile targets, B uses bash scripts, C calls Python, and D has inline shell commands across 40 pipeline steps. There is no shared structure, no reusable components, and no way to make organization-wide changes.

Agent Runbook Generation: Producing Verified Infrastructure Deliverables

Agent Runbook Generation#

An agent that says “you should probably add a readiness probe to your deployment” is giving advice. An agent that hands you a tested manifest with the readiness probe configured, verified against a real cluster, with rollback steps in case the probe is misconfigured – that agent is producing a deliverable. The difference matters.

The core thesis of infrastructure agent work is that the output is always a deliverable – a runbook, playbook, tested manifest, or validated configuration – never a direct action on someone else’s systems. This article covers the complete workflow for generating those deliverables: understanding requirements, planning steps, executing in a sandbox, capturing what worked, and packaging the result.

Agent Sandboxing: Isolation Strategies for Execution Environments

Agent Sandboxing#

An AI agent that can execute code, run shell commands, or call APIs needs a sandbox. Without one, a single bad tool call – whether from a bug, a hallucination, or a prompt injection attack – can read secrets, modify production data, or pivot to other systems. This article is a decision framework for choosing the right sandboxing strategy based on your trust level, threat model, and performance requirements.
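
As one concrete point in that space, container-level isolation can be expressed as a locked-down Kubernetes Pod. A minimal sketch, assuming container isolation fits your threat model – the image, user ID, and limits are illustrative, and this alone does not defend against container escapes:

apiVersion: v1
kind: Pod
metadata:
  name: agent-sandbox
spec:
  automountServiceAccountToken: false    # no Kubernetes API credentials inside the sandbox
  restartPolicy: Never
  containers:
    - name: tool-runner
      image: python:3.12-slim            # illustrative runtime image
      command: ["sleep", "300"]          # placeholder; the agent would exec tool calls here
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      resources:
        limits:
          cpu: "1"
          memory: 512Mi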

Agent Security Patterns: Defending Against Injection, Leakage, and Misuse

Agent Security Patterns#

An AI agent with tool access is a program that can read files, call APIs, execute code, and modify systems – driven by natural language input. Every classic security concern applies, plus new attack surfaces unique to LLM-powered systems. This article covers practical defenses, not theoretical risks.

Prompt Injection Defense#

Prompt injection is the most agent-specific security threat. An attacker embeds instructions in data the agent processes – a file, a web page, an API response – and the agent follows those instructions as if they came from the user.

ArgoCD Image Updater: Automatic Image Tag Updates Without Git Commits

ArgoCD Image Updater#

ArgoCD Image Updater watches container registries for new image tags and automatically updates ArgoCD Applications to use them. In a standard GitOps workflow, updating an image tag requires a Git commit that changes the tag in a values file or manifest. Image Updater automates that step.

The Problem It Solves#

Standard GitOps image update flow:

CI builds image → pushes myapp:v1.2.3 to registry
    → Developer (or CI) commits "update image tag to v1.2.3" to Git
    → ArgoCD detects Git change
    → ArgoCD syncs new tag to cluster

That middle step – committing the tag update – is friction. CI pipelines need Git write access, commit messages are noise (“bump image to v1.2.4”, “bump image to v1.2.5”), and the delay between image push and deployment depends on how fast the commit pipeline runs.
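
The Image Updater alternative is driven by annotations on the Application itself. A minimal sketch – the registry, repository, and namespaces are placeholders – using the semver update strategy and the default argocd write-back method, which applies the new tag as a parameter override instead of a Git commit:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: app=registry.example.com/myapp
    argocd-image-updater.argoproj.io/app.update-strategy: semver   # follow the highest semver tag
    argocd-image-updater.argoproj.io/write-back-method: argocd     # parameter override; set to git to commit instead
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/myapp-deploy
    targetRevision: main
    path: helm
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp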