CircleCI Pipeline Patterns: Orbs, Executors, Workspaces, Parallelism, and Approval Workflows

CircleCI pipelines are defined in .circleci/config.yml. The configuration model uses workflows to orchestrate jobs, jobs to define execution units, and steps to define commands within a job. Every job runs inside an executor – a Docker container, Linux VM, macOS VM, or Windows VM.

Config Structure and Executors

A minimal config defines a job and a workflow:

version: 2.1

executors:
  go-executor:
    docker:
      - image: cimg/go:1.22
    resource_class: medium
    working_directory: ~/project

jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - run:
          name: Build application
          command: go build -o myapp ./cmd/myapp

workflows:
  main:
    jobs:
      - build

Named executors let you reuse environment definitions across jobs. The resource_class controls CPU and memory – small (1 vCPU/2GB), medium (2 vCPU/4GB), large (4 vCPU/8GB), xlarge (8 vCPU/16GB). Choose the smallest class that avoids OOM kills to keep costs down.
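
For a single job that needs more headroom, you can set resource_class at the job level instead of defining a second executor; job-level settings take precedence over the invoked executor's. A minimal sketch of a job added under the jobs key (the integration-tests job is hypothetical):

  integration-tests:
    executor: go-executor
    # overrides the executor's medium default for this job only
    resource_class: large
    steps:
      - checkout
      - run: go test -tags=integration ./...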

Buildkite Pipeline Patterns: Dynamic Pipelines, Agents, Plugins, and Parallel Builds

Buildkite splits CI/CD into two parts: a hosted web service that manages pipelines, builds, and the UI, and self-hosted agents that execute the actual work. This architecture means your source code and secrets never touch Buildkite’s infrastructure; agents stream logs back to the web service, and artifacts can be stored in your own bucket rather than Buildkite’s managed storage. The agents run on your machines: EC2 instances, Kubernetes pods, bare metal, laptops.
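
Steps typically live in .buildkite/pipeline.yml and are uploaded by an agent at the start of each build; a step targets agents through the tags they register with. A minimal sketch (the queue name is an assumption; match it to the tags your agents register with):

steps:
  - label: "build"
    command: "go build -o myapp ./cmd/myapp"
    # only agents that registered with queue=linux-builders will take this step
    agents:
      queue: "linux-builders"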

Why Teams Choose Buildkite

The comparison usually comes down to Jenkins and GitHub Actions.

Over Jenkins: Buildkite eliminates the Jenkins controller as a single point of failure. There is no plugin compatibility hell, no Groovy DSL, no JVM memory tuning. Agents are stateless binaries that poll for work, so scaling means adding more agents, while Jenkins requires careful capacity planning of the controller itself.

Azure DevOps Pipelines: YAML Pipelines, Templates, Service Connections, and AKS Integration

Azure DevOps Pipelines uses YAML files stored in your repository to define build and deployment workflows. The pipeline model has three levels: stages contain jobs, jobs contain steps. This hierarchy maps directly to how you think about CI/CD – build stage, test stage, deploy-to-staging stage, deploy-to-production stage – with each stage containing one or more parallel jobs.

Pipeline Structure

A complete pipeline in azure-pipelines.yml:

trigger:
  branches:
    include:
      - main
      - release/*
  paths:
    exclude:
      - docs/**
      - README.md

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: common-vars
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - task: GoTool@0
            inputs:
              version: '1.22'
          - script: |
              go build -o $(Build.ArtifactStagingDirectory)/myapp ./cmd/myapp
            displayName: 'Build binary'
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop

  - stage: Test
    dependsOn: Build
    jobs:
      - job: UnitTests
        steps:
          - task: GoTool@0
            inputs:
              version: '1.22'
          - script: |
              go test ./... -v -coverprofile=coverage.out
              # Go's coverprofile is not Cobertura XML; convert it so the publish
              # task (which auto-detects supported formats) can read it
              go run github.com/boombuler/gocover-cobertura@latest < coverage.out > coverage.xml
            displayName: 'Run tests and convert coverage'
          - task: PublishCodeCoverageResults@2
            inputs:
              summaryFileLocation: coverage.xml

  - stage: DeployStaging
    dependsOn: Test
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToStaging
        environment: staging
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "Deploying to staging"

trigger controls which branches and paths start the pipeline. dependsOn creates stage ordering, and condition adds logic: succeeded() checks that the upstream stages passed, and you can combine it with variable checks to restrict certain stages to specific branches.
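
dependsOn also accepts a list, which lets a later stage fan in on several upstream stages at once. A hypothetical sketch (the SecurityScan stage is not defined in the pipeline above):

  - stage: DeployProduction
    dependsOn:
      - DeployStaging
      - SecurityScan
    # runs only for tag builds, and only if every upstream stage succeeded
    condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/tags/'))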

GitLab CI/CD Pipeline Patterns: Stages, DAG Pipelines, Includes, and Registry Integration

GitLab CI/CD runs pipelines defined in a .gitlab-ci.yml file at the repository root. Pushes and tags trigger pipelines by default, and merge requests can run their own pipelines; each pipeline consists of stages that contain jobs. The pipeline configuration is version-controlled alongside your code, so the build process evolves with the application.

Basic .gitlab-ci.yml Structure

A minimal pipeline defines stages and jobs. Stages run sequentially; jobs within the same stage run in parallel:

stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  image: golang:1.22
  script:
    - go build -o myapp ./cmd/myapp
  artifacts:
    paths:
      - myapp
    expire_in: 1 hour

unit-tests:
  stage: test
  image: golang:1.22
  script:
    - go test ./... -v -coverprofile=coverage.out
    # GitLab expects Cobertura XML here, not Go's coverprofile format
    - go run github.com/boombuler/gocover-cobertura@latest < coverage.out > coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

deploy-staging:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    # the image's entrypoint is kubectl itself; clear it so the runner's script can execute
    entrypoint: [""]
  script:
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Every job needs a script; if stage is omitted, it defaults to test. The image field specifies the Docker image the job runs inside. If omitted, it falls back to the pipeline-level default image or the runner’s default.
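
The pipeline-level default is set with the top-level default keyword, and any job that does not declare its own image inherits it. A minimal sketch (the lint job is illustrative):

default:
  image: golang:1.22

# inherits golang:1.22 from the default section
lint:
  stage: test
  script:
    - go vet ./...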