CircleCI Pipeline Patterns: Orbs, Executors, Workspaces, Parallelism, and Approval Workflows

CircleCI Pipeline Patterns#

CircleCI pipelines are defined in .circleci/config.yml. The configuration model uses workflows to orchestrate jobs, jobs to define execution units, and steps to define commands within a job. Every job runs inside an executor – a Docker container, Linux VM, macOS VM, or Windows VM.

Config Structure and Executors#

A minimal config defines a job and a workflow:

version: 2.1

executors:
  go-executor:
    docker:
      - image: cimg/go:1.22
    resource_class: medium
    working_directory: ~/project

jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - run:
          name: Build application
          command: go build -o myapp ./cmd/myapp

workflows:
  main:
    jobs:
      - build

Named executors let you reuse environment definitions across jobs. The resource_class controls CPU and memory – small (1 vCPU/2GB), medium (2 vCPU/4GB), large (4 vCPU/8GB), xlarge (8 vCPU/16GB). Choose the smallest class that avoids OOM kills to keep costs down.
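
Named executors pay off as soon as a second job needs the same environment. The fragment below is an illustrative sketch, not part of the original config: it shows the jobs: and workflows: sections merged with a hypothetical test job that reuses go-executor and runs after build.

jobs:
  test:
    executor: go-executor        # same image, resource class, and working directory as build
    steps:
      - checkout
      - run:
          name: Run tests
          command: go test ./...

workflows:
  main:
    jobs:
      - build
      - test:
          requires:
            - build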

CI/CD Cost Optimization: Runner Sizing, Caching ROI, Spot Instances, and Build Minute Economics

CI/CD Cost Optimization#

CI/CD costs grow quietly. A team of ten pushing five times a day, each push running a 15-minute pipeline, burns through roughly 3,750 build minutes per week. On GitHub Actions at $0.008/minute for standard 2-core Linux runners, that is about $30/week, and 4-core and larger runners bill at proportionally higher per-minute rates. Scale to fifty developers with integration tests, matrix builds, and nightly jobs, and you are looking at $500-$2,000/month before anyone notices.

The fix is not running fewer tests or skipping builds. It is eliminating waste: jobs that use more compute than they need, caches that are never restored, full builds triggered by README changes, and runners sitting idle between jobs.
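
One concrete way to cut the "full build for a README change" waste is path filtering. A hedged GitHub Actions sketch, where the workflow name, ignored paths, and build command are illustrative:

# .github/workflows/ci.yml (illustrative)
name: ci

on:
  push:
    paths-ignore:
      - "**.md"        # documentation-only changes skip the pipeline entirely
      - "docs/**"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build        # stand-in for the real build step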

CI/CD Patterns for Monorepos#

A monorepo puts multiple packages, services, and applications in a single repository. This simplifies cross-package changes and dependency management, but it breaks the assumption that most CI systems are built on: one repo means one build. Without careful pipeline design, every commit triggers a full rebuild of everything, and CI becomes the bottleneck.

The Core Problem#

In a monorepo, a commit that touches packages/auth-service/src/handler.ts should build and test auth-service and its dependents, but not billing-service or frontend. Getting this right is the central challenge of monorepo CI.
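
The most direct mechanism is path-based rules. A minimal GitLab CI sketch, assuming the package layout above; the job name, image, commands, and the shared package are illustrative:

# .gitlab-ci.yml fragment (illustrative)
auth-service-test:
  stage: test
  image: node:20
  script:
    - cd packages/auth-service
    - npm ci
    - npm test
  rules:
    - changes:
        - packages/auth-service/**/*
        - packages/shared/**/*     # hypothetical dependency; dependents must be listed by hand

Path rules like this enumerate dependents by hand, which is why larger monorepos tend to move to tooling that computes the affected graph from the actual import structure.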

GitLab CI/CD Pipeline Patterns: Stages, DAG Pipelines, Includes, and Registry Integration

GitLab CI/CD Pipeline Patterns#

GitLab CI/CD runs pipelines defined in a .gitlab-ci.yml file at the repository root. Every push, merge request, or tag triggers a pipeline consisting of stages that contain jobs. The pipeline configuration is version-controlled alongside your code, so the build process evolves with the application.

Basic .gitlab-ci.yml Structure#

A minimal pipeline defines stages and jobs. Stages run sequentially; jobs within the same stage run in parallel:

stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  image: golang:1.22
  script:
    - go build -o myapp ./cmd/myapp
  artifacts:
    paths:
      - myapp
    expire_in: 1 hour

unit-tests:
  stage: test
  image: golang:1.22
  script:
    - go test ./... -v -coverprofile=coverage.out
    # Go's coverage profile is not Cobertura XML; convert it before handing it to GitLab
    - go run github.com/boumenot/gocover-cobertura@latest < coverage.out > coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

deploy-staging:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]    # override the image's kubectl entrypoint so the job script runs under a shell
  script:
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Every job needs a script (trigger jobs are the exception), while stage is optional and defaults to test. The image field specifies the Docker image the job runs inside; if omitted, the job falls back to the image set under the top-level default keyword or to the runner's default.
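
As a small sketch of those defaults, reusing the Go image from the jobs above (the lint job itself is illustrative):

default:
  image: golang:1.22

lint:
  # no stage given, so the job lands in the default "test" stage and uses the default image
  script:
    - go vet ./...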

Nginx Configuration Patterns for Production

Configuration File Structure#

Nginx configuration follows a hierarchical block structure. The main configuration file is typically /etc/nginx/nginx.conf, which includes files from /etc/nginx/conf.d/ or /etc/nginx/sites-enabled/.

# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;           # Match CPU core count
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;     # Per worker process
    multi_accept on;             # Accept multiple connections at once
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 16m;

    include /etc/nginx/conf.d/*.conf;
}

The worker_processes auto directive starts one worker per CPU core. Each worker handles up to worker_connections simultaneous connections, so total capacity is roughly worker_processes * worker_connections: on an 8-core host with the settings above, 8 × 1024 = 8,192 concurrent connections. For most production deployments, auto with 1024 connections per worker is sufficient.

Running Redis on Kubernetes#

Redis on Kubernetes ranges from dead simple (single pod for caching) to operationally complex (Redis Cluster with persistence). The right choice depends on whether you need data durability, high availability, or just a fast throwaway cache.

Single-Instance Redis with Persistence#

For development or small workloads, a single Redis Deployment with a PVC is enough:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  strategy:
    type: Recreate    # avoid a rolling update contending for the single ReadWriteOnce volume
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        command: ["redis-server", "--appendonly", "yes", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data
        resources:
          requests:
            cpu: 100m
            memory: 300Mi
          limits:
            cpu: 500m
            memory: 350Mi
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

Set the memory limit inside Redis (--maxmemory) lower than the container memory limit. If both are 350Mi, the kernel OOM-kills the process during background saves: when Redis forks to rewrite the AOF or dump an RDB snapshot, copy-on-write can push total memory well past maxmemory, approaching double under heavy write load. A safe ratio is maxmemory at 60-75% of the container limit; the manifest above uses 256mb against a 350Mi limit, roughly 73%.

GitHub Actions Advanced Patterns: Reusable Workflows, Matrix Strategies, OIDC, and Optimization

GitHub Actions Advanced Patterns#

Once you move past single-file workflows that run npm test on every push, GitHub Actions becomes a platform for building serious CI/CD infrastructure. The features covered here – reusable workflows, composite actions, matrix strategies, OIDC authentication, and caching – are what separate a working pipeline from a production-grade one.

Reusable Workflows#

A reusable workflow is a complete workflow file that other workflows can call like a function. Define it with the workflow_call trigger:
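
A minimal sketch of such a workflow; the file path, input, and job contents are assumptions, and only the on: workflow_call trigger is the essential part:

# .github/workflows/reusable-build.yml (illustrative path)
name: reusable-build

on:
  workflow_call:
    inputs:
      go-version:
        type: string
        required: false
        default: "1.22"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ inputs.go-version }}
      - run: go build ./... && go test ./...

A caller references it with uses: at the job level, for example uses: my-org/my-repo/.github/workflows/reusable-build.yml@main, and passes inputs through with:.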

Redis Deep Dive: Data Structures, Persistence, Performance, and Operational Patterns#

Redis is an in-memory data store, but calling it a “cache” undersells what it can do. It is a data structure server that happens to be extraordinarily fast. Understanding its data structures, persistence model, and operational characteristics determines whether Redis becomes a reliable part of your architecture or a source of mysterious production incidents.

Data Structures Beyond Key-Value#

Redis supports far more than simple string key-value pairs. Each data structure has specific use cases where it outperforms alternatives.