Golden Paths and Paved Roads

What Golden Paths Are#

A golden path is a pre-built, opinionated workflow that gets a developer from zero to a production-ready artifact with minimal decisions. The term comes from Spotify’s internal platform work. Netflix calls them “paved roads.” The idea is the same: provide a well-maintained, well-tested default path that handles 80% of use cases, while allowing teams to go off-road when they have legitimate reasons.

A golden path is not a mandate. It is a recommendation backed by automation. Create a new Go microservice using the golden path and you get a repository with CI/CD, Kubernetes manifests, observability, and a Backstage catalog entry — working in minutes. The golden path removes the 40+ decisions a developer would otherwise need to make.
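
In Backstage, a golden path is usually exposed as a software template. The following is a rough sketch of what such a template might look like for the Go microservice case above; the template name, organization, and skeleton path are illustrative, not a specific real-world template.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: go-microservice            # illustrative name
  title: Go Microservice (Golden Path)
  description: Repository with CI/CD, Kubernetes manifests, and observability wired in
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Service details
      required: [name]
      properties:
        name:
          type: string
  steps:
    - id: fetch
      action: fetch:template
      input:
        url: ./skeleton            # the opinionated project skeleton
        values:
          name: ${{ parameters.name }}
    - id: publish
      action: publish:github
      input:
        repoUrl: github.com?owner=example-org&repo=${{ parameters.name }}
    - id: register
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

The scaffolding is the smaller half; the skeleton repository is where the real opinions live: the CI workflow, Dockerfile, manifests, and dashboards that make the result production-ready by default.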

Developer Self-Service Workflows

The Cost of Not Having Self-Service#

A developer needs a PostgreSQL database. They file a ticket. It sits in a backlog for two days. A DBA provisions it, sends credentials via Slack DM. Elapsed time: 3 days. Actual need: 5 minutes of configuration. Multiply across every database, cache, queue, and namespace, and manual provisioning becomes the single largest drag on velocity. Self-service lets developers provision pre-approved resources directly, within guardrails the platform team defines.
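
What that looks like in practice depends on the platform, but a common shape is a declarative claim that a developer commits and the platform reconciles. A sketch, assuming a Crossplane-style claim API whose group, kind, and parameters are defined by the platform team (the names here are hypothetical):

apiVersion: database.platform.example.org/v1alpha1   # hypothetical claim API
kind: PostgreSQLInstance
metadata:
  name: orders-db
  namespace: team-orders
spec:
  parameters:
    version: "16"
    storageGB: 20          # guardrails (max size, tiers, regions) enforced by the platform
  writeConnectionSecretToRef:
    name: orders-db-conn   # credentials land in a Kubernetes Secret, not a Slack DM

The developer gets the five-minute path; the platform team keeps control of sizing limits, backups, and network policy in the machinery behind the claim.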

Developer Experience Metrics: Measuring What Matters

The Measurement Problem#

Measuring developer experience wrong is worse than not measuring at all. Lines of code, commit counts, and story points per sprint all create perverse incentives — developers game what gets measured. Good metrics measure outcomes (how fast does code reach production?) and perceptions (do developers feel productive?) without punishing individuals.

The goal is identifying systemic friction in tools, processes, and the platform. Never to evaluate individual developers.
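
As a small illustration of measuring an outcome rather than a person, a deployment workflow can record how long each change took to reach production. A sketch for GitHub Actions, assuming the repository was checked out in an earlier step and leaving open where the number goes (log, metrics store, spreadsheet):

- name: Record lead time for this change
  if: success()
  run: |
    # committer timestamp of the deployed commit, in epoch seconds
    commit_ts=$(git show -s --format=%ct "$GITHUB_SHA")
    now=$(date +%s)
    echo "lead_time_seconds=$((now - commit_ts))"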

Pipeline Security Hardening with SLSA: Provenance, Signing, and Software Supply Chain Integrity

Pipeline Security Hardening with SLSA#

Software supply chain attacks exploit the gap between source code and deployed artifact. The SLSA framework (Supply-chain Levels for Software Artifacts) defines concrete requirements for closing that gap. It is not a tool you install – it is a set of verifiable properties your build process must satisfy.

SLSA Levels#

SLSA defines four levels of increasing assurance:

Level 0: No guarantees. Most pipelines start here.
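
To give a concrete taste of the higher levels, the sketch below generates signed build provenance for an artifact using the slsa-framework/slsa-github-generator reusable workflow, which is designed to satisfy SLSA Build Level 3 requirements. The artifact name and version tag are illustrative; pin to a current release in practice.

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      hashes: ${{ steps.hash.outputs.hashes }}
    steps:
      - uses: actions/checkout@v4
      - run: go build -o myapp .                      # illustrative build
      - id: hash
        run: echo "hashes=$(sha256sum myapp | base64 -w0)" >> "$GITHUB_OUTPUT"

  provenance:
    needs: build
    permissions:
      actions: read
      id-token: write
      contents: write
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
    with:
      base64-subjects: ${{ needs.build.outputs.hashes }}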

Secrets Management in CI/CD Pipelines: OIDC, Vault Integration, and Credential Hygiene

Secrets Management in CI/CD Pipelines#

Every CI/CD pipeline needs credentials: registry tokens, cloud provider keys, database passwords, API keys for third-party services. How you store, deliver, and scope those credentials determines whether a single compromised pipeline job can escalate into a full infrastructure breach. The difference between a mature and an immature pipeline is rarely in the build steps – it is in the secrets management.

The Problem with Static Secrets#

The default approach on every CI platform is storing secrets as encrypted variables: GitHub Actions secrets, GitLab CI variables, the Jenkins credentials store. These work, but they create compounding risks: the credentials are long-lived, they are visible to every step and third-party action in the job, and rotation is a manual chore that rarely happens.
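
The usual remedy is OIDC federation: the pipeline exchanges a short-lived identity token for temporary cloud credentials, so there is nothing static to steal. A sketch for GitHub Actions and AWS; the role ARN and region are placeholders, and the IAM role's trust policy must allow GitHub's OIDC provider and this repository:

permissions:
  id-token: write       # lets the job request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy   # placeholder
          aws-region: eu-west-1
      - run: aws s3 ls    # runs with short-lived credentials; no stored key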

CI/CD Cost Optimization: Runner Sizing, Caching ROI, Spot Instances, and Build Minute Economics

CI/CD Cost Optimization#

CI/CD costs grow quietly. A team of ten, each pushing five times a day through a 15-minute pipeline, burns roughly 3,750 build minutes in a five-day week. On GitHub Actions standard Linux runners at $0.008/minute, that is about $30/week. Scale to fifty developers with integration tests, matrix builds, and nightly jobs, and you are looking at $500-$2,000/month before anyone notices.

The fix is not running fewer tests or skipping builds. It is eliminating waste: jobs that use more compute than they need, caches that are never restored, full builds triggered by README changes, and runners sitting idle between jobs.
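
Some of that waste is a one-line fix. For example, GitHub Actions path filters stop documentation-only changes from triggering the full pipeline; the patterns below are illustrative:

on:
  push:
    paths-ignore:
      - '**.md'        # README and docs changes no longer burn build minutes
      - 'docs/**'

Cache hit rate deserves the same scrutiny: a cache key that never matches, because it hashes a file that changes on every commit, costs upload time on every run and saves nothing.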

Database Schema Migrations in CI/CD: Tools, Pipeline Integration, and Zero-Downtime Strategies

Database Schema Migrations in CI/CD#

Schema migrations are the riskiest step in most deployment pipelines. Application code can be rolled back in seconds by deploying the previous container image. A database migration that drops a column, changes a data type, or restructures a table cannot be undone by pressing a button. Yet many teams run migrations manually, or tack them onto deployment scripts without testing, rollback plans, or zero-downtime considerations.
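
The minimum improvement is making the migration an explicit, gated pipeline step that runs before the new application version rolls out. A sketch using the golang-migrate container image; the tool choice, secret name, and job wiring are assumptions:

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply pending migrations
        run: |
          docker run --rm -v "$PWD/migrations:/migrations" \
            migrate/migrate \
            -path=/migrations -database "$DATABASE_URL" up
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

  deploy:
    needs: migrate        # the deploy only starts after migrations succeed
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy the new image here"

For zero downtime, pair this with expand/contract changes: add the new column or table first, deploy code that writes to both shapes, and only drop the old structure in a later release once nothing reads it.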

Pipeline Observability: CI/CD Metrics, DORA, OpenTelemetry, and Grafana Dashboards

Pipeline Observability#

You cannot improve what you do not measure. Most teams have detailed monitoring for their production applications but treat their CI/CD pipelines as black boxes. When builds are slow, flaky, or failing, the response is anecdotal – “builds feel slow lately” – rather than data-driven. Pipeline observability turns CI/CD from a cost center you tolerate into infrastructure you actively manage.

Core CI/CD Metrics#

Build Duration#

Total time from pipeline trigger to completion. Track this as a histogram, not an average, because averages hide bimodal distributions. A pipeline that takes 5 minutes for code-only changes and 25 minutes for dependency updates averages 15 minutes, which describes neither case accurately.
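
Collecting the raw numbers can be as simple as a final step that pushes each run's duration somewhere a dashboard can read it. A sketch using a Prometheus Pushgateway (the URL is a placeholder; an OpenTelemetry exporter is the richer alternative), with the histogram built on the Grafana side from the individual samples:

- name: Record job start
  run: echo "JOB_START=$(date +%s)" >> "$GITHUB_ENV"

# ... build and test steps ...

- name: Report job duration
  if: always()           # report failures and successes alike
  run: |
    duration=$(( $(date +%s) - JOB_START ))
    printf 'ci_job_duration_seconds %s\n' "$duration" | \
      curl --data-binary @- \
      "https://pushgateway.example.com/metrics/job/ci/repo/${GITHUB_REPOSITORY#*/}"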

CI/CD Anti-Patterns and Migration Strategies: From Snowflakes to Scalable Pipelines

CI/CD Anti-Patterns and Migration Strategies#

CI/CD pipelines accumulate technical debt faster than application code. Nobody refactors a Jenkinsfile. Nobody reviews pipeline YAML with the same rigor as production code. Over time, pipelines become slow, fragile, inconsistent, and actively hostile to developer productivity. Recognizing the anti-patterns is the first step. Migrating to better tooling is often the second.

Anti-Pattern: Snowflake Pipelines#

Every repository has a unique pipeline that someone wrote three years ago and nobody fully understands. Repository A uses Makefile targets, B uses bash scripts, C calls Python, and D has inline shell commands across 40 pipeline steps. There is no shared structure, no reusable components, and no way to make organization-wide changes.
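
The usual antidote is a single shared pipeline, versioned in one repository, that every service calls with a handful of inputs. A sketch using GitHub Actions reusable workflows; the organization and repository names are placeholders:

# .github/workflows/ci.yml in each service repository
on: [push, pull_request]
jobs:
  ci:
    uses: example-org/pipeline-templates/.github/workflows/go-service.yml@v1
    with:
      go-version: '1.23'
    secrets: inherit
---
# go-service.yml in the shared pipeline-templates repository
on:
  workflow_call:
    inputs:
      go-version:
        type: string
        required: true

Organization-wide changes (a new scanner, a cache fix, a runner migration) then happen in one place, and every repository picks them up on the next tagged release.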

Advanced GitHub Actions Patterns: Matrix Builds, OIDC, Composite Actions, and Self-Hosted Runners

Advanced GitHub Actions Patterns#

Once you understand the basics of GitHub Actions, these patterns solve the real-world problems: testing across multiple environments, authenticating to cloud providers without static secrets, building reusable action components, and scaling runners.

Matrix Builds#

Test across multiple OS versions, language versions, or configurations in parallel:

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest]
        go-version: ['1.22', '1.23']
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - run: go test ./...

This creates 4 jobs (2 OS x 2 Go versions) running in parallel. Set fail-fast: false so a failure in one combination does not cancel the others – you want to see all failures at once.
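
Composite actions solve a smaller version of the same reuse problem: a sequence of steps that every workflow repeats gets bundled behind a single uses: line. A sketch; the path and versions are illustrative:

# .github/actions/setup-build/action.yml
name: Setup Go build environment
description: Toolchain setup and module download shared by all workflows
runs:
  using: composite
  steps:
    - uses: actions/setup-go@v5
      with:
        go-version: '1.23'
    - name: Download modules
      run: go mod download
      shell: bash          # composite run steps must declare a shell

Workflows then reference it with a single step, uses: ./.github/actions/setup-build, and the duplicated setup lives in one file.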