Secrets Management in CI/CD Pipelines: OIDC, Vault Integration, and Credential Hygiene

Secrets Management in CI/CD Pipelines#

Every CI/CD pipeline needs credentials: registry tokens, cloud provider keys, database passwords, API keys for third-party services. How you store, deliver, and scope those credentials determines whether a single compromised pipeline job can escalate into a full infrastructure breach. The difference between a mature and an immature pipeline is rarely in the build steps – it is in how secrets are managed.

The Problem with Static Secrets#

The default approach on every CI platform is storing secrets as encrypted variables: GitHub Actions secrets, GitLab CI variables, Jenkins credentials store. These work but create compounding risks:

Advanced GitHub Actions Patterns: Matrix Builds, OIDC, Composite Actions, and Self-Hosted Runners

Advanced GitHub Actions Patterns#

Once you understand the basics of GitHub Actions, these patterns solve the real-world problems: testing across multiple environments, authenticating to cloud providers without static secrets, building reusable action components, and scaling runners.

Matrix Builds#

Test across multiple OS versions, language versions, or configurations in parallel:

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest]
        go-version: ['1.22', '1.23']
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
      - run: go test ./...

This creates 4 jobs (2 OS x 2 Go versions) running in parallel. Set fail-fast: false so a failure in one combination does not cancel the others – you want to see all failures at once.
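A minimal sketch of how the same strategy block can be pruned or extended – the exclude entry drops one combination, and the include entry adds an illustrative coverage variable to a single existing job:

strategy:
  fail-fast: false
  matrix:
    os: [ubuntu-latest, macos-latest]
    go-version: ['1.22', '1.23']
    exclude:
      # drop the macOS + Go 1.22 combination
      - os: macos-latest
        go-version: '1.22'
    include:
      # add an extra variable to the existing ubuntu + Go 1.23 job
      - os: ubuntu-latest
        go-version: '1.23'
        coverage: true

exclude removes matching combinations before jobs are created, so this runs three jobs instead of four; an include entry whose keys all match an existing combination adds variables to that job rather than creating a new one.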


Kubernetes API Server: Architecture, Authentication, Authorization, and Debugging#

The API server (kube-apiserver) is the front door to your Kubernetes cluster. Every interaction – kubectl commands, controller reconciliation loops, kubelet status updates, admission webhooks – goes through the API server. It is the only component that reads from and writes to etcd. If the API server is down, the cluster is unmanageable. Everything else (scheduler, controllers, kubelets) can tolerate brief API server outages because they cache state locally, but no mutations happen until the API server is back.
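A quick way to see this from the command line – a sketch assuming a working kubeconfig; the flags and health endpoints are standard kubectl, only the resource choice is illustrative:

# -v=6 logs the HTTP method, URL, and response code of every request
# kubectl sends, making it visible that each command is just an API call.
kubectl get pods -v=6

# The API server also exposes its own health endpoints, which is the
# first thing to check when the control plane seems unreachable.
kubectl get --raw '/livez'
kubectl get --raw '/readyz?verbose'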

Running Terraform in CI/CD Pipelines

The Core Pattern#

The standard CI/CD pattern for Terraform: run terraform plan on every pull request and post the output as a PR comment. Run terraform apply only when the PR merges to main. This gives reviewers visibility into what will change before approving.

GitHub Actions Workflow#

name: Terraform
on:
  pull_request:
    paths: ["infra/**"]
  push:
    branches: [main]
    paths: ["infra/**"]

permissions:
  id-token: write    # OIDC
  contents: read
  pull-requests: write  # PR comments

jobs:
  terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infra
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/terraform-ci
          aws-region: us-east-1

      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.7.0
          terraform_wrapper: true  # captures stdout for PR comments

      - name: Terraform Init
        run: terraform init -input=false

      - name: Terraform Format Check
        run: terraform fmt -check -recursive

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        id: plan
        run: terraform plan -input=false -no-color -out=tfplan
        continue-on-error: true

      - name: Post Plan to PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const plan = `${{ steps.plan.outputs.stdout }}`;
            const truncated = plan.length > 60000
              ? plan.substring(0, 60000) + "\n\n... truncated ..."
              : plan;
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `#### Terraform Plan\n\`\`\`\n${truncated}\n\`\`\``
            });

      - name: Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false tfplan

The plan step uses continue-on-error: true so the PR comment step still runs. A separate step checks the actual plan outcome. Apply only runs on pushes to main.

GitHub Actions Advanced Patterns: Reusable Workflows, Matrix Strategies, OIDC, and Optimization

GitHub Actions Advanced Patterns#

Once you move past single-file workflows that run npm test on every push, GitHub Actions becomes a platform for building serious CI/CD infrastructure. The features covered here – reusable workflows, composite actions, matrix strategies, OIDC authentication, and caching – are what separate a working pipeline from a production-grade one.

Reusable Workflows#

A reusable workflow is a complete workflow file that other workflows can call like a function. Define it with the workflow_call trigger:
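A minimal sketch – the file path, input, and secret names here are illustrative:

# .github/workflows/reusable-build.yml
name: reusable-build
on:
  workflow_call:
    inputs:
      go-version:
        required: true
        type: string
    secrets:
      registry_token:
        required: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ inputs.go-version }}
      - run: go build ./...

A caller invokes it at the job level with uses, passing inputs via with and secrets via secrets:

jobs:
  build:
    uses: ./.github/workflows/reusable-build.yml
    with:
      go-version: '1.23'
    secrets:
      registry_token: ${{ secrets.REGISTRY_TOKEN }}

The caller can also write secrets: inherit to forward all of its own secrets instead of listing them individually.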

OAuth2 and OIDC for Infrastructure

OAuth2 vs OIDC: What Actually Matters#

OAuth2 is an authorization framework. It answers the question “what is this client allowed to do?” by issuing access tokens. It does not tell you who the user is. OIDC (OpenID Connect) is a layer on top of OAuth2 that adds authentication. It answers “who is this user?” by adding an ID token – a signed JWT containing user identity claims like email, name, and group memberships.