---
title: "CI/CD Patterns for Monorepos"
description: "Reference for monorepo CI/CD patterns — change detection, selective builds, caching strategies, Turborepo/Nx/Bazel comparison, workspace-aware testing, and artifact management."
url: https://agent-zone.ai/knowledge/cicd/monorepo-ci-patterns/
section: knowledge
date: 2026-02-22
categories: ["cicd"]
tags: ["monorepo","ci","change-detection","caching","turborepo","nx","bazel","workspaces","artifact-management"]
skills: ["monorepo-ci-design","build-optimization","change-detection","cache-management"]
tools: ["turborepo","nx","bazel","github-actions","pnpm","npm-workspaces"]
levels: ["intermediate"]
word_count: 1303
formats:
  json: https://agent-zone.ai/knowledge/cicd/monorepo-ci-patterns/index.json
  html: https://agent-zone.ai/knowledge/cicd/monorepo-ci-patterns/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=CI%2FCD+Patterns+for+Monorepos
---


# CI/CD Patterns for Monorepos

A monorepo puts multiple packages, services, and applications in a single repository. This simplifies cross-package changes and dependency management, but it breaks the assumption that most CI systems are built on: one repo means one build. Without careful pipeline design, every commit triggers a full rebuild of everything, and CI becomes the bottleneck.

## The Core Problem

In a monorepo, a commit that touches `packages/auth-service/src/handler.ts` should build and test `auth-service` and its dependents, but not `billing-service` or `frontend`. Getting this right is the central challenge of monorepo CI.

## Change Detection

Change detection determines which packages are affected by a commit. There are three approaches, each with different tradeoffs.

### Git Diff Based

Compare the current commit against the base branch and map changed files to packages:

```bash
# Find changed files relative to main
CHANGED_FILES=$(git diff --name-only origin/main...HEAD)

# Map to packages (assuming packages/<name>/ structure)
CHANGED_PACKAGES=$(echo "$CHANGED_FILES" | grep '^packages/' | cut -d/ -f2 | sort -u)
```

In GitHub Actions, use path filters directly:

```yaml
on:
  pull_request:
    paths:
      - 'packages/auth-service/**'
      - 'packages/shared-lib/**'

jobs:
  build-auth:
    runs-on: ubuntu-latest
    steps:
      - run: make build-auth
```

The limitation is that path filters are static. They do not understand dependency graphs, so a change to `shared-lib` will not trigger `auth-service`'s pipeline unless explicitly listed. This becomes unmanageable as the graph grows.

### Dependency Graph Based

Use the package manager's dependency graph to determine what is affected by a change. If `shared-lib` changed and `auth-service` depends on `shared-lib`, both must be rebuilt.

```bash
# pnpm: list packages affected by changes since main
pnpm --filter "...[origin/main]" run build

# "[origin/main]" selects packages changed since that ref;
# the leading "..." adds their dependents
```

Nx and Turborepo both provide this natively:

```bash
# Nx: run build for affected projects
npx nx affected --target=build --base=origin/main

# Turborepo: run build for changed packages and their dependents
npx turbo run build --filter="...[origin/main]"
```

This is the right approach for most monorepos: it builds exactly what the change affects and nothing else.

### File Hash Based

Bazel and similar hermetic build systems hash all inputs (source files, dependencies, build configuration) and rebuild only when hashes change. This is the most precise approach but requires the build system to track every input, which is a significant upfront investment.
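The principle can be sketched in a few lines of shell. This is illustrative only -- Bazel's real machinery is far more involved -- and the fixture package here is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of hash-based change detection: hash every input of a build
# step and skip the step when the combined hash matches the last run.
set -euo pipefail

# Demo fixture standing in for a real package.
work=$(mktemp -d)
mkdir -p "$work/src"
echo 'export const n = 1' > "$work/src/index.ts"
cache="$work/.last-build-hash"

hash_inputs() {
  # Hash file contents *and* paths, so renames also invalidate the cache.
  (cd "$work" && find src -type f -print0 | sort -z | xargs -0 sha256sum) \
    | sha256sum | cut -d' ' -f1
}

build_if_changed() {
  local h
  h=$(hash_inputs)
  if [[ -f "$cache" && "$(cat "$cache")" == "$h" ]]; then
    echo "cache hit: skipping build"
  else
    echo "cache miss: building"      # the real build would run here
    echo "$h" > "$cache"
  fi
}

build_if_changed                      # first run: cache miss
build_if_changed                      # nothing changed: cache hit
echo 'export const m = 2' >> "$work/src/index.ts"
build_if_changed                      # input changed: cache miss
```

Hermetic build systems apply this idea to every input, including the compiler and build configuration, which is what makes their caching safe to trust.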

## Selective Builds

Once you know what changed, configure CI to build only the affected packages. Two patterns work well.

### Dynamic Job Generation

Generate CI jobs dynamically based on detected changes:

```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.detect.outputs.packages }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - id: detect
        run: |
          PACKAGES=$(npx turbo run build --dry-run=json --filter="...[origin/main]" \
            | jq -r '[.packages[] | select(. != "//") ] | @json')
          echo "packages=$PACKAGES" >> "$GITHUB_OUTPUT"

  build:
    needs: detect-changes
    if: needs.detect-changes.outputs.packages != '[]'
    strategy:
      matrix:
        package: ${{ fromJson(needs.detect-changes.outputs.packages) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx turbo run build --filter="${{ matrix.package }}"
```

This creates one CI job per affected package. Each job runs in parallel. Unaffected packages do not consume CI resources.

### Build Tool Orchestration

Alternatively, let the build tool handle orchestration in a single CI job:

```yaml
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - run: npx turbo run build test lint --filter="...[origin/main]"
```

This keeps the CI configuration simpler, but you lose per-package status checks on pull requests.

## Caching Strategies

Monorepo CI repeats a lot of work: if you built `shared-lib` yesterday and nothing about it changed, building it again today wastes time. Caching eliminates that redundancy.

### Build Tool Caches

All three major build tools cache task outputs.

**Turborepo** caches by hashing each task's declared inputs. On a cache hit it restores the task's output files and replays its logged terminal output instead of re-executing. Tasks are configured in `turbo.json` (the top-level key is `tasks` in Turborepo 2; v1 called it `pipeline`):

```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"],
      "inputs": ["src/**", "tsconfig.json", "package.json"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": [],
      "inputs": ["src/**", "tests/**"]
    }
  }
}
```

**Nx** uses a computation cache with the same principle. Configure it in `nx.json`:

```json
{
  "targetDefaults": {
    "build": {
      "inputs": ["production", "^production"],
      "outputs": ["{projectRoot}/dist"],
      "cache": true
    },
    "test": {
      "inputs": ["default", "^production"],
      "cache": true
    }
  }
}
```

**Bazel** caches at the action level with content-addressable hashing. Its cache granularity is finer than Turborepo's or Nx's -- individual compilation units rather than entire packages.

### Remote Caching

Local caches only help when the same machine runs consecutive builds; a fresh CI runner starts with an empty cache. Remote caching stores task outputs in a shared location so cache hits work across CI machines and developer laptops alike.
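The mechanism behind all three tools is the same and can be sketched in shell, with a local directory standing in for the remote store. Everything here is illustrative, not any tool's real protocol:

```shell
#!/usr/bin/env bash
# Sketch of remote caching: hash the task inputs, look the hash up in a
# shared store, then either restore the cached output or build and upload.
set -euo pipefail

remote=$(mktemp -d)                   # stand-in for the shared cache server
work=$(mktemp -d)
echo 'export const x = 1' > "$work/input.ts"

task_hash() { sha256sum "$work/input.ts" | cut -d' ' -f1; }

run_task() {
  local h
  h=$(task_hash)
  if [[ -f "$remote/$h" ]]; then
    cp "$remote/$h" "$work/dist.js"   # restore the cached output
    echo "remote cache hit"
  else
    tr -d '\n' < "$work/input.ts" > "$work/dist.js"   # stand-in "build"
    cp "$work/dist.js" "$remote/$h"   # upload so other machines can reuse it
    echo "remote cache miss"
  fi
}

run_task                              # machine A: miss, builds and uploads
rm "$work/dist.js"                    # simulate a second, fresh machine
run_task                              # machine B: hit, restores the output
```

The real services add authentication, eviction, and integrity checks on top, but the hash-lookup-restore loop is the core of all of them.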

**Turborepo Remote Cache** is available as a hosted service through Vercel, or self-hosted with an open-source implementation such as `turborepo-remote-cache`:

```bash
npx turbo run build --team=myteam --token=$TURBO_TOKEN
```

**Nx Cloud** provides remote caching and distributed task execution:

```bash
npx nx-cloud start-ci-run --distribute-on="5 linux-medium-js"
```

**Bazel Remote Execution** uses a remote build cache (e.g., BuildBuddy, EngFlow, or a self-hosted gRPC cache):

```
build --remote_cache=grpcs://remote.buildbuddy.io
build --remote_header=x-buildbuddy-api-key=YOUR_KEY
```

### CI-Level Caching

Cache `node_modules`, package manager stores, and build outputs between CI runs. Layer CI caching with build tool caching -- the CI cache restores the `.turbo` directory, and Turborepo uses it to skip unchanged tasks:

```yaml
- uses: actions/cache@v4
  with:
    path: |
      node_modules
      .turbo
    key: ${{ runner.os }}-turbo-${{ hashFiles('pnpm-lock.yaml') }}-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-turbo-${{ hashFiles('pnpm-lock.yaml') }}-
```

## Build Tool Comparison

| Feature | Turborepo | Nx | Bazel |
|---|---|---|---|
| Language support | JavaScript/TypeScript focused | JavaScript/TypeScript focused, plugins for Go, Java, etc. | Any language with rules |
| Configuration | turbo.json, minimal | nx.json + project.json per package | BUILD files per package |
| Learning curve | Low | Medium | High |
| Change detection | File hash + dependency graph | File hash + dependency graph | Content-addressable hashing |
| Remote cache | Vercel or self-hosted | Nx Cloud or self-hosted | gRPC remote cache protocol |
| Distributed execution | No (CI matrix only) | Nx Cloud agents | Native remote execution |
| Cache granularity | Task level (per package) | Task level (per project) | Action level (per compilation unit) |
| Best for | JS/TS monorepos up to ~50 packages | JS/TS monorepos with complex graphs | Large polyglot monorepos (100+ packages) |

**Choose Turborepo** for JS/TS monorepos where simplicity matters -- fastest to set up, works well up to ~50 packages. **Choose Nx** for larger JS/TS monorepos needing code generation, dependency visualization, and distributed task execution. **Choose Bazel** for polyglot monorepos with hundreds of packages where build correctness and hermeticity are critical -- significant upfront cost but unmatched precision at scale.

## Workspace-Aware Testing

Testing in monorepos must respect the dependency graph. Run tests only for affected packages, but include transitive dependents.

```bash
# pnpm: test packages changed since main, including dependents
pnpm --filter "...[origin/main]" run test

# Nx: test affected projects
npx nx affected --target=test --base=origin/main

# Turborepo: test changed packages and dependents
npx turbo run test --filter="...[origin/main]"
```

For cross-package integration tests, create a dedicated `integration-tests` package that depends on the packages under test rather than putting them in any single package's suite.
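A minimal sketch of such a package's manifest, assuming pnpm workspaces; the `@acme` scope, package names, and test runner are all illustrative:

```json
{
  "name": "@acme/integration-tests",
  "private": true,
  "scripts": {
    "test": "vitest run"
  },
  "devDependencies": {
    "@acme/auth-service": "workspace:*",
    "@acme/billing-service": "workspace:*"
  }
}
```

Because the package declares real workspace dependencies on the services under test, a filter like `"...[origin/main]"` picks it up automatically whenever any of those services change.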

## Artifact Management

Monorepos produce multiple artifacts per build. Managing which artifacts to publish and how to version them requires discipline.

### Independent Versioning

Each package has its own version. Tools like Changesets automate version bumping across the dependency graph:

```bash
npx changeset add       # Developer describes the change
npx changeset version   # CI bumps versions
npx changeset publish   # CI publishes packages
```

### Container Images Per Service

Build container images only for changed services. Tag with the commit SHA and push to the registry:

```yaml
- name: Build affected service images
  run: |
    for svc in $AFFECTED_SERVICES; do
      docker build -t registry.example.com/$svc:${{ github.sha }} \
        -f packages/$svc/Dockerfile .
      docker push registry.example.com/$svc:${{ github.sha }}
    done
```

Note the build context is the repo root (`.`), not the service directory. Monorepo services typically need access to shared packages during the build. Use `.dockerignore` to exclude irrelevant packages from the build context to keep it small. Label each artifact with the source commit and package name so you can trace any deployed artifact back to its exact source.
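One way to attach that provenance is with image labels at build time. A sketch using the standard `org.opencontainers.image` label keys; the registry URL and the `com.example.package` key are placeholders:

```yaml
- name: Build image with provenance labels
  run: |
    docker build \
      --label "org.opencontainers.image.revision=${{ github.sha }}" \
      --label "org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}" \
      --label "com.example.package=$svc" \
      -t registry.example.com/$svc:${{ github.sha }} \
      -f packages/$svc/Dockerfile .
```

Running `docker inspect` on any deployed image then reveals exactly which commit and package produced it.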

## Common Mistakes

1. **Running every build and test on every commit.** Use change detection so only affected packages consume CI time.
2. **Ignoring transitive dependencies.** A change to `shared-lib` must trigger builds of everything that depends on it.
3. **Shallow clones breaking change detection.** Use `fetch-depth: 0` so git diff can compare against the base branch.
4. **Shared mutable state in tests.** Parallel test execution fails if tests share databases, ports, or filesystem paths.
5. **Skipping remote caching.** Without it, every CI run rebuilds from scratch.

