Agent Context Preservation for Long-Running Workflows: Checkpoints, Sub-Agent Delegation, and Avoiding Context Pollution

The context window is the single most important constraint in agent-driven work. A single-turn task uses a fraction of it. A multi-hour project fills it, overflows it, and degrades the agent’s reasoning quality long before the task is complete. Agents that work effectively on ambitious projects are not smarter – they manage context better.

This article covers practical, battle-tested patterns for preserving context across long sessions, delegating to sub-agents without losing coherence, and avoiding context pollution – the gradual degradation that happens when irrelevant information accumulates in the working context.
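
As a taste of the checkpoint pattern covered below, here is a minimal file-based sketch. Every name in it (the checkpoint path, the field layout) is illustrative rather than drawn from any particular framework: the agent periodically distills its progress into a small file, and a fresh session rebuilds a compact working context from that file instead of replaying the full transcript.

```python
# Illustrative sketch: persist distilled state, not the raw transcript.
import json
import time
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")  # hypothetical location

def save_checkpoint(goal: str, done: list[str], next_steps: list[str]) -> None:
    """Write a compact progress summary to disk."""
    CHECKPOINT.write_text(json.dumps({
        "goal": goal,
        "done": done,
        "next_steps": next_steps,
        "saved_at": time.time(),
    }, indent=2))

def resume_prompt() -> str:
    """Rebuild a small working context from the last checkpoint."""
    state = json.loads(CHECKPOINT.read_text())
    return (
        f"Goal: {state['goal']}\n"
        f"Completed: {', '.join(state['done'])}\n"
        f"Next: {', '.join(state['next_steps'])}"
    )
```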

Agent Memory and Retrieval: Patterns for Persistent, Searchable Agent Knowledge

An agent without memory repeats mistakes, forgets context, and relearns the same facts every session. An agent with too much memory wastes context window tokens on irrelevant history and retrieves noise instead of signal. Effective memory sits between these extremes – storing what matters, retrieving what is relevant, and forgetting what is stale.

This reference covers the concrete patterns for building agent memory systems, from simple file-based approaches to production-grade retrieval pipelines.
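
As a baseline for the simple end of that spectrum, here is a minimal sketch of file-based memory with keyword-overlap retrieval. The file name and scoring function are illustrative; a production pipeline would swap the scorer for embeddings, but the store/retrieve shape is the same.

```python
# Illustrative sketch: append-only JSONL store, crude keyword retrieval.
import json
from pathlib import Path

STORE = Path("memory.jsonl")  # hypothetical location

def remember(text: str) -> None:
    """Append one memory as a JSON line."""
    with STORE.open("a") as f:
        f.write(json.dumps({"text": text}) + "\n")

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored memories sharing the most words with the query."""
    if not STORE.exists():
        return []
    entries = [json.loads(line)["text"] for line in STORE.read_text().splitlines()]
    q = set(query.lower().split())
    entries.sort(key=lambda t: -len(q & set(t.lower().split())))
    return entries[:k]
```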

Long-Running Workflow Orchestration: State Machines, Checkpointing, and Resumable Multi-Agent Execution

Most agent examples show single-turn or single-session tasks: answer a question, write a function, debug an error. Real projects are different. Building a feature, migrating a database, setting up a monitoring stack – these take hours, span multiple sessions, involve parallel work streams, and must survive context window resets, session timeouts, and partial failures.

This article covers the architecture for workflows that last hours or days: how to model progress as a state machine, how to checkpoint for reliable resumption, how to delegate to parallel sub-agents without losing coherence, and how to recover when things fail partway through.
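
To fix the shape of the state-machine idea before the details, here is a minimal sketch. The step names and checkpoint path are hypothetical; the point is that the current state is persisted after every transition, so a restarted process resumes where it left off rather than from the beginning.

```python
# Illustrative sketch: linear state machine with checkpointed resumption.
import json
from pathlib import Path

STATE_FILE = Path("workflow_state.json")  # hypothetical location
STEPS = ["plan", "migrate_schema", "backfill_data", "verify", "done"]

def load_state() -> str:
    """Resume from the last checkpoint, or start at the first step."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["state"]
    return STEPS[0]

def run_workflow() -> None:
    state = load_state()
    while state != "done":
        print(f"running step: {state}")
        # ... do the real (idempotent) work for this step here ...
        state = STEPS[STEPS.index(state) + 1]
        STATE_FILE.write_text(json.dumps({"state": state}))  # checkpoint
```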

Two-Pass Analysis: The Summarize-Then-Correlate Pattern for Scaling Beyond Context Windows

A 32B model with a 32K context window can process roughly 8-10 source files at once. A real codebase has hundreds. Concatenating everything into one prompt fails — the context overflows, quality degrades, and the model either truncates or hallucinates connections.

The two-pass pattern solves this by splitting analysis into two stages:

  1. Pass 1 (Summarize): A fast 7B model reads each file independently and produces a focused summary.
  2. Pass 2 (Correlate): A capable 32B model reads all summaries (which are much shorter than the original files) and answers the cross-cutting question.

This effectively multiplies your context window by the compression ratio of summarization — typically 10-20x. A 32K context that handles 10 files directly can handle 100-200 files through summaries.
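
A minimal sketch of the pipeline, with summarize_llm and correlate_llm as hypothetical stand-ins for the 7B and 32B model calls (wire in your own client):

```python
# Illustrative sketch of the two-pass summarize-then-correlate pipeline.
from pathlib import Path

def summarize_llm(text: str) -> str:
    raise NotImplementedError  # call the small (7B) model here

def correlate_llm(prompt: str) -> str:
    raise NotImplementedError  # call the capable (32B) model here

def two_pass(files: list[Path], question: str) -> str:
    # Pass 1: each file is summarized independently, so input size per
    # call stays small no matter how many files there are.
    summaries = [
        f"## {f.name}\n{summarize_llm(f.read_text())}" for f in files
    ]
    # Pass 2: only the compressed summaries enter one context window.
    return correlate_llm(
        f"Question: {question}\n\n" + "\n\n".join(summaries)
    )
```

Because the Pass 1 calls are independent, they parallelize trivially; the only serial bottleneck is the single Pass 2 call over the concatenated summaries.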