Prompt Engineering for Infrastructure Operations: Templates, Safety, and Structured Reasoning

Prompt Engineering for Infrastructure Operations#

Infrastructure prompts differ from general-purpose prompts in one critical way: the output often drives real actions on real systems. A hallucinated filename in a creative writing task is harmless. A hallucinated resource name in a Kubernetes delete command causes an outage. Every prompt pattern here is designed with that asymmetry in mind – prioritizing correctness and safety over cleverness.

Structured Output for Infrastructure Data#

Infrastructure operations produce structured data: IP addresses, resource names, status codes, configuration values. Free-form text responses are fragile to parse; force structured output from the start.
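A minimal sketch of what that looks like in practice, assuming a Python caller and an illustrative triage task (the field names and the fail-closed parser are assumptions, not a fixed standard):

```python
import json

# Pin the output to an explicit JSON shape so the response can be parsed
# and validated before anything acts on it. Field names are illustrative.
PROMPT_TEMPLATE = """You are analyzing infrastructure data. Respond with ONLY a JSON object
matching this exact schema, no prose:

{{
  "resource": "<exact resource name from the input, never invented>",
  "status": "healthy" | "degraded" | "failed",
  "action_required": true | false,
  "reason": "<one sentence>"
}}

Input:
{input_data}"""

def parse_response(raw: str) -> dict:
    """Fail closed: if the output is not valid JSON, treat it as no-action."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"resource": None, "status": "unknown",
                "action_required": False, "reason": "unparseable model output"}
```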

Prompt Engineering for Local Models: Presets, Focus Areas, and Differences from Cloud Model Prompting

Prompt Engineering for Local Models#

Prompting a 7B local model is not the same as prompting Claude or GPT-4. Cloud models are heavily trained for instruction following, tolerate vague prompts, and self-correct. Small local models need more structure, more constraints, and more explicit formatting instructions. Prompts that work effortlessly on cloud models often produce garbage on local models.

This is not a weakness — it is a design consideration. Local models trade generality for speed and cost. Your prompts must compensate by being more specific.
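To make the difference concrete, here is the same request written two ways (the log-summary task and wording are illustrative): the first version is usually enough for a large cloud model, the second shows the explicitness a 7B model tends to need.

```python
# Often enough for a large cloud model:
cloud_style = "Summarize this nginx error log and tell me what to fix."

# What a 7B model typically needs: explicit role, output format,
# a length bound, and a statement of what NOT to do.
local_style = """You are a log analysis assistant.

Task: Summarize the nginx error log below.
Output format: exactly 3 bullet points, each under 20 words.
Then one line starting with "FIX:" containing a single recommended action.
Do not add any other text. Do not invent log lines that are not in the input.

Log:
{log_text}"""
```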

RAG for Codebases Without Cloud APIs: ChromaDB, Embedding Models, and Semantic Code Search

RAG for Codebases Without Cloud APIs#

When a codebase has hundreds of files, neither direct concatenation nor summarize-then-correlate is ideal for targeted questions like “where is authentication handled?” or “what calls the payment API?” RAG (Retrieval-Augmented Generation) indexes the codebase into a vector database and retrieves only the relevant chunks for each query.

The key advantage: query cost stays roughly constant as the codebase grows. Whether the codebase has 50 files or 5,000, answering a question costs about the same because only the top-K relevant chunks are retrieved and sent to the model.
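A minimal sketch with chromadb (the fixed-size chunking, directory name, and queries are illustrative; Chroma's default local embedding function is used implicitly, but any sentence-transformers model could be plugged in instead):

```python
import chromadb
from pathlib import Path

client = chromadb.PersistentClient(path="./code_index")
collection = client.get_or_create_collection(name="codebase")

# Index pass: one-time cost proportional to repo size.
for path in Path("src").rglob("*.py"):
    text = path.read_text(errors="ignore")
    chunks = [text[i:i + 1500] for i in range(0, len(text), 1500)]
    if not chunks:
        continue
    collection.add(
        ids=[f"{path}:{n}" for n in range(len(chunks))],
        documents=chunks,
        metadatas=[{"file": str(path)} for _ in chunks],
    )

# Query pass: only the top-K chunks are retrieved and would be sent to the model.
results = collection.query(query_texts=["where is authentication handled?"], n_results=5)
for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
    print(meta["file"], doc[:80])
```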

Rate Limiting Implementation Patterns

Rate Limiting Implementation Patterns#

Rate limiting controls how many requests a client can make within a time period. It protects services from overload, ensures fair usage across clients, prevents abuse, and provides a mechanism for graceful degradation under load. Every production API needs rate limiting at some layer.

Algorithm Comparison#

Fixed Window#

The simplest algorithm. Divide time into fixed windows (e.g., 1-minute intervals) and count requests per window. When the count exceeds the limit, reject requests until the next window starts.
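A minimal in-process sketch (the per-client keying is an assumption; production setups usually hold these counters in Redis or at the API gateway):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Fixed window: count requests per client per window, reject once the
    count exceeds the limit, and start fresh when the next window begins.
    Known weakness: a burst straddling a window boundary can briefly admit
    up to 2x the limit."""

    def __init__(self, limit: int, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        # Old window keys are never evicted here; a real implementation
        # would expire them (e.g., Redis keys with a TTL).
        self.counts: dict[tuple[str, int], int] = defaultdict(int)

    def allow(self, client_id: str) -> bool:
        window_start = int(time.time()) // self.window
        key = (client_id, window_start)
        self.counts[key] += 1
        return self.counts[key] <= self.limit
```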

Structured Output from Small Local Models: JSON Mode, Extraction, Classification, and Token Runaway Fixes

Structured Output from Small Local Models#

Small models (2-7B parameters) produce structured output that is 85-95% as accurate as cloud APIs for well-defined extraction and classification tasks. The key is constraining the output space so the model’s limited reasoning capacity is focused on filling fields rather than deciding what to generate.

This is where local models genuinely compete with — and sometimes match — models 30x their size.

JSON Mode#

Ollama’s JSON mode forces the model to produce valid JSON:
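A minimal sketch against Ollama's local REST API (the model name and extraction task are illustrative). Setting format to "json" constrains decoding to valid JSON; the schema itself still has to be spelled out in the prompt.

```python
import json
import requests

prompt = """Extract the following fields from this log line as JSON with keys
"timestamp", "level", "service":

2024-03-18T09:41:02Z ERROR payments-api connection pool exhausted"""

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:7b", "prompt": prompt, "format": "json", "stream": False},
    timeout=120,
)
# Guaranteed to be parseable JSON, not guaranteed to have the right fields --
# validate before using it.
data = json.loads(resp.json()["response"])
print(data)
```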

Structured Output Patterns: Getting Reliable JSON from LLMs

Structured Output Patterns#

Agents need structured data from LLMs – not free-form text with JSON somewhere inside it. When an agent asks a model to classify a bug as critical/medium/low and gets back a paragraph explaining the classification, the agent cannot act on it programmatically. Structured output is the bridge between LLM reasoning and deterministic code.

Three Approaches#

JSON Mode#

The simplest approach. Tell the API to return valid JSON and describe the shape you want in the prompt.
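A minimal sketch using the OpenAI Python client and its JSON mode (the model name is an assumption; the same pattern applies to any API that offers a JSON response format), applied to the bug-classification example above:

```python
import json
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Classify this bug report as critical, medium, or low. "
            'Reply in JSON: {"severity": "...", "reason": "..."}\n\n'
            "Bug: checkout page returns 500 for all users since the last deploy"
        ),
    }],
)
result = json.loads(completion.choices[0].message.content)
assert result["severity"] in {"critical", "medium", "low"}
```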

Two-Pass Analysis: The Summarize-Then-Correlate Pattern for Scaling Beyond Context Windows

Two-Pass Analysis: Summarize-Then-Correlate#

A 32B model with a 32K context window can process roughly 8-10 source files at once. A real codebase has hundreds. Concatenating everything into one prompt fails — the context overflows, quality degrades, and the model either truncates or hallucinates connections.

The two-pass pattern solves this by splitting analysis into two stages:

  1. Pass 1 (Summarize): A fast 7B model reads each file independently and produces a focused summary.
  2. Pass 2 (Correlate): A capable 32B model reads all summaries (which are much shorter than the original files) and answers the cross-cutting question.

This effectively multiplies your context window by the compression ratio of summarization — typically 10-20x. A 32K context that handles 10 files directly can handle 100-200 files through summaries.
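A compressed sketch of both passes against a local Ollama server (the model names, summary budget, and question are illustrative):

```python
import requests
from pathlib import Path

def generate(model: str, prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    return resp.json()["response"]

question = "How does authentication flow through this codebase?"

# Pass 1: a fast 7B model summarizes each file independently.
summaries = []
for path in Path("src").rglob("*.py"):
    summary = generate(
        "qwen2.5:7b",
        f"Summarize this file in under 150 words, focusing on anything relevant to: "
        f"{question}\n\nFile {path}:\n{path.read_text(errors='ignore')}",
    )
    summaries.append(f"## {path}\n{summary}")

# Pass 2: a 32B model correlates across the much shorter summaries.
answer = generate(
    "qwen2.5:32b",
    f"Using these per-file summaries, answer: {question}\n\n" + "\n\n".join(summaries),
)
print(answer)
```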

MCP Server Patterns: Building Tools for AI Agents

MCP Server Patterns#

Model Context Protocol (MCP) is Anthropic’s open standard for connecting AI agents to external tools and data. Instead of every agent framework inventing its own tool integration format, MCP provides a single protocol that any agent can speak.

An agent that supports MCP can discover tools at runtime, understand their inputs and outputs, and invoke them – without hardcoded integration code for each tool.

Server Structure: Three Primitives#

An MCP server exposes three types of capabilities:
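  1. Tools: actions the agent can invoke, such as running a command or querying a system.
  2. Resources: read-only data the agent can load into its context.
  3. Prompts: reusable prompt templates the server offers to the client.

A minimal tool-only server, sketched here with the FastMCP helper from the official Python SDK (the server name and the stubbed tool are illustrative):

```python
# Run with `python server.py`; assumes the `mcp` Python SDK is installed.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("infra-tools")

@mcp.tool()
def get_pod_status(namespace: str) -> str:
    """Return a one-line status summary for pods in a namespace."""
    # A real implementation would call the Kubernetes API; stubbed here.
    return f"all pods in {namespace}: Running"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```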

Writing Custom Prometheus Exporters: Exposing Application and Business Metrics

When to Write a Custom Exporter#

The Prometheus ecosystem has exporters for most infrastructure components: node_exporter for Linux hosts, kube-state-metrics for Kubernetes objects, mysqld_exporter for MySQL, and hundreds more. You write a custom exporter when your application or service does not expose a Prometheus metrics endpoint, when you need business metrics that no generic exporter can provide (revenue, signups, queue depth), or when you need to adapt a system that exposes metrics in a proprietary format.
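A minimal sketch of the business-metrics case using the prometheus_client library (metric names, the port, and the stubbed data source are illustrative):

```python
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("orders_queue_depth", "Number of orders waiting to be processed")
SIGNUPS = Gauge("signups_last_hour", "User signups in the trailing hour")

def collect_business_metrics() -> None:
    # Stand-in for a real query against your application's database or API.
    QUEUE_DEPTH.set(random.randint(0, 50))
    SIGNUPS.set(random.randint(0, 200))

if __name__ == "__main__":
    start_http_server(9400)  # metrics served at http://localhost:9400/metrics
    while True:
        collect_business_metrics()
        time.sleep(15)
```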