Structured Output Patterns: Getting Reliable JSON from LLMs

Structured Output Patterns#

Agents need structured data from LLMs – not free-form text with JSON somewhere inside it. When an agent asks a model to classify a bug as critical/medium/low and gets back a paragraph explaining the classification, the agent cannot act on it programmatically. Structured output is the bridge between LLM reasoning and deterministic code.

Three Approaches#

JSON Mode#

The simplest approach. Tell the API to return valid JSON and describe the shape you want in the prompt.
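
For example, the bug-triage case above might embed a shape description like this in the prompt; the field names and the sample response are purely illustrative:

Prompt fragment:
  Respond with valid JSON only, using exactly this shape:
  {"severity": "critical" | "medium" | "low", "reason": "<one short sentence>"}

A response the agent can parse directly:
  {"severity": "critical", "reason": "Every write to the affected table loses data."}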

Structuring Effective On-Call Runbooks: Format, Escalation, and Diagnostic Decision Trees

Why Runbooks Exist#

An on-call engineer paged at 3 AM has limited cognitive capacity. They may not be familiar with the specific service that is failing. They may have joined the team two weeks ago. A runbook bridges the gap between the alert firing and the correct human response. Without runbooks, incident response depends on tribal knowledge – on the engineer who built the service and knows its failure modes. That engineer is on vacation when the incident hits.

Tekton Pipelines: Cloud-Native CI/CD on Kubernetes with Tasks, Pipelines, and Triggers

Tekton Pipelines#

Tekton is a Kubernetes-native CI/CD framework. Every pipeline concept – tasks, runs, triggers – is a Kubernetes Custom Resource. Pipelines execute as pods. There is no central server, no UI-driven configuration, no special runtime. If you know Kubernetes, you know how to operate Tekton.

Core Concepts#

Tekton has four primary resources:

  • Task: A sequence of steps that run in a single pod. Each step is a container.
  • TaskRun: An instantiation of a Task with specific inputs. Creating a TaskRun executes the Task.
  • Pipeline: An ordered collection of Tasks with dependencies, parameter passing, and conditional execution.
  • PipelineRun: An instantiation of a Pipeline. Creating a PipelineRun executes the entire pipeline.

The separation between definition (Task/Pipeline) and execution (TaskRun/PipelineRun) means you define your CI/CD process once and trigger it many times with different inputs.
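
As a rough sketch of that separation (names, image, and commands are illustrative), a Task and a TaskRun that executes it with one specific input might look like this:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  params:
    - name: repo-url
      type: string
  steps:
    - name: clone-and-test          # each step is a container in the Task's pod
      image: golang:1.22
      script: |
        git clone "$(params.repo-url)" /tmp/src
        cd /tmp/src
        go test ./...
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: run-tests-once
spec:
  taskRef:
    name: run-tests                 # points at the reusable definition above
  params:
    - name: repo-url
      value: https://github.com/example/repo

Creating the TaskRun is what makes Kubernetes schedule the pod; the Task definition by itself does nothing.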

Template Contribution Guide: Standards for Validation Template Submissions

Template Contribution Guide#

Agent Zone validation templates are reusable infrastructure configurations that agents and developers use to validate changes. A Kubernetes devcontainer template, an ephemeral EKS cluster module, a static validation pipeline script – each follows a standard format so that any agent or developer can pick one up, understand its purpose, and use it without reading through implementation details.

This guide defines the standards for contributing templates. It covers directory structure, required files, testing, quality expectations, versioning, and the submission process.

Terraform Cloud Architecture Patterns: VPC/EKS/RDS on AWS, VNET/AKS on Azure, VPC/GKE on GCP

Terraform Cloud Architecture Patterns#

The three-tier architecture — networking, managed Kubernetes, managed database — is the most common pattern for production deployments on any major cloud. The concepts are identical across AWS, Azure, and GCP. The Terraform code is not. Resource names differ, required arguments differ, default behaviors differ, and the gotchas that catch agents and humans are cloud-specific.

This article shows the real Terraform for each layer on each cloud, side by side, so agents can write correct infrastructure code for whichever cloud the user deploys to.
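
As a small taste of that divergence at the networking layer, here is a minimal sketch of the "same" network on each cloud; names, CIDRs, regions, and the Azure resource group are placeholders:

# AWS
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Azure -- the VNET must be placed in a resource group and region explicitly
resource "azurerm_resource_group" "main" {
  name     = "main-rg"
  location = "eastus"
}

resource "azurerm_virtual_network" "main" {
  name                = "main-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

# GCP -- the network carries no CIDR; subnets hold the ranges, and
# auto_create_subnetworks defaults to true unless you turn it off
resource "google_compute_network" "main" {
  name                    = "main-network"
  auto_create_subnetworks = false
}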

Terraform Code Quality: Patterns, Anti-Patterns, and Review Heuristics

Terraform Code Quality#

Writing Terraform that works is easy. Writing Terraform that is safe, maintainable, and comprehensible to the next person (or agent) is harder. Most quality problems are not bugs — they are patterns that work today but create pain tomorrow: hardcoded IDs that break in a new account, missing lifecycle rules that cause accidental data loss, modules that are too big to understand or too small to justify their existence.
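
One concrete example: a stateful resource without a lifecycle rule is one careless refactor away from deletion. A minimal sketch of the guard, with a hypothetical bucket name:

resource "aws_s3_bucket" "terraform_state" {
  bucket = "example-terraform-state"

  lifecycle {
    # Any plan that would destroy this bucket fails instead of silently
    # deleting the data inside it.
    prevent_destroy = true
  }
}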

Terraform Core Concepts and Workflow

Providers, Resources, and Data Sources#

Terraform has three core object types. Providers are plugins that talk to APIs (AWS, Azure, GCP, Kubernetes, GitHub). Resources are the things you create and manage. Data sources read existing objects without managing them.

# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  required_version = ">= 1.5.0"
}

provider "aws" {
  region = var.region
}

# A resource Terraform creates and manages
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = { Name = "main-vpc" }
}

# A data source that reads an existing AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
  subnet_id     = aws_vpc.main.id
}

Resources create, update, and delete. Data sources only read. If you need information about something Terraform does not manage, use a data source.

Terraform Cost Management: Writing Cost-Aware Infrastructure Code

Terraform Cost Management#

The most expensive line in your cloud bill was written in a .tf file. A single instance_type choice, a forgotten NAT Gateway, or an over-provisioned RDS instance can cost thousands per month — and none of these show up in terraform plan. Plan shows what changes. It does not show what it costs.

This article covers how to write cost-aware Terraform and catch expensive decisions before they reach production.
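
As a hedged sketch of where that cost hides, consider two ordinary-looking declarations; all names and the default instance type are illustrative, and the subnet is assumed to exist elsewhere:

variable "instance_type" {
  type    = string
  default = "t3.medium"   # often the single most cost-sensitive argument in a stack
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "main" {
  # Billed per hour plus per GB processed, even with zero traffic --
  # and terraform plan gives no hint of either.
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}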

Terraform Import and Brownfield Adoption: Bringing Existing Infrastructure Under Code

Terraform Import and Brownfield Adoption#

Most organizations do not start with Infrastructure as Code. They start with console clicks, CLI commands, and scripts. At some point they decide to adopt Terraform — and now they have hundreds of existing resources that need to be brought under management without disruption.

This is the brownfield problem: writing Terraform code that matches existing infrastructure exactly, importing the state so Terraform knows about the resources, and resolving the inevitable drift between what exists and what the code describes.
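
With Terraform 1.5 and later, the import step itself can live in code. A minimal sketch, using a placeholder VPC ID; the resource block has to match what already exists or the first plan will show changes:

import {
  to = aws_vpc.main
  id = "vpc-0123456789abcdef0"   # placeholder -- the real ID of the existing VPC
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"     # must match the existing VPC's CIDR
  tags       = { Name = "main-vpc" }
}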

Terraform Modules: Structure, Composition, and Reuse

What Modules Are#

A Terraform module is a directory containing .tf files. Every Terraform configuration is already a module (the “root module”). When you call another module from your root module, that is a “child module.” Modules let you encapsulate a set of resources behind a clean interface of input variables and outputs.
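
A minimal sketch of calling a child module; the cidr_block variable and vpc_id output are illustrative names for whatever interface the module exposes:

module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

# Downstream resources consume the module through its outputs, not its internals
resource "aws_subnet" "public" {
  vpc_id     = module.vpc.vpc_id
  cidr_block = "10.0.1.0/24"
}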

Module Structure#

A well-organized module looks like this:

modules/vpc/
  main.tf           # resource definitions
  variables.tf      # input variables
  outputs.tf        # output values
  versions.tf       # required providers and terraform version
  README.md         # usage documentation

The module itself has no backend, no provider configuration, and no hardcoded values. Everything configurable comes in through variables. Everything downstream consumers need comes out through outputs.
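
A sketch of that interface for the vpc module above, with illustrative names:

# variables.tf -- everything the caller can configure
variable "cidr_block" {
  type        = string
  description = "CIDR range for the VPC"
}

# outputs.tf -- everything downstream consumers are allowed to depend on
output "vpc_id" {
  value       = aws_vpc.this.id   # the VPC defined in the module's main.tf
  description = "ID of the created VPC"
}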