Kubernetes Troubleshooting Decision Trees: Symptom to Diagnosis to Fix

Kubernetes Troubleshooting Decision Trees#

Troubleshooting Kubernetes in production is about eliminating possibilities in the right order. Every symptom maps to a finite set of causes, and each cause has a specific diagnostic command. The decision trees below encode that mapping. Start at the symptom, follow the branches, run the commands, and the output tells you which branch to take next.

These trees are designed to be followed mechanically. No intuition required – just execute the commands and interpret the results.
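
To make that concrete, here is a minimal sketch of one branch, a pod stuck in CrashLoopBackOff; the namespace and pod name are placeholders:

# Symptom: pod restarting repeatedly (CrashLoopBackOff)
kubectl get pods -n myapp                      # is the restart count still climbing?
kubectl describe pod mypod -n myapp            # Events: OOMKilled, failed probe, or image pull error?
kubectl logs mypod -n myapp --previous         # logs from the crashed attempt, not the current one

Each output narrows the cause: an OOMKilled event points at memory limits, a failing probe points at probe configuration or slow startup, and the previous logs usually show application-level crashes directly.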

Linux Troubleshooting: A Systematic Approach to Diagnosing System Issues

The USE Method: A Framework for Systematic Diagnosis#

The USE method, developed by Brendan Gregg, provides a structured approach to system performance analysis. For every resource on the system – CPU, memory, disk, network – you check three things:

  • Utilization: How busy is the resource? (e.g., CPU at 90%)
  • Saturation: Is work queuing because the resource is overloaded? (e.g., CPU run queue length)
  • Errors: Are there error events? (e.g., disk I/O errors, network packet drops)

This method prevents the common trap of randomly checking things. Instead, you systematically walk through each resource and check all three dimensions. If you find high utilization, saturation, or errors on a resource, you have found your bottleneck.
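
Walking the CPU through all three dimensions, for example, takes a handful of standard Linux commands (intervals and sample counts below are illustrative):

# Utilization: per-CPU busy time, three samples at one-second intervals
mpstat -P ALL 1 3
# Saturation: the "r" column is the run queue; sustained values above the CPU count mean work is queuing
vmstat 1 5
# Errors: recent kernel and hardware error events
dmesg -T | tail -n 50

The same three questions repeat for memory (free -m, swap activity in vmstat, OOM messages in dmesg), disk (iostat -xz 1), and network (ip -s link for drops and errors).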

Load Balancer Patterns: L4 vs L7, Health Checks, Session Affinity, and Cloud LB Selection

L4 vs L7 Load Balancing#

The distinction between Layer 4 and Layer 7 load balancing determines what the load balancer can see and what routing decisions it can make.

Layer 4 (Transport) load balancers work at the TCP/UDP level. They see source/destination IPs and ports but not the content of the traffic. They forward raw TCP connections to backends. This makes them fast (no protocol parsing), protocol-agnostic (they work for HTTP, gRPC, database connections, custom protocols), and largely transparent (the backend sees traffic close to what the client sent, though source addresses may be rewritten unless client IP preservation is enabled). Use L4 for database connections, raw TCP services, and when you need maximum throughput with minimum latency.
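
On Kubernetes, the usual way to get an L4 load balancer is a Service of type LoadBalancer. The sketch below requests an AWS NLB for a hypothetical Postgres backend; the names are placeholders and the annotation is the legacy in-tree one, so check your provider's current annotation set:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: postgres-lb                  # placeholder name
  annotations:
    # Ask AWS for a Network Load Balancer (L4) instead of the default Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: postgres                    # placeholder selector
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
EOF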

Long-Term Metrics Storage: Thanos vs Grafana Mimir vs VictoriaMetrics

The Retention Problem#

Prometheus stores metrics on local disk with a default retention of 15 days. Most production teams extend this to 30 or 90 days, but local storage has hard limits. A single Prometheus instance cannot scale disk beyond the node it runs on. It provides no high availability – if the instance goes down, you lose scraping and query access. And each Prometheus instance only sees its own targets, so there is no unified view across clusters or regions.
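
The local limits themselves are just flags; a sketch with illustrative paths and values:

# Keep at most 30 days or 500GB of local TSDB data, whichever limit is hit first
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --storage.tsdb.retention.time=30d \
  --storage.tsdb.retention.size=500GB

Everything beyond those limits has to live somewhere else, which is exactly the gap Thanos, Mimir, and VictoriaMetrics exist to fill.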

Managed Kubernetes vs Self-Managed: EKS/AKS/GKE vs kubeadm vs k3s vs RKE

Managed Kubernetes vs Self-Managed#

The fundamental tradeoff is straightforward: managed Kubernetes trades control for reduced operational burden, while self-managed Kubernetes gives you full control at the cost of owning everything – etcd, certificates, upgrades, high availability, and recovery.

This decision has cascading effects on team structure, hiring, on-call burden, and long-term maintenance cost. Choose deliberately.

Managed Kubernetes (EKS, AKS, GKE)#

The cloud provider runs the control plane: API server, etcd, controller manager, scheduler. They handle patching, scaling, and high availability for these components. You manage worker nodes and workloads.
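
With EKS, for example, a usable cluster is a single (slow) eksctl command; the names and sizes below are illustrative:

# AWS creates and operates the control plane; the node group is yours to patch and scale
eksctl create cluster \
  --name prod-cluster \
  --region us-east-1 \
  --nodes 3 \
  --node-type m6i.large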

MCP Server Patterns: Building Tools for AI Agents

MCP Server Patterns#

Model Context Protocol (MCP) is Anthropic’s open standard for connecting AI agents to external tools and data. Instead of every agent framework inventing its own tool integration format, MCP provides a single protocol that any agent can speak.

An agent that supports MCP can discover tools at runtime, understand their inputs and outputs, and invoke them – without hardcoded integration code for each tool.

Server Structure: Three Primitives#

An MCP server exposes three types of capabilities:
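
  • Tools: functions the agent can invoke, described with typed input schemas
  • Resources: data the agent can read, addressed by URI
  • Prompts: reusable prompt templates the server offers to the client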

Minikube Networking: Services, Ingress, DNS, and LoadBalancer Emulation

Minikube Networking: Services, Ingress, DNS, and LoadBalancer Emulation#

Minikube networking behaves differently from cloud Kubernetes in ways that cause confusion. LoadBalancer services do not get external IPs by default, the minikube IP may or may not be directly reachable from your host depending on the driver, and ingress requires specific addon setup. Understanding these differences prevents hours of debugging connection timeouts to services that are actually running fine.

How Minikube Networking Works#

Minikube creates a single node (a VM or container depending on the driver) with its own IP address. Pods inside the cluster get IPs from an internal CIDR. Services get ClusterIPs from another internal range. The bridge between your host machine and the cluster depends entirely on which driver you use.
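
The practical consequences are easiest to see from the CLI; the service name below is a placeholder:

minikube ip                         # the node's IP; reachable from the host with VM drivers, often not with the docker driver on macOS/Windows
minikube service myservice --url    # prints a host-reachable URL for the service (proxying it on drivers that need it)
minikube tunnel                     # runs a privileged route/tunnel so LoadBalancer services get an external IP
minikube addons enable ingress      # installs the NGINX ingress controller addon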

Minikube Setup, Drivers, and Resource Configuration

Minikube Setup, Drivers, and Resource Configuration#

Minikube runs a single-node Kubernetes cluster on your local machine. The difference between a minikube setup that feels like a toy and one that behaves like production comes down to three choices: the driver, the resource allocation, and the Kubernetes version. Get these wrong and you spend more time fighting the tool than using it.
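
All three choices are arguments to minikube start; a sketch with illustrative values, so pick a driver, sizing, and version that match your platform and your production clusters:

# Pin the driver, resources, and Kubernetes version explicitly instead of relying on defaults
minikube start \
  --driver=docker \
  --cpus=4 \
  --memory=8192 \
  --kubernetes-version=v1.30.0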

Installation#

On macOS with Homebrew:

brew install minikube

On Linux via direct download:
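
# Standard direct-download install; swap amd64 for arm64 on ARM hosts
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube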

Multi-Architecture Container Images: Buildx, Manifest Lists, and Registry Patterns

Multi-Architecture Container Images#

You can no longer assume containers run only on x86. AWS Graviton instances are ARM64. Developer laptops with Apple Silicon are ARM64. Ampere cloud instances are ARM64. A container image tagged myapp:latest needs to work on both architectures, or you end up maintaining separate tags and hoping nobody pulls the wrong one.

Manifest Lists#

A manifest list (also called an OCI image index) lets a single tag point to multiple architecture-specific images. When a client pulls myapp:latest, it fetches the index and then pulls the image whose platform matches its own.
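
You rarely assemble an index by hand: docker buildx builds the per-architecture images and the index, then pushes them in one step. A sketch with a placeholder registry and tag:

# One-time: create a builder that can target multiple platforms
docker buildx create --name multiarch --use

# Build amd64 and arm64 images plus the manifest list, and push all of them
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .

# Inspect the pushed manifest list and its per-architecture entries
docker buildx imagetools inspect registry.example.com/myapp:latest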

Multi-Cluster Kubernetes: Architecture, Networking, and Management Patterns

Multi-Cluster Kubernetes#

A single Kubernetes cluster is a single blast radius. A bad deployment, a control plane failure, a misconfigured admission webhook – any of these can take down everything. Multi-cluster is not about complexity for its own sake. It is about isolation, resilience, and operating workloads that span regions, regulations, or teams.

Why Multi-Cluster#

Blast radius isolation. A cluster-wide failure (etcd corruption, bad admission webhook, API server overload) only affects one cluster. Critical workloads in another cluster are untouched.