Prometheus and Grafana on Minikube: Production-Like Monitoring Without the Cost

Why Monitor a POC Cluster#

Monitoring on minikube serves two purposes. First, it catches resource problems early – your app might pass its tests but get OOM-killed under load, and without metrics you will not know. Second, it validates that your monitoring configuration works before you deploy it to production. If your ServiceMonitors, dashboards, and alert rules work on minikube, they will work on EKS or GKE.
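
As a concrete example, a memory-pressure alert you would want in production can be exercised on minikube first. The sketch below is a PrometheusRule for the Prometheus Operator; the metric names assume the cAdvisor and kube-state-metrics scrapes that kube-prometheus-stack sets up by default, and the threshold and release label are assumptions to adapt to your install.

# Hypothetical PrometheusRule: fires when a container's working set nears its memory limit.
# Metric names assume cAdvisor and kube-state-metrics are being scraped.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: memory-pressure
  labels:
    release: kube-prometheus-stack   # must match the operator's ruleSelector labels
spec:
  groups:
    - name: memory
      rules:
        - alert: ContainerNearMemoryLimit
          expr: |
            max by (namespace, pod, container) (container_memory_working_set_bytes{container!=""})
              /
            max by (namespace, pod, container) (kube_pod_container_resource_limits{resource="memory"})
              > 0.9
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.namespace }}/{{ $labels.pod }} is above 90% of its memory limit"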

The Right Chart: kube-prometheus-stack#

There are multiple Prometheus-related Helm charts. Use the right one:

Pipeline Observability: CI/CD Metrics, DORA, OpenTelemetry, and Grafana Dashboards

Pipeline Observability#

You cannot improve what you do not measure. Most teams have detailed monitoring for their production applications but treat their CI/CD pipelines as black boxes. When builds are slow, flaky, or failing, the response is anecdotal – “builds feel slow lately” – rather than data-driven. Pipeline observability turns CI/CD from a cost center you tolerate into infrastructure you actively manage.

Core CI/CD Metrics#

Build Duration#

Total time from pipeline trigger to completion. Track this as a histogram, not an average, because averages hide bimodal distributions. A pipeline that takes 5 minutes for code-only changes and 25 minutes for dependency updates averages 15 minutes, which describes neither case accurately.
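
If the pipeline exports duration as a Prometheus histogram, percentiles make the bimodal shape visible. Below is a sketch of recording rules assuming a hypothetical ci_pipeline_duration_seconds histogram labeled by pipeline; adapt the metric name and labels to whatever your CI exporter actually emits.

# Hypothetical recording rules over a ci_pipeline_duration_seconds histogram
groups:
  - name: ci-build-duration
    rules:
      - record: ci:pipeline_duration_seconds:p50
        expr: histogram_quantile(0.50, sum by (le, pipeline) (rate(ci_pipeline_duration_seconds_bucket[1h])))
      - record: ci:pipeline_duration_seconds:p95
        expr: histogram_quantile(0.95, sum by (le, pipeline) (rate(ci_pipeline_duration_seconds_bucket[1h])))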

Choosing a Monitoring Stack: Prometheus vs Datadog vs Cloud-Native vs VictoriaMetrics

Choosing a Monitoring Stack#

Monitoring is not optional. Without metrics, you are guessing. The question is not whether to monitor but which stack to use. The right choice depends on your cost tolerance, operational capacity, retention requirements, and how much you value control versus convenience.

Decision Criteria#

Before comparing tools, clarify what matters to your organization:

  • Cost model: Are you optimizing for infrastructure spend or engineering time? Self-managed tools cost less in licensing but more in operational hours. SaaS tools cost more in subscription fees but less in engineering effort.
  • Operational burden: Who manages the monitoring system? Do you have an infrastructure team, or are developers responsible for everything?
  • Data retention: Do you need metrics for 15 days, 90 days, or years? Long retention changes the equation significantly.
  • Query capability: Does your team know PromQL? Do they need ad-hoc analysis or mostly pre-built dashboards?
  • Alerting requirements: Simple threshold alerts, or complex multi-signal alerts with routing and escalation?
  • Team expertise: An organization fluent in Prometheus wastes that investment by switching to Datadog. An organization with no Prometheus experience faces a learning curve.

Options at a Glance#

| Capability | Prometheus + Grafana | Prometheus + Thanos/Mimir | VictoriaMetrics | Datadog | Cloud-Native | Grafana Cloud |
|---|---|---|---|---|---|---|
| Cost model | Infrastructure only | Infrastructure only | Infrastructure only | Per host ($15-23/mo) | Per metric/API call | Per series/GB |
| Operational burden | High | Very high | Medium | None | Low | Low |
| Query language | PromQL | PromQL | MetricsQL (PromQL-compatible) | Datadog query language | Vendor-specific | PromQL, LogQL |
| Default retention | 15 days (local disk) | Unlimited (object storage) | Unlimited (configurable) | 15 months | Varies (15 days - 15 months) | Plan-dependent |
| HA built-in | No (requires federation) | Yes | Yes (cluster mode) | Yes | Yes | Yes |
| Multi-cluster | Federation (limited) | Yes (global view) | Yes (cluster mode) | Yes | Per-account | Yes |
| APM/Tracing | No (separate tools) | No (separate tools) | No (separate tools) | Yes (integrated) | Varies | Yes (Tempo) |
| Vendor lock-in | None | None | Low | High | High | Low-Medium |

Prometheus + Grafana (Self-Managed)#

Prometheus is the de facto standard for Kubernetes metrics. It uses a pull-based model, scraping metrics from endpoints at configurable intervals, and stores time series data on local disk. Grafana provides visualization. Alertmanager handles alert routing.
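
Concretely, the pull model is just a list of scrape jobs in prometheus.yml. A minimal sketch follows, assuming an app that exposes /metrics on port 8080; the job name and target address are placeholders.

# prometheus.yml (minimal sketch; job name and target are placeholders)
global:
  scrape_interval: 30s        # how often every target is pulled unless overridden

scrape_configs:
  - job_name: my-app
    scrape_interval: 15s      # per-job override
    metrics_path: /metrics
    static_configs:
      - targets: ["my-app:8080"]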

Distributed Tracing in Practice

Trace, Span, and Context#

A trace represents a single request flowing through a distributed system. It is identified by a 128-bit trace ID. A span represents one unit of work within that trace – an HTTP handler, a database query, a message publish. Each span has a name, start time, duration, status, attributes (key-value pairs), and events (timestamped annotations). Spans form a tree: every span except the root has a parent span ID.
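
The data model is easier to see than to describe. The sketch below is a schematic rendering of two spans from one trace – the field names are illustrative, not any exporter's wire format. The child carries the same trace ID and points at its parent.

# Schematic only: two spans from the same trace, not an actual OTLP payload.
- name: "GET /checkout"              # root span: the HTTP handler
  trace_id: "4bf92f3577b34da6a3ce929d0e0e4736"
  span_id: "00f067aa0ba902b7"
  parent_span_id: null               # no parent marks it as the root
  start: "2024-05-01T12:00:00.000Z"
  duration_ms: 182
  status: OK
  attributes: {http.method: GET, http.route: /checkout}
- name: "SELECT orders"              # child span: the database query
  trace_id: "4bf92f3577b34da6a3ce929d0e0e4736"   # same trace ID as the root
  span_id: "53995c3f42cd8ad8"
  parent_span_id: "00f067aa0ba902b7"             # points at the root span
  start: "2024-05-01T12:00:00.050Z"
  duration_ms: 90
  status: OK
  events:
    - {time: "2024-05-01T12:00:00.060Z", name: "acquired db connection"}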

Docker Compose Validation Stacks: Templates for Multi-Service Testing

Docker Compose Validation Stacks#

Docker Compose validates multi-service architectures without Kubernetes overhead. It answers the question: do these services actually work together? Containers start, connect, and communicate – or they fail, giving you fast feedback before you push to a cluster.

This article provides complete Compose stacks for four common validation scenarios. Each includes the full docker-compose.yml, health check scripts, and teardown procedures. The pattern for using them is always the same: clone the template, customize for your services, bring it up, validate, capture results, bring it down.
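
The skeleton every stack shares looks like the sketch below; the images, ports, and health check command are placeholders to swap for your own services.

# docker-compose.yml (skeleton only; images, ports, and the healthcheck are placeholders)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10

  api:
    image: my-api:latest            # your service under test
    depends_on:
      db:
        condition: service_healthy  # only start once the database passes its health check
    ports:
      - "8080:8080"

Bring the stack up with docker compose up -d --wait, run your validation, then tear it down with docker compose down -v so state does not leak between runs.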

Grafana Dashboards for Kubernetes Monitoring

Data Source Configuration#

Grafana connects to backend data stores through data sources. For a complete Kubernetes observability stack, you need three: Prometheus for metrics, Loki for logs, and Tempo for traces.

Provision data sources declaratively so they survive Grafana restarts and are version-controlled:

# grafana/provisioning/datasources/observability.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus          # explicit uid so the datasourceUid references below resolve
    access: proxy
    url: http://prometheus-operated:9090
    isDefault: true
    jsonData:
      timeInterval: "15s"
      exemplarTraceIdDestinations:
        - name: traceID
          datasourceUid: tempo

  - name: Loki
    type: loki
    uid: loki
    access: proxy
    url: http://loki-gateway:3100
    jsonData:
      derivedFields:
        - name: TraceID
          matcherRegex: '"traceID":"(\w+)"'
          url: "$${__value.raw}"
          datasourceUid: tempo

  - name: Tempo
    type: tempo
    uid: tempo
    access: proxy
    url: http://tempo:3100
    jsonData:
      tracesToMetrics:
        datasourceUid: prometheus
        tags: [{key: "service.name", value: "job"}]
      serviceMap:
        datasourceUid: prometheus
      nodeGraph:
        enabled: true

The cross-linking configuration lets you click from a metric data point to the trace that generated it, and extract trace IDs from log lines to link to Tempo.

Grafana Loki for Log Aggregation

Loki Architecture#

Loki is a log aggregation system designed by Grafana Labs. Unlike Elasticsearch, Loki does not index log content. It indexes only metadata labels, then stores compressed log chunks in object storage. This makes it cheaper to operate and simpler to scale, at the cost of slower full-text search across massive datasets.

The core components are:

  • Distributor: Receives incoming log streams from agents, validates labels, and forwards to ingesters via consistent hashing.
  • Ingester: Buffers log data in memory, builds compressed chunks, and flushes them to long-term storage (S3, GCS, filesystem).
  • Querier: Executes LogQL queries by fetching chunk references from the index and reading chunk data from storage.
  • Compactor: Runs periodic compaction on the index (especially for boltdb-shipper) and handles retention enforcement by deleting old data.
  • Query Frontend (optional): Splits large queries into smaller ones, caches results, and distributes work across queriers.
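
A single-binary configuration touches most of these components in one file. The sketch below is orientation only, not a drop-in config: exact keys shift between Loki versions (newer releases add required fields in some sections), and the filesystem paths are placeholders.

# Single-binary Loki sketch (keys vary by version; orientation only)
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /var/loki
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory          # single instance; no external ring store needed
  storage:
    filesystem:                # ingesters flush chunks here; swap for s3/gcs in production
      chunks_directory: /var/loki/chunks
      rules_directory: /var/loki/rules

schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper    # the index store the compactor handles
      object_store: filesystem
      schema: v12
      index:
        prefix: index_
        period: 24h

compactor:
  working_directory: /var/loki/compactor
  retention_enabled: true      # compactor enforces retention_period from limits_config

limits_config:
  retention_period: 168h       # 7 days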

Deployment Modes#

Loki supports three deployment modes, each suited to different scales.

Grafana Mimir for Long-Term Prometheus Storage

Grafana Mimir for Long-Term Prometheus Storage#

Prometheus stores metrics on local disk with a practical retention limit of weeks to a few months. Beyond that, you need a long-term storage solution. Grafana Mimir is a horizontally scalable, multi-tenant time series database designed for exactly this purpose. It is API-compatible with Prometheus – Grafana queries Mimir using the same PromQL, and Prometheus pushes data to Mimir via remote_write.
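
On the Prometheus side, shipping data to Mimir is a single remote_write block. A sketch follows, assuming Mimir's write endpoint is reachable at a hypothetical in-cluster hostname and that multi-tenancy is enabled (the tenant ID travels in the X-Scope-OrgID header).

# Fragment of prometheus.yml (hostname and tenant are placeholders)
remote_write:
  - url: http://mimir-gateway.mimir.svc:80/api/v1/push   # Mimir's Prometheus-compatible push endpoint
    headers:
      X-Scope-OrgID: dev-cluster        # tenant ID; omit if multi-tenancy is disabled
    queue_config:
      max_samples_per_send: 2000        # batch size; tune for your volume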

Mimir is the successor to Cortex. Grafana Labs forked Cortex, rewrote significant portions for performance, and released Mimir under the AGPLv3 license. If you see references to Cortex architecture, the concepts map directly to Mimir with improvements.

Grafana Organization: Folders, Permissions, Provisioning, and Dashboard Lifecycle

Folder Structure Strategy#

Grafana folders organize dashboards and control access through permissions. The folder structure you choose determines how teams find dashboards and who can edit them. Three patterns work in practice, each suited to a different organizational shape.

By Team#

When teams own distinct services and rarely need cross-team dashboards:

Platform/
  Node Overview
  Kubernetes Cluster
  Networking
Backend/
  API Gateway
  User Service
  Payment Service
Frontend/
  Web Vitals
  CDN Performance
Data/
  Kafka Pipelines
  ETL Jobs
  Data Quality

Each team gets Editor access to their folder and Viewer access to everything else. This works well when ownership boundaries are clear.
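
The folder-per-team layout maps cleanly onto dashboard provisioning, with one provider per folder loading dashboards from a directory in version control. A sketch follows; the provider names and paths are placeholders, and folder permissions themselves are still set in the UI or via the HTTP API.

# grafana/provisioning/dashboards/teams.yml (provider names and paths are placeholders)
apiVersion: 1
providers:
  - name: platform-dashboards
    folder: Platform                  # created automatically if it does not exist
    type: file
    allowUiUpdates: false             # dashboards are owned by the repo, not the UI
    options:
      path: /var/lib/grafana/dashboards/platform

  - name: backend-dashboards
    folder: Backend
    type: file
    allowUiUpdates: false
    options:
      path: /var/lib/grafana/dashboards/backend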

Observability Stack Troubleshooting: Diagnosing Prometheus, Alertmanager, Grafana, and Pipeline Failures

“I’m Not Seeing Metrics” – Systematic Diagnosis#

This is the most common observability complaint. Work through these steps in order to isolate where the pipeline breaks.

Step 1: Is the Target Being Scraped?#

Open the Prometheus UI at /targets. Search for the job name or target address. Look at three things: state (UP or DOWN), last scrape timestamp, and error message.

Status: UP    Last Scrape: 3s ago    Duration: 12ms    Error: (none)
Status: DOWN  Last Scrape: 15s ago   Duration: 0ms     Error: connection refused

If the target does not appear at all, Prometheus does not know about it. This means the scrape configuration (or ServiceMonitor) is not matching the target. Jump to the ServiceMonitor checklist at the end of this guide.
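
The usual culprit is a label or port-name mismatch between the Service and the ServiceMonitor. The sketch below shows a hypothetical pair side by side; the names and port are placeholders. The selector and the port name must line up, and the ServiceMonitor's own labels must match the Prometheus serviceMonitorSelector (for kube-prometheus-stack, typically release: <helm-release-name>).

# Hypothetical Service + ServiceMonitor pair; the commented lines must agree.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: demo
  labels:
    app: my-app                  # matched by the ServiceMonitor's selector
spec:
  selector:
    app: my-app
  ports:
    - name: http-metrics         # matched by the ServiceMonitor's endpoint port
      port: 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: demo
  labels:
    release: kube-prometheus-stack   # must match Prometheus's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: http-metrics
      interval: 30s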