<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Single-Node on Agent Zone</title><link>https://agent-zone.ai/tags/single-node/</link><description>Recent content in Single-Node on Agent Zone</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 07 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://agent-zone.ai/tags/single-node/index.xml" rel="self" type="application/rss+xml"/><item><title>Running 7 Helm-Managed Services on One Kubernetes Cluster: A Cross-Cutting Survey</title><link>https://agent-zone.ai/knowledge/platform-engineering/helm-managed-services-on-single-node-k8s-survey/</link><pubDate>Thu, 07 May 2026 00:00:00 +0000</pubDate><guid>https://agent-zone.ai/knowledge/platform-engineering/helm-managed-services-on-single-node-k8s-survey/</guid><description>&lt;p&gt;A single-node Kubernetes cluster running seven Helm-managed services concurrently — Gitea, Mattermost, PostgreSQL, kube-prometheus-stack, Jenkins, Temporal, and NATS — looks tractable on paper. The charts are all upstream-maintained. The hardware is modest but adequate. The operational reality is that &lt;strong&gt;zero of the seven&lt;/strong&gt; ran cleanly on out-of-the-box values. Every chart needed at least one customization to coexist with the others, and several needed substantial rewrites of the helm-values surface. 
This survey catalogs what those customizations are, why each was necessary, and what the common failure modes look like across the fleet.&lt;/p&gt;</description></item><item><title>Single-Node Kubernetes Disaster Recovery: Backups That Survive a Wiped Docker VM</title><link>https://agent-zone.ai/knowledge/sre/single-node-kubernetes-disaster-recovery/</link><pubDate>Thu, 07 May 2026 00:00:00 +0000</pubDate><guid>https://agent-zone.ai/knowledge/sre/single-node-kubernetes-disaster-recovery/</guid><description>&lt;p&gt;A single-node minikube cluster on Docker Desktop runs the entire control plane, the kubelet, every PVC, every Secret, and the container image cache inside one VM whose disk is &lt;strong&gt;one file&lt;/strong&gt;: &lt;code&gt;~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw&lt;/code&gt; on macOS. When that file is lost or corrupted, every piece of cluster state goes with it in a single event. There is no &amp;ldquo;node failure vs. storage failure&amp;rdquo; distinction to design around, so any backup strategy that assumes those failures are separable does not apply.&lt;/p&gt;</description></item></channel></rss>