Lightweight Kubernetes at the Edge with K3s

K3s is a production-grade Kubernetes distribution packaged as a single binary under 100 MB. It was built for environments where resources are constrained and operational simplicity matters: edge locations, IoT gateways, retail stores, factory floors, branch offices, and CI/CD pipelines where you need a real cluster but cannot justify the overhead of a full Kubernetes deployment.

K3s achieves its small footprint by replacing etcd with SQLite (by default), embedding containerd directly, removing in-tree cloud provider and storage plugins, and packaging everything into a single binary. Despite these changes, K3s is a fully conformant Kubernetes distribution – it passes the CNCF conformance tests and runs standard Kubernetes workloads without modification.
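As a rough sketch of how small the operational surface is, the standard install is a single script (per the K3s documentation); the agent join below assumes a server at a hypothetical address:

```shell
# Install a K3s server: one binary, registered as a systemd service.
curl -sfL https://get.k3s.io | sh -

# The embedded kubectl works immediately:
k3s kubectl get nodes

# Join an agent node, assuming the server is reachable at 10.0.0.10
# (hypothetical). The join token is written to
# /var/lib/rancher/k3s/server/node-token on the server:
curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.10:6443 K3S_TOKEN=<token> sh -
```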

Logging Patterns in Kubernetes

How Kubernetes Captures Logs#

Containers write to stdout and stderr. The container runtime (containerd, CRI-O) captures these streams and writes them to files on the node under /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/, with the kubelet maintaining symlinks in /var/log/containers/ and handling log rotation.

The format depends on the runtime. Containerd writes each entry as a timestamp, the stream name (stdout or stderr), a tag marking the entry as a full (F) or partial (P) line, and the log content:

2026-02-22T10:15:32.123456789Z stdout F {"level":"info","msg":"request handled","status":200}
2026-02-22T10:15:32.456789012Z stderr F error: connection refused to database

kubectl logs reads these files. It only works while the pod exists – once a pod is deleted, its log files are eventually cleaned up. This is why centralized log collection is essential.
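The per-line format above is easy to pick apart with standard tools, which is handy when reading the files directly on a node. A minimal sketch using the sample entry from above:

```shell
# One raw containerd log entry, as written under /var/log/pods/
# (sample line from above):
line='2026-02-22T10:15:32.123456789Z stdout F {"level":"info","msg":"request handled","status":200}'

# Field 1: timestamp, field 2: stream, field 3: F (full) or P (partial),
# everything from field 4 on: the application's original output.
echo "$line" | awk '{print $2}'    # stdout
echo "$line" | cut -d' ' -f4-      # the JSON log line itself
```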

Minikube to Cloud Migration: 10 Things That Change on EKS, GKE, and AKS

Minikube to Cloud Migration Guide#

Minikube is excellent for learning and local development. But almost everything that “just works” on minikube requires explicit configuration on a cloud cluster. Here are the 10 things that change.

1. Ingress Controller Becomes a Cloud Load Balancer#

On minikube: You enable the NGINX ingress addon with minikube addons enable ingress. Traffic reaches your services through minikube tunnel or minikube service.

On cloud: The ingress controller must be deployed explicitly, and it provisions a real cloud load balancer. On AWS, the AWS Load Balancer Controller creates ALBs or NLBs from Ingress resources. On GKE, the built-in GCE ingress controller creates Google Cloud Load Balancers. You pay per load balancer.
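To make the contrast concrete, here is a sketch of the two workflows. The EKS manifest assumes the AWS Load Balancer Controller is already installed and uses hypothetical names (web); annotation and field names are from the controller's documented API:

```shell
# On minikube: a built-in addon, no cloud cost.
minikube addons enable ingress

# On EKS: an Ingress that the AWS Load Balancer Controller turns into a
# real (billed) Application Load Balancer.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                                      # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                            # hypothetical service
            port:
              number: 80
EOF
```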

Monolith to Microservices: When and How to Decompose

Monolith to Microservices#

The decision to break a monolith into microservices is one of the most consequential architectural choices a team makes. Get it right and you unlock independent deployment, team autonomy, and targeted scaling. Get it wrong and you trade a manageable monolith for a distributed monolith – all the complexity of microservices with none of the benefits.

When to Stay with a Monolith#

Microservices are not an upgrade from monoliths. They are a different set of tradeoffs. A well-structured monolith is the right choice in many situations.

Multi-Tenancy Patterns: Namespace Isolation, vCluster, and Dedicated Clusters

Multi-tenancy in Kubernetes means running workloads for multiple teams, customers, or environments on shared infrastructure. The core tension is always the same: sharing reduces cost, but isolation prevents blast radius. Choosing the wrong model creates security gaps or wastes money. This guide provides a framework for selecting the right approach and implementing it correctly.

The Three Models#

Every Kubernetes multi-tenancy approach falls into one of three categories, each with different isolation guarantees:

Namespace Strategy and Multi-Tenancy: Isolation, Quotas, and Policies

Namespace Strategy and Multi-Tenancy#

Namespaces are the foundation for isolating workloads in a shared Kubernetes cluster. Without a deliberate strategy, teams deploy into arbitrary namespaces, resource usage is unbounded, and one misbehaving application can take down the entire cluster.

Why Namespaces Matter#

Namespaces provide four isolation boundaries:

  • RBAC scoping: Roles and RoleBindings are namespace-scoped, so you can grant teams access to their namespaces only.
  • Resource quotas: Limit CPU, memory, and object counts per namespace, preventing one team from starving others.
  • Network policies: Restrict traffic between namespaces so a compromised application cannot reach services it should not.
  • Organizational clarity: kubectl get pods -n payments-prod shows exactly what you expect, not a jumble of unrelated workloads.
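The first two boundaries can be sketched with standard resources. The namespace name reuses the payments-prod example from above; the quota values are illustrative assumptions, not recommendations:

```shell
# A team namespace (hypothetical name, matching the example above):
kubectl create namespace payments-prod

# Cap what the namespace can consume so one team cannot starve others.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: payments-prod
spec:
  hard:
    requests.cpu: "10"       # sum of pod CPU requests
    requests.memory: 20Gi
    limits.cpu: "20"         # sum of pod CPU limits
    limits.memory: 40Gi
    pods: "50"               # object-count cap
EOF
```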

System Namespaces#

These exist in every cluster and should be off-limits to application teams:

Network Policies: Namespace Isolation and Pod-to-Pod Rules

By default, every pod in a Kubernetes cluster can talk to every other pod. Network policies let you restrict that. They are namespace-scoped resources that select pods by label and define allowed ingress and egress rules.

Critical Prerequisite: CNI Support#

Network policies are only enforced if your CNI plugin supports them. Calico, Cilium, and Weave all support network policies. Flannel does not. If you are running Flannel, you can create NetworkPolicy resources without errors, but they will have absolutely no effect. This is a silent failure that wastes hours of debugging.
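The usual starting point is a default-deny policy per namespace, then explicit allows on top. A minimal sketch, using the hypothetical payments-prod namespace (and enforced only on a policy-capable CNI such as Calico or Cilium, per the caveat above):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments-prod   # hypothetical namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress rules listed, so all ingress is denied
EOF
```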

Network Security Layers

Defense in Depth#

No single network control stops every attack. Layer controls so that a failure in one does not compromise the system: host firewalls, Kubernetes network policies, service mesh encryption, API gateway authentication, and DNS security, each operating independently.

Host Firewall: iptables and nftables#

Every node should run a host firewall regardless of the orchestrator. Block everything by default:

# iptables: default deny with essential allows
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow established connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow SSH from management network only
iptables -A INPUT -p tcp --dport 22 -s 10.0.100.0/24 -j ACCEPT

# Allow kubelet API (for k8s nodes)
iptables -A INPUT -p tcp --dport 10250 -s 10.0.0.0/16 -j ACCEPT

# Allow loopback
iptables -A INPUT -i lo -j ACCEPT

The nftables equivalent is more readable for complex rulesets:
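A sketch of that equivalent ruleset, loaded via nft -f and mirroring the iptables rules above (requires root; the management and node CIDRs are the same assumptions as before):

```shell
nft -f - <<'EOF'
flush ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept            # established connections
    iif "lo" accept                                # loopback
    tcp dport 22 ip saddr 10.0.100.0/24 accept     # SSH from management network
    tcp dport 10250 ip saddr 10.0.0.0/16 accept    # kubelet API (k8s nodes)
  }
  chain forward { type filter hook forward priority 0; policy drop; }
  chain output  { type filter hook output  priority 0; policy accept; }
}
EOF
```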

Node Drain and Cordon: Safe Node Maintenance

Node Drain and Cordon#

Node maintenance is a routine part of cluster operations: kernel patches, instance type changes, Kubernetes upgrades, hardware replacement. The tools are kubectl cordon (stop scheduling new pods) and kubectl drain (evict existing pods). Getting the flags and sequence right is the difference between a seamless operation and a production incident.

Cordon: Mark Unschedulable#

Cordon sets the spec.unschedulable field on a node to true. The scheduler will not place new pods on it, but existing pods continue running undisturbed.
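The typical maintenance sequence looks like this (node-1 is a hypothetical node name; the flags are the standard kubectl drain options):

```shell
# Stop scheduling new pods; existing pods keep running.
kubectl cordon node-1
kubectl get node node-1    # STATUS shows Ready,SchedulingDisabled

# Evict existing pods. --ignore-daemonsets is required because DaemonSet
# pods would just be recreated on the same node; --delete-emptydir-data
# acknowledges that emptyDir volume contents are lost on eviction.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data --timeout=300s

# After maintenance, mark the node schedulable again:
kubectl uncordon node-1
```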

Observability Stack Troubleshooting: Diagnosing Prometheus, Alertmanager, Grafana, and Pipeline Failures

“I’m Not Seeing Metrics” – Systematic Diagnosis#

This is the most common observability complaint. Work through these steps in order to isolate where the pipeline breaks.

Step 1: Is the Target Being Scraped?#

Open the Prometheus UI at /targets. Search for the job name or target address. Look at three things: state (UP or DOWN), last scrape timestamp, and error message.

Status: UP    Last Scrape: 3s ago    Duration: 12ms    Error: (none)
Status: DOWN  Last Scrape: 15s ago   Duration: 0ms     Error: connection refused

If the target does not appear at all, Prometheus does not know about it. This means the scrape configuration (or ServiceMonitor) is not matching the target. Jump to the ServiceMonitor checklist at the end of this guide.
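The same target state is available from the Prometheus HTTP API, which is useful when you want to check scrape health from a script rather than the UI. A sketch assuming Prometheus is reachable at localhost:9090 and jq is installed:

```shell
# List each active target's job, health, and last scrape error (if any).
curl -s http://localhost:9090/api/v1/targets |
  jq -r '.data.activeTargets[] | "\(.labels.job)\t\(.health)\t\(.lastError)"'
```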