AKS Networking and Ingress Deep Dive#

AKS networking involves three layers: how pods communicate (CNI plugin), how traffic enters the cluster (load balancers and ingress controllers), and how the cluster connects to other Azure resources (VNet integration, private endpoints). Each layer has Azure-specific behavior that differs from generic Kubernetes.

Azure Load Balancer for Services#

When you create a Service of type LoadBalancer in AKS, Azure provisions a Standard SKU Azure Load Balancer. AKS manages the load balancer rules and health probes automatically.
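
A sketch of such a Service (the name, selector, and ports are placeholders); the annotation shown is optional and asks AKS for an internal load balancer on the VNet instead of a public one:

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Optional: provision an internal Azure Load Balancer instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080

Without the annotation, the Service gets a public frontend IP on the cluster's load balancer.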

cert-manager and external-dns: Automatic TLS and DNS on Kubernetes#

These two controllers solve the two most tedious parts of exposing services on Kubernetes: getting TLS certificates and creating DNS records. Together, they make a single Ingress resource enough: creating it automatically provisions a DNS record pointing at your cluster and a valid TLS certificate for the hostname.
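
As a sketch of what that looks like in practice (the hostname app.example.com, issuer letsencrypt-prod, ingress class nginx, and backend Service web are all placeholders for your own values):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # cert-manager watches this annotation and issues a certificate
    # into the Secret named under spec.tls
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: web-tls

cert-manager sees the annotation and issues a certificate into the web-tls Secret; external-dns (assuming it has access to your DNS zone) sees the host and creates a record pointing at the ingress controller's external address.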

cert-manager#

cert-manager watches for Certificate resources and Ingress annotations, then obtains and renews TLS certificates automatically.
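
cert-manager also accepts explicit Certificate resources when you are not driving it through Ingress annotations. A minimal sketch (the names, hostname, and issuer reference are placeholders):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: web-tls
spec:
  # Secret where the issued certificate and key are stored
  secretName: web-tls
  dnsNames:
    - app.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

cert-manager issues the certificate into the named Secret and renews it before expiry.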

Installation#

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

The crds.enabled=true flag installs the CRDs as part of the Helm release. Verify with kubectl get pods -n cert-manager – you should see cert-manager, cert-manager-cainjector, and cert-manager-webhook all Running.
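
With the controllers running, the usual next step is an issuer. A minimal sketch of a Let's Encrypt ClusterIssuer using the HTTP-01 solver (the issuer name, email, and ingress class are placeholders, and solver field names differ slightly between cert-manager versions):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint; use the staging endpoint while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      # Secret that stores the ACME account private key
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

Ingress resources annotated with cert-manager.io/cluster-issuer: letsencrypt-prod then get their certificates from this issuer.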

Cloud Behavioral Divergence Guide: Where AWS, Azure, and GCP Actually Differ#

Running the “same” workload on AWS, Azure, and GCP does not produce the same behavior. The Kubernetes API is portable, application containers are portable, and SQL queries are portable. Everything else – identity, networking, storage, load balancing, DNS, and managed service behavior – diverges in ways that matter for production reliability.

This guide documents the specific divergence points with practical examples. Use it when translating infrastructure from one cloud to another, when debugging behavior that differs between environments, or when assessing migration risk.

Kubernetes DNS Deep Dive: CoreDNS, ndots, and Debugging Resolution Failures#

DNS problems are responsible for a disproportionate number of Kubernetes debugging sessions. The symptoms are always vague – timeouts, connection refused, “could not resolve host” – and the root causes range from CoreDNS being down to a misunderstood setting called ndots.

How Pod DNS Resolution Works#

When a pod makes a DNS query, it goes through the following chain:

  1. The application calls getaddrinfo() or equivalent.
  2. The system resolver reads /etc/resolv.conf inside the pod.
  3. The query goes to the nameserver specified in resolv.conf, which is CoreDNS (reachable via the kube-dns Service in kube-system).
  4. CoreDNS resolves the name – either from its internal zone (for cluster services) or by forwarding to upstream DNS.

Every pod’s /etc/resolv.conf looks something like this:
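
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The exact values vary by cluster: the nameserver is your cluster's kube-dns Service ClusterIP (compare with kubectl get svc kube-dns -n kube-system), the first search domain contains the pod's own namespace (default here is only illustrative), and ndots:5 is the default that drives the surprising behavior around the ndots setting mentioned above.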

Kubernetes Service Types and DNS-Based Discovery#

Services are the stable networking abstraction in Kubernetes. Pods come and go, but a Service gives you a consistent DNS name and IP address that routes to the right set of pods. Choosing the wrong Service type or misunderstanding DNS discovery is behind a large percentage of connectivity failures.

Service Types#

ClusterIP (Default)#

ClusterIP creates an internal-only virtual IP. Only clients inside the cluster can reach it. This is what you want for internal communication between microservices.
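
A minimal sketch (the Service name orders and the selector label are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  # type: ClusterIP is the default and can be omitted
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080

Other pods reach it at orders.<namespace>.svc.cluster.local, or simply orders from within the same namespace.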

Scenario: Debugging Kubernetes Network Connectivity End-to-End#

The report comes in as it always does: “my application can’t reach another service.” This is one of the most common and most frustrating categories of Kubernetes issues because the networking stack has multiple layers, and the symptom (timeout, connection refused, 502) tells you almost nothing about which layer is broken.

This scenario walks through a systematic diagnostic process, starting from the symptom and narrowing down to the root cause. Follow these steps in order. Each step either identifies the problem or eliminates a layer from the investigation.
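
Before walking the layers, it often helps to reproduce the symptom from a throwaway pod, which separates "the network path is broken" from "the reporting application is misconfigured." A sketch, with the target Service name, namespace, and port as placeholders:

# Run a one-off busybox pod and fetch the target Service from inside the cluster
kubectl run netdebug --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- -T 5 http://my-service.my-namespace.svc.cluster.local:80/

If this succeeds, focus on the reporting application's own configuration; if it fails the same way, keep narrowing down the layers.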

Scenario: Migrating Workloads Between Kubernetes Clusters#

This scenario starts when someone says: “we need to move workloads from cluster A to cluster B.” The reasons vary – Kubernetes version upgrade, cloud provider migration, region change, architecture consolidation, or moving from self-managed to a managed service. The complexity ranges from trivial (stateless services with GitOps) to significant (stateful workloads with zero-downtime requirements).

The core risks in any cluster migration are data loss for stateful workloads and downtime during the traffic cutover. Every decision in this plan aims to minimize both.

DNS Deep Dive: Record Types, Resolution, Troubleshooting, and Cloud DNS Management#

How DNS Resolution Works#

When a client requests api.example.com, the resolution follows a chain of queries. The client asks its configured recursive resolver (often the ISP’s, or a public one like 8.8.8.8). The recursive resolver does the heavy lifting: it asks a root name server which servers handle .com, asks a .com TLD server which name servers are authoritative for example.com, and finally asks example.com’s authoritative name server, which returns the answer for api.example.com. Each level caches the result according to the record’s TTL, so subsequent requests short-circuit the chain.
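
You can watch this chain yourself with dig, which can bypass your recursive resolver and query each level in turn (the hostname is the example from above):

# Walk the delegation chain: root servers -> .com TLD -> example.com authoritative
dig +trace api.example.com

# Ask your configured resolver and note the remaining TTL in the answer
dig api.example.com A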

Minikube Networking: Services, Ingress, DNS, and LoadBalancer Emulation#

Minikube networking behaves differently from cloud Kubernetes in ways that cause confusion. LoadBalancer services do not get external IPs by default, the minikube IP may or may not be directly reachable from your host depending on the driver, and ingress requires specific addon setup. Understanding these differences prevents hours of debugging connection timeouts to services that are actually running fine.

How Minikube Networking Works#

Minikube creates a single node (a VM or container depending on the driver) with its own IP address. Pods inside the cluster get IPs from an internal CIDR. Services get ClusterIPs from another internal range. The bridge between your host machine and the cluster depends entirely on which driver you use.
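
A few commands make those moving pieces visible (the Service name web is a placeholder; adjust to your setup):

# The node's IP; directly reachable from the host only with some drivers
minikube ip

# Print a URL on the host that reaches the Service
minikube service web --url

# Run a privileged tunnel so LoadBalancer Services get a reachable external IP
minikube tunnel

# Install the NGINX ingress controller addon
minikube addons enable ingress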