AKS Networking and Ingress Deep Dive

AKS Networking and Ingress#

AKS networking involves three layers: how pods communicate (CNI plugin), how traffic enters the cluster (load balancers and ingress controllers), and how the cluster connects to other Azure resources (VNet integration, private endpoints). Each layer has Azure-specific behavior that differs from generic Kubernetes.

Azure Load Balancer for Services#

When you create a Service of type LoadBalancer in AKS, Azure provisions a Standard SKU Azure Load Balancer. AKS manages the load balancer rules and health probes automatically.
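A Service manifest for this needs nothing Azure-specific. As a sketch (the service name, ports, and selector are placeholders, not from the original), the commented annotation is the documented way to request an internal load balancer on the VNet instead of a public frontend:

apiVersion: v1
kind: Service
metadata:
  name: myapp-web
  annotations:
    # Uncomment for an internal (VNet-only) load balancer instead of a public IP
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp-web
  ports:
    - port: 80
      targetPort: 8080

kubectl get service myapp-web shows <pending> under EXTERNAL-IP until Azure finishes creating the frontend IP, load-balancing rule, and health probe.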

AKS Setup and Configuration: Clusters, Node Pools, and Networking

AKS Setup and Configuration#

Azure Kubernetes Service manages the control plane for you – on the Free tier there is no charge for it, and the Standard tier adds a modest per-cluster fee in exchange for an uptime SLA. What you configure is node pools, networking, identity, and add-ons. Getting these right at cluster creation matters because several choices (networking model, managed identity) cannot be changed later without rebuilding the cluster.

Creating a Cluster with az CLI#

The minimal command that produces a production-usable cluster:

az aks create \
  --resource-group myapp-rg \
  --name myapp-aks \
  --location eastus2 \
  --node-count 3 \
  --node-vm-size Standard_D4s_v5 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --vnet-subnet-id /subscriptions/<sub>/resourceGroups/myapp-rg/providers/Microsoft.Network/virtualNetworks/myapp-vnet/subnets/aks-subnet \
  --enable-managed-identity \
  --enable-aad \
  --aad-admin-group-object-ids <admin-group-id> \
  --generate-ssh-keys \
  --tier standard

Key flags: --network-plugin azure with --network-plugin-mode overlay gives you Azure CNI Overlay, which avoids the IP-exhaustion problems of classic Azure CNI. --tier standard enables the financially backed uptime SLA (the Free tier has none). --enable-aad integrates Microsoft Entra ID (formerly Azure AD) for authentication.
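Once the cluster exists, pulling credentials works as usual; because Entra ID integration is enabled, kubectl prompts for a sign-in on first use. A sketch, assuming the resource names from the create command above and the separately installed kubelogin plugin:

# Merge cluster credentials into your kubeconfig
az aks get-credentials --resource-group myapp-rg --name myapp-aks

# Optional: for CI or other non-interactive use, switch token acquisition to the Azure CLI login
kubelogin convert-kubeconfig -l azurecli

kubectl get nodes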

AKS Troubleshooting: Diagnosing Common Azure Kubernetes Problems

AKS Troubleshooting#

AKS problems fall into categories: node pool operations stuck or failed, pods not scheduling, storage not provisioning, authentication broken, and ingress not working. Each has Azure-specific causes that generic Kubernetes debugging will not surface.

Node Pool Stuck in Updating or Failed#

Node pool operations (scaling, upgrading, changing settings) can get stuck. The AKS API reports the pool as “Updating” indefinitely or transitions to “Failed.”

# Check node pool provisioning state
az aks nodepool show \
  --resource-group myapp-rg \
  --cluster-name myapp-aks \
  --name workload \
  --query provisioningState

# Check the activity log for errors
az monitor activity-log list \
  --resource-group myapp-rg \
  --query "[?contains(operationName.value, 'Microsoft.ContainerService')].{op:operationName.value, status:status.value, msg:properties.statusMessage}" \
  --output table

Common causes and fixes: exhausted compute (vCPU) quota in the region – request a quota increase and retry the operation; Pod Disruption Budgets that block node drains during an upgrade – temporarily relax them; and subnet IP exhaustion – free addresses or move the pool to a larger subnet.

cert-manager and external-dns: Automatic TLS and DNS on Kubernetes

cert-manager and external-dns#

These two controllers solve the two most tedious parts of exposing services on Kubernetes: getting TLS certificates and creating DNS records. Together, they make it so that creating an Ingress resource automatically provisions a DNS record pointing to your cluster and a valid TLS certificate for the hostname.
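To make that concrete, here is a sketch of the end state (the hostname, issuer name, and backend Service are placeholders): cert-manager reacts to the annotation and issues a certificate into the named secret, and external-dns creates a DNS record for the host in the rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # handled by cert-manager
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # external-dns publishes a record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: myapp-tls        # cert-manager writes the certificate here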

cert-manager#

cert-manager watches for Certificate resources and Ingress annotations, then obtains and renews TLS certificates automatically.

Installation#

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

The crds.enabled=true flag installs the CRDs as part of the Helm release. Verify with kubectl get pods -n cert-manager – you should see cert-manager, cert-manager-cainjector, and cert-manager-webhook all Running.
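The controllers cannot issue anything until an issuer exists. A minimal ClusterIssuer sketch for Let's Encrypt with the HTTP-01 solver (the email address and the nginx ingress class are assumptions to replace with your own):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                 # replace with a monitored address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # ACME account key stored here
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx        # assumes an nginx ingress controller

Referencing letsencrypt-prod from a cert-manager.io/cluster-issuer annotation on an Ingress (as in the example earlier) is then enough to get a certificate issued and renewed.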

Choosing a CNI Plugin: Calico vs Cilium vs Flannel vs Cloud-Native CNI

Choosing a CNI Plugin#

The Container Network Interface (CNI) plugin is one of the most consequential infrastructure decisions in a Kubernetes cluster. It determines how pods get IP addresses, how traffic flows between them, whether network policies are enforced, and what observability you get into network behavior. Changing CNI after deployment is painful – it typically requires draining and rebuilding nodes, or rebuilding the cluster entirely. Choose carefully up front.

Choosing an Autoscaling Strategy: HPA vs VPA vs KEDA vs Karpenter/Cluster Autoscaler

Choosing an Autoscaling Strategy#

Kubernetes autoscaling operates at two distinct layers: pod-level scaling changes how many pods run or how large they are, while node-level scaling changes how many nodes exist in the cluster to host those pods. Getting the right combination of tools at each layer is the key to a system that responds to demand without wasting resources.

The Two Scaling Layers#

Understanding which layer a tool operates on prevents the most common misconfiguration – expecting pod-level scaling to solve node-level capacity problems, or vice versa.

Choosing an Ingress Controller: Nginx vs Traefik vs HAProxy vs Cloud ALB/NLB

Choosing an Ingress Controller#

An Ingress controller is the component that actually routes external traffic into your cluster. The Ingress resource (or Gateway API resource) defines the rules – which hostnames and paths map to which backend Services – but without a controller watching those resources and configuring a reverse proxy, nothing happens. The choice of controller affects performance, configuration ergonomics, TLS management, protocol support, and operational cost.

Unlike CNI plugins, ingress controllers can coexist: running more than one in the same cluster is a common pattern for separating internal and external traffic. This reduces the stakes of any single choice, but your primary controller still deserves careful selection.
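The mechanism that keeps multiple controllers from fighting over the same resources is spec.ingressClassName. As a sketch (the class name, hostname, and backend are illustrative), an internal-only route simply names the internal controller's class:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ui
spec:
  ingressClassName: nginx-internal   # reconciled only by the controller owning this class
  rules:
    - host: admin.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-ui
                port:
                  number: 80

A second Ingress with ingressClassName: nginx-external would be picked up by the public-facing controller instead; each controller ignores classes it does not own.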

Choosing Kubernetes Workload Types: Deployment vs StatefulSet vs DaemonSet vs Job

Choosing Kubernetes Workload Types#

Kubernetes provides several workload controllers, each designed for a specific class of application behavior. Choosing the wrong one leads to data loss, unnecessary complexity, or workloads that fight the platform instead of leveraging it. This guide walks through the decision criteria and tradeoffs for each type.

The Workload Types at a Glance#

| Workload Type | Lifecycle | Pod Identity | Scaling Model | Storage Model | Typical Use |
|---|---|---|---|---|---|
| Deployment | Long-running | Interchangeable | Horizontal replicas | Shared or none | Web servers, APIs, stateless microservices |
| StatefulSet | Long-running | Stable, ordered | Ordered horizontal | Per-pod persistent | Databases, message queues, distributed consensus |
| DaemonSet | Long-running | One per node | Tied to node count | Node-local | Log collectors, monitoring agents, network plugins |
| Job | Run to completion | Disposable | Parallel completions | Ephemeral | Batch processing, migrations, one-time tasks |
| CronJob | Scheduled | Disposable | Per-schedule run | Ephemeral | Periodic backups, cleanup, scheduled reports |
| ReplicaSet | Long-running | Interchangeable | Horizontal replicas | Shared or none | Almost never used directly |
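The storage column is where the Deployment/StatefulSet split matters most. As a sketch (names, image, and sizes are placeholders), a StatefulSet declares volumeClaimTemplates so each replica gets its own PersistentVolumeClaim and keeps it across restarts:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

The pods are named postgres-0 through postgres-2 and reattach to the same claim after rescheduling – the "stable, ordered" identity in the table. A Deployment has no equivalent; its replicas either share a volume or run without one.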

Decision Criteria#

The choice comes down to four questions:

Cluster Autoscaling: HPA, Cluster Autoscaler, and KEDA

Cluster Autoscaling#

Kubernetes autoscaling operates at two levels: pod-level (HPA adds or removes pod replicas) and node-level (Cluster Autoscaler adds or removes nodes). Getting them to work together requires understanding how each makes decisions.

Horizontal Pod Autoscaler (HPA)#

HPA adjusts the replica count of a Deployment, StatefulSet, or ReplicaSet based on observed metrics. The metrics-server must be running in your cluster for CPU and memory metrics.

Basic HPA on CPU#

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

This scales my-app between 2 and 10 replicas, targeting 70% average CPU utilization across all pods. The HPA checks metrics every 15 seconds (default) and computes the desired replica count as:

desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)
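For example, if 4 replicas are averaging 90% CPU against the 70% target, the desired count is ceil(4 × 90 / 70) = 6, so the HPA adds two replicas. Scale-down uses the same formula but is smoothed by a stabilization window (300 seconds by default) to avoid flapping.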

ConfigMaps and Secrets: Configuration Management in Kubernetes

ConfigMaps and Secrets#

ConfigMaps hold non-sensitive configuration data. Secrets hold sensitive data like passwords, tokens, and TLS certificates. They look similar in structure but differ in handling: Secret values are base64-encoded (an encoding, not encryption), access to them is slightly more restricted by default, and they can be encrypted at rest if the cluster is configured for it.

Creating ConfigMaps#

From a literal value:

kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_CONNECTIONS=100

From a file:

kubectl create configmap nginx-config --from-file=nginx.conf

The key name defaults to the filename. Override it with --from-file=custom-key=nginx.conf.
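A sketch of consuming both objects from a Pod (the pod name, image, and mount path are placeholders): envFrom turns every key in app-config into an environment variable, while the volume exposes nginx-config as files:

apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox:1.36                  # placeholder image
      command: ["sh", "-c", "env; ls /config; sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config               # LOG_LEVEL and MAX_CONNECTIONS become env vars
      volumeMounts:
        - name: nginx-conf
          mountPath: /config               # nginx.conf appears as /config/nginx.conf
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-config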