Nginx Ingress Deployment - AKS 101

Let's deploy an Nginx Ingress and configure a quick connectivity check.

Why You Need an Ingress Controller

When you deploy workloads to Kubernetes, each service typically runs inside a private cluster network. To expose your applications to the outside world, you need a way to route HTTP/HTTPS traffic into your cluster — and that’s where an Ingress Controller comes in.

An ingress controller acts as a smart traffic manager:

  • It listens for requests coming into your cluster.
  • It routes them based on rules you define in Kubernetes Ingress resources.
  • It handles SSL termination, path-based routing, and even load balancing.

Without one, you’d have to expose each service individually via LoadBalancer or NodePort, which quickly becomes messy, expensive, and hard to manage.

Why NGINX?

There are many ingress controllers available, from cloud-native options like Azure Application Gateway and Traefik to enterprise solutions like HAProxy and Kong. For the full list, see the Kubernetes documentation.

However, NGINX Ingress Controller remains the most popular starting point because:

  • It’s simple, open-source, and well-documented.
  • It works seamlessly on any Kubernetes cluster.

Preflight Steps

Before starting, I’m going to assume a couple of things (crazy of me, right?!). Firstly, that you’ve got some common packages already installed on your local machine, the likes of:

  • Microsoft.AzureCLI
  • Microsoft.Azure.Kubelogin
  • Kubernetes.kubectl
  • Helm.Helm
  • WSL (if you want that Linux feel)

Windows Install

If you’re more Windows-focused, we can use WinGet to get you set up if you’re missing those packages.

WinGet CLI Install

If you’re missing the WinGet CLI from your system, here you go 🫡

$progressPreference = 'silentlyContinue'
Write-Host "Installing WinGet PowerShell module from PSGallery..."
Install-PackageProvider -Name NuGet -Force | Out-Null
Install-Module -Name Microsoft.WinGet.Client -Force -Repository PSGallery | Out-Null
Write-Host "Using Repair-WinGetPackageManager cmdlet to bootstrap WinGet..."
Repair-WinGetPackageManager -AllUsers -Force
Write-Host "Done."

# Install the required CLI tools from the WinGet repository
$appList = @('Microsoft.AzureCLI','Microsoft.Azure.Kubelogin','Kubernetes.kubectl','Helm.Helm')
foreach ($app in $appList) {
    Write-Output "Installing: $app"
    winget install --source winget --id $app
}
Important

Please close and reopen your terminal to refresh the environment variables.

After the relaunch, authenticate and log into your Azure Kubernetes cluster.
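If you need a refresher, a typical login flow looks like this (a sketch; myResourceGroup and myAKSCluster are placeholder names, swap in your own):

```shell
# Sign in to Azure (opens a browser for authentication)
az login

# Merge the cluster credentials into your kubeconfig
# (myResourceGroup and myAKSCluster are placeholders)
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# For Microsoft Entra ID-enabled clusters, let kubelogin handle auth
kubelogin convert-kubeconfig -l azurecli

# Sanity check: list the cluster nodes
kubectl get nodes
```

If `kubectl get nodes` lists your nodes in a Ready state, you’re connected and good to go.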

Helm Chart Installation

For more information, you can check the official docs over at ingress-nginx.

Internal Load Balancer

helm upgrade --install ingress-nginx ingress-nginx `
  --repo https://kubernetes.github.io/ingress-nginx `
  --namespace ingress-nginx --create-namespace `
  --set controller.replicaCount=2 `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true" `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"="/healthz" `
  --set controller.config."health-check-path"="/healthz"

External (Public) Load Balancer

helm upgrade --install ingress-nginx ingress-nginx `
  --repo https://kubernetes.github.io/ingress-nginx `
  --namespace ingress-nginx --create-namespace `
  --set controller.replicaCount=2 `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="false" `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"="/healthz" `
  --set controller.config."health-check-path"="/healthz"

Helm Command Breakdown:

  • helm upgrade --install ingress-nginx ingress-nginx: installs the NGINX Ingress Controller if it doesn’t exist, or upgrades it if it already does. Reusing the release name ingress-nginx makes the command idempotent for redeployments.
  • --repo https://kubernetes.github.io/ingress-nginx: points Helm at the official NGINX Ingress Controller chart repository, so you’re always using the maintained upstream chart from the Kubernetes project.
  • --namespace ingress-nginx: installs the controller into its own namespace, keeping ingress components isolated from workloads and easier to manage.
  • --create-namespace: creates the namespace automatically if it doesn’t exist, saving a separate kubectl create namespace ingress-nginx command.
  • --set controller.replicaCount=2: runs two replicas of the ingress controller for high availability; one pod can restart without downtime.
  • --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true": tells Azure to create an internal Load Balancer instead of a public one, keeping ingress traffic private within your virtual network (set to "false" for the public variant above).
  • --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"="/healthz": sets the HTTP path the Azure Load Balancer probes to verify that ingress pods are healthy and ready.
  • --set controller.config."health-check-path"="/healthz": configures the NGINX Ingress Controller to expose /healthz, so the probe path actually exists inside NGINX and returns 200 OK.
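If you’d rather not juggle backslash-escaped --set flags, the same settings can live in a values file (a sketch mirroring the internal variant above; save it as values-internal.yaml and pass it with -f):

```yaml
# values-internal.yaml - equivalent to the --set flags above
controller:
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
  config:
    health-check-path: /healthz
```

Then install with: helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace -f values-internal.yaml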

Why This Matters

This configuration ensures:

  • The Ingress Controller is deployed privately within your AKS virtual network, keeping traffic internal and secure.
  • Azure Load Balancer can correctly detect pod health using the /healthz endpoint, allowing for automatic failover and reliable traffic distribution.
  • The setup is production-ready with redundancy (2 replicas) and proper health probes.
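You can also spot-check the /healthz endpoint yourself with a port-forward (a quick sketch; assumes the ingress-nginx-controller Service name created by the chart above):

```shell
# Forward a local port to the ingress controller's HTTP port
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80 &

# The configured health-check-path should answer 200 OK
curl -i http://localhost:8080/healthz

# Tidy up the background port-forward
kill %1
```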

Running two replicas of the ingress controller is important because it provides:

  • High Availability: If one pod fails, upgrades, or is rescheduled, the second replica continues handling traffic — preventing downtime.
  • Rolling Updates Without Disruption: During Helm upgrades or AKS node maintenance, at least one controller remains online, maintaining active connections.
  • Load Distribution: With multiple replicas, incoming requests are balanced across pods, improving throughput and reducing latency spikes.
  • Resilience to Node Failure: If a node hosting one replica becomes unavailable, the other replica on a different node ensures ingress continuity.

Connectivity Check

If you want to check that the Nginx Ingress pods are working and reporting, check the Load Balancer Health Probe overview. If everything is configured correctly, you should see green health indicators on the backend pools.

Finally, we can check the services (svc) within the AKS cluster:

kubectl get svc -n ingress-nginx

Example Nginx Deployment

So we have a working ingress controller 🥳. Now let’s deploy a basic Nginx backend and configure connectivity over port 80 (HTTP) for now. Don’t worry, TLS is coming in another post soon 😉

kubectl create namespace mynginxapp
kubectl create deployment nginx-web-dev --image=nginx --port=80 --replicas=2 --namespace mynginxapp

Once this has deployed, we can check the namespace to see the two Nginx web servers.

kubectl get pods -n mynginxapp

Service

What is a Service?

A Kubernetes Service provides a stable network endpoint for a group of Pods, ensuring reliable communication even as Pods are created or replaced. It automatically routes traffic to healthy Pods using label selectors and can expose applications internally or externally depending on its type. In essence, a Service decouples dynamic Pods from how clients connect to them. Learn more in the official Kubernetes documentation.

apiVersion: v1
kind: Service
metadata:
  name: nginx-http
  namespace: mynginxapp
spec:
  selector:
    app: nginx-web-dev
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

Save this as svc-mynginxapp.yaml

kubectl apply -f svc-mynginxapp.yaml

Give it 30 seconds to apply and configure, then we can check with:

kubectl get svc -n mynginxapp
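Since nginx-http is a ClusterIP Service, it’s only reachable from inside the cluster. One way to test it is with a throwaway curl pod (a sketch using the curlimages/curl image):

```shell
# One-shot pod that curls the Service by its in-cluster DNS name,
# then deletes itself (--rm)
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -n mynginxapp -- \
  curl -s http://nginx-http.mynginxapp.svc.cluster.local
```

You should see the default Nginx welcome page HTML in the output.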

Ingress

What is an Ingress?

A Kubernetes Ingress is a resource that manages external access to applications running inside a cluster, typically over HTTP or HTTPS. It acts as a smart entry point, routing incoming traffic to the appropriate Service based on defined rules such as hostnames and paths. With an Ingress Controller (like NGINX or Traefik), you can handle features like TLS termination, path-based routing, and load balancing without exposing every Service individually. Learn more in the official Kubernetes documentation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: mynginxapp
spec:
  ingressClassName: nginx
  rules:
    - host: nginx-demo.az.builtwithcaffeine.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-http
                port:
                  number: 80

Save this as ingress-mynginxapp.yaml

Info

READER NOTICE
You’ll need to update the host field to a DNS name that resolves to your ingress controller; NGINX routes by host name, so otherwise you’ll get a 404 page. If you need the public IP from the ingress controller, you can run:
kubectl get svc ingress-nginx-controller -n ingress-nginx

kubectl apply -f ingress-mynginxapp.yaml

Once deployed, we can check that the ingress is online and healthy.

kubectl get ingress -n mynginxapp

Nginx Connectivity Check
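If your DNS record isn’t in place yet, you can still exercise the full ingress path by sending the Host header straight to the controller’s public IP (replace <EXTERNAL-IP> with the address from the first command, and the host with your own):

```shell
# Fetch the ingress controller's external IP
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Hit the ingress with the expected Host header - the default
# Nginx welcome page means host-based routing is working
curl -H "Host: nginx-demo.az.builtwithcaffeine.cloud" http://<EXTERNAL-IP>/
```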

Wrapping Up

With that, you’ve successfully deployed an NGINX Ingress Controller on Azure Kubernetes Service (AKS) and configured a working HTTP-based demo app — complete with a Service and Ingress route.

You’ve just covered some key building blocks of Kubernetes networking:

  • Ingress Controllers for routing external traffic into your cluster
  • Services for stable internal connectivity
  • Ingress Rules for host and path-based routing

This setup not only helps you understand how traffic flows within Kubernetes but also lays the foundation for production-grade networking patterns — where TLS, DNS automation, and certificate management come next.

In the next post, we’ll take this a step further by integrating Cert-Manager and Let’s Encrypt to enable HTTPS/TLS termination and secure your endpoints automatically.

Tip

Connectivity Check Tip!
If everything’s green and reachable — congratulations!
You’ve built a fully functional ingress path into your AKS cluster. 🎉

Share with your network!
