Azure Load Balancer is not accepting external connections to the Nginx service, even though its health probes report the backend nodes as healthy.

Gav Sturm, 2025-04-25

Hello,

I'm encountering a persistent issue with external connectivity to an Nginx Ingress controller service exposed via a standard Azure Load Balancer on my AKS cluster.

The Problem:

External connection attempts (e.g., curl) to the Load Balancer's public IP on port 80 consistently time out before establishing a TCP connection. This happens from multiple external networks (tested locally and from AWS Lambda).

The Puzzle:

This timeout occurs despite the following configurations and checks appearing correct (the commands used are sketched after the list):

  1. Azure Load Balancer Rules: The LB (named kubernetes, in the node resource group) has rules correctly mapping Frontend Port 80 -> Backend Port 31965 and Frontend Port 443 -> Backend Port 30429. (These are the NodePorts assigned to the ingress-nginx/ingress-nginx-controller service.)
  2. Azure Load Balancer Health Probes: Configured TCP probes targeting the backend NodePorts (31965 and 30429) are consistently reporting as Healthy in the Azure Portal.
  3. NSG Rules: The associated NSG has rules allowing inbound traffic from Internet to port 80/443 (via AKS default rules) and explicitly allows AzureLoadBalancer source to the NodePorts (31965, 30429).
  4. Internal Connectivity: Tests performed from within the AKS cluster, using kubectl exec into a pod and curling the node's private IP directly on the NodePorts (curl -k https://), connect successfully and receive the expected 404 response from the Nginx pod.
  5. Kubernetes State: The Nginx Ingress controller pod (ingress-nginx namespace) is Running/Ready. kube-proxy pods (kube-system namespace) are Running/Ready and logs show successful rule syncing. The relevant node is Ready. Node reboots and pod restarts have been attempted.
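
For reference, the checks above were performed with commands along these lines (a sketch; MC_myRG_myAKS_eastus stands in for the node resource group, and the LB/service/namespace names are the ones mentioned above):

    # Load balancer rules and health probes on the AKS-managed LB
    az network lb rule list -g MC_myRG_myAKS_eastus --lb-name kubernetes -o table
    az network lb probe list -g MC_myRG_myAKS_eastus --lb-name kubernetes -o table

    # NodePorts assigned to the ingress controller service
    kubectl get svc ingress-nginx-controller -n ingress-nginx

    # Internal test: curl a node's private IP on the HTTPS NodePort from inside the cluster
    kubectl exec -it <any-pod> -- curl -k -m 5 https://<node-private-ip>:30429/

    # External test: curl the LB public IP (this is the step that times out)
    curl -v -m 10 http://<lb-public-ip>/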

Summary: Everything up to the Azure Load Balancer seems functional. Internal routing via NodePorts works, and the Load Balancer itself reports the backend nodes as healthy via TCP probes to those NodePorts. However, external traffic directed to the Load Balancer's public IP fails to connect.

Question:

Has anyone encountered a similar situation where an Azure Load Balancer fails to accept external connections despite having healthy probes and seemingly correct LB/NSG rules? Are there any other diagnostic steps recommended for AKS LB connectivity issues, particularly when standard checks pass and Azure support is not available? Could this point to an underlying Azure platform issue with the LB instance or network path?

Any insights or suggestions would be greatly appreciated!


Accepted answer
  Arko, Microsoft External Staff
    2025-04-28

    Hello Gav Sturm,

    Even though the Azure Load Balancer health probes report "Healthy" and NodePort traffic works internally inside AKS, external connectivity (curl to the Load Balancer public IP) times out because no frontend rule properly links the Load Balancer public IP to the backend NodePorts. This typically happens when you use a Service of type NodePort without creating an AKS-managed LoadBalancer-type Service.

    Why?

    Ans- With a standard SKU Azure Load Balancer in AKS, health probes only check TCP connectivity to the backend NodePorts; a passing probe does not guarantee that a frontend rule is mapping external traffic correctly. When you expose nginx using a NodePort Service, Azure does not automatically create a frontend listener rule for ports 80/443 on the Load Balancer, so external packets hitting the Load Balancer public IP time out even though the backend NodePort service works fine internally. NSG rules alone are not enough; the Load Balancer needs frontend rules and probes bound correctly. This behavior is by design: NodePort Services in AKS require manual Load Balancer configuration unless you use a Service of type LoadBalancer.
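
    To see this failure mode directly, something like the following should work (a sketch; the service name assumes the standard ingress-nginx install, and the resource group is a placeholder):

    # TYPE shows NodePort vs LoadBalancer; EXTERNAL-IP stays <none> for a
    # NodePort service, since no LB frontend was provisioned for it
    kubectl get svc ingress-nginx-controller -n ingress-nginx

    # Inspect the frontend configuration actually present on the LB
    az network lb frontend-ip list -g MC_myRG_myAKS_eastus --lb-name kubernetes -o table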

    I verified this on my end by creating a cluster with a standard load balancer, deploying the nginx ingress controller with a Service of type NodePort exposing ports 32080 (HTTP) and 32443 (HTTPS), and I observed the same error as yours.

    How to fix it?

    Ans- Delete the NodePort service and recreate the nginx service with type: LoadBalancer instead of NodePort, which automatically configures the Azure Load Balancer frontend rules, backend pools, and health probes.


    
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer          # AKS wires up the Azure LB frontend, rules, and probes
      selector:
        app: ingress-nginx        # must match the labels on your ingress controller pods
      ports:
      - name: http
        port: 80                  # frontend port on the Load Balancer
        targetPort: 80            # container port on the controller pod
      - name: https
        port: 443
        targetPort: 443


    You won't need to manually configure frontend rules when using Service type LoadBalancer as AKS automatically handles the Azure Load Balancer wiring for you.
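
    Applying it looks roughly like this (a sketch; the manifest filename is a placeholder):

    # Remove the old NodePort service, then create the LoadBalancer service
    kubectl delete svc ingress-nginx-controller -n ingress-nginx
    kubectl apply -f ingress-nginx-lb-service.yaml

    # Watch until EXTERNAL-IP changes from <pending> to a public IP
    kubectl get svc ingress-nginx-controller -n ingress-nginx -w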

    After applying this, a new external IP will be assigned to the service, the Load Balancer frontend rules and health probes will be created properly, and external access on port 80 will start working immediately.
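
    Once the EXTERNAL-IP shows up, the original failing test can be repeated from any outside network (the IP is a placeholder):

    # Should now return nginx's default 404 instead of timing out
    curl -v http://<external-ip>/
    curl -vk https://<external-ip>/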



