
Network Load Balancer With External DNS

You can use the external-dns.alpha.kubernetes.io/hostname annotation to let ExternalDNS manage the Route 53 record for the Kubernetes service.


To follow this tutorial, you'll need a domain and an SSL certificate for the domain and its subdomains.

  1. Register a Route 53 Domain

    Go to the AWS Console and register a Route 53 domain. You can opt for a cheaper TLD (top-level domain) such as .link


    It usually takes about 10 minutes, but it can take up to an hour for the registered domain to become available.

  2. Request a Public Certificate

    Visit AWS Certificate Manager in the AWS Console and request a public certificate for your domain and all its subdomains. For example, if you registered example.com, request a certificate for example.com and *.example.com


    Make sure you request the certificate in the region your EKS cluster is in.

  3. Validate the Certificate

    Validate the requested certificate by adding CNAME records in Route 53. It is a very simple process. Go to the certificate you created and click on Create records in Route 53. The CNAMEs will be automatically added to Route 53.


    It usually takes about 5 minutes, but it can take up to an hour for the certificate to be ready for use.
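Rather than refreshing the console, you can poll the certificate status from the command line. This is an optional sketch, assuming the AWS CLI is configured; replace <certificate-arn> with the ARN shown in AWS Certificate Manager, and adjust the region to wherever you requested the certificate.

```shell
# Poll the certificate status (placeholder: <certificate-arn>)
aws acm describe-certificate \
  --region ap-south-1 \
  --certificate-arn <certificate-arn> \
  --query 'Certificate.Status' \
  --output text
# The certificate is ready for use once this prints ISSUED
```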

Now that you have everything you need, let's move on to the demonstration.

Docker Image

Here is the Docker Image used in this tutorial: reyanshkharga/nodeapp:v1


reyanshkharga/nodeapp:v1 runs on port 5000 and has the following routes:

  • GET /: Returns host info and app version
  • GET /health: Returns health status of the app
  • GET /random: Returns a randomly generated number between 1 and 10
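If you want to try the image before deploying it to the cluster, here is an optional local smoke test, assuming Docker is installed; the container name nodeapp-test is arbitrary.

```shell
# Run the image locally and hit the health endpoint
docker run -d --name nodeapp-test -p 5000:5000 reyanshkharga/nodeapp:v1
sleep 3
curl http://localhost:5000/health
# Clean up the test container
docker rm -f nodeapp-test
```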

Step 1: Create a Deployment

First, let's create a deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nodeapp
        image: reyanshkharga/nodeapp:v1
        ports:
          - containerPort: 5000

Apply the manifest to create the deployment:

kubectl apply -f my-deployment.yml

Verify deployment and pods:

# List deployments
kubectl get deployments

# List pods
kubectl get pods
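Instead of polling the lists above manually, you can have kubectl block until the deployment finishes. A small sketch; the label selector app=demo matches the deployment manifest, and the 120s timeout is an arbitrary choice.

```shell
# Block until the rollout completes
kubectl rollout status deployment/my-deployment

# Block until both pods report Ready (timeout is arbitrary)
kubectl wait --for=condition=Ready pods -l app=demo --timeout=120s
```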

Step 2: Create a LoadBalancer Service

Let's create a LoadBalancer service as follows:

apiVersion: v1
kind: Service
metadata:
  name: nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: my-nlb
    service.beta.kubernetes.io/aws-load-balancer-type: external # Must specify this annotation
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing # Default is internal
    # Health Check
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /health
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: '10'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: '2' # ignored
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: '2'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: '2'
    # TLS
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ap-south-1:170476043077:certificate/2d88e035-cde7-472a-9cd3-6b6ce6ece961
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # External DNS
    external-dns.alpha.kubernetes.io/hostname: app.example.com # give your domain name here (Optional)
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 443 # Creates a listener with port 443
      targetPort: 5000

Be sure to replace the value of service.beta.kubernetes.io/aws-load-balancer-ssl-cert with the ARN of the SSL certificate you created.

Apply the manifest to create the service:

kubectl apply -f my-service.yml

Verify service:

kubectl get svc

Note that we are offloading the reconciliation to AWS Load Balancer Controller using the service.beta.kubernetes.io/aws-load-balancer-type: external annotation.
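Once the controller has reconciled the service, the NLB's DNS name shows up in the service status. A quick sketch for pulling just that field:

```shell
# Print the DNS name of the NLB provisioned for the service
kubectl get svc nlb-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

This is the hostname that ExternalDNS will point your Route 53 record at.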

Step 3: Verify AWS Resources in AWS Console

Visit the AWS console and verify the resources created by AWS Load Balancer Controller.

Pay close attention to the health check configuration of the target group.

Note that the Load Balancer takes some time to become Active.
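You can also watch the load balancer state from the CLI instead of the console. An optional sketch, assuming the AWS CLI is configured for the cluster's region; my-nlb is the name set in the service annotation above.

```shell
# Print the current state of the NLB (provisioning -> active)
aws elbv2 describe-load-balancers \
  --names my-nlb \
  --query 'LoadBalancers[0].State.Code' \
  --output text
# Proceed once this prints: active
```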

Verify that the NLB was created by AWS Load Balancer Controller. You can check the events in the logs as follows:

kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true

Also, go to AWS Route 53 and verify the record that was added by ExternalDNS.

You can also check the actions the external-dns pod performs in its logs:

kubectl logs -f <external-dns-pod> -n external-dns
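Besides the console, you can confirm the record from the CLI. A hedged sketch; replace <hosted-zone-id> with the ID of your Route 53 hosted zone (visible in the Route 53 console or via aws route53 list-hosted-zones).

```shell
# List the A records in the hosted zone and look for the one
# ExternalDNS created for your subdomain
aws route53 list-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --query "ResourceRecordSets[?Type=='A']"
```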

Step 4: Access App Using Route 53 DNS

Once the load balancer is in the Active state, you can hit the subdomain you created in Route 53 and verify that everything is working properly.

Try accessing the following paths:

# Replace app.example.com with the subdomain you configured in ExternalDNS

# Root path
curl https://app.example.com/

# Health path
curl https://app.example.com/health

# Random generator path
curl https://app.example.com/random

Clean Up

Assuming your folder structure looks like the one below:

|-- manifests
|   |-- my-deployment.yml
|   |-- my-service.yml

Let's delete all the resources we created:

kubectl delete -f manifests/