
Create Ingress With Instance Mode

You can use the alb.ingress.kubernetes.io/target-type annotation in the Ingress object to specify how traffic is routed to pods. You can choose between instance and ip.

The Kubernetes service must be of type NodePort to use instance mode. This is because the worker nodes (EC2 instances) are registered as targets in the target group created by the AWS Load Balancer Controller.

The default value for target-type is instance, so you don't have to define this annotation explicitly unless you want to use ip mode.
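For reference, here is how the target type is set on an Ingress, shown as a standalone fragment (the full manifest appears in Step 3):

```yaml
metadata:
  annotations:
    # instance: route traffic to worker nodes via NodePort (default)
    # ip: route traffic directly to pod IPs
    alb.ingress.kubernetes.io/target-type: instance
```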

Docker Images

Here is the Docker Image used in this tutorial: reyanshkharga/nodeapp:v1


reyanshkharga/nodeapp:v1 runs on port 5000 and has the following routes:

  • GET / Returns host info and app version
  • GET /health Returns health status of the app
  • GET /random Returns a randomly generated number between 1 and 10

Step 1: Create a Deployment

First, let's create a deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: nodeapp
        image: reyanshkharga/nodeapp:v1
        imagePullPolicy: Always
        ports:
          - containerPort: 5000

Apply the manifest to create the deployment:

kubectl apply -f my-deployment.yml

Verify deployment and pods:

# List deployments
kubectl get deployments

# List pods
kubectl get pods

Step 2: Create a NodePort Service

The Kubernetes service must be of type NodePort to use instance mode. So, let's create a NodePort service as follows:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: nodeapp
  ports:
    - port: 80
      targetPort: 5000

Apply the manifest to create the NodePort service:

kubectl apply -f my-nodeport-service.yml

Verify service:

kubectl get svc

If you don't explicitly provide a nodePort, the service is automatically assigned one. However, you can specify one explicitly if desired.
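An explicit nodePort would look like the following fragment. The port value here is hypothetical; it must fall within the cluster's node port range (30000-32767 by default):

```yaml
ports:
  - port: 80
    targetPort: 5000
    nodePort: 30080 # hypothetical value; must be in the node port range
```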

Step 3: Create Ingress

Now that we have the service ready, let's create an Ingress object:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing # Default value is internal
    alb.ingress.kubernetes.io/tags: Project=eks-masterclass,Team=DevOps # Optional
    alb.ingress.kubernetes.io/load-balancer-name: my-load-balancer # Optional
    alb.ingress.kubernetes.io/target-type: instance # Default is instance
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nodeport-service
                port:
                  number: 80

Observe the following:

  1. We have used annotations to specify load balancer and target group attributes
  2. We have one rule that matches / path and then routes traffic to my-nodeport-service


Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with the kubernetes.io/ingress.class annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers.
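For reference, the legacy annotation form looked like this; the ingressClassName field used in this tutorial is the modern equivalent:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
```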

Apply the manifest to create ingress:

kubectl apply -f my-ingress.yml

Verify ingress:

kubectl get ingress
kubectl get ing

Here's what happens when you create an ingress:

  1. An ALB (ELBv2) is created in AWS for the new ingress resource.
  2. Target Groups are created in AWS for each unique kubernetes service described in the ingress resource.
  3. Listeners are created for every port detailed in the ingress resource annotations.
  4. Listener rules are created for each path specified in the ingress resource. This ensures traffic to a specific path is routed to the correct kubernetes service.

This is all done by the AWS Load Balancer Controller. You can check the events in the logs as follows:

kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true

You will see events such as creating securityGroup, created securityGroup, creating loadBalancer, created loadBalancer, created listener, created listener rule, creating targetGroupBinding, created targetGroupBinding, and successfully deployed model in the logs.
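In particular, the ports that listeners are created for can be controlled with the alb.ingress.kubernetes.io/listen-ports annotation. A sketch for listening on both HTTP 80 and HTTPS 443 (the certificate ARN below is a hypothetical placeholder; HTTPS additionally requires a valid ACM certificate):

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:region:account:certificate/id # hypothetical placeholder
```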

Step 4: Verify AWS Resources in AWS Console

Visit the AWS console and verify the resources created by AWS Load Balancer Controller.

Pay close attention to the Listeners, Rules and TargetGroups.

You will observe that in the Target Group, instances are registered as targets because we chose instance as target type.

Note that the Load Balancer takes some time to become Active.

Step 5: Access App Via Load Balancer DNS

Once the load balancer is in Active state, you can hit the load balancer DNS and verify if everything is working properly.

Access the load balancer DNS by entering it in your browser. You can get the load balancer DNS either from the AWS console or the Ingress configuration.

Try accessing the following paths:

# Root path
http://<load-balancer-dns>/

# Health path
http://<load-balancer-dns>/health

# Random generator path
http://<load-balancer-dns>/random


If you don't see the load balancer in the AWS console, the ingress most likely has an issue. To identify the underlying problem, describe the ingress and examine the controller logs as follows:

# Describe the ingress
kubectl describe ing my-ingress

# View aws load balancer controller logs
kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true

Clean Up

Assuming your folder structure looks like the one below:

|-- manifests
│   |-- my-deployment.yml
│   |-- my-nodeport-service.yml
│   |-- my-ingress.yml

Let's delete all the resources we created:

kubectl delete -f manifests/