Create Ingress With Health Check

Health checks on target groups can be controlled with the following annotations:

  • alb.ingress.kubernetes.io/healthcheck-protocol: specifies the protocol used when performing health checks on targets.
  • alb.ingress.kubernetes.io/healthcheck-port: specifies the port used when performing health checks on targets. When using target-type: instance with a service of type NodePort, the health check port can be set to traffic-port to automatically point to the correct port.
  • alb.ingress.kubernetes.io/healthcheck-path: specifies the HTTP path when performing health checks on targets.
  • alb.ingress.kubernetes.io/healthcheck-interval-seconds: specifies the interval (in seconds) between health checks of an individual target.
  • alb.ingress.kubernetes.io/healthcheck-timeout-seconds: specifies the timeout (in seconds) during which no response from a target means a failed health check.
  • alb.ingress.kubernetes.io/success-codes: specifies the HTTP or gRPC status code that should be expected when doing health checks against the specified health check path.
  • alb.ingress.kubernetes.io/healthy-threshold-count: specifies the consecutive health check successes required before considering an unhealthy target healthy.
  • alb.ingress.kubernetes.io/unhealthy-threshold-count: specifies the consecutive health check failures required before considering a target unhealthy.

Annotation Format

Annotation keys and values can only be strings. Advanced formats must be encoded as shown below:

boolean: 'true' # Must be quoted
integer: '42' # Must be quoted
stringList: s1,s2,s3
stringMap: k1=v1,k2=v2
json: 'jsonContent' # Must be quoted
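For instance, in a manifest the integer and stringMap formats look like this (a hypothetical snippet; the annotation values shown are examples):

```yaml
metadata:
  annotations:
    # integer format: the numeric value must be quoted so YAML treats it as a string
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    # stringMap format: comma-separated key=value pairs, no quoting required
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=DevOps
```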

Docker Images

Here is the Docker Image used in this tutorial: reyanshkharga/nodeapp:v1


reyanshkharga/nodeapp:v1 runs on port 5000 and has the following routes:

  • GET / Returns host info and app version
  • GET /health Returns health status of the app
  • GET /random Returns a randomly generated number between 1 and 10

Step 1: Create a Deployment

First, let's create a deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: nodeapp
          image: reyanshkharga/nodeapp:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 5000

Apply the manifest to create the deployment:

kubectl apply -f my-deployment.yml

Verify deployment and pods:

# List deployments
kubectl get deployments

# List pods
kubectl get pods
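Optionally, before putting a load balancer in front of the app, you can sanity-check that it responds. This step is not part of the original flow; it assumes the deployment's pods are running:

```shell
# Forward a local port to one of the deployment's pods
kubectl port-forward deploy/my-deployment 5000:5000 &

# Hit the app's health route directly
curl http://localhost:5000/health

# Stop the port-forward when done
kill %1
```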

Step 2: Create a NodePort Service

Let's create a NodePort service as follows:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 5000
      targetPort: 5000

Apply the manifest to create the NodePort service:

kubectl apply -f my-nodeport-service.yml

Verify service:

kubectl get svc

If you don't explicitly provide a nodePort, you'll observe that the service is automatically assigned one from the cluster's node port range. However, you can specify one explicitly if desired.
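To pin the node port instead of letting Kubernetes pick one, set the nodePort field on the service (the value below is an example; it must fall within the cluster's node port range, 30000-32767 by default):

```yaml
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30080 # example value within the default 30000-32767 range
```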

Step 3: Create Ingress

Now that we have the service ready, let's create an Ingress object with health check:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Load Balancer Annotations
    alb.ingress.kubernetes.io/scheme: internet-facing # Default value is internal
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=DevOps # Optional
    alb.ingress.kubernetes.io/load-balancer-name: my-load-balancer # Optional
    # Health Check Annotations
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nodeport-service
                port:
                  number: 5000

Observe the following:

  1. We have used annotations to specify load balancer and target group attributes
  2. We have one rule that matches / path and then routes traffic to my-nodeport-service
  3. We have specified health check parameters for the target group

Apply the manifest to create ingress:

kubectl apply -f my-ingress.yml

Verify ingress:

kubectl get ingress

# Or use the short name:
kubectl get ing

Step 4: Verify AWS Resources in AWS Console

Visit the AWS console and verify the resources created by AWS Load Balancer Controller.

Pay close attention to the health check configuration of the target group that the Ingress created.

Note that the Load Balancer takes some time to become Active.
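Instead of polling the console, you can check the load balancer's state from the AWS CLI. This assumes the ALB was named my-load-balancer via the load-balancer-name annotation and that your CLI is configured for the right region:

```shell
# Prints 'provisioning' while the ALB is being created and 'active' once ready
aws elbv2 describe-load-balancers \
  --names my-load-balancer \
  --query 'LoadBalancers[0].State.Code' \
  --output text
```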

Also, verify that the ALB was created by the AWS Load Balancer Controller. You can check the events in the controller logs as follows:

kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true

Step 5: Access App Via Load Balancer DNS

Once the load balancer is in Active state, you can hit the load balancer DNS and verify if everything is working properly.

Access the load balancer DNS by entering it in your browser. You can get the load balancer DNS either from the AWS console or the Ingress configuration.

Try accessing the following paths (replace <load-balancer-dns> with your load balancer's DNS name):

# Root path
http://<load-balancer-dns>/

# Health path
http://<load-balancer-dns>/health

# Random generator path
http://<load-balancer-dns>/random


If you don't see the load balancer in the AWS console, there is most likely an issue with the Ingress. To identify the underlying problem, describe the Ingress and examine the controller logs as follows:

# Describe the ingress
kubectl describe ing my-ingress

# View aws load balancer controller logs
kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true

Clean Up

Assuming your folder structure looks like the one below:

|-- manifests
│   |-- my-deployment.yml
│   |-- my-nodeport-service.yml
│   |-- my-ingress.yml

Let's delete all the resources we created:

kubectl delete -f manifests/