Create Ingress With Instance Mode¶
You can use the `alb.ingress.kubernetes.io/target-type` annotation in the Ingress object to specify how to route traffic to pods. You can choose between `instance` and `ip`.

The Kubernetes service must be of type `NodePort` to use `instance` mode. This is because worker nodes (EC2 instances) are registered as targets in the target group created by the AWS Load Balancer Controller.

The default value for `alb.ingress.kubernetes.io/target-type` is `instance`, so you don't have to define it explicitly unless you want to use `ip` mode.
Docker Images¶
Here is the Docker image used in this tutorial: `reyanshkharga/nodeapp:v1`
Note
`reyanshkharga/nodeapp:v1` runs on port `5000` and has the following routes:
- `GET /` - Returns host info and app version
- `GET /health` - Returns health status of the app
- `GET /random` - Returns a randomly generated number between 1 and 10
Step 1: Create a Deployment¶
First, let's create a deployment as follows:
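A minimal manifest along these lines should work; the deployment name, labels, and replica count are illustrative assumptions, while the image and container port come from the note above:

```yaml
# deployment.yml (illustrative file name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment          # illustrative name
spec:
  replicas: 2                  # illustrative replica count
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp           # pod label the service selector will match
    spec:
      containers:
        - name: nodeapp
          image: reyanshkharga/nodeapp:v1   # image used in this tutorial
          ports:
            - containerPort: 5000           # the app listens on port 5000
```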
Apply the manifest to create the deployment:
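Assuming the sketch above is saved as deployment.yml (an illustrative file name):

```bash
kubectl apply -f deployment.yml
```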
Verify deployment and pods:
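For example:

```bash
kubectl get deployments
kubectl get pods
```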
Step 2: Create a NodePort Service¶
The Kubernetes service must be of type `NodePort` to use `instance` mode. So, let's create a `NodePort` service as follows:
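Here is a sketch of such a service. The service name matches the one referenced by the Ingress rule later in this walkthrough; the selector and port values are illustrative and must line up with your deployment:

```yaml
# service.yml (illustrative file name)
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # name referenced by the Ingress rule below
spec:
  type: NodePort              # required for instance mode
  selector:
    app: nodeapp              # must match the pod labels of your deployment
  ports:
    - port: 80                # service port
      targetPort: 5000        # container port of the app
```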
Apply the manifest to create the NodePort service:
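Again assuming an illustrative file name, service.yml:

```bash
kubectl apply -f service.yml
```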
Verify service:
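For example:

```bash
kubectl get svc my-nodeport-service
```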
If you don't explicitly provide a `nodePort`, the service is automatically assigned one. However, you can set a specific `nodePort` if desired.
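For instance, the `ports` section of the service above could set one explicitly; the value 30080 is only an example and must fall within the cluster's node port range (30000-32767 by default):

```yaml
  ports:
    - port: 80
      targetPort: 5000
      nodePort: 30080   # illustrative value within the default node port range
```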
Step 3: Create Ingress¶
Now that we have the service ready, let's create an Ingress object:
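Here is a sketch of what such an Ingress could look like. The name my-ingress matches the troubleshooting commands later in this tutorial; the scheme and the specific load balancer and target group attribute values are illustrative placeholders:

```yaml
# ingress.yml (illustrative file name)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # illustrative scheme
    alb.ingress.kubernetes.io/target-type: instance     # default value, shown for clarity
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60          # illustrative value
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30   # illustrative value
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nodeport-service
                port:
                  number: 80   # service port defined above
```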
Observe the following:
- We have used annotations to specify load balancer and target group attributes
- We have one rule that matches the `/` path and routes traffic to `my-nodeport-service`
Note
Before the `IngressClass` resource and `ingressClassName` field were added in Kubernetes 1.18, Ingress classes were specified with a `kubernetes.io/ingress.class` annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers.
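With that older convention, the class would have been set roughly like this instead of the `ingressClassName` field:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
```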
Apply the manifest to create ingress:
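Assuming the illustrative file name ingress.yml:

```bash
kubectl apply -f ingress.yml
```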
Verify ingress:
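For example:

```bash
kubectl get ingress my-ingress
```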
Here's what happens when you create an ingress:
- An ALB (ELBv2) is created in AWS for the new ingress resource.
- Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource.
- Listeners are created for every port detailed in the ingress resource annotations.
- Listener rules are created for each path specified in the ingress resource. This ensures traffic to a specific path is routed to the correct Kubernetes service.
This is all done by the AWS Load Balancer Controller. You can check the events in the logs as follows:
```bash
kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true
```
You can see events such as `creating securityGroup`, `created securityGroup`, `creating loadBalancer`, `created loadBalancer`, `created listener`, `created listener rule`, `creating targetGroupBinding`, `created targetGroupBinding`, `successfully deployed model`, etc. in the logs.
Step 4: Verify AWS Resources in AWS Console¶
Visit the AWS console and verify the resources created by AWS Load Balancer Controller.
Pay close attention to the Listeners, Rules, and Target Groups.

You will observe that in the Target Group, instances are registered as targets because we chose `instance` as the target type.

Note that the Load Balancer takes some time to become Active.
Step 5: Access App Via Load Balancer DNS¶
Once the load balancer is in the Active state, you can hit the load balancer DNS and verify that everything is working properly.
Access the load balancer DNS by entering it in your browser. You can get the load balancer DNS either from the AWS console or the Ingress configuration.
Try accessing the following paths:
```bash
# Root path
<load-balancer-dns>/

# Health path
<load-balancer-dns>/health

# Random generator path
<load-balancer-dns>/random
```
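For example, with curl (replace the placeholder with your ALB's DNS name):

```bash
curl http://<load-balancer-dns>/
curl http://<load-balancer-dns>/health
curl http://<load-balancer-dns>/random
```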
Troubleshooting¶
If you don't see the load balancer in the AWS console, it means there is some issue with the ingress. To identify the underlying issue, you can describe the ingress and examine the controller logs as follows:
```bash
# Describe the ingress
kubectl describe ing my-ingress

# View AWS Load Balancer Controller logs
kubectl logs -f deploy/aws-load-balancer-controller -n aws-load-balancer-controller --all-containers=true
```
Clean Up¶
Assuming your folder structure looks like the one below:
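The listing below is only an illustrative sketch based on the file names used earlier in this walkthrough:

```
|-- deployment.yml
|-- service.yml
|-- ingress.yml
```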
Let's delete all the resources we created:
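Assuming those illustrative file names, a possible cleanup sequence is:

```bash
# Delete the ingress first so the controller removes the ALB, then the service and deployment
kubectl delete -f ingress.yml
kubectl delete -f service.yml
kubectl delete -f deployment.yml
```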