hostPath Volume Demo

Let's see how we can use a hostPath volume to mount a directory from the node's filesystem into a pod and keep data across pod restarts.

Here is the Docker Image used in this tutorial: reyanshkharga/nginx

Step 1: Create a Deployment With hostPath Volume

Let's create a deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reyanshkharga/nginx:v1
        imagePullPolicy: Always
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date -u) >> /my-data/my-persistent-data.txt; sleep 5; done"]
        volumeMounts:
        - name: my-volume
          mountPath: /my-data
      volumes:
      - name: my-volume
        hostPath:
          path: /data
          type: DirectoryOrCreate

Observe the following:

  1. We have added a hostPath volume in the pod template
  2. The directory location for the hostPath volume on the host is /data; the DirectoryOrCreate type tells the kubelet to create it if it doesn't already exist (a stricter variant is sketched after this list)
  3. The pod has only one container
  4. The hostPath volume is mounted at the /my-data directory of the container
  5. The container appends the current UTC date to the my-persistent-data.txt file every 5 seconds
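
For reference, here's a sketch of that stricter variant. With type: Directory the kubelet does not create the path, so the pod fails to start unless /data already exists on the node. This snippet is only for illustration and is not used in the rest of the demo:

      volumes:
      - name: my-volume
        hostPath:
          path: /data
          type: Directory # pod won't start if /data is missing on the node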

Apply the manifest to create the deployment:

kubectl apply -f manifests/my-deployment.yml
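
Optionally, you can wait for the rollout to complete before moving on:

kubectl rollout status deployment/my-deployment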

Step 2: Verify Deployment and Pods

# List deployments
kubectl get deployments

# List pods
kubectl get pods
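
The next step needs the pod name. As an optional convenience, you can capture it in a shell variable using the app=nginx label from the manifest (a minimal sketch, assuming a POSIX shell):

# Capture the name of the first pod matching the app=nginx label
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME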

Step 3: Verify Volume Mount and Data

Let's verify if the hostPath volume was mounted in the container.

  1. Start a shell session inside the container:

    kubectl exec -it <pod-name> -- bash
    
  2. Verify if the /my-data directory is present in the container:

    ls /my-data
    
  3. View the content of the /my-data/my-persistent-data.txt file:

    tail -f /my-data/my-persistent-data.txt
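
If you prefer not to open an interactive shell, the same check can be done in one command (replace <pod-name> with your actual pod name; tail is already known to be available in the image since we use it above):

# Print the last few lines of the file without an interactive session
kubectl exec <pod-name> -- tail -n 5 /my-data/my-persistent-data.txt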
    

Step 4: Verify hostPath Volume on Worker Node

Let's verify if the /data directory was created on the host machine where the pod is running.

  1. Find the node where the pod is running:

    kubectl get pods -o wide
    
  2. Access the worker node (EC2 instance) by logging in through SSH or Session Manager.

  3. Verify if the /data directory is present and contains the my-persistent-data.txt file:

    ls /data
    
  4. View the content of the /data/my-persistent-data.txt file:

    tail -f /data/my-persistent-data.txt
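
If SSH or Session Manager access is not convenient, kubectl debug offers another way to inspect the node's filesystem. This is a sketch of an alternative, not part of the original walkthrough; it starts a temporary pod on the node with the node's root filesystem mounted at /host:

# Start a temporary debugging pod on the node (replace <node-name>)
kubectl debug node/<node-name> -it --image=busybox

# Inside the debug pod, the node's filesystem is available under /host
ls /host/data
tail /host/data/my-persistent-data.txt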
    

Step 5: Delete the Deployment

Let's delete the deployment and see what happens to the hostPath volume.

kubectl delete -f manifests/

Step 6: Verify hostPath Volume on Worker Node Again

Access the node by logging in through SSH or Session Manager and verify if the data persists.
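
For example, the same checks from Step 4 should still work on the node:

ls /data
cat /data/my-persistent-data.txt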

Observations:

  1. The /data directory and the my-persistent-data.txt file created by the pod persist on the node
  2. The data the pod wrote to the file also persists

Clean Up

Assuming your folder structure looks like the one below:

|-- manifests
│   |-- my-deployment.yml

Let's delete all the resources we created:

kubectl delete -f manifests/
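
Note that this only removes the Kubernetes resources; as observed in Step 6, the data written to the hostPath directory stays on the worker node. If you also want to remove that, log in to the node and delete it manually (a sketch; double-check the path before running rm on a shared node):

# On the worker node: remove the file and the directory created for the hostPath volume
sudo rm -f /data/my-persistent-data.txt
sudo rmdir /data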