Deploying Microservices: A Guide to Persistent Volumes in Kubernetes
Table of contents
- Step 1: Cloning the source code of the project
- Step 2: Build the Docker Image
- Step 3: Push the Docker Image to Docker Hub
- Step 4: Creating a Persistent Volume Manifest for K8s
- Step 5: Creating a Persistent Volume Claim (PVC) Manifest in K8s
- Step 6: Create a Deployment Manifest File of MongoDB
- Step 7: Create a Service Manifest File of MongoDB
- Step 8: Create the Deployment Manifest File of the Flask App
- Step 9: Create the Service Manifest File of the Flask App
- Step 10: Test the Application
- Congratulations on Your Microservices Project! 🔥🥳
Welcome back, fellow Kubernetes adventurers! This post will walk you through deploying a two-tier application that combines the power of a Flask Python backend with a MongoDB database, all within the Kubernetes ecosystem. Along the way, we will also use Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).
Before we start with this exciting Microservices Deployment project, let’s make sure you have the following prerequisites:
Prerequisites:
Basic Knowledge of Kubernetes: This adventure assumes that you have a basic understanding of Kubernetes objects such as Pod, Deployment, and Service.
DockerHub Account: This blog requires an active DockerHub account, which will be used to upload and download the Docker image we will build. You can create an account by visiting https://hub.docker.com/
Git: Git installed on your Ubuntu machine
A K8s Cluster: To follow this hands-on guide, you need a ready-to-use K8s cluster.
My Setup:
Ubuntu 22.04 VM running in VMware Fusion Player
3 Cores, 4GB RAM assigned to the virtual machine
1 node Kubeadm bootstrapped Kubernetes cluster with Calico CNI plugin
Step 1: Cloning the source code of the project
The first step is to clone the source code of the project using the following command:
git clone https://github.com/kunal-gohrani/microservices-k8s.git
Since the project is evolving continuously, we will be using the blog branch for the purpose of this guide. Switch to it with git checkout blog, as shown in the sequence below.
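For reference, the full sequence looks like this; the cd path assumes git's default clone directory name for this repository:
# clone the repository, enter it, and switch to the blog branch
git clone https://github.com/kunal-gohrani/microservices-k8s.git
cd microservices-k8s   # assumed default clone directory
git checkout blog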
Step 2: Build the Docker Image
The project folder includes a Dockerfile that will be used to build the image.
Dockerfile:
FROM python:alpine3.7                  # small Python base image on Alpine 3.7
COPY . /app                            # copy the application source into the image
WORKDIR /app                           # run all following commands from /app
RUN pip install -r requirements.txt    # install the Python dependencies
ENV PORT 5000                          # port the Flask app listens on
EXPOSE 5000                            # document the container's listening port
ENTRYPOINT [ "python" ]                # always invoke the Python interpreter
CMD [ "app.py" ]                       # default argument: the Flask entrypoint
Run the command below inside the flask-api directory of the project folder to build the Docker image, replacing the placeholder <username> with your DockerHub username:
docker build . -t <username>/microservicespythonapp:latest
Use the command docker images to view your newly built image.
Step 3: Push the Docker Image to Docker Hub
After building the image, we need to push it to Docker Hub; our Kubernetes deployment will later pull this same image from Docker Hub to run the app.
Log in to DockerHub from the Docker CLI:
This step is required before you can push images. Run the command below and enter your DockerHub username and password:
docker login
Push the Docker image to DockerHub:
Use the command below to push the image, replacing the placeholder <username> with your username:
docker push <username>/microservicespythonapp:latest
After pushing the image, refresh the DockerHub website and you will see that a new repository has been created containing your Docker image.
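You can also verify the push from the CLI; docker manifest inspect (available in recent Docker releases) queries the registry directly and errors out if the image is missing:
# confirm the image now exists on Docker Hub
docker manifest inspect <username>/microservicespythonapp:latest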
Step 4: Creating a Persistent Volume Manifest for K8s
Congrats on reaching here! 🥳 You have now successfully built a Docker image and uploaded it to Docker Hub, completing the basic steps of the continuous integration process.
Now we will focus on deploying the app in our K8s cluster. For this, we first need to create a PersistentVolume object that will store our MongoDB data.
Persistent Volumes (PVs) are a way to store data reliably in Kubernetes. When you're working with databases like MongoDB, a PV provides a dedicated storage space that holds your data safely: even if your applications or containers change or restart, your valuable data remains intact and accessible. PVs are essential for preserving your database information within the Kubernetes environment. You can read more about persistent volumes in the Kubernetes documentation.
You can view the mongo-pv.yml file on GitHub or copy the YAML from here:
mongo-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 256Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /home/node/mongodata
We are provisioning a 256Mi persistent volume with the Retain reclaim policy, which means the data in this volume will be preserved even if the corresponding persistent volume claim is deleted.
You need to replace the value of the path key in the above YAML to point to a folder present on the node. In my case, I have already created a folder mongodata at the given path.
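If that folder does not exist on your node yet, create it first:
# on the node: create the hostPath directory used by the PV
sudo mkdir -p /home/node/mongodata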
Create the persistent volume in your cluster using kubectl apply -f mongo-pv.yml, then run kubectl get pv to see the volumes available.
The output should list mongo-pv with a capacity of 256Mi and a STATUS of Available.
NOTE: A local hostPath volume has been used here, which is not advised in production clusters or clusters with multiple nodes. Please make sure to use a volume type that fits your requirements.
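As an illustration of a node-independent alternative, here is a minimal sketch of an NFS-backed PersistentVolume; the server address and export path below are hypothetical placeholders, and you would need a reachable NFS server (and NFS client packages on your nodes) for this to work:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-nfs
spec:
  capacity:
    storage: 256Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5          # hypothetical NFS server IP
    path: /exports/mongodata  # hypothetical export path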
Step 5: Creating a Persistent Volume Claim (PVC) Manifest in K8s
After the persistent volume has been created successfully, we can now create the persistent volume claim that our MongoDB deployment will use to claim the volume and store data.
Persistent Volume Claims (PVCs) are like special storage request forms in Kubernetes. PVCs are used to request storage space from Kubernetes when your applications need it. They indicate the amount and type of storage required. Kubernetes then matches these requests with available storage, ensuring that your apps have enough space to store and access data.
You can view the mongo-pvc.yml file on GitHub or copy the YAML from here:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
We are requesting a 256Mi volume with the ReadWriteOnce access mode, matching the persistent volume we created above.
Create the persistent volume claim in your cluster using kubectl apply -f mongo-pvc.yml, then run kubectl get pvc and kubectl get pv to see the PVC and the volume state.
Both the PVC and the PV should now show a STATUS of Bound.
The persistent volume is now bound to the persistent volume claim as expected. Now let's use this PVC in the MongoDB deployment so it can store data.
Step 6: Create a Deployment Manifest File of MongoDB
You can view the mongo.yml file on GitHub or copy the YAML from here, create a file, and then apply it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: storage
              mountPath: /data/db
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: mongo-pvc
Notice the key spec.template.spec.containers[].volumeMounts, where we ask Kubernetes to mount the volume at the mount path /data/db; this is the path where MongoDB stores its data. We also declare the PVC backing that volume under spec.template.spec.volumes.
Create the deployment in your cluster using kubectl apply -f mongo.yml, then run kubectl get deployment after a few seconds to make sure the pod(s) are ready.
The mongo deployment should show READY 1/1.
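To double-check that the volume is mounted where MongoDB expects it, you can list the data directory inside the running pod; kubectl exec accepts a deployment reference and picks one of its pods:
# list MongoDB's data directory inside the pod
kubectl exec deploy/mongo -- ls /data/db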
Step 7: Create a Service Manifest File of MongoDB
You can view the mongo-svc.yml file on GitHub or copy the YAML from here, create a file, and then apply it.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
Create the service in your cluster using kubectl apply -f mongo-svc.yml, then run kubectl get svc after a few seconds to make sure the service is ready.
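This service gives MongoDB a stable in-cluster DNS name, mongo, which is how the Flask app can reach the database. The snippet below is a minimal sketch of such a connection using PyMongo, not a copy of the repository's app.py; the database name taskdb is a hypothetical placeholder:
# minimal connectivity sketch, assuming pymongo is installed
from pymongo import MongoClient

# "mongo" resolves via cluster DNS to the Service created above
client = MongoClient("mongodb://mongo:27017")
db = client["taskdb"]              # hypothetical database name
print(db.list_collection_names())  # prints [] on a fresh database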
Step 8: Create the Deployment Manifest File of the Flask App
You can view the taskmaster.yml file on GitHub or copy the YAML from here, create a file, and then apply it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taskmaster
  labels:
    app: taskmaster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taskmaster
  template:
    metadata:
      labels:
        app: taskmaster
    spec:
      containers:
        - name: taskmaster
          image: kunalgohrani/microservicespythonapp:latest
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
Make sure to replace the value of the image key with your own image name, as shown in the YAML file.
The deployment creates one pod from the kunalgohrani/microservicespythonapp:latest image we pushed earlier. The containerPort key declares that the container accepts connections on port 5000.
Run kubectl apply -f taskmaster.yml to create the deployment object in K8s, then run kubectl get deployment. The taskmaster deployment should show READY 1/1.
Step 9: Create the Service Manifest File of the Flask App
You can view the taskmaster-svc.yml file on GitHub or copy the YAML from here, create a file, and then apply it.
apiVersion: v1
kind: Service
metadata:
  name: taskmaster-svc
spec:
  selector:
    app: taskmaster
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 30007
  # This service is of type NodePort, which means you can access it at
  # <NodeIP>:30007, the nodePort specified above
  type: NodePort
Run kubectl apply -f taskmaster-svc.yml to create this service in K8s, then run kubectl get svc to check that the service has been created; taskmaster-svc should appear with TYPE NodePort and PORT(S) 80:30007/TCP.
Step 10: Test the Application
You can now test the application using the curl command given below; make sure to replace 192.168.0.159 with your node's/service's IP, depending on the cluster you are using.
curl 192.168.0.159:30007
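If you are not sure which IP to use, kubectl can list your node addresses:
# show node addresses; use the INTERNAL-IP (or EXTERNAL-IP, if set)
kubectl get nodes -o wide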
Test MongoDB:
You can check that the application is working by inserting data into the DB and then fetching it back, using the Flask app's GET /tasks and POST /task endpoints.
GET /tasks :
curl 192.168.0.159:30007/tasks
Since this is the first time you have run the app and no data has been inserted yet, the response should contain no tasks.
Insert data into MongoDB using the POST /task path, then fetch all data using the GET /tasks path:
curl -X POST -H "Content-Type: application/json" -d '{"task":"Show everyone the project"}' http://192.168.0.159:30007/task
curl 192.168.0.159:30007/tasks
The response from GET /tasks should now include the task you just inserted.
Let's check whether the database has populated the hostPath folder we declared in mongo-pv.yml under spec.hostPath.path. In my case, it was /home/node/mongodata.
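A quick directory listing on the node should show MongoDB's files (WiredTiger data files, the journal directory, and so on):
# on the node: inspect the hostPath directory backing the PV
ls -l /home/node/mongodata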
As expected, the given folder path contains data from MongoDB.
Congratulations on Your Microservices Project! 🔥🥳
Kudos on successfully completing this hands-on microservices project with Flask and MongoDB, harnessing the power of Persistent Volumes and Claims. You've not only deployed a fantastic application but also gained valuable insights into Kubernetes.
Now, don't keep this achievement to yourself! Share it on LinkedIn to inspire and connect with fellow tech enthusiasts. And remember, if you ever have questions or need guidance on your next project, feel free to reach out on LinkedIn. Keep innovating and exploring the exciting world of Kubernetes!