Deploying a Microservices-Based Application in a Kubernetes Cluster
Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. It is a portable, extensible platform that manages containerized workloads and services, facilitating declarative configuration and automation.
Managing a database in a Kubernetes cluster can be challenging due to the distributed container environment. Kubernetes was initially designed for stateless applications, so running stateful workloads like databases requires careful consideration. That said, Kubernetes provides self-healing capabilities for containers, including auto-placement, auto-replication, auto-restart, persistent storage management, and scaling based on CPU usage.
The two common ways to manage databases in a Kubernetes environment, where we can also manage the state of the database along with preserving its data, are:-
1. StatefulSets:- StatefulSets are well-suited for databases because each pod in a StatefulSet gets its own persistent storage and a stable, unique network identity. This makes them handy for stateful applications, but they often require more work and planning than other Kubernetes controllers.
2. DaemonSets:- DaemonSets deploy a copy of a pod to each node in the cluster, which can be useful for databases that require a local instance on every node. In either case, it is also beneficial to pair the database with a Kubernetes Operator to handle configuration, the creation of new databases, scaling instances up or down, backups, and restores.
This blog covers deploying a MongoDB cluster in Kubernetes using StatefulSets. We will then deploy a two-tier application, consisting of a backend and a frontend, in the same Kubernetes cluster. We will also set up the external-dns service in Kubernetes, which keeps AWS Route53 and AWS EKS in sync, so that whenever we deploy an Ingress-based resource in our cluster, it automatically creates an A record in AWS Route53.
The prerequisites that need to be enabled in the AWS EKS cluster before deploying the application are:-
External Service:- Kubernetes external services facilitate the mapping of an external DNS name to a static name within the cluster, allowing internal services to refer to external entities by a familiar name. This is achieved through the use of ExternalName services, which are implemented at the DNS level using a simple CNAME DNS record. Additionally, when defining a Service in Kubernetes, externalIPs can be specified for any service type, allowing the Service to be accessed by clients using the specified external IP address and port.
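As a quick illustration, here is a hedged sketch of an ExternalName Service (the Service name and external DNS name below are hypothetical, not from this setup):

```yaml
# Maps the in-cluster name "external-db" to an external DNS name.
# Pods can connect to "external-db" and Kubernetes resolves it via a
# CNAME record to the external host.
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical in-cluster name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external DNS name
```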
AWS Load Balancer Controller:- The AWS Load Balancer Controller is a Kubernetes controller that manages Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes Ingress resources by provisioning AWS Application Load Balancers (ALBs), and Kubernetes Service resources of type LoadBalancer by provisioning AWS Network Load Balancers (NLBs). It was formerly known as the AWS ALB Ingress Controller and is an open-source project.
Steps for creating an AWS EKS cluster:-
There are three ways to create an EKS cluster in the AWS cloud environment:-
1. Creating an EKS cluster using the eksctl command:-
eksctl create cluster --region <your region code> --name <Your Kubernetes Cluster name> --nodes <no of nodes you want in your cluster> --nodegroup-name <Your nodegrp name> --node-type <your node instance type> --node-ami-family <Your instance Family whether it is Amazon linux or Ubuntu>
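For instance, a sample invocation of the command above (the region, names, node count, and instance type here are illustrative placeholders; adjust them to your environment):

```shell
eksctl create cluster \
  --region us-east-1 \
  --name demo-eks-cluster \
  --nodes 3 \
  --nodegroup-name demo-nodegroup \
  --node-type t3.medium \
  --node-ami-family AmazonLinux2
```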
2. Creating an EKS cluster using Terraform:- Terraform is an infrastructure management tool that can be used to deploy your infrastructure in a cloud environment using Terraform code. To deploy EKS using Terraform, you can follow:- https://github.com/sibasish934/DevOps-terraform
3. Creating an EKS cluster directly from the AWS console.
Steps to set up the AWS Load Balancer Controller in your EKS cluster:- You can follow this blog:- https://sibasishblogs.blogspot.com/2023/12/deploying-application-load-balancer.html
Steps to set up the external-dns service in your EKS cluster:-
The prerequisites for setting up external-dns in your EKS cluster are:-
i. You should have set up a Route53 hosted zone.
ii. You have created an AWSRoute53ExternalDNSRole using the following policy:-
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "route53:ListTagsForResource"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Note:- While creating the role, select Web identity as the trusted entity type and then choose the OIDC provider of your EKS cluster as the identity provider.
Then add the ARN of the role in the following manifest where indicated and apply the file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
  annotations:
    eks.amazonaws.com/role-arn: # <<<< REPLACE with the ARN of your role.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default # <<<< REPLACE with the namespace where you want to deploy external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=abc.xyz # <<<< REPLACE with your domain.
            - --provider=aws
            - --policy=upsert-only
            - --aws-zone-type=public
            - --registry=txt
            - --txt-owner-id=external-dns
After applying the file, check the logs of the external-dns pod. If the logs show the hosted zone being read and records being synchronized without AWS permission errors, your external-dns service and Route53 are now in sync.
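One way to inspect the logs (assuming the Deployment name external-dns from the manifest above):

```shell
# Tail the controller logs; look for lines indicating the hosted zone
# was found and records are up to date, and for any AWS "AccessDenied"
# errors, which would point at an IAM role misconfiguration.
kubectl logs deployment/external-dns --tail=50
```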
Now we are all set to deploy the application pods in the EKS cluster.
To allow your cluster to create external EBS volumes, the cluster should have the EBS CSI driver add-on installed.
Note:- Also add the EC2 full access permission to the EKS worker node IAM role.
To create the MongoDB StatefulSet resources, apply the following file:-
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: cloudchamp
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      role: db
  template:
    metadata:
      labels:
        role: db
        env: demo
        replicaset: rs0.main
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: replicaset
                      operator: In
                      values:
                        - rs0.main
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:4.2
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "rs0"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: gp2
        resources:
          requests:
            storage: 0.5Gi
After creating the Mongo StatefulSet, you need to create a headless Service for it, which can be created using the following YAML file:-
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: cloudchamp
  labels:
    role: db
    env: demo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: db
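Because the Service is headless (clusterIP: None), each StatefulSet pod gets a stable DNS name of the form <pod-name>.<service-name>, so mongo-0 is reachable at mongo-0.mongo.cloudchamp.svc.cluster.local. You can verify this from a throwaway pod; the busybox image below is just an illustrative choice:

```shell
# Resolve a StatefulSet pod's stable DNS name from inside the cluster.
kubectl -n cloudchamp run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup mongo-0.mongo.cloudchamp.svc.cluster.local
```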
To configure mongo-0 as the primary node and mongo-1 and mongo-2 as secondary nodes, follow these steps:-
1. Exec into the mongo-0 pod with the command:-
kubectl -n cloudchamp exec -it mongo-0 -- mongo
2. Alternatively, pipe the replica set initialization commands into the mongo shell from outside the pod:-
cat << EOF | kubectl -n cloudchamp exec -i mongo-0 -- mongo
rs.initiate();
sleep(2000);
rs.add("mongo-1.mongo:27017");
sleep(2000);
rs.add("mongo-2.mongo:27017");
sleep(2000);
cfg = rs.conf();
cfg.members[0].host = "mongo-0.mongo:27017";
rs.reconfig(cfg, {force: true});
sleep(5000);
EOF
3. To verify that the commands were applied properly, check the status of the MongoDB replica set with rs.status() in the mongo shell.
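For example, a quick one-liner that prints each member and its state (this is a hedged sketch; it simply iterates over the members reported by rs.status()):

```shell
# Expect mongo-0 to report PRIMARY and mongo-1/mongo-2 to report SECONDARY.
kubectl -n cloudchamp exec -it mongo-0 -- mongo --quiet --eval \
  'rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr); })'
```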
Now we will create a Secret in the Kubernetes cluster to store the username and password of the Mongo database in base64-encoded form.
The Secret can be created using the following command:-
kubectl create secret generic mongo-cred --from-literal=username=<your MongoDB username> --from-literal=password=<your MongoDB password>
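Kubernetes stores Secret values base64-encoded, as mentioned above. A small sketch of what that encoding looks like (the credential values here are placeholders for illustration only):

```python
import base64

def encode_secret_value(value: str) -> str:
    """Base64-encode a string the way Kubernetes stores Secret data."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Placeholder credentials, for illustration only.
print(encode_secret_value("admin"))     # -> YWRtaW4=
print(encode_secret_value("password"))  # -> cGFzc3dvcmQ=
```

This is what you would see under the data: keys if you ran kubectl get secret mongo-cred -o yaml; note that base64 is an encoding, not encryption.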
After creating the Secret, it needs to be mounted as a volume or passed as an environment variable in the deployment file.
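For instance, a hedged snippet of a container spec consuming the mongo-cred Secret as environment variables (the container name, image placeholder, and variable names are illustrative):

```yaml
# Fragment of a Deployment's pod spec.
containers:
  - name: backend                  # illustrative container name
    image: <your-backend-image>
    env:
      - name: MONGO_USERNAME
        valueFrom:
          secretKeyRef:
            name: mongo-cred       # the Secret created above
            key: username
      - name: MONGO_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongo-cred
            key: password
```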
The backend and frontend deployment YAML files are available on my GitHub. You can get them from here.
After deploying the Ingress file for the API, the A records will be created automatically in Route53, because you have set up external-dns in your EKS cluster. Your backend and frontend should then be reachable at:
http://<your-backend-domain>/ok
http://<your-frontend-domain>
The A record for the frontend Ingress is also created automatically in Route53 by the external-dns pod running in your cluster. Whenever you create an Ingress resource, add the following annotation to tell the external-dns pod to create the A record in Route53 for your application's domain:-
external-dns.alpha.kubernetes.io/hostname: <your domain name>
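A minimal sketch of an Ingress carrying that annotation (the host, Service name, and port below are placeholders; the alb annotations assume the AWS Load Balancer Controller set up earlier):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress                    # illustrative name
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api.example.com  # <<<< your domain
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com                # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc          # illustrative Service name
                port:
                  number: 8080
```

Once this Ingress is created, the external-dns pod should pick it up and upsert the corresponding A record in your Route53 hosted zone.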
Thank you.