Launching your First Kubernetes Cluster with Nginx running

#90 Days of DevOps Challenge - Day 31

What is minikube?

  • Minikube is a tool that quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare metal.

  • Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort.

  • This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things.

Features of minikube

  1. Supports the latest Kubernetes release (+6 previous minor versions)

  2. Cross-platform (Linux, macOS, Windows)

  3. Deploy as a VM, a container, or on bare-metal

  4. Multiple container runtimes (CRI-O, containerd, docker)

  5. Direct API endpoint for blazing-fast image load and build

  6. Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy

  7. Addons for easily installed Kubernetes applications

  8. Supports common CI environments
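
A couple of these features are easy to try once minikube is installed (installation is covered in Task 1 below). This is just a small illustration; the addon and runtime names here are common examples, not requirements:

 # List the available addons and enable one of them
 minikube addons list
 minikube addons enable ingress

 # Start minikube with a specific container runtime (containerd in this example)
 minikube start --driver=docker --container-runtime=containerd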

Define Pod

  • Pods are the atomic unit of the Kubernetes cluster.

  • In Kubernetes, instead of deploying containers individually, we deploy Pods.

  • Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

  • A pod can have any number of containers running in it.

  • A pod is basically a wrapper around containers running on a node.

Why POD?

The answer is quite simple. To an application user, a container behaves much like a virtual machine: the end user can log in, install packages, and stop or start it. However, containers are designed to run a single process per container. What if an application requires multiple processes that communicate via IPC or through local files? In that case they need to run on the same machine (or VM). Since grouping multiple processes into a single container is not a best practice, what is the solution? This is where Pods come into the picture:

  • A Pod can have one or more containers running inside it (this doesn't mean you always have to run multiple containers inside a Pod). A Pod allows you to run closely related processes together and provides them with the same environment.

  • Containers in a pod share volumes, Linux namespaces, and cgroups. Each pod has a unique IP address, and the port space is shared by all the containers in that pod. This means that different containers inside a pod can communicate with each other using their corresponding ports on localhost.

  • A Pod always runs on a single worker node; it never spans multiple worker nodes.

  • Containers running inside a Pod share the same Linux network namespace, i.e. they share the same IP address, the same loopback interface, and the same port space.

  • All Pods in a K8s cluster (irrespective of which worker node they run on) reside in a single flat, shared network space, i.e. every Pod can reach every other Pod by its IP address with no NAT required, just as if they were all connected to the same LAN. A generic Pod manifest with two containers looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
  - name: container1
    image: image1
  - name: container2
    image: image2

  • apiVersion:- Version of the Kubernetes API we are going to use.

  • kind:- The kind of Kubernetes object we are trying to create, which is a Pod in this case.

  • metadata:- Metadata or information that uniquely identifies the object we're creating.

  • spec:- Specification of our pod, such as container name, image name, volumes, and resource requests.

Note: apiVersion, kind, and metadata apply to all types of Kubernetes objects and are required fields. spec is also a required field; however, its layout is different for different types of objects.
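
If you want to inspect these fields or check a manifest before creating anything, kubectl can explain the schema and do a client-side dry run. A small illustration, assuming the manifest above is saved as pod.yml:

 # Show the documentation for the top-level Pod fields
 kubectl explain pod
 kubectl explain pod.spec.containers

 # Validate the manifest without creating the pod
 kubectl apply -f pod.yml --dry-run=client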

Life Cycle of a Pod:-

  • Pending:- This means that the pod has been submitted to the cluster, but the controller hasn't created all its containers yet.

  • Running:- This state means that the pod has been assigned to one of the cluster nodes and at least one of the containers is either running or is in the process of starting up.

  • Succeeded:- This state means that the pod has run, and all of its containers have been terminated with success.

  • Failed:- This state means the pod has run and at least one of the containers has terminated with a non-zero exit code.

  • Unknown:- This means that the state of the pod could not be found. This may be because of the inability of the controller to connect with the node that the pod was assigned to.
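
You can see which of these phases a pod is currently in from its status. As a small example (the pod name is a placeholder):

 # Print only the phase (Pending, Running, Succeeded, Failed, Unknown)
 kubectl get pod <pod-name> -o jsonpath='{.status.phase}'

 # Describe the pod for events and more detailed status information
 kubectl describe pod <pod-name>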

Types of Installation of K8s

  1. Minikube (Docker in Docker, DinD) → least used in Prod → Easiest

  2. Kubeadm → Bare metal (open-source tool) → Used in Prod → Intermediate

  3. Managed K8s Cluster

    AWS → EKS (Elastic Kubernetes Service)

    Azure → AKS (Azure Kubernetes Service)

    GCP → GKE (Google Kubernetes Engine)

  4. KIND (Kubernetes in Docker)
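
As a quick point of comparison for option 4, KIND runs each cluster node as a Docker container, so a throwaway cluster can be created and deleted in a couple of commands (assuming the kind binary and Docker are already installed; the cluster name is just an example):

 kind create cluster --name dev
 kubectl cluster-info --context kind-dev
 kind delete cluster --name dev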

Task 1: Install Minikube on your local machine

Step 1:- First, create an EC2 instance and select the t2.medium instance type while launching it. To install Kubernetes on an EC2 machine, the instance should have at least 2 CPUs, 4 GB of memory, and 20 GB of free disk space.

Step 2:- Next, install Docker on the system by running the commands below:

sudo apt update -y
 sudo apt install docker.io -y

 sudo systemctl start docker
 sudo systemctl enable docker
 sudo systemctl status docker

Step 3:- Add the current user to the docker group with the command below:

sudo usermod -aG docker $USER && newgrp docker
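
You can confirm that the group change has taken effect by running a Docker command without sudo; if it still fails with a permission error, log out and log back in:

 docker ps
 docker run hello-world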

Step 4:- Now install Minikube on the system:

 curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

 sudo install minikube-linux-amd64 /usr/local/bin/minikube

Step 5:- To interact with the cluster, we also need to install the kubectl CLI:

 sudo snap install kubectl --classic

Step 6:- Now we can start Minikube

 minikube start --driver=docker
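
Once minikube start finishes, it is worth verifying that the cluster is actually up before moving on; for example:

 # Check the status of the minikube cluster components
 minikube status

 # Confirm kubectl can reach the cluster and the node is Ready
 kubectl get nodes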

Task 2: Create your first pod on Kubernetes through minikube.

Step 1:- To create a pod, we have to write a YAML file, also known as a manifest file. To create a pod for NGINX, we pass the values and attributes in key-value format:

 apiVersion: v1
 kind: Pod
 metadata:
   name: nginx
 spec:
   containers:
   - name: nginx
     image: nginx:1.14.2
     ports:
     - containerPort: 80

Step 2:- Now run the command below to create the pod:

kubectl apply -f pod.yml

Step 3:- Check the pod's status with kubectl get pods; the output shows that the NGINX pod was created successfully:

 kubectl get pods

Step 4:- To check whether NGINX is actually serving traffic, SSH into the minikube node and curl the IP address of the pod:

 #Get the IP
 kubectl get pods -o wide

 # SSH into minikube
 minikube ssh

 # Curl the IP address to access the NGINX
 curl http://<IP-Addr>
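
An alternative that avoids SSHing into the node is kubectl port-forward, which forwards a local port to the pod; this is just one way to test it (run the curl from a second terminal):

 # Forward local port 8080 to port 80 of the nginx pod
 kubectl port-forward pod/nginx 8080:80

 # In another terminal on the same machine
 curl http://localhost:8080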

Task 3: Create NGINX pod on K8s through Kubeadm

Step 1:- First, launch two EC2 instances (one master, one worker) and select the t2.medium instance type while launching them. To install Kubernetes on an EC2 machine, each instance should have at least 2 CPUs, 4 GB of memory, and 20 GB of free disk space.

Step 2:- Next, install Docker on both machines by running the commands below:

sudo apt update -y
sudo apt install docker.io -y

sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker

Step 3:- Now add the Kubernetes apt repository on both the master and the worker node, and update both systems. Run these commands as root (for example after sudo su), since they write to /etc:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

 sudo apt update -y

Step 4:- Install kubeadm, kubectl, and kubelet on both the master and the worker node:

 sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y

Step 5:- Now initialize the control plane by running kubeadm init on the master node only (the worker node will be connected to it in the later steps):

sudo su
kubeadm init

Step 6:- Set up the kubeconfig for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 7:- Finish the master setup by installing the Weave Net network plugin with the following command:

 kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
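
After the network plugin is applied, the master node should move to the Ready state within a minute or so; you can verify this with:

 kubectl get nodes
 kubectl get pods -n kube-system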

Step 8:- Now switch to the worker node for the remaining steps. You will need the join command that kubeadm init printed on the master; if you no longer have it, you can regenerate it on the master as shown below.
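
One common way to regenerate the join command is to run the following on the master node; it prints a complete kubeadm join command that can be copied to the worker:

 kubeadm token create --print-join-command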

Step 9:- On the worker node, reset any existing kubeadm state so the node can join the cluster cleanly:

sudo su
kubeadm reset

Step 10:- Paste the join command on the worker node and append --v=5 at the end for verbose output.
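
For reference, the pasted command generally has the following shape; the IP address, token, and hash below are placeholders that come from your own master node:

 sudo kubeadm join <master-ip>:6443 --token <token> \
     --discovery-token-ca-cert-hash sha256:<hash> --v=5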

Step 11:- Now, from the master node, run the command below; you should see both nodes listed:

kubectl get nodes

Note:-

  • In older versions of kubectl, the kubectl run command created a Deployment and a ReplicaSet along with the pod. If you only want to create a bare pod without a Deployment or ReplicaSet, you can use the --restart=Never flag; in current kubectl versions this is the default behaviour.

  • But with --restart=Always (the old default, which creates a Deployment), if your pod is deleted or runs into an issue, a replacement pod is created immediately.

Step 12:- Run the NGINX image as a standalone pod with the command below:

kubectl run nginx --image=nginx --restart=Never

Step 13:- Check that the NGINX container is actually running on the worker node (see the example below).
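
Since Docker is the container runtime installed on these nodes, one quick sanity check is to look for the NGINX container with docker on the worker node:

 # Run on the worker node
 docker ps | grep nginx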

Step 14:- Get the details of the pod

kubectl get pods -o wide

#devops #90daysofDevOps

Thank you for reading!! I hope you find this article helpful!!

If you have any queries or corrections for this blog, please let me know.

Happy Learning!!

Saikat Mukherjee
