Kubernetes Important interview Q&A

#90 Days of DevOps Challenge - Day 37

  1. What is Kubernetes and why is it important?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Kubernetes services provide load balancing and simplify container management on multiple hosts. They make an enterprise’s applications more scalable, flexible, and portable.

In fact, Kubernetes is the fastest-growing project in the history of open-source software, after Linux. According to a 2021 study by the Cloud Native Computing Foundation (CNCF), from 2020 to 2021, the number of Kubernetes engineers grew by 67% to 3.9 million. That’s 31% of all backend developers, an increase of 4 percentage points in a year.

  2. What is the difference between Docker Swarm and Kubernetes?

| Point of comparison | Kubernetes | Docker Swarm |
| --- | --- | --- |
| Installation | Complex | Comparatively simple |
| Learning curve | Heavy | Lightweight |
| GUI | Detailed view | No GUI; needs third party |
| Cluster setup | Easy | Easy |
| Availability features | Multiple | Minimal |
| Scalability | All-in-one scaling based on traffic | Values scaling quickly (approx. 5x faster than K8s) over scaling automatically |
| Horizontal auto-scaling | Yes | No |
| Monitoring capabilities | Yes, built-in | No; needs third party |
| Load balancing | No built-in internal auto load balancing | Internal load balancing |
| Security features | Supports multiple security features | Supports multiple security features |
| CLI | Needs a separate CLI | CLI out of the box |

  3. How does Kubernetes handle network communication between containers?

Kubernetes handles network communication between containers using a networking model that allows for seamless and secure communication within a cluster.

  • Pod Networking: Containers in Kubernetes are organized into pods, which are the smallest deployable units. Each pod gets its own unique IP address within the cluster.

  • Cluster Networking: Kubernetes assigns a unique IP address to each pod, and all pods can communicate with each other across the cluster.

  • Service Networking: Kubernetes Services provide a stable virtual IP address (ClusterIP) to represent a set of pods. Services act as an abstraction layer, allowing other pods or external clients to access the pods running behind the service.

  • Load Balancing: Kubernetes provides built-in load balancing for services. When multiple instances of a pod are running, Kubernetes automatically distributes the incoming traffic across these instances, ensuring high availability and efficient resource utilization.
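As a minimal sketch of the Service networking described above, the manifest below gives a set of pods a stable ClusterIP. The name `my-app`, the label, and the port numbers are illustrative assumptions, not taken from any real application:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # illustrative name
spec:
  selector:
    app: my-app         # routes to pods labeled app=my-app
  ports:
    - port: 80          # port exposed on the ClusterIP
      targetPort: 8080  # port the container actually listens on
```

Traffic sent to the Service's ClusterIP on port 80 is load balanced across all ready pods matching the selector.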

  4. How does Kubernetes handle the scaling of applications?

  • Kubernetes offers several mechanisms for scaling applications. Horizontal Pod Autoscaling (HPA) adjusts the number of pod replicas based on CPU or custom metrics, ensuring resources match demand.

  • Vertical Pod Autoscaling (VPA) adjusts resource requests and limits per pod to optimize utilization. Cluster Autoscaler dynamically scales the cluster by adding or removing nodes based on resource usage.

  • Manual scaling allows adjusting the desired number of replicas manually. Additionally, Kubernetes supports StatefulSets and DaemonSets for specialized scaling requirements.
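A sketch of the HPA mechanism mentioned above, using the `autoscaling/v2` API. The Deployment name `my-app` and the thresholds are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:           # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU exceeds 70%
```

Manual scaling, by contrast, is a single command such as `kubectl scale deployment my-app --replicas=5`.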

  5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?

In the real world, this is where things start: we write a Deployment to launch Pods. A Deployment provides declarative updates for Pods and ReplicaSets.

  • A Deployment is a Kubernetes object that acts as a wrapper around a ReplicaSet and makes it easier to use.

  • In order to manage replicated services, it's recommended that you use Deployments that, in turn, manage the ReplicaSet and the Pods created by the ReplicaSet.

  • The major motivation for using a Deployment is that it maintains a history of revisions.

  • Every time a change is made to the ReplicaSet or the underlying Pods, a new revision of the ReplicaSet is recorded by the Deployment.

  • This way, using a Deployment makes it easy to roll back to a previous state or version. Keep in mind that every rollback will also create a new revision for the Deployment.
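A minimal Deployment sketch to illustrate the above. The name, label, and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # the Deployment keeps 3 pods running via a ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:                 # pod template; changing it records a new revision
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25 # illustrative image
```

Revision history can then be inspected with `kubectl rollout history deployment/my-app`, and a rollback performed with `kubectl rollout undo deployment/my-app`.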

  6. Can you explain the concept of rolling updates in Kubernetes?

  • This is a strategy used to update a Deployment without having any downtime.

  • With the RollingUpdate strategy, the controller updates the Pods one by one.

  • Hence, at any given time, there will always be some Pods running.

  • This strategy is particularly helpful when you want to update the Pod template without incurring any downtime for your application.

  • However, be aware that having a rolling update means that there may be two different versions of Pods (old and new) running at the same time.

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
```
  • maxUnavailable is the maximum number of Pods that can be unavailable during the update.

  • maxSurge is the maximum number of Pods that can be scheduled/created above the desired number of Pods (as specified in the replicas field).

The two parameters—maxUnavailable and maxSurge—can be tuned for availability and the speed of scaling up or down the Deployment.

  7. How does Kubernetes handle network security and access control?

  • Kubernetes handles network security and access control through various mechanisms. It provides network policies to define and enforce communication rules between pods.

  • Additionally, Kubernetes offers authentication and authorization mechanisms, integrating with external identity providers and supporting RBAC (Role-Based Access Control).

  • It supports transport encryption using TLS for secure communication between components. Kubernetes also provides secrets management for securely storing sensitive information such as API credentials or database passwords.
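As a sketch of the network policies mentioned above, the manifest below allows only `frontend`-labeled pods to reach `backend`-labeled pods on port 8080. All names, labels, and the port are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:              # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:      # only pods with this label may connect
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicies are only enforced when the cluster's CNI plugin supports them (e.g., Calico or Cilium).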

  8. Can you give an example of how Kubernetes can be used to deploy a highly available application?

  • Deploying Multiple Replicas: Define a Kubernetes Deployment with multiple replicas of your application.

  • Load Balancing: Set up a Kubernetes Service to load balance traffic across the replicas of your application.

  • Health Checks and Self-Healing: Configure readiness and liveness probes for your application. Kubernetes periodically checks the health of each replica using these probes.

  • Node Failure Handling: Configure Kubernetes with multiple worker nodes spread across different availability zones or regions.

  • Persistent Storage: Utilize Kubernetes' persistent volume mechanisms to ensure data durability and availability.

  • Cluster Autoscaling: Enable the Kubernetes Cluster Autoscaler to automatically scale the cluster size based on resource demand.

  9. What is a namespace in Kubernetes? Which namespace does a pod use if we don't specify one?

  • You can think of a Namespace as a virtual cluster inside your Kubernetes cluster. You can have multiple namespaces inside a single Kubernetes cluster, and they are all logically isolated from each other. They can help you and your teams with organization, security, and even performance!

  • In most Kubernetes distributions, the cluster comes out of the box with a Namespace called “default.” In fact, there are three namespaces that Kubernetes ships with: default, kube-system (used for Kubernetes components), and kube-public (used for public resources). kube-public isn’t really used for much right now, and it’s usually a good idea to leave kube-system alone.

  • If a namespace is not specified for a pod, it is automatically assigned to the "default" namespace. The "default" namespace is created by default in every Kubernetes cluster and is used when no explicit namespace is specified.
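A small sketch of creating a namespace and placing a pod in it. The namespace name `team-a`, pod name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: team-a   # omit this field and the pod lands in "default"
spec:
  containers:
    - name: demo
      image: nginx:1.25
```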

  10. How does Ingress help in Kubernetes?

Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments.

In production environments, you typically need content-based routing, support for multiple protocols, and authentication. Ingress allows you to configure and manage these capabilities inside the cluster.
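A sketch of a host-based routing rule. The hostname, Service name, and port are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com   # route requests for this hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app  # backing Service
                port:
                  number: 80
```

Note that an Ingress resource does nothing by itself; an ingress controller (such as the NGINX Ingress Controller) must be running in the cluster to satisfy it.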

  11. Explain the different types of Services in Kubernetes.

  • In Kubernetes, there are different types of services to facilitate communication and expose applications within the cluster:

    1. ClusterIP: The default service type, accessible only within the cluster. It assigns a stable internal IP address to the service.

    2. NodePort: Exposes the service on a static port on each cluster node's IP. It enables access to the service from outside the cluster.

    3. LoadBalancer: Automatically provisions an external load balancer (if supported by the underlying infrastructure) to distribute traffic to the service.

    4. ExternalName: Maps a service to a DNS name, allowing the service to be accessed by that name.

  12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?

Self-healing is a feature provided by the Kubernetes open-source system. If a containerized app or an application component fails or goes down, Kubernetes re-deploys it to retain the desired state. Kubernetes provides self-healing by default.

Kubernetes implements self-healing at the Application Layer. This means that if your app is well-containerized and a pod crashes, Kubernetes will work to reschedule it as soon as possible. Containers are made available for clients only if they are ready to serve. The redeployment is subject to the availability of sufficient infrastructure.

Through its self-healing ability, Kubernetes can achieve the following:

  • Restart failed containers

  • Kill containers that do not respond to client requests

To check if the pods are functioning at desired states, Kubernetes performs two probes:

  1. The liveness probe checks the running status of a container. If the probe fails, Kubernetes terminates the container and creates a new one according to its restart policy.

  2. The readiness probe checks a container for its client request serving abilities. If the probe fails, Kubernetes will remove the IP address of the affected pod.
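The two probes above can be sketched in a pod spec like this. The pod name, image, paths, and timings are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:           # failing this restarts the container
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
      readinessProbe:          # failing this removes the pod from Service endpoints
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5
```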

  13. How does Kubernetes handle storage management for containers?

Kubernetes provides storage management for containers through various mechanisms:

  • Persistent Volumes (PV): A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned.

  • Persistent Volume Claims (PVC): A PersistentVolumeClaim is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany).

  • Container Storage Interface (CSI): Kubernetes just provides an interface for Pods to consume storage. The storage properties (speed, replication, resiliency, etc.) are all the storage provider's responsibility. For any storage provider to integrate their storage with Kubernetes, they have to write a CSI plugin.
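A sketch of a PVC requesting dynamically provisioned storage. The claim name, size, and storage class name are illustrative assumptions (available classes depend on the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi           # requested capacity
  storageClassName: standard # assumed class; check with `kubectl get storageclass`
```

A pod then mounts the claim by referencing `data-claim` in a `persistentVolumeClaim` volume source.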

  14. How does the NodePort service work?

  • This exposes the service on each Node’s IP at a static port. A ClusterIP service, to which the NodePort service routes, is automatically created, so we can contact the NodePort service from outside the cluster.

  • A NodePort service is the most primitive way to get external traffic directly to your service.

  • NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.

Note that the NodePort Service has a lot of downsides:

  • you can only have one service per port

  • you can only use ports 30000–32767,

  • if your Node/VM IP address changes, you need to deal with that.

That’s why it’s not recommended for production use cases.
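A NodePort service sketch to make the above concrete. The names, ports, and label are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80         # ClusterIP port inside the cluster
      targetPort: 8080 # container port
      nodePort: 30080  # must fall in the 30000-32767 range; omit to auto-assign
```

The service is then reachable from outside at `<any-node-ip>:30080`.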

  15. What is a multinode cluster and single-node cluster in Kubernetes?

  • A multinode cluster refers to a configuration where multiple worker nodes are connected to a control plane. Each worker node runs containerized applications and contributes to the cluster's overall computing and storage resources.

  • Multinode clusters distribute the workload across nodes, enabling scalability, high availability, and fault tolerance.

  • A single-node cluster consists of a single worker node running both the control plane and application workloads. It is typically used for development or testing purposes when a full-fledged multinode cluster is not required.

  • Single-node clusters are simpler to set up but lack the benefits of distributed resources and fault tolerance offered by multinode clusters.

  16. What is the difference between create and apply in Kubernetes?

  • In Kubernetes, the "create" and "apply" commands are used to manage resources in the cluster, but they differ in their behavior and usage.

  • The "create" command is used to create new resources in the cluster. It creates the specified resource based on the provided configuration, regardless of whether a similar resource already exists.

  • The "apply" command, on the other hand, is used to apply changes to existing resources. It updates or creates resources based on the provided configuration.

  • If a resource with the same name already exists, the "create" command will fail. The "apply" command, by contrast, will update the existing resource's configuration, or create a new resource if it doesn't exist.

#devops #90daysofDevOps

Thank you for reading!! I hope you find this article helpful!!

If you have any queries or corrections for this blog, please let me know.

Happy Learning!!

Saikat Mukherjee
