This blog provides a quick overview of Kubernetes.

Traditionally, at the end of every release cycle, developers package their applications and hand them over to the ops team, who then deploy and monitor them. In case of failures, the team manually migrates them to healthy servers.

With the advent of microservices architecture, these big monoliths are broken into independently runnable components. This decouples the components from one another so that each can be deployed and scaled individually, ensuring the system can keep up with changing business requirements.

Kubernetes has attracted a lot of attention over the years, and the main reason for its rapid adoption is its ability to solve the above-mentioned problems efficiently.

In this blog, we’ll discuss some of Kubernetes’ basic concepts and also talk about the architecture of the system, the problems it solves, and the model it uses to handle containerized deployment and scaling.


Kubernetes is a system for managing and coordinating containerized applications across a cluster of machines. It is a platform for managing the lifecycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

You can define how your applications should run and the ways they should interact with other applications or with the outside world. You can scale your services up and down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and control your applications with high levels of flexibility, power, and reliability.


Kubernetes is a system built in layers. At its base, Kubernetes joins individual physical or virtual machines into a cluster using a shared network to communicate between servers. The cluster is the physical platform on which all Kubernetes components, capabilities, and workloads run.

In the Kubernetes ecosystem, one server functions as the master server. This server acts as a gateway for the cluster by exposing an API to users and clients, health-checking other servers, deciding how to schedule and assign work, and coordinating communication between other components. The master server is the primary point of contact with the cluster and is responsible for most of the centralized logic Kubernetes provides.

The other machines in the cluster are designated as nodes: servers responsible for accepting and running workloads using local and external resources. To help with isolation, management, and flexibility, Kubernetes runs applications and services in containers, so each node needs to be equipped with a container runtime (such as Docker or rkt). The node accepts work instructions from the master server and creates or destroys containers accordingly, adjusting networking rules to route and forward traffic appropriately.

As mentioned above, applications and services run on the cluster within containers. The underlying components make sure that the desired state of the applications matches the actual state of the cluster. Users communicate with the cluster by interacting with the main API server, either directly or through clients and libraries. To start an application or service, a declarative plan is submitted in JSON or YAML describing what to create and how it should be managed. The master server then takes the plan and figures out how to run it on the infrastructure by examining the requirements and the current state of the system. This group of user-defined applications, running according to a specified plan, represents Kubernetes’ final layer.
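As a sketch, a minimal declarative plan might look like the following Pod manifest (the names and image used here are illustrative, not from any particular deployment):

```yaml
# A minimal declarative plan: one Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image would do
      ports:
        - containerPort: 80
```

Submitting this with `kubectl apply -f pod.yaml` hands the plan to the API server, which records the desired state and lets the scheduler and node components make it real.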

Master Server Components

As we discussed above, the master server acts as the primary control plane for the Kubernetes cluster. It serves as the main contact point for administrators and users, and also provides many cluster-wide systems to the relatively simple worker nodes. In general, the components of the master server work together to accept user requests, determine the best way to schedule workload containers, authenticate clients and nodes, adjust cluster-wide networking, and manage scaling and health-checking responsibilities.


etcd

One of the essential components that Kubernetes needs to function is a globally available configuration store. The etcd project, developed by the team at CoreOS, is a lightweight, distributed key-value store that can be configured to span multiple nodes.

Kubernetes uses etcd to store configuration data that can be accessed by each of the nodes in the cluster. This can be used for service discovery and can help components configure or reconfigure themselves according to up-to-date information. It also helps maintain cluster state with features like leader election and distributed locking. Because etcd provides a simple HTTP/JSON API, the interface for setting or retrieving values is very straightforward.

Kube-API Server

The most important master service is the API server. This is the central management point of the entire cluster, as it allows a user to configure Kubernetes’ workloads and organizational units. It is responsible for making sure that the etcd store and the service details of deployed containers are in agreement. It acts as the bridge between various components to maintain cluster health and disseminate information and commands.


Controller Manager

The controller manager is a general service with many responsibilities. Primarily, it manages the various controllers that regulate the state of the cluster, maintain workload lifecycles, and perform routine tasks. For instance, a replication controller ensures that the number of replicas (identical copies) defined for a pod matches the number currently deployed on the cluster. The details of these operations are written to etcd, and the controller manager watches for changes through the API server.


Scheduler

The process that actually assigns workloads to specific nodes in the cluster is the scheduler. The service reads in a workload’s operating requirements, evaluates the current infrastructure environment, and places the work on an acceptable node or nodes.

The scheduler is responsible for tracking the available capacity on each host to make sure that workloads are not scheduled in excess of the available resources. To do this, the scheduler must know the total capacity as well as the resources already allocated to existing workloads on each server.
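This capacity bookkeeping is driven by the resource requests declared in each container spec. As an illustrative fragment of a Pod spec (names and values are assumptions, not recommendations):

```yaml
# Fragment of a Pod spec: the scheduler sums the requests of all pods
# on a node to decide whether the node has room for another pod.
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:          # used for scheduling decisions
          cpu: "250m"      # a quarter of a CPU core
          memory: "128Mi"
        limits:            # enforced at runtime, not at scheduling time
          cpu: "500m"
          memory: "256Mi"
```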


Kubernetes can be deployed in many different environments and can interact with various infrastructure providers to understand and manage the state of resources in the cluster. While Kubernetes works with generic representations of resources like attachable storage and load balancers, it needs a way to map these to the actual resources provided by heterogeneous cloud providers.

Node Server Components

In Kubernetes, the servers that perform work by running containers are known as nodes. Node servers have a few requirements that are necessary for communicating with the master components, configuring the container networking, and running the workloads assigned to them.

Container Runtime

The container runtime is responsible for starting and managing containers: applications encapsulated in a relatively isolated but lightweight operating environment. Each unit of work on the cluster is, at its basic level, implemented as one or more containers that must be deployed. The container runtime on each node is the component that ultimately runs the containers defined in the workloads submitted to the cluster.


Kubelet

The main contact point for each node with the cluster is a small service called the kubelet. This service is mainly responsible for relaying information to and from the control plane services, as well as interacting with the etcd store to read configuration details or write new values.

The kubelet service communicates with the master components to authenticate to the cluster and receive commands and work. Work is received in the form of a manifest which defines the workload and the operating parameters. The kubelet process then assumes responsibility for maintaining the state of the work on the node server. It controls the container runtime to launch or destroy containers as needed.

Kube Proxy

To manage individual host subnetting and make services available to other components, a small proxy service called kube-proxy is run on each node server. This process forwards requests to the correct containers, can perform primitive load balancing, and is generally responsible for making sure the networking environment is predictable and reachable, yet isolated where appropriate.
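For instance, kube-proxy programs the forwarding rules behind Service objects. A sketch of such an object (names are illustrative):

```yaml
# A Service giving pods labeled app: hello a stable virtual IP;
# kube-proxy on each node forwards traffic arriving on port 80 to them.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc          # illustrative name
spec:
  selector:
    app: hello             # matches pods carrying this label
  ports:
    - port: 80             # port exposed on the Service's cluster IP
      targetPort: 80       # port the containers actually listen on
```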

Kubernetes Workloads and Objects

Although containers are the underlying mechanism used to deploy applications, Kubernetes adds further layers of abstraction over the container interface to provide scaling, resiliency, and lifecycle management features. Rather than managing containers directly, users define and interact with instances composed of various primitives provided by the Kubernetes object model.


Pods

A pod is the most basic unit that Kubernetes deals with. Containers themselves are not assigned to hosts. Instead, one or more tightly coupled containers are encapsulated in an object called a pod.

A pod generally represents one or more containers that should be controlled as a single application. Pods consist of containers that operate closely together, share a life cycle, and should always be scheduled on the same node. They are managed entirely as a unit and share their environment, volumes, and IP space. Despite their containerized implementation, you should generally think of a pod as a single, monolithic application to best conceptualize how the cluster will manage the pod’s resources and scheduling.
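A sketch of two tightly coupled containers sharing a pod (names, images, and commands are illustrative):

```yaml
# A Pod with two containers sharing a volume and IP space:
# the main app writes logs, a sidecar tails them.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar    # illustrative name
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}          # scratch space shared for the pod's lifetime
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
    - name: log-tailer      # sidecar reading the same volume
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Both containers are scheduled together, live and die together, and can reach each other over localhost.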

Replication Controllers and Replica Sets

Usually, when working with Kubernetes, rather than working with single pods, you will instead be managing groups of identical, replicated pods. These are created from pod templates and can be horizontally scaled by controllers known as replication controllers and replica sets.

A replication controller is an object that defines a pod template and control parameters to scale identical replicas of a pod horizontally by increasing or decreasing the number of running copies. This is an easy way to distribute load and increase availability natively within Kubernetes. The replication controller knows how to create new pods as needed because a template that closely resembles a pod definition is embedded within the replication controller configuration.

The replication controller makes sure that the number of pods deployed in the cluster matches the number of pods in its configuration. If a pod or underlying host fails, the controller starts new pods to compensate. If the number of replicas in a controller’s configuration changes, the controller either starts up or kills containers to match the desired number. Replication controllers can also perform rolling updates to roll a set of pods over to a new version one by one, minimizing the impact on application availability.
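A sketch of such a controller with its embedded pod template (names are illustrative):

```yaml
# A ReplicationController keeping three identical copies of a pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc            # illustrative name
spec:
  replicas: 3               # desired number of identical pods
  selector:
    app: hello              # pods it manages are found by this label
  template:                 # embedded pod template used to create replicas
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Changing `replicas` and re-applying the manifest is all it takes to scale the set up or down.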

Replica sets are an iteration on the replication controller design, with greater flexibility in how the controller identifies the pods it is meant to manage. Replica sets are beginning to replace replication controllers because of their greater replica selection capabilities, but they are not able to do rolling updates to cycle backends to a new version as replication controllers can. Instead, replica sets are meant to be used inside additional, higher-level units that provide that functionality.
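A sketch of the richer, set-based selectors that replica sets allow (names are illustrative):

```yaml
# A ReplicaSet using set-based selectors, which replication controllers lack.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs            # illustrative name
spec:
  replicas: 3
  selector:
    matchExpressions:       # set-based matching, not just exact label equality
      - key: app
        operator: In
        values: ["hello", "hello-canary"]
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
```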

Like pods, both replication controllers and replica sets are rarely the units you will work with directly. While they build on the pod design to add horizontal scaling and reliability guarantees, they lack some of the fine-grained lifecycle management capabilities found in more complex objects.


Deployments

Deployments are one of the most common workloads to directly create and manage. Deployments use replica sets as a building block, adding flexible lifecycle management functionality to the mix.

Deployments are a high-level object designed to ease the life cycle management of replicated pods. Deployments can be modified easily by changing their configuration, and Kubernetes will adjust the replica sets, manage transitions between different application versions, and optionally maintain event history and undo capabilities automatically. Because of these features, deployments will likely be the type of Kubernetes object you work with most frequently.
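A sketch of a Deployment wrapping a replica set (names are illustrative):

```yaml
# A Deployment managing a replica set and its rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy        # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate     # replace pods gradually on version changes
    rollingUpdate:
      maxUnavailable: 1     # keep at least two pods serving during a rollout
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25 # bump this tag and re-apply to roll out a new version
```

Changing the image tag and re-applying the manifest triggers a rolling update, and `kubectl rollout undo deployment/hello-deploy` reverses it.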


Kubernetes is a project that allows users to run scalable, highly available containerized workloads on a highly abstracted platform. Although Kubernetes’ architecture and set of internal components can at first seem daunting, their power, flexibility, and robust feature set are exceptional in the open-source world. By understanding how the basic building blocks fit together, you can begin to design systems that fully leverage the capabilities of the platform to run and manage your workloads at scale.

Being a premier cloud solutions company, KloudOne has been using Kubernetes for quite some time and has recently moved to provide a Kubernetes-compliant solution stack.
