Why Kubernetes (K8S)?

Krishna Chaitanya Sarvepalli
Oct 30, 2018 · 3 min read
A giant ship (the data center) loaded with Docker containers

Note: This post assumes you have basic knowledge of Docker and containers.

This is my first post on Medium, and I am excited to finally write here after reading so many articles. I would like to write a series of posts on our Kubernetes (k8s for short) journey, which we recently started for one of our applications and which is slowly becoming an enterprise-wide solution for the entire company.

In earlier days, software applications were big, giant monoliths that were hard to manage and had slow release cycles. A monolithic application is a set of tightly coupled components developed and deployed as a single managed entity. Today these giant monolithic applications are being broken down into smaller deployable components called microservices.

A simple e-commerce web application (like Amazon, eBay, etc.)

There is one more issue that needs to be solved with all these smaller components. Let’s say one big monolithic application is now divided into 10 smaller deployable components. With more deployable components we need more resources: more nodes and more memory. If we are not careful with estimation and proper allocation of resources, hardware costs will go up. There might also be multiple data centers the apps are deployed to for multi-zone support, or the app may need active/passive data centers for high availability.

With the traditional monolithic approach, the application team depended on system administrators and the ops team to take care of deployments, and when an issue occurred it could take hours to debug (maybe not for every app). With microservices we should be able to release to production quickly, since the components are smaller, and developers should be able to deploy their own apps to production so that we can cut down the load on the ops team. The ops team can then focus on something other than managing individual apps.

A call from Jerry about a production support issue!

Nobody wants to wake up in the middle of the night (at least not me) just because a hardware resource (a node) failed where our critical app was running. There should be an automated way to detect resource failures, bring the app up on another node, and update the load balancer with the new node information (IP address) so traffic keeps routing correctly.

K8S

Kubernetes is a one-stop solution for the problems mentioned above (and of course there are many more use cases):

  1. Kubernetes abstracts the hardware infrastructure and exposes the whole cluster as one giant computational resource.
  2. Kubernetes takes care of failure handling: when a node fails, it automatically reschedules the app onto another node (which helps the ops team as well).
  3. Kubernetes can scale the application during high demand (like Thanksgiving shopping days or the Christmas holiday season) based on CPU usage.
  4. Kubernetes gives developers more control over deploying their own apps to production and promotes a DevOps culture.
  5. Kubernetes monitors the health of the application and supports rolling deployments, as well as rollbacks to an older version in case the new version of the app has any issues (a small sketch of points 2 and 5 follows this list; a scaling sketch for point 3 comes a bit further down).
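To make points 2 and 5 a bit more concrete, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes). Everything named here (shop-frontend, the container image, the /healthz path) is made up for illustration and not from a real application. The Deployment asks for 3 replicas, so if a node dies the missing pod is rescheduled onto a healthy node, and the rolling-update strategy plus the liveness probe are what enable the health monitoring, rolling deployments and rollbacks from point 5.

```python
# A minimal sketch using the official Kubernetes Python client
# (pip install kubernetes). Everything named here (shop-frontend,
# the image, the /healthz path) is hypothetical.
from kubernetes import client, config


def create_frontend_deployment() -> None:
    config.load_kube_config()  # reuse the same credentials kubectl uses
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="shop-frontend",
        image="registry.example.com/shop-frontend:1.0.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
        # Liveness probe: Kubernetes restarts the container if /healthz stops answering.
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=10,
            period_seconds=15,
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="shop-frontend"),
        spec=client.V1DeploymentSpec(
            # 3 replicas: if a node fails, the missing pod is rescheduled elsewhere.
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "shop-frontend"}),
            # Rolling updates replace pods gradually; `kubectl rollout undo` rolls back.
            strategy=client.V1DeploymentStrategy(type="RollingUpdate"),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "shop-frontend"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)


if __name__ == "__main__":
    create_frontend_deployment()
```

The same object is usually written as a YAML manifest and applied with kubectl apply; I am using the API client here only to keep the sketch self-contained and runnable.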

Finally, Kubernetes provides a platform for deploying, running, and monitoring any containerized application.
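As one last illustration, here is how the CPU-based scaling from point 3 could look, again as a hedged sketch with the same Python client and the same hypothetical shop-frontend Deployment from the previous sketch; the 70% CPU target and the 3–10 replica range are arbitrary values chosen for illustration.

```python
# Sketch: attach a HorizontalPodAutoscaler (autoscaling/v1) to the
# hypothetical "shop-frontend" Deployment, scaling between 3 and 10
# replicas to keep average CPU usage around 70%.
from kubernetes import client, config


def create_frontend_autoscaler() -> None:
    config.load_kube_config()  # reuse kubectl's credentials
    autoscaling = client.AutoscalingV1Api()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="shop-frontend"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1",
                kind="Deployment",
                name="shop-frontend",  # hypothetical Deployment from the earlier sketch
            ),
            min_replicas=3,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )

    autoscaling.create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )


if __name__ == "__main__":
    create_frontend_autoscaler()
```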

Note: I really appreciate any comments, and if there are any mistakes I am happy to correct them.

Krishna Chaitanya Sarvepalli

Solution Architect @ TSYS. Good at Java, Kubernetes, Kafka, AWS cloud, DevOps, architecture, and complex problems.