What is our primary use case?
We are basically integrators for Kubernetes because it is open source. For the supported distributions, such as Red Hat OpenShift, AWS Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS), we act as both a partner and an integrator.
Our clients use it mainly for application modularization or for building and deploying new microservices-based applications. If a client is running a monolithic or legacy application and wants to refactor it, we convert it to microservices. That means building container images and running them on a platform like Kubernetes, so the application can run across different nodes in the data center and we can manage it.
Essentially, it is about running applications as container images. Whenever an application requires scale-out, new features, refactoring, or modernization, we use the Kubernetes platform to run it.
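As a rough illustration of that workflow, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package) to deploy a containerized microservice; the deployment name, image, and namespace are hypothetical placeholders, not details from an actual client project.

```python
# Minimal sketch: deploy a containerized microservice to a Kubernetes cluster
# with the official Python client. Names and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig (e.g. for an EKS or AKS cluster)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="orders-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # start with two pods; autoscaling can adjust this later
        selector=client.V1LabelSelector(match_labels={"app": "orders-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-service"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="orders-service",
                        image="registry.example.com/orders-service:1.0.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```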
What is most valuable?
There are many good features. The scale-out features, such as replica sets and autoscaling, are very good. The number of running containers can be scaled automatically: if there is more load on the application, Kubernetes increases the number of running container instances. There is no need to worry about incoming load, response time, or manual scaling; it is taken care of automatically.
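The autoscaling behaviour described here is typically configured as a HorizontalPodAutoscaler. A minimal sketch with the Kubernetes Python client, assuming a deployment named orders-service already exists and the metrics server is installed in the cluster:

```python
# Minimal sketch: let Kubernetes scale the number of running containers
# between 2 and 10 replicas based on average CPU load. The deployment name
# is a placeholder and the metrics server is assumed to be present.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-service-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-service"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods above ~70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```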
What needs improvement?
Kubernetes is open source, which is both a benefit and a drawback, depending on who carries the operational responsibility. Supported distributions from Red Hat, Amazon, Microsoft, or Google are pricey.
It's good for bigger organizations, but for smaller organizations with only a few workloads, it may be too heavy and not easy to deploy, and the ROI may be lower because it requires a control plane, worker nodes, and multiple VMs to run. It suits larger organizations running many applications, but it is overkill for one or two small applications.
For how long have I used the solution?
I've been using it for at least the last four or five years, designing solutions with it and setting it up on cloud providers such as AWS and Azure.
What do I think about the stability of the solution?
It is quite stable compared to three or four years ago. If you are using a supported version and not a very old version, then it is good.
What do I think about the scalability of the solution?
It is scalable. We can add nodes and then run more container images.
Some plugins for monitoring, patching, and operations are available out of the box, so those are easy. Others may not be; for example, an older environment may not have supported plugins, and those have to be developed.
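As a simple sketch of what scaling looks like from the operator side, the snippet below lists the worker nodes and then bumps the replica count of a hypothetical deployment; in managed services such as EKS or AKS, the nodes themselves are usually added through node pools or autoscaling groups rather than through the API client.

```python
# Minimal sketch: inspect the nodes in the cluster and scale a deployment
# to run more container instances across them. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# See how many worker nodes are currently available to schedule pods on.
for node in core.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)

# Scale the deployment out; the scheduler spreads the pods over the nodes.
apps.patch_namespaced_deployment_scale(
    name="orders-service",
    namespace="default",
    body={"spec": {"replicas": 6}},
)
```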
How are customer service and support?
The customer service and support are satisfactory. Setting up takes more effort; later on, it is okay. Lab features and admissions are required.
How was the initial setup?
It requires initial effort. Later on, managing it is okay, but initially it requires skilled people to deploy it properly because of the networking between the worker nodes and the control plane.
The deployment time varies depending on the deployment. A simple POC for one VM can be deployed in an hour. For a dev-test environment, it may be around two hours. For production with many nodes, it may be four to five hours. It depends on the configuration, deployment type, and number of nodes.
Kubernetes improved our deployment and scaling processes. It requires underlying infrastructure: a control plane (the nodes formerly called masters) and worker nodes to run the images or workloads. Because the underlying servers or virtual machines can be autoscaled or provisioned through policy, there is no need to take care of the rest. Once the application is deployed as a container image, Kubernetes scales it automatically. It is just a matter of adding servers as worker nodes, on which multiple applications or microservices can run; there is no need to deploy again.
In a typical scenario, we used to create virtual machines, install an operating system like Windows or Linux, and then deploy the application. Kubernetes reduces deployment time, and we can run multiple applications as containers on the same node.
Even within a single application, there may be different types of containers, such as a front end or middleware connecting to a database, as sketched below.
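To illustrate that tiering, here is a hedged sketch that builds one small deployment per application tier (front end and middleware) with the Kubernetes Python client; the image names, tier names, and database address are made up for the example.

```python
# Minimal sketch: one deployment per application tier (front end, middleware).
# Images, names, and the database address are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()


def make_deployment(name: str, image: str, port: int, env: dict) -> client.V1Deployment:
    """Build a single-container deployment for one application tier."""
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name=name,
                            image=image,
                            ports=[client.V1ContainerPort(container_port=port)],
                            env=[client.V1EnvVar(name=k, value=v) for k, v in env.items()],
                        )
                    ]
                ),
            ),
        ),
    )


for tier in (
    make_deployment("shop-frontend", "registry.example.com/shop-frontend:1.0", 80, {}),
    make_deployment("shop-middleware", "registry.example.com/shop-api:1.0", 8080,
                    {"DB_HOST": "orders-db.example.internal"}),
):
    apps.create_namespaced_deployment(namespace="default", body=tier)
```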
What about the implementation team?
For deployment, around one person is good enough for an average setup. For support, one to two people are required, at least one person for each shift.
What's my experience with pricing, setup cost, and licensing?
I would rate the pricing a six out of ten, with ten being expensive. It's a bit costlier for smaller organizations.
What other advice do I have?
I would recommend using it.
I would rate it an eight out of ten, with one being bad and ten being very good.
Disclosure: My company has a business relationship with this vendor other than being a customer: Integrator