What is our primary use case?
Currently, I'm working with Adidas. They work with a third party called Giant Swarm. We take care of the Kubernetes installation, that is, the infrastructure side. Everything else is handled on AWS.
They have utilized different EC2 instances in order to create the Kubernetes nodes: the master node and a couple of worker nodes. My company doesn't use Elastic Kubernetes Service, the built-in, AWS-managed offering. It's a self-managed cluster running on EC2.
We have multiple applications and different Docker images that are used as part of different projects. Some of the projects use Java-based microservices, and some of the projects use TIBCO as a middleware application server.
The end product is the Docker images, and the ultimate use of Kubernetes is to have an automated deployment job created on Jenkins to deploy those Docker images to the Kubernetes clusters. Kubernetes is an orchestrated way of deployment for different applications. It's a shared platform service.
We're deploying the latest version. It's deployed on an AWS public cloud.
It's difficult to count end users because we generally deploy the application in production. Adidas itself has end users with their e-commerce website. The number could be in the millions.
What is most valuable?
The autoscaling feature is the most valuable. Kubernetes itself is an orchestration tool. It automatically detects the load, and it automatically spins up the new Pod in the form of a new microservice deployment.
Autoscaling is a very important feature. It never disrupts an existing deployment: once an application is deployed in the Kubernetes cluster, Kubernetes responds to the load by creating replicas of the deployed application in different Pods.
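As a rough sketch of the autoscaling behavior described above, a HorizontalPodAutoscaler manifest like the following (all names are illustrative, not from the actual setup) tells Kubernetes to add Pods when average CPU load rises:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-microservice-hpa      # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-microservice        # the Deployment whose replicas are scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

Applied with `kubectl apply -f hpa.yaml`, this lets the cluster detect load and spin up new Pods automatically, as the review describes.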
What needs improvement?
There are some UI services available for Kubernetes, but the UI is not very user-friendly when we deploy multiple applications and want to view them on the UI itself.
I'm expecting more improvement on the UI development side, so that each object that is part of Kubernetes, like Pods, Deployments, ReplicaSets, ConfigMaps, Secrets, and PersistentVolumes, is reflected in the UI.
Those could be visible to the authorized user from the UI itself. It would help to interact with these objects and check their status if there's an issue with the data or memory.
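Until the UI catches up, the same objects can be inspected from the command line. A sketch of the standard kubectl commands (the namespace and Pod names are illustrative):

```shell
# List the workload and configuration objects mentioned above, per namespace
kubectl get pods,deployments,replicasets -n my-app
kubectl get configmaps,secrets,persistentvolumeclaims -n my-app

# Drill into a single Pod when data or memory issues are suspected
kubectl describe pod my-pod -n my-app      # events, restarts, resource limits
kubectl top pod my-pod -n my-app           # live CPU/memory (needs metrics-server)
kubectl logs my-pod -n my-app --previous   # logs from the last crashed container
```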
For how long have I used the solution?
I've been using Kubernetes for three years.
What do I think about the stability of the solution?
I would rate the stability as five out of five.
What do I think about the scalability of the solution?
I would rate the scalability as five out of five.
How are customer service and support?
If we have a problem, we raise a ticket and they respond immediately. Technical support is very fast.
Which solution did I use previously and why did I switch?
Compared with Docker Swarm, Kubernetes is far better. Docker provides its own orchestration tool, Docker Swarm, but it's very unstable. You cannot scale or utilize that tool in production. Kubernetes is far better and has a lot of excellent features.
How was the initial setup?
I would rate deployment as two out of five because it's not easy.
It took four to five days to finish deployment. If we start a deployment from scratch, we have a DevOps team that works on the deployment scripts and creates Helm charts in order to create the different Kubernetes objects, like Deployments, ConfigMaps, and Secrets. Everything is set up by the DevOps team.
There were about five people involved in implementation, but it depends on the workload. If we needed to create the deployment setup for a single microservice, one person is enough because we have a standard template to use in order to create the standard deployment set. Once the Helm chart is ready, it's just a matter of triggering the deployment.
We created the automation setup using Bitbucket, Jenkins, Helm, and Kubernetes. We created a Helm chart first, then placed it in the Harbor repository. It was already automated with the Bitbucket pull request job. In case of any change in a microservice, the respective development team creates a pull request to merge the code.
That automatically triggers Jenkins, which compiles the microservice and builds the Docker image. Once the Docker image has been built, it pushes the respective image to the Harbor repository or Artifactory, which acts as a Docker registry.
There is another job in Jenkins: once the new image is available, the deployment script, which is managed by a different Jenkins pipeline, automatically triggers and deploys to the respective Kubernetes services using the Helm chart.
Everything is well-automated. It's pretty simple after setup is completed. Setup is a one-time activity, but it takes a lot of effort because it's very complex.
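The automated flow described above could be sketched roughly as the following shell steps. The registry URL, chart name, namespace, and version tag are all illustrative stand-ins, not the actual pipeline, and the OCI-based chart push assumes Helm 3.8 or later:

```shell
#!/bin/sh
set -e

# 1. Jenkins compiles the microservice and builds the Docker image
docker build -t harbor.example.com/team/my-service:1.2.3 .

# 2. Push the image to the Harbor registry (or Artifactory)
docker push harbor.example.com/team/my-service:1.2.3

# 3. Package the Helm chart and publish it to the chart repository
helm package ./charts/my-service
helm push my-service-1.2.3.tgz oci://harbor.example.com/team/charts

# 4. A second Jenkins job deploys the new image via the Helm chart
helm upgrade --install my-service \
  oci://harbor.example.com/team/charts/my-service \
  --version 1.2.3 --set image.tag=1.2.3 --namespace my-app
```

In the real setup these steps are split across Jenkins pipeline stages triggered by the Bitbucket pull request, rather than run as one script.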
A third party takes care of maintenance. We don't have access to the cluster level.
What about the implementation team?
Deployment was done by Adidas itself. The cluster setup and cluster availability were provided by a third party. The deployment team then deployed the microservices' Docker images to Kubernetes.
A third party manages the Kubernetes cluster, and it's quite complex. I have experience with creating clusters. As soon as we started using EKS (Elastic Kubernetes Service), which is managed by AWS itself, it became very simple. We don't need to take care of cluster stability or cluster scaling.
For example, deploying a single microservice, which is a small application in itself, takes about five minutes.
What's my experience with pricing, setup cost, and licensing?
Kubernetes is open-source. It's free, but we're charged for AWS utilization.
What other advice do I have?
I would rate this solution as 10 out of 10.
Kubernetes is an excellent tool with many rich features. I would definitely recommend it. From a learning perspective, users should start with Minikube.
It's a single-node Kubernetes cluster that shows how Kubernetes runs its main control-plane components, like the kube-controller-manager, the etcd database, and the scheduler.
Everything is very compact in Minikube. You can start with a Minikube deployment, and as soon as you feel comfortable, you can extend your deployment to a full Kubernetes cluster with multiple nodes, which is very helpful for autoscaling. There's node-level and Pod-level scaling; both features are available in Kubernetes, so it's very flexible.
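For newcomers, a minimal sketch of getting started with Minikube, assuming minikube and kubectl are already installed (the deployment name and image are illustrative):

```shell
# Start a local single-node cluster
minikube start

# See the control-plane components running as Pods
kubectl get pods -n kube-system   # etcd, scheduler, controller-manager, etc.

# Deploy a sample application and expose it
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=NodePort
minikube service hello --url     # prints a local URL to reach the app
```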
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.