What is our primary use case?
Tekton is the orchestration engine within OpenShift, which is our on-premises platform. Since we are not on the cloud yet, OpenShift plays a strategic role for us, and Tekton is a significant part of it, serving as our pipeline orchestrator.
In my two years of using Tekton and OpenShift Pipelines, I haven't encountered many issues. Tekton works well as an orchestrator. It's just one component of the larger OpenShift platform, and it is itself made up of multiple pieces, such as event listeners, trigger bindings, and more. Because the overall OpenShift platform is a platform as a service, most operational aspects are taken care of for us.
How has it helped my organization?
Tekton plays a primary role as an orchestrator. When we receive a webhook from any Git repository, such as Azure Git or GitLab, Tekton triggers the pipeline and performs tasks like code retrieval, running SonarQube or Fortify tasks, and creating and deploying images to multiple environments.
We have multiple promotion environments, from dev to SIT, then UAT, and finally production. We follow a continuous-flow branching approach, promoting changes from the smaller environments to the larger ones: dev to SIT, SIT to UAT, and UAT to production, where production corresponds to our master branch. This keeps the workflow smooth and the deployments reliable.
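To make the webhook flow above concrete, here is a minimal sketch of the Tekton Triggers wiring involved. This is an illustration under my own assumptions rather than our exact configuration: the resource names (git-push-binding, build-and-deploy-template, git-webhook-listener), the pipeline name build-and-deploy, and the payload fields bound in the TriggerBinding (which vary by Git provider) are all placeholders.

  apiVersion: triggers.tekton.dev/v1beta1
  kind: TriggerBinding
  metadata:
    name: git-push-binding                      # illustrative name
  spec:
    params:
      - name: git-url
        value: $(body.repository.git_http_url)  # payload field differs per Git provider
      - name: git-revision
        value: $(body.checkout_sha)
  ---
  apiVersion: triggers.tekton.dev/v1beta1
  kind: TriggerTemplate
  metadata:
    name: build-and-deploy-template             # illustrative name
  spec:
    params:
      - name: git-url
      - name: git-revision
    resourcetemplates:
      - apiVersion: tekton.dev/v1beta1
        kind: PipelineRun
        metadata:
          generateName: build-and-deploy-run-
        spec:
          pipelineRef:
            name: build-and-deploy              # the pipeline sketched later in this review
          params:
            - name: git-url
              value: $(tt.params.git-url)
            - name: git-revision
              value: $(tt.params.git-revision)
          workspaces:
            - name: shared-workspace
              volumeClaimTemplate:
                spec:
                  accessModes: ["ReadWriteOnce"]
                  resources:
                    requests:
                      storage: 1Gi
  ---
  apiVersion: triggers.tekton.dev/v1beta1
  kind: EventListener
  metadata:
    name: git-webhook-listener                  # the Git webhook points at this listener
  spec:
    serviceAccountName: pipeline
    triggers:
      - name: on-push
        bindings:
          - ref: git-push-binding
        template:
          ref: build-and-deploy-template

The EventListener exposes a service (and, on OpenShift, typically a route) that the Git webhook calls, so each push results in a new PipelineRun.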
What is most valuable?
As an orchestrator, Tekton provides seamless integration for our pipelines and robust support for executing tasks within them, which lets us set up and run pipelines quickly.
Additionally, Tekton's underlying architecture with OpenShift enables us to create, implement, and run end-to-end pipelines. We can integrate various automation tools like Fortify or SonarQube for testing, code scanning, regression testing, and more. All these tasks can be executed within the pipeline using Tekton.
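As a rough illustration of what such an end-to-end pipeline looks like, here is a minimal sketch. The git-clone and buildah tasks are the commonly shipped OpenShift Pipelines cluster tasks; the sonarqube-scan and deploy-to-env task names are placeholders for whatever scan and deploy Tasks are installed in a given cluster, and the parameter names are assumptions for illustration.

  apiVersion: tekton.dev/v1beta1
  kind: Pipeline
  metadata:
    name: build-and-deploy
  spec:
    params:
      - name: git-url
        type: string
      - name: git-revision
        type: string
        default: main
      - name: image
        type: string
        default: image-registry.openshift-image-registry.svc:5000/demo/app:latest  # illustrative image reference
    workspaces:
      - name: shared-workspace                  # shared by the tasks below
    tasks:
      - name: fetch-source
        taskRef:
          name: git-clone                       # OpenShift Pipelines / Tekton Hub task
          kind: ClusterTask
        params:
          - name: url
            value: $(params.git-url)
          - name: revision
            value: $(params.git-revision)
        workspaces:
          - name: output
            workspace: shared-workspace
      - name: code-scan
        runAfter: ["fetch-source"]
        taskRef:
          name: sonarqube-scan                  # placeholder; a Fortify task would slot in the same way
        workspaces:
          - name: source
            workspace: shared-workspace
      - name: build-image
        runAfter: ["code-scan"]
        taskRef:
          name: buildah                         # OpenShift Pipelines cluster task
          kind: ClusterTask
        params:
          - name: IMAGE
            value: $(params.image)
        workspaces:
          - name: source
            workspace: shared-workspace
      - name: deploy
        runAfter: ["build-image"]
        taskRef:
          name: deploy-to-env                   # placeholder for an environment-specific deploy task
        params:
          - name: image
            value: $(params.image)

Each promotion environment can then be driven by its own trigger or run parameters while the pipeline definition itself stays the same.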
What needs improvement?
There might be occasional issues with storage or cluster-level logging, which can affect production. But as a component, Tekton performs flawlessly.
As an orchestrator, Tekton executes most tasks effectively. However, there are instances where the YAML files that Tekton reads could benefit from more flexibility. In OpenShift, everything revolves around YAML: the different components are specified in YAML files, and when we put them together in an OpenShift pipeline, it generally works fine. Occasionally, though, we run into difficulties when editing those YAML files.
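One general way to reduce in-place YAML edits (a sketch based on my own assumptions, not a description of our exact setup) is to expose the values that change most often as parameters, so they can be overridden per run instead of edited in the pipeline definition. The repository URL, image reference, and storage size below are illustrative only.

  apiVersion: tekton.dev/v1beta1
  kind: PipelineRun
  metadata:
    generateName: build-and-deploy-uat-
  spec:
    pipelineRef:
      name: build-and-deploy
    params:
      - name: git-url
        value: https://gitlab.example.com/demo/app.git   # illustrative repository URL
      - name: git-revision
        value: uat                                       # promote the UAT branch without touching the pipeline YAML
      - name: image
        value: image-registry.openshift-image-registry.svc:5000/demo/app:uat
    workspaces:
      - name: shared-workspace
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi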
For how long have I used the solution?
I have been working with Tekton since we implemented it in 2020, so it's been almost two years.
What do I think about the stability of the solution?
We haven't encountered any stability issues with it. It has been reliable and available.
What do I think about the scalability of the solution?
How are customer service and support?
Red Hat's support has been excellent. We have a close partnership with Red Hat, as our DevOps strategy relies heavily on OpenShift as a core component.
Since our entire architecture is on-premise, we have made significant investments in OpenShift. Setting up the OpenShift cluster and configuring different components, including Tekton, has been smooth and hassle-free for us, thanks to Red Hat's support.
Which solution did I use previously and why did I switch?
It's not solely about Tekton itself. We chose OpenShift as a platform as a service because we opted for an on-premises implementation instead of the cloud, and Tekton comes as part of the OpenShift implementation.
How was the initial setup?
The initial setup is actually easy. Tekton is just one of the underlying components in OpenShift pipelines. It's a technology and engine with a straightforward architecture, so the setup process is quite simple.
We have a command-line setup where we use the OpenShift client to talk to the cluster, and Tekton executes the tasks on that specific cluster. It's an efficient and streamlined process.
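For context, the day-to-day command-line flow looks roughly like this; the cluster URL, project name, file names, and PVC name are illustrative, and tkn is the optional Tekton CLI that complements the oc client.

  # Log in to the cluster with the OpenShift client (URL is illustrative)
  oc login https://api.ocp.example.com:6443

  # Switch to the project that holds the pipeline resources (name is illustrative)
  oc project ci-cd

  # Apply the pipeline and trigger definitions
  oc apply -f pipeline.yaml -f triggers.yaml

  # Optionally start a run by hand and follow its logs with the Tekton CLI
  tkn pipeline start build-and-deploy \
    --param git-url=https://gitlab.example.com/demo/app.git \
    --param git-revision=dev \
    --workspace name=shared-workspace,claimName=shared-pvc \
    --showlog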
What about the implementation team?
The entire OpenShift platform is supported by just two DevOps engineers.
But we might need to expand the team in the future. Two resources are not sufficient considering the workload and stress we handle.
What's my experience with pricing, setup cost, and licensing?
The pricing is based on OpenShift's vCPU licenses. We pay according to the number of virtual CPUs, which can be costly.
However, it's important to note that Tekton is just one of the underlying components in OpenShift. Therefore, the pricing and licensing considerations are more related to OpenShift as a whole rather than Tekton alone.
Which other solutions did I evaluate?
We evaluated multiple vendors. Red Hat's DevOps architecture includes Tekton as an underlying component, and other vendors offer similar orchestration components in their own architectures. They implement their own approaches and name their components differently, but those tools serve the same purpose as Tekton.
What other advice do I have?
I would recommend Tekton as an orchestrator because it works well within the OpenShift environment. While there may be similar orchestrator components offered by other vendors in different DevOps architectures, Tekton's integration with other OpenShift components makes it a strong choice.
I would rate Tekton a seven out of ten. The only drawback I've experienced is the difficulty of modifying YAML files on the fly; it doesn't work well in that respect. Apart from that, Tekton performs well.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.