
Apache Flink vs Confluent comparison

 

Comparison Buyer's Guide

Executive Summary (updated on Dec 17, 2024)


Categories and Ranking

Apache Flink
Ranking in Streaming Analytics: 5th
Average Rating: 7.6
Reviews Sentiment: 6.9
Number of Reviews: 16
Ranking in other categories: none

Confluent
Ranking in Streaming Analytics: 4th
Average Rating: 8.2
Reviews Sentiment: 6.7
Number of Reviews: 23
Ranking in other categories: none
 

Mindshare comparison

As of March 2025, Apache Flink holds 12.6% mindshare in the Streaming Analytics category, up from 9.4% a year earlier. Confluent holds 8.6%, down from 11.5% over the same period. Mindshare is calculated from PeerSpot user engagement data.
 

Featured Reviews

Ilya Afanasyev - PeerSpot reviewer
A great solution with an intricate system that allows for batch data processing
We value this solution's intricate design because state management is built into the engine itself. The system allows us to process batch data, stream in real time, and build pipelines. Additionally, we do not need to reprocess data from the beginning after a pause; we can continue from the point where we stopped. It helps us save time, as 95% of our pipelines will now be on Amazon, and saving time saves us money.
Yantao Zhao - PeerSpot reviewer
Great tool for sharing knowledge and internal communication that allows for real-time collaboration on pages
Confluence is easy to use and modify. However, sometimes there are too many pages. We have to reorganize the folder or parent account. Since everyone can create a page, the same knowledge might be created in multiple places by different people. This leads to redundancy and makes it difficult to find information. It's not centralized. So it could be more user-friendly and centralized. A way to reduce redundancy would be helpful. It's very easy to use, so everyone can create knowledge. But it would be good to synchronize and organize that information a bit better. Another improvement would be in Confluence search. You can search for keywords, but it's not like AI, not even ChatGPT or OpenAI. It would be nice to get more relevant or organized answers. If you're outside the company, you just get some titles containing the keyword you input. But if Confluence were like a database, you could input something and get a well-organized search offering from multiple pages.
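The checkpoint-and-resume behaviour described in the Apache Flink review above comes from Flink's checkpointing mechanism. Below is a minimal sketch of enabling it with the DataStream API in Java; the sample elements and the uppercase transformation are purely illustrative and are not taken from the review.

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointedPipeline {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Snapshot all operator state every 10 seconds. After a failure or a
            // planned restart from a savepoint, the job resumes from the last
            // snapshot instead of reprocessing the stream from the beginning.
            env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

            env.fromElements("a", "b", "c")        // stand-in for a real source such as Kafka
               .map(String::toUpperCase)           // any transformation; its state is checkpointed with it
               .returns(Types.STRING)
               .print();

            env.execute("checkpointed-pipeline");
        }
    }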

Quotes from Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Pros

"The event processing function is the most useful or the most used function. The filter function and the mapping function are also very useful because we have a lot of data to transform. For example, we store a lot of information about a person, and when we want to retrieve this person's details, we need all the details. In the map function, we can actually map all persons based on their age group. That's why the mapping function is very useful. We can really get a lot of events, and then we keep on doing what we need to do."
"Apache Flink offers a range of powerful configurations and experiences for development teams. Its strength lies in its development experience and capabilities."
"It provides us the flexibility to deploy it on any cluster without being constrained by cloud-based limitations."
"The top feature of Apache Flink is its low latency for fast, real-time data. Another great feature is the real-time indicators and alerts which make a big difference when it comes to data processing and analysis."
"The documentation is very good."
"Allows us to process batch data, stream to real-time and build pipelines."
"Apache Flink allows you to reduce latency and process data in real-time, making it ideal for such scenarios."
"It is user-friendly and the reporting is good."
"The documentation process is fast with the tool."
"The most valuable feature that we are using is the data replication between the data centers allowing us to configure a disaster recovery or software. However, is it's not mandatory to use and because most of the features that we use are from Apache Kafka, such as end-to-end encryption. Internally, we can develop our own kind of product or service from Apache Kafka."
"Confluent facilitates the messaging tasks with Kafka, streamlining our processes effectively."
"Their tech support is amazing; they are very good, both on and off-site."
"The most valuable is its capability to enhance the documentation process, particularly when creating software documentation."
"I would rate the scalability of the solution at eight out of ten. We have 20 people who use Confluent in our organization now, and we hope to increase usage in the future."
"Confluence's greatest asset is its user-friendly interface, coupled with its remarkable ability to seamlessly integrate with a vast range of other solutions."
"The most valuable feature of Confluent is the wide range of features provided. They're leading the market in this category."
 

Cons

"In a future release, they could improve on making the error descriptions more clear."
"The solution could be more user-friendly."
"One way to improve Flink would be to enhance integration between different ecosystems. For example, there could be more integration with other big data vendors and platforms similar in scope to how Apache Flink works with Cloudera. Apache Flink is a part of the same ecosystem as Cloudera, and for batch processing it's actually very useful but for real-time processing there could be more development with regards to the big data capabilities amongst the various ecosystems out there."
"Apache Flink should improve its data capability and data migration."
"The TimeWindow feature is a bit tricky. The timing of the content and the windowing is a bit changed in 1.11. They have introduced watermarks. A watermark is basically associating every data with a timestamp. The timestamp could be anything, and we can provide the timestamp. So, whenever I receive a tweet, I can actually assign a timestamp, like what time did I get that tweet. The watermark helps us to uniquely identify the data. Watermarks are tricky if you use multiple events in the pipeline. For example, you have three resources from different locations, and you want to combine all those inputs and also perform some kind of logic. When you have more than one input screen and you want to collect all the information together, you have to apply TimeWindow all. That means that all the events from the upstream or from the up sources should be in that TimeWindow, and they were coming back. Internally, it is a batch of events that may be getting collected every five minutes or whatever timing is given. Sometimes, the use case for TimeWindow is a bit tricky. It depends on the application as well as on how people have given this TimeWindow. This kind of documentation is not updated. Even the test case documentation is a bit wrong. It doesn't work. Flink has updated the version of Apache Flink, but they have not updated the testing documentation. Therefore, I have to manually understand it. We have also been exploring failure handling. I was looking into changelogs for which they have posted the future plans and what are they going to deliver. We have two concerns regarding this, which have been noted down. I hope in the future that they will provide this functionality. Integration of Apache Flink with other metric services or failure handling data tools needs some kind of update or its in-depth knowledge is required in the documentation. We have a use case where we want to actually analyze or get analytics about how much data we process and how many failures we have. For that, we need to use Tomcat, which is an analytics tool for implementing counters. We can manage reports in the analyzer. This kind of integration is pretty much straightforward. They say that people must be well familiar with all the things before using this type of integration. They have given this complete file, which you can update, but it took some time. There is a learning curve with it, which consumed a lot of time. It is evolving to a newer version, but the documentation is not demonstrating that update. The documentation is not well incorporated. Hopefully, these things will get resolved now that they are implementing it. Failure is another area where it is a bit rigid or not that flexible. We never use this for scaling because complexity is very high in case of a failure. Processing and providing the scaled data back to Apache Flink is a bit challenging. They have this concept of offsetting, which could be simplified."
"PyFlink is not as fully featured as Python itself, so there are some limitations to what you can do with it."
"The state maintains checkpoints and they use RocksDB or S3. They are good but sometimes the performance is affected when you use RocksDB for checkpointing."
"In terms of improvement, there should be better reporting. You can integrate with reporting solutions but Flink doesn't offer it themselves."
"It could be improved by including a feature that automatically creates a new topic and puts failed messages."
"There is a limitation when it comes to seamlessly importing Microsoft documents into Confluent pages, which can be inconvenient for users who frequently work with Microsoft Office tools and need to transition their content to Confluent."
"I am not very impressed by Confluent. We continuously face issues, such as Kafka being down and slow responses from the support team."
"The Schema Registry service could be improved. I would like a bigger knowledge base of other use cases and more technical forums. It would be good to have more flexible monitoring features added to the next release as well."
"It would help if the knowledge based documents in the support portal could be available for public use as well."
"In Confluent, there could be a few more VPN options."
"Areas for improvement include implementing multi-storage support to differentiate between database stores based on data age and optimizing storage costs."
"It requires some application specific connectors which are lacking. This needs to be added."
 

Pricing and Cost Advice

"It's an open-source solution."
"Apache Flink is open source so we pay no licensing for the use of the software."
"This is an open-source platform that can be used free of charge."
"It's an open source."
"The solution is open-source, which is free."
"It comes with a high cost."
"Confluent is an expensive solution."
"Confluence's pricing is quite reasonable, with a cost of around $10 per user that decreases as the number of users increases. Additionally, it's worth noting that for teams of up to 10 users, the solution is completely free."
"You have to pay additional for one or two features."
"On a scale from one to ten, where one is low pricing and ten is high pricing, I would rate Confluent's pricing at five. I have not encountered any additional costs."
"Confluent has a yearly license, which is a bit high because it's on a per-user basis."
"Confluent is highly priced."
"Regarding pricing, I think Confluent is a premium product, but it's hard for me to say definitively if it's overly expensive. We're still trying to understand if the features and reduced maintenance complexity justify the cost, especially as we scale our platform use."
 

Top Industries

By visitors reading reviews
Apache Flink: Financial Services Firm 23%, Computer Software Company 16%, Manufacturing Company 6%, Retailer 4%
Confluent: Financial Services Firm 19%, Computer Software Company 16%, Manufacturing Company 7%, Insurance Company 5%
 

Company Size

By reviewers: Large Enterprise, Midsize Enterprise, Small Business
 

Questions from the Community

What do you like most about Apache Flink?
The product helps us to create both simple and complex data processing tasks. Over time, it has facilitated integration and navigation across multiple data sources tailored to each client's needs. ...
What is your experience regarding pricing and costs for Apache Flink?
The solution is expensive. I rate the product’s pricing a nine out of ten, where one is cheap and ten is expensive.
What needs improvement with Apache Flink?
There are more libraries that are missing and also maybe more capabilities for machine learning. It could have a friendly user interface for pipeline configuration, deployment, and monitoring.
What do you like most about Confluent?
I find Confluent's Kafka Connectors and Kafka Streams invaluable for my use cases because they simplify real-time data processing and ETL tasks by providing reliable, pre-packaged connectors and to...
What is your experience regarding pricing and costs for Confluent?
They charge a lot for scaling, which makes it expensive.
What needs improvement with Confluent?
I am not very impressed by Confluent. We continuously face issues, such as Kafka being down and slow responses from the support team. The lack of easy access to the Confluent support team is also a...
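One of the Confluent answers above highlights Kafka Connectors and Kafka Streams for real-time processing and ETL. The sketch below shows the shape of a minimal Kafka Streams topology in Java; the application id, broker address, topic names, and the uppercase transformation are assumed placeholders rather than details from the answer.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    public class StreamsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "events-enricher");   // hypothetical application id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

            StreamsBuilder builder = new StreamsBuilder();
            // Read from an input topic, transform each value, and write the result
            // to an output topic; both topic names are placeholders.
            builder.stream("events-raw", Consumed.with(Serdes.String(), Serdes.String()))
                   .mapValues(value -> value.toUpperCase())
                   .to("events-clean", Produced.with(Serdes.String(), Serdes.String()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();                                                     // run until the JVM shuts down
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }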
 

Comparisons

 

Also Known As

Apache Flink: Flink
Confluent: No data available
 

Overview

 

Sample Customers

Apache Flink: LogRhythm, Inc., Inter-American Development Bank, Scientific Technologies Corporation, LotLinx, Inc., Benevity, Inc.
Confluent: ING, Priceline.com, Nordea, Target, RBC, Tivo, Capital One, Chartboost