Some areas for improvement in Apache Kafka on Confluent Cloud include issues faced during migration with Kubernetes pods. This aspect could be smoother to better support migration processes.
There's one common use case that, for some reason, isn't covered in Kafka. When a message comes in and another message with the same key arrives, the first version should be deleted automatically. We want to keep only one instance of a message at any given time: the latest one. However, Kafka doesn't have this functionality built-in; it keeps all the data, and we have to manually delete the older versions. So, I would like to have only one instance of each message, based on its key: if the key is the same, only the latest message should be present instead of every version of that message.
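For context, Kafka's log compaction (cleanup.policy=compact) eventually retains only the latest record per key, though compaction runs in the background rather than on arrival, which may be why the reviewer found it insufficient. The keep-latest-per-key behavior described above can be sketched consumer-side as a simple dictionary overwrite; this is an illustrative sketch in plain Python, not Confluent Cloud or Kafka client code, and the message values are made up:

```python
# Emulate "keep only the latest message per key" on the consumer side:
# a dict keyed by message key, where later arrivals overwrite earlier ones.
latest = {}

def on_message(key, value):
    """Retain only the newest value seen for each key."""
    latest[key] = value  # same key -> older version is discarded

# Hypothetical stream of (key, value) messages:
for key, value in [("order-1", "created"),
                   ("order-2", "created"),
                   ("order-1", "shipped")]:
    on_message(key, value)

print(latest)  # {'order-1': 'shipped', 'order-2': 'created'}
```

This mirrors what a compacted topic converges to after cleanup: one record per key, holding the most recent value.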
The administration portal could be more extensive. Additionally, managing the states of certain events could be made easier, perhaps with automatic rollback instead of having to program it manually.
Senior Architect at an outsourcing company with 501-1,000 employees
Real User
Top 20
Nov 29, 2023
Regarding real-time data usage, there were challenges with CDC (Change Data Capture) integrations. Specifically, with PyTRAN, we encountered difficulties. We recommended using our on-premises Kaspersky as an alternative to PyTRAN for that specific use case due to issues with CDC store configuration and log reading challenges with the iton components.
For the original Kafka, there is room for improvement in terms of latency spikes and resource consumption; it consumes a lot of memory.
Apache Kafka on Confluent Cloud enables organizations to perform real-time data streaming and processing, integrating user data, and managing large transaction volumes efficiently.
Organizations leverage Apache Kafka on Confluent Cloud for several applications such as capturing change data, handling IoT workloads, and fostering data migration and microservices. It is instrumental in publishing and managing data across multiple platforms, ensuring both high throughput and seamless log...
The solution is expensive.
I saw an interesting improvement related to the analytics environment.
There could be an in-built feature for data analysis.