We use Apache Kafka on Confluent Cloud for streaming large volumes of data in real time. It's employed in scenarios such as handling events from various countries and streaming them efficiently to our clients. We also use it for data analytics, and on the client side for topic creation, consumer consumption, and ACL provisioning.
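As a rough illustration of the provisioning this reviewer describes, here is a minimal sketch using the confluent-kafka Python AdminClient. The cluster address, topic name, and principal are placeholder assumptions, not the reviewer's actual setup.

```python
# Hypothetical sketch: create a topic and grant a client read access via ACLs.
from confluent_kafka.admin import (
    AdminClient, NewTopic, AclBinding, AclOperation,
    AclPermissionType, ResourceType, ResourcePatternType,
)

admin = AdminClient({"bootstrap.servers": "pkc-xxxxx.confluent.cloud:9092"})

# Create a topic sized for the expected event volume (assumed name/sizing).
topic = NewTopic("country-events", num_partitions=6, replication_factor=3)
for future in admin.create_topics([topic]).values():
    future.result()  # raises if creation failed

# Allow an assumed client principal to read from that topic.
acl = AclBinding(
    ResourceType.TOPIC, "country-events", ResourcePatternType.LITERAL,
    "User:client-app", "*", AclOperation.READ, AclPermissionType.ALLOW,
)
for future in admin.create_acls([acl]).values():
    future.result()
```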
Whenever you need to handle a huge load of real-time data processing, Kafka is useful. We currently use it for an output management system for insurance, where the system receives data in fixed volumes and has to process it in several steps. We manage these steps with Kafka because the load can be quite large, with millions of XMLs coming into the system that need to be processed in near real time.
Integration Solution Architect at a consultancy with 11-50 employees
Real User
Top 5
May 24, 2024
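One common way to wire up the kind of multi-step pipeline described in the review above is a consume-transform-produce loop per stage, sketched below with the confluent-kafka client. The topic names and the trivial transform are assumptions standing in for the real XML processing.

```python
# Minimal sketch of one stage in a multi-step pipeline: read from an input
# topic, transform, and hand off to the next stage's topic.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.confluent.cloud:9092",
    "group.id": "xml-step-1",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "pkc-xxxxx.confluent.cloud:9092"})
consumer.subscribe(["xml-input"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    transformed = msg.value().upper()  # placeholder for real XML processing
    producer.produce("xml-step-2", value=transformed)
    producer.poll(0)  # serve delivery callbacks
```

Because each stage is just another consumer group on a topic, stages can be scaled independently when the load spikes.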
In my company, we are not using the tool for analytics; it is more for CDC, that is, change data capture, processes. It is used to extract data from a database and make it available in other parts of our systems, or to produce events that inform us of data updates.
Data Architect at a government with 10,001+ employees
Real User
Jan 26, 2024
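CDC pipelines like the one above are often built by registering a source connector with the Kafka Connect REST API; a hedged sketch follows. The Debezium Postgres connector and every connection value here are illustrative assumptions, not the reviewer's configuration.

```python
# Hypothetical sketch: register a CDC source connector via Kafka Connect's
# REST API so database changes are published as Kafka events.
import requests

connector = {
    "name": "orders-cdc",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "db.example.internal",
        "database.port": "5432",
        "database.user": "cdc_user",
        "database.password": "********",
        "database.dbname": "orders",
        "topic.prefix": "cdc",  # change events land on cdc.<schema>.<table>
        "table.include.list": "public.orders",
    },
}
resp = requests.post("http://connect.example.internal:8083/connectors",
                     json=connector, timeout=10)
resp.raise_for_status()
```

Downstream systems then subscribe to the change-event topics instead of querying the database directly.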
We use Apache Kafka with Confluent Cloud for specific real-time transaction use cases, both on-premises and in the cloud. We have been using Confluent Cloud for about five years. We initially used it for data replication, then expanded to microservices integration and Kubernetes, focusing on improving data quality and enabling real-time location tracking. We configure it for data transactions across various topics and partitions, depending on the specific use case and required throughput. From an IT perspective, I've used this product across all domains: system development, operations, data management, and system quality.
Senior Architect at an outsourcing company with 501-1,000 employees
Real User
Top 20
Nov 29, 2023
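The partitioning-for-throughput tradeoff this reviewer mentions hinges on message keys: records with the same key hash to the same partition, preserving per-key ordering while the partition count scales parallelism. A minimal sketch, with an assumed topic and key scheme:

```python
# Sketch of keyed production: the assumed account id keeps all of one
# account's transactions in order on a single partition.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "pkc-xxxxx.confluent.cloud:9092"})

def on_delivery(err, msg):
    # Report failures; msg.partition() shows where the key was routed.
    if err is not None:
        print(f"delivery failed: {err}")

producer.produce("transactions", key="account-42", value='{"amount": 100}',
                 on_delivery=on_delivery)
producer.flush()
```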
Our use case is real-time data integration; Kafka was our preferred tool for this purpose. Additionally, we employed Azure Event Hubs, another service, for real-time data in a couple of larger programs focused on integrating and visualizing real-time data.
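Worth noting for this mixed setup: Azure Event Hubs exposes a Kafka-compatible endpoint, so a standard Kafka client can talk to it directly. A sketch under that assumption, with a placeholder namespace and connection string:

```python
# Sketch: produce to Azure Event Hubs over its Kafka-compatible endpoint
# (port 9093, SASL/PLAIN with the connection string as the password).
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "mynamespace.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "$ConnectionString",
    "sasl.password": "Endpoint=sb://mynamespace.servicebus.windows.net/;...",
})
producer.produce("telemetry", value=b"reading")
producer.flush()
```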
I use it for real-time processing workloads. In some instances, it's IoT data that we need to put into a data lake. However, we are using Redpanda, which still speaks the Kafka protocol. Lots of real-time processing and high-velocity data are the use cases.
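Because Redpanda speaks the Kafka protocol, the same client code works unchanged; only the bootstrap address differs. Below is a minimal sketch of the lake-ingest side, with newline-delimited files standing in for a real data-lake writer; the addresses, topic, and path are assumptions.

```python
# Sketch: consume IoT readings (from Redpanda, via the Kafka protocol)
# and batch them into files as a stand-in for a data-lake sink.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "redpanda.example.internal:9092",
    "group.id": "lake-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["iot-readings"])

batch = []
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    batch.append(msg.value())
    if len(batch) >= 1000:  # flush in chunks to keep lake files a sane size
        with open("/lake/iot-readings.ndjson", "ab") as f:
            f.write(b"\n".join(batch) + b"\n")
        batch.clear()
```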
We had a legacy website collecting user data as users logged into the portal. We wanted to capture that information in Snowflake and make it available in a mobile app. We used Apache Kafka on Confluent Cloud for real-time data streaming.
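The capture side of a setup like this can be as simple as the portal emitting a login event to a topic, with a sink (for example, the Snowflake Kafka connector) loading the topic into Snowflake downstream. A sketch with an assumed event shape and topic name:

```python
# Sketch: publish a portal login event; a Snowflake sink connector is
# assumed to load the topic into Snowflake downstream.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "pkc-xxxxx.confluent.cloud:9092"})

event = {"user_id": "u-123", "action": "login", "ts": "2024-01-01T00:00:00Z"}
producer.produce("portal-user-events", key=event["user_id"],
                 value=json.dumps(event))
producer.flush()
```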
Apache Kafka on Confluent Cloud enables organizations to perform real-time data streaming and processing, integrate user data, and manage large transaction volumes efficiently.
Organizations leverage Apache Kafka on Confluent Cloud for several applications such as capturing change data, handling IoT workloads, and fostering data migration and microservices. It is instrumental in publishing and managing data across multiple platforms, ensuring both high throughput and seamless log...
It's basically four broad use cases, where we publish data on Kafka topics and stream it across microservices.