Big Data Teaching Assistant at Center for Cloud Computing and Big Data, PES University
Real User
Top 5
Oct 25, 2024
Kafka is used as a streaming platform where multiple producers and consumers exchange high volumes of messages asynchronously without affecting each other's performance. It serves as an industry-standard platform for such operations. Kafka is also integrated into data system architectures for applications like monitoring events on platforms such as LinkedIn to enable further analytical insights.
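As a minimal sketch of the producer/consumer decoupling described above, assuming the kafka-python client, a local broker, and an illustrative topic name:

```python
# Minimal sketch (pip install kafka-python); the broker address and the
# "events" topic are placeholders for illustration.
from kafka import KafkaProducer, KafkaConsumer

# Producer: sends are buffered by the broker, so producers never
# block on how fast (or slow) consumers read.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b"user-signup: id=42")
producer.flush()  # wait until buffered records are delivered

# Consumer: reads independently at its own pace, tracked by offsets.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="analytics",          # consumer group for offset tracking
    auto_offset_reset="earliest",  # start from the beginning if no offset
)
for record in consumer:
    print(record.value)
```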
We use Kafka for a staged, event-driven process. Our platform is an ID platform: after registration data is received from various registration locations, it has to be stored. The process includes stages like quality checking, consistency, format, and biometric data checking. We have been using both Kafka and ActiveMQ for almost three years now.
Lead Data Scientist at a transportation company with 51-200 employees
Real User
Top 5
May 2, 2024
I was planning to use the tool for real-time data processing and real-time analytics workflows. The real-time IoT data currently comes through a single Kafka topic, which presents a few challenges. I want to use multiple Kafka topics, where one can be fed directly into the data pipeline, another into the real-time alert system, and the next into machine learning.
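A sketch of that fan-out, assuming the kafka-python client; the topic names, broker address, and payload are illustrative:

```python
# Hypothetical fan-out: publish each IoT reading to three topics so the
# data pipeline, alerting, and ML consumers can each read independently.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {"sensor": "temp-07", "value": 84.2}
for topic in ("iot-pipeline", "iot-alerts", "iot-ml-features"):
    producer.send(topic, reading)
producer.flush()
```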
We use Apache Kafka to process messages, specifically payment-type messages, and incorporate the data from those messages into our analytics and reporting. It also utilizes data from additional sources in real time for our analytics and reporting purposes.
We use Kafka for the Elastic Stack and for Kafka SCRAM login. I have many users of Apache Kafka; it's a subject of study in enterprises. However, we have not decided whether to generalize Apache Kafka to every application and every IT system.
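For reference, a SCRAM login of the kind mentioned might look like this on the client side, assuming kafka-python; the mechanism, credentials, and addresses are placeholders that must match the broker's configuration:

```python
# Sketch of a SCRAM-authenticated consumer; all credentials and
# addresses below are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "app-logs",
    bootstrap_servers="broker.example.com:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-256",  # or SCRAM-SHA-512, per broker setup
    sasl_plain_username="elastic-ingest",
    sasl_plain_password="change-me",
)
```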
Group Manager at a media company with 201-500 employees
Real User
Top 20
Apr 25, 2023
We have different use cases for different clients. For example, a banking sector client wants to use Apache Kafka for fraud detection and messaging back to the client. For instance, if there's a fraudulent swipe of a credit or debit card, we stream near real-time data (usually three to five minutes old) into the platform. Then, we use a look-up model to relay the data through the messaging queue to the customers. This is just one use case. We also have data science engineers who use Kafka to consume the data (usually the last five to seven minutes of transactions) to detect any fraudulent transactions for the bank's internal consumption; this is not relayed back to the customer. This client is based in the Middle East, and they have some interesting use cases.
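A rough sketch of that relay pattern, assuming kafka-python; the topic names and the stand-in look-up rule are invented for illustration and are not the client's actual model:

```python
# Hypothetical fraud relay: consume near-real-time card swipes, apply a
# look-up, and forward flagged transactions to a messaging topic.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "card-swipes",
    bootstrap_servers="localhost:9092",
    group_id="fraud-relay",
    value_deserializer=lambda v: json.loads(v),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def looks_fraudulent(txn):
    # Placeholder for the look-up model described in the review.
    return txn.get("amount", 0) > 10_000

for record in consumer:
    txn = record.value
    if looks_fraudulent(txn):
        producer.send("fraud-alerts", txn)  # relayed to customer messaging
```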
Apache Kafka is used as more than only a messaging bus; it also serves as a database to store information. It functions as a streamer, similar to ETL, to manipulate and transform events before migrating them to other systems for use. It can also act as a cache. We use Apache Kafka as a broker, streamer, and source of truth for multiple systems because of its ability to retain events for at least 10 days. It provides both synchronous and asynchronous communication, making it a complex system that is easier to understand through diagrams or sketches. We use reactive frameworks.
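The multi-day retention that makes this source-of-truth role possible is a per-topic setting. A sketch of creating a topic that keeps events for 10 days, assuming kafka-python's admin client; the topic name and sizing are illustrative:

```python
# retention.ms is in milliseconds: 10 days = 10 * 24 * 60 * 60 * 1000.
# Topic name, partition count, and replication factor are placeholders.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(
        name="orders",
        num_partitions=6,
        replication_factor=3,
        topic_configs={"retention.ms": str(10 * 24 * 60 * 60 * 1000)},
    )
])
```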
We are using Apache Kafka to extract data from a Portuguese data source, utilizing an open-source project for data capture. The connector for this project is linked to both the Kafka and Confluent platforms. We then transform the extracted data and store it in Elasticsearch.
We have this product, which is meant for integration, so our use cases are essentially integrating with other systems using any messaging stack. We use these products in Dev and QA, and we have connectors for various messaging applications. Apache Kafka just happens to be one of the messaging applications that we connect with. We also have our own messaging products, Enterprise Messaging Server and Rendezvous, and we connect to those as well. Our product is essentially used for integration, so we connect to almost all messaging applications.
Apache Kafka can be deployed on the cloud and on-premises. We use Apache Kafka internally to build a service on a cluster. Additionally, we use it as an intermediate persistence layer for events. Many teams leverage it as a message queue and for their microservice connections.
We use an open-source version of this solution, and we have two deployments of it. One is on-prem, and the other is in the cloud. We use the on-prem version to aggregate our logs. We use the cloud version to manage queues for financial services.
CEO & Founder at a tech consulting company with 11-50 employees
Consultant
Oct 6, 2022
We used Kafka as a central message bus, transporting data from SNMP through to a database. Some of the processing in between was handled by other components.
We have a scalable architecture where we need multiple workers to handle some processing. To make this possible, the backend catches the request and puts it on a common medium, the Apache Kafka queue. The workers can then share and process it.
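A sketch of that worker pattern, assuming kafka-python: every worker runs the same consumer code with a shared group_id, and Kafka assigns each partition to exactly one worker, so requests are divided automatically. Names are illustrative.

```python
# Run this same script in N processes. Because all workers share the
# "workers" group, Kafka splits the topic's partitions among them and
# each request is processed by exactly one worker.
from kafka import KafkaConsumer

def handle(payload):
    # Placeholder for the actual processing logic.
    print("processing", payload)

consumer = KafkaConsumer(
    "requests",
    bootstrap_servers="localhost:9092",
    group_id="workers",
    enable_auto_commit=True,  # offsets committed as work progresses
)
for record in consumer:
    handle(record.value)
```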
Our primary use case of this solution is for data integration and for real-time data consumption. I'm a senior staff engineer for data and infrastructure and we are customers of Apache.
We use Kafka daily for our messaging queue to reduce costs because we have a lot of consumers, producers, and repeat messages. Our company has only one system built on Apache Kafka because it's based on microservices, so all of the applications can communicate using it.
We use Apache Kafka to ingest a lot of data in real time, which Apache Spark processes; the result is used for technical decisions in real time in the IT, infrastructure, and IoT environments, such as a manufacturing plant. This is an open-source framework. We also sell professional services on this solution and have created a business application for customers called Sherlogic. We have two kinds of customers: end users of the Sherlogic solution, who may not even know that Spark and Kafka are inside it, and customers who use professional services from Xautomata to create tailor-made applications for analytics and process automation. We use Apache Kafka for our digital cloud.
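A sketch of that Kafka-to-Spark hookup with PySpark Structured Streaming; the broker, topic, and output sink are placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
# Minimal Kafka -> Spark ingestion sketch (requires the
# spark-sql-kafka-0-10 connector; names below are placeholders).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "plant-telemetry")
    .load()
)

# Kafka delivers values as bytes; real logic would parse and act on them.
query = (
    events.selectExpr("CAST(value AS STRING)")
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```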
This is a system for email and other small devices. It has been relaying transactions continuously over the two years it has been in production.
We use Apache Kafka for our messaging. We publish a message and the subscriber listens for it. We use it to save events generated by integration with external systems: external events are first published to our Kafka queue, then to a topic, and then we save them to our own data storage system.
We deployed this solution in a project for one of our customers to synchronize the different applications; to transport information from one application to another. I'm a program manager and we are customers of Apache.
It's a combination of an on-premises and cloud deployment. We use AWS, and we have an offshore on-premises deployment for OpenShift, Red Hat, and Kafka. Red Hat provides managed services and everything. We also have a specific deployment where we run Kafka on our own basic VMs and consume it there. We publish or stream all our business events, as well as some technical events, out to Kafka, and multiple consumers build different solutions on top of them: reporting, analytics, or even data persistence. Later, we used it to build a data lake solution. They all consume the data or events we stream into Kafka.
We primarily use the solution for upstreaming messages with different payloads for our applications, ranging from IoT and food delivery to patient monitoring. For example, one solution provides real-time location finding, whereby a customer of the food delivery solution wants to know where his or her order is on a map. The delivery person's mobile phone publishes its location to Kafka, Kafka processes it, and then it is published to subscribers, in this case the customer. It allows them to see information in real time, almost instantly.
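A sketch of such a location update, assuming kafka-python; keying by order ID keeps all updates for one order on one partition, so the customer's subscriber sees them in order. Names and coordinates are illustrative.

```python
# Hypothetical courier location update, keyed by order ID so each
# order's positions arrive in publish order (same key -> same partition).
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(
    "courier-locations",
    key="order-1234",
    value={"lat": 12.9716, "lon": 77.5946, "ts": time.time()},
)
producer.flush()
```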
Chief Technology Officer at a tech services company with 1-10 employees
Real User
May 12, 2021
Our primary use case is based on writing microservices with an event-driven architecture, using Kafka as an event bus. We work on distribution, enterprise-grade, and we design, develop, and deploy in a Confluent environment. We are customers of Kafka and I'm the chief technology officer.
Senior Technology Architect at a tech services company with 10,001+ employees
Real User
Oct 22, 2020
We use Apache Kafka for financial purposes. Every time one of our subscribed customers is due for an insurance payment, Apache Kafka sends an automated notification to the customer to let them know that their bill is due.
I am an enterprise architect involved in Big Data and integration projects using Apache Kafka. We use it for integrating our different management systems.
I am a solution architect and this is one of the products that I implement for my customers. Kafka works well when subscribers want to stream data for specific topics.
Our company provides services and we use Apache Kafka as part of the solution that we provide to clients. One of the use cases is to collect all of the data from multiple endpoints and provide it to the users. Our application integrates with Kafka as a consumer using the API, and then sends information to the users who connect.
It's convenient and flexible for almost all kinds of data producers. We integrated it with Kafka Streams, which can perform some easy data processing, like summaries, counts, and grouping.
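Kafka Streams itself is a Java library, so as a rough Python stand-in for the count/group style of processing mentioned, a plain consumer can maintain the aggregate; the topic and broker are placeholders:

```python
# Rough stand-in for a Kafka Streams "count per key" aggregation,
# done with a plain consumer and an in-memory counter.
from collections import Counter
from kafka import KafkaConsumer

counts = Counter()
consumer = KafkaConsumer("events", bootstrap_servers="localhost:9092")
for record in consumer:
    counts[record.key] += 1  # group by message key and count occurrences
```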
Apache Kafka is an open-source distributed streaming platform that serves as a central hub for handling real-time data streams. It allows efficient publishing, subscribing, and processing of data from various sources like applications, servers, and sensors.
Kafka's core benefits include high scalability for big data pipelines, fault tolerance ensuring continuous operation despite node failures, low latency for real-time applications, and decoupling of data producers from consumers.
We use the solution for streaming analytics. We also use it for fraud detection.
My company uses Apache Kafka to keep some intermediate data in the workflow.
I have previous professional experience using Kafka to implement a system related to gathering software events in one centralized location.
I primarily use Kafka in the investment banking sector to update prices and inform clients of updates.
I use Kafka to send network packets from different sources to my cluster. We have around 10 users at my company.
My primary use case for Apache Kafka is replacing ETL and doing data transformations.
Our primary use case for this solution is streaming.
We utilize Apache Kafka in several areas, including financials, logistics, and client management to name a few.
We are building solutions on Apache Kafka for four customers. The customers we have are in various sectors, such as healthcare and architecture.
One of our clients needed to take events out of SAP to stream them through Apache Kafka while applying data enrichment before reaching the consumers.
We are a solution provider and Apache Kafka is being used in our client's company.
I am a solution architect and I used Apache Kafka in this role.
I am a user, as well as an integrator for our clients. This is one of the products that we implement for others.
Apache Kafka is used for stream processing, metric and log aggregations, and as a message queue for connecting different microservices.
We use Kafka for event monitoring.
We primarily use the solution for big data. We often get a million messages per second, and with such high throughput we use Kafka to help us handle it.
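At rates like that, producer batching and compression are the usual levers. A sketch of throughput-oriented settings with kafka-python; the values are illustrative starting points, not figures from the review:

```python
# Throughput-oriented producer settings (illustrative values only):
# a short linger lets the client pack many messages per batch, and
# compression trades CPU for network and disk.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    linger_ms=20,            # wait up to 20 ms to fill a batch
    batch_size=256 * 1024,   # 256 KiB batches
    compression_type="gzip",
    acks=1,                  # leader-only acks favor throughput
)
```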
I'm a software architect. The use case depends on my customers. They usually use it for data transfer from static files to a legacy system.
We are currently using this solution on our cloud-based clusters.