The product helps us to create both simple and complex data processing tasks. Over time, it has facilitated integration and navigation across multiple data sources tailored to each client's needs. We use Apache Flink to control our clients' installations.
Partner / Head of Data & Analytics at Intelligence Software Consulting
Real User
Top 5
Mar 3, 2021
The top feature of Apache Flink is its low latency for fast, real-time data. Another great feature is the real-time indicators and alerts which make a big difference when it comes to data processing and analysis.
Principal Software Engineer at a tech services company with 1,001-5,000 employees
Real User
Oct 21, 2020
Apache Flink is meant for low-latency applications. You take one event and keep some state for it, and when another event comes and you want to associate those events together, you need that state. In-memory state management was a key feature for us.
Sr. Software Engineer at a tech services company with 10,001+ employees
Real User
Oct 19, 2020
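A minimal Java sketch of the kind of keyed, in-memory state handling this reviewer describes: a hypothetical function that holds the first event for a key in Flink managed state until the matching event arrives. The class name, string events, and pairing logic are illustrative, not taken from the review.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Holds the first event seen for a key in managed state and emits a joined
    // record once the second event for that key arrives.
    public class PairEvents extends KeyedProcessFunction<String, String, String> {

        private transient ValueState<String> pending;

        @Override
        public void open(Configuration parameters) {
            pending = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("pending-event", Types.STRING));
        }

        @Override
        public void processElement(String event, Context ctx, Collector<String> out) throws Exception {
            String earlier = pending.value();
            if (earlier == null) {
                pending.update(event);                 // keep state until the matching event comes
            } else {
                out.collect(earlier + " + " + event);  // associate the two events
                pending.clear();
            }
        }
    }

It would sit after a keyBy(...) on whatever field associates the two events, so each key's state is kept independently by Flink.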
Another feature is how Flink handles failures. It has something called the checkpointing concept. You're dealing with billions and billions of requests, so your system is going to fail at some point in a large-scale system. Flink handles this with the concepts of checkpointing and savepointing, where it writes the aggregated state into some separate storage. So in case of failure, you can basically restore from that state and come back.
Lead Software Engineer at a tech services company with 5,001-10,000 employees
Real User
Oct 13, 2020
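A short Java sketch of how the checkpointing this reviewer describes is typically switched on, with snapshots written to separate storage so a failed job can be restored from the last completed checkpoint. The interval, path, and placeholder pipeline are illustrative values, not from the review.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointedJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Snapshot the job's state every 60 seconds.
            env.enableCheckpointing(60_000);

            // Write the snapshots to separate, durable storage so a failed job
            // can be restored from the last completed checkpoint
            // (Flink 1.13+ API; the path is a placeholder).
            env.getCheckpointConfig().setCheckpointStorage("file:///tmp/flink-checkpoints");

            // Placeholder pipeline so the job has something to run.
            env.fromElements(1, 2, 3).print();
            env.execute("checkpointed-job");
        }
    }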
The event processing function is the most useful, or the most used, function. The filter function and the map function are also very useful because we have a lot of data to transform. For example, we store a lot of information about a person, and when we want to retrieve that person's details, we need all of them. With the map function, we can map all persons based on their age group; that's why it is so useful. We can take in a lot of events and keep processing them the way we need to.
Software Architect at a tech vendor with 501-1,000 employees
Real User
Oct 7, 2020
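A compact Java sketch of the filter and map usage this reviewer describes, assuming a hypothetical stream of (name, age) pairs bucketed into decade-wide age groups. The data, field names, and grouping rule are illustrative only.

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class AgeGroups {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Hypothetical input: (name, age) pairs.
            DataStream<Tuple2<String, Integer>> people = env.fromElements(
                    Tuple2.of("Alice", 34), Tuple2.of("Bob", 17), Tuple2.of("Carol", 52));

            people
                .filter(p -> p.f1 >= 18)                          // keep adults only
                .map(p -> Tuple2.of(p.f0, (p.f1 / 10) * 10))      // map each person to an age group (decade)
                .returns(Types.TUPLE(Types.STRING, Types.INT))    // type hint for the tuple-returning lambda
                .keyBy(p -> p.f1)                                 // group by age bucket
                .print();

            env.execute("age-groups");
        }
    }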
Flink provides out-of-the-box checkpointing and state management, which helps us. When Storm restarted, we would sometimes lose messages; Flink provides guaranteed message processing, which helped us. It also helped us with maintenance and restarts.
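A brief Java sketch of the kind of setting behind that guarantee, assuming exactly-once checkpointing is what provides it in this setup. The interval and placeholder pipeline are illustrative.

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class GuaranteedProcessing {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Checkpoint every 30 seconds with exactly-once semantics so operator
            // state and source positions are restored consistently after a restart.
            env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE);

            // Placeholder pipeline so the example runs end to end.
            env.fromElements("a", "b", "c").print();
            env.execute("guaranteed-processing");
        }
    }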
Apache Flink is an open-source batch and stream data processing engine. It can be used for batch, micro-batch, and real-time processing. Its programming model combines the benefits of batch processing and streaming analytics by providing a unified programming interface for both batch and streaming data, allowing users to write programs that seamlessly switch between the two modes. It can also be used for interactive queries.
Flink can be used as an alternative to MapReduce for executing...
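As a rough illustration of the unified interface described above, the same DataStream program can be switched between streaming and batch execution with one line (Flink 1.12 or later); the source here is a bounded placeholder.

    import org.apache.flink.api.common.RuntimeExecutionMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class UnifiedModes {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Same DataStream API, two execution modes: BATCH for bounded input,
            // the default STREAMING mode for unbounded sources.
            env.setRuntimeMode(RuntimeExecutionMode.BATCH);

            env.fromElements("one", "two", "three").print();   // bounded placeholder source
            env.execute("unified-modes");
        }
    }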
Apache Flink allows you to reduce latency and process data in real-time, making it ideal for such scenarios.
Apache Flink's best feature is its data streaming tool.
Easy to deploy and manage.
Allows us to process batch data, stream in real time, and build pipelines.
It is user-friendly and the reporting is good.
The setup was not too difficult.
This is truly a real-time solution.
The documentation is very good.