Amazon Kinesis is a queuing or buffering system that we use as a central place to buffer the incoming data we receive from the source. The actual destination is left open; Amazon Kinesis sits in between as a buffer to decouple the workload.
Senior DevOps Engineer at a tech services company with 201-500 employees
Provides near real-time data streaming at a consistent rate, but its cost is too high
Pros and Cons
- "Amazon Kinesis's main purpose is to provide near real-time data streaming at a consistent 2Mbps rate, which is really impressive."
- "We were charged high costs for the solution’s enhanced fan-out feature."
What is our primary use case?
What is most valuable?
Amazon Kinesis's main purpose is to provide near real-time data streaming at a consistent 2Mbps rate, which is really impressive.
What needs improvement?
The solution currently provides an option to retrieve data in the stream or the queue, but it's not that helpful. We have to write some custom scripts to fetch data from there. An option to search for data in the queue would really help us in our day-to-day operations.
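To give a sense of the kind of custom script this involves, here is a minimal sketch using boto3; the stream name and search term are placeholders, and it simply walks one shard from the oldest record while filtering payloads.

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM = "my-stream"          # placeholder stream name
SEARCH_TERM = "order-12345"   # value we are hunting for in the payload

# Start from the oldest record in the first shard (TRIM_HORIZON).
shard_id = kinesis.list_shards(StreamName=STREAM)["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

while iterator:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=1000)
    for record in resp["Records"]:
        payload = record["Data"].decode("utf-8")
        if SEARCH_TERM in payload:
            print(record["SequenceNumber"], payload)
    if resp["MillisBehindLatest"] == 0:   # caught up with the tip of the stream
        break
    iterator = resp["NextShardIterator"]
```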
Since the solution is a buffer system, you write to it and read from it. The readers are called consumers. If you want to run multiple consumers reading from the queue, you have to enable the enhanced fan-out feature on Amazon Kinesis. This enhanced fan-out feature is quite costly.
There was a point when we had a huge budget increase in one week just because of the enhanced fan-out feature. This feature does not provide any special out-of-the-box functionality. Hence, we struggle to optimize multiple consumers reading from a single queue. We were charged high costs for the solution’s enhanced fan-out feature.
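For context on where that cost comes from, this is a rough sketch of how enhanced fan-out consumers are registered with boto3 (the stream ARN and consumer name are placeholders); each registered consumer gets its own dedicated read throughput per shard and is billed separately for consumer-hours and data retrieval.

```python
import boto3

kinesis = boto3.client("kinesis")
STREAM_ARN = "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream"  # placeholder

# Each registered consumer gets its own dedicated 2 MB/s per shard,
# and each one is billed for consumer-shard-hours and data retrieval.
kinesis.register_stream_consumer(StreamARN=STREAM_ARN, ConsumerName="analytics-app")

for consumer in kinesis.list_stream_consumers(StreamARN=STREAM_ARN)["Consumers"]:
    print(consumer["ConsumerName"], consumer["ConsumerStatus"])
```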
For how long have I used the solution?
I have been using Amazon Kinesis for more than two years.
What do I think about the scalability of the solution?
The solution is pretty good in terms of scaling. Amazon Kinesis has shards, which are the instances or units that the solution spins up for you. Depending upon your account quota, you can spin up as many shards as you want. You can even raise a request to increase that quota, which is usually granted quickly. Overall, Amazon Kinesis is a really scalable solution.
Our team, consisting of four to five people, uses the solution extensively in our organization.
How are customer service and support?
We really struggle to get better support for Amazon Kinesis.
What's my experience with pricing, setup cost, and licensing?
Amazon Kinesis is an expensive solution.
What other advice do I have?
Amazon Kinesis is an AWS-managed service, just like S3 or EC2. We don't have to deploy it; it is just there, and we spin it up. You go to the Kinesis page in the AWS console, click on Create, and enter a name.
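The same setup can also be scripted. A rough boto3 equivalent of those console steps, with the stream name and shard count as placeholders:

```python
import boto3

kinesis = boto3.client("kinesis")

# API equivalent of clicking Create in the Kinesis console and entering a name.
kinesis.create_stream(StreamName="my-stream", ShardCount=2)

# Wait until AWS has provisioned the shards, then confirm the status.
kinesis.get_waiter("stream_exists").wait(StreamName="my-stream")
print(
    kinesis.describe_stream_summary(StreamName="my-stream")
    ["StreamDescriptionSummary"]["StreamStatus"]
)
```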
I would not recommend Amazon Kinesis to other users. Users can choose a cheaper alternative. They can use any other queuing system or in-house Kafka if they have a Kafka team. Amazon Kinesis provides near real-time read-and-write, but its cost is too high. Users can choose another option that provides the same functionality at less cost.
With Amazon Kinesis, you have to run a consumer that reads from Amazon Kinesis. AWS provides the Kinesis Client Library (KCL), which reads from the Kinesis stream. That library also uses DynamoDB for checkpointing. For example, if you have one day of data in Amazon Kinesis and start reading from 12 AM yesterday, the KCL keeps track of how far it has read by checkpointing in DynamoDB. You get charged for the DynamoDB table out of the box, along with Amazon Kinesis.
The DynamoDB table also costs a lot, which should not be the case. It only stores the checkpoints that the KCL reads and writes, so its cost should be very minimal, but that's not the case. The consumer is not optimized for efficient reads and writes, which further increases the cost. Both the Amazon Kinesis and DynamoDB charges come into the picture.
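To make the checkpointing cost concrete, here is a hedged, hand-rolled sketch of roughly what the KCL does behind the scenes (the table, stream, and shard names are placeholders): the last processed sequence number is stored in DynamoDB so a restarted consumer can resume, and every checkpoint is a DynamoDB write, which is where the extra bill comes from.

```python
import boto3

kinesis = boto3.client("kinesis")
table = boto3.resource("dynamodb").Table("my-checkpoints")  # placeholder checkpoint table

STREAM, SHARD_ID = "my-stream", "shardId-000000000000"      # placeholders

# Resume from the last checkpoint if one exists, otherwise read from the start.
item = table.get_item(Key={"shardId": SHARD_ID}).get("Item")
if item:
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=SHARD_ID,
        ShardIteratorType="AFTER_SEQUENCE_NUMBER",
        StartingSequenceNumber=item["sequenceNumber"],
    )["ShardIterator"]
else:
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=SHARD_ID, ShardIteratorType="TRIM_HORIZON"
    )["ShardIterator"]

resp = kinesis.get_records(ShardIterator=iterator, Limit=500)
for record in resp["Records"]:
    print(record["Data"])                         # stand-in for real processing
    table.put_item(Item={"shardId": SHARD_ID,     # checkpoint = one DynamoDB write
                         "sequenceNumber": record["SequenceNumber"]})
```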
Overall, I rate the solution a five or six out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Apr 28, 2024
Data Engineer
User friendly and feature rich solution
Pros and Cons
- "Its scalability is very high. There is no maintenance and there is no throughput latency. I think data scalability is high, too. You can ingest gigabytes of data within seconds or milliseconds."
- "Kinesis Data Analytics needs to be improved somewhat. It's SQL based data but it is not as user friendly as MySQL or Athena tools."
What is our primary use case?
One use case is consuming sales data and then writing it back into S3. That's one small use case that we have: from Kinesis Data Streams to Kinesis Data Firehose, and from Data Firehose to Amazon S3.
There is also clickstream data coming in. For the clickstream data we set up Kinesis Data Streams, and then from Kinesis Data Streams we dump the data into S3 using Kinesis Data Firehose. This is the main use case that we have. We did many POCs on Kinesis as well. There is also one live project running in Amazon that uses a DynamoDB database. DynamoDB has triggers that automatically invoke Lambda, and from Lambda we call Kinesis, and then Kinesis writes back into S3. This is another use case.
Another thing that we did uses Kinesis Data Analytics, where you can directly stream data. For that, we use a Kinesis data producer. From that, we establish a connection to the data stream, and then from the data stream to SQL, which is the Kinesis Data Analytics tool. From Kinesis Data Analytics, we again establish a connection to Data Firehose and then write the data back into S3. These are the main use cases that we have for working on Amazon Kinesis.
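For the DynamoDB-to-Lambda-to-Kinesis leg mentioned above, this is a hedged sketch of what such a Lambda handler might look like; the stream name is a placeholder, and the event shape is the standard DynamoDB Streams trigger payload.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM = "clickstream-events"  # placeholder Kinesis stream name

def handler(event, context):
    """Triggered by DynamoDB Streams; forwards each change record to Kinesis."""
    for record in event["Records"]:
        change = record["dynamodb"]
        kinesis.put_record(
            StreamName=STREAM,
            Data=json.dumps(change, default=str).encode("utf-8"),
            # Partition on the table's key so related changes stay ordered.
            PartitionKey=list(change["Keys"].values())[0].get("S", "unknown"),
        )
    return {"forwarded": len(event["Records"])}
```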
How has it helped my organization?
In my client's company, there is one live database that feeds into DynamoDB. They want to replicate that in Amazon S3 for their data analytics, but they do not want the data to be refreshed every second. They want the data to be refreshed at a particular size, like five MB. Kinesis provides that buffering. That's the main improvement that we deliver to the client.
What is most valuable?
The features that I have found most valuable depend on the use case. I find data Firehose and data streams are much more intelligent than other streaming solutions.
There is a time provision as well as a data-size provision. Suppose you want to buffer data for up to 60 seconds; you can. Suppose you want to buffer data up to a certain size; you can do that, too. Then it writes the data back to S3. That's the beauty of the solution.
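A hedged sketch of that time-or-size buffering, assuming a boto3 Firehose delivery stream to S3 (the stream name, bucket, and role ARNs are placeholders); whichever threshold is hit first, 60 seconds or 5 MB, triggers the write to S3.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="sales-to-s3",                     # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::my-analytics-bucket",            # placeholder
        # Flush whichever comes first: 5 MB of data or 60 seconds of waiting.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
    },
)
```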
What needs improvement?
Kinesis Data Analytics needs to be improved somewhat. It's SQL based data but it is not as user friendly as MySQL or Athena tools. That's the one improvement that I'm expecting from Amazon. Apart from that everything is fine.
For how long have I used the solution?
I have two years of project experience on AWS, and around six months with Kinesis.
What do I think about the stability of the solution?
I am satisfied with Amazon Kinesis. It is pretty exciting to work on.
What do I think about the scalability of the solution?
Its scalability is very high. There is no maintenance and there is no throughput latency. I think data scalability is high, too. You can ingest gigabytes of data within seconds or milliseconds.
We are a team of five members using Amazon Kinesis. Two are working onshore and three of us are working offshore.
We are all developers implementing, developing, and designing the data pipeline, as well. The thing is we work in a startup company so we have to do all the things from our end on this.
How are customer service and technical support?
As of now we have not had any contact with customer support because we didn't face any complex types of problems while we were implementing our use cases.
How was the initial setup?
The initial setup is very straightforward. It is very well documented and anyone with simple knowledge or common sense can do it.
Implementing is very simple. You can just do it with your fingertips. There might be some improvements that can be made according to the requirements. For that, we do versioning. First we establish the pipeline from the data stream to the S3. That's very easy. You can do it within hours or within minutes. I can say the process is very simple and it's not as complex as it looks.
One more beauty is that Kinesis data Firehose will directly write to S3 in a partitioned way. Based on the timestamp it can directly write in the year, month, day and hour. That's the good thing I found about Amazon Kinesis.
We follow an implementation process. We deploy directly to dev. Once we have our results and our processes validated and have gone through QA, we roll it out everywhere.
What was our ROI?
Our clients definitely see a return on their investment with Amazon Kinesis.
What's my experience with pricing, setup cost, and licensing?
The pricing depends on the number of shards that we are providing and the time the application is running.
We reduced the cost of the pipeline that we built. We built a generic type of pipeline so that two or more teams can use the same data pipeline.
What other advice do I have?
My advice to anyone thinking about Amazon Kinesis is that if they have clickstream or any streaming data that varies from megabytes to gigabytes, they can definitely go for Amazon Kinesis. If they want to do data processing, or batch or streaming analytics, they can choose Amazon Kinesis. And if you want to enable database stream events in Amazon DynamoDB, then you can definitely go for Amazon Kinesis. I don't see any better option for these than Amazon Kinesis. You can use the Kinesis Data Analytics tool to detect anomalies before you process the data. That's one more beauty. The first things you need to determine are the source, the throughput of the data, and the latency you want.
On a scale of one to ten I would rate Amazon Kinesis a nine.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Software Engineer at a tech services company with 501-1,000 employees
Easily replay your streaming data with this reliable solution
Pros and Cons
- "The feature that I've found most valuable is the replay. That is one of the most valuable in our business. We are business-to-business so replay was an important feature - being able to replay for 24 hours. That's an important feature."
- "In general, the pain point for us was that once the data gets into Kinesis there is no way for us to understand what's happening because Kinesis divides everything into shards. So if we wanted to understand what's happening with a particular shard, whether it is published or not, we could not. Even with the logs, if we want to have some kind of logging it is in the shard."
What is our primary use case?
In the simpler use case, we were just pumping in some data. We wanted a product, an AWS service, that would accept data in bursts. We were pushing in, for example, 500 records every 300 milliseconds; in other words, roughly 1,500 records per second into whatever streaming service we chose. That stream would then feed another service, for example Lambda. Lambda would consume the data, and ultimately we would process and store it in DynamoDB.
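A hedged sketch of that kind of burst producer with boto3 (the stream name and record shape are made up): it sends 500 records every 300 ms with the batched PutRecords call, which works out to roughly 1,500 records per second.

```python
import json
import time
import uuid
import boto3

kinesis = boto3.client("kinesis")
STREAM = "reports-stream"  # placeholder stream name

def send_burst(records):
    """Send up to 500 records in one PutRecords request (the API's batch limit)."""
    entries = [
        {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": str(uuid.uuid4())}
        for r in records
    ]
    resp = kinesis.put_records(StreamName=STREAM, Records=entries)
    return resp["FailedRecordCount"]

while True:
    batch = [{"reportId": i, "ts": time.time()} for i in range(500)]
    failed = send_burst(batch)
    if failed:
        print(f"{failed} records were throttled and would need a retry")
    time.sleep(0.3)  # 500 records every 300 ms ≈ 1,500 records/second
```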
This was the basic flow that we had. We were looking for a service. And at that point in time in our organization, the architects were asking us to leverage Kinesis to see how it performed. They wanted to see how it performs, so they were encouraging us to use it. Although we were looking at something as simple as SQS and SNS, they were encouraging us to use Kinesis and that is what we did.
There were a few considerations when we moved to Kinesis. What is the reliability? When I say reliability, I mean resilience, or the failure mechanism we thought was required for that use case, because we did not want to lose data. Also, we wanted to have the ability to replay from a certain point because we were pumping in reports from a data source and we were always keeping track of the point at which we had stopped. So if we wanted to replay something from the prior data which was already processed by Kinesis, and it failed in the Lambda, we wanted to have the ability to retry and replay the previously processed stream.
That prompted us to use Kinesis, because it has the really good feature of letting you replay, for 24 hours, whatever you've already processed. That was one key feature that we thought we would need. In fact, performance-wise, it performed really well. We also understood that it is actually meant for streaming, video streaming and data streaming alike, and it does a good job with it. But mostly, we saw that it is more suited to video streaming, simply because when we pump data into Kinesis, we don't have a way to test it other than waiting for the data to come out of the other end, hooking into Lambda, extracting the data, and processing it.
That's the only way we can test it. That was a drawback but it did not matter too much. But it did matter in the next project, and for the bigger use cases where we used Kinesis. But this project was a simple use case and it served really well, so we kept it as-is. We moved on to the next project, which was bigger. It was an event-driven architecture that we were trying out on one of the features. When we went event-driven, at that time a few of the new features and new services from Amazon which are available right now, were not available.
We thought of using Kinesis again to stream the data from one microservice to another in a proper microservice architecture. We were using this as a communication medium between microservices. This is where the testing was a little complicated for us. Ultimately, what we realized out of the entire exercise was that Kinesis may not have been the right choice of service for us for our use case. But what we discovered were the benefits of using Kinesis and also the limitations in certain use cases.
The biggest lesson learned for us was even before you take up anything like Kinesis, which is a big AWS service, there has to be a POC, proof of concept, done. To see whether it really suits that use case or not. That is what we ultimately realized. Before that, there were a few other reasons why we chose Kinesis over DynamoDB streaming. Ultimately it was from one microservice to another, and each microservice had its own DynamoDB data store.
We were thinking of using the DynamoDB Stream and Kinesis to keep things simple. But it turned out that DynamoDB Streams have a limitation that whatever stream comes out of DynamoDB can be consumed only by a single client. But with Kinesis it doesn't matter. Any number of data sources can come in, and whatever Kinesis publishes can be consumed by any number of clients. That is why we went with Kinesis, in order to see how it performed. Because even performance-wise, we found that we need to handle a crazy load, because we are part of the wagering industry, which needs peak performance. Online betting. In Australia, it's a regulated market and one of the most happening businesses. Here, performance is really important, because there are quite a few competitors, around 10 to 15 prominent competitors, and if we have to stand out, our performance has to be beyond the customer's expectation.
So, with that in mind, they knew our performance had to scale up. That is where we found the advantage of using Kinesis. It's been reliable. It has not failed to publish. It actually did fail, but the failure was simply because we were pumping in more data than Kinesis can take in.
There is a limit that we discovered. I don't remember the numbers there. But we did manage to break Kinesis by pumping in too much data.
How has it helped my organization?
The major advantage with Amazon Kinesis is the availability. Additionally, the reliability is awesome when it comes to Kinesis. Kinesis also offers the replay.
It is incredibly fast. The ingesting of data, the buffering, and processing the data are really fast. With AWS you always get the dashboard for monitoring. That is a really good way for us to see how Kinesis is performing. Otherwise there is no other way for us to know what's happening within Kinesis other than the Lambda kicking in and processing. So the Lambda logs were indirectly necessary for us to look into Kinesis.
The dashboarding AWS provides out of the box for monitoring Kinesis' performance is quite nice. Also, it is a self-managed service, so we don't need to worry about what happens behind Kinesis. That was another big win for us. We did not have to worry about how to maintain or manage Kinesis in general. That was a consideration. It is kind of serverless.
The scalability was quite acceptable. It can handle a large amount of data as well. It can take in a large amount of data, but there is a limit. It can take a huge amount of data and process it from many sources. We can have any number of data sources coming in, and it can ingest all of them and publish it to wherever you want.
You can design your code in such a way that the Lambda that actually processes whatever is published by Kinesis can kind of segregate the data coming in from multiple data sources, based on the logic that is implemented there. That is a nice feature. Ingesting data from multiple sources, and being able to publish it to multiple destinations.
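A hedged sketch of the kind of segregation logic described above, written as a Lambda consumer of the stream (the source names are made up): Kinesis hands the Lambda records base64-encoded, and the partition key tells us which data source each record came from.

```python
import base64
import json

def handler(event, context):
    """Consume a batch of Kinesis records and route them by originating source."""
    routed = {"orders": [], "clicks": [], "other": []}
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        source = record["kinesis"]["partitionKey"]           # e.g. "orders" or "clicks"
        routed.get(source, routed["other"]).append(payload)
    # Hand each bucket to its own downstream processing here.
    return {key: len(items) for key, items in routed.items()}
```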
What is most valuable?
The feature that I've found most valuable is the replay. That is one of the most valuable in our business. We are business-to-business so replay was an important feature - being able to replay for 24 hours. That's an important feature.
In our use case Kinesis was able to handle the rate at which we were pumping in data and it could publish the data to whatever destination, be it Lambda or any other consumer.
We were seeing a delay in the processing time of the Lambda and the subsequent storing into DynamoDB. At the rate at which we were pumping in the data, we had assumed the data would be published and processed at the same rate, but we saw that it was not working out that way. It was not Kinesis itself; the subsequent parts of our application tended not to keep up with Kinesis. So the business asked us for the ability to get back to a certain point in time and replay the entire thing. That way there is a record if there is an error while it is being processed.
The ordering is another big thing for us. Kinesis is known for maintaining the order in which the data is ingested. We can tweak that and configure Kinesis to ensure that the ordering is maintained. The order in which the data is actually published is also important for us. That is why the business was okay even if a thousand records failed to process, because they were okay to start from 500 again and reach a thousand again. They wanted to ensure that there was no scope for failure there. That is why the replay feature was useful for us. That is why both performance and replay are important. When I say performance, I mean the reliability. Kinesis has an inbuilt replay mechanism that also came in handy for us.
These were the crucial things that we were looking at, and it worked quite well.
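A hedged sketch of the replay described above, using boto3 (the stream name is a placeholder): you ask for a shard iterator positioned at a timestamp within the retention window and re-read everything from there.

```python
from datetime import datetime, timedelta
import boto3

kinesis = boto3.client("kinesis")
STREAM = "bets-stream"  # placeholder stream name

# Replay the last six hours of each shard (must be within the retention window).
replay_from = datetime.utcnow() - timedelta(hours=6)

for shard in kinesis.list_shards(StreamName=STREAM)["Shards"]:
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId=shard["ShardId"],
        ShardIteratorType="AT_TIMESTAMP",
        Timestamp=replay_from,
    )["ShardIterator"]
    resp = kinesis.get_records(ShardIterator=iterator)
    print(shard["ShardId"], "replaying", len(resp["Records"]), "records")
```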
What needs improvement?
In general, the pain point for us was that once the data gets into Kinesis there is no way for us to understand what's happening because Kinesis divides everything into shards. So if we wanted to understand what's happening with a particular shard, whether it is published or not, we could not. Even with the logs, if we want to have some kind of logging it is in the shard. That is something that we thought we needed then, but later we realized that Kinesis was not built for that. They must have already improved by now, because I have not been in touch with AWS for the last five, six months since I joined this organization which uses Azure. I did not get to experiment with AWS Kinesis too much after that.
It was built for something else, but we used Kinesis for one purpose and we were expecting a feature out of it that may not have really been the design of the service when they built Kinesis. It was almost like a black box for us, because once the data comes in, we need to rely on the Lambda itself to tell us what is happening. When a Kinesis record comes in and is processed, we log it from the Lambda, and that is where we would know, "Oh, okay, this one has come in, this one has come in." We hoped for a better way of being able to track the shards being processed, or how they are streamed within Kinesis.
We wanted to have a look at that, but that was not available then. It may not even be available now. We did not have the feature that we expected in the first place from Kinesis. Overall that was the only thing that we felt was lacking. Our use case may not have been the most ideal one, but other than that we did not have many qualms with Kinesis. Overall, we felt we would have simplified the entire design of what we did by simply using an SNS and SQS, because we have much better visibility in terms of tracking what happens within the SNS and SQS.
For how long have I used the solution?
I have used Amazon Kinesis for a couple of projects starting from August 2019 until July 2020. I used Amazon Kinesis in exactly two projects in fact, one after the other.
What do I think about the scalability of the solution?
In terms of scalability, there is a limit which is documented by Amazon. But when we started using it, we didn't know that. We did not evaluate its complete documentation. Of course we went through the aspects that we wanted to understand and we made the choice. But it did break at a certain point.
It was okay for us simply because we could do with a lower pumping rate. So, it did not cause too much of a hazard for the business as such, but we did manage to break Kinesis.
Overall, what we realized was for event driven architecture for simple use cases where you need reliable streaming, Amazon Kinesis works really well. But, for event driven it may not be the best choice.
That's what we figured out at the end of our project. The project was successful. It served its purpose. But the amount of support that we had to provide to see that the entire infrastructure holds up to the load was high.
We felt that we could have done with an easier adaptation of the same architecture. We could have gone with an easier implementation, by probably choosing SNS and SQS over Kinesis in our use case. So, lessons learned.
This is all that we worked on with Kinesis. This is what we figured out after close to a year of working with it. One project was no problem at all. Whatever the purpose, Kinesis did more than expected. And, in the other one we kind of hit the boiling point of Kinesis and realized that it may not be the right choice in that scenario. But it was still okay. We still left it there, and it served its purpose.
How are customer service and technical support?
We had an Amazon technical advisor who was visiting us once every week on the same day. He would be with us and he would just be there and we could reach out to him and ask him for suggestions as to what we could use and what we should do. He would help us with whatever queries we would give him. Even if he did not know he went back to the Amazon experts and then he would get us the answers. But, in this case for Kinesis, it was more driven by the architecture teams here, for us to try it out and see how it performs.
We did go to the Amazon technical support guy who was available for us to understand the limitations and the use cases. He did help us, but we were deep into our implementation when we went to him so we could not change or accommodate because we were almost at the end of the implementation. But, yes his inputs were definitely valuable for us to understand Kinesis better.
How was the initial setup?
In terms of initial setup, Kinesis is available for us to use. All we need to do is see what stack we are using. For example, our stack consists of a Lambda, a Kinesis stream, DynamoDB, and some data source that is probably another Lambda or something. So Lambda feeds data into Kinesis and Kinesis publishes it into another Lambda. I'm just giving an example. All these four components come under a certain stack, so there's not much to set up other than ensuring that Kinesis is part of the CloudFormation stack template we use to maintain stacks separately. It also needs permissions on both the source and the destination sides; we just need to give those data sources permission to access Kinesis. Other than that, there's nothing much to set up because Kinesis is a self-managed service.
What about the implementation team?
We were four developers and one principal developer who were taking us from the architecture standpoint during setup.
What's my experience with pricing, setup cost, and licensing?
I think there is a paid version only, there is no free version. I think it is possibly on the expensive side.
I did not go too deep into pricing, because our business did not care about pricing that much. They just wanted the product to be solid and reliable at all times. The business is generally conservative about services and pricing. But this was a different case for us, where the price did not matter.
I did not explore that much into the pricing of Kinesis, per se.
Which other solutions did I evaluate?
I'm aware of Kafka streaming, but I have not used it in any project. This was the only streaming service that I used.
Here, we mostly use Azure Web Apps, Azure Web Jobs and the function apps, which are similar to Lambda. The exposure that I'm seeing is not as extensive here. It is not as extensive as it was for me in my previous organization. In the previous organization the entire infrastructure was on cloud, but here in my current organization it's partially on cloud. So the exposure into many Azure services is limited at this point.
What other advice do I have?
With my limited exposure to Kinesis, and despite the pain points and probably not using it entirely properly, we did see that it was successful. Having said all that, and considering the pain points that we went through, on a scale of one to ten I would give Kinesis an eight.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Easy to implement and use, with a robust and fault-tolerant data capturing facility
Pros and Cons
- "The most valuable feature is that it has a pretty robust way of capturing things."
- "If there were better documentation on optimal sharding strategies then it would be helpful."
What is our primary use case?
As part of my interest in obtaining Amazon certification and learning more about Kinesis, I am currently using it to capture streaming Twitter data.
I get an avalanche of tweets and I need some technology to harness and capture them. I have used the streaming Twitter API to deal with it. Twitter is updated every half a second, so I'm tapping into the streaming API and capturing a lot of stuff.
It has also been used for the Internet of Things (IoT), where there is a lot of streaming stuff that comes out and you need a mechanism to capture all of it from your devices. This includes things such as logs. My company was recently working on a project with Kinesis where we were capturing data from racecars.
These racecars were emitting tons of data and it needed to be captured by some kind of tool for analytics. Kinesis was used to capture all of that information. The basic use case is just capturing the data. In the streams, you can do some sort of interim transformations but for the most part, the basic use case is just capturing data and persisting it in a data store like Amazon S3. Another example is Elastic MapReduce permanent storage. Once it lands in some kind of permanent store, further transformations or aggregations can be done at that point.
How has it helped my organization?
In the racecar project that we worked on, the client wanted to be able to capture metrics in real-time to allow for the adjustment of racing strategy.
What is most valuable?
The most valuable feature is that it has a pretty robust way of capturing things. You can capture things from the beginning, or start capturing tweets at a certain point in time.
It has some good fault tolerance in case something breaks.
It's really easy to implement, get started, and use.
With AWS, you don't have to invest in any kind of infrastructure. All you have to do is go to the portal, create an account, turn it on, and use a few lines of Python code in order to capture what you're looking for.
The Kinesis API is really easy to put information on the shards. You just need to enter a few lines of code.
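The "few lines of code" really are few. A minimal sketch with boto3, assuming a placeholder stream name and a captured tweet, keyed by the user so related tweets land on the same shard:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

tweet = {"id": "1234567890", "user": "racing_fan", "text": "Green flag!"}

# One call puts the record on a shard; the partition key decides which shard.
kinesis.put_record(
    StreamName="twitter-stream",              # placeholder stream name
    Data=json.dumps(tweet).encode("utf-8"),
    PartitionKey=tweet["user"],
)
```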
What needs improvement?
I'm currently trying to figure out production rates and consumption rates for data. If there were better documentation on optimal sharding strategies then it would be helpful.
What do I think about the stability of the solution?
I think that this product is very stable and very fault-tolerant.
As part of consuming data off of the stream, you do get some sort of unique number that is somewhat sequential. This means that if you have a problem with the data and something breaks, you can simply go back to that location in the stream.
Imagine that it gives you an integer, 100, to indicate your point in the stream. Then, if something fails, at a later point in time you can go back to spot 101 and continue retrieving data inside the stream. It's very fault-tolerant.
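A hedged sketch of that resume-from-a-known-spot behaviour with boto3 (the names and sequence number are placeholders): you keep the sequence number of the last record you handled and, after a failure, ask for an iterator positioned just after it.

```python
import boto3

kinesis = boto3.client("kinesis")

last_good_sequence = "49590338271490256608559692538361571095921575989136588898"  # saved earlier

iterator = kinesis.get_shard_iterator(
    StreamName="twitter-stream",               # placeholder stream name
    ShardId="shardId-000000000000",
    ShardIteratorType="AFTER_SEQUENCE_NUMBER", # continue right after the last good record
    StartingSequenceNumber=last_good_sequence,
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator)["Records"]
print(f"resumed with {len(records)} records")
```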
What do I think about the scalability of the solution?
The product is very scalable. Especially on the cloud, there is a large advantage.
How are customer service and technical support?
I haven't needed to contact technical or customer support.
Which solution did I use previously and why did I switch?
I am familiar with Kafka, although I have never used it.
Compared to Kafka, which requires physical servers, Kinesis, being on the cloud, is very easy to implement. It is a little easier to use, as well. Anybody who is interested in using it does not have to invest any money in a server or invest time in setting things up and configuring it on an actual environment with Kafka. All they have to do is go to AWS and turn it on.
I don't have any experience with other streaming analytics solutions.
How was the initial setup?
If someone knows what they're doing, they can have something up and running in half an hour. You can certainly use a deployment strategy, although I haven't to this point. I've just done it on my desktop, locally, in an IDE called PyCharm.
One can go ahead and deploy to an Amazon EC2 instance or AWS Beanstalk. I chose not to do this because it's easier for my project.
What about the implementation team?
I think as far as maintenance is concerned, you just kind of have to watch the production and the consumption of your data. You just have to make sure that everything's in order. They have metrics on the AWS console to help keep an eye on that kind of stuff but once it's up and running, you really don't have to do a whole lot of maintenance.
What other advice do I have?
My advice for anybody who is implementing this product is to start by reading through the Amazon documentation, as well as go through some videos on YouTube or Pluralsight just to get a high-level idea of what's going on. Then, start experimenting and trying to figure out how it works. From there, try to figure out how to choose your optimal sharding strategy, like how many shards do you need within the stream and how you want to partition the data within it.
I think from there, you need to look at your production and consumption rates on the stream. This is how much data you are putting onto the stream and at what kind of rate. You need to make sure that you're consuming data off of the stream, also, and look at that rate too.
The ideal situation is to be able to consume data faster than you produce it, because then you're able to control things. If you're not able to do that, then you could get overwhelmed.
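A hedged sketch of checking those two rates with boto3 and CloudWatch (the stream name is a placeholder): comparing records put onto the stream against records read off it over the same hour shows whether consumers are keeping up.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def hourly_sum(metric_name):
    """Total of a Kinesis stream metric over the last hour."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Kinesis",
        MetricName=metric_name,
        Dimensions=[{"Name": "StreamName", "Value": "my-stream"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in resp["Datapoints"])

produced = hourly_sum("IncomingRecords")        # records put onto the stream
consumed = hourly_sum("GetRecords.Records")     # records read off the stream
print(f"produced={produced:.0f} consumed={consumed:.0f}")
```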
The biggest lesson that I learned from using this product is that it's a whole new world of processing big data. I come from a traditional data warehousing background where everything is batch-oriented. So for this, this is a whole new ball game in terms of how to process data. It's a new mechanism for harnessing the power of data. A traditional data warehouse could not analyze, for example, what is going on in real-time on a racing car. It's not scalable and it's not going to work. However, something like this is dynamic and big enough to handle this kind of application.
This is a pretty good product, albeit I don't have much to compare it with. That said, I don't have any problems with it. It's done what it's asked and it's easy to use.
I would rate this solution a nine out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Cloud Engineer at Xgrid, Inc.
Effective for small businesses, easy to use, and has excellent reporting, but only supports limited file size, batch size, and throughput
Pros and Cons
- "What I like about Amazon Kinesis is that it's very effective for small businesses. It's a well-managed solution with excellent reporting. Amazon Kinesis is also easy to use, and even a novice developer can work with it, versus Apache Kafka, which requires expertise."
- "One area for improvement in the solution is the file size limitation of 10 Mb. My company works with files with a larger file size. The batch size and throughput also need improvement in Amazon Kinesis."
What is our primary use case?
We collect data from AWS IoT Core and then capture the stream in Amazon Kinesis. The data is then stored in S3 and shifted to Snowflake for analysis.
What is most valuable?
What I like about Amazon Kinesis is that it's very effective for small businesses. It's a well-managed solution with excellent reporting. Amazon Kinesis is also easy to use, and even a novice developer can work with it, versus Apache Kafka, which requires expertise.
What needs improvement?
My company found some Amazon Kinesis discrepancies, so it's looking forward to a more modernized solution from Apache Kafka.
One area for improvement in the solution is the file size limitation of 10 Mb. My company works with files with a larger file size.
The batch size and throughput also need improvement in Amazon Kinesis. The solution needs to be more open regarding the type of files for streaming and the streaming size. Amazon should not limit those aspects. It should be unlimited. If a company is ready to pay, why not make it unlimited?
What I want to add to Amazon Kinesis is modernization based on the container environment, where I can add containers and more workers. I also expect some human resources to be added and an SLA agreement with Amazon, if possible.
For how long have I used the solution?
I've been using Amazon Kinesis for about one year, and I'm still using it.
What do I think about the stability of the solution?
Amazon Kinesis could be more stable. One of my clients rejected it, while some clients find it okay, stability-wise. I'd rate Amazon Kinesis stability as five out of ten.
What do I think about the scalability of the solution?
I can rate Amazon Kinesis scalability according to the organization size and data load. For a small organization using the solution and Lambda with some transformation through AWS Glue, Amazon Kinesis is the best, scalability-wise. However, if you're dealing with a billion tuples, for example, the solution isn't as scalable, so I would go for Apache Spark or Apache Kafka to handle the load.
When I see that the processing takes longer than fifteen minutes with Lambda and the invocations fail, I use Apache Spark for processing, but that could take up to three or four days to be comparable to big data technologies.
I'd rate the scalability of Amazon Kinesis as four out of ten.
How are customer service and support?
My company contacted some premium partners and technicians of Amazon Kinesis and found the technical support good, but with some limitations. I'd rate support a seven out of ten. Though it had limitations, the interaction with support was pleasant.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
In the future, my company plans to switch to Apache Kafka because it's very flexible and easier to manage. It's also easier to control and manage limits around topics. On the other hand, Amazon Kinesis has some limitations on its shards. It also has a 10 Mb limit on its file size, so if you have a 20 Mb file, you have to reduce it to 10 Mb.
How was the initial setup?
Amazon Kinesis is easy to set up, and it's a ten out of ten for me. Setting it up is a straightforward process.
What about the implementation team?
My company set up Amazon Kinesis for the client.
What's my experience with pricing, setup cost, and licensing?
If you ask a client about Amazon Kinesis pricing, the client usually says it's high. If you ask a business owner, the business owner would also tell you that pricing for Amazon Kinesis is a little bit high, and that is true for each region.
There is a particular concern regarding Amazon Kinesis here in Pakistan because there's no zone in Pakistan. Amazon needs to develop zones here because Pakistan is the biggest country in the region after India. Amazon is losing a lot of business in Pakistan because there's no AWS zone here.
AWS also didn't accept my Pakistan credit card when I was trying to register with AWS. AWS should develop trust here in Pakistan and excellent AWS zones, so Pakistan businesses that want to purchase Amazon Kinesis won't need to depend on Singapore or India.
When I'm closing a deal with a new client, the client would ask, "Why do you need to sign up with a zone in India or Singapore to save data?" I don't have an answer to that question, so a workaround would be to develop on-premise environments for clients to save data.
Amazon Kinesis pricing is sometimes reasonable and sometimes could be better, depending on the planning, so it's a five out of ten for me.
What other advice do I have?
Nowadays, my company works with AWS, Snowflake, Redshift, Amazon Kinesis, Firehose, Aurora, and Athena. In the future, my company plans to work with SAP HANA.
My rating for Amazon Kinesis is six out of ten.
My company is a user of Amazon Kinesis.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Software Engineer at a computer software company with 201-500 employees
Fast solution that saves us a lot of time
Pros and Cons
- "Amazon Kinesis also provides us with plenty of flexibility."
- "I think the default settings are far too low."
What is our primary use case?
I work as a senior software engineer in an eCommerce analytics company, where we have to process a huge amount of data.
Only a few people within our organization use Kinesis. My team, which includes three backend developers, simply wanted to test out different approaches.
We are now in the middle of migrating our existing databases in MySQL and Postgres, to Snowflake. We use Kinesis Firehose to ingest data in Snowflake at the same time that we ingest data in MySQL, without it impacting any performance.
If you ingest two databases in a synchronous way, then the performance is very slow. We wanted to avoid that so we came up with this solution to ingest the data in the stream.
We use Kinesis Firehose to send the data to the stream, which then buffers the data for roughly two minutes. Afterwards, it places the files in an S3 bucket, which is then loaded automatically, via an integration with Snowflake that's called Snowpipe. Snowpipe reads and ingests every message and every file that's in the S3 bucket. This stage doesn't bother us because we don't need to wait for it. We just stream the data — fire and forget. Sometimes, if the record is not ingested successfully, we have to retry. Apart from that, it's great because we don't need to wait and the performance is great.
There are some caveats there, but overall, the performance and the reliability have been great. This year, 100% of the time when there was an issue in production, it was due to a bug in our code rather than a bug in Kinesis.
How has it helped my organization?
We save a lot of time with Kinesis, but it's difficult to measure just how much. We have something similar in some other processes: elsewhere, we developed a tool that takes the contents of the stream, places them into a file, manually uploads them to S3, and copies the files into Snowflake. That could have been done with Kinesis, and it would have taken two weeks to a month less to get it production-ready.
What is most valuable?
The first would be the putRecordBatch function in the AWS SDK, using the asynchronous client. It allows you to put a list of records in one put-record request, which saves time and is more efficient. Also, by using the asynchronous client, the records are sent in the background using an internal thread pool that can be configured for your needs. In our performance testing, we found this setup was the fastest solution. It didn't impact the performance of the calling process at all.
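The asynchronous client described here is the Java SDK's; as a rough synchronous analogue, this boto3 sketch (the delivery stream name is a placeholder) shows the batched call plus a retry of individual failures, which is the "fire and forget, retry occasionally" pattern described earlier.

```python
import json
import boto3

firehose = boto3.client("firehose")
STREAM = "snowflake-landing"  # placeholder Firehose delivery stream

def put_batch(rows):
    """Send a list of dicts in one PutRecordBatch call, retrying only the failures."""
    records = [{"Data": (json.dumps(r) + "\n").encode("utf-8")} for r in rows]
    resp = firehose.put_record_batch(DeliveryStreamName=STREAM, Records=records)
    if resp["FailedPutCount"]:
        # RequestResponses lines up with the input; re-send only entries that errored.
        retries = [rec for rec, result in zip(records, resp["RequestResponses"])
                   if "ErrorCode" in result]
        firehose.put_record_batch(DeliveryStreamName=STREAM, Records=retries)

put_batch([{"order_id": 1, "total": 9.99}, {"order_id": 2, "total": 4.50}])
```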
The second one would be the ability to link the stream to destinations other than S3 purely through configuration of the stream, without changing a line of code.
Lastly, you can also link a Lambda function to the stream to transform the data as it arrives, before writing it to S3, which is great for performing aggregations or enriching the data with other data sources.
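A hedged sketch of such a transformation Lambda (the enrichment field is made up): Firehose hands the function records base64-encoded and expects each one back with its recordId, a result flag, and the re-encoded data.

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda: enrich each record before it lands in S3."""
    output = []
    for record in event["records"]:
        row = json.loads(base64.b64decode(record["data"]))
        row["source"] = "backend-mysql"            # made-up enrichment field
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",                        # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                (json.dumps(row) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```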
What needs improvement?
The default limit that they have, which at the moment is 5,000 records per second (I'm talking about Kinesis Firehose which is a specialized form of the Amazon Kinesis service) seems too low. Actually, on the first week that we deployed it into production, we had to roll it back and ask Amazon to increase the default limits.
It's mentioned in the documentation, but I think the default settings are far too low. The first week it was extremely slow because the records were not properly ingested in the stream, so we had to try it again. This happened the first week that we deployed it into production, but after talking with Amazon, they increased their throttling limits up to 10,000 records. Now it works fine.
For how long have I used the solution?
We've been using this solution since September 2019.
What do I think about the stability of the solution?
The stability is great. I'd say that maybe we have it running 99% of the time, and nothing stops it.
What do I think about the scalability of the solution?
Amazon Kinesis is definitely scalable. We have huge spikes of data that get processed around midnight and Kinesis handles it fine.
It automatically scales up and down; we don't need to provision capacity for that. It's great.
How are customer service and technical support?
The only time that we needed to contact Amazon was to ask them to increase the throttling limit. They replied to us very quickly and did what we asked.
Which solution did I use previously and why did I switch?
Initially, we were evaluating Kafka. I think Kafka is faster, but it's less reliable in terms of maintenance; however, when Kafka works, and you have it properly configured, it's much better than Kinesis, to be honest.
On the other hand, Kinesis provides us with better maintenance. Our DevOps team is already oversaturated, so we didn't want to increase the maintenance cost of the production environment. That's why we decided to go with Kinesis; because performance-wise, it's easy to configure and maintain.
How was the initial setup?
I found this solution to be really easy to configure. The essential parts of the configuration include naming the stream and also configuring the buffering time that it takes for a record to get ingested into S3 (how long it will be in the stream until it's put into an S3). You also need to link the Amazon S3 buckets with the Amazon Kinesis stream. After you've completed these configurations, it's pretty much production-ready. It's very, very easy. That's a huge advantage of using this service.
What about the implementation team?
Deployment took a few minutes.
You don't need a deployment plan or an implementation strategy because once you configure it, you can just use the stream. It isn't something that has to be versioned or bundled with a library; the stream is completely abstracted away. You only need to configure it once, and that's it.
What was our ROI?
We have seen a return on our investment with Amazon Kinesis. We are able to process data without any issue. It's our solution for ingesting data in other databases, such as Snowflake.
Which other solutions did I evaluate?
We evaluated developing the stream processing manually, or using Kafka.
What other advice do I have?
If you want to use a stream solution you need to evaluate your needs. If your needs are really performance-based, maybe you should go with Kafka, but for near, real-time performance, I would recommend Amazon Kinesis.
If you need more than one destination for the data that you are ingesting in the stream, you will need to use Amazon Kinesis Data Streams rather than Firehose. If you only want to integrate from one point to another, then Kinesis Firehose is a considerably cheaper option and is much easier to configure.
From using Kinesis, I have learned a lot about the asynchronous way of processing data. We had always done things in a more sequential way.
On a scale from one to ten, I would give this solution a rating of eight.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Founder & CTO at QuriousBit
Helps to stream events but needs improvement in its limits
Pros and Cons
- "I have worked in companies that build tools in-house. They face scaling challenges."
- "Amazon Kinesis should improve its limits."
What is our primary use case?
I work in a gaming company that builds games for the global market. We use Amazon Kinesis to stream events.
How has it helped my organization?
I have worked in companies that build tools in-house. They face scaling challenges.
What needs improvement?
Amazon Kinesis should improve its limits.
For how long have I used the solution?
I have been using the product for a month.
What do I think about the stability of the solution?
I rate the tool's stability a ten out of ten.
What do I think about the scalability of the solution?
My company has two to three users for Amazon Kinesis.
How was the initial setup?
I rate the tool's deployment a nine out of ten. Deployment takes one day to complete.
What's my experience with pricing, setup cost, and licensing?
The tool's entry price is cheap. However, pricing increases with data volume.
What other advice do I have?
I rate Amazon Kinesis a seven out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Easy to use, easy to configure, and stable
Pros and Cons
- "Setting Amazon Kinesis up is quick and easy; it only takes a few minutes to configure the necessary settings and start using it."
- "Kinesis can be expensive, especially when dealing with large volumes of data."
What is our primary use case?
We use the solution for streaming data, in simpler terms. For example, there is a backend application; we need to make that data available for analysis. On the backend side, we don't store the history. We get all the events regarding changes incrementally. If something changes, an event is generated. This is a convenient way to keep track of all the changes.
What is most valuable?
Amazon Kinesis is similar to Kafka, another type of streaming technology, which can be referred to as a queue service to exchange data. Setting Amazon Kinesis up is quick and easy; it only takes a few minutes to configure the necessary settings and start using it. In comparison, Kafka requires setting up a cluster, even if it is available in the cloud, which can be time-consuming. Amazon Kinesis has a user-friendly interface, making it easy to adjust and scale up the number of shards if needed. The cloud is especially useful when starting something new and not needing a lot of resources initially, but with the potential to upgrade later when there is a larger load. Although there is a cost associated with using the cloud, Amazon Kinesis is very flexible and can be easily adjusted when necessary, making it a great advantage.
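A hedged sketch of the shard adjustment mentioned above, using boto3 (the stream name and target count are placeholders):

```python
import boto3

kinesis = boto3.client("kinesis")

# Double the capacity of the stream; UNIFORM_SCALING splits or merges shards evenly.
kinesis.update_shard_count(
    StreamName="events-stream",     # placeholder stream name
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)
```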
What needs improvement?
Kinesis can be expensive, especially when dealing with large volumes of data.
For how long have I used the solution?
I have been using the solution for two years.
What do I think about the stability of the solution?
The solution is stable and I don't recall any issues. Once we set the solution up, it usually works and we only investigate if we encounter a problem. However, if there is a large number of events to process, due to limited capacity for example with the shards, then some events may be delayed. This can be easily resolved by adjusting the configuration to provide more capacity.
What do I think about the scalability of the solution?
The solution is scalable, but this also comes with a financial cost. If we want to increase throughput, we can simply increase the number of shards or adjust some config parameters, which can be done in a matter of minutes if we know how to do it. We can scale the solution almost without limitation.
How was the initial setup?
There are a lot of details involved with the initial setup, so if we need something at the outset, we can set up the solution easily. However, the details are important since they are related to how much money we pay and we need to tailor the solution to our needs. If we want to do something more sophisticated, then we need to spend more time comprehending all the details. Initially, we can easily set something up, but eventually, we need to understand it better and adjust it more to our needs.
What's my experience with pricing, setup cost, and licensing?
Cloud services are often cheaper in the beginning, but when the amount of data and needed resources grows, they cost more and more. In my opinion, it is sometimes simpler to use an existing service rather than having to maintain our own internal infrastructure. This way, we can focus on the things we are good at and can make money from, rather than having to employ people to support the infrastructure. In general, cloud services are very convenient to use, even if we have to pay a bit more, as we know what we are paying for and can focus on other tasks. However, if the scale is large, I would consider making changes depending on the situation.
What other advice do I have?
I give the solution a nine out of ten. Amazon Kinesis is easy to use and configure, especially in the beginning. The solution is stable and I have not encountered any issues with it, nor am I aware of any. The solution is effective.
I don't see any missing features in Amazon Kinesis. I haven't spent a lot of time with this interface, as I have only configured it once. If any changes need to be made, I simply adjust Amazon Kinesis and it works. I only go into Amazon Kinesis if there is a need for a new data stream to be included or if the throughput needs to be increased. This doesn't happen very often.
Depending on the requirements, if there is a need to stream data and access it in real time, then I would consider Amazon Kinesis. However, if there is no need for real-time data access, then I will look for some other cheaper options. Companies such as Redshift, Snowflake, and BigQuery are developing databases with built-in streaming functionality. Depending on the case, this may be an option to consider. It also depends on the target; sometimes it is better to use the mechanisms available in the target tool. If we want to have the data on a stream or some hot stories, then I would consider Amazon Kinesis in that case.
Which deployment model are you using for this solution?
Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.