The most valuable feature is its robustness. Message queues need to be extremely reliable because they are the glue between system components. Speed and good scaling capabilities are also important.
It allows developers to focus on application functionality without having to reinvent interprocess communication, which is difficult. It also allows us to develop smaller, more efficient, and less complex subcomponents of a larger application.
I would like to see better documentation on how to set up complex webs of RabbitMQ servers — master/slave, multi-master, etc.
I have been using RabbitMQ for 7+ years.
We have not encountered any stability issues.
We have not encountered any scalability issues.
We were using IBM MQ, but it was too costly and not open source.
The initial setup was simple for my applications, but I have not used RabbitMQ on a complex project that would require clusters of servers.
My advice is to read the message boards and play with the API.
The message queue itself, because it is easy to use and reliable, and our load on it is not big.
We are using it to distribute pieces of large jobs to multiple machines, which improves performance several-fold.
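That pattern can be sketched as follows, assuming the Python pika client and hypothetical queue and host names; this is an illustration of the idea, not the reviewer's actual code. The job is split into chunks and each chunk is published as one durable message, so every worker machine pulls the next unclaimed chunk.

```python
import json

def split_job(items, num_chunks):
    """Split a list of work items into roughly equal chunks,
    one chunk per worker machine (pure helper, no broker needed)."""
    chunk_size = (len(items) + num_chunks - 1) // num_chunks  # ceiling division
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def distribute(items, num_workers, queue="jobs", host="localhost"):
    """Publish each chunk as one persistent message to a durable queue.
    Queue name and host are illustrative assumptions."""
    import pika  # third-party client: pip install pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    for chunk in split_job(items, num_workers):
        channel.basic_publish(
            exchange="",
            routing_key=queue,
            body=json.dumps(chunk),
            properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
        )
    connection.close()
```

A worker would consume from the same queue after calling `channel.basic_qos(prefetch_count=1)`, so a fast machine naturally takes more chunks than a slow one.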
The ability to handle a large message load should be improved.
People usually use RabbitMQ as a lightweight messenger; with a large message load, they are inclined to use Kafka instead. But at the beginning of most projects the data is small, so people do not need a Kafka-type messenger and are more likely to use RabbitMQ. If RabbitMQ could handle a large message load and support ordered delivery, then as the project grows and the data gets bigger, people could keep using RabbitMQ rather than having to find another tool like Kafka, which would be much more convenient.
Half a year.
Didn’t have issues.
Didn’t have issues.
Very good. 8/10.
Simple. We followed the RabbitMQ tutorial for Python.
We are using it internally with a very small data load during the development period, so it is free for us right now.
Yes, I evaluated Kafka.
Kafka is more suitable for a large number of events delivered in order. RabbitMQ is more suitable for a relatively small number of messages, which is my situation, and I don't care about message order.
RabbitMQ is a very easy-to-use and reliable message broker. If the workload has a relatively small message volume, RabbitMQ is the most robust and reliable choice.
Append-only tables, data compression, and bulk load and extraction using external tables are very valuable features for us.
We have improved our quarterly statement turnaround dramatically and have been able to sustain it as data increases.
With the ORCA optimizer, the earlier append-only feature has been upgraded to append-optimized, and we can now update the data in former append-only tables just like any other heap tables. However, I found this has increased the time taken by the VACUUM ANALYZE operation on these tables, from about 10 minutes to over an hour on large tables. In our case we don't need to update our append-only tables, so this became a drawback. VACUUM ANALYZE on append-optimized tables needs to be improved.
Backup and restore performance needs to be improved.
The ORCA optimizer, when turned on, is not consistent: some workloads show improved performance and some become very slow. This needs to be improved.
I have used it for about 4 years now.
The pre-ORCA version was stable. The ORCA release is not stable; some workloads slowed down with the new release even when the new optimizer is not turned on.
Tech support is average. They lack information about new features in new releases and their possible impact.
Earlier, we were using an OLTP-based RDBMS solution. We realized we needed an OLAP solution, and also something that could scale horizontally.
The processing speed of queries used for reporting is the most valuable feature.
Not Applicable for the area I was responsible for, as we ended up migrating away from Greenplum.
Stability and scalability for a large number of concurrent applications and their users need improvement. The results we got were very inconsistent, depending on the number of connections taken up by multiple applications and users.
When our application was first deployed on Greenplum, the number of users on the rack on which Greenplum was deployed was very limited. We got excellent query performance results at that time. But as more applications started getting deployed, we began getting very inconsistent performance results. Sometimes the queries would run in sub-seconds, and sometimes the same queries would run 10 times longer. We found the reason was that Greenplum limits the number of active concurrent connections. Once all connections are in use, any new query gets queued, and thus response time suffers.
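The queueing effect described above can be sketched with a toy model; the slot count and query time below are illustrative assumptions, not measurements from this system.

```python
def completion_times(num_queries, slots, query_time):
    """Toy model of a database that runs at most `slots` queries at once.
    All queries arrive together; extras wait for a free slot, so the last
    batch finishes many times later than the first."""
    return [(i // slots + 1) * query_time for i in range(num_queries)]

# With 10 connection slots and a 1-second query, 100 simultaneous queries
# finish between 1 s (first batch) and 10 s (last batch): the same query
# can appear 10x slower purely because of connection queueing.
times = completion_times(100, slots=10, query_time=1.0)
```

This is why capacity planning around the concurrent-connection limit matters so much for consistent response times.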
The impression we got was that the EMC sales team that sold Greenplum to the organization did a great job. But later on, the ball was dropped when it came to educating us on which types of applications are suitable for Greenplum, and how to configure it to get optimal performance. When Pivotal took over support of Greenplum, their consultant visited us to go over the issues we were having. He advised us that Greenplum was not the best environment for our application needs. We ended up migrating our application out of Greenplum, along with a few other applications.
There was no issue with the deployment.
There were issues with the stability.
There were issues with the scalability.
Ensure that this is the right tool for your needs. For instance, Greenplum is not the best tool for cases where data has to be kept up to date in real time. Capacity planning is key to success, once you do decide it is the right tool for you.
We use it for data warehousing.
For complex queries, which would normally take a long time, and for reporting, it is very efficient. It doesn't take a long time to execute any report for the end user.
Implementing an upgrade takes a long time, though maybe that differs from one instance to another; I'm not sure.
Also, one disadvantage, not with the product itself but overall, is the scarcity of expertise in the marketplace. It's not easy to find a Greenplum administrator, compared to other products such as Oracle. We used to work with such products, but for Greenplum it's not easy to find resources with knowledge of database administration.
If we face any issues they're normal and we open tickets.
It's scalable. I would rate scalability seven out of 10.
We hired one DB admin for Greenplum. If he faces any issues he opens tickets with the vendor, but most of the issues, 90% of them, he is able to solve without support.
We used other products before, but when we worked with Greenplum and compared it to other products on the market, we found it's a good product.
Before Greenplum, we used Oracle, but it was mostly obsolete, so we had to upgrade our tools. We needed a database with an API tool.
I'm not a professional in setup, but the setup of the environment itself was managed by us, across development, testing, and production servers. We are able to maintain it. I don't think it is complicated.
Most of the issues can be solved without referring back to support. A very small minority of issues required support from the vendor.
Pricing is good compared to other products. It's fine.
We did a comparison among some databases, one of them Greenplum. We assessed features, did a comparison in terms of the price, then we chose Greenplum. And we've retained it. We've found it's a good product, to date.
Oracle Exadata was part of the comparison, as was IBM Netezza. In terms of quality and price, compared to the other products, we chose Greenplum. Also, to be honest, at that time we got a good offer: use it for the first year at a minimal price. They opened a support contract with us later. That was one of the advantages.
I give it an eight out of 10. To bring it up to a 10, they need to interact more with customers. They need to explain the features, especially when there are new releases of Greenplum. I know, just from information I've found, that it has other features; it can be used for analytics and for integration with Big Data and Hadoop. They need to focus on this part with the customer.
Also, they need to enhance integration with other Big Data products. They need to adapt more and offer more features, because customers are looking for these things in the market now. They already have the product itself, but they need to integrate with Big Data platforms and open a bi-directional connection between Greenplum and Big Data. They need to focus on these features more.
But, from my perspective, for what I'm looking for, I can say it's a good product. Most of the features I'm looking for are available.
Processing speed – especially loading and transformation of large data sets.
Before we implemented Greenplum, our weekly data loads (for third-party marketing data sets) were taking over three days. (We also had some monthly data that could take up to three days to load and transform via Informatica.) After we implemented Greenplum, the loads were reduced to less than nine hours. Previously, we were receiving data early Wednesday morning and not getting it out to the sales force (if we were lucky) until noon the following Monday. Now we get the data to the field early Friday morning, before they wake up.
The Greenplum appliance itself has had some reliability issues, so it would be great if that could be improved in the next version. More critical, though, is that the latest devices are not backward compatible; i.e., we have to replace our entire environment to upgrade. That's quite an expense. I hope they can improve the upgrade roadmap in the future.
We have used EMC Consulting for some projects, and we have lots of EMC storage.
If you can, do a benchmark with other MPP options, including cloud alternatives. Although our Greenplum implementation was very successful (going on four years ago), I wish we had benchmarked against at least Teradata and Netezza (now IBM). Today, I would consider not even buying hardware, just doing it all in the cloud.
RabbitMQ helped us build a database synchronization framework that allowed us to transfer our clients' data to our cloud-based data processing centers.
The web management tool.
I have used this solution since 2013.
We had several de-clustering problems.
We did not have any scalability problems.
I have never used support.
This is the first solution we implemented.
It was a very simple setup. We had some issues with the home folder being on a non-standard system drive (the location of the RabbitMQ cookie had to be changed).
The Community Edition works fine for us.
We evaluated several other solutions: IBM MQSeries and Microsoft MSMQ.
Use it for implementations that require a queuing solution. It is easy to overuse it as a universal communication bus for the entire system.
With a RabbitMQ cluster servicing our microservices, we don't have any downtime and we don't lose any data. We can update and/or upgrade the microservices without downtime.
I have used RabbitMQ for four years.
We did have stability issues in the past. After a shutdown, the cluster would not start until we deleted a corrupted file. This occurred more than a year ago.
It works as expected, i.e., flawlessly.
We have not needed any technical support as of yet.
We did not evaluate any previous solutions.
Just enter this command: $ apt-get install rabbitmq-server
It’s open source with paid support.
We looked at Kafka, but we needed the routing as well.
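For anyone weighing the same trade-off: what RabbitMQ adds over Kafka here is that a topic exchange matches each message's routing key against binding patterns. The helper below reimplements the wildcard rules in plain Python purely for illustration; in real code the broker does this matching, and you would simply declare a topic exchange and bind queues (e.g. with pika, as noted in the docstring).

```python
def topic_match(pattern, key):
    """Illustrative reimplementation of RabbitMQ topic-exchange matching:
    '*' matches exactly one dot-separated word, '#' matches zero or more.
    In real code you would let the broker do this, e.g. with pika:
        channel.exchange_declare(exchange="events", exchange_type="topic")
        channel.queue_bind(queue="audit", exchange="events",
                           routing_key="orders.#")
    (exchange and queue names here are hypothetical)."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words  # pattern exhausted: match only if key is too
    head, rest = pat[0], pat[1:]
    if head == "#":
        # '#' may swallow zero or more words
        return any(_match(rest, words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    if head == "*" or head == words[0]:
        return _match(rest, words[1:])
    return False
```

So a queue bound with `orders.*` receives `orders.created` but not `orders.created.eu`, while `orders.#` receives both.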
Start it in Docker and use Java Spring Boot or Node.js with amqplib to connect to it. It has transformed how I think data should flow in an organization.