I use the solution on cloud and bare metal servers for our company's cloud-native application development needs.
My company's IT personnel work with VMware products since we have its license.
Speaking about the features that have proven most effective for managing database operations, I would say that RabbitMQ is used for the middleware layer, where it gives good performance in terms of TPS (transactions per second) and availability.
Other tools besides RabbitMQ provide good TPS and HA. If RabbitMQ can match what its competitors provide, then we can probably give it a ten out of ten.
I have been using VMware Tanzu Data Solutions for around ten to twelve years. I am a user of the solution.
Stability-wise, I rate the solution a seven or eight out of ten. I am exploring other products right now. So far, it has performed well.
More than 1,000 people, including more than ten administrators, use the tool in our company.
My company has RabbitMQ's support. The solution's technical support is good. My company gets a quick response from the product's support team. I rate the technical support an eight to nine out of ten.
Positive
I have experience with MSN, which is a Microsoft product, but it was not offering good performance, owing to which we had to move to RabbitMQ.
There is no point in spending money when the tool is an open-source product that performs well. Everything in the tool is a value addition to our platforms.
RabbitMQ is an open-source tool, so there is no need to pay anything for it.
I am currently exploring options like Apache Pulsar and NATS, and I see that both offer good performance in terms of TPS. The benchmarks and numbers of both tools are good compared to RabbitMQ. Apache Pulsar and NATS are committed projects and offer up to 100 times the performance of RabbitMQ. Apache Pulsar and NATS are open-source tools, but both have commercial support available, similar to RabbitMQ.
Speaking about how the automation capabilities enhance our company's data management processes: for any volume of messages, we basically use RabbitMQ as the messaging platform. Our company handles 1,500,000,000 (1.5 billion) transactions per day.
RabbitMQ has helped our teams' collaboration and workflow efficiency since everybody is familiar with the tool, which we have been using for a while now.
My company has not explored the AI part of the tool.
I recommend the product to others.
My company has recommended VMware Tanzu Data Solutions to many other people since it is free and serves everybody's purpose.
I rate the tool an eight out of ten.
Postgres is a database, just as Oracle is a database.
We replaced our paid Oracle database with the open-source Postgres database and migrated around 50 lakh (5 million) consumer records there, across different rows and tables. We deployed this in separate production, staging, and testing environments. We created three deployments, and each deployment has three servers.
Earlier, we were using the paid Oracle database, which is a commercial product. Then we switched to open-source because we have lots of new projects and could not handle the licensing costs. So we migrated our data from the Oracle Database Server to the Postgres server. It helped us with cost; we saved lots of money in terms of licensing.
It is open-source.
It provides database load-balancing capability through Pgpool, an open-source connection pooler and load balancer.
We can also create a cluster between the databases in active/standby mode.
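A minimal sketch of how an application might sit behind Pgpool, assuming a Pgpool-II instance on its default port 9999 in front of an active/standby pair; the host, credentials, and consumer_records table are hypothetical:

```python
# Requires: pip install psycopg2-binary
import psycopg2

# The application connects only to the Pgpool endpoint; Pgpool routes writes
# to the primary and can balance read-only queries across the standby.
conn = psycopg2.connect(
    host="pgpool-host",   # hypothetical Pgpool-II address
    port=9999,            # Pgpool-II's default listen port
    dbname="consumers",
    user="app_user",
    password="app_password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM consumer_records;")  # hypothetical table
    print("rows:", cur.fetchone()[0])
conn.close()
```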
The solution is stable.
It's scalable.
It doesn't have any GUI-based monitoring tools. Oracle has proprietary tools for monitoring all of its databases. Postgres doesn't have any graphical capabilities for monitoring the database. We have to do it from the CLI and run various commands; then we can get the data from the cluster and database statistics tables.
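For illustration, a minimal sketch of that kind of command-line monitoring, assuming psycopg2 and hypothetical connection details; it reads the built-in pg_stat_database statistics view:

```python
import psycopg2

conn = psycopg2.connect(host="db-host", dbname="postgres",
                        user="monitor", password="secret")
with conn, conn.cursor() as cur:
    # Per-database activity from the built-in statistics collector.
    cur.execute("""
        SELECT datname, numbackends, xact_commit, xact_rollback
        FROM pg_stat_database
        WHERE datname NOT LIKE 'template%';
    """)
    for datname, backends, commits, rollbacks in cur.fetchall():
        print(f"{datname}: {backends} connections, "
              f"{commits} commits, {rollbacks} rollbacks")
conn.close()
```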
The initial setup is complex.
It would be ideal if they could provide an active cluster in Postgres. If one primary DB goes down, it should automatically fail over to the second database.
We've been using it for four years. I've used it since around 2018.
It's stable. There are no bugs or glitches. It doesn't crash or freeze. It's reliable.
It is very scalable.
From the user's perspective, we have around 500 users. However, we have around 50 lakh (5 million) consumer records in the solution.
We plan to increase usage. We already added the sponsors, and we require the capacity and the transactional processing. Therefore, it's scalable. We don't have any licensing restrictions, so we can add on as required.
While we don't have paid technical support, we do have community support. They are quite good.
Positive
We used to use Oracle Database.
We required lots of planning during the initial setup. The migration phase is very complex when you move from Oracle to Postgres. The installation and configuration have a moderate amount of difficulty.
The deployment and maintenance require three people, including one system administrator and two database administrators.
We handled the initial setup in-house. We didn't need any outside assistance.
We haven't invested any money into the solution and therefore haven't looked into ROI.
We have zero licensing costs. The solution is open-source.
We also evaluated some parts of MySQL. However, we didn't find it very suitable or scalable.
You need to be clear about your use cases and the transactional requirements you expect from the database architecture before beginning with this solution. You need to design your architecture around the scalability and reliability of your applications. You need to take this into account before deploying any solution on Postgres.
I'd rate the solution eight out of ten.
We are using the product for analytical purposes like reporting and billing.
We maintain the servers on our premises. Compared to Snowflake, Greenplum is a cheap solution for analytical purposes.
The latest version is better than the older ones. The solution updates very fast. The loading speed is very good.
Maintenance is time-consuming. It takes time to VACUUM and ANALYZE the tables to remove fragmentation.
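As an illustration of that maintenance routine, here is a minimal sketch, assuming psycopg2 and hypothetical table names; VACUUM cannot run inside a transaction block, so autocommit is enabled:

```python
import psycopg2

conn = psycopg2.connect(host="gp-master", dbname="analytics",
                        user="gpadmin", password="secret")
conn.autocommit = True  # VACUUM must run outside a transaction
with conn.cursor() as cur:
    for table in ["billing.invoices", "reporting.daily_usage"]:  # hypothetical
        # Reclaim dead-row space and refresh planner statistics in one pass.
        cur.execute(f"VACUUM ANALYZE {table};")
conn.close()
```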
I have been using the solution for five years.
The solution is stable.
Compared to Snowflake, Greenplum is not scalable. The solutions used on premises are not scalable compared to the cloud solutions. Around 200 to 300 people use the product in our organization.
Support is fine. We do not use high-level support. The support team is quite supportive.
The setup is easy. It is not complex.
We must set up the instance and run scripts to deploy the product. It is very simple. We can deploy the scripts with one or two commands. One person is enough to deploy the solution.
It’s an open-source solution. There are no expenses for using it.
We are using the latest version of the solution. Some of our clients asked us why we were not using Snowflake, so we are evaluating Snowflake as per their request. If we replace Greenplum with Snowflake, the purpose would be to minimize maintenance time and enhance scalability. If someone is looking for a cheap solution, Greenplum is a good choice for them. Overall, I rate the product a six out of ten.
We specifically use the solution for queuing purposes, and it has proven to be fantastic in that aspect.
The solution's best feature is its exceptional speed and efficient utilization of resources. It uses memory, disk, and processor very efficiently. It offers high performance while maintaining a low cost.
The solution is a fine product. However, to make it perfect: in some cases, there might be a need to traverse the queue. RabbitMQ currently lacks the capability to archive the queue, which would essentially turn it into a log.
For such requirements, you may need to explore other options like Kafka, or custom drivers that allow traversing the entire queue. In RabbitMQ, while you can traverse the entire queue, you need to devise a workaround to retain the messages. For example, you can read a message from one queue, publish it to another queue or keep it some other way to retain the desired entries, and then stop at that point.
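A minimal sketch of that drain-and-republish workaround, assuming the pika client and hypothetical queue names "source" and "archive":

```python
# Requires: pip install pika
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="source", durable=True)
ch.queue_declare(queue="archive", durable=True)

# Pull each message, republish it to the archive queue, then acknowledge it
# so RabbitMQ removes it from the source queue.
while True:
    method, props, body = ch.basic_get(queue="source")
    if method is None:          # queue is empty; stop traversing
        break
    ch.basic_publish(exchange="", routing_key="archive", body=body,
                     properties=props)
    ch.basic_ack(delivery_tag=method.delivery_tag)

conn.close()
```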
Additionally, the need for support may vary depending on the usage and potential heavy loads on the system. The support feature could benefit from some improvement in terms of accessibility and responsiveness.
I don't encounter significant challenges or areas that require improvement while using the solution. Everything works smoothly, and I find it well thought out. It has excellent compliance with AMQP 0-9-1. Overall, I have had a positive experience with the solution.
I have been using the solution since 2017.
The solution is highly stable. As an example, at this moment, I am in front of my admin panel and can confirm that it has been running continuously for the past 173 days.
The solution is scalable, although I have not yet needed to use the clustering option. A single server is sufficient and efficiently handles most of our workloads. It effectively uses system resources such as memory, CPU, and disks, resulting in excellent performance with minimal resource usage.
So far, we have not needed any support from the solution's official support team or community. We rely on Google search and our team's research, leveraging various online resources to explore and implement solutions independently.
When I joined my current company, I initially explored Apache Kafka, but I realized that Kafka is primarily a log system rather than a queuing system. I encountered limitations with Kafka, such as maintaining pointers for each process and manually removing messages from the queue.
Comparatively, RabbitMQ proved to be more convenient as it automatically deletes messages from the queue when using auto or manual acknowledgment. Considering these factors, we switched from Kafka to this solution due to its efficiency.
The solution's installation process was straightforward, especially if you have good software installation skills and a good command of Linux. Once the RabbitMQ software is downloaded and extracted, the installation is complete.
After that, accessing the admin interface allows for a user-friendly GUI experience. The deployment process took around half an hour.
We have a private cloud infrastructure using VMware, which means our servers are running on-premises and are owned by our company. We have a limited number of servers running the solution.
Specifically, we have one primary server and one secondary server, without implementing clustering. Replicating these two servers is sufficient for our workload, and they can be installed by a single system administrator in just half an hour without any issues, provided a Linux machine is already installed and available.
Overall, I would rate the setup experience as nine out of ten.
The solution's pricing is cost-effective as it does not involve significant expenses. Licensing is required only for the server, while clients do not need any licensing. Therefore, it proves to be a cost-efficient option.
In my previous organization, we heavily relied on Tibco messaging solutions like Tibco RD and Tibco RV (Rendezvous) for the entire rating system. I have also explored Apache Kafka.
If you are looking for a queuing system for your application that guarantees assured delivery and ensures single delivery without duplicates, RabbitMQ is the right solution, as it provides all of these capabilities with ease of use.
With RabbitMQ, your application doesn't need to worry about receiving duplicate messages as the solution handles that internally, ensuring that each message goes through a single process for one delivery.
I highly recommend the solution and would rate it an eight out of ten.
I've found that the data compression and ETL are the most valuable features for us.
In version 4.3.8.1, Pivotal confirmed that even restoring a schema-level backup from a DB-level backup is possible.
- Restoring a schema from a DB-level backup has been tested and works fine.
ORCA, the Pivotal query optimizer, produces good query plans but does not work with all business logic. This needs to be tested based on your requirements.
Loading batch data has really improved the efficiency of our organization.
Running extracts has drastically improved timings. Being MPP, which favors bulk operations, we were able to do 1.5 million calculations in 15 minutes.
Scaling of the solution needs to be improved.
An HDFS (Hadoop) connection is available, whereas connections to arbitrary file systems are not.
Connecting Greenplum with GemFire (in-memory) to load, sync, and reconcile data would be really valuable.
I've used it for nearly 3 years.
We had deployment issues after installing new patches. Every new patch has some business impact or other, so the release notes need to be reviewed.
It's been stable for us.
They have a quick turnaround, but digging into the actual information takes time, based on the severity.
Technical Support: The first level of technical support is not that effective (based on my own observation).
We were using Sybase and handling massive data; bulk operations were not possible.
It was simple.
We primarily use the solution for consumers and publishers. It's for messaging and consumer publishing. That's it.
The solution is simple to use.
It's great for messaging and consumer publishing.
Companies can scale the solution, so long as they have server room.
The stability is good.
The user interface could be improved. We have an interface that shows the consumption rate, the number of consumers, and their occupation rate. We should have a column in that interface showing the estimated time until, at the current rate of consumption, all the messages in a specific queue will have been consumed. That would be great. I wanted to add this; however, as it is right now, the JavaScript would have put too much load on the browser. Basically, I'd just like to see the consumption rate in each queue without too much fuss.
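For what it's worth, that estimate can be computed client-side from the RabbitMQ management HTTP API; a minimal sketch, assuming the requests library and hypothetical host, credentials, and queue name:

```python
# Requires: pip install requests
import requests

BASE = "http://rabbit-host:15672/api"
AUTH = ("guest", "guest")

q = requests.get(f"{BASE}/queues/%2F/orders", auth=AUTH).json()  # %2F = vhost "/"
backlog = q["messages"]                                  # messages in the queue
# Recent acknowledgment rate (messages consumed per second), if available.
rate = q.get("message_stats", {}).get("ack_details", {}).get("rate", 0)

if rate > 0:
    print(f"~{backlog / rate:.0f} s until the queue drains at the current rate")
else:
    print("no recent consumption; drain time cannot be estimated")
```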
The solution could use some plugins that could be integrated into the server installation. We had a plugin that we used to delay messages, which from one version to the next was integrated into the server setup. Maybe it was more of an extension. However, more plugins could also be integrated into newer versions of Rabbit.
I've used the solution since 2013 or 2014. It's been about eight years at this point.
The solution is usually stable. We have problems with space on the Rabbit servers. When they are full, we might lose everything. That's a big no-no. This is a problem for Kafka as well, however, we have higher thresholds in that area. Rabbit is the poor brother to Kafka, so it receives less space. That's why, sometimes, in some departments, this problem occurs.
The solution can scale; however, we use a lot of space for Kafka. We have clusters across the servers, and there may be more for each department. If new needs appear, we can increase the number of servers in a cluster to better manage messages. As long as your company can increase the number of servers, it can scale.
We have about 100 departments that use this solution in some way.
In our case, we have five people in our department, and we have two clusters with Rabbit for two different directions. For us, it's enough. We do not plan to increase usage.
I've never directly contacted technical support. We use the recommendations on the site, which are very good. I appreciate the recommendations; however, I'm not sure about the maintenance of the documentation from one version of Rabbit to the other. The documentation for older versions might be less accurate.
Other departments might use, for example, Kafka, however, I'm unsure as I have no visibility on them.
I was there when the solution was initially implemented and, from what I recall, it took half a year.
It was completely new. No one knew anything about it. However, we knew that we had to do something to improve the communication between departments. It was a good solution. That said, it took a long time before everyone understood how it works.
We had a few dedicated people who liked the idea of Rabbit and implemented it. It took a while for the rest of the company to get behind them and learn how to do it.
There are one or two people at any given time available to handle any type of maintenance responsibilities.
We handled the implementation process ourselves.
We're using a few different versions. It depends on the department. Some departments have the latest, some don't, some use a very old version. I'm using 3.8. We do have plans to make an upgrade.
It was a few years ago now when I learned this process of separating publishers from consumers in terms of messages and communicating between departments. This was the biggest game changer for me. I'd advise new users to study that aspect and understand it.
I'd rate the solution at an eight out of ten. It's a very good tool and we use it all the time.
Greenplum is a distributed database that we used for data warehousing.
The parallel load features mean that Greenplum is capable of high-volume data loading in parallel to all of the cluster segments, which is really valuable.
The service management capabilities are good.
The external data integration with Parquet, Avro, CSV, and unstructured JSON works well (see the sketch below for how it combines with the parallel load).
It has an advanced query optimizer.
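A minimal sketch of the parallel load and external-table features together, assuming a gpfdist server already running on an ETL host and hypothetical table and column names; every Greenplum segment pulls from gpfdist in parallel:

```python
import psycopg2

conn = psycopg2.connect(host="gp-master", dbname="dwh",
                        user="gpadmin", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE READABLE EXTERNAL TABLE staging_sales (
            sale_id   bigint,
            amount    numeric,
            sold_at   timestamp
        )
        LOCATION ('gpfdist://etl-host:8081/sales_*.csv')
        FORMAT 'CSV' (HEADER);
    """)
    # The INSERT ... SELECT runs on all segments at once, which is where
    # the parallel-load speedup comes from.
    cur.execute("INSERT INTO sales SELECT * FROM staging_sales;")
conn.close()
```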
The initial setup is somewhat complex and the out-of-the-box configuration requires optimization.
- OS settings need to be tuned according to the Install Guide.
- Only group/spread mirroring is handled by gpinitsystem; block mirroring is manual (see the Best Practices Guide).
- DB maintenance scripts are not supplied (some of them were added in the cloud version); they need to be implemented based on the Admin Guide.
- It comes with two query optimizers. PQO is the default, but some queries perform better with the legacy planner, which needs to be set (see the sketch below).
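A minimal sketch of switching optimizers per session, assuming psycopg2 and hypothetical connection and table names; the optimizer GUC toggles PQO (GPORCA), and turning it off falls back to the legacy planner:

```python
import psycopg2

conn = psycopg2.connect(host="gp-master", dbname="dwh",
                        user="gpadmin", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SET optimizer = off;")   # use the legacy planner
    cur.execute("EXPLAIN SELECT count(*) FROM sales;")  # hypothetical table
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```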
We have been working with Greenplum for about five years.
Greenplum is pretty stable.
This product is absolutely scalable. We have more than 400 users in our database.
The technical support is exquisite.
This is a company that really listens to its customers. I am very happy with our relationship.
Before I joined this company, I used different data warehousing solutions.
Making the transition to Greenplum requires a completely different mindset because it is massively parallel. It's more like a Big Data mindset, where you need to consider that you are distributing data between cluster nodes. It is not always straightforward to make the switch.
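To illustrate that mindset, a minimal sketch of declaring a distribution key, assuming psycopg2 and hypothetical table and column names; in Greenplum, every table declares how its rows spread across the cluster segments:

```python
import psycopg2

conn = psycopg2.connect(host="gp-master", dbname="dwh",
                        user="gpadmin", password="secret")
with conn, conn.cursor() as cur:
    # Rows are hashed on customer_id, so joins on customer_id stay local
    # to a segment instead of redistributing data across the cluster.
    cur.execute("""
        CREATE TABLE orders (
            order_id    bigint,
            customer_id bigint,
            amount      numeric
        ) DISTRIBUTED BY (customer_id);
    """)
conn.close()
```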
The initial setup is kind of complex. You need an expert to set up a Greenplum cluster.
It may not be possible to simplify the initial setup further because there is already an out-of-the-box configuration that you can use. I've actually seen companies using it for years and it works, but it didn't work optimally, so they were not happy with the results.
You can set up Greenplum but you really need to read the manual and the installation guide. I've seen people skipping it and then complaining.
A few people are enough to maintain this product. If you want to have around the clock support then you will need a couple of people in different time zones, but generally, maintenance is straightforward.
We are currently in the process of upgrading from version 5.26 to 6.11 and I can already see a lot of improvements. I can't wait to try them. According to the roadmap, there are a lot of new improvements coming in the V7 version, which is due out next year.
My advice for anybody who is implementing Greenplum is that they really need an expert to assist them. They might hire consultants or grow experts in-house, although that takes time and it is not always straightforward. You can use Greenplum out of the box but to really leverage all of the capabilities, you definitely need to tune your system and also design your database objects.
When people think about a database, they usually think about Oracle, Microsoft SQL Server, or maybe MySQL. Greenplum is a distributed database that needs a completely different mindset. I think that when people start to use it, they don't really understand this. For example, you cannot switch from Oracle to Hadoop without that same change in mindset, but when people switch to Greenplum from Oracle, or just move data from Oracle to Greenplum, they don't take the change as seriously as they would for Hadoop.
Overall, I am very happy with this product.
I would rate this solution a nine out of ten.
We use the solution for event-driven programming. We have multiple queues and channels to support scenarios for publishing into containers. The microservices have to communicate with each other, and consumers consume the services.
We were using the solution to propagate tenant settings to the services. For example, if you have five microservices using the tenant settings, then after an update, we publish the update to the other microservices. It helps get the updated data out by publishing the settings to a queue.
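A minimal sketch of that publish-on-update pattern using a fanout exchange, assuming the pika client; the exchange name and payload are hypothetical:

```python
# Requires: pip install pika
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
# A fanout exchange copies every message to all bound queues, so each
# microservice that binds its own queue sees every settings update.
ch.exchange_declare(exchange="tenant-settings", exchange_type="fanout")

update = {"tenant_id": 42, "feature_flags": {"beta_ui": True}}  # hypothetical
ch.basic_publish(exchange="tenant-settings", routing_key="",
                 body=json.dumps(update))
conn.close()
```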
The queues and the publishing are quite useful. We're able to create hierarchies and control channels and flows to manage what goes from which queue.
The solution can scale.
It is stable and reliable.
The availability could be better. When something crashes, a queue gets deleted, and my data is lost. They need to improve this so that we don't lose data during issues like crashes.
We'd like to understand how many queues are running on RabbitMQ. I'm not sure how to get these details and how to verify the information.
We need support for other protocols.
I've been using the solution for three years or so.
The solution is stable and reliable. There are no bugs or glitches.
The solution is scalable. However, we have issues with availability.
Sometimes, it is hard to understand what is going on when you reach out to technical support.
Our DevOps team deployed the solution.
I'm not sure what the exact pricing is. I don't handle the licensing aspect.
I am using the latest version of the solution. I'm not sure of the version number.
I've used this on multiple projects, and it has proven to be quite useful.
I'd rate the solution nine out of ten. It is a very good tool.
Schema-level backup is still a question, and I am not sure if it works as expected with the latest patch delivered.