
Apache Hadoop vs Dremio comparison

 


Executive Summary

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

Apache Hadoop: Average Rating 7.8; Reviews Sentiment 6.8; Number of Reviews 39; Ranking in other categories: Data Warehouse (7th)
Dremio: Average Rating 8.6; Reviews Sentiment 7.2; Number of Reviews 7; Ranking in other categories: Cloud Data Warehouse (10th), Data Science Platforms (8th)
 

Featured Reviews

Sushil Arya - PeerSpot reviewer
Provides ease of integration with the IT workflow of a business
When working with Kafka, I saw that data arrives incrementally, and incremental data processing is still not very effective in Apache Hadoop. Data that is already at rest can be processed very effectively, but data that streams in every second, for example tracking the location of something second by second, is not handled well. One area where improvement is required is the licensing cost. If the tool offered a pay-per-use licensing structure, similar to how AWS operates, organizations could get a feel for Apache Hadoop before committing. Apache Hadoop should also look into improving its ability to process incremental data. The setup process is another area with room for improvement: it is not very simple, because during setup we need to do all the server settings, including port configuration and firewall rules, whereas other products on the market are simpler in this respect. There are also shortcomings in technical support. Both the time frame for resolution and the overall communication of the support team need improvement.
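The gap this reviewer describes is between batch-oriented processing of data already in HDFS and per-second arrivals from Kafka. As an illustration only, the following minimal sketch shows how such incremental data is commonly landed into HDFS with Spark Structured Streaming running alongside Hadoop; the broker address, topic name, and paths are hypothetical placeholders, not details from the review.

# Minimal sketch: landing incremental Kafka data into HDFS with Spark Structured Streaming.
# All names below (broker, topic, paths) are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-incremental-ingest").getOrCreate()

# Read the Kafka topic as an unbounded (incremental) stream.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "sensor-locations")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key and value as binary; cast the payload to strings for downstream use.
payload = events.selectExpr(
    "CAST(key AS STRING) AS key",
    "CAST(value AS STRING) AS value",
    "timestamp",
)

# Append each micro-batch to HDFS as Parquet; the checkpoint lets the job resume
# from the last processed offsets instead of reprocessing everything.
query = (
    payload.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/sensor_locations/")
    .option("checkpointLocation", "hdfs:///checkpoints/sensor_locations/")
    .outputMode("append")
    .start()
)

query.awaitTermination()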
MikeWalker - PeerSpot reviewer
It enables you to manage changes more effectively than any other platform.
Dremio enables you to manage changes more effectively than any other data warehouse platform. Two things come into play. One is data lineage. If you are looking at data in Dremio, you may want to know the source and what happened to it along the way, or how it may have been transformed in the data pipeline before the point where you're consuming it. The other is data provenance; the two are tied together. Data provenance allows you to go back and recreate the data at any particular point in time. It's extremely important for compliance and governance, because data changes all the time. How did it change? What was it three days or three months ago? You may have made decisions based on data that was three months old, so you might need to revisit those. It's also essential for things like machine learning and deep learning, where you are generating AI models from data. When a model stops working or doesn't work as expected, you need to figure out why, which means going back and adjusting the datasets used to train it. We do that through an open-source project called Nessie, which is their basis for providing data lineage and data provenance capabilities. It's super powerful. Arrow is another open-source project, for storing data in memory and performing query operations on it. Data sits on disk in one format; if you want to do anything with it, you have to load it into memory to work with it. Arrow provides a standard in-memory format and a library for performing operations on data in that format. Every vendor used to have its own way of representing data in memory. They've latched onto an industry standard and developed it in the open, so now people can use the exact same in-memory format and the same library set to perform functions on the data. New developers can decide whether to develop their own memory format or use one that's already there. Data transfer is a massive problem when you're working with large datasets, doing advanced analytics, and trying to train machine learning or deep learning models. What often happens is that companies downsample their datasets to train models, because transferring and managing the data on a deep learning or machine learning platform is too much.

Quotes from Members

 

Pros

"Two valuable features are its scalability and parallel processing. There are jobs that cannot be done unless you have massively parallel processing."
"What I like about Apache Hadoop is that it's for big data, in particular big data analysis, and it's the easier solution. I like the data processing feature for AI/ML use cases the most because some solutions allow me to collect data from relational databases, while Hadoop provides me with more options for newer technologies."
"It's good for storing historical data and handling analytics on a huge amount of data."
"One valuable feature is that we can download data."
"I recommend it for the telecom sector. I know it well, and it's a good fit."
"The most valuable features are the ability to process the machine data at a high speed, and to add structure to our data so that we can generate relevant analytics."
"Its flexibility in handling and storing large volumes of data is particularly beneficial, as is its resilience, which ensures data redundancy and fault tolerance."
"The most valuable features are powerful tools for ingestion, as data is in multiple systems."
"Dremio gives you the ability to create services which do not require additional resources and sterilization."
"Dremio enables you to manage changes more effectively than any other data warehouse platform. There are two things that come into play. One is data lineage. If you are looking at data in Dremio, you may want to know the source and what happened to it along the way or how it may have been transformed in the data pipeline to get to the point where you're consuming it."
"Everyone uses Dremio in my company; some use it only for the analytics function."
"Dremio allows querying the files I have on my block storage or object storage."
"Dremio is very easy to use for building queries."
"The most valuable feature of Dremio is it can sit on top of any other data storage, such as Amazon S3, Azure Data Factory, SGFS, or Hive. The memory competition is good. If you are running any kind of materialized view, you'd be running in memory."
"We primarily use Dremio to create a data framework and a data queue."
 

Cons

"The product's availability of comprehensive training materials could be improved for faster onboarding and skill development among team members."
"It needs better user interface (UI) functionalities."
"In certain cases, the configurations for dealing with data skewness do not make any sense."
"There is a lack of virtualization and presentation layers, so you can't take it and implement it like a radio solution."
"It could be more user-friendly."
"There are certain shortcomings when it comes to the product's technical support part, making it an area where improvements are required."
"The key shortcoming is its inability to handle queries when there is insufficient memory. This limitation can be bypassed by processing the data in chunks."
"What could be improved in Apache Hadoop is its user-friendliness. It's not that user-friendly, but maybe it's because I'm new to it. Sometimes it feels so tough to use, but it could be because of two aspects: one is my incompetency, for example, I don't know about all the features of Apache Hadoop, or maybe it's because of the limitations of the platform. For example, my team is maintaining the business glossary in Apache Atlas, but if you want to change any settings at the GUI level, an advanced level of coding or programming needs to be done in the back end, so it's not user-friendly."
"I cannot use the recursive common table expression (CTE) in Dremio because the support page says it's currently unsupported."
"We've faced a challenge with integrating Dremio and Databricks, specifically regarding authentication. It is not shaking hands very easily."
"They have an automated tool for building SQL queries, so you don't need to know SQL. That interface works, but it could be more efficient in terms of the SQL generated from those things. It's going through some growing pains. There is so much value in tools like these for people with no SQL experience. Over time, Dermio will make these capabilities more accessible to users who aren't database people."
"There are performance issues at times due to our limited experience with Dremio, and the fact that we are running it on single nodes using a community version."
"It shows errors sometimes."
"Dremio takes a long time to execute large queries or the executing of correlated queries or nested queries. Additionally, the solution could improve if we could read data from the streaming pipelines or if it allowed us to create the ETL pipeline directly on top of it, similar to Snowflake."
"Dremio doesn't support the Delta connector. Dremio writes the IT support for Delta, but the support isn't great. There is definitely room for improvement."
 

Pricing and Cost Advice

"​There are no licensing costs involved, hence money is saved on the software infrastructure​."
"It's reasonable, but there's room for improvement in cost-effectiveness."
"The product is open-source, but some associated licensing fees depend on the subscription level."
"This is a low cost and powerful solution."
"If my company can use the cloud version of Apache Hadoop, particularly the cloud storage feature, it would be easier and would cost less because an on-premises deployment has a higher cost during storage, for example, though I don't know exactly how much Apache Hadoop costs."
"We don't directly pay for it. Our clients pay for it, and they usually don't complain about the price. So, it is probably acceptable."
"The price could be better. Hortonworks no longer exists, and Cloudera killed the free version of Hadoop."
"We just use the free version."
"Dremio is less costly competitively to Snowflake or any other tool."
"Right now the cluster costs approximately $200,000 per month and is based on the volume of data we have."
 

Top Industries

By visitors reading reviews
Apache Hadoop: Financial Services Firm 34%, Computer Software Company 11%, University 7%, Energy/Utilities Company 5%
Dremio: Financial Services Firm 32%, Computer Software Company 10%, Manufacturing Company 8%, Retailer 4%
 

Company Size

By reviewers
Large Enterprise, Midsize Enterprise, Small Business
 

Questions from the Community

What do you like most about Apache Hadoop?
It's primarily open source. You can handle huge data volumes and create your own views, workflows, and tables. I can also use it for real-time data streaming.
What is your experience regarding pricing and costs for Apache Hadoop?
The product is open-source, but some associated licensing fees depend on the subscription level. While it might be free for students, organizations typically need to pay for their subscriptions. Th...
What needs improvement with Apache Hadoop?
Hadoop lacks OLAP capabilities. I recommend adding a Delta Lake feature to make the data compatible with ACID properties. Also, video and audio streaming import issues could be improved to ensure p...
What do you like most about Dremio?
Dremio allows querying the files I have on my block storage or object storage.
What is your experience regarding pricing and costs for Dremio?
The licensing is very expensive. We need a license to scale as we are currently using the community version.
What needs improvement with Dremio?
There are performance issues at times due to our limited experience with Dremio, and the fact that we are running it on single nodes using a community version. We face certain issues when connectin...
 



Overview

 

Sample Customers

Apache Hadoop: Amazon, Adobe, eBay, Facebook, Google, Hulu, IBM, LinkedIn, Microsoft, Spotify, AOL, Twitter, University of Maryland, Yahoo!, Cornell University Web Lab
Dremio: UBS, TransUnion, Quantium, Daimler, OVH
Updated: January 2025.