
Dremio vs Teradata comparison

 

Comparison Buyer's Guide

Executive Summary
Updated on Oct 6, 2024
 

Categories and Ranking

Dremio
Ranking in Cloud Data Warehouse
10th
Average Rating
8.6
Reviews Sentiment
7.2
Number of Reviews
7
Ranking in other categories
Data Science Platforms (8th)
Teradata
Ranking in Cloud Data Warehouse
6th
Average Rating
8.2
Reviews Sentiment
7.0
Number of Reviews
76
Ranking in other categories
Customer Experience Management (5th), Backup and Recovery (19th), Data Integration (18th), Relational Databases Tools (7th), Data Warehouse (3rd), BI (Business Intelligence) Tools (10th), Marketing Management (6th)
 

Featured Reviews

MikeWalker - PeerSpot reviewer
It enables you to manage changes more effectively than any other platform.
Dremio enables you to manage changes more effectively than any other data warehouse platform. There are two things that come into play. One is data lineage. If you are looking at data in Dremio, you may want to know the source and what happened to it along the way, or how it may have been transformed in the data pipeline to get to the point where you're consuming it. The other is data provenance, and the two are tied together. Data provenance allows you to go back and recreate the data at any particular point in time. It's extremely important for compliance and governance issues because data changes all the time. How did it change? What was it three days or three months ago? You may have made some decisions based on data that was three months old, so you might need to revisit those. It's essential for things like machine learning and deep learning, where you are generating AI models off data. When the model stops working or doesn't work as expected, you need to figure out why. You have to go back and adjust the datasets used to train the model. Dremio does that through an open-source project called Nessie, which is its basis for providing data lineage and data provenance capabilities. It's super powerful.

Arrow is another open-source project, for storing data in memory and performing query operations on it. Data sits on disk in one format. If you want to do anything with that data, you have to load it into memory to work with it. Arrow provides a standard in-memory format, plus a library of operations you can perform on data in that format. Historically, every vendor had its own way of representing data in memory. Dremio latched onto an industry standard and developed it in the open, so now everyone can use the exact same in-memory format and the same library set to perform functions on the data. New developers can decide whether to develop their own memory format or use one that's already there.

Data transfer is a massive problem when you're working with large datasets, doing advanced analytics, and trying to train machine learning or deep learning models. What often happens is that companies downsample their datasets to train models because transferring and managing the data on a deep learning or machine learning platform is too much.
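To make the Arrow point concrete, here is a minimal sketch using the pyarrow library; the table, column names, and values are invented for illustration, and only the shared in-memory columnar format and compute kernels are the point.

```python
# Minimal sketch of Apache Arrow's standard in-memory columnar format.
# The data and column names are illustrative, not from the review.
import pyarrow as pa
import pyarrow.compute as pc

# Build a table directly in Arrow's columnar in-memory format.
table = pa.table({
    "region": ["east", "west", "east", "south"],
    "sales": [120.0, 340.5, 98.2, 210.0],
})

# Query-style operations run on that same in-memory representation,
# which any Arrow-aware engine or library can consume without copying.
mask = pc.equal(table["region"], "east")
east = table.filter(mask)
print(pc.sum(east["sales"]).as_py())  # 218.2
```

Because the format is a cross-language standard, those buffers can be handed to another Arrow-aware process without the per-vendor serialization step the reviewer describes as a data-transfer bottleneck.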
SurjitChoudhury - PeerSpot reviewer
Offers seamless integration capabilities and performance optimization features, including extensive indexing and advanced tuning capabilities
We created and constructed the warehouse. We used multiple loading utilities like MultiLoad, FastLoad, and TPump. Teradata is a powerful tool because, compared to older technologies, its architecture of nodes and virtual processors was a unique concept. Later, other technologies adopted the node concept; Informatica did so from PowerCenter version 7.x, moving from a client-server architecture to nodes. With nodes, the database can be available 24/7, 365 days a year: if one node fails, other nodes take over. Even Oracle databases have since adapted their architecture along these lines. Teradata started out with its own distinctive architecture, which major companies later adopted.

Whatever query we send is mapped onto a particular component, then goes to a virtual processor and down to the disk where the physical data is stored. In between there's a map, which acts like a data dictionary: it holds information about each piece of data, where it's loaded, and on which virtual processor or node it resides. Teradata comes with a four-node architecture, or however many nodes we choose, and the initial cost is determined by that. It's a shared-nothing architecture: whatever task is given to a virtual processor will be processed by it, and if there's a failure, another virtual processor takes over.

Moreover, this solution has impacted query time and data performance. In Teradata, there's a lot of joining, partitioning, and indexing of records. There are primary and secondary indexes, hash indexing, and other indexing processes. To improve query performance, we first analyze the query and tune it. If a join needs a secondary index, which plays a major role in filtering records, we might reconstruct that particular table with the secondary index. This tuning involves partitioning and indexing; we use these tools and techniques to fine-tune performance.

When it comes to integration, tools like Informatica connect seamlessly with Teradata. We ensure the Teradata database is configured correctly in Informatica, including the proper hostname and properties for the load process. We didn't find any major complexity or issues with integration. These technologies are quite old now, though. With newer big data technologies, we've worked with a four-layer architecture, pulling data from a Hadoop data lake into Teradata. We configure Teradata with the appropriate hostname and credentials, and use BTEQ queries to load data. Previously, we converted the data warehouse to a CLD model as per Teradata's standardized procedures, moving from an ETL to an ELT process. This allowed us to perform gap analysis on missing entities based on the model and retrieve them from the source system again. We found Teradata integration straightforward and compatible with other tools.
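To make the indexing and tuning steps concrete, here is a minimal sketch using the open-source teradatasql Python driver. The host, credentials, table, and column names are invented assumptions for illustration; only the primary-index and secondary-index DDL pattern is the point.

```python
# Sketch: primary vs. secondary index tuning in Teradata.
# Host, credentials, and names below are illustrative assumptions.
import teradatasql

with teradatasql.connect(host="td.example.com",
                         user="demo", password="demo") as con:
    with con.cursor() as cur:
        # The primary index hashes rows across AMPs (the virtual
        # processors), so it determines how work is distributed.
        cur.execute("""
            CREATE TABLE sales_fact (
                sale_id   INTEGER,
                store_id  INTEGER,
                sale_date DATE,
                amount    DECIMAL(12,2)
            )
            PRIMARY INDEX (sale_id)
            PARTITION BY RANGE_N (sale_date BETWEEN
                DATE '2024-01-01' AND DATE '2024-12-31'
                EACH INTERVAL '1' MONTH)
        """)
        # A secondary index is the kind of addition described above,
        # used when a join filters heavily on a non-primary-index column.
        cur.execute("CREATE INDEX (store_id) ON sales_fact")
```

A well-chosen primary index spreads rows evenly across the shared-nothing virtual processors; partitioning and secondary indexes then narrow the rows each one has to scan, which is the tuning loop the reviewer describes.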

Quotes from Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Pros

"We primarily use Dremio to create a data framework and a data queue."
"Dremio allows querying the files I have on my block storage or object storage."
"Everyone uses Dremio in my company; some use it only for the analytics function."
"Dremio enables you to manage changes more effectively than any other data warehouse platform. There are two things that come into play. One is data lineage. If you are looking at data in Dremio, you may want to know the source and what happened to it along the way or how it may have been transformed in the data pipeline to get to the point where you're consuming it."
"Dremio gives you the ability to create services which do not require additional resources and sterilization."
"The most valuable feature of Dremio is it can sit on top of any other data storage, such as Amazon S3, Azure Data Factory, SGFS, or Hive. The memory competition is good. If you are running any kind of materialized view, you'd be running in memory."
"Dremio is very easy to use for building queries."
"The most valuable features of Teradata are that it is a massively parallel platform and I can receive a lot of data and get the queries out correctly, especially if it's been appropriately designed. The native features make it very suitable for multiple large data tasks in a structured data environment. Additionally, the automation is very good."
"It has increased the speed of reporting."
"Designing the database is easy."
"The feature that we find most valuable is its ability to perform Massive Parallel Processing."
"It is a stable solution. Stability-wise, I rate the solution a nine out of ten."
"The two types of partitioning have been very significant for us - row and columnar partitioning."
"Auto-partitioning and indexing, and resource allocation on the fly are key features."
"Teradata can be deployed on-premise, on the cloud, or in a virtual machine, which means customers can move without having to create their architecture all over again."
 

Cons

"There are performance issues at times due to our limited experience with Dremio, and the fact that we are running it on single nodes using a community version."
"Dremio doesn't support the Delta connector. Dremio writes the IT support for Delta, but the support isn't great. There is definitely room for improvement."
"I cannot use the recursive common table expression (CTE) in Dremio because the support page says it's currently unsupported."
"We've faced a challenge with integrating Dremio and Databricks, specifically regarding authentication. It is not shaking hands very easily."
"Dremio takes a long time to execute large queries or the executing of correlated queries or nested queries. Additionally, the solution could improve if we could read data from the streaming pipelines or if it allowed us to create the ETL pipeline directly on top of it, similar to Snowflake."
"It shows errors sometimes."
"They have an automated tool for building SQL queries, so you don't need to know SQL. That interface works, but it could be more efficient in terms of the SQL generated from those things. It's going through some growing pains. There is so much value in tools like these for people with no SQL experience. Over time, Dermio will make these capabilities more accessible to users who aren't database people."
"The solution’s pricing, scalability, and technical support response time could be improved."
"The capability to implement it with comparable performance across various private cloud environments, ensuring adaptability to different infrastructure setups would be beneficial."
"It could use some more advanced analytics relating to structured and semi-structured data."
"The primary challenge with Teradata lies in its cost structure, encompassing subscription fees, software licenses, and hardware expenses."
"There is some improvement required on OLTP level and some analytical function is missing."
"The tool's flexibility and capacity for expansion are areas of concern where improvements are required."
"Teradata's UI could be improved."
"I would like more security and speed."
 

Pricing and Cost Advice

"Dremio is less costly competitively to Snowflake or any other tool."
"Right now the cluster costs approximately $200,000 per month and is based on the volume of data we have."
"Users have to pay a yearly licensing fee for Teradata IntelliFlex, which is very expensive."
"The initial cost may seem high, but the TCO is low."
"​I would advise others to look into migration and setup as a fixed price and incorporate a SaaS option for other Teradata services​."
"Teradata is a very expensive solution."
"Make sure you have the in-house skills to design and support the solution, as relying on external sources is extremely costly and tends to lock you into specific platforms, tools, and paradigms."
"In this day and age, we want to get things done quickly. So, we go to the AWS Marketplace."
"The cost of running Teradata is quite high, but you get a good return on investment."
"It comes at a notably high cost for what it offers."
 

Comparison Review

it_user232068 - PeerSpot reviewer
Aug 5, 2015
Netezza vs. Teradata
Originally published at https://www.linkedin.com/pulse/should-i-choose-net Two leading Massively Parallel Processing (MPP) architectures for Data Warehousing (DW) are IBM PureData System for Analytics (formerly Netezza) and Teradata. I thought talking about the similarities and differences…
 

Top Industries

By visitors reading reviews
Dremio
Financial Services Firm: 32%
Computer Software Company: 11%
Manufacturing Company: 8%
Retailer: 4%

Teradata
Financial Services Firm: 26%
Computer Software Company: 11%
Manufacturing Company: 8%
Healthcare Company: 7%
 


Questions from the Community

What do you like most about Dremio?
Dremio allows querying the files I have on my block storage or object storage.
What is your experience regarding pricing and costs for Dremio?
The licensing is very expensive. We need a license to scale as we are currently using the community version.
What needs improvement with Dremio?
There are performance issues at times due to our limited experience with Dremio, and the fact that we are running it on single nodes using a community version. We face certain issues when connectin...
Comparing Teradata and Oracle Database, which product do you think is better and why?
I have spoken to my colleagues about this comparison and in our collective opinion, the reason why some people may declare Teradata better than Oracle is the pricing. Both solutions are quite simi...
Which companies use Teradata and who is it most suitable for?
Before my organization implemented this solution, we researched which big brands were using Teradata, so we knew if it would be compatible with our field. According to the product's site, the comp...
Is Teradata a difficult solution to work with?
Teradata is not a difficult product to work with, especially since they offer you technical support at all levels if you just ask. There are some features that may cause difficulties - for example,...
 


Also Known As

Dremio: No data available
Teradata: IntelliFlex, Aster Data Map Reduce, QueryGrid, Customer Interaction Manager, Digital Marketing Center, Data Mover, Data Stream Architecture
 



Sample Customers

Dremio: UBS, TransUnion, Quantium, Daimler, OVH
Teradata: Netflix