
Infobright DB vs Teradata comparison

 

Comparison Buyer's Guide

Executive Summary (updated on Jan 12, 2025)


Categories and Ranking

Infobright DB
Ranking in Relational Databases Tools
37th
Ranking in Data Warehouse
27th
Average Rating
7.6
Reviews Sentiment
6.3
Number of Reviews
10
Ranking in other categories
No ranking in other categories
Teradata
Ranking in Relational Databases Tools
7th
Ranking in Data Warehouse
3rd
Average Rating
8.2
Reviews Sentiment
7.0
Number of Reviews
76
Ranking in other categories
Customer Experience Management (6th), Backup and Recovery (20th), Data Integration (17th), BI (Business Intelligence) Tools (10th), Marketing Management (6th), Cloud Data Warehouse (6th)
 

Mindshare comparison

As of April 2025, Infobright DB holds a 0.5% mindshare in the Data Warehouse category, up from 0.1% the previous year. Teradata holds a 16.3% mindshare, up from 15.2% the previous year. Mindshare is calculated from PeerSpot user engagement data.
Data Warehouse
 


Featured Reviews

SD
If you need a real big data solution, look for a distributed solution that actually has a proven track record.
This version of Infobright has no support for distributed scalability. The internal smart grid employed for each table also has a major flaw: column data cannot be expunged from disk until that column reaches 2GB of stored data, which makes the product unusable in a big-data scenario. In practice, you can delete as many records from a table as you want, but disk usage will not shrink unless the 2GB aggregate threshold has been reached for some of the table's columns; only the data for columns that have reached 2GB actually decreases, while columns below 2GB never leave the disk. I spent countless hours trying to find a workaround for this, and I still have nightmares about an e-mail inbox full of unanswerable questions from our field engineers about reducing data size.
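To make the reviewer's point concrete, here is a minimal sketch, assuming Infobright's MySQL-compatible interface and a hypothetical analytics.events table: it deletes old rows and compares the table size reported by information_schema before and after. Per the behavior described above, little or no reduction would be expected until a column's stored data crosses the roughly 2GB threshold. The host, credentials, and table and column names are illustrative, not taken from the review.

```python
# Sketch: observe reported table size around a bulk DELETE on an
# Infobright (MySQL-protocol) database. All identifiers are hypothetical.
import pymysql

conn = pymysql.connect(host="infobright-host", user="analyst",
                       password="secret", database="analytics")

def table_bytes(cur, schema, table):
    """Table size as reported by information_schema (may lag real disk usage)."""
    cur.execute(
        "SELECT COALESCE(data_length, 0) FROM information_schema.TABLES "
        "WHERE table_schema = %s AND table_name = %s",
        (schema, table),
    )
    row = cur.fetchone()
    return row[0] if row else 0

with conn.cursor() as cur:
    before = table_bytes(cur, "analytics", "events")

    # Logically purge old rows; with the column-store behavior described in
    # the review, the column data may stay on disk after this delete.
    cur.execute("DELETE FROM events WHERE event_date < %s", ("2013-01-01",))
    conn.commit()

    after = table_bytes(cur, "analytics", "events")
    print(f"reported size before={before} bytes, after={after} bytes")

conn.close()
```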
SurjitChoudhury - PeerSpot reviewer
Offers seamless integration capabilities and performance optimization features, including extensive indexing and advanced tuning capabilities
We created and constructed the warehouse, using multiple loading utilities such as MultiLoad, FastLoad, and TPump. Teradata is a powerful tool: compared with older technologies, its architecture of nodes and virtual processors was a unique concept. Other vendors adopted it later; Informatica, for example, moved from a client-server architecture to a node concept starting with PowerCenter version 7.x, and Oracle databases have since moved their architecture in a similar direction. The database can be available 24/7, 365 days a year, because if one node fails, other nodes take over its work.

Teradata started with its own distinct architecture that major companies later adopted. Whatever query we send is mapped to a particular component, then passed to a virtual processor and down to the disk where the physical data resides. In between sits a map that acts like a data dictionary: it records where each piece of data is loaded and on which virtual processor or node it resides. Teradata typically comes in a multi-node configuration (four nodes, or however many we choose), and the initial cost is determined by that. It is a shared-nothing architecture: whatever task is given to a virtual processor is processed there, and if that processor fails, another virtual processor takes over.

The solution has had a real impact on query time and data performance. Teradata involves a lot of joining, partitioning, and indexing: primary and secondary indexes, hash indexing, and other indexing schemes. To improve query performance, we first analyze the query and then tune it. If a join needs a secondary index, which plays a major role in filtering records, we might reconstruct that table with the secondary index. Tuning involves partitioning and indexing, and we use these techniques to fine-tune performance.

When it comes to integration, tools like Informatica connect seamlessly with Teradata. We make sure the Teradata database is configured correctly in Informatica, including the proper hostname and load-process properties, and we did not find any major complexity or issues with integration. These technologies are quite old now, though. With newer big data technologies we have worked with a four-layer architecture, pulling data from a Hadoop lake into Teradata: we configure Teradata with the appropriate hostname and credentials and use BTEQ queries to load the data. Previously, we converted the data warehouse to a CLD model following Teradata's standardized procedures, moving from an ETL to an EMT process; this allowed us to perform gap analysis on missing entities based on the model and retrieve them from the source system again. Overall, we found Teradata integration straightforward and compatible with other tools.
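The review above centers on three tuning levers: the primary index that hash-distributes rows across virtual processors (AMPs), partitioning, and secondary indexes for filtering. The sketch below, written against the teradatasql Python driver, shows what that can look like for a hypothetical sales_fact table; the host, credentials, table definition, and query are assumptions for illustration, not the reviewer's actual warehouse.

```python
# Sketch: define a hash-distributed, range-partitioned Teradata table,
# add a secondary index, and inspect the optimizer's plan with EXPLAIN.
# Connection details and the sales_fact table are hypothetical.
import teradatasql

DDL = """
CREATE TABLE sales_fact (
    sale_id   BIGINT NOT NULL,
    store_id  INTEGER NOT NULL,
    sale_date DATE NOT NULL,
    amount    DECIMAL(12,2)
)
PRIMARY INDEX (sale_id)  -- hash-distributes rows across AMPs
PARTITION BY RANGE_N(sale_date BETWEEN DATE '2024-01-01'
                     AND DATE '2025-12-31' EACH INTERVAL '1' MONTH)
"""

with teradatasql.connect(host="tdhost", user="dbc", password="dbc") as con:
    with con.cursor() as cur:
        cur.execute(DDL)

        # Non-unique secondary index on the filtering column, the kind of
        # tuning step described in the review.
        cur.execute("CREATE INDEX (store_id) ON sales_fact")

        # EXPLAIN shows whether the optimizer applies partition elimination
        # on sale_date and uses the secondary index on store_id.
        cur.execute(
            "EXPLAIN SELECT SUM(amount) FROM sales_fact "
            "WHERE store_id = 42 AND sale_date >= DATE '2025-01-01'"
        )
        for row in cur.fetchall():
            print(row[0])
```

In this layout the primary index determines data distribution (the shared-nothing behavior the reviewer describes), while the RANGE_N partitioning and the secondary index are the knobs whose effect the EXPLAIN output lets you verify.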

Quotes from Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Pros

"It has very amazing smart grid query feature for very fast aggregate queries across millions of rows"
"Teradata has good performance, the response times are very fast. Overall the solution is easy to use. When we do the transformation, we have all of our staging and aggregation data available."
"Teradata can be deployed on-premise, on the cloud, or in a virtual machine, which means customers can move without having to create their architecture all over again."
"The data processing, clustering, and distributed computing are impressive."
"The product is reliable."
"Teradata features high productivity and reliability because it has several redundancy options, so the system is always up and running."
"It is a stable program."
"The functionality of the solution is excellent."
"The tool's most valuable feature is the warehousing model."
 

Cons

"Only the data from the columns that reached 2GB will actually decrease. Other columns below 2GB in size do not leave the disk."
"I've been using the same UI for 20 years in Teradata. It could use some updating. Adding more stability around Teradata Studio would be outstanding. Teradata Studio is a Java-based version of their tool. It's much better now, but it still has some room for improvement."
"There is a need to improve performance in high transaction processes, as well as the reporting system."
"The user interface needs to be improved."
"The solution could improve by having a cloud version or a cloud component. We have to use other solutions, such as Amazon AWS, Microsoft Azure, or Snowflake for the cloud."
"The capability to implement it with comparable performance across various private cloud environments, ensuring adaptability to different infrastructure setups would be beneficial."
"​I think the UI is not there yet. It could be improved by being more user-friendly.​"
"From my perspective, it would be good if they gave better ITIN/R plugins to use the data for AI modeling, or data science modeling. We can do it now; however, it could be more elegant in terms of interfacing."
"The tool's flexibility and capacity for expansion are areas of concern where improvements are required."
 

Pricing and Cost Advice

"Our pricing was based on server instances and it was actually very cheap compared to Oracle. I guess you get what you pay for."
"The price needs to be more competitive as Hadoop, Redshift, Snowflake, etc are constantly making way into EDW space."
"The cost is substantial, totaling around $1.2 million, solely dedicated to upgrading the hardware."
"We are looking for a more flexible cost model for the next version that we use, whether it be cloud or on-premise."
"Teradata is expensive but gives value for money, especially if you don't want to move your data to the cloud."
"The cost is significantly high."
"Users have to pay a yearly licensing fee for Teradata IntelliFlex, which is very expensive."
"The cost of running Teradata is quite high, but you get a good return on investment."
"In this day and age, we want to get things done quickly. So, we go to the AWS Marketplace."
 

Comparison Review

it_user232068 - PeerSpot reviewer
Aug 5, 2015
Netezza vs. Teradata
Originally published at https://www.linkedin.com/pulse/should-i-choose-net Two leading Massively Parallel Processing (MPP) architectures for Data Warehousing (DW) are IBM PureData System for Analytics (formerly Netezza) and Teradata. I thought talking about the similarities and differences…
 

Answers from the Community

it_user104457 - PeerSpot reviewer
Apr 13, 2014
Apr 13, 2014
I think it's hands down Exadata, since for the front-end apps it's just another Oracle database, which means everything under the sun is compatible with it.
2 out of 3 answers
it_user89046 - PeerSpot reviewer
Apr 10, 2014
Given that we partner with many or all of the above, or can reach them since we access all data, I have the following opinion: InfoBright is very new and will probably be sold off in the long term. It is also an expensive subscription, so it presents the highest risk to me. Exadata is Oracle; if you like Oracle and their style it may be OK, but then it is Oracle. Microsoft is Microsoft: it tends to be cheap to acquire and expensive to implement and maintain. Teradata is pricey, but of the group it presents the least risk and the greatest number of front-end partners. The product I represent is unique in that it is designed for high complexity, large numbers of users, and large volumes of data, and it runs inside Teradata, taking better advantage of the architecture. Disclosure: I work for Information Builders.
it_user3309 - PeerSpot reviewer
Apr 10, 2014
You are asking about front-end tools, but you do not mention which ones. What you have are database back ends, and each has different features. How well they work will depend on what kind of expertise you have available; otherwise you will end up trying to implement, say, Teradata on Exadata, which may not give you the best solution. What are your criteria for success? Based on these, you will have to evaluate each solution. I am sure each vendor will be happy to set up the environment and work with your set of sample data to show you how they evaluate against your criteria.
 

Top Industries

By visitors reading reviews
Infobright DB: no data available
Teradata: Financial Services Firm (26%), Computer Software Company (10%), Manufacturing Company (7%), Healthcare Company (7%)
 

Company Size

By reviewers
Large Enterprise
Midsize Enterprise
Small Business
 

Questions from the Community

Comparing Teradata and Oracle Database, which product do you think is better and why?
I have spoken to my colleagues about this comparison and in our collective opinion, the reason why some people may declare Teradata better than Oracle is the pricing. Both solutions are quite simi...
Which companies use Teradata and who is it most suitable for?
Before my organization implemented this solution, we researched which big brands were using Teradata, so we knew if it would be compatible with our field. According to the product's site, the comp...
Is Teradata a difficult solution to work with?
Teradata is not a difficult product to work with, especially since they offer you technical support at all levels if you just ask. There are some features that may cause difficulties - for example,...
 

Comparisons

No data available
 

Also Known As

Infobright
IntelliFlex, Aster Data Map Reduce, QueryGrid, Customer Interaction Manager, Digital Marketing Center, Data Mover, Data Stream Architecture
 


Sample Customers

REZ-1, SonicWALL, IntegriChain, Fuseforward International Inc., Polystar, Live Rail, Mavenir Systems, JDSU Partners, Bango
Netflix