it_user99918 - PeerSpot reviewer
Chief Data Scientist at a tech vendor with 10,001+ employees
Vendor
We're using Vertica, just because of the performance benefits. On big queries, we're getting sub-10 second latencies.

My company recognized early, near the inception of the product, that if we were able to collect enough operational data about how our products are performing in the field, get it back home and analyze it, we'd be able to dramatically reduce support costs. Also, we can create a feedback loop that allows engineering to improve the product very quickly, according to the demands that are being placed on the product in the field.

Looking at it from that perspective, to get it right, you need to do it from the inception of the product. If you take a look at how much data we get back for every array we sell in the field, we could be receiving anywhere from 10,000 to 100,000 data points per minute from each array. Then, we bring those back home, we put them into a database, and we run a lot of intensive analytics on those data.

Once you're doing that, you realize that as soon as you do something, you have this data you're starting to leverage. You're making support recommendations and so on, but then you realize you could do a lot more with it. We can do dynamic cache sizing. We can figure out how much cache a customer needs based on an analysis of their real workloads.

We found that big data is really paying off for us. We want to continue to increase how much it's paying off for us, but to do that we need to be able to do bigger queries faster. We have a team of data scientists and we don't want them sitting here twiddling their thumbs. That’s what brought us to Vertica.

We have a very tight feedback loop. In one release we put out, we may make some changes in the way certain things happen on the back end, for example, the way NVRAM is drained. There are some very particular details around that, and we can observe very quickly how that performs under different workloads. We can make tweaks and do a lot of tuning.

Without the kind of data we have, we would have to rely on multiple cases being opened on performance in the field, escalations, looking at cores, and then simulating things in the lab.

It's a very labor-intensive, slow process with very little data to base the decision on. When you bring home operational data from all your products in the field, you're now talking about being able to figure out in near real-time the distribution of workloads in the field and how people access their storage. I think we have a better understanding of the way storage works in the real world than any other storage vendor, simply because we have the data.

I don’t remember the exact year, but it was roughly eight years ago that I became aware of Vertica. At some point, there was an announcement that Mike Stonebraker was involved in a group that was going to productize the C-Store database, which was an academic project at MIT to understand the benefits and capabilities of a true column store.

I was immediately interested and contacted them. I was working at another storage company at the time. I had a 20 terabyte (TB) data warehouse, which at the time was one of the largest Oracle on Linux data warehouses in the world.

They didn't want to touch that opportunity just yet, because they were just starting out in alpha mode. I hooked up with them again a few years later, when I was CTO at a different company, where we developed what's substantially an extract, transform, and load (ETL) platform.

By then, they were well along the road. They had a great product and it was solid. So we tried it out, and I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.

When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you’re going to end up with a couple choices on a row store. Either you're going to have very narrow tables and a lot of them or else you're going to be wasting a lot of I/O overhead, retrieving entire rows where you just need a couple fields.
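The I/O tradeoff described above can be sketched with some simple arithmetic. This is an illustrative back-of-the-envelope calculation, not Vertica code; the table shape (50 numeric fields) is a hypothetical stand-in for a wide telemetry row.

```python
# Why scanning a couple of fields out of wide rows wastes I/O in a row
# store but not in a column store. Hypothetical table shape: a telemetry
# row with 50 float64 data points, scanned for just 2 of them.
import struct

NUM_FIELDS = 50
NUM_ROWS = 10_000
FIELD_SIZE = struct.calcsize("d")  # 8 bytes per float64

# Row store: a scan touching 2 fields still reads every full row.
row_store_bytes_read = NUM_ROWS * NUM_FIELDS * FIELD_SIZE

# Column store: the same scan reads only the 2 columns it needs.
column_store_bytes_read = NUM_ROWS * 2 * FIELD_SIZE

print(row_store_bytes_read // column_store_bytes_read)  # 25x less I/O
```

The ratio is simply total columns divided by columns touched, which is why narrow analytical queries over wide tables benefit so dramatically.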

That was what piqued my interest at first. But as I began to use it more and more, I realized that the performance benefits you could gain by using Vertica properly were another order of magnitude beyond what you would expect just with the column-store efficiency.

That's because of certain features that Vertica offers, such as pre-join projections. At a high level, this lets you maintain the normalized logical integrity of your schema while having, under the hood, optimized denormalized query performance physically on disk.
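The idea behind a pre-join projection can be sketched conceptually as follows. This is a toy model of the concept, not Vertica's internals; the table names and fields are hypothetical.

```python
# Conceptual sketch of a pre-join projection: the logical schema stays
# normalized (a fact table referencing a dimension table), while a
# denormalized, join-free structure is maintained for reads.
arrays = {1: {"model": "X100"}, 2: {"model": "X200"}}                    # dimension
samples = [{"array_id": 1, "latency_ms": 4},                             # fact
           {"array_id": 2, "latency_ms": 9}]

# The "projection": fact rows stored pre-joined with dimension attributes.
projection = [{**s, "model": arrays[s["array_id"]]["model"]} for s in samples]

# A query that would logically need a join now scans one flat structure.
x100_latencies = [r["latency_ms"] for r in projection if r["model"] == "X100"]
print(x100_latencies)  # [4]
```

Writes still target the normalized tables; the database keeps the denormalized physical structure in sync, so you get join-free read performance without giving up the normalized logical design.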

You can be efficient with a denormalized structure on disk because Vertica allows you to do some very efficient types of encoding on your data. All of the low-cardinality columns that would have wasted space in a row store end up taking almost no space at all.

It's been my impression that Vertica is the data warehouse you would have wanted to build 10 or 20 years ago, but nobody had done it yet.

Nowadays, when I'm evaluating other big data platforms, I always have to look at them from the perspective of: it's great, we can get some parallelism here, and there are certain operations we can do that might be difficult on other platforms. But I always have to compare them to Vertica, and frankly, I always find that Vertica comes out on top in terms of features, performance, and usability.

I built the environment at my current company from the ground up. When I got here, there were roughly 30 people. It's a very small company. We started with Postgres. We started with something free. We didn’t want to have a large budget dedicated to the backing infrastructure just yet. We weren’t ready to monetize it yet.

So, we started on Postgres and we've scaled up now to the point where we have about 100 TBs on Postgres. We get decent performance out of the database for the things that we absolutely need to do, which are micro-batch updates and transactional activity. We get that performance because the database lives here.

I don't know what the largest unsharded Postgres instance in the world is, but I feel like I have one of them. It's a challenge to manage and leverage. Now, we've gotten to the point where we really want to do larger queries, to understand the entire installed base and run analyses that extend across it.

We want to understand the lifecycle of a volume. We want to understand how it grows, how it lives, what its performance characteristics are, and then how it gradually falls into senescence when people stop using it. It turns out there is a lot of really rich information that we now have access to for understanding storage lifecycles in a way I don't think was possible before.

But to do that, we need to take our infrastructure to the next level. So we've been doing that: we've loaded a large amount of our sensor data (the numerical data I talked about) into Vertica, started to compare the queries, and then started to use Vertica more and more for all the analysis we're doing.

Internally, we're using Vertica, just because of the performance benefits. I can give you an example. We had a particular query, a particularly large query. It was to look at certain aspects of latency over a month across the entire installed base to understand a little bit about the distribution, depending on different factors, and so on.

We ran that query in Postgres, and depending on how busy the server was, it took anywhere from 12 to 24 hours to run. On Vertica, to run the same query on the same data takes anywhere from three to seven seconds.

I anticipated that because we were aware upfront of the benefits we'd be getting. I've seen it before. We knew how to structure our projections to get that kind of performance. We knew what kind of infrastructure we'd need under it. I'm really excited. We're getting exactly what we wanted and better.

This is only a three node cluster. Look at the performance we're getting. On the smaller queries, we're getting sub-second latencies. On the big ones, we're getting sub-10 second latencies. It's absolutely amazing. It's game changing.

People can sit at their desktops now, manipulate data, come up with new ideas, and iterate without having to run a batch and go home. It's a dramatic productivity increase. Data scientists tend to be fairly impatient. They're highly paid people, and you don’t want them sitting at their desks waiting to get an answer out of the database. It's not the best use of their time.

When it comes to the cloud model for deployment, there's the ease of adding nodes without downtime, and the fact that you can create a K-safe cluster. If my cluster is 16 nodes wide and I want two nodes of redundancy, it's very similar to RAID: you can specify that, and the database will take care of it for you. You don’t have to worry about the database going down and losing data as a result of a node failure.
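The RAID-like redundancy idea can be sketched as a placement problem. This is a simplified hypothetical placement scheme to illustrate the K-safety guarantee, not Vertica's actual segmentation logic.

```python
# Sketch of K-safety: each data segment is kept on K+1 distinct nodes,
# so any K simultaneous node failures leave every segment reachable.
# Placement scheme (round-robin over consecutive nodes) is hypothetical.
K = 2                       # tolerate two node failures
NODES = list(range(16))     # a 16-node cluster
SEGMENTS = 32

placement = {
    seg: [NODES[(seg + r) % len(NODES)] for r in range(K + 1)]
    for seg in range(SEGMENTS)
}

failed = {3, 11}            # any two nodes go down
survivable = all(any(n not in failed for n in replicas)
                 for replicas in placement.values())
print(survivable)  # True: every segment still has a live replica
```

Because each segment lives on K+1 distinct nodes, no set of K failures can take out all copies of any segment, which is exactly the property that let the cluster keep serving data when one node glitched.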

I love the fact that you don’t have to pay extra for that. If I want to put more cores or nodes on it or I want to put more redundancy into my design, I can do that without paying more for it. Wow! That’s kind of revolutionary in itself.

It's great to see a database company incented to give you great performance. They're incented to help you work better with more nodes and more cores. They don't have to worry about people not being able to pay the additional license fees to deploy more resources. In that sense, it's great.

We have our own private cloud -- that’s how I like to think of it -- at an offsite colocation facility. We do DR here. At the same time, we have a K-safe cluster. We had a hardware glitch on one of the nodes last week, and the other two nodes stayed up, served data, and everything was fine.


Those kinds of features are critical, and that ability to be flexible and expand is critical for someone who is trying to build a large cloud infrastructure, because you're never going to know in advance exactly how much you're going to need.

If you do your job right as a cloud provider, people just want more and more and more. You want to get them hooked and you want to get them enjoying the experience. Vertica lets you do that.

Disclosure: PeerSpot has made contact with the reviewer to validate that the person is a real user. The information in the posting is based upon a vendor-supplied case study, but the reviewer has confirmed the content's accuracy.
PeerSpot user
MunkhsaikhanBayar - PeerSpot reviewer
Project Lead - Digital Transformation Unit at Bodi Electronics LLC
Real User
Scalable big data analytics platform that is reasonably priced compared to other solutions
Pros and Cons
  • "The hardware usage and speed has been the most valuable feature of this solution. It is very fast and has saved us a lot of money."
  • "The integration of this solution with ODI could be improved."

What is our primary use case?

Our primary use case for this solution is data analytics.

What is most valuable?

The hardware usage and speed has been the most valuable feature of this solution. It is very fast and has saved us a lot of money.

What needs improvement?

The integration of this solution with ODI could be improved. 

For how long have I used the solution?

I have used this solution for 1 year.

What do I think about the stability of the solution?

This is a stable solution with a fast compression system.

What do I think about the scalability of the solution?

This is a scalable solution. 

How are customer service and support?

The technical support for this solution is really good. The support team is friendly and provides assistance quickly.

How would you rate customer service and support?

Positive

How was the initial setup?

The initial setup is straightforward. 

What's my experience with pricing, setup cost, and licensing?

The pricing for this solution is very reasonable compared to other vendors.

What other advice do I have?

I would rate this solution a ten out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: partner
PeerSpot user
it_user431877 - PeerSpot reviewer
Consultant at a tech services company with 10,001+ employees
Real User
All join operations were enhanced by creating identically segmented projections
Pros and Cons
  • "I like the projection feature, which increases query performance."
  • "Limitations in GROUP BY projections are where I would like to see an improvement."

What is most valuable?

  • I found the columnar storage, which increases performance of sequential record access, to be the most valuable feature. 
  • I also like the projection feature, which increases query performance.

How has it helped my organization?

  • The workload on our ETL tools was reduced. 
  • All join operations were enhanced by creating identically segmented projections.

What needs improvement?

Limitations in GROUP BY projections are where I would like to see an improvement.

What was my experience with deployment of the solution?

We have not had any issues with deployment.

What do I think about the stability of the solution?

We have not had any issues with stability.

What do I think about the scalability of the solution?

We have been able to scale it for our needs.

What other advice do I have?

It is a good database that can be used for ad hoc queries as well as analytical queries.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
PeerSpot user
Senior business Intelligence consultant at Asociación SevillaUP
Consultant
Data warehouse response times have decreased. It doesn't support stored procedures in the way we are used to thinking of them.

What is most valuable?

Speed in query in general and specifically in aggregate functions on multi-million rows tables.

How has it helped my organization?

Data warehouse response times have decreased by one order of magnitude with respect to the previous solution (SQL Server + Oracle).

What needs improvement?

Sadly, it does not support stored procedures in the way we are used to thinking of them. There is the possibility of coding plug-ins in C++, but that's out of our reach. Correlated sub-queries are another point where we'd love to see enhancements, plus the overall choice of functions available. ETL with SSIS was not as easy as we had expected (you must remember to COMMIT, and we had some issues with datetime + timezone, but that was probably our fault).

The OLE DB and .NET providers need some polish, and another great improvement would be support for Entity Framework, which so far I haven't seen.

There is no serious graphical IDE for HPE Vertica, and that's frustrating. One free option is DbVisualizer for Vertica, but it's a bit basic.

For how long have I used the solution?

One year.

What do I think about the stability of the solution?

We have a one-node cluster on Red Hat, and last week the DB went down. The setting to restart the database is not very intuitive, and by default the DB does not restart on its own after a reboot. That may be fine in some environments, but it leaves you with a feeling of insecurity.

What do I think about the scalability of the solution?

Our DB is in the tens of gigabytes; we have not needed to scale yet.

How are customer service and technical support?

N/A, not used.

Which solution did I use previously and why did I switch?

We had SQL Server and switched for cost and space reasons. But we're not sure yet; SQL Server is way more stable and predictable.

How was the initial setup?

The documentation is scarce on non-standard setups. We had to create a virtual machine locally, set it up, and then upload it to AWS.

What's my experience with pricing, setup cost, and licensing?

We use the free community license, which has plenty of space for our environment. If I had an unlimited budget, I'd buy a preinstalled instance on EC2: much faster, but costly.

Which other solutions did I evaluate?

Netezza, but I didn't like it, for no particular reason; the feeling was not right. Redshift: I was not impressed by the performance. Google BigQuery: we tried it.

What other advice do I have?

Remember to COMMIT, and enable/enforce constraints, because by default they are NOT enforced!

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
PeerSpot user
Management Consultant at a computer software company with 51-200 employees
Vendor
A SQL-based compute platform like Vertica enables far less human overhead in operations and analytics.

What is most valuable?

Scale-out, analytical functions, ML.

How has it helped my organization?

We are an HP partner. A SQL-based compute platform like Vertica enables far less human overhead in operations and analytics.

What needs improvement?

More ML: data prep, models, evaluation, and workflow. Improved support for deep analytics/predictive modelling with machine learning algorithms. This area of analytics needs a stack of functionality in order to support the scenario. The needed functionality includes:

  • Data preparation. Scaling, centering, removing skewness, gap filling, pivoting, feature selection and feature generation
  • Algorithms/models. Non-linear models in general. More specifically, penalized models, tree/rule-based models (incl. ensembles), SVM, MARS, neural networks, k-nearest neighbours, Naive Bayes, etc.
  • Support the concept of a “data processing pipeline” with data prep. + model. One would typically use “a pipeline” as the overall logical unit used to produce predictions/scoring.
  • Automatic model evaluation/tuning. With algorithms requiring tuning, support for automated testing of different settings/tuning parameters is very useful. This should include (k-fold) cross-validation and bootstrap for model evaluation.
  • Some sort of hooks to use external models in a pipeline i.e. data prep in Vertica + model from Spark/R. 
  • Parity functionality for the Java SDK compared to C++. Today the C++ SDK is the most feature rich. The request is to bring (and keep) the Java SDK up to feature parity with C++.
  • Streaming data and notifications/alerts. Streaming data is starting to get well supported with the Kafka integration. Now we just need a hook to issue notifications on streaming data. That is, running some sort of evaluation on incoming records (as they arrive to the Vertica tables) and possibly raising a notification.
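The "pipeline" concept requested above, prep steps and a model fit and applied as one logical unit, can be sketched minimally. This is a toy stdlib example with hypothetical data, not a proposal for Vertica's actual API.

```python
# Sketch of a data processing pipeline: a centering/scaling prep step
# and a nearest-centroid "model" bundled into one unit that is fit once
# and then used for scoring. All data and names are hypothetical.
from statistics import mean, stdev

class Pipeline:
    def fit(self, xs, ys):
        # Prep step: learn centering/scaling parameters from the data.
        self.mu, self.sigma = mean(xs), stdev(xs)
        scaled = [(x - self.mu) / self.sigma for x in xs]
        # Model step: one centroid per class label, in scaled space.
        self.centroids = {
            label: mean(v for v, y in zip(scaled, ys) if y == label)
            for label in set(ys)
        }
        return self

    def predict(self, x):
        # Scoring reuses the same prep parameters learned during fit.
        z = (x - self.mu) / self.sigma
        return min(self.centroids, key=lambda c: abs(self.centroids[c] - z))

pipe = Pipeline().fit([1.0, 2.0, 10.0, 11.0], ["low", "low", "high", "high"])
print(pipe.predict(1.5))   # low
print(pipe.predict(10.5))  # high
```

The point of the abstraction is that scoring automatically applies the same preparation learned at fit time, which is what makes "data prep in one system, model in another" integration (e.g. prep in Vertica, model from Spark/R) tricky without such hooks.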

For how long have I used the solution?

Two years.

What was my experience with deployment of the solution?

No, not really.

What do I think about the stability of the solution?

No.

What do I think about the scalability of the solution?

No.

Which solution did I use previously and why did I switch?

PostgreSQL, MySQL, and SQL Server. We switched because of scalability, reliability, and analytics functionality, and because Vertica is a better-engineered product.

How was the initial setup?

Straightforward. Good docs helped a lot.

What's my experience with pricing, setup cost, and licensing?

It's reasonably priced for non-trivial data problems.

Which other solutions did I evaluate?

Yes, Hadoop / Spark, SQL Server.

What other advice do I have?

See additional functionality above.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are a vendor partner.
PeerSpot user
PeerSpot user
Sr. Developer, Big Data at a comms service provider with 51-200 employees
Vendor
The most valuable feature for me is the columnar data store.

Valuable Features:

Columnar data store

Room for Improvement:

Add geospatial indexes (sounds like they have done it in version 8.0)

Deployment Issues:

No

Stability Issues:

No

Scalability Issues:

No

Customer Service:

Above average

Initial Setup:

Setup was very simple

Disclosure: My company has a business relationship with this vendor other than being a customer: We are partners with HPE
PeerSpot user
it_user471384 - PeerSpot reviewer
CIO at a tech services company with 1,001-5,000 employees
Consultant
It works well. When we ran into issues, there seemed to be a lot of different opinions for how to resolve them.

What is most valuable?

We use Vertica as our primary data warehouse. It works well, relatively, most of the time.

What needs improvement?

I just expect it to work and be serviceable. When we ran into issues, there seemed to be a lot of different opinions on how to resolve them, and that was the feedback I gave them. You talk to one tech, then you talk to a different tech, and they have a much different approach. That was a big frustration point for us.

The same was true of the upgrade path and which way we should go. In the end, it created a lot of confusion for us, so I wouldn't upgrade it again lightly. We're going to remain on it for the next year, but we'll probably re-evaluate at that point whether we want to continue with Vertica or something else.

What do I think about the stability of the solution?

It's been stable since November and before that, to be fair, it was stable for quite a while.

What do I think about the scalability of the solution?

The reason we like Hadoop and others is that they scale up without pricing scaling up at the same rate. Vertica is a license-per-terabyte product. They do give you discounts the more volume you get, but it adds up fast over time. We could scale at a lower cost with other solutions.

Scaling was a pain point. Getting recommendations on how to set it up to provide the best performance, how many nodes, and other things, was difficult; we got different answers from them.

Which solution did I use previously and why did I switch?

We use MongoDB for some of our other internal production apps. It's a lot more involved and more complex than we'd like for just a standard data warehouse, but we might look at Hadoop or something similar for that.

How was the initial setup?

There are a lot of complexities with the upgrade and the costs of data failures. That was last year; it was kind of good that I forgot about those pain points.

What other advice do I have?

I would recommend that they highly evaluate all their options. If they're just going to run a small data warehouse, it's probably not a bad solution. If it's something they know is going to grow dramatically and unpredictably? I don't know. I would evaluate hard.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user427470 - PeerSpot reviewer
Technical Team Lead, Business Intelligence at a tech company with 501-1,000 employees
Vendor
The most valuable feature is the merge function, which is essentially the upsert function. We've had issues with query time taking longer than expected for our volume of data.

What is most valuable?

The most valuable feature is the merge function, which is essentially the upsert function. It's become our ELT pattern. Previously, when we used the ETL tool to manage upserts, the load time was significantly longer. The merge function load time is pretty much flat relative to the volume of records processed.
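The upsert pattern described above can be sketched as follows. This is an illustrative model of the update-or-insert semantics, not Vertica's MERGE statement; the table and batch contents are hypothetical.

```python
# Sketch of the merge/upsert pattern: each row in a staged batch either
# updates an existing target row by key or inserts a new one, in a
# single pass over the batch.
warehouse = {101: {"qty": 5}, 102: {"qty": 3}}            # target, keyed by id
batch = [{"id": 102, "qty": 7}, {"id": 103, "qty": 1}]    # staged batch

for row in batch:
    # One operation handles both the "matched" (update) and
    # "not matched" (insert) cases; cost scales with the batch size.
    warehouse[row["id"]] = {"qty": row["qty"]}

print(sorted(warehouse.items()))
```

The key property the reviewer highlights, load time staying flat relative to the batch size rather than the table size, falls out of doing the work in one keyed pass instead of separate update-then-insert sweeps.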

How has it helped my organization?

HP Vertica has helped us democratize data, making it available to users across the organization.

What needs improvement?

We've had issues with query time taking longer than expected for our volume of data. However, this is due to not understanding the characteristics of the database and how to better tune its performance.

For how long have I used the solution?

We've been using HP Vertica for three years, but only in the last year have we really started to leverage it more. We're moving to a clustered environment to support the scale out of our data warehouse.

We use it as the database for our data warehouse. In its current configuration, we use it as a single node, but we're moving to a clustered environment, which is what the vendor recommends.

What was my experience with deployment of the solution?

We had no issues with the deployment.

What do I think about the stability of the solution?

We've had no issues with the stability.

What do I think about the scalability of the solution?

We've had no issues scaling it.

How are customer service and technical support?

I'd rate technical support as low to average. The tech support provides the usual canned responses. We've had to learn most of how to harness the tool on our own.

Which solution did I use previously and why did I switch?

I haven't used anything similar.

How was the initial setup?

HP Vertica was in place when I joined the company, but it wasn't used as extensively as it is now.

What about the implementation team?

We implemented it in-house, I believe.

What other advice do I have?

Loading into HP Vertica is straightforward, similar to other data warehouse appliance databases such as Netezza. However, tuning it for querying requires a lot more thought. It uses projections that are similar to indexes. Knowing how to properly use projections does take time. One thing to be mindful of with columnar databases is that the fewer the columns in your query, the faster the performance. The number of rows impacts query time less.

My advice would be to try out the database connecting to your ETL tools, and perform time studies on the load and query times. It's a good database. It works similarly to Netezza in my experience, but it is a lot cheaper. Pricing is based on the size of the database.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user