
NetApp Cloud Volumes ONTAP vs Zerto comparison


Executive Summary
Updated on Jan 12, 2025

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

IBM Turbonomic (Sponsored)
Ranking in Cloud Migration: 5th
Average Rating: 8.8
Reviews Sentiment: 7.4
Number of Reviews: 205
Ranking in other categories: Cloud Management (4th), Virtualization Management Tools (4th), IT Financial Management (1st), IT Operations Analytics (4th), Cloud Analytics (1st), Cloud Cost Management (1st), AIOps (5th)

NetApp Cloud Volumes ONTAP
Ranking in Cloud Migration: 1st
Average Rating: 8.8
Reviews Sentiment: 7.0
Number of Reviews: 62
Ranking in other categories: Cloud Storage (1st), Cloud Backup (9th), Public Cloud Storage Services (5th), Cloud Software Defined Storage (1st)

Zerto
Ranking in Cloud Migration: 3rd
Average Rating: 9.0
Reviews Sentiment: 7.2
Number of Reviews: 304
Ranking in other categories: Backup and Recovery (2nd), Cloud Backup (2nd), Disaster Recovery (DR) Software (2nd)
 

Mindshare comparison

As of April 2025, in the Cloud Migration category, the mindshare of IBM Turbonomic is 4.0%, down from 5.3% compared to the previous year. The mindshare of NetApp Cloud Volumes ONTAP is 15.2%, down from 19.7% compared to the previous year. The mindshare of Zerto is 5.1%, up from 3.0% compared to the previous year. It is calculated based on PeerSpot user engagement data.
 

Featured Reviews

Keldric Emery - PeerSpot reviewer
Saves time and costs while reducing performance degradation
It's been a very good solution. The reporting has been very, very valuable as, with a very large environment, it's very hard to get your hands on the environment. Turbonomic does that work for you and really shows you where some of the cost savings can be done. It also helps you with the reporting side. Me being able to see that this machine hasn't been used for a very long time, or seeing that a machine is overused and that it might need more RAM or CPU, et cetera, helps me understand my infrastructure. The cost savings are drastic in the cloud feature in Azure and in AWS. In some of those other areas, I'm able to see what we're using, what we're not using, and how we can change to better fit what we have. It gives us the ability for applications and teams to see the hardware and how it's being used versus how they've been told it's being used. The reporting really helps with that. It shows which application is really using how many resources or the least amount of resources. Some of the gaps between an infrastructure person like myself and an application are filled. It allows us to come to terms by seeing the raw data. This aspect is very important. In the past, it was me saying "I don't think that this application is using that many resources" or "I think this needs more resources." I now have concrete evidence as well as reporting and some different analytics that I can show. It gives me the evidence that I would need to show my application owners proof of what I'm talking about. In terms of the downtime, meantime, and resolution that Turbonomic has been able to show in reports, it has given me an idea of things before things happen. That is important as I would really like to see a machine that needs resources, and get resources to it before we have a problem where we have contention and aspects of that nature. It's been helpful in that regard. Turbonomic has helped us understand where performance risks exist. Turbonomic looks at my environment and at the servers and even at the different hosts and how they're handling traffic and the number of machines that are on them. I can analyze it and it can show me which server or which host needs resources, CPU, or RAM. Even in Azure, in the cloud, I'm able to see which resources are not being used to full capacity and understand where I could scale down some in order to save cost. It is very, very helpful in assessing performance risk by navigating underlying causes and actions. The reason why it's helpful is because if there's a machine that's overrunning the CPU, I can run reports every week to get an idea of machines that would need CPU, RAM, or additional resources. Those resources could be added by Turbonomic - not so much by me - on a scheduled basis. I personally don't have to do it. It actually gives me a little bit of my life back. It helps me to get resources added without me physically having to touch each and every resource myself. Turbonomic has helped to reduce performance degradation in the same way as it's able to see the resources and see what it needs and add them before a problem occurs. It follows the trends. It sees the trends of what's happening and it's able to add or take away those resources. For example, we discuss when we need to do certain disaster recovery tests. Over the years, Turbo will be able to see, for example, around this time of year that certain people ramp up certain resources in an environment, and then it will add the resources as required. 
Another time of year, it will realize these resources are not being used as much, and it takes those resources away. In this way, it saves money and time while letting us know where we are. We've saved a great deal of time using this product when I consider how I'd have to multiply myself and people like me who would have to add resources to devices or take resources away. We've saved hundreds of hours. Most of the time those hours would have to be after hours as well, which are more valuable to me as that's my personal time. Those saved hours are across months, not years. I would consider the number of resources that Turbonomic is adding and taking away and the placement (if I had to do it all myself) would end up being hundreds of hours monthly that would be added without the help of Turbonomic. It helps us to meet SLAs mainly due to the fact that we're able to keep the servers going and to keep the servers in an environment, to keep them to where (if we need to add resources) we can add them at any given time. It will keep our SLAs where they need to be. If we were to have downtime due to the fact that we had to add resources or take resources away and it was an emergency, then that would prevent us from meeting our SLAs. We also use it to monitor Azure and to monitor our machines in terms of the resources that are out there and the cost involved. In a lot of cases, it does a better job of giving us cost information than Azure itself does. We're able to see the cost per machine. We're able to see the unattached volume and storage that we are paying for. It gives us a great level of insight. Turbonomic gives us the time to be able to focus on innovation and ongoing modernization. Some of the tasks that it does are tasks that I would not necessarily have to do. It's very helpful in that I know that the resources are there where they need to be and it gives me an idea of what changes need to be made or what suggestions it's making. Even if I don't take them, I'm able to get a good idea of some best practices through Turbonomic. One of the ways that Turbonomic does to help bring new resources to market is that we are now able to see the resources (or at least monitor the resources) before they get out to the general public within our environment. We saw immediate value from the product in the test environment. We set it up in a small test environment and we started with just placement and we could tell that the placement was being handled more efficiently than what VMware was doing. There was value for us in placement alone. Then, after we left the placement, we began to look at the resources and there were resources. We immediately began to see a change in the environment. It has made the application and performance better, mainly due to the fact that we are able to give resources and take resources away based on what the need is. Our expenses, definitely, have been in a better place based on the savings that we've been able to make in the cloud and on-prem. Turbonomic has been very helpful in that regard. We've been able to see the savings easily based on the reports in Turbonomic. That, and just seeing the machines that are not being used to capacity allows us to set everything up so it runs a bit more efficiently.
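The Turbonomic review above describes the product watching utilization trends and resizing VMs on a schedule. As a rough, generic illustration of that right-sizing idea (not Turbonomic's actual algorithm), the Python sketch below picks the smallest instance size whose vCPU capacity covers a high percentile of observed CPU demand plus a safety margin; the size table, headroom factor, and sample data are invented for the example.

```python
# Illustrative right-sizing sketch -- NOT Turbonomic's actual algorithm.
# Picks the smallest hypothetical instance size whose vCPU capacity covers
# the 95th percentile of observed CPU demand plus a safety headroom.
from statistics import quantiles

# Hypothetical size table: name -> vCPUs (values invented for this example)
SIZES = {"small": 2, "medium": 4, "large": 8, "xlarge": 16}

def recommend_size(cpu_demand_vcpus, headroom=1.2):
    """cpu_demand_vcpus: sampled CPU demand in vCPU-equivalents."""
    # 95th percentile of demand, scaled by the headroom factor
    p95 = quantiles(cpu_demand_vcpus, n=20)[-1]
    required = p95 * headroom
    for name, vcpus in sorted(SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= required:
            return name, vcpus
    # Demand exceeds every size in the table; fall back to the largest
    return max(SIZES.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    # A week of hourly samples for a VM that rarely exceeds ~3 vCPUs of demand
    samples = [1.5, 2.0, 2.8, 3.1, 2.2, 1.8, 2.5] * 24
    print(recommend_size(samples))  # -> ('medium', 4)
```

In practice a tool of this kind would also weigh memory, I/O, and cost before acting; automating that evaluation and the resulting resize actions is the part the reviewer credits Turbonomic with handling.
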
Pramod-Talekar - PeerSpot reviewer
Allows customers to manage SAN and NAS data within a single storage solution
The tool's most valuable features are the SnapLock and SnapMirror features. If something goes wrong with the data, we can restore it. This isn't a mirror; we store data in different locations. If there's an issue on the primary site, we can retrieve data from the secondary site. Multiprotocol support in NetApp Cloud Volumes ONTAP is beneficial because it allows customers to manage SAN and NAS data within a single storage solution. This feature eliminates the need to purchase different types of storage.
Sachin Vinay - PeerSpot reviewer
Leverage disaster recovery with reliable support and cost-effective future-proof features
Zerto is straightforward to implement because it only requires the installation of an agent on the VMs designated for migration. A service, typically a VM, must also be deployed at the disaster recovery location. This entire process is simple and can be completed within three days. Zerto's near-synchronous replication occurs every minute, allowing for highly granular recovery points. This means that even if interruptions or malware disruptions occur within that minute, Zerto can restore to the last known good state, effectively recovering the entire setup to the latest backup. This capability ensures high data security and minimizes potential data loss. One of the main benefits of implementing Zerto is its data compression, which significantly reduces the load on our IPsec VPN. Zerto compresses data by 80 percent before transmitting it across the VPN, minimizing the data transferred between geographically dispersed locations. This compression and subsequent decompression at the destination alleviate the strain on the VPN, preventing overload and ensuring efficient data synchronization. Zerto simplifies malware protection by integrating it into its disaster recovery and synchronization features. This comprehensive approach eliminates the need for separate antivirus setups in virtual machines and applications. It streamlines our security measures and removes the need for additional software or solutions, resulting in an excellent return on investment. Zerto's single-click recovery solution offers exceptional recovery speed. Through the user interface, a single click allows for a complete restoration from the most recent backup within two to three minutes, enabling rapid recovery and minimal downtime. Zerto's Recovery Time Objective is excellent. In the past, if a virtual machine crashed, we would recover it from a snapshot, which could take one to two hours. With Zerto, the recovery process takes only five minutes, and users are typically unaware of any disruption. This allows us to restore everything quickly and efficiently. Zerto has significantly reduced our downtime. When malware affects our data, Zerto immediately notifies us and helps us protect other applications, even those not yet implemented with Zerto. By monitoring these applications, we can quickly identify and address any potential malware spread, minimizing downtime across our systems. Zerto significantly reduces downtime and associated costs during disruptions. Our services are unified, so in the event of a disruption without Zerto, even a half-day disruption would necessitate offline procedures. This would lead to increased manpower, service delays, and substantial financial losses due to interrupted admissions and other critical processes. By unifying service processes, Zerto minimizes the impact of outages. Zerto streamlines our disaster recovery testing across multiple locations by enabling efficient failover testing without disrupting live services. Traditionally, DR testing required downtime of critical systems, but Zerto's replication and failover capabilities allow us to test in parallel with live operations. This non-disruptive approach ensures continuous service availability while validating our DR plan, even in scenarios like malware attacks, by creating a separate testing environment that mirrors the live setup. This comprehensive testing provides confidence in our ability to handle real-world incidents effectively. This saves us over 60 percent of the time. 
Zerto streamlines system administration tasks by automating many processes, thereby reducing the workload for multiple administrators. This allows them to focus on other university services that require attention and effectively reallocate support resources from automated tasks to those requiring more dedicated management. Zerto is used exclusively for our critical services, providing up to a 70 percent improvement in our IT resilience.
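The Zerto review above attributes much of the VPN relief to compressing replication traffic (roughly 80 percent, per the reviewer) before it crosses the IPsec tunnel. As a generic compress-before-transfer illustration only, not Zerto's replication engine, here is a minimal Python sketch using zlib; the function names and sample payload are hypothetical, and real savings depend entirely on how compressible the workload's change blocks are.

```python
# Generic compress-before-transfer illustration -- not Zerto's replication engine.
# Shows how compressing a change block before it crosses a WAN/VPN link
# reduces the bytes actually transmitted to the recovery site.
import zlib

def prepare_for_transfer(change_block: bytes, level: int = 6) -> bytes:
    """Compress a replication change block before sending it over the VPN."""
    return zlib.compress(change_block, level)

def restore_at_destination(payload: bytes) -> bytes:
    """Decompress the payload at the disaster-recovery site."""
    return zlib.decompress(payload)

if __name__ == "__main__":
    # Highly repetitive sample data compresses well; real savings depend on the workload.
    block = b"customer-record;status=active;" * 4096
    wire = prepare_for_transfer(block)
    assert restore_at_destination(wire) == block
    saved = 100 * (1 - len(wire) / len(block))
    print(f"original={len(block)} bytes, sent={len(wire)} bytes, saved={saved:.0f}%")
```
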

Quotes from Members

 

Pros

"I have the ability to automate things similar to the Orchestrator stuff. I do have the ability to have it do some balancing, and if it sees some different performance metrics that I've set not being met, it'll actually move some of my virtual machines from, let's say, one host to another. It is sort of an automation tool that helps me. Basically, I specify the metric, and if I get a certain host or something being over-utilized, it'll automatically move the virtual machines around for me. It basically has to snap into my vCenter and then it can make adjustments and move my virtual machines around. It also has some very nice reporting tools built around virtual machines. It tells you how much storage, memory, or CPU is being used monthly, and then it gives you a very nice way to be able to send out billing structure to your end users who use servers within your environment."
"With over 2500 ESX VMs, including 1500+ XenDesktop VDI desktops, hosted over two datacentres and 80+ vSphere hosts, firefighting has become something of the past."
"In our organization, optimizing application performance is a continuous process that is beyond human scale. We would not be able to do the number of actions that Turbonomic takes on a daily, weekly, and monthly basis. It is humanly impossible with the little micro adjustments that it can make. That is a huge differentiator. If you just figure each action could take anywhere very conservatively from five to 10 minutes to act upon, then you multiply that out by thousands of actions every month, it is easily something where you could say, "I am saving a couple of FTEs.""
"The primary features we have focused on are reporting and optimization."
"It also brings up a list of machines and if something is under-provisioned and needs more compute power it will tell you, 'This server needs more compute power, and we suggest you raise it up to this level.' It will even automatically do it for you. In Azure, you don't have to actually go into the cloud provider to resize. You can just say, 'Apply these resizes,' and Turbonomic uses some back-end APIs to make the changes for you."
"The most important feature to us is an objective measurement of VM headroom per cluster. In addition, the ability to check for the right-sizing of VMs."
"The recommendation of the family types is a huge help because it has saved us a lot of money. We use it primarily for that. Another thing that Turbonomic provides us with is a single platform that manages the full application stack and that's something I really like."
"The solution has a good optimization feature."
"There is unified storage, which provides flexibility. It is set up perfectly for performance and provisioning. We are able to monitor everything using a separate application. It provides error and critical warnings that allow us to take immediate action through ONTAP. We are able to manage everything, log a case, and follow up with the support team, who can fix it. That is how it is unified."
"The ability to see things going back and forth has been quite useful."
"CVO gives us the ability to access data as quickly as possible, which is critical because of the mission set we handle. Some things cannot wait. For example, we tried having the data in the cloud itself, but it took too long for us to retrieve it from cold or deep storage. If we have it ONTAP or on-prem, it's so much easier to pull it within minutes."
"One of the most valuable features is its similarity to the physical app, which makes it familiar. It's almost identical to a real NetApp, which means you can run all of the associated NetApp processes and services with it. Otherwise, we would definitely have to deploy some hardware on a site somewhere, which could be a challenge in terms of CapEx."
"The Cloud Manager application that's on the NetApp cloud site is easy to use. You can set up and schedule replications from there, so you don't have to go into the ONTAP system. Another feature we've recently started using is the scheduled power off. We started with one client and have been slowly implementing it with others. We can cut costs by not having the VM run all the time. It's only on when it's doing replication, but it powers off after."
"The most valuable features are that it's reliable, simple, and performs well."
"NetApp's XCP Migration Tool... was pretty awesome. It replicated the data faster than any other tool that I've seen. That was a big help."
"The solution’s unified file and block-storage access across our infrastructure is invaluable. Without it, we can't do what we do."
"The ease of failover and test environments has proven invaluable."
"We are in the process of switching over our production data center and Zerto has been a true time-saver that has cost us zero downtime."
"For us, the most valuable features are the quick upload time and how the sync works... We have VMware SRM and Veeam, and they have been pretty slow and sluggish."
"Continuous replication is the primary feature we use now because we originally purchased Zerto. I'm starting to utilize the long-term retention and instantaneous file restoration features, which have been introduced since the original purchase in 2015. Initially, we deployed Zerto as a second data storage point, but ultimately it will probably facilitate some of the migration of my workloads up to the cloud. It's evolving with the network and how we deliver computation."
"The low SLA times are valuable. It is very easy to use with a straightforward user setup."
"The most valuable feature of Zerto is the quick recovery time."
"I like the less than one-minute RPO, the ability to IP customize during failovers, and the cloning feature that I can use to clone VMs over at the target location. As part of the automation failover, if we need to change an IP when it fails over to the other data center, Zerto will handle that; there's no need for manual intervention. As far as the cloning, we use that to do quick testing of a VM in the remote data center for lift-and-shift processors."
"Failover using Zerto is simply a one-button click, and it does everything else in restoring the VMs at a different datacenter (recovery site)."
 

Cons

"Some features are only available via changes to the deployment YAML, and it would be better to have them in the UI."
"Turbonomic can modernize the look and feel, making it more user-friendly to access and obtain information."
"It would be nice for them to have a way to do something with physical machines, but I know that is not their strength Thankfully, the majority of our environment is virtual, but it would be nice to see this type of technology across some other platforms. It would be nice to have capacity planning across physical machines."
"There are a few things that we did notice. It does kind of seem to run away from itself a little bit. It does seem to have a mind of its own sometimes. It goes out there and just kind of goes crazy. There needs to be something that kind of throttles things back a little bit. I have personally seen where we've been working on things, then pulled servers out of the VMware cluster and found that Turbonomic was still trying to ship resources to and from that node. So, there has to be some kind of throttling or ability for it to not be so buggy in that area. Because we've pulled nodes out of a cluster into maintenance mode, then brought it back up, and it tried to put workloads on that outside of a cluster. There may be something that is available for this, but it seems very kludgy to me."
"In Azure, it's not what you're using. You purchase the whole 8 TB disk and you pay for it. It doesn't matter how much you're using. So something that I've asked for from Turbonomic is recommendations based on disk utilization. In the example of the 8 TB disk where only 200 GBs are being used, based on the history, there should be a recommendation like, "You can safely use a 500 GB disk." That would create a lot of savings."
"The reporting needs to be improved. It's important for us to know and be able to look back on what happened and why certain decisions were made, and we want to use a custom report for this."
"The GUI and policy creation have room for improvement. There should be a better view of some of the numbers that are provided and easier to access. And policy creation should have it easier to identify groups."
"Recovering resources when they're not needed is not as optimized as it could be."
"There is room for improvement with the capacity. There's a very hard limit to how many disks you can have and how much space you can have. That is something they should work to fix, because it's limiting. Right now, the limit is about 360 terabytes or 36 disks."
"I rate the scalability a five out of ten."
"The data tiering needs improvement. E.g., moving hard data to faster disks."
"I think the challenge now is more in terms of keeping an air gap. The notion that it is in the cloud, easy to break, etc. The challenge now is mostly about the air gap and how we can protect that in the cloud."
"We would like to have support for high availability in multi-regions."
"I would like to have more management tools. They are difficult to work with, so I would like them to be a bit more user-friendly."
"When it comes to a critical or a read-write-intensive application, it doesn't provide the performance that some applications require, especially for SAP. The SAP HANA database has a write-latency of less than 2 milliseconds and the CVO solution does not fit there. It could be used for other databases, where the requirements are not so demanding, especially when it comes to write-latency."
"If they could include clustering together multiple physical Cloud Volumes ONTAP devices as an option, that could be helpful."
"Zerto generates many false positive alerts, which is annoying. I still have thousands of alerts in my inbox, and those are false alerts. When I check there's actually no problem."
"I would like to see some graphical improvements in Zerto's interface. There's an option to export a list of all of our servers, but the information isn't presented the way we want. We want it in a specified sequence broken down by region, etc. We can't manipulate the data when we export it. Maybe they could change it to look more like an Excel sheet, and we can customize the graphics and data. We suggested these improvements to Zerto through their portal."
"The technical support is hit or miss."
"There are quite a few elements in the long-term retention areas that I wish were better. The bio-level recovery indexing of backups is the area I struggle with the most. That's probably because I desire to do tasks that ordinary users wouldn't do with the solution. The standard medium to large customer would probably never ask for anything like I ask for, so I think it's pretty good the way it is. I'm excited to see some of the new improvements coming in the 9.5 version. Some of the streamlines and how the product presents itself for some of the recovery features could be better."
"It would be nice if we were able to purchase single licenses for Zerto. As it is now, scaling requires that we purchase a multi-pack."
"Zerto's documentation is outdated. I'm finding it hard to find documents related to my questions. Their documentation is bad."
"There can be a bit more logging. It seems a bit harder to find logs for test restores and all that. If they had a way to email the results of a test restore, that would be excellent."
"We had a situation where we had to relicense VMs once they were moved over. We later found out that that feature is built-in, but it's not easy to find. The way it's done is that you have to go to the target site to turn it on. If that were explained a little bit better up front, that would be helpful."
 

Pricing and Cost Advice

"What I can advise is to trial the product, taking advantage of the Turbonomic pre-sales implemention support and kickstart training."
"In the last year, Turbonomic has reduced our cloud costs by $94,000."
"When we have expanded our licensing, it has always been easy to make an ROI-based decision. So, it's reasonably priced. We would like to have it cheaper, but we get more benefit from it than we pay for it. At the end of the day, that's all you can hope for."
"It was an annual buy-in. You basically purchase it based on your host type stuff. The buy-in was about 20K, and the annual maintenance is about $3,000 a year."
"Everybody tells me the pricing is high. But the ROIs are great."
"If you're a super-small business, it may be a little bit pricey for you... But in large, enterprise companies where money is, maybe, less of an issue, Turbonomic is not that expensive. I can't imagine why any big company would not buy it, for what it does."
"I'm not involved in any of the billing, but my understanding is that is fairly expensive."
"The pricing is in line with the other solutions that we have. It's not a bargain software, nor is it overly expensive."
"We purchased the product directly from NetApp."
"The standard pricing is online. Pricing depends. If you're using the PayGo model, then it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you get with your sales contact at NetApp and start figuring out what price is the best, in the end, for your company."
"We find the pricing to be favorable due to the educational sector we belong to."
"In addition to the standard licensing fees, there are fees for Azure, the VMs themselves and for data transfer."
"Our licensing is based on a yearly subscription. That is an additional cost, but because of the storage efficiencies that the NetApp gives, even with the additional cost of the NetApp license, you still end up saving money versus straight Azure native for storage. It's definitely worth it."
"Compared to other storage vendors, NetApp, is not always able to compete with their pricing. Yet, we acknowledge the ease of use ONTAP brings with the AWS integration."
"Our licensing costs are folded into the hardware purchases and I have never differentiated between the two."
"They allow a special price if you are working closely with them. Since we have a lot of NetApp systems, we got some kind of discount. That's something they do for other customers, not just for us. The price was fair. In addition to the licensing fees, you're paying Amazon for your usage..."
"When they changed their licensing model, pricing might have gotten a little more expensive for some use cases, but it has been pretty straightforward."
"Zerto is very cost-effective. We get really great value for the cost of the service."
"Pricing is adequate at the standard of the product, but there could be "always" some improvement. We would like to see a consumption model that would charge in a DR scenario, where you're failing over and consuming those resources, instead of a per-protected-node model."
"The pricing is slightly above average, but the immediate and comprehensive support makes the price acceptable."
"You need to figure out how many critical servers and applications you have in your environment so you will know how many Zerto licenses to buy."
"They should adjust the pricing because I feel its price is too much. If they reduce the price, there will be more users and customers."
"I do not like the current pricing model because the product has been divided into different components and they are charging for them individually."
"It is cost-effective."
 

Comparison Review

it_user159711 - PeerSpot reviewer
Nov 9, 2014
VMware SRM vs. Veeam vs. Zerto
Disaster recovery planning is something that seems challenging for all businesses. Virtualization, in addition to its operational flexibility and cost-reduction benefits, has helped companies improve their DR posture. Virtualization has made it easier to move machines from production to…
 

Top Industries

By visitors reading reviews

IBM Turbonomic: Financial Services Firm (15%), Computer Software Company (13%), Manufacturing Company (10%), Insurance Company (7%)
NetApp Cloud Volumes ONTAP: Educational Organization (51%), Manufacturing Company (10%), Computer Software Company (8%), Financial Services Firm (7%)
Zerto: Computer Software Company (22%), Financial Services Firm (11%), Manufacturing Company (8%), Healthcare Company (8%)
 

Company Size

By reviewers: Large Enterprise, Midsize Enterprise, Small Business
 

Questions from the Community

What is your experience regarding pricing and costs for Turbonomic?
It offers different scenarios. It provides more capabilities than many other tools available. Typically, its price is...
What needs improvement with Turbonomic?
The implementation could be enhanced.
What is your primary use case for Turbonomic?
We use IBM Turbonomic to automate our cloud operations, including monitoring, consolidating dashboards, and reporting...
What do you like most about NetApp Cloud Volumes ONTAP?
So a lot of these licenses are at the rate that is required for capacity, so they're able to reduce the licen...
What advice do you have for others considering Oracle Data Guard?
I'll whisper it: we happened to replace VM Host Oracle and DataGuard with Zerto :-) during the Zerto implementation ...
What do you like most about Zerto?
Its ability to roll back if the VM or the server that you are recovering does not come up right is also valuable. You...
What is your experience regarding pricing and costs for Zerto?
The setup is somewhat expensive. I'd rate the pricing seven out of ten.
 

Also Known As

Turbonomic, VMTurbo Operations Manager
ONTAP Cloud, CVO, NetApp CVO
Zerto Virtual Replication
 

 

Overview

 

Sample Customers

IBM, J.B. Hunt, BBC, The Capita Group, SulAmérica, Rabobank, PROS, ThinkON, O.C. Tanner Co.
Accenture, Acer, Adidas, Aetna, AIG, Apple, Bank of America, Barclays, Bayer, Berkshire Hathaway, BNP Paribas, Cisco, Coca-Cola, Comcast, ConocoPhillips, CVS Health, Dell, Deutsche Bank, eBay, Eli Lilly, FedEx, Ford, Freescale Semiconductor, General Electric, Google, Honeywell, IBM, Intel, Intuit, JPMorgan Chase, Kellogg's, KeyCorp, Liberty Mutual, L'Oréal, Mastercard
United Airlines, HCA, XPO Logistics, TaxSlayer, McKesson, Insight Global, American Airlines, Tencate, Aaron’s, Grey’s County, Kingston Technologies
Find out what your peers are saying about NetApp Cloud Volumes ONTAP vs. Zerto and other solutions. Updated: April 2025.