Much of our focus has been on capacity planning, which includes virtual machine rightsizing and building failover and resiliency models. The other area that is important to us is protecting data at rest and data in transit.
By implementing Zerto, we wanted to focus on workload migration and disaster recovery.
I can quickly restore data, reverting to roughly a nightly backup if needed, but more importantly I can recover from journal checkpoints, and those checkpoints can be as little as about five seconds apart.
It also has strong capabilities for working with virtual protection groups (VPGs). Monitoring is very important for us. We work with Splunk for logs, metrics, and traces, and the data I get includes system throughput as well as CPU and RAM I/O.
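As a rough illustration of how that kind of replication telemetry can be fed into Splunk, here is a minimal sketch that posts a throughput event to Splunk's HTTP Event Collector (HEC). The hostname, token, sourcetype, and metric names are hypothetical placeholders, not values from our environment.

```python
import requests

# Hypothetical Splunk HEC endpoint and token; replace with your own.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_metric(vpg_name: str, throughput_mbps: float, cpu_pct: float) -> None:
    """Post a single replication-health event to Splunk via HEC."""
    event = {
        "sourcetype": "zerto:replication",  # assumed sourcetype naming
        "event": {
            "vpg": vpg_name,
            "throughput_mbps": throughput_mbps,
            "cpu_percent": cpu_pct,
        },
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        json=event,
        timeout=10,
    )
    resp.raise_for_status()

send_metric("prod-sql-vpg", throughput_mbps=84.2, cpu_pct=37.5)
```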
I have used Zerto for immutable data copies, following a 3-2-1 strategy: three copies of the data (production plus two backups), stored on two different media, with one copy off-site. Zerto supports that model.
It helps a great deal with malware; Zerto has a ransomware protection capability.
I have used other solutions alongside Zerto. Zerto focuses on isolating and locking data with a cyber resiliency vault, and around that vault I have been working with the Delinea Privileged Access Manager solution, so some areas intersect with other tools in our stack. I would love to see more use cases covered by Zerto itself so that I do not have to defer to anything else.
It has enabled us to do disaster recovery (DR) in the cloud, rather than in a physical data center. I think of it as a cloud migration tool. Having DR in the cloud is very important for our organization. I use it with Microsoft Azure.
With Zerto, I have seen five-second near-synchronous replication, which means thousands of checkpoints in a day, and on top of that I can layer periodic backups, spacing them out as twelve-hour snapshots so that we keep one to three backup points per day. I can recover to a state seconds before any sort of attack, and I can use Zerto's built-in orchestration and automation to fail over an entire site without disruption. Those are the big positives. There is a lot of information it can gather through near-synchronous replication. By contrast, other disaster recovery and backup offerings I have seen focus on installing a container image or some binary file and deploying from there.
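To give a sense of how that orchestration can be scripted, here is a minimal sketch against the Zerto Virtual Manager (ZVM) REST API. It assumes the /v1/session/add and /v1/vpgs endpoints and PascalCase field names as I recall them from Zerto's API documentation; the ZVM address and credentials are placeholders, so verify everything against your version's API reference.

```python
import requests

# Hypothetical ZVM address and credentials; verify against your deployment.
ZVM = "https://zvm.example.com:9669"

# Authenticate: Zerto returns a session token in the x-zerto-session header.
auth = requests.post(
    f"{ZVM}/v1/session/add",
    auth=("administrator", "password"),  # placeholder credentials
    verify=False,  # lab only; use proper certificates in production
)
auth.raise_for_status()
session = {"x-zerto-session": auth.headers["x-zerto-session"]}

# List VPGs and report each one's actual RPO in seconds.
vpgs = requests.get(f"{ZVM}/v1/vpgs", headers=session, verify=False).json()
for vpg in vpgs:
    print(vpg["VpgName"], "actual RPO (s):", vpg.get("ActualRPO"))
```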
I find it easy to migrate data. Once somebody understands how Zerto works, particularly the analytics and automation areas, the reference architecture lets them deploy it quickly.
We get a lot of visibility for proactive management through SLA monitoring, run metrics, and more. We are able to test infrastructure using live, personalized data, which in turn makes it very much a team effort.
Zerto provides complete visibility into storage and consumption data; we know our capacity and application volumes. I can also address compliance aspects such as PCI DSS, which is important for us as part of our RPO requirements.
They have intelligent, predictive infrastructure planning, so I can determine the required compute, storage, and server networking resources, whether on-premises or in the cloud.
It also saves recovery time, and we monitor that closely. In terms of time savings, we can set up a backup quickly, work out integration details through the APIs, and meet our client-security requirements. After that comes the cost consideration. Better documentation on the restoration process would be helpful.
Ransomware is one area where we are using Zerto. Another solution might have been AWS-specific, and we might not have gotten much assistance with our public cloud vendor as a result. We would have had to figure out how to work with an XDR or another way of ingesting that data for vulnerabilities, and how to handle encryption afterward. Some vendors we might consider do not support Azure; they are AWS-focused.
Zerto has helped reduce our organization's DR testing effort. We can run failover tests seamlessly and routinely, which saves time and helps us distinguish between our RTO and RPO.
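As a sketch of how such a routine failover test might be kicked off through the API, the snippet below reuses the session and VPG list from the earlier example. The FailoverTest endpoint and the VpgIdentifier field are my assumptions based on Zerto's published REST API; confirm them for your version before relying on this.

```python
# Continuing from the earlier session sketch: trigger a failover test
# for every VPG. Endpoint and field names are assumptions; check the docs.
for vpg in vpgs:
    vpg_id = vpg["VpgIdentifier"]  # assumed field name
    resp = requests.post(
        f"{ZVM}/v1/vpgs/{vpg_id}/FailoverTest",
        headers=session,
        json={},  # defaults: latest checkpoint, default test network
        verify=False,
    )
    print(vpg["VpgName"], "failover test started:", resp.status_code)
```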
Zerto has not reduced the number of staff involved in overall backup and DR management; our team size is roughly the same. However, we do not need to hire external consultants to support a project.
If I wanted to focus on operational recovery, such as recovering database instances with about 15 seconds of data loss, there are systems administrators assigned to take care of that. With Zerto, someone can use the solution itself instead of depending on manual human intervention.
The extension to the public cloud has been especially helpful, as I can work with things like hosts and clusters as part of the data center.
Zerto's near-synchronous replication is something I like very much. They were acquired and are now part of HPE. I see it as a robust solution.
A slight disadvantage of Zerto is that it requires Windows Server as the base operating system. Over time, I would like to see more deployment options beyond Windows.
The implementation is very quick and painless, but some fields in the server portal are case-sensitive, and it took me some time to understand that initially. It would be good if those fields were not case-sensitive, or at least were better documented.
If a VPG goes down and an application host is not responding, I want more flexibility to automatically point the recovery to other hosts. The same applies when a recovery host is in maintenance mode: Zerto should be able to automatically redirect the recovery to another host so that applications stay in their most optimal state.
I want more information about how to work with bare-metal drives. I have been doing capacity planning work using MDM and FormFactor cable, looking at system throughput and application latency with a lot of Linux scripts. More documentation for anybody needing to work with bare-metal drives would help.
I have been using Zerto for several years.
I have not seen any service disruption that impacted us. If anything like that were to occur, they would communicate it ahead of time.
It is scalable. We have more than 20,000 endpoints.
I do reach out to Zerto, and if there are any questions, we raise a ticket in-house so that everyone is reviewing it at the same time. I would rate their support a nine out of ten. There are no negatives.
We were not using a similar solution.
By bringing in Zerto, we have discontinued some legacy work. Operational recovery, application migration, and application cloning are the three areas where Zerto has helped us.
We use the cloud version, deployed in a public cloud.
Its initial deployment was straightforward. I have focused on capabilities such as encryption and how the long-term retention repository works, at least in terms of data capture, as well as on using the APIs and cloud scaling. I broke down a lot of my use cases, and because we run Zerto in the public cloud, I was able to figure out how to work with features like compute and storage.
Implementation took about two to three months. It does require maintenance; we focus a lot on metrics such as RTO and RPO monitoring, and it can also be put into maintenance mode.
We had Zerto representatives, and we also had work done in-house.
I work with a team of around ten employees, and other colleagues are also involved in the effort.
We did look at a few other vendors' offerings, but we decided on Zerto. Our organization has a partnership with them, and they made an effective pitch at a few industry events; their demonstration was very convincing. It was also something the client was interested in.
To those looking to implement Zerto in their organization, I would advise creating their own use cases and then seeing how effectively Zerto addresses them. A few areas to work on are gathering information for virtual machine rightsizing and building resiliency models. After that, they can look at compliance; for us, PCI DSS and identifying the public cloud environment in use, which in our case was Microsoft Azure, were important. Once they have created their own use cases, they can go to Zerto and see how well it handles them. If they think through what they need, they can come up with specific questions and get Zerto to deliver effectively.
I would rate Zerto a nine out of ten.