We use it to back up all of our data. The only servers that I do not have on here are my VoIP servers. I have a SQL database. I have an ERP server. I've also got a couple of file servers and a couple of domain controllers. I also back up my Hyper-V host machines. So, pretty much everything that we have is backed up here.
We are on the latest version, which is 660268.
It has reduced our admin time by approximately 95%. With the previous backup utility that we used, we were basically running a command manually every night to copy anything that had a newer date than the previous day. On top of having to manually launch that every night, we also had to make sure that we covered every drive that was going to be involved in that particular night's changes. Now, I can just add servers and plug them in. Everything is already pre-configured, so I just add the machine.
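For context, our old nightly job boiled down to something like the following, a minimal sketch in Python (the paths and the one-day cutoff here are illustrative assumptions, not our actual setup):

```python
import shutil
import time
from pathlib import Path

# Illustrative paths; the real job had to cover several drives per night.
SOURCE = Path(r"D:\data")
DEST = Path(r"E:\nightly")
CUTOFF = time.time() - 24 * 60 * 60  # anything modified since yesterday

for src in SOURCE.rglob("*"):
    if src.is_file() and src.stat().st_mtime > CUTOFF:
        target = DEST / src.relative_to(SOURCE)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target)  # copy the file, preserving timestamps
```

The fragility is obvious: miss a drive, or miss a night, and that day's changes simply aren't captured.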
Previously, we were using a tape backup. As our quantity of data expanded, the backup process that we ran at night became quite cumbersome, and very often, it would extend into the better part of the next day. If someone realized at 9:00 that a file had gotten corrupted or deleted, they would notify me, and I would tell them that I would restore it, but that I had to wait until the previous night's job completed. After the job was complete, I had to go back and find the tape that had the copy of what they were looking for. Very often, that person would be waiting until the end of the day before they could resume whatever tasks they were trying to do. Now, it doesn't matter what's going on on the Rapid Recovery side. My backups occur every hour, and each one takes a snapshot. In a worst-case scenario, users may have to go back and repeat whatever they lost in that hour, but I can restore files for them almost immediately, and they can get right back to it. It has made a huge difference in that regard because previously, they sometimes had to wait the entire day before I could get in and actually restore from the tape that had the data they were missing.
It enables us to recover complete systems, applications, and data with little to no disruption to the work environment. I personally have not had to do that, but my counterpart has had to recover an entire server. He lost his Exchange server at one point, and although it did take him a couple of hours to get everything back up and running efficiently, he was able to roll it back to the last good backup. Within an hour or two, he had everything back up and running for users without a whole lot of data loss.
I use incremental backups daily. When I first bring on a brand-new machine, I take a base image, and then every subsequent backup going forward is incremental. Repeated full base images are obviously redundant, and they fill up your repository quite quickly. Prior to adding additional storage, we didn't have a whole lot of wiggle room; we just didn't have the space to take another entire snapshot of the data. Incremental backups do the dedupe and keep everything at the most efficient level so that you have everything you need without a lot of fluff. You don't have extra copies of the same thing that you could grab from four different backup files. It is there, deduped. You just have your one pristine copy, which keeps your storage really streamlined so that you're only using what you need rather than a lot of excess.
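To make the base-plus-incremental idea concrete, here is a minimal sketch in Python. This is my own illustration of content-hash deduplication, not how Rapid Recovery is implemented internally: each unique block is stored once, and any snapshot can be rebuilt from the shared store.

```python
import hashlib

store = {}  # content-addressed block store: hash -> block bytes

def snapshot(volume_blocks):
    """Record a point-in-time snapshot; unchanged blocks add nothing new."""
    index = []
    for block in volume_blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # dedupe: store each unique block once
        index.append(digest)
    return index

def restore(index):
    """Rebuild the volume from the block store."""
    return b"".join(store[digest] for digest in index)

base = snapshot([b"boot", b"data-v1", b"logs"])    # base image
hourly = snapshot([b"boot", b"data-v2", b"logs"])  # only data-v2 is new
assert restore(hourly) == b"bootdata-v2logs"
assert len(store) == 4  # "boot" and "logs" were never stored twice
```

That is the "one pristine copy" effect: several snapshots referencing the same block still cost you a single copy of it.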
Incremental backups have reduced storage costs. We have roughly 7 terabytes of storage, and while storage is not extremely expensive these days, it still adds up. If I needed to keep multiple full copies of 7 terabytes at a time, it would cost serious coin to have enough devices to hold that much, in addition to buying licensing for that much. If all of your data fits in 7 terabytes, why would you want to pay licensing for 10 terabytes? That just doesn't make good business sense.
Incremental backups help to reduce the impact on our production environment and network resources. We replicate over the WAN, and while we do have adequate bandwidth, if we had to replicate base images consistently, it would obviously pull some of our resources for replication, whereas the incremental backups replicate in a matter of minutes. We might have a couple of minutes here and a couple of minutes there when a replication job is happening, but they're pretty seamless; I don't know if anyone even notices. If we were replicating base images, it would be a constant drain on our bandwidth until that huge file finished transferring. Incremental backups just keep everything running smoothly, quickly, and efficiently.
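As a rough back-of-the-envelope comparison (the bandwidth and hourly change rate below are assumed figures for illustration, not our actual numbers):

```python
# Illustrative arithmetic; 100 Mbps WAN and 2 GB/hour of change are assumptions.
wan_mbps = 100
base_image_bytes = 7 * 10**12      # a full copy of roughly 7 TB of data
hourly_change_bytes = 2 * 10**9    # assumed data changed per hour

def transfer_seconds(size_bytes, mbps):
    return size_bytes * 8 / (mbps * 10**6)

print(f"Base image:  {transfer_seconds(base_image_bytes, wan_mbps) / 3600:.0f} hours")
print(f"Incremental: {transfer_seconds(hourly_change_bytes, wan_mbps) / 60:.0f} minutes")
# Base image:  156 hours
# Incremental: 3 minutes
```

Under those assumptions, a full base image would saturate the link for days, while an hourly incremental comes and goes in minutes, which matches our experience.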
It gives us peace of mind when it comes to knowing our backups are completing. I get an email if, for one reason or another, a job does not run. I typically reboot all of my servers over the weekend, and on one of my servers, when the agent did not start for some reason, I received an email saying that the job was missed because the agent was offline. So, I logged in and manually started it. There was no issue, and everything was up and running again. Similarly, my SQL backups will email me and tell me that the logs have been truncated. No news is good news, but I know that if there is a blip of any sort, I'm going to get a notification email alerting me to take a look so that I can nip it in the bud right away. It is reassuring to have this communication.
Quite often, in our experience, files get deleted, but no one notices for a very long period of time. So, if something is deleted and no one catches it for six months, it's extremely important to know that you will still be able to recover that data.
We're kind of restructuring things, so we're not doing it currently, but we have another business that is owned by the same owners, about four miles from here, and they also use Rapid Recovery. We replicate: we have a copy on a DL4000, we also have a copy on a SAN, and then we replicate across the WAN to a core in their location, so we've got multiple copies. Primarily, we started thinking about worst-case-scenario disasters. Even though we have the DL4000 and the raw data on a SAN, if a tornado went through and wiped out our building, both copies would be gone. So, we started doing replication across the WAN. Once you get the original configuration set up, it is pretty much set it and forget it. I do look at it every day and check for errors and the like, but for the most part, it is pretty self-supporting. It doesn't require a whole lot of administration, which is really beneficial because you have enough other things going on throughout the day. You don't have to babysit it all the time.
It includes deduplication, replication, and virtual standby without having to pay extra. From a storage standpoint, the dedupe was important because we did not want a situation where the data size was growing exponentially. In terms of replication, we wanted to ensure that we had multiple copies in the event of a disaster. We wanted a game plan for the worst-case scenario, something we could trust to fit the bill if we were facing that kind of situation, and we feel confident that it will do what it says it's going to do. Fortunately, we haven't had to rely on virtual standby. However, one of my counterparts in another sector of our business has used it, and it has worked out very well for him. Just seeing his experience, and knowing that it would work if we faced a similar dilemma, is invaluable.