There aren't any challenges. Regarding additional features, Cove's disaster recovery is limited to Azure; it would be great if it were also compatible with AWS.
RMM Manager at a computer software company with 201-500 employees
MSP
Sep 6, 2023
N-able Cove Data Protection for Microsoft 365 has shortcomings that need improvement. It takes too long to reach the solution's support engineers, so the support team needs to arrange a quick call or chat option for the product's users.
They have been improving their interface and adding new things to it. Based on what we were using before, what we're using now is very advanced. It's a regular backup and restore solution. Everything I've needed up till now is there. It's very easy to use and quite straightforward so far. I haven't run into any roadblocks, and I was able to do whatever had to be done. We've done only partial recovery to see if it's okay, and whatever we had to do looked fine. The one thing that would be good, though I'm not sure if they already offer it, concerns disaster recovery: the recovery process can take quite a while if I have to recover 2, 3, or 4 TB of data. It could take two or three days to recover that much. It would be good if they could offer a service where I can say, "This is my server, and I have to recover all the data from this drive," and they put everything on an external hard drive and ship it to you overnight for you to restore quickly. That's the only thing that would be useful. For small amounts of data, recovery is easy, but for large amounts, it takes forever. So, if they could offer a service where they put our data on a hard drive and ship it to us as fast as possible, it would be great. Even if there's a fee associated with it, that's fine; everybody would be willing to pay it just to speed up a large recovery. If you've got to recover 5, 6, 10, 20, or 100 GB of data, you don't need that, but for large amounts of data, it would be important to have that type of service.
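The reviewer's estimate is easy to sanity-check with back-of-the-envelope math. Below is a minimal Python sketch, using illustrative link speeds and an assumed 80 percent sustained link efficiency rather than anything measured from Cove, comparing a wire restore of a multi-terabyte dataset against an overnight-shipped drive.

```python
# Rough restore-time estimate: downloading over the wire vs. an overnight-shipped drive.
# The link speeds and the 24-hour shipping window are illustrative assumptions, not Cove figures.

def download_hours(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to pull `data_tb` terabytes at `link_mbps` megabits/s,
    assuming the link sustains `efficiency` of its nominal rate."""
    bits = data_tb * 1e12 * 8          # decimal terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

for tb in (0.1, 1, 3):
    for mbps in (100, 500):
        print(f"{tb:>4} TB over {mbps:>3} Mbps ~= {download_hours(tb, mbps):6.1f} h")

# 3 TB over a 100 Mbps line works out to roughly 83 hours (~3.5 days), which matches the
# reviewer's "2 or 3 days" and is why a shipped drive (~24 h regardless of size) wins for large restores.
```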
I have some issues with the agent failing on workstations. I've had to completely uninstall several of them, delete everything, and start over to get them to work. That's been the biggest source of my problems recently. The problem is that when we delete one, we lose the backup. That means we're without a backup unless we have a local copy. When we delete the agent, there's the problem of how to go back and restore it. I haven't had to deal with that yet, because I haven't had a failure occur while I was doing it. I had a workstation that started failing, and we couldn't get the services to run. I don't know what caused it. I had to reinstall the agent, which didn't work. I had to go into the machine, delete everything, and load another agent onto it. Once I did that, I was able to make it work. 90% of the time I have no issues at all, but the other 10% of the time, when I do have a problem, it's a mess. The ability to recover to a different workstation or a different data point is a little bit clumsy. It could use some work.
For the MSP side, they could have more of a "security user" that can go in and only see certain clients. If you give somebody access as a technician, they can see all the clients. There are other minor things, like GUI or other user permissions that would be nice to have on some level, but there's nothing I would drastically change to the product because it works so well. It's rare for me to not want a lot of improvements, but when something works this well, I wouldn't want any major changes.
Systems Admin at a computer software company with 51-200 employees
Real User
Jun 6, 2022
One area I don't like has to do with the agent that goes on the system. Deploying it is a piece of cake, but something I have noticed is that if a system stays offline for some length of time, say for a week or so, I may have to go back in and reinstall the agent to get it back in business. I don't know what's causing that. That's the only issue I have had.
We're really pretty impressed and happy with the product. In full disclosure, we're also a Datto reseller, and one area for improvement comes from comparing the two. We use Datto as our backup and disaster recovery for servers. If we wanted to move Cove into the server arena, having a way to spin up restores in the cloud, rather than first downloading them to some local storage and then spinning them up and testing them, would be better. The Datto solution, for example, has everything in the cloud: you can spin up and test servers, restores, and more, all outside of the network. With Cove, while we haven't done a full restore yet, from what I can tell, we cannot test restores without downloading the backup image from the cloud. Therefore, a disaster recovery console would be an improvement for the product.
This solution is not very good for image restores, but really excellent for files, databases, and System State restores. For normal restores you use the browser - this is SUPER easy and works really fast and very well. For image restores you need to create a USB stick and embed the motherboard drivers into the boot image, which is a bit of a pain. It then recognised the first drive on the SATA controller as drive 1 and not any NVMe drives, even if the NVMe drive is the boot drive, so you have to be VERY careful not to overwrite the wrong drive during a restore. I found it safest to physically disconnect any drives you do not wish to accidentally restore to. A graphical interface showing make, model, and volume names (and not only drive numbers) would solve this issue. The solution also does not allow users to enable or disable backups when a laptop is using mobile data. You have to open the browser and click on Cancel to stop the backup from running. You can, however, throttle backups during certain hours, which is useful. Sometimes, remote users will connect via their mobile phone and it will use their data to perform the backup, which is very costly. If there was a way to enable and disable the backup when using mobile data, they would not have this complaint. In fairness, this would be an issue with most backup systems. Lastly, when your On-Premises Storage Node storage is full, you are required to add another On-Premises Storage Node. I would have liked a feature to add another drive to the original Storage Node and just include it in the Node.
Systems Analyst at a tech services company with 1-10 employees
Real User
Apr 13, 2021
I know on the backup side it runs extremely well. The recovery side, the restore side, could be a little more optimized; however, the amount of time we spend in restore mode is maybe a couple of weeks out of five years. On the other hand, backups happen every night. They happen all the time. When we get a new customer, we have to onboard them, and they give us a couple of options for onboarding, all of which are excellent. That said, in most cases, we're not onboarding a terabyte right out of the get-go. Currently, you can't dump the files that were backed up. You have to use the web interface, and you can only see 30 files at a crack. If I'm looking for a particular file, it would be easier for me to dump down the catalogs, suck them into a spreadsheet, and do my slicing and dicing that way. I'd be able to figure out, "Oh, this file changed on this day. Therefore, I want this version." This is critical because the customer won't give me an exact date; they'll tell me Mary Sue left on the 12th and her last day was the day she broke it, or that Mary Sue was working on it before she left and they're not sure when she last made the change. I can't pin it to any particular day, which means I either have to sift through it from the web interface or I have to reload. That means I have to download one or more files manually and then compare them that way. If I could get the catalogs dropped to me in CSV format, that would be very, very helpful. As it is now, it's not only cumbersome, it's also a slow, drawn-out process.
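Cove doesn't offer such a catalog export today, so the following is purely a sketch of the workflow this reviewer is asking for, assuming a hypothetical CSV dump with path, modified, and session_date columns; the file name, column names, filename, and dates are examples only.

```python
# Hypothetical workflow: if the backup catalog could be exported as CSV
# (columns assumed here: path, modified, session_date), finding the last
# versions of a file changed on or before a given date becomes a one-pass filter.
import csv
from datetime import date

def versions_of(catalog_csv: str, filename: str, before: date):
    """Return (modified, session_date, path) tuples for `filename`, newest first,
    keeping only entries modified on or before `before`."""
    hits = []
    with open(catalog_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["path"].lower().endswith(filename.lower()):
                modified = date.fromisoformat(row["modified"][:10])
                if modified <= before:
                    hits.append((modified, row["session_date"], row["path"]))
    return sorted(hits, reverse=True)

# e.g. every version of budget.xlsx changed on or before the day Mary Sue left:
for modified, session, path in versions_of("catalog.csv", "budget.xlsx", date(2021, 3, 12)):
    print(modified, session, path)
```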
Commercially, they offer the product in two different formats. There is the full imaging backup, and there is also an alternative: you can pay for simple data backups by the gigabyte consumed. Unfortunately, you cannot have those two products in the same dashboard. So, I have to switch between dashboards to look at:
* All the servers being imaged.
* All the private laptops that have their "My Documents" folders backed up.
That is a bit of a hassle, but it is not a deal breaker. It would be very nice if it was all on the same dashboard. I check our clients on the imaging product (the expensive one) every morning. I check the people who are paying us for data-only backup once a week. Therefore, once a week, I have to log out of portal A and log onto portal B to check that it's all good, then I log back onto portal A. It would be nice if I didn't have to do that, but it's certainly not something that keeps me awake at night. We don't use the solution's automated recovery testing because SolarWinds made me cross. When they released it, I went, "Oh, well, that's quite good," because if you use it, the system supposedly spins up and, on the portal, gives you a screenshot of the booted device. So, I phoned up and said, "Oh, that's really quite cool. How much is that?" They said, "No, no, no. It's all included in your license." I went, "Okay then," and deployed it on about half the fleet. One of the options our customers have is to pay us a small amount every month to test the recovery, just to prove that it's viable, and I thought, "Well, this will do that for us. Nice." Then, in the next invoice, we got a charge for it. While it was not a huge amount, I took offense at the fact that we were told it would be a no-extra-cost option included in our license, but it turns out it's chargeable. Therefore, we haven't used it since.
An area for improvement that would really work out well would be a slightly more elegant handshake between SolarWinds RMM and the PCs being backed up, to advise on "up" status. We all expect servers to be on all the time; we never have a problem with servers. But when I look at my desktop status using the color bars filter, I can see a dozen systems that haven't backed up in a while. Because of COVID, some of these systems may simply be off. It would be awesome if there was some sort of "heartbeat" functionality to indicate whether a system is on. If the system hasn't reported in, that could be tied in with the heartbeat. And if it's tied in with the RMM, and the RMM reports that the machine is online while the backup is showing as failing, it should tell us it's online. Then we would see that it's failing and that it may need attention. That would be more "glue" for sticking with SolarWinds or moving to SolarWinds, to have exactly that functionality. Currently, what we have to do is swipe the name, copy it, put it into the RMM, do a quick search, and then I know whether it's offline. I have to do that with each one of them. That's the most time-consuming part of the solution. If they could improve that and provide a heartbeat, it would be an amazing, 100 percent solution. Since RMM is an agent that feeds back that a machine is alive and on, I don't see any reason why they can't either tap into that one feature or build the same polling into the backup agent, to update right away and say whether the system is online or offline.
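No such heartbeat integration exists today; as a stopgap, the manual copy-and-search step the reviewer describes could be scripted if both consoles can export device lists. The sketch below assumes two hypothetical CSV exports (backup_status.csv with device and last_backup_status columns, rmm_devices.csv with device and online columns); the file names, column names, and status values are illustrative, not actual Cove or RMM exports.

```python
# Rough stopgap for the missing "heartbeat": join a backup-status export against
# an RMM device export so only machines that are online *and* failing get flagged.
# Both CSV files and their column names are assumptions for illustration.
import csv

def load_csv(path: str, key: str) -> dict:
    """Index a CSV export by a (lower-cased) key column."""
    with open(path, newline="") as f:
        return {row[key].strip().lower(): row for row in csv.DictReader(f)}

backups = load_csv("backup_status.csv", "device")   # columns assumed: device, last_backup_status
rmm = load_csv("rmm_devices.csv", "device")         # columns assumed: device, online

for name, status in backups.items():
    if status["last_backup_status"].strip().lower() != "failed":
        continue
    agent = rmm.get(name)
    if agent is None:
        print(f"{name}: backup failing, device not found in RMM")
    elif agent["online"].strip().lower() in ("true", "yes", "online"):
        print(f"{name}: ONLINE but backup failing -> needs attention")
    else:
        print(f"{name}: backup failing, but the device appears to be off")
```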
Vice President of Managed Services at Entré Computer Solutions
Reseller
Oct 4, 2020
The reporting feature and functionality need improvement. We would like to see a little bit more detailed reporting that offers more CEO or C-level focused reporting options.
Sr. Network/System Administrator Support at S & L Computer Services, Inc.
MSP
Oct 1, 2020
We've never even had to consider anything else for any situation for our customers. It restores well. It's hard to say anything about improvement because we're just so happy with it. Their support people are second to none. The one thing that could use some improvement is their Linux backup. Their Linux backup is a files/folders backup, and you are not able to do a system restore. I have another product that I use for our Linux servers, but it would be nice if they had that flexibility on the Linux side.
A better default view on my dashboard would be great. There is a lot of useless information that it pulls up. They could present the dashboard slightly better in terms of the extra information after the first five columns. The first five columns are awesome; after that, I don't care about the rest, and there are another seven items after that. You can customize it, and I do have my own customized dashboard, but it doesn't give me any option to make that the default view. They could also work a little on how they present the landing page. The first time I log in from any login window, I want a page that's a little more useful. This one gives me great info as to whether my backup is good, up and running, or has had a certain number of errors. But after that, it tells me things like my product, which I do all-in for all our customers, so I don't care. It tells me my profile, and I usually do a manual setup for most customers that's documented on my documentation system, which is also with SolarWinds, so I don't care about my profile version. All that extra information I really don't care about is on this default view, and they don't let me save my custom view as my landing page. I have to go and find it again; it's deep down inside a menu at the very bottom, and I can't make it go anywhere else. Another point to be aware of is that the initial cloud backup, if you've got more than a terabyte of data, can take quite some time, because it's completely dependent on the customer's internet speed. That is one thing that we have run into. When I asked SolarWinds about that, they noted they already have a solution for it.
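That seeding caveat is easy to quantify. A minimal sketch, assuming the initial upload is bound only by the customer's upload speed; the speeds and the 80 percent link-efficiency figure below are illustrative assumptions, not Cove measurements.

```python
# Back-of-the-envelope seed time for an initial cloud backup, driven entirely by
# the customer's upload bandwidth (illustrative speeds, not measured values).

def seed_days(data_tb: float, upload_mbps: float, efficiency: float = 0.8) -> float:
    """Days to upload `data_tb` terabytes at `upload_mbps` megabits/s."""
    bits = data_tb * 1e12 * 8
    return bits / (upload_mbps * 1e6 * efficiency) / 86400

for up in (20, 50, 100):
    print(f"1 TB at {up:>3} Mbps up ~= {seed_days(1.0, up):4.1f} days")
# 1 TB on a 20 Mbps uplink is close to 6 days, which is why initial seeding time matters.
```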
Director/Principal Consultant at a tech services company with 51-200 employees
Consultant
Jun 4, 2020
Integration with a hybrid cloud is something that I found complicated. Ultimately, we backed away from this approach because of the difficulty we were having. It may have been related to firewall settings or other things, but we did not figure it out because it was too complicated for the amount of time we had budgeted to work on it. There have been a couple of times when we noticed it consuming too much CPU time, although we have been able to mitigate that.
Cove Data Protection, from N-able, is a comprehensive solution designed to safeguard critical business data. It offers a range of features including backup and recovery, disaster recovery, and endpoint protection. With automated backups and flexible scheduling options, it ensures data is protected and easily recoverable.
The solution also includes advanced security measures such as encryption and ransomware detection to prevent unauthorized access and data breaches. Cove Data Protection...
Having licensing available for partners to test the product without paying would make a big difference.
A feature I'd like to see would be a more customizable admin console.
We would like to have better reporting.