NetApp has a frustrating way of handling the license for the on-premises virtual NetApp appliance that you need in the overall architecture, although this is not directly part of NetApp Cloud Backup itself. It is not something I would change in NetApp Cloud Backup, because the cloud backup service is architected in a way that makes it readily available to users, and that part looks good to me. On the other side, on-premises, you need primary storage in order to have backup in the cloud via NetApp Cloud Backup, and you only have two options: a physical NetApp or a virtual NetApp. The physical case is pretty straightforward: there is a pricing grid, and the sales representatives explain how much a given NetApp configuration will cost based on performance, type, and so on, with support contracts for three or five years. The virtual NetApp, which you can run on top of an existing VMware host or cluster, is where it gets nasty, because they do not let you change the licensed server. Once you deploy a virtual NetApp on a licensed server you have on-premises and later migrate off that server, NetApp does not let you change the licensed server, which should be a really simple thing to do.
Co-founder & Chief Architect at Prescriptive Data Solutions
Reseller
Dec 10, 2021
An area for improvement would be to add some kind of integration and reporting within NetApp or allow the customer to automatically see reports in the dashboard.
One area that can be improved is how we define the different KPIs, in particular the business KPIs. I have my own in-house application for the business KPIs. For example, our retention policy is a period of seven years; I have to read these parameters from other applications, and I need them to integrate well. NetApp Cloud Backup Manager should help to get this integrated seamlessly with other applications, meaning that it populates the data around the different parameters. These parameters could be the retention period, the backup schedule, or anything else. It might be an ITSM ticket, where a workflow is triggered somewhere and the ITSM ticket has been created for a particular environment, such as my development environment, an INT environment, or a UAT environment. This kind of process needs to integrate well with my own application, and there are some challenges. For example, if it allows consuming RESTful APIs, that is how we would usually integrate, but there are certain challenges when it comes to integrating with our own application around KPIs, whether business KPIs or technical KPIs. What I want is to populate that data from my own applications.

We have the headroom in the KPI, and we have the throughput, the volumes, the transactions per second, and so on, which are all defined. These are the global parameters; they affect all the lines of business. It is a central application that is consumed by most of the lines of business, and it is all around the KPIs. Earlier, it was based on Quest Foglight, an application that was taken up and customized. It was made in-house as a core service and used as a core building block. But our use of Quest Foglight has become outdated: there is no more support available, and it has been sitting in the organization as a legacy application for more than ten years now. It comes down to the question: is this an investment, or will we need to divest ourselves of it? So there has to be an option to remediate it. In that case, one possibility is to integrate the existing application and then decommission it completely.

Here it would help if there were better ways of defining or handling the KPIs in the Cloud Manager, so that most of the parameters are not defined directly by me. Those would be the global parameters defined across all the lines of business. There are integration challenges when it comes to this, and I have spoken to the support team, who say they have the REST APIs, but the integration still is not going as smoothly as it could. Most of the time, when things are not working out, we need dedicated engineers to be brought in for the entire integration, and then it becomes more of a challenge on top of everything. So if the Cloud Manager is not being fed all the parameters from the backup strategy, whether around ITSM and incident tickets, backup schedules, or anything else related to the backup policies, it takes a while. Ideally, I would want that data to be read directly from our in-house applications. This has more to do with our own processes; it is not our choice to decide. The risk management team has mandated this as part of compliance: we have to strictly enforce the KPIs, the headroom, and the rest of the global parameters defined for the different lines of business.
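To make the integration point above concrete, here is a minimal sketch of the kind of glue code this describes: reading the globally mandated parameters from an in-house KPI application over REST and pushing them into a backup policy. The endpoint URLs, field names, and the policy call are hypothetical placeholders for illustration, not documented NetApp Cloud Manager APIs.

```python
import requests

# Hypothetical endpoints: the in-house KPI service and the policy endpoint
# below are illustrative placeholders, not documented APIs.
KPI_SERVICE_URL = "https://kpi.example.internal/api/v1/parameters"
CLOUD_MANAGER_URL = "https://cloudmanager.example.internal/api/backup/policies"
API_TOKEN = "..."  # obtained from whatever auth flow the real APIs require


def read_global_parameters() -> dict:
    """Read the globally mandated parameters (retention, schedule, headroom)
    from the in-house KPI application."""
    resp = requests.get(
        KPI_SERVICE_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"retention_years": 7, "schedule": "daily"}


def push_backup_policy(params: dict) -> dict:
    """Push the parameters into a backup policy so the values are enforced
    centrally instead of being re-entered by hand."""
    policy = {
        "name": "corporate-default",
        "retentionYears": params["retention_years"],
        "schedule": params["schedule"],
    }
    resp = requests.post(
        CLOUD_MANAGER_URL,
        json=policy,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(push_backup_policy(read_global_parameters()))
```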
So if my retention period changes from seven years to, let's say, ten or fifteen years, then those rules have to be strictly enforced. Ultimately, we would like better support for ITSM. ITSM tools like ServiceNow and BMC Remedy are constantly adding new features, so the integration has to be kept up to date over time, and NetApp has to provision for that and factor it in. Some AI-based capabilities are there now, and those have to be incorporated somehow. One last thing is that NetApp could provide better flash storage. Since they are already strong in block storage and doing well in that segment, they will have to step up when it comes to flash array storage. I have been evaluating NetApp's flash array solutions against others such as Toshiba's and Fujitsu's storage arrays, which are quite cost-effective.
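As an illustration of the ITSM side of this, below is a minimal sketch of raising a ticket through ServiceNow's Table API when the retention KPI changes, so the policy update is tracked through the normal workflow. The instance name, credentials, and field values are assumptions for illustration; the Table API endpoint pattern itself is ServiceNow's standard REST interface.

```python
import requests

# Assumed ServiceNow instance name; /api/now/table/<table> is the
# standard ServiceNow Table API path.
SNOW_INSTANCE = "example"  # hypothetical instance
SNOW_URL = f"https://{SNOW_INSTANCE}.service-now.com/api/now/table/incident"
SNOW_USER = "integration.user"  # illustrative credentials
SNOW_PASSWORD = "..."


def raise_retention_change_ticket(old_years: int, new_years: int) -> str:
    """Open an ITSM ticket when the mandated retention period changes,
    so the backup policy update is tracked and enforced via the workflow."""
    payload = {
        "short_description": f"Backup retention change: {old_years}y -> {new_years}y",
        "description": (
            "Risk management has updated the retention KPI. "
            "Backup policies must be updated to match."
        ),
        "category": "data_backup",
    }
    resp = requests.post(
        SNOW_URL,
        json=payload,
        auth=(SNOW_USER, SNOW_PASSWORD),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]  # e.g. "INC0012345"


if __name__ == "__main__":
    print(raise_retention_change_ticket(7, 10))
```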
Data backup involves copying and moving data from its primary location to a secondary location from which it can later be retrieved in case the primary data storage location experiences some kind of failure or disaster.
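At its simplest, that copy-and-retrieve cycle looks like the sketch below; the paths are placeholders for illustration only.

```python
import shutil
from pathlib import Path

# Hypothetical primary and secondary locations.
PRIMARY = Path("/data/primary")
SECONDARY = Path("/backups/primary-copy")


def back_up() -> None:
    """Copy the primary data to the secondary location."""
    shutil.copytree(PRIMARY, SECONDARY, dirs_exist_ok=True)


def restore() -> None:
    """Retrieve the data from the secondary location after a failure."""
    shutil.copytree(SECONDARY, PRIMARY, dirs_exist_ok=True)
```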
NetApp Cloud Backup could improve by being easier to use. The Veeam solution is easier to use.