There's not much scope for improvement. I think the solution is mostly restricted by the underlying cloud: the performance of the single instances depends on the performance of the underlying cloud resources.
The cost needs improvement; it should go down. If you have a company with many servers, then the cost comes down. However, if you're in a situation where you only need it for one function, then the cost can be overwhelming.
Enterprise Architect - Office of the CTO at a manufacturing company with 10,001+ employees
Real User
Apr 11, 2022
Something we would like to see is the ability to better manage the setup and tie it to our configuration management database. We manage our whole IT infrastructure out of that database.
Assistant VP at an insurance company with 1,001-5,000 employees
Real User
Feb 14, 2022
The only issue I can think of is metrics, but I think they have improved that in the newer versions already. There should be an easy place to see all your metrics.
Principal Enterprise Architect at Wolters Kluwer Legal & Regulatory Nederland
Real User
Dec 23, 2021
The only area for improvement would be some guidance in terms of the future products that NetApp is planning on releasing. I would like to see communication around that or advice such as, "Hey, the world is moving towards this particular trend, and NetApp can help you do that." I do get promotional emails from NetApp, but customer-specific advice would be helpful, based on our use cases.
They definitely need to stay more on top of security vulnerabilities. Our security team is constantly finding Java vulnerabilities and SQL vulnerabilities. Our security team always wants the latest security update, and it takes a while for NetApp to stay up to speed with that. That would be my biggest complaint.
Senior Analyst at a comms service provider with 5,001-10,000 employees
Real User
Nov 2, 2020
I think this is more of a limitation of how it operates in Azure, but the solution is affected by that limitation. It has to do with how the different availability zones and regions operate in Azure: it's very difficult to set up complete fault tolerance using multiple CVO nodes, with one node in one region and one node in another. This is not something that I have dug into myself; I am hearing about it from other IT nerds.
It definitely needs improvement with respect to clustering and to deeper integrations with Azure. Right now, we have very limited functionality with Azure, except for storage; if CVO could be integrated more closely with Azure, that would help. Also, when there is any sort of maintenance happening in the cloud, it disrupts the service in Cloud Volumes ONTAP. If those issues could be rectified, that would be really good news, because it would reduce the administrative overhead my team and I are facing.
Infrastructure Architect at a transportation company with 10,001+ employees
Real User
Oct 22, 2020
I'm very happy with the solution; the only thing that needs improvement is the web services API. It could be a little more straightforward. That's my only issue with it. It can get pretty complex.
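For context, here is a minimal sketch of what scripting against such a web-services API typically involves. The host, path, and response fields are hypothetical stand-ins rather than NetApp's documented resources; the point is the boilerplate (auth headers, error handling, parsing) that the review finds more complex than it needs to be:

```python
import requests

# Hypothetical endpoint and field names -- consult the vendor's API
# reference for the real resources. This only illustrates the shape of
# the calls a script has to make.
BASE = "https://cloudmanager.example.com/api"

def list_volumes(token: str) -> list[dict]:
    """Fetch the volume inventory with a bearer token."""
    resp = requests.get(
        f"{BASE}/volumes",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # surface HTTP errors instead of parsing junk
    return resp.json()

for vol in list_volumes("YOUR_TOKEN"):
    print(vol.get("name"), vol.get("size"))
```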
Infrastructure Consultant - Storage, Global Infrastructure Services at an insurance company with 10,001+ employees
Consultant
Oct 11, 2020
If they could include clustering together multiple physical Cloud Volumes ONTAP devices as an option, that could be helpful. The ease of data migration between devices could be improved somewhat. There is already some flexibility which is better than just migrating the data. However, that could potentially be further improved.
Senior System Analyst at Baxter International Inc.
Real User
Jul 9, 2020
They don't provide training documentation where we can learn about the back-end architecture and how it works. I have needed this type of documentation for Cloud Manager, its AWS integration, and managing the on-premise back-end. We would also like to learn about future enhancements from documentation.
There are a few UI bugs in the system that they need to fix, specifically in the integration of NetApp Cloud Manager with CVO, which is something they are already working on. They will probably provide a SaaS offering for Cloud Manager. We want to be able to add more than six disks to an aggregate, but there is a limit on the number of disks per aggregate. In GCP, the limit is six disks per aggregate; in Azure, the same solution supports 12 disks per aggregate, twice as many. They should raise the disks-per-aggregate limit so we don't have to migrate from one aggregate to another when capacity is full.
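To illustrate why the per-aggregate limit forces migrations, here is a small arithmetic sketch. The disk size is an assumption, and the limits simply mirror the numbers in this review (6 disks on GCP, 12 on Azure), not official CVO documentation:

```python
import math

# Per-aggregate disk limits as described in the review above; the 8 TB
# disk size below is an illustrative assumption.
MAX_DISKS_PER_AGGREGATE = {"gcp": 6, "azure": 12}

def aggregate_capacity_tb(cloud: str, disk_size_tb: float) -> float:
    """Raw capacity of one full aggregate on the given cloud."""
    return MAX_DISKS_PER_AGGREGATE[cloud] * disk_size_tb

def aggregates_needed(total_tb: float, cloud: str, disk_size_tb: float) -> int:
    """How many aggregates a dataset spans -- each extra one means a migration."""
    return math.ceil(total_tb / aggregate_capacity_tb(cloud, disk_size_tb))

# A 100 TB dataset on assumed 8 TB disks: 3 aggregates on GCP, 2 on Azure.
for cloud in ("gcp", "azure"):
    print(cloud, aggregates_needed(100, cloud, 8.0))
```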
Currently, Cloud Volumes ONTAP is not a true high-availability solution. In Google Cloud Platform it only supports single-node deployment, while other cloud providers like Microsoft Azure and Amazon do offer dual-controller deployment models. However, the RAID protection isn't well designed, since the data is laid out at the RAID 0 level. So even with a dual-controller deployment in place, you do not have full high availability and fault tolerance during a component failure. NetApp has said HA is coming, but even then, the way CVO protects data is quite different from a traditional NetApp storage system. In my opinion, the RAID protection level on CVO needs to be improved to provide better redundancy.

In addition, when it comes to critical, read/write-intensive workloads, it doesn't deliver the performance that some applications and databases require, for example the SAP HANA database, which needs a write latency of less than 2 milliseconds; the CVO solution doesn't quite fit there. It can work quite well with other databases whose write-latency requirements are not so stringent. I don't know whether NetApp has done PoCs or evaluations with SAP HANA so that CVO is certified for it.

Lastly, it is an unmanaged solution, which means that for someone with no storage background or technical experience, it is quite challenging to manage Cloud Volumes ONTAP. Such customers may need a NetApp managed-service model so the NetApp support team can help them maintain, manage, and troubleshoot their environment. When you deploy the solution into a customer environment, you shouldn't expect them to have storage experience; they might be software or application developers, and this product requires them to upgrade their knowledge on the storage track. In my opinion, NetApp should consider selling the solution with an add-on managed-service model, for example a CVO managed-support model, to support and manage the customer's CVO infrastructure.
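To put the 2-millisecond figure in context, here is a minimal write-latency probe one could run against a mounted volume. It is a rough sketch, not a certified benchmark; the target path and the 4 KiB synchronous-write workload are assumptions:

```python
import os
import statistics
import time

# Rough probe: time synchronous 4 KiB writes (O_SYNC forces each write to
# stable storage). TARGET is an assumption -- point it at the volume under test.
TARGET = "/mnt/cvo_volume/latency_probe.bin"
BLOCK = b"\0" * 4096
SAMPLES = 1000

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
latencies_ms = []
try:
    for i in range(SAMPLES):
        start = time.perf_counter()
        os.pwrite(fd, BLOCK, (i % 256) * 4096)  # rotate over a ~1 MiB region
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)
    os.unlink(TARGET)

latencies_ms.sort()
p99 = latencies_ms[int(0.99 * SAMPLES) - 1]
print(f"median={statistics.median(latencies_ms):.2f} ms  p99={p99:.2f} ms")
# A HANA-style requirement would want the tail comfortably under 2 ms.
```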
Storage Engineer at a media company with 5,001-10,000 employees
Real User
Jul 2, 2020
There is room for improvement with the capacity. There's a very hard limit to how many disks you can have and how much space you can have. That is something they should work to fix, because it's limiting. Right now, the limit is about 360 terabytes or 36 disks.
Sr. Systems Architect at a media company with 10,001+ employees
Real User
Jul 2, 2020
One area for improvement is monitoring. Since we turn the instance on and off based on a schedule, it becomes a little difficult to monitor the instance, the replications, etc. If NetApp could implement a feature to monitor it more effectively, that would be helpful. Also, I would like to see more aggressive management of the aggregate space. On the Cloud Volumes ONTAP that we use for offsite backup copies, most of the data sits in S3, but there are also the EBS volumes on the Cloud Volumes ONTAP itself. Sometimes what happens is that the aggregate size just stays the same. If it allocates 8 terabytes initially, it stays at 8 terabytes for a long time, even though we're only using 20 percent of it. NetApp could downsize that more aggressively.
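A stopgap sketch for that scheduling blind spot, assuming the CVO nodes run as EC2 instances carrying a hypothetical `cvo:schedule` tag and that boto3 credentials are already configured: poll the instance state and alert when it disagrees with the expected schedule.

```python
from datetime import datetime, timezone

import boto3

# The "cvo:schedule" tag key is hypothetical -- adapt to however your
# environment records when a node is supposed to be up.
ec2 = boto3.client("ec2")

def expected_running(now: datetime) -> bool:
    # Example schedule assumption: backup window 01:00-05:00 UTC.
    return 1 <= now.hour < 5

resp = ec2.describe_instances(
    Filters=[{"Name": "tag-key", "Values": ["cvo:schedule"]}]
)
now = datetime.now(timezone.utc)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        state = inst["State"]["Name"]  # 'running', 'stopped', ...
        if expected_running(now) and state != "running":
            print(f"ALERT: {inst['InstanceId']} is {state} during the backup window")
```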
I would like to see better integration with Active IQ. I know they're making strides there, and some of the tools are being mimicked in Active IQ now so that I can look at the same information. If the footprint looks right and the GUI looks the same to us, it'll be more effective for us down the road. Encryption is very important for us going forward, because we sometimes store data out of the country, and sometimes overseas. We are looking forward to more in terms of encryption, including inline encryption for SnapMirror and things of that nature.
Systems Engineer at a healthcare company with 1,001-5,000 employees
Real User
Nov 5, 2019
I suspect ONTAP will just end up being a portion that runs on StorageGRID. Ultimately, everything will be object-based, then you'll just have a little dock of ONTAP that will do your NFS and CIFS.
Senior Systems Engineer at Cedars-Sinai Medical Center
Real User
Nov 5, 2019
I think the challenge now is more in terms of keeping an air gap. The notion is that because it is in the cloud, it is easy to break, and so on. The challenge now is mostly about the air gap and how we can protect it in the cloud.
The inclusion of onboard key management in CBL would simplify the way we have to do our security. Multipathing for iSCSI LUNs is difficult to deal with from the client-side and I'd love to see a single entry point that can be moved around within the cluster to simplify the client configuration.
Storage Admin at a comms service provider with 5,001-10,000 employees
Real User
Nov 5, 2019
In terms of improvement, I would like to see Azure NetApp Files gain the capability of doing SnapMirror. Azure NetApp Files is an AFF system that doesn't run on Microsoft resources; it's basically NetApp hardware, so it's the best performance you can achieve. The only reason we can't use it right now is the regions it's available in. The second reason is the SnapMirror capability, which we heavily rely on right now and which it doesn't have.
Storage Specialist at a comms service provider with 1,001-5,000 employees
Real User
Nov 5, 2019
Maybe I need more speed, but so far, I don't have any feedback for improvements. I would like to see something from NetApp about backups. I know that NetApp offers some backup for Office 365, but I would like to see something from NetApp for more backup solutions.
Consultant at I.T. Blueprint Solutions Consulting Inc.
Consultant
Nov 5, 2019
Some of the areas that need improvement are:
* Cloud Sync
* Cloud Volumes ONTAP
* Deployment through Cloud Manager
These areas need to be streamlined; there are basic configuration error states that delay provisioning. I would like to see the ability to present CIFS files that have been SnapMirrored to Cloud Volumes ONTAP and to serve them similarly to OneDrive or a web interface. We are talking about DR cases, for customers who are trying to streamline their environments, so that in a DR event users can easily access that data. Today, without fully running it as file services and presenting it through some third-party solution, there is no easy way for an end user to access the appropriate data. This means that we have to build a whole infrastructure for end users to be able to open their work files.

The integration wizard requires a bit of streamlining. There are small things that, when misconfigured or when a deployment is repeated, create errors, specifically in Azure. As an example, you cannot reuse the administrator name, because that object is created in Azure and it will not let you create it again. So, when the first deployment fails and we deploy a second time, we have to use a new administrator name. Additionally, it requires connectivity to NetApp to register the products, and the customer is notified that network access is not allowed, which creates a problem. This issue occurs at deployment time, but it isn't clear why the environment is not deploying successfully. For this reason, more documentation is needed explaining and clarifying the steps and how they need to be done.
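One defensive pattern for the name-collision failure, as a sketch: generate a fresh administrator name for every deployment attempt, so a failed run never collides with its leftover Azure object. The deploy function below is a hypothetical placeholder for whatever automation actually drives the deployment:

```python
import uuid

def unique_admin_name(prefix: str = "cvoadmin") -> str:
    """Generate a fresh administrator name per deployment attempt.

    Azure keeps the object from a failed deployment, so reusing the same
    name makes the retry fail too; a short random suffix avoids that.
    """
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

def deploy_cvo(admin_name: str) -> None:
    # Hypothetical placeholder: call your real deployment automation here
    # (Terraform, an ARM template, the Cloud Manager API, ...).
    print(f"deploying CVO with administrator name {admin_name!r}")

if __name__ == "__main__":
    deploy_cvo(unique_admin_name())  # e.g. 'cvoadmin-3f9c1a2b'
```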
Solutions Architect at a tech services company with 201-500 employees
Real User
Nov 5, 2019
I would like some wizards or best practices for how to secure CVO, built into Cloud Manager. I thought that was a good place to be able to put stuff like that. I would also like some more performance metrics, to know what it is doing. It has some metrics inherent to Cloud Volumes ONTAP, but it would be nice to see them inside Cloud Manager as well: you could have a little snapshot, then drill down if you want to go a little deeper. This is where I would like to see changes, primarily around security and performance metrics.
I would like this solution to be brought to all three major players. Right now it's supported only on AWS and Azure. They should bring it to Google as well, because we would like to have flexibility in choosing the underlying cloud storage provider.
Right now, we're using StorageGRID and, obviously, it is a challenge. Anything that you're writing to the cloud, or getting back from the cloud, is a challenge. When we implemented StorageGRID, we implemented the nodes on our own bare metal, and the issue is that implementing features like erasure coding there is a huge challenge. It's still a challenge because we have a five-node bare-metal Docker implementation, so if you lose a node for some reason, it stops reading from and writing to it. This is because of limitations within the infrastructure and within ONTAP, in how it handles erasure coding. I feel the improvement should be there: basically, it should be seamless. You don't want an underlying hardware issue to suddenly mean there are no reads or writes. Luckily, it's at a replication site, so our main production site is still working and writing to it, but the replication site is stopped right now while we try to bring that node back. Since we implemented on bare metal, not on an appliance, we had to go back to the original vendor; they didn't send the part in time, and we had a hardware memory issue and then a hard disk issue, which brought the node down physically.

It also needs better reporting. Right now, we had to put everything side by side just to figure out what the issue could be. We get a random error saying, "This is an error," and we have to literally dig into it: ask people, look through our logs and the Docker log files, and then verify, "Okay, this is the issue." We just want it to be better at alerting and error-handling reports. Once you get an error, you don't want to spend the first two hours figuring out what that error means; it should be actionable right away, so you can immediately start working on it and get it done. That's where we see the drawbacks. Overall, the product is good and serves a purpose, but as an administrator and architect, nothing is perfect.
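As a stopgap for the alerting gap described above, a sketch along these lines could surface error lines from the node containers instead of someone reading raw Docker logs by hand. The container names and the error pattern are assumptions to adapt:

```python
import re
import subprocess

# Container names and the error pattern are assumptions -- adjust for
# your own StorageGRID deployment.
CONTAINERS = ["sg-storage-node-1", "sg-storage-node-2"]
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|read-only|unreachable)\b", re.IGNORECASE)

def scan_container(name: str, tail: int = 500) -> list[str]:
    """Return recent log lines from a container that match the error pattern."""
    out = subprocess.run(
        ["docker", "logs", "--tail", str(tail), name],
        capture_output=True, text=True, check=False,
    )
    lines = (out.stdout + out.stderr).splitlines()
    return [line for line in lines if ERROR_PATTERN.search(line)]

for container in CONTAINERS:
    for hit in scan_container(container):
        print(f"[{container}] {hit}")
```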
Lead Storage Engineer at an insurance company with 5,001-10,000 employees
Real User
Jul 23, 2019
Some of the licensing is a little kludgey. We just created an HA environment in Azure, and their licensing of SVMs per node is awkward. They're working on it right now, and we're working with them on straightening it out.

We're moving a grid environment to Azure, and the way it was set up, we have eight SVMs, which are virtual environments. Each of those has its own CIFS servers and all their CIFS and NFS mounts. The reason they're independent of one another is that different groups of the business got pulled together, so they had specific CIFS share names, and you can't have the same share name on the same server more than once on the network; you can't have two CIFS shares called "Data" in the same SVM. We have eight SVMs because of the way the data was labeled in the paths, and God forbid you change a path, because that breaks everything in every application all down the line.

The product gives you the ability to port existing applications from on-prem into the cloud, and/or from on-prem into fibre infrastructure. But that ability wasn't there in Cloud Volumes ONTAP, because they assumed it was going to be a new market and licensed it for a single SVM per instance built out in the cloud. They were figuring on a new market and new people coming to this, not people porting these massive old-volume infrastructures; in our DR infrastructure we have 60 SVMs, and that's not how they build out the new environments. We're working with them to improve that, and they're making strides. The licensing is the only thing I can see they can improve on, and they're working on it, so I wouldn't even knock them for that.
Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs in one cluster, for example, or to scale single instances further from a performance perspective. It would be good to get more performance out of a single HA pair; my guess is that those will be the next challenges they have to face. One difficulty is that it has no SAP HANA certification. The performance restrictions create challenges with the infrastructure underneath: the disks and so on often have higher latencies than SAP HANA allows. That was something of a challenge for us: deciding where to use HA disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP.
Senior Manager, IT CloudX at Mellanox Technologies
Real User
Jul 14, 2019
The DR has room for improvement. For example, we now have NetApp in Western Europe and we would like to back up the information to another region, but it's impossible. We need to bring up an additional NetApp in that other region and create a Cloud Manager automation to copy the data. So we do that once a night to another region and then shut down the destination. It's good because it uses Cloud Manager and its automation, but I would prefer a more integrated solution, like NetApp offered about a year ago. I would like to see something like AltaVault, but in the cloud.
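The nightly routine described here could be scripted roughly as follows. This is only the shape of the workflow: `CloudManagerClient` and every method on it are hypothetical placeholders, not a real NetApp API.

```python
# Shape of the nightly DR copy: start the destination system in the other
# region, trigger the replication, then shut the destination back down.
class CloudManagerClient:
    def start_working_environment(self, we_id: str) -> None: ...
    def wait_until_ready(self, we_id: str) -> None: ...
    def trigger_replication(self, source_id: str, dest_id: str) -> None: ...
    def wait_for_transfer(self, source_id: str, dest_id: str) -> None: ...
    def stop_working_environment(self, we_id: str) -> None: ...

def nightly_dr_copy(cm: CloudManagerClient, source_id: str, dest_id: str) -> None:
    cm.start_working_environment(dest_id)       # bring up the DR-region system
    cm.wait_until_ready(dest_id)
    cm.trigger_replication(source_id, dest_id)  # copy the day's changes
    cm.wait_for_transfer(source_id, dest_id)
    cm.stop_working_environment(dest_id)        # shut it down until tomorrow
```

The shutdown step is what keeps the cost acceptable; the trade-off, as the review notes, is that this has to be hand-built rather than coming as an integrated feature.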
Sr. Manager at a tech vendor with 10,001+ employees
Real User
Jul 2, 2019
They're making the right improvements right now. The only issue we had lately was that outside our VPC we could not reach the virtual IP, the floating IP. I heard that they have fixed that as well. That's a good advantage.
Project Development Coordinator at ALIMENTOS ITALIA
Real User
Feb 20, 2019
It does not have tutorials in languages such as French, German, or Spanish. Adding these options would improve NetApp Cloud Volumes ONTAP's support service.
The product maintains an approach rooted in non-alternative processes, which makes it difficult to trace vulnerabilities across processes. I would like to see them improve the home and search views in the dashboards; this would allow better visualization of the content captured in the tool.
Storage Supervisor at a energy/utilities company with 10,001+ employees
Real User
Dec 19, 2018
The key feature that we'd like to see is the ability to sync between regions, within the AWS and Azure regions. We could use the Cloud Sync service, but we'd really like that native functionality within the cloud volume service.
Senior System Engineer at a tech vendor with 1,001-5,000 employees
Real User
Dec 11, 2018
AWS has come into the picture, so we want to move into AWS; we don't want to do anything more on-premises. NetApp has to come up with a cloud version. I would also like to have more management tools. The current ones are difficult to work with, so I would like them to be a bit more user-friendly.
NetApp could certainly improve on the support side. They do not need to improve so much on the product side for now, because we have procured a high end system.
Director of Applications at Coast Capital Savings Credit Union
Real User
Dec 5, 2018
The navigation on some of the configuration parameters is a bit cumbersome, making the learning curve on functions somewhat steep. I would like them to make upgrading simpler. I would like it if they could offer a simpler upgrade guide which you can generate through their website, because their current version is full of dozens of complicated CLI commands.
Systems Programmer at a university with 10,001+ employees
Real User
Oct 24, 2018
I would like to see more cloud integration. NetApp had nothing for cloud integration about three or four years back and then, all of a sudden, they got it going and got it going quickly, catching up with the competition. They've done a very good job. NetApp's website has seen phenomenal changes, so I greatly appreciate that.
Just more scale-out. It can only do two nodes and one SVM, which would be okay as long as I could scale easily. It needs to mature and gain more capabilities.
NetApp Cloud Volumes ONTAP should improve its support.
I would want more visibility and data analytics, where we can see anomalies within the shares from the GUI.
Some monitoring issues require improvement. The auto alerting and monitoring should be better in the next release.
We've been dealing with general pre-requisite infrastructure configuration challenges. Once those are out of the way, it is easy.
The solution could be better when we're connecting to our S3 side of the house. Right now, it doesn't see it, and I'm not sure why.
I would like to see more information about Cloud Volumes ONTAP using Google Cloud Platform on NetApp's website.
The encryption and deduplication features still have a lot of room for improvement.
NetApp CVO needs to have more exposure and mature further before it will have greater acceptance.
In the next release, I would like to see more options on the dashboard. Local support needs improvement.
Only AWS and Azure public clouds are currently available from China, and I would like to see support for Aliyun (Alibaba Cloud).
We would like to have support for high availability across multiple regions. There is no support for that in Microsoft Azure.
The data tiering needs improvement, e.g., moving hot data to faster disks.