The primary use case is to move aging data to the cloud.
It is deployed on the cloud.
The tool saves us time and money. It's now easy to retrieve data, and you can go back and look at the statistics to study them. Because my company is focused on healthcare, there's no time limit on the retention of information; it's infinite. Instead of having all our data on tapes, where it takes many hours to retrieve information, this is a good solution.
The migration is seamless. Budget-wise, we shouldn't be spending a whole lot; we would like something reasonable. What's happening right now is that when we try to develop a cloud solution, we don't see the fine print, and at the end of the day we get a long, itemized bill. We don't want those unanticipated costs.
We use the solution's inline encryption with SnapMirror, and we did geo-audits and things like that. In other words, security is everything put together. It's not just storage talking to the cloud; it's everything else too: network, PCs, clients, etc. Securing it is a cumulative effort. That's where we try to make sure there are no vulnerabilities, and any vulnerabilities are addressed and fixed right away.
The solution's Snapshot copies and thin clones are good in terms of operational recovery. Snapshot copies are essentially point-in-time data backups. Obviously, critical data is snapshotted more frequently, and even clients and end users find it easier to restore whatever they need if it's file-based data.
The solution's Snapshot copies and thin clones have affected our application development speed in a very positive way. Using Snapshot copies and clones, our teams were able to develop applications, doing pretty much all the development in-house. They were able to roll things out first in the test environment of the R&D department. The R&D department uses it a lot. It's easy for them because they can simulate production issues while production keeps running, so they love it. We create clones for them all the time.
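For anyone scripting that workflow, here is a minimal sketch of creating a Snapshot copy and then a writable thin clone (FlexClone) through the ONTAP REST API from Python. The /api/storage/... endpoint layout follows the documented REST API, but the cluster address, credentials, volume UUID, and names below are hypothetical placeholders to verify against your ONTAP version.

```python
# Minimal sketch: Snapshot copy + FlexClone via the ONTAP REST API.
# Host, credentials, UUIDs, and names are placeholders (assumptions).
import requests

ONTAP = "https://cluster.example.com"   # hypothetical cluster address
AUTH = ("admin", "password")            # use proper credentials/certs in practice
VOL_UUID = "REPLACE-WITH-VOLUME-UUID"   # placeholder volume UUID

# 1) Create a point-in-time Snapshot copy of the production volume.
requests.post(
    f"{ONTAP}/api/storage/volumes/{VOL_UUID}/snapshots",
    auth=AUTH,
    json={"name": "pre_release_test"},
    verify=False,  # lab-only; validate certificates in production
)

# 2) Create a thin, writable clone from that Snapshot for the R&D
#    test environment (space-efficient until blocks diverge).
requests.post(
    f"{ONTAP}/api/storage/volumes",
    auth=AUTH,
    json={
        "name": "vol_rnd_clone",
        "svm": {"name": "svm1"},
        "clone": {
            "is_flexclone": True,
            "parent_volume": {"name": "vol_prod"},
            "parent_snapshot": {"name": "pre_release_test"},
        },
    },
    verify=False,
)
```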
The solution helped reduce our company's data footprint by moving two petabytes of data to the cloud. All of the tape data is now written to the cloud. We have almost reached the capacity that we bought even before we knew we were going to reach it, so that's good. It also reduces labor, because with fewer tapes you don't have to go around buying tapes, maintaining them, sending them offsite, etc. All of that has been eliminated.
Right now, we're using StorageGRID. Obviously, anything that you're writing to the cloud, or getting back from the cloud, is a challenge. When we implemented StorageGRID, with its nodes and so on, we implemented it on bare metal. The issue is that implementing features like erasure coding is a huge challenge. It's still a challenge because we have a five-node bare-metal Docker implementation, so if you lose a node for some reason, the grid stops reading from it and writing to it. This is because of limitations within the infrastructure and within ONTAP.
How it handles erasure coding is where I feel the improvement should be. Basically, it should be seamless. You don't want an underlying hardware issue to suddenly stop all reads and writes. Luckily, it happened at a replication site, so our main production site is still working and writing. But the replication site has stopped right now while we try to bring that node back. Since we implemented on bare metal, not on an appliance, we had to go back to the original vendor. They didn't send parts in time; we had a hardware memory issue, and then a hard disk issue, which brought the node down physically.
It needs better reporting. Right now, we have to piece everything together just to figure out what the issue could be. We get a random error saying, "This is an error," and we have to literally dig into it: look at log files, look through our logs, and look through the Docker log files, then verify, "Okay, this is the issue." We just want better alerting and error-handling reports. Once you get an error, you don't want to spend the first two hours trying to figure out what that error means; it should be actionable right away, so you can get straight to work on fixing it. That's where we see the drawbacks. Overall, the product is good and serves a purpose, but as an administrator and architect, nothing is perfect.
There's always room for improvement. Overall, it's still stable.
60 percent of our tape data is sitting in the cloud now.
There's a limitation to scalability. Right now, when we want to expand the initial architecture, we have to add additional nodes so the system can handle the data without hurting performance. Then we have to go back and request more licensing, which adds to the cost. In regards to scalability, unless you have a five- to six-year plan ahead, you can't just say, "Great, we have run out of space. Okay, let's increase it." It's not like simply increasing a volume.
Unless a much more experienced person gets involved, the frontline tech is only reading what he sees on the website. When we open a case, an automatic case has already been opened, and we see typical questionnaires but nothing pertaining to the actual case. For example, if you run out of space or have high inode counts, technical support sits there asking us about something else, nothing to do with high inodes and the volume being down or offline. It's not relevant; it's a generalized thing. You have to sit down and explain, "This has nothing to do with the questions you're asking. It's out of context, so you might want to look again and get back with the proper input." That's a pain.
However, the minute we say, "It's very critical," we see a good, solid SME on the line who is helping us.
I'm not as experienced as many of my colleagues. They're really frustrated. We did convey this concern to our account person and have seen a lot of change.
The company has always been a NetApp shop even before I entered the company. We continue to use it because of the good products. We do market research, obviously. We do see good products, and every year there is improvement. When we want to do hardware upgrades, it's still very good. The way we are trying to develop, it's very seamless for us and not a pain.
We have never felt, "We are done with NetApp. Let's move on to something else." I'd love to introduce other vendors into the mix, just so it's not a monopoly, but we still love NetApp as our primary.
It is a little complex. It's completely different from standard ONTAP in how you manage it, and there's a learning curve. Half the time you get confused and try to compare it with standard ONTAP. You start to say, "Oh, this feature was here. How come it's not there? That was very good there. How come it's not here?"
We used NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. The configuration wizards and their ability to automate the process were good. We liked it. It's all in one place, so you don't have to go around trying to use multiple tools just to get things worked out. You see what you have on the other side plus what you have on your end, and you're able to access it.
Mostly, we did it ourselves. When we went to MetroCluster, we used their Professional Services. For the rest of ONTAP, we deployed it ourselves. It is pretty much self-explanatory and has good training.
Cloud is cloud. It's still expensive. Any good solution comes with a price tag. That's where we are looking to see how well we can manage our data in the cloud by trying to optimize the costs.
I do know our licensing cost to some extent, but not fully. For example, I don't know overall how much we have gone over budget, or where we cut costs just to maintain licensing. That part of it, I don't know.
I know the licensing is a bit on the high end. That's why we had to downsize our MetroCluster disks and migrate to disks that were half used, just to reduce maintenance costs.
We use Caringo for object storage migration of aging data. It is a cheap solution for us, so that's why we use it. When we compared prices, Caringo was much cheaper.
Once we migrated everything to Caringo, there were challenges because it's another vendor, and then you're working with two different vendors. We started having issues, so now we use StorageGRID.
We chose NetApp because we already had the infrastructure. Adding additional resources and features into the mix is much easier because it's one vendor, and they understand the product. If we needed to add something and improve on the solution, it's much easier.
I would recommend NetApp any day, at any time, because there's so much hard work in it. It's more open and transparent. Nobody is coming from NetApp, saying, "We're going to sell this gimmick." Then, you view all the good stuff but begin to realize, "This is not what they promised." For this reason, I would recommend NetApp.
They make sure the solution fits our needs. It's not, "Okay, we'll go to the customer site and pitch whatever we feel like regarding our products," whether it fits or not, just to get through the door. A lot of people do that. NetApp makes an assessment, then makes sure, "Okay, it does fit."
The product: I would give it an eight (out of 10). The company: It's a six (out of 10).
We have not yet implemented the solution to move data between hyperscalers and our on-premises environment. It's just from our NetApp systems to the cloud, not hybrid yet. The RVM team is planning on that, so they can have the whole thing put on the cloud untouched rather than hosted on our data stores.
It is managing services in our production environment that are in Azure. It provides file shares, both NFS and CIFS, that are used by other applications that are also in Azure.
NetApp Cloud Volumes ONTAP is part of our company's production environment, so the entire company, over 5,000 employees globally, is touching it somehow. It holds the data for applications that employees consume.
Cloud Volumes ONTAP is great because of the storage efficiencies that it provides. When you look at the cost of running Azure native storage versus the cost of Cloud Volumes ONTAP, you end up saving money with Cloud Volumes ONTAP. That's a big win because cost is a huge factor when putting workloads in the cloud. We had a cost estimate survey done, a comparison between the two, and I believe that Cloud Volumes ONTAP saves us close to 30 percent compared to the Azure native costs.
Azure pricing is tiered. Once you exceed a certain amount of storage, your cost per unit goes down. So the more data you store, the more you end up saving.
The storage efficiencies from the NetApp platform allow you to do inline deduplication and compaction of data. All of this adds up to using less of the disk in Azure, which adds up to savings.
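As a rough illustration of how the tiered pricing and the storage efficiencies compound, here is a small Python sketch. The tier breakpoints and per-GB rates are invented for the example, not Azure's actual price list; the roughly 30 percent efficiency figure echoes the cost survey mentioned above.

```python
# Illustrative only: hypothetical tiered $/GB rates, not Azure's price list.
def monthly_cost(gb, tiers=((51_200, 0.020), (512_000, 0.018), (float("inf"), 0.016))):
    """Sum cost across cumulative tiers (first ~50 TB at one rate, then cheaper)."""
    cost, remaining, prev_cap = 0.0, gb, 0
    for cap, rate in tiers:
        band = min(remaining, cap - prev_cap)
        cost += band * rate
        remaining -= band
        prev_cap = cap
        if remaining <= 0:
            break
    return cost

raw_gb = 100_000                   # logical data before efficiencies
stored_gb = raw_gb * 0.70          # ~30% less disk after dedupe/compaction
print(f"native Azure storage:   ${monthly_cost(raw_gb):,.0f}/month")
print(f"with CVO efficiencies:  ${monthly_cost(stored_gb):,.0f}/month")
```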
We have two nodes of the NetApp in Azure, which means we have some fault tolerance. That is helpful because Azure just updates stuff when they want to and you're not always able to stop them or schedule it at a later time. Having two CVO nodes is helpful to keep the business up when Azure is doing their maintenance.
The solution provides unified storage no matter what kind of data you have. We were already using the NetApp platform on our on-premise environments, so it's something we're already familiar with in terms of how to manage permissions on different types of volumes, whether it's an NFS export or a CIFS share. We're able to utilize iSCSI data stores if we need to attach a volume directly to a VM. It allows us to continue to do what we're already familiar with in the NetApp environment. Now we can do them in Azure as well.
It enables us to manage our native cloud storage better than if we used the management options provided by the native cloud service. With CVO, all of your data shares and volumes are on the one NetApp platform. Whether you are adjusting share permissions on an NFS export or a CIFS share, you can do it all from within the NetApp management interface. That's much easier than the Azure native, where you may have to go to two or three different screens to do the same stuff.
The storage efficiencies are something that you don't get on native.
Also, because of the NetApp product, we're able to use the SnapMirror function and SnapMirror data from our on-prem environment into Azure. That is super-helpful. SnapMirror allows you to take data that exists on one NetApp, on a physical NetApp storage platform, and copy it over to another NetApp storage platform. It's a solid, proven technology, so we don't worry about whether data is getting lost or corrupted during the SnapMirror. We are also able to throttle back the speed of the SnapMirror to help our network team that is paying for a data circuit. We're still able to copy data into Azure, but we can manage the transfer cost because we can throttle back the SnapMirror. It's just very solid and reliable. It works.
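Here is a hedged sketch of that throttling step from Python. The /api/snapmirror/relationships endpoint is part of the ONTAP REST API, but treat the exact throttle field name and its units as assumptions to confirm against your ONTAP version's API reference.

```python
# Sketch: cap SnapMirror transfer speed so replication doesn't saturate
# the paid data circuit. Field names/units are assumptions to verify.
import requests

ONTAP = "https://cluster.example.com"   # hypothetical cluster address
AUTH = ("admin", "password")

# Look up the relationship replicating on-prem data into Azure.
rels = requests.get(
    f"{ONTAP}/api/snapmirror/relationships",
    auth=AUTH, verify=False,            # lab-only; validate certs in production
).json()["records"]
uuid = rels[0]["uuid"]                  # assumes the first record is the one we want

# Throttle the relationship (value assumed to be KB/s).
requests.patch(
    f"{ONTAP}/api/snapmirror/relationships/{uuid}",
    auth=AUTH, json={"throttle": 10240}, verify=False,
)
```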
And all of us IT nerds are already familiar with the NetApp platform so there was not a major learning curve to start using it in Azure.
NetApp also has something called Active IQ Unified Manager, and it gives us performance monitoring of the CVO from an external source. There are several people on my team that utilize the CVO and we each have a personal preference for how we look at data. The Active IQ Unified Manager is a product you can get from NetApp because, once you license your CVO, you are entitled to other tools. CVO does have resource performance monitoring built in, but we primarily utilize the Active IQ Unified Manager.
Beyond that, it provides all the great stuff that the NetApp platform can do, but it's just in the cloud.
I think this is more a limitation of how things operate in Azure, but the solution is affected by it. There's something about how the different availability zones and regions operate in Azure that makes it very difficult to set up complete fault tolerance using multiple CVO nodes, with one node in one region and one node in another. This is not something I have dug into myself; I am hearing about it from other IT nerds.
We've been using NetApp Cloud Volumes ONTAP for two years.
We had issues with Azure when they did maintenance on the nodes. They just do their maintenance and it's up to us, the customer, to make sure that our applications are up and data is flowing. When Azure does their maintenance, they do maintenance on one node at a time. With the two nodes of the CVO, it can automatically fail over from one node to the node that is staying up. And when the first node comes back online, it will fail back to the first node. We have had issues with everything failing back 100 percent correctly.
We have had tickets open with NetApp to have them look into it and try and resolve it. They've made improvements in some ways, but it's still not 100 percent automated for everything to return back. That's an ongoing thing we have to keep an eye on.
It is definitely scalable. You can add more disk to grow your capacity and you have the ability to add more nodes. There's a limit to how many nodes you can add, but you can definitely scale up.
Tech support is good. A lot of it depends on the technician that you get, but if you're not happy with one technician, you can request that it be escalated or you can request that it just be handled by another technician. They're very eager to help and resolve issues.
We had some issues with permissions and with getting the networking correct. But we had a lot of support from NetApp as well as from Azure. As a result, I would not say the setup was straightforward, but we got the help and the support we needed and you can't ask for more than that.
I've always found NetApp support to be accurate and good with their communications. Rolling out this product in Azure, and working with the IT nerds in our company and with Azure nerds, occasionally it does add another layer of who has to be communicated with and who has to do stuff. But my experience with NetApp is that they are responsive and very determined to get situations resolved.
It took us about a week to get everything ironed out and get both nodes functional.
We had done a PoC with a smaller instance of the CVO, and the PoC was pretty straightforward. Once we rolled out the production CVO with two nodes, it was more complicated. We had a plan for getting it deployed and for deciding at what point we would say, "Okay, now it's ready for prime time. Now it's ready to be put into production."
For admin of the solution we have less than 10 people, and they're all storage administrator analysts like me.
Our licensing is based on a yearly subscription. That is an additional cost, but because of the storage efficiencies that the NetApp gives, even with the additional cost of the NetApp license, you still end up saving money versus straight Azure native for storage. It's definitely worth it.
Make sure that you can stay operational when Azure is doing their maintenance. Make sure you fully understand how the failover and the give-back process works, so that you can deal with your maintenance.
The primary use case is for SAP production environments. We are running the shared file systems for our SAP systems on it.
It's helped us to dive into the cloud very fast. We didn't have to change any automations which we already had. We didn't have to change any processes we already had. We were able to adopt it very fast. It was a huge benefit for us to use the same concepts in the cloud as we do on-premise. We're running our environment very efficiently, and it was very helpful that our staff, our operators, didn't have to learn new systems. They have the same processes, all the same knowledge they had before. It was very easy and fast.
We did a comparison, of course, and it was cheaper to have Cloud Volumes ONTAP running with deduplication and compression, compared to storing everything, for example, on HA disks and having a server running all the time as well. And that was not even for the biggest environment.
The data tiering saves us money because it offloads all the cold data to Blob storage. However, we use the HA version, and data tiering only came to HA with version 9.6, which we are not yet running in our production environment; 9.6 is still an RC (pre-release), not a GA release. In our testing we have seen that it saves a lot of money, but our production systems are not there yet.
The high availability of the service is a valuable feature. We use the HA version to run two instances. That way there is no downtime for our services when we do any maintenance on the system itself.
For normal upgrades or updates of the system, such as security fixes, it helps that the systems and the service itself stay online. For one of our customers, we have 20 systems attached. If we had to go to that customer all the time and say, "Oh, sorry, we have to take your 20 systems down just because we have to do maintenance on your shared file systems," he would not be amused. So that's really a huge benefit.
And there are the usual NetApp benefits we have had over the last ten years or so, like snapshotting, cloning, and deduplication and compression which make it space-efficient on the cloud as well. We've been taking advantage of the data protection provided by the snapshot feature for many years in our on-prem storage systems. We find it very good. And we offload those snapshots as well to other instances, or to other storage systems.
The provisioning capability was challenging the first time we used it. You have to find the right way to deploy but, after the first and second try, it was very easy to automate for us. We are highly automated in our environment so we use the REST API for deployment. We completely deploy the Cloud Volumes ONTAP instance itself automatically, when we have a new customer. Similarly, deployment on the Cloud Volumes ONTAP for the Volumes and access to the Cloud Volumes ONTAP instance are automated as well.
But for that, we still use our on-premise automations with WFA (Workflow Automation), a NetApp tool which simplifies the automation of NetApp storage systems. We use the same automation for the Cloud Volumes ONTAP instances as we do for our on-premise storage systems. There's no difference, at the end of the day, from the operating system standpoint.
In addition, NetApp's Cloud Manager automation capabilities are very good because, again, it's REST-API-driven, so we can completely automate everything. It has a good overview if you want to just have a look into your environment as well. It's pretty good.
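To give a flavor of that REST-driven deployment, here is a minimal Python sketch. The Cloud Manager endpoint path and every payload field shown are assumptions for illustration; an actual deployment would follow the Cloud Manager API reference for the connector version in use.

```python
# Sketch only: endpoint shape and payload fields are assumptions, not the
# authoritative Cloud Manager API. Shown to illustrate automating CVO
# deployment per customer, as described above.
import requests

CM = "https://cloudmanager.example.com"        # hypothetical connector host
HEADERS = {"Authorization": "Bearer <token>"}  # token acquisition omitted

payload = {
    "name": "cvo-customer-42",    # hypothetical working-environment name
    "region": "westeurope",
    "vnetId": "<vnet-id>",        # placeholder Azure network identifiers
    "subnetId": "<subnet-id>",
    "svmPassword": "<secret>",
}

resp = requests.post(
    f"{CM}/occm/api/azure/vsa/working-environments",  # assumed endpoint shape
    headers=HEADERS, json=payload,
)
resp.raise_for_status()
print("deployment request accepted:", resp.json())
```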
Another feature which gets a lot of attention in our environment is the File Services Solutions in the cloud, because it's a completely, fully-managed service. We don't have to take care of any updates, upgrades, or configurations. We're just using it, deploying volumes and using them. We see that, in some way, as being the future of storage services, for us at least: completely managed.
Scale-up and scale-out could be improved. It would be interesting to have multiple HA pairs on one cluster, for example, or to increase the single instances more, from a performance perspective. It would be good to get more performance out of a single HA pair. My guess is that those will be the next challenges they have to face.
One difficulty is that it has no SAP HANA certification. The performance restrictions create challenges with the infrastructure underneath: the disks and such often have higher latencies than SAP HANA requires. That was something of a challenge for us: figuring out where to use HA disks and where to use Cloud Volumes ONTAP in that environment, instead of just using Cloud Volumes ONTAP everywhere.
The stability is very good. We haven't had any outages.
Right now, the scalability is sufficient for what it provides us, but we can see that our customer environments are growing. We can see that it will reach its performance limit in around a year or so. They will have to evolve, create some performance improvements, or build some scale-up/scale-out capabilities into it.
In terms of increasing our usage, the tiering will definitely be used in production as soon as it's GA for Azure. They're already working with the Ultra SSDs, for performance improvements on the storage system itself. As soon as those become generally available from Microsoft, that will probably be a feature we'll go to.
As for end-users, for us they are our customers. But the customers have several hundred or 1,000 users on the system. I don't really know how many end-users are ultimately using it, but we have about ten customers.
Technical support has been very good. The technical people who are responsible for us at NetApp are very good. If we contact them we get direct feedback. We often have direct contact, in our case at least, to the engineers as well. We have direct contacts with NetApp in Tel Aviv.
It's worth mentioning that when we started with Cloud Volumes ONTAP in the past, we did an architecture workshop with them in Tel Aviv, to tell them what our deployments look like in our on-premise environment, and to figure out what possibilities Cloud Volumes ONTAP could provide to us as a service provider. What else could we do on it, other than just running several services? For example: disaster recovery or doing our backups. We did that at a very early stage in the process.
We only used native Azure services. We went with Cloud Volumes ONTAP because it was a natural extension of our NetApp products. We have a huge on-premise storage environment from NetApp and we have been familiar with all the benefits from these storage systems for several years. We wanted to have all the benefits in the cloud, the same as we have on-premise. That's why we evaluated it, and we're in a very early stage with it.
To say the initial setup was complex is too strong. We had to look into it and find the right way to do it. It wasn't that complex, it was just a matter of understanding what was supported and what was not from the SAP side. But as soon as we figured that out, it was very straightforward to figure out how to build our environment.
We had an implementation strategy: Determining what SAP systems and what services we would like to deploy in the cloud. Our strategy was that if Cloud Volumes ONTAP made sense in any use case, we would want to use it because it's, again, highly automated and we could use it with our scripting already. Then we had to look at what is supported by SAP itself. We mixed that together in the end and that gave us our concept.
Our initial deployment took one to two weeks, maximum. It required two people, in total, but it was a mixture of SAP and storage colleagues. In terms of maintenance, it doesn't take any additional people than we already have for our on-premise environment. There was no additional headcount for the cloud environment. It's the same operating team and the same people managing Cloud Volumes ONTAP as well as our on-premise storage systems. It requires almost no maintenance. It just runs and we don't have to take care of updating it every two months or so for security reasons.
We didn't use a third party.
We have seen return on investment but I don't have the numbers.
The standard pricing is online. Pricing depends. If you're using the PayGo model, then it's just the normal costs on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you get together with your sales contact at NetApp and figure out the best price for your company. We have an Enterprise Agreement or something similar to that, so we get a different price.
In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and for Blob storage if you're using tiering. What's also important to know is the network bandwidth. That was the most complicated part of our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't keep an eye on it.
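A back-of-the-envelope model of that bandwidth question, in Python. The change rate, restore volume, and per-GB egress rate are made-up placeholders; the point is simply that traffic leaving the cloud is what drives the surprise costs.

```python
# Illustrative traffic/cost model with invented numbers (not real prices).
daily_changed_gb = 400         # assumed daily change rate replicated to Azure
restore_gb_per_month = 2_000   # assumed data pulled back on-prem per month
egress_rate = 0.05             # hypothetical $/GB for data leaving the cloud

monthly_in = daily_changed_gb * 30        # inbound; typically not egress-billed
monthly_out_cost = restore_gb_per_month * egress_rate

print(f"inbound to cloud: {monthly_in:,} GB/month")
print(f"estimated egress charges: ${monthly_out_cost:,.2f}/month")
```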
We have a single-vendor strategy.
Don't be afraid of granting permissions because that's one of the most complex parts, but that's Azure. As soon as you've done that, it's easy and straightforward. When you do it the first time you'll think, "Oh, why is it so complicated?" That's native Azure.
The biggest lesson I've learned from using Cloud Volumes ONTAP is that, from an optimization standpoint, our on-premise instance was a lot more complex than it had to be. That was a big lesson, because Cloud Volumes ONTAP is a very easy, lightweight service. You just use it and it doesn't require that much configuring. You can just use the standards which come from NetApp, and that was something we didn't do with our on-premise environment.
In terms of disaster recovery, we have not used Cloud Volumes ONTAP in production yet. We've tested it to see if we could adopt Cloud Volumes ONTAP for that scenario: migrating all our workloads, or all the storage footprint we have on-premise, to Cloud Volumes ONTAP. We're still evaluating it. We've done a lot of cost comparison, which looks pretty good. But we are still facing a little technical problem because we're a CSP (cloud service provider). We're on the way to having Microsoft fix that. It's a Microsoft issue, not a NetApp Cloud Volumes ONTAP issue.
I would rate the solution at eight out of ten. There are improvements they need to make for scale-up and scale-out.
The solution helps us keep production data protected.
The tool's most valuable features are the SnapLock and SnapMirror features. If something goes wrong with the data, we can restore it. This isn't a mirror; we store data in different locations. If there's an issue on the primary site, we can retrieve data from the secondary site.
Multiprotocol support in NetApp Cloud Volumes ONTAP is beneficial because it allows customers to manage SAN and NAS data within a single storage solution. This feature eliminates the need to purchase different types of storage.
NetApp Cloud Volumes ONTAP should improve its support.
I have been working with the product for five years.
The solution's performance is good. It depends on your chosen model and configuration, but even the lower-end models perform well. I rate its stability a nine out of ten.
I rate NetApp Cloud Volumes ONTAP's scalability as ten out of ten.
NetApp Cloud Volumes ONTAP's deployment is easy, and I rate it a ten out of ten. It can be completed in half an hour and depends on customer configurations.
The solution's pricing is reasonable.
I've worked in the IT industry for over ten years, dealing with various storage solutions from vendors like HPE and OEMs. The tool stands out due to its unique features and functions that protect and manage customer data.
I rate the overall product a nine out of ten.
Desktop-as-a-service is a PoC that I'm doing for our customers to allow them to use NetApp for their personal, departmental, and profile shares. This connects to the desktop-as-a-service that we're building for them.
This is for training. The customer has classrooms that they have set up, with about 150,000 users coming through. They want a secure, efficient solution that can be reset after one class finishes, before the next class comes in, using a NetApp CVO as well as some desktop services on AWS.
It is hosted on AWS, where CVO provides the filers, along with Cloud Manager. We were looking at it with Azure as well, because it doesn't matter; we want to do multicloud with it.
We haven't put it into production yet. However, in the proof of concept, we showed how to use it with daily Snapshot coverage, because we're doing it for a training area. This allows them to return to where they were. The bigger thing is that if they need to reset for a class, we can restore a gold copy or flip back to where they need to be.
It gives them one place to go for storage across everything. The customer is very familiar with NetApp on-prem. It allows them to gain access to the file piece and helps with the training aspect, so they don't have to relearn something new. They already know this product; they just have to learn some widgets, and what it's like to operate and deploy it in different ways in the cloud.
The customer knows the product, so they don't have to train their administrators on how to do things; they are very familiar with that piece of it. Then, deduplication, compression, and compaction are all things you get from moving to CVO and the cloud itself. They really enjoy that, because now they're getting a lot of cost savings from it. We anticipate cloud cost savings of about 30 percent, though it is not in production yet. If it is a 30 percent or better savings, then it is a big win for the customer and for us.
I would like some wizards or best practices on how to secure CVO, inherent to Cloud Manager. That seems like a good place to put guidance like that.
I would also like some more performance metrics to know what it is doing. There are some metrics inherent to Cloud Volumes ONTAP, but it would be nice to see them inside Cloud Manager as well: a little snapshot view, with the ability to drill down deeper.
This is where I would like to see changes, primarily around security and performance metrics.
It is a good system. It is very stable, as far as my use of it goes, and I find that the support for it is really good as well. It is something that I would offer to all of my customers.
It is easy to scale; that's inherent to the actual product. It can move to another cloud solution or be managed from another cloud solution. So, it takes down barriers which are sometimes put up by vendors in different ways.
We use NetApp Cloud Manager to get up and running with Cloud Volumes ONTAP. Its configuration wizards and ability to automate the process are easy, simple, and straightforward. If you have any knowledge of storage, even to a very small amount, the wizards will click through and help to guide you through the right things. They make sure you put the right things in. They give some good examples to make sure you follow those examples, which makes it a bit more manageable in the long run.
The customer uses some native capabilities inherent to AWS and has looked at those. NetApp was one of the first ones that they looked at, and it is the one that they are very happy with today.
Work with your resources in different ways, both at NetApp and in the partner community. But bigger than that, just ask questions. Everybody seems willing to help move the solution forward. The biggest advice is to just ask when you don't know, because there is so much to know.
I would rate the solution as a nine (out of 10).
We're not using inline encryption right now.
We use it to monitor our on-prem clusters and our SnapMirror between one and the other.
It's a single pane of glass where we can see our applications running.
The ability to see things going back and forth has been quite useful.
Its migration capabilities are very good.
The solution could be better when we're connecting to our S3 side of the house. Right now, it doesn't see it, and I'm not sure why.
I've used the solution for a little over a year. Before, it was called Cloud Manager.
The stability is pretty good. There was an instance, though, where we were trying to delete a CVO instance from it, and it took me a while to get it to release. It took a while to delete one of the instances because we had already taken it out, so when it asked to delete it, it couldn't connect to it to do so. We ended up creating a new workspace and then deleting the old workspace.
We only have a few systems in it: two on-prem clusters, one CVO instance, and an S3 instance, which I don't have a connection to yet. There's something wrong with the configuration or a firewall rule going into S3. Then, we'll have our FSx one built into it when we get to that point.
Technical support is good. I get all the answers I need. I've never had any trouble with it. A lot of the time, we don't use it, though. Typically, we just Google something.
Of course, it would be ideal if they improved response time.
Positive
We did not use another solution previously. We always used the Cloud Manager, and then Cloud Manager became BlueXP. We've actually used the solution for about two years now, under two different names.
I was involved in the initial setup. When we deployed it, we had to add all the systems to it. It was really easy to set up.
Once we had the workspaces in there, it was really easy just to add systems.
There's another admin that helped me with it who was the primary at the time.
I have not seen any ROI.
The pricing doesn't matter as it comes with the license that we have. It's free of charge with the bundle.
We did not evaluate any other solution.
We haven't done an actual migration from on-prem to the cloud. We're using it to drag and drop Volumes.
I'd rate the solution nine out of ten since I had issues with deleting it, and I had to recreate a workspace.
Cloud Volumes ONTAP is used for disaster recovery right now, and the primary use case for our current clients and environments is CIFS. Most clients use Cloud Volumes ONTAP as a replication destination for CIFS. It's a way to back up their documents and files offsite for disaster recovery. They have VMs that they spin up and connect to.
In most cases, we have not deployed anything that uses another service protocol, like iSCSI or NFS. It's strictly CIFS: DR for CIFS volumes, with a destination that replicates from on-prem to the cloud. We haven't had to use it for a real failover, but we've done DR tests with it.
The other two instances we're currently running will be the same scenario, but we're not there yet. Right now, they are being used for SnapMirror destinations of CIFS volumes only, and that's all three. We've been running Cloud Volumes ONTAP in Azure as a VM along with a connector. They had one deployed before I took it over, but it's typically done within the NetApp Cloud Manager system. Once we connect to the Azure portal or subscription, we push out the CVO from there.
Our clients see most of the benefits. Cloud Volumes ONTAP provides offsite backups. We used to host our backups on physical infrastructure in a data center or on remote sites. There were a lot of storage costs for replication. By implementing Cloud Volumes ONTAP in the Azure portal, we eliminated the cost of additional hardware and everything you have to maintain on-prem in a physical environment and put it up to the cloud. That was a considerable cost savings for the customer.
Cloud Volumes ONTAP is a massive improvement in terms of manageability. It's easier for customers to perform certain functions from that interface, knowing it sits on a high availability platform. We don't worry about paying all these separate vendors for replication solutions. Other costs are associated with maintaining physical infrastructure in a data center, like electricity or storage space, RAM, and other hardware. It has improved our clients' bottom line because they spend less on disaster recovery.
The Cloud Manager application that's on the NetApp cloud site is easy to use. You can set up and schedule replications from there, so you don't have to go into the ONTAP system. Another feature we've recently started using is the scheduled power off. We started with one client and have been slowly implementing it with others. We can cut costs by not having the VM run all the time. It's only on when it's doing replication, but it powers off after.
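To see why that scheduled power-off matters, here is a simple cost sketch in Python. The hourly VM rate and the replication window are made-up placeholders, not Azure prices, and note that disk capacity is still billed while the VM is off.

```python
# Invented numbers: compares an always-on CVO VM with one that runs only
# during the nightly replication window. Disks are still billed either way.
vm_rate_per_hour = 1.50        # hypothetical compute cost of the CVO VM
window_hours = 4               # assumed nightly replication window

always_on = vm_rate_per_hour * 24 * 30
scheduled = vm_rate_per_hour * window_hours * 30
saving = 100 * (1 - scheduled / always_on)
print(f"always on: ${always_on:,.2f}/mo; scheduled: ${scheduled:,.2f}/mo "
      f"({saving:.0f}% less compute spend)")
```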
Cloud Volumes ONTAP's interface could use an overhaul. Sometimes you have to dig around in Cloud Manager a little bit to find certain things. The layout could be more intuitive.
I haven't been using NetApp Cloud Volumes ONTAP for too long. It has been a little under three years since we started working with it. We were mostly doing a lot with data centers, so we only really started getting into cloud systems about three years ago.
Cloud Volumes ONTAP seems to be fairly stable so far. The only time we have issues is when there is a circuit interruption, but the product has been pretty stable. We haven't had issues with crashes or data getting corrupted. We've had interruptions due to internet problems or links between the sites.
Those are things we have no control over because they're different providers. That's the only issue that I've seen. But those aside, the actual system itself has been fine as far as resiliency, performance, and availability.
We can expand on it as needed. In particular, it's easy to add storage, and storage expansion is probably the feature we utilize the most. We don't mess with any other features, like within the protocols or anything like that. Those are fine, but storage scalability is pretty good.
Our clients' storage needs vary. Typically, it's somewhere in the range of 20 to 30 terabytes, but at least 15 to 30 terabytes. Each client is a little different, but the one that uses the most storage has a capacity of about 30 terabytes.
NetApp technical support is pretty good. We sometimes have to wait a bit, but they're good at resolving issues once they find out what the problem is. They come back with solutions, so I would rate them pretty well.
Deploying Cloud Volumes ONTAP can be complex at times, but I think it's a learning curve. You have to put in many different pieces, and it's not always easy to find the documentation you need on the web. Some parts are straightforward, but sometimes you need to do some digging before deploying.
It really comes down to planning. When implementing, we make sure each case is planned and deployed, down to the networking part for Azure. We also put together a template so other engineers can follow it or use it as a guideline when building. I make a basic template of the required information, configuration settings, etc.
These were all deployed as part of a much larger project, which included new hardware that was upgraded. The Azure and NetApp Cloud Volumes ONTAP were part of that upgrade experience. It was in conjunction with the client getting a new on-prem NetApp system and other infrastructure, like switches. Once everything was migrated, we implemented the Azure part in Cloud Volumes ONTAP.
We have a small team for handling deployment. I think they have maybe two people. One person could do it, but there is an alternative if somebody is out on vacation. The managed service division covers all the maintenance for our clients. The managed service team takes over all the backend IT work for our clients. Instead of having a full staff, the client pays us to manage the backend of their servers and other infrastructure. As a managed service, we go in and take care of their switching, patches, upgrading, etc.
We do all of the implementations in-house for our clients, who are the end users. We sell them the solution and deploy it for them.
I believe our clients see a return because they don't need to purchase hardware. It's much easier and quicker for them to get additional storage when needed compared to an on-prem system.
They save on the costs associated with ordering additional storage for a physical on-prem system; in Azure, you expand what you have and just pay a little more. One client saw significant cost savings on their electricity bill. They reduced it by almost half just by shutting these things off.
Our management and salespeople deal with pricing. I'm not part of the price negotiations or anything like that. I work on design and implementation.
I rate Cloud Volumes ONTAP nine out of 10. It's an excellent solution that is cloud-based, so you don't have to worry about leasing or purchasing hardware, or the upfront costs of purchasing lines and circuits. Since this product works over the internet, you only need data access, which most sites already have.
Overall, I would say this is better than an on-prem solution that requires physical hardware at remote sites. You don't need to invest in buying or maintaining physical hardware. In this case, you're paying a monthly cost for something. You can decide at any time to stop using it if you don't need it anymore. That's a problem with owning physical infrastructure. You have to dispose of it when you don't need it anymore. Cloud Volumes ONTAP is also easier to manage and upgrade than on-prem systems.
We use it mostly for distributed files. For example, one of our customers is in construction. They have centralized NetApp storage and set up replication with Cloud Volumes ONTAP. Several branch users access the cloud instance, and whatever work they do on the CVO instance gets replicated to the client's data center, onto the on-premise NetApp storage. They use GFC (Global File Cache) with CVO for a seamless experience.
A lot of these licenses are priced at the rate required for capacity, so customers are able to reduce the license consumption and also the consumption of the underlying cloud storage.
One of the clients had 100 terabytes on-premise. When that same data went to CVO, he was able to reduce it to somewhere around 20 to 25 terabytes. Using deduplication and compression, he reduced the consumption of the underlying cloud storage. And obviously, with the licenses, you now need only 25 terabytes instead of 100.
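That reduction, as simple arithmetic (a short Python sketch using the figures quoted above):

```python
# Efficiency ratio and license impact from the example above (illustrative).
logical_tb = 100   # data as it existed on-premise
stored_tb = 25     # roughly what remained after deduplication/compression

print(f"efficiency ratio: {logical_tb / stored_tb:.0f}:1")
print(f"capacity license needed: {stored_tb} TB instead of {logical_tb} TB")
```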
There's not much scope for improvement. The solution is more restricted by the underlying cloud: the performance of a single instance depends on the performance of the underlying cloud resources.
I have been using NetApp Cloud Volumes ONTAP for two years.
NetApp is stable.
NetApp is scalable.
The technical support team is wonderful.
Positive
It is easy because BlueXP makes it very easy: just a few clicks. It's a little bit tricky for customers who are very new to the cloud. Traditional customers who run their own data center and are not well versed in cloud might face challenges in setting it up, because that part depends more on the public cloud vendor than on NetApp. Sometimes support renewal is an issue, because the NetApp system sees the license as expired, and when you try to renew it, the renewal is not reflected in the NetApp portal. So the support flow could be improved.
For enterprise customers, it's very cost-effective. But in the SMB segment, pricing is a bit of a challenge.
I rate the overall solution a ten out of ten.