What is our primary use case?
We use it primarily to look at both performance and security around our NetApp Filers. I know it can expand to other entities within the NetApp environment, but we really focus on our Filers, using Cloud Secure to spot potential ransomware attacks, unauthorized changes to files, and deletions.
It's a SaaS solution hosted by NetApp.
How has it helped my organization?
Before we started using Cloud Insights, there wasn't really a good solution if something bypassed our firewalls and sat dormant. In that situation, it was difficult for us to find it before it was too late. This solution gives us an opportunity to proactively spot abnormal behavior before it turns into a zero-day attack.
We used to have a weekly file audit that was scripted, so not a whole lot of time was lost producing it. The difference is the gap between when that audit last ran and what this service provides, which is closer to real time. If we had to collect data ourselves the way this service does, we couldn't really put a value on it, because it is a real-time solution.
It also provides a single tool for cloud-based architectures, such as microservices and containers, as well as on-premises infrastructure. We've pointed the data collectors at various entities, whether an HCI solution, a NetApp Filer, or even some of our cloud instances. The flexibility to look at various resources, whether they're in the cloud or on-premises, is a huge benefit. Having all those data collectors in one location and being able to compare that information is invaluable.
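For anyone who wants to pull that consolidated inventory programmatically, here is a minimal sketch against the Cloud Insights REST API. The tenant URL and token are placeholders, and the endpoint path and header name reflect my understanding of the API, so treat them as assumptions and verify them against your tenant's Swagger documentation.

```python
# Minimal sketch: list the storage assets Cloud Insights has discovered,
# cloud and on-prem alike, from one place.
# TENANT_URL, API_TOKEN, and the endpoint path are assumptions; check
# your tenant's API (Swagger) documentation before relying on them.
import requests

TENANT_URL = "https://example.cloudinsights.netapp.com"  # hypothetical tenant URL
API_TOKEN = "YOUR_API_ACCESS_TOKEN"                      # API access token created in the admin UI

def list_storages():
    """Fetch the storage assets the data collectors have reported."""
    resp = requests.get(
        f"{TENANT_URL}/rest/v1/assets/storages",
        headers={"X-CloudInsights-ApiKey": API_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for storage in list_storages():
        print(storage.get("name"), storage.get("vendor"), storage.get("model"))
```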
The solution's ability to quickly inventory our resources, figure out interdependencies across them, and assemble a topology of our environment, along with the performance metrics and dashboards for doing that, is amazing. We're able to see and compare things like latency, and overlay latency between what may seem like two completely separate systems. You can run comparisons across them, so when a particular application is causing spikes on both sides, it helps you dig into queries and application performance overall. The relational portion of this solution is beneficial.
It provides us with advanced analytics: we can overlay data and compare various storage entities, their latencies, and overall performance to gauge whether an application is performing well. It helps us see whether the issue is with policies or an actual, physical limitation of the solution we're using. Being able to look at overall IOPS and then drill down is super-beneficial. The advanced analytics have also helped us tune SQL and review how our SQL LUNs or volumes are laid out, such as whether to separate them by log and data. It helps with load balancing and the overall performance of those storage-intensive applications.
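As a rough illustration of the overlay idea, here is a sketch that lines up latency exported for two nodes and checks how closely they track each other. The file names and the "timestamp" and "latency_ms" columns are assumptions about an ad hoc export, not a Cloud Insights format.

```python
# Illustrative sketch: overlay latency for two storage nodes on one chart
# and check whether their spikes line up with a shared workload.
# Assumes per-node latency exported to CSV with hypothetical
# "timestamp" and "latency_ms" columns.
import pandas as pd
import matplotlib.pyplot as plt

node_a = (pd.read_csv("node_a_latency.csv", parse_dates=["timestamp"])
            .set_index("timestamp")["latency_ms"]
            .resample("5min").mean())
node_b = (pd.read_csv("node_b_latency.csv", parse_dates=["timestamp"])
            .set_index("timestamp")["latency_ms"]
            .resample("5min").mean())

ax = node_a.plot(label="node-a")
node_b.plot(ax=ax, label="node-b")
ax.set_ylabel("Latency (ms)")
ax.legend()
plt.show()

# A strong correlation between the two series suggests a shared driver,
# such as one application hitting both nodes, rather than a hardware
# limit on a single node.
print("correlation:", node_a.corr(node_b))
```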
In addition, the advanced analytics have helped to reduce the time it takes to find performance issues. They've helped with identifying applications that are causing latency and that may be affecting other applications. The analytics help with root cause analysis. One example is looking at nodes that are in high demand and being able to say, "Okay, this node is under higher demand than that one," and knowing we may need to move some of the load over so that, performance-wise, everything is more evenly balanced and correlates better. From a troubleshooting perspective, NetApp Cloud Insights saves at least 10 hours of my storage admin's time per week.
It has also helped to right-size workloads. From a helpdesk perspective, we've seen ticket volume go down by at least 10 percent, and that was right after a data center move where, obviously, there was a spike. Directly after we started using Cloud Insights and the associated Active IQ to monitor the storage, it brought down the time needed to identify storage nodes that were heavily affected or needed adjustments. So far, the right-sizing has only reduced soft costs. The hard-cost side of storage is a long-term thing where you say, "Okay, I've been able to move data over and have deduplication rates increase." But that takes a while.
What is most valuable?
Cloud Secure is definitely the most valuable feature, along with being able to see file-level activity. It gives real-time alerting on possible ransomware attacks and provides file security review. It helps us see if something abnormal is happening on the system before it's too late.
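To illustrate the kind of signal involved (this is not Cloud Secure's actual detection logic), here is a toy sketch that flags a user whose file modify/delete rate suddenly jumps far above their recent baseline. The "user", "timestamp", and "action" columns are assumptions about an exported activity log.

```python
# Toy illustration of "abnormal file activity"; NOT the product's
# detection algorithm. Flags users whose latest hourly modify/delete
# count far exceeds their own recent average.
import pandas as pd

def flag_abnormal_users(activity: pd.DataFrame, threshold: float = 10.0) -> list[str]:
    """Return users whose latest hourly change rate exceeds threshold x their baseline."""
    changes = activity[activity["action"].isin(["modify", "delete"])]
    hourly = (changes.set_index("timestamp")
                     .groupby("user")
                     .resample("1h")
                     .size())
    flagged = []
    for user, counts in hourly.groupby(level="user"):
        baseline = counts.iloc[:-1].mean()
        if pd.isna(baseline) or baseline == 0:
            baseline = 1.0
        if counts.iloc[-1] > threshold * baseline:
            flagged.append(user)
    return flagged
```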
Since it is cloud-based, it can be accessed from anywhere. That is good because we can check on any issues affecting our on-prem equipment. Being able to look at that from anywhere has been very efficient.
What needs improvement?
As I went through learning the querying, it could have been a little more intuitive. I'm still fairly new to the system.
In a perfect world, we would have something right out-of-the-box that identifies what we call "noise" and reduces the amount of data. You're presented with so much data when you first start the data collectors. For example, it brings back a lot of change activity that happens just because of standard computing, like profile changes. Being able to identify, categorize, and strip out things like that would be very beneficial. The product probably can do that; I just haven't gotten there yet. Getting that out-of-the-box would be helpful.
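As a stopgap, something like the following sketch is what I have in mind: filtering routine churn out of an exported activity log before review. The "path" column and the "noise" patterns are assumptions about our own export, not a Cloud Insights feature.

```python
# Hedged sketch of the out-of-the-box "noise reduction" we'd like:
# drop routine change activity (profile churn, temp files) from an
# exported activity log so only meaningful changes remain for review.
import pandas as pd

NOISE_PATTERNS = [
    r"\\Users\\.*\\AppData\\",   # roaming/profile churn
    r"\.tmp$",                   # temp files
    r"~\$",                      # Office lock files
]

def strip_noise(activity: pd.DataFrame) -> pd.DataFrame:
    """Remove rows whose file path matches a known-noise pattern."""
    noisy = activity["path"].str.contains("|".join(NOISE_PATTERNS), regex=True, na=False)
    return activity[~noisy]
```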
There is training that has been provided. I just haven't gone through all of it yet.
For how long have I used the solution?
I have been using NetApp Cloud Insights on and off for about two to three months.
What do I think about the stability of the solution?
The stability has been good. As long as we have good connectivity and communication with the portal, everything works fine. We haven't experienced any downtime.
What do I think about the scalability of the solution?
From what I've seen, there have been no scaling issues. The only limitation when scaling is licensing. As long as you have the money to license it, it will scale out as much as necessary.
Right now, we are just collecting data and we don't have any formal reporting out of it. We have used it more for ad hoc reporting. But the plan is to integrate it with some of our IT compliance summaries, to report on access control and protection analysis on a quarterly basis.
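The quarterly summary we have in mind could be as simple as the following sketch, which counts access-control events per user per quarter from an exported activity log. The file name and column names are assumptions, not a built-in report.

```python
# Sketch of a quarterly access-control summary from an exported
# activity log; column names ("timestamp", "action", "user") are assumed.
import pandas as pd

activity = pd.read_csv("cloud_secure_activity.csv", parse_dates=["timestamp"])
quarterly = (activity[activity["action"].isin(["permission_change", "access_denied"])]
             .groupby([pd.Grouper(key="timestamp", freq="Q"), "user"])
             .size()
             .rename("events")
             .reset_index())
print(quarterly.to_string(index=False))
```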
How are customer service and support?
Their technical support has been great. They understand some of the barriers, some of the things that we've run into, and it's been a very easy interaction. There aren't too many things that can go wrong with the solution. It's been great.
Their turnaround time, depending on the urgency, has been within a couple of hours. We haven't had an incident that involved the security side, but they've been very responsive. We usually open tickets via email for questions around usability.
Which solution did I use previously and why did I switch?
We didn't have anything like this before.
How was the initial setup?
The initial setup was very straightforward. It was very simple. I told it where the data collector should point to, and it just worked, right out-of-the-box. It took less than 15 minutes to set up. Once you put the data collectors in, you assign what they're collecting and that's it. Because it's a product of NetApp, it was very simple.
The deployment, from start to finish, was about an hour-long meeting. There were a couple of things my system admin had to do, such as deploying the acquisition units, but that didn't take long at all. We have deployed a lot of things over the last year, and this was very simple in comparison.
I did not require an implementation strategy for this. It's an analytical add-on tool that just worked. I didn't run into the standard security issues or anything like that.
We have approximately six users on it. They're all within IT and provide information to either executive management or others. Their roles vary from my CIO down to my storage administrators and my security admins. Storage admins and network admins look at it from a performance perspective. My security admins look at it in terms of ransomware and the access-control side. Those individuals are also responsible for maintenance. It's a tool that did not require additional staffing and was more of a benefit for the individuals in my group.
What about the implementation team?
We worked directly with NetApp. It was a pretty straightforward meeting. They provided the requirements and we implemented it.
What's my experience with pricing, setup cost, and licensing?
Be aware of the capacity licensing and understand how that works, because it is based on capacity. Getting an understanding of that is the biggest thing.
Which other solutions did I evaluate?
We did look at some other options, on-premises solutions for identifying threats, and we found that there were some problems with them. One of them was Varonis, but it was geared more toward security, ransomware, and identifying insider threats. With Cloud Insights you get that, but you also get the performance side: the ability to see how to tweak or manage the performance of your storage.
We did run a trial with Varonis and, at the time, ran into problems with their collectors causing blocking issues. We weren't comfortable with that performance. We didn't see the same performance issues with this solution.
What other advice do I have?
Work with NetApp to get the best support possible.
The biggest lesson I've learned from using the solution has been understanding the relational side of things, how our nodes interact with each other, and how applications can affect multiple nodes in our environment. It's helped us to load balance to make our environment more efficient.
Because it was so easy to deploy I would rate it a 10 out of 10.
Which deployment model are you using for this solution?
Public Cloud
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.