We're using it for monitoring and managing all of the NetApp storage in our environment, both in the cloud and on-prem. We are expanding its use to cover our virtualized environment and our in-house storage environments.
It has allowed us to tag storage with applications and business units, so that when we need to migrate data for an application, we can quickly identify all of the storage that the application utilizes.
The solution also provides a single tool for cloud-based architecture, such as microservices and containers, as well as on-premises infrastructure. We're not fully utilizing that part of it yet, but we're in the process of starting down that road. It has given us the ability to better manage the storage and to report on it to management. We're able to show them, via dashboards and reports, how the storage is being utilized and what the return on investment is.
And when it comes to the solution's ability to quickly inventory our resources, figure out interdependencies across them, and assemble a topology of our environment, it's very powerful. Because it's SaaS, a lot of the upfront work is being done for us. We're spending less time managing the application and more time getting the data we need out of it.
NetApp Cloud Insights' advanced analytics for pinpointing problem areas are outstanding. When we have a performance issue with a specific piece of storage, a volume, or an application, we can graphically show the business units and application owners how the storage is performing relative to the performance they're seeing on their clients or servers. It helps us pinpoint where issues may be, whether they are with the storage, the network, the clients, or the application itself.
The advanced analytics have also enabled us to look at trending and detect:
- where issues are
- where we would have to do load balancing to keep performance from being degraded
- where we are running out of space in a given data center or storage node.
The advanced analytics also help reduce the time it takes to find performance issues, because we run reports that look at trending. We can see where performance issues have not yet occurred but will, if current trends continue. That way we can proactively take action to ensure we don't have any problems; we mitigate them before they happen. It's hard to quantify how much time it saves us, because seeing problems before they occur is far easier than dealing with them afterward. It saves the business units from having issues and keeps the entire environment running more smoothly.
In addition, the solution has helped us to right-size workloads. It has shown us where we have issues, where storage growth has exceeded what is manageable. We've then worked with the business units and the application owners to break the data up into more manageable sizes, so that not only do they continue to get the performance they need, but we're able to manage the storage without being backed into corners we can't get out of. Keeping the data broken into smaller pieces allows us to move it around to ensure they get the performance they need. We're also able to ensure that the applications requiring the highest level of performance get it, while those without such requirements can be placed on lower-powered, lower-throughput storage that meets their needs, without consuming horsepower on the fastest storage where they don't need it. Overall, it gives us better utilization of the infrastructure.
The right-sizing has also reduced costs because we can allocate resources more intelligently and spend money on upgrades only where they're needed. Instead of just throwing more hardware at an environment, we can do it very selectively, based on the needs of the environment, needs that we are able to analyze through the reports Cloud Insights provides. If I had to make an educated guess, it has reduced our costs by 20 to 25 percent.