One of the main features of Spectrum Virtualize is that it virtualizes the servers away from the storage. We have a very large infrastructure. A major advantage is when you get aged storage arrays and have to replace all of them. Last summer, we replaced four petabytes of aged storage arrays. They were old and past end-of-life, but we replaced them seamlessly, without affecting any of the server applications. There was no system admin time required; nothing at all. It was really quite good for our client. That was perfect for them.
Our team operates four 8-node IBM Spectrum Virtualize (SVC) clusters (32 nodes total), two clusters at one site and two at a sister site, with replication between the two sites.
These four clusters have a number of storage arrays behind them, yielding a total storage capacity across the four clusters of approximately 6 petabytes in Platinum, Gold, Silver, and Bronze data classes. The Storage Cloud uses IBM's Easy Tier feature to create storage classes with different performance levels, and uses thin provisioning to reduce our clients' requested 8 petabytes of capacity to a much less costly 5 petabytes of consumed capacity. The remaining ~1 petabyte is elasticity built into the Cloud.
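As a rough sketch of the arithmetic behind that, here is a minimal illustration using the approximate figures above; the variable names and derived ratios are only illustrative, not output from any IBM tool:

```python
# Rough sketch of the thin-provisioning arithmetic described above.
# All figures are approximate and in petabytes; the names are illustrative only.

requested_pb = 8.0   # capacity the clients asked for (provisioned/virtual)
consumed_pb = 5.0    # capacity actually consumed, thanks to thin provisioning
physical_pb = 6.0    # usable capacity across the four clusters

overcommit_ratio = requested_pb / physical_pb   # how far provisioning exceeds physical
elasticity_pb = physical_pb - consumed_pb       # headroom built into the cloud

print(f"Overcommit ratio: {overcommit_ratio:.2f}:1")   # ~1.33:1
print(f"Elasticity headroom: {elasticity_pb:.1f} PB")  # ~1 PB
```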
Our client does not have any classical maintenance windows where systems or storage can be taken down for upgrades, repairs, expansion, or replacement. Servers require 24x7x365 data access (actually, four days a year are planned power outages for power testing, but no IT changes are permitted during them).
So when I say our storage services to our client were provided continuously, non-stop, for 12 months, it means that all customer servers had 100% uninterrupted, online SAN access to their data 24x7x365 (minus the planned power-downs). During that time, our team provided on-demand capacity provisioning of about 700 TB of new client growth and expansion, updated all cluster software, decommissioned ~4 petabytes of aging storage arrays, installed ~5 petabytes of new replacement/expansion arrays, and repaired a couple of failed components. It really shows the power, utility, versatility, and availability of our Storage Cloud design.
The second feature is that it is software-defined. Every year, we select a new release and we get new features. This gives us time to test them out. It's just very amenable to the way we deliver storage services to our clients.
In addition to that, they've been able to add some really cool functions. It started out with the usual stuff, such as thin provisioning. Then they added features like compression. Now they're adding transparent cloud tiering, so they can put data up in the cloud just by taking it off of the SVC and sending it to the cloud. This is very good for us in putting together a roadmap of functions for our clients, showing what they can do with their data.
There are things that occur when you get to this size and capacity. We're very large, i.e., petabytes. At that sheer volume of things, it is too big for people to keep track of. It's okay if you have 50 volumes; I can watch them, or I can assign people to watch them. But when you have 5,000, that's not possible anymore. So you need capabilities within the products that do what that person would have done watching the 50. They have this “cognitive IT” sort of thing going for them.
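To make the idea concrete, here is a minimal sketch of software doing what a person watching 50 volumes would do, at a scale of thousands. The volume attributes, thresholds, and sample data are hypothetical, not taken from any SVC interface:

```python
# Minimal sketch: software watching thousands of volumes the way an admin
# would eyeball 50. Data source, thresholds, and names are hypothetical.

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    capacity_gb: float
    used_gb: float
    latency_ms: float

def needs_attention(v: Volume, used_pct_limit=85.0, latency_limit_ms=10.0):
    """Flag a volume the way a person scanning a dashboard would."""
    used_pct = 100.0 * v.used_gb / v.capacity_gb
    return used_pct >= used_pct_limit or v.latency_ms >= latency_limit_ms

def watch(volumes):
    # Works the same whether the list holds 50 volumes or 5,000.
    return [v.name for v in volumes if needs_attention(v)]

if __name__ == "__main__":
    sample = [Volume("vol0001", 1024, 900, 4.2), Volume("vol0002", 2048, 512, 12.5)]
    print(watch(sample))  # ['vol0001', 'vol0002']
```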
I don't think they've realized they can apply that cognitive concept more broadly. It is like, "Okay, I'm going to use software to watch all that." They already have some of that with Easy Tier, which automatically moves data around. It is perfect. Now I need that same concept extended into other areas.