One of the main features of Spectrum Virtualize is that it virtualizes the storage away from the servers. We have a very large infrastructure, and a major advantage shows up when storage arrays age out and you have to replace all of them. Last summer, we replaced four petabytes of aged storage arrays. They were old and past end-of-life, but we did it seamlessly, without affecting any of the server applications. There was no system admin time; nothing was required at all. It was really quite good for our client. That was perfect for them.
Our team operates four eight-node IBM Spectrum Virtualize (SVC) clusters (32 nodes total): two clusters at one site and two at a sister site, with replication between the two sites.
These four clusters have a number of storage arrays behind them, yielding a total storage capacity across the four clusters of approximately 6 petabytes in Platinum, Gold, Silver, and Bronze data classes. The Storage Cloud uses IBM's Easy Tier feature to create storage classes with different performance levels, and uses thin provisioning to lower our clients' requested 8 petabytes of capacity to a much less costly 5 petabytes of consumed capacity. The remaining ~1 petabyte is elasticity built into the cloud.
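To make that capacity math concrete, here is a minimal Python sketch of the thin-provisioning arithmetic, using the approximate figures above. It is plain arithmetic, not IBM tooling:

```python
# Thin-provisioning arithmetic from the review's approximate figures.
# Plain math only; this is not an IBM product API.

requested_pb = 8.0   # capacity the clients asked for (provisioned/virtual)
physical_pb = 6.0    # capacity the arrays actually provide
consumed_pb = 5.0    # capacity the clients have actually written

overcommit_ratio = requested_pb / physical_pb   # how far we overcommit
elasticity_pb = physical_pb - consumed_pb       # headroom built into the cloud

print(f"Overcommit ratio: {overcommit_ratio:.2f}x")    # ~1.33x
print(f"Elasticity headroom: {elasticity_pb:.0f} PB")  # ~1 PB
```

The point of the sketch is that thin provisioning lets you sell more logical capacity than you physically buy, as long as consumed capacity plus headroom stays under the physical total.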
Our client does not have any classical maintenance windows where systems or storage can be taken down for upgrades, repairs, expansion, or replacement. Servers require 24x7x365 data access (actually, four days a year are planned power outages for power testing, but no IT changes are permitted during them).
So when I say our storage services were provided to our client continuously, non-stop, for 12 months, it means that all customer servers had 100% uninterrupted, online SAN access to their data 24x7x365 (minus the power-downs). During that time, our team provided on-demand capacity provisioning of about 700TB of new client growth and expansion, updated all cluster software, decommissioned ~4 petabytes of aging storage arrays, installed ~5 petabytes of new replacement/expansion arrays, and repaired a couple of failed components. It really shows the power, utility, versatility, and availability of our Cloud Storage design.
The second feature is that it is software-defined. Every year, we select a new release and we get new features, which gives us time to test them out. It's very amenable to the way we deliver services to the clients that use storage.
In addition to that, they've been able to add some really cool functions. It started out with the usual stuff, such as thin provisioning. Then they added features like compression. Now they're adding transparent cloud tiering, so they can put data up in the cloud just by taking it off of the SVC and sending it to the cloud. This is very good for us in putting together a roadmap of functions for our clients: what they can do with their data.
There are things that occur when you get to this size and capacity. We're very large, i.e., petabytes. When you get to that sheer volume of things, it is too big for people to keep track of. If you have 50 volumes, I can watch them; I can assign people to watch them. But when you have 5,000, that is not possible anymore. So you need capabilities within the products that do what that person would have done watching the 50. They have this "cognitive IT" sort of thing going for them.
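As an illustration of what "software watching the volumes" looks like, here is a minimal Python sketch of threshold-based volume watching. The fetch_volumes() function and the Volume fields are hypothetical stand-ins for whatever inventory or monitoring source you actually have; this is not an IBM API:

```python
# A minimal sketch of automating the check a person could do by eye for
# 50 volumes, but not for 5,000. Not an IBM product API.

from dataclasses import dataclass
from typing import List

@dataclass
class Volume:
    name: str
    capacity_gb: float
    used_gb: float

def fetch_volumes() -> List[Volume]:
    # Hypothetical: in practice this would query your monitoring system.
    return [Volume("vol0001", 1024, 990), Volume("vol0002", 2048, 300)]

def volumes_needing_attention(volumes: List[Volume],
                              threshold: float = 0.90) -> List[Volume]:
    # Flag any volume above a fill threshold, however many there are.
    return [v for v in volumes if v.used_gb / v.capacity_gb >= threshold]

for v in volumes_needing_attention(fetch_volumes()):
    print(f"{v.name}: {v.used_gb / v.capacity_gb:.0%} full")
```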
I don't think they've fully realized they can apply that cognitive concept more broadly. The idea is, "Okay, I'm going to use software to watch all of that." They already have some of that with Easy Tier, which automatically moves data around. It is perfect. Now I need that same concept extended into other areas.
The stability is very good. We just finished running non-stop for one year. It was perfect, with no downtime at all. Zero. 100% availability.
Scalability is perfect, though we're at the top of their scale. Right now, the largest cluster they support is eight nodes; we have eight very high-performance servers running the software. That is good, but we added a second cluster to do more, which has worked quite well. You sort of wonder, "Would it be better to have a 10-node, a 12-node, or a 16-node cluster?"
We do use technical support. They do very good work. Because we're so big, they have a little bit of trouble tracking all the products and making sure we get connected with the right reports, but, in general, they are very good.
We did a fairly extensive industry search about five years ago, when we were initially deploying this cloud. We were not using a different product; it was a new offering, a greenfield deployment. We looked at all the major players in the industry and selected IBM.
When selecting a vendor, first the product has to meet our future functionality needs. The vendor needs to have some stability in the industry and some past performance proof points. Our client is somewhat risk-averse, so they don't want to be first. That's very important.
I was involved with the initial setup. It was very straightforward. Delivery of all the services has been very methodical and well defined.
We looked at Hitachi, EMC, and NetApp. The major boys.
If you're going to implement this, keep in mind that we have a highly skilled technical team; we hired them and put them together for this purpose. So, if you're in a traditional environment without that, hire a service like ours to come in, help you get started, and give you "flying lessons."
Once you're good, once your team is good and your staff is up to speed, they can take over if they want. This usually takes about a year; we offer a one-year set of services. Afterwards, they can take over if they want to, or they can keep the service in place. Once they're there, they'll never look back. It's one of those kinds of things, and it works very well.