The most valuable feature is probably data migration: we can bring in new back-end systems and swap data out, and our end users, and even our other system administrators, have no idea that we've moved storage around.
It's full-featured: we can compress volumes, create thin-provisioned volumes, and change them on the fly.
Nobody knows that we've migrated that data around. We can ship it off to our DR site. This is all under the hood of Spectrum Virtualize.
We don't have to worry about what type of block storage is underneath at any given time. It's all handled at that layer.
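To give a sense of how little ceremony that involves day to day, here is a minimal sketch of the kind of CLI calls behind it, scripted over SSH with paramiko. The host, pool, and volume names are invented, and the flags are written from memory, so check them against the documentation for your code level:

```python
# Rough illustration only: driving the Spectrum Virtualize CLI over SSH
# with paramiko. Host, pool, and volume names here are hypothetical;
# verify the mkvdisk/migratevdisk flags against your code level's docs.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("svc-cluster.example.com", username="admin",
               key_filename="/home/admin/.ssh/id_rsa")

def run(cmd):
    """Run one CLI command on the cluster and return its output."""
    _, stdout, stderr = client.exec_command(cmd)
    err = stderr.read().decode()
    if err:
        raise RuntimeError(err)
    return stdout.read().decode()

# Thin-provisioned, compressed volume: -rsize/-autoexpand make it thin,
# -compressed turns compression on at creation time.
print(run("svctask mkvdisk -name app_vol01 -mdiskgrp Pool0 "
          "-size 100 -unit gb -rsize 2% -autoexpand -compressed"))

# Transparent migration to another pool; hosts keep doing I/O while
# the extents move underneath them.
print(run("svctask migratevdisk -vdisk app_vol01 -mdiskgrp Pool1"))

client.close()
```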
One feature, if I remember correctly, is encryption; I think it is either coming out or already there. That is the key management piece. Right now, we're doing encryption on the back-end flash systems on the V7000s. It's simple: just a USB key in the controllers.
Integration there would be an option we would like, but I understand that's not how it's going to be implemented.
NPIV is also coming. I'm not exactly sure what benefit it will bring, but initially it sounded like it was going to be kind of cool. Even though we can migrate data without our end users really knowing it, they do see a path failure, and NPIV would take care of that for us.
The feature that's kind of missing is one that gets us to the point where we can help the application owners see where their data is, understand it, and potentially help us break it out.
We've used Easy Tier functions in the pools, so we're trying to help step that storage down. If application owners could somehow get visibility into that data, it would help us further break it down, or better tier and separate out their data.
I know that VMware has that function, where they take multiple tiers themselves and place subsets of data, as opposed to whole blocks.
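For what it's worth, Easy Tier itself is just a per-pool setting on the cluster. Continuing the earlier sketch (same hypothetical client session and run() helper, flags again from memory):

```python
# Easy Tier is configured per pool; "auto" lets the cluster decide
# based on the drive classes present in the pool.
print(run("svctask chmdiskgrp -easytier auto Pool0"))

# lsmdiskgrp reports the easy_tier and easy_tier_status fields,
# so you can confirm what the pool is actually doing.
print(run("svcinfo lsmdiskgrp Pool0"))
```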
For the most part, stability has been really good. Like anything else, the more you use it, the more often you're going to run into a bug, and we've certainly done that.
Scalability seems OK, though it's kind of hard for us to judge: we're on a four-year refresh cycle, so we stay pretty current with the hardware itself. We are adding things like compression accelerators and, now with the new SV1 nodes, more cache and more adapters. Of course, we're always willing to split workloads out onto separate stacks. So it scales pretty well.
Support has been pretty responsive. Software is coming out all the time with PTFs to fix issues, so staying on top of that is important.
We've run into issues with the 7.3 code. Specifically, we hit the cache performance bug. We were right on top of that and had to do another upgrade to clear it.
We've also had a bug with the Fibre Channel cards. Again, by the time we were implementing, it was a known issue, so it was not anything that we really had to wait on, but it was also not something we were aware of at the time of implementation.
Get a demo of it. If you haven't seen the product and haven't had somebody step you through all of the features and everything you can do with it, I think it would be really tough to see how adding another set of controllers in front of your storage benefits you.
You might be thinking that it's just another hop and another delay in getting to your data. I think you will see the value of this solution once you:
- See it plugged in
- Understand what's going to come with being able to move, compress, and virtualize your data in one interface
- Are able to manage all the data there, and not worry about the back-end stuff
- Are able to carve up volumes very quickly to the end users
Integration with Spectrum Control and a kind of self-service provisioning is good. It is something we're looking at turning over and then deciding on, because of all the data migration that can happen on the back end. We can look at a request and then decide. Perhaps we didn't have enough information when we started, and then we can move it on the back end. We don't necessarily have to worry about getting into the weeds from day one.