Our primary use case is DR and backup.
The performance has been pretty good. We have installed three of the TSM large Blueprints in the past couple of years and are continuing to scale them. It is all disk replication, and we are working on eliminating our tape. We have three tape libraries across three data centers, and we continue to reduce our reliance on tape as we move more workloads into compressed, deduplicated container pools.
From a Spectrum Protect perspective, AIX has been the bulk of our TSM footprint for a long time, and those are the large Blueprints we purchased: the 824s with 256GB of memory, F900 flash for the database, and a large, petabyte-scale V5030 back-end on each of those. We have gone into the new Blueprints with a Linux OS. That is a change in direction from our management team, which has spun us in a slightly different direction from the standard approach we have used, which is fine. Some of our new Blueprints have been built on Linux, and they are more of a medium scale.
If we back out to a 10,000-foot view of the data centers, we have 4.5 petabytes of SVC Spectrum Virtualize. We have been using it for about 14 years and have been very successful with it. We use Easy Tier with a good, healthy mix of flash, in the neighborhood of 400 terabytes. Spinners, the 10K and 15K drives, are all but gone from our data center at this point. As far as server OS, we are an AIX pSeries shop for our big iron, and VMware is our x86 virtualization and hypervisor choice, running across UCS and Dell hardware.
It is used in two data centers in northwest Arkansas that are treated as a single data center. We own our own dark fiber between the two. We run a stretched-cluster topology across a couple of different clusters in that environment and support everything with VDisk mirroring between the two sites.