The primary use case of this solution is for virtualization.
The deployment model used was on-premises.
EMC XtremIO's most interesting characteristic? Predictability.
Last week, thanks to Tech Field Day Extra, I attended a presentation from EMC's XtremIO team. Some of my concerns about this array remain, but there is no doubt that this product is maturing very quickly, with enhancements released almost monthly… and it's clear that it has something to say.
A rant about All Flash
These days, contrary to the general (and Gartner?) thinking, I'm developing the idea that considering All Flash Arrays a separate category is complete nonsense (you can also find an interesting post from Chris Evans on this topic). Flash memory is only a medium, and storage should always be categorized by its characteristics, features and functionality. For example, I could build a USB-key-based array at home; it's an AFA, after all… but would you dare save your primary data on it? Would it be fast? (you don't have to answer, of course!)
The fact that a vendor uses flash, disks, RAM or a combination of them to deliver on its promises is only a consequence of design choices, and we have to look at the architecture (both hardware and software) as a whole to understand its real-world positioning. Resiliency, availability, data services, performance, scalability, power consumption and so on are still the characteristics you have to consider when evaluating whether an array is good for one job or another.
Back to XtremIO
In this particular case, if we go back and look deeply into the XtremIO design, we will find that the system is equipped with plenty of RAM, which is heavily leveraged to deliver consistently high performance and the highest predictability. In fact, looking at the charts shown during the presentation (around minute 14 of the video below), you'll find that the system, no matter the workload, delivers constant latency well under the 1ms barrier.
The product, which has finally received updates enabling all the common data services expected of a modern storage array (replication is still missing, though), doesn't shine for power consumption, rack space or other kinds of efficiency (at this time it's also impossible to mix different types of disks, for example). But again, first-class performance and predictability are always the result of a give-and-take.
XtremIO is based on a scale-out architecture with a redundant InfiniBand backend. Different configurations are available, starting from a single brick (a dual-controller system and its tray populated with 12 eMLC drives, out of the 25 available) up to a six-brick configuration for a total of 90TB (usable capacity before deduplication/compression). No one gave me prices… but you know, if you have to ask the price, you can't afford it (and, of course, they are very careful about that, because $/GB really depends on the size of the array and the deduplication ratio you can obtain from your data).
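The $/GB point can be made concrete with a back-of-envelope calculation. This is a minimal sketch; the list price below is a made-up placeholder, not an EMC figure, and the 3:1 reduction ratio is purely illustrative:

```python
# Hypothetical illustration: effective cost per usable GB on a data-reducing
# array. The price and ratio are placeholders, not vendor figures.

def effective_cost_per_gb(list_price_usd, usable_tb, reduction_ratio):
    """Cost per GB of logical (post-reduction) capacity."""
    logical_gb = usable_tb * 1000 * reduction_ratio
    return list_price_usd / logical_gb

# A six-brick cluster offers ~90 TB usable before data reduction. With a
# hypothetical 3:1 reduction ratio, the same hardware holds 270 TB of
# logical data, so the effective $/GB drops to a third of the raw figure.
raw = effective_cost_per_gb(1_000_000, 90, 1.0)
reduced = effective_cost_per_gb(1_000_000, 90, 3.0)
print(f"${raw:.2f}/GB raw vs ${reduced:.2f}/GB at 3:1")
```

This is exactly why vendors are careful with quoted $/GB: the denominator depends on how reducible your data turns out to be.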
Why it is important
XtremIO is strongly focused on performance and on how it’s delivered. From this point of view it clearly targets traditional enterprise tier 1 applications and it can be considered a good competitor in that space. It clearly needs some improvements here and there but EMC is showing all its power with the impressive quantity of enhancements that are continuously added.
You know what? From my point of view, the worst part of EMC XtremIO story is that there isn’t a simple and transparent migration path from the VMAX/VNX, which would be of great help for the end user (and EMC salesforce)…
First published here.
The primary use case of this solution is for virtualization.
The deployment model used was on-premises.
I like the deduplication and auto-tiering features.
The product could be improved by reducing the pricing and having better organization in their technical support team.
This solution is stable. I would give it five out of six; not 100%, only 90%.
The scalability of this solution is good.
The technical support is good but they are not well organized.
The initial setup was straightforward.
There are costs in addition to the standard licensing fees.
I am a partner for Dell EMC.
My complaints are not about the features of this solution, it's more about the pricing and the support.
I would rate this solution an eight out of ten.
Originally posted at vcdx133.com.
Today I completed the initial performance testing of my EMC XtremIO PoC system. I wanted to take a shot at it myself before the EMC SMEs come in to tune and optimise the configuration. In a single word, “Wow!” This is the first time I have witnessed 400,000 IOPS in any kind of enterprise lab. I look forward to seeing what additional tricks the experts can make my “X-bricks” perform.
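As a rough sanity check, a headline IOPS number like this implies a bandwidth that depends entirely on the block size used. The block sizes below are examples for illustration, not my actual Iometer access profile:

```python
# Back-of-envelope: the throughput implied by a fixed-block-size IOPS figure.
# Block sizes here are illustrative, not the actual test configuration.

def iops_to_throughput_mbps(iops, block_size_kb):
    """Throughput in MB/s implied by an IOPS number at a given block size."""
    return iops * block_size_kb / 1024

for bs in (4, 8, 64):
    mbps = iops_to_throughput_mbps(400_000, bs)
    print(f"400,000 IOPS at {bs} KB = {mbps:,.1f} MB/s")
```

The same 400,000 IOPS means very different things at 4 KB versus 64 KB blocks, which is why the access specification matters as much as the headline number.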
Business Requirement for XtremIO
I can imagine people reading this and asking, “Why? It is so expensive!”. Well, the organisation I work for uses monolithic storage (EMC Symmetrix VMAX) which has been sized for capacity, and after 2 years of use we are feeling the impact of performance degradation as we consume the total capacity of the solution. My business requirement is to create a small but powerful “High Performance” cluster of compute, network and storage that will provide low-latency, high-I/O resources for my business-critical applications that are currently suffering. This XtremIO PoC is an attempt to meet that business requirement; I am also seriously considering hyper-converged infrastructure and server-side flash-cache acceleration.
Iometer Test Configuration
Iometer Test Results
We mostly use it for backup because we cannot measure anything on it, so we are afraid to use it for surveillance systems. We had been planning to use it mostly for surveillance systems.
The most important thing for a system engineer is to check whether there is latency in the IOPS for any run. You cannot measure the number of IOPS or whether or not the system is overloaded; you cannot measure anything like this on EMC. Other solutions, especially HP, improved our failover performance with our databases and servers. Most of our servers are HP, and we now use EMC only for backup.
One thing that should be improved is the reporting and monitoring tools. It should offer real-time monitoring of storage, IOPS, latency, etc.
The technical support was good, especially for upgrading, which we did four times. It was straightforward and easy.
The initial setup was simple for us.
The initial purchase price was good but when you need to upgrade, it's a different story.
Overall, including the format, support, and technical performance, I would rate it as seven out of ten.
The performance is very good. As for the use case: we decided to go all-flash a couple of years ago, and XtremIO was one of the vendors our EMC partner recommended, so there was no discussion of what kind of storage we would buy.
The most valuable features are:
The stability of the product needs improvement.
I am not too impressed with XtremIO because we had a major failure.
It is very expensive to scale. For instance, you have to buy an additional system to expand rather than adding a single disk. It is scalable, but extremely expensive to do so.
The technical support was very good, but since the merger with Dell, it is very bad. It went from very good to very bad.
It was very easy and straightforward to set up. We plugged it in, connected, and then started. Simple. Our technical team had no problem with it.
When choosing a vendor, we value well-known companies. In addition, we value Blue technology.
It is expensive if you need to increase scalability.
EMC (@EMCflash) today announced a new, enhanced, renamed and rebranded flash solid-state device (SSD) storage portfolio around the theme of XtremIO. XtremIO was the startup company with a new all-flash SSD storage array that EMC announced it was buying in May 2012.
Since that announcement, the name Project “X” has been used when referring to the product now known as XtremIO (i.e., the new all-flash storage array).
Synopsis of announcement
- Product rollout and selective availability of the new all flash SSD array XtremIO
- Rename server-side PCIe SSD flash cards from VFCache to XtremSF
- New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
- Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)
What was previously announced:
- Buying the company XtremIO
- Productizing the new all flash array as part of Project “X”
- That it would formally announce the new product in 2013 (which is now)
- VFCache and later enhancements during 2012
Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement their different customers and prospects various environment needs. Now let us sit back and watch EMC, NetApp and others step up their flash dance moves to see who will out flash the others in the eXtreme flash games, including software defined storage, software defined data centers, software defined flash, and software defined cache.
Some updates:
http://storageioblog.com/emc-announces-xtremio-general-availability-part/
Flash array with deduplication and compression.
I would like to see improvements for database workloads. During our testing of database workloads, we found it slow to process I/O requests. This may be due to the compression/deduplication feature of the product, which is handled by the same controllers.
The product designer should provide a recommendation for which type of workload deduplication/compression will be effective.
This is good to have for VDI, but not for heavy database workloads, even though it is a flash array.
We have been using this solution for the past two years.
There were some stability issues initially, but there aren’t many issues now.
I did not encounter any issues with scalability.
Technical support is good.
We had multiple tier storage without deduplication/compression. We switched due to cost and performance.
The solution is easy to implement and administer.
This solution is good for VDI environments, but not recommended for database workloads.
Several months ago I walked through some of the issues we faced when XtremIO hit the floor and found it not to be exactly what the marketing collateral might present. While the product was very much a 1.0 (in spite of its Gen2 name), EMC Support gave a full-court-press response to the issues, and our account team delivered on additional product. Now it’s 100% production and we live/die by its field performance. So how’s it doing?
For an organized rundown, I’ll hit the high points of Justin Warren’s Storage Field Day 5 (SFD5) review and append a few of my own notes.
Scale-Out vs. Scale-Up: The Impact of Sharing
True to Justin’s review, XtremIO practically scales up. Anything else is disruptive. EMC Support does their best to make up for this situation by readily offering swing hardware, but it’s still an impact. Storage vMotion works for us, but I’m sure spare hardware isn’t the panacea for everyone, especially those with physical servers.
The impact of sharing is key as well. XtremIO sharing everything can mean more than just the good stuff. In April, ours “shared” a panic over the InfiniBand connection when EMC replaced a storage controller to address one bad FC port. I believe they've fixed that issue (or widely publicized to their staff how not to swap an SC in a way that leads to panic, until code can protect), but it was production-down for us. Thankfully we were only one foot in, so our key systems kept going on other storage. We seem to have found the InfiniBand exceptions, so I do not think this is a cause for widespread worry. Just stating the facts.
I could elaborate further, but choosing XtremIO means being prepared to swing your data for disruptive activities. If you expect the need to expand, plan for that: rack space, power, connections, etc., for the swing hardware, or whatever other method you choose.
Compression: Needed & Coming
This was the deficit that led to us needing four times the XtremIO capacity to meet our Pure POC’s abilities. At the time, we thought Pure achieved a “deduplication” ratio of 4.5 to 1 and were sorely disappointed when XtremIO didn’t. Then we realized it was data “reduction”, which incorporated compression and deduplication. Pure’s dedupe is likely still more efficient since it uses variable block sizes (like EMC Avamar), but variable takes time and post-processing.
When compression comes in the XIOS 3.0 release later this year, I hope to see our data reduction ratio converge with what we saw on Pure. As it stands, we fluctuate around 1.4 to 1 deduplication (which feels like the wrong word–dedupe seems to imply a minimum of 2:1). I choose to ignore the “Overall Efficiency” ratio at the top, as it is a combination of dedupe and thin provisioning savings, the latter of which nearly everyone has. We’ve thin provisioned for nearly 6 years with our outgoing 3PAR, so that wasn’t a selling point; it was an assumption. As a last note on this, Pure Storage asks the pertinent question: “The new release will come with an upgrade to compression for current customers. Can I enable it non-disruptively, or do I have to migrate all my data off and start over?”
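Since these reduction ratios multiply, a quick sketch shows why compression matters so much to the overall number. The 2:1 compression figure here is hypothetical, purely to illustrate the arithmetic:

```python
# Sketch of how dedupe, compression, and thin provisioning combine into a
# single "overall efficiency" figure: the ratios multiply. Values are
# illustrative, not measured.

def overall_efficiency(dedupe, compression=1.0, thin_provisioning=1.0):
    """Combined data reduction ratio (X:1) from independent ratios."""
    return dedupe * compression * thin_provisioning

print(overall_efficiency(1.4))        # dedupe alone: 1.4
print(overall_efficiency(1.4, 2.0))   # with hypothetical 2:1 compression: 2.8
```

A 1.4:1 dedupe ratio on its own looks weak, but paired with even modest compression it moves into the territory of the combined "data reduction" figures other vendors quote.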
Snapshots & Replication
I won’t say much on these items, because we haven’t historically used the first, and other factors have hindered the second. Given that even our first EMC CX300 array had snapshots, the feature’s arrival in 2.4 was more an announcement that XtremIO had fully shown up to the starting line of the SAN race (it was competing extremely well in other areas, but it was hard to understand the lag here). We may actually use this feature with Veeam’s Backup & Replication product, as it offers the ability to do array-level snapshots and transfer them to a backup proxy for offloaded processing.
As for replication, my colleagues and I see it as a feature with huge differentiating potential, at least where deduplication ratios are high. VDI or more clone-based deployments with 5:1, 7:1, or even higher ratios could benefit greatly if only unique data blocks were shipped to partnering array(s). For now, VPLEX is that answer (sans the dedupe).
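A back-of-envelope sketch of why dedupe-aware replication would matter; the change volume below is hypothetical, and the ratios are the VDI-style figures just mentioned:

```python
# Illustrative only: bandwidth saved if replication shipped only unique
# blocks. The 100 GB change volume is a made-up placeholder.

def wire_gb(changed_gb, dedupe_ratio):
    """GB actually sent if only unique blocks cross the wire."""
    return changed_gb / dedupe_ratio

for ratio in (5, 7):
    print(f"100 GB changed at {ratio}:1 dedupe -> {wire_gb(100, ratio):.1f} GB on the wire")
```

At high clone-heavy ratios, the replication link would carry a small fraction of the logical change rate, which is the differentiating potential in question.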
XtremIO > Alternatives? It Depends
As I mentioned in the past, we started this flash journey with a Pure Storage POC. It wasn’t without challenges, or I probably wouldn’t be writing about XtremIO now, but those issues weren’t necessarily as objectively bad or unique to them as I felt at the time. Everyone has caveats and weaknesses. In our case, Pure’s issues with handling large block I/O gave us pause and cause to listen to EMC’s XtremIO claims.
Those claims fleshed out in some ways, but not in others (at least not without more hardware). Both products can make the I/O meters scream with numbers unlikely to be found in daily production, though it’s nice to see the potential. The rubber meets the road when your data is on their box and you see what it does as a result. No assessment tool can tell you that; only field experience can.
If unwavering low-latency metrics are the goal, XtremIO wins the prize. It doesn't compromise or slow down for anything; the data flies in and out regardless of block size or volume. Is no-compromise ideal? It depends.
Deduplication is the magic sauce that turned us on to Pure, and XtremIO marketing said, “we can do that, too!” Without compromising speed, though, and without post-processing, the result isn’t the same. That’s the point of the compression mentioned earlier.
Then there are the availability arguments. Pure doesn’t have any backup batteries (but it stores in-flight data to NVRAM, so that’s not a deal-breaker), which EMC can point out. EMC uses 23+2 RAID/parity, which Pure is quick to highlight as a weakness. Everyone wants to be able to fail four drives and keep flying, right?
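For context, the usable fraction of a k-data + m-parity stripe is simply k/(k+m). A quick sketch; the comparison geometries are illustrative, not any vendor's actual layout:

```python
# Capacity efficiency of a k-data + m-parity layout, to put the wide 23+2
# stripe in context. Comparison rows are illustrative geometries only.

def parity_efficiency(data_drives, parity_drives):
    """Fraction of raw capacity left usable after parity overhead."""
    return data_drives / (data_drives + parity_drives)

print(f"23+2 stripe: {parity_efficiency(23, 2):.0%} usable")
print(f" 6+2 stripe: {parity_efficiency(6, 2):.0%} usable")
print(f"  mirroring: {parity_efficiency(1, 1):.0%} usable")
```

The wide stripe is what buys the capacity efficiency; the trade-off Pure highlights is that any given stripe still only tolerates two drive failures.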
From what I’ve heard, Hitachi will take an entirely different angle and argue that magic is unnecessary. Just use their 1.6TB and 3.2TB flash drives and swim in an ocean of space. Personally, I think that’s short-sighted, but they’re welcome to that opinion.
Last Thoughts
In production, day to day, notwithstanding our noted glitches, XtremIO delivers. Furthermore, it has the heft of EMC behind it, and the vibe I get is that they don’t seem to be content with second place. Philosophies on sub-components may disagree between vendors, but nothing trips XtremIO’s performance. Is there potential for improvement, efficiencies (esp. data reduction), and even hybrid considerations (why not a little optional post-processing?)? Absolutely. And I’ve met the XtremIO engineers from Israel who aim to do just that. Time will tell.
This article originally appeared here.
Nice real use case, thank you!
Rene,
Great review! Did you alter any of the host settings, i.e., round robin and queue depth? This will help bring down the latency times dramatically.