Even with the fast SSD drives and the processing on the controllers, there was still a lag on the FC ports. The initial X-Brick came with only two FC ports per controller, and I was used to having multiple ports on the VMAX to spread traffic over several VSANs. For more detail: I had 4 DH2i PowerPath servers hitting it, along with 4 VMware clusters of 8 hosts each. On an X1 Brick we only had two controllers, each with 2 ports, so a total of 4 FC ports.

Compare that to the VMAX 20K, where I had 8 ports on VSAN 2, 6 ports on VSAN 100, and 8 ports on VSAN 50, so I was able to spread the traffic around between workloads. I had 2 directors on one VMAX and 3 directors on the other. With only 4 ports on the XtremIO, the most I could do was send traffic on 2 ports to two different VSANs, one on each controller. So my comment was: get additional ports so the DH2i servers don't hog all the IOPS. I recommended getting the second X-Brick and the matrix switch; with 8 FC connections you can start spreading the traffic.

The company had me routing the data through a fabric switch (an MDS 9500) separate from the main traffic, since this was a test; most of production was on 4 other MDS 9500 switches. Monitoring of the switch did not show a bottleneck going to the servers, only on the four 8 Gb FC links going to the XtremIO, which connect to different blades on the 9500. I don't think they have touched it since I left, nor the other 8 SAN units.
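To put rough numbers on the fan-in problem, here is a minimal sketch. The host counts come from the setup above (4 DH2i servers plus 4 VMware clusters of 8 hosts); the two-HBA-ports-per-host figure and the even zoning spread are my assumptions, so treat the ratios as illustrative only.

```python
# Rough fan-in arithmetic behind the "add more FC ports" recommendation.
# Host counts are from the post; 2 HBA ports per host and an even zoning
# spread across array ports are assumptions, not measured values.

def fan_in(host_ports: int, array_ports: int) -> float:
    """Average number of host initiator ports landing on each array target port."""
    return host_ports / array_ports

dh2i_hosts = 4
esxi_hosts = 4 * 8          # 4 VMware clusters x 8 hosts each
hba_ports_per_host = 2      # assumption: dual-port HBAs

total_initiators = (dh2i_hosts + esxi_hosts) * hba_ports_per_host

for label, array_ports in [("Single X1 Brick (4 FC ports)", 4),
                           ("Two Bricks + matrix switch (8 FC ports)", 8)]:
    print(f"{label}: ~{fan_in(total_initiators, array_ports):.0f} "
          f"initiators per target port")
```

Under those assumptions a single Brick ends up with roughly 18 initiators contending per target port, and doubling to 8 ports halves that, which is the whole argument for the second Brick.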