The integration itself is not difficult, but because there is no GUI we have to rely on PowerShell commands. That makes it harder to monitor the solution and to see the components' statuses. There are also strict compatibility requirements, so you will need to carefully check whether your components are compatible.
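For readers facing the same monitoring gap, a minimal sketch of the kind of PowerShell status checks this implies, using the standard Storage cmdlets (exact output depends on your Windows Server build):

    # Overall pool and virtual disk health for the S2D cluster
    Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus
    Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus

    # Per-drive status, useful for spotting a failed or missing disk
    Get-PhysicalDisk | Sort-Object HealthStatus |
        Format-Table FriendlyName, SerialNumber, MediaType, HealthStatus, OperationalStatus

    # Repair/rebalance jobs still running after a drive or node outage
    Get-StorageJob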
Founder, Professional Services Director, Lead Architect at Falcon Consulting
Real User
Top 20
Aug 6, 2020
Most of the VMware customers I have been engaging with say they have experienced engagement problems. VMware tries to sell the whole enterprise solution, at enterprise cost, down to the customer, while usually and intentionally avoiding any talk about the related Windows license. That cost is not very eye-catching, but it is still sizable. We have implemented several Storage Spaces Direct projects for our customers, and currently we're working hard to replace some of the specialized storage, and hopefully even the traditional converged storage.

Documentation management could be improved. The reason this product is not being widely accepted by the public is that it doesn't include an intuitive, streamlined management experience, which makes it feel far from complete. Storage Spaces Direct is powerful and the performance is amazing, but if you need to deliver a high price-performance ratio and visible performance, you need expertise in tuning. Many people will not be able to take advantage of that because it's not covered in the training; in that case, you need a system integrator.

A complete, web-based management UI would be a beneficial feature. They should also provide a full API; currently, there is only a partial API. Our customers often need to work with other web-based solutions, so they require a full API. I am writing my own API for this reason, and our customers are okay with that, but a full API would be a very helpful feature that would lead to much more customer satisfaction.

Microsoft should also provide support for other channels. The Microsoft OS supports other channels, but when it becomes the S2D storage solution, it's very easy to compromise on that. Technically, you can switch channels and replace your storage, but you will need to store away all of the Fibre Channel equipment, the hardware, and the cables, which can be very expensive.
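As an illustration of the kind of wrapper this reviewer describes writing, one way to expose S2D health to web-based tools is to collect it over PowerShell remoting and emit JSON. This is a hypothetical sketch: the node name and output file are placeholders, and only the Storage cmdlets themselves are standard.

    # Hypothetical collection script: gather S2D health from a cluster node and emit JSON
    # for consumption by an external web dashboard or API layer.
    $report = Invoke-Command -ComputerName 's2d-node01' -ScriptBlock {
        [pscustomobject]@{
            Pools        = Get-StoragePool -IsPrimordial $false |
                               Select-Object FriendlyName, HealthStatus
            VirtualDisks = Get-VirtualDisk |
                               Select-Object FriendlyName, HealthStatus, FootprintOnPool
            Jobs         = Get-StorageJob |
                               Select-Object Name, JobState, PercentComplete
        }
    }
    $report | ConvertTo-Json -Depth 4 | Out-File 's2d-health.json'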
Enterprise Architect/Manager at a transportation company with 10,001+ employees
Real User
Mar 19, 2020
There is room for improvement in their network capabilities. Right now, if I'm going with on-prem Storage Spaces Direct, I need to have a ToR switch. They have a requirement that, if I'm going for more nodes, they need to carry the storage traffic, meaning FCoE-style converged traffic, and that can only go through a ToR switch. All the other OEMs that have hyper-converged offerings do not require a ToR switch; I can just plug into a core or distribution switch. The main reason people are moving away from the existing, traditional, converged solution is to get rid of that ToR switch.
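For context on that switch requirement: S2D's node-to-node storage traffic is SMB, typically over RDMA, which is what drives the switch capabilities mentioned above. From the host side you can at least confirm what the adapters and SMB report, with standard cmdlets (a quick sketch):

    # Check whether the physical NICs expose RDMA and whether it is enabled
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # Confirm that SMB (which carries the S2D east-west traffic) sees RDMA-capable interfaces
    Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable, LinkSpeed

    # Live SMB connections between cluster nodes and whether both ends are RDMA-capable
    Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable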
Actually, the technology is heading in the right direction, so it is a little difficult to criticize the product itself for what we use it for. I think the online documentation needs a lot of work, and so do the sizing tools. Considering what this product is for, those tools are a very important part of it. I know some of the features that will be coming out because I have the opportunity to check in with some of the people on the product team. For example, it will support stretch clusters, which means I can have two nodes in one location and two nodes in another location belonging to the same cluster. One more feature beyond that is the ability to connect with the cloud. This adds some processing abilities that are amazing. This is also something that many of the other competitors cannot say they have. They just don't have the same capabilities in terms of reach with the services that Microsoft currently has in the cloud. Microsoft's reach in the cloud is really very extensive.
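The stretch-cluster capability mentioned here builds on the site fault-domain awareness that Windows Server failover clustering already exposes. A minimal sketch of how two sites and their nodes would be declared, assuming placeholder site and node names:

    # Illustrative only: declare two sites as fault domains and assign nodes to them,
    # so the cluster can place data copies in a site-aware way.
    New-ClusterFaultDomain -Name 'Site-A' -Type Site
    New-ClusterFaultDomain -Name 'Site-B' -Type Site

    Set-ClusterFaultDomain -Name 'node1' -Parent 'Site-A'
    Set-ClusterFaultDomain -Name 'node2' -Parent 'Site-A'
    Set-ClusterFaultDomain -Name 'node3' -Parent 'Site-B'
    Set-ClusterFaultDomain -Name 'node4' -Parent 'Site-B'

    # Review the resulting fault-domain tree
    Get-ClusterFaultDomain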
System Engineer at a tech services company with 11-50 employees
Real User
Jan 26, 2020
With this solution, you have to invest much more in hardware than is required with some other solutions. For example, costly SSD drives are needed for caching. The overall cost of this solution needs to be reduced. More optimization could also be done in terms of mirroring: in order to have 20 terabytes of usable storage, you have to buy about 60 terabytes of raw capacity.
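For reference, that roughly 3:1 raw-to-usable ratio is what three-way mirroring implies, since every block is stored three times; the resiliency type is chosen per volume when it is created. A quick sketch, with placeholder pool and volume names:

    # Three-way mirror: ~33% efficiency, so ~60 TB raw yields ~20 TB usable
    New-Volume -StoragePoolFriendlyName 'S2D on Cluster1' -FriendlyName 'Mirror-Vol' `
        -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 2TB

    # Dual parity (erasure coding): better capacity efficiency at the cost of write
    # performance; requires at least four nodes
    New-Volume -StoragePoolFriendlyName 'S2D on Cluster1' -FriendlyName 'Parity-Vol' `
        -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 2TB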
Infrastructure Lead at a government with 1,001-5,000 employees
Real User
Jan 3, 2019
RDMA ease of deployment needs improvement. The performance benefits only came with all of the new technology, and not only was RDMA a big requirement, it was also the hardest thing to be fully confident in. We used RoCEv2 and switched to iWARP a year later.

To expand on our challenges: we have the hosts connected via multiple 40Gb connections to Cisco 9396 switches with vPC. We had a lot of experience with Fibre Channel in the past, but using Ethernet for storage was a change we didn't have much practical experience with. Microsoft strongly recommends using RDMA, and we decided to use RoCEv2. After it was all set up, the performance counters confirmed that RDMA was being used, but that doesn't mean DCB was working 100% correctly. There aren't many good articles published on end-to-end PFC and DCB configuration because it depends on your NICs, host OS, switches, etc. Piecing together learnings from Mellanox, Microsoft, and Cisco documents, we believed we had it configured correctly, but we never had 100% confidence, and it is very difficult to find a partner willing to put a stamp of certification on it confirming it is 100% correct (Cisco vPC, DCB/DCBX, LLDP, PFC, SMB Multichannel, RDMA, etc. are all in the mix).

When we experienced some unexplained issues that pointed to intermittent network problems, which some errors suggested could be related to RDMA, it was difficult to troubleshoot. When we switched to LACP with vPC (which doesn't work with RDMA/RoCE, so we disabled RDMA), the issues didn't reoccur, but the performance became much less consistent. When we switched to iWARP, the performance was reliably good again and the issues didn't reoccur. It's difficult to be sure where the issue was; my gut says it was the PFC configuration on the Cisco switches, and with iWARP, DCB doesn't need to be 100% right because it uses TCP rather than PFC to tolerate certain network conditions. I think we would have seen similar issues with vSAN, but I can't be certain; it may be more tolerant of the edge cases.
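For anyone reconstructing the host-side part of the RoCE setup described here, the DCB/PFC configuration is commonly done with the NetQos cmdlets below. This is a sketch of the widely documented pattern, not this reviewer's exact configuration; adapter names are placeholders, and the switch side still has to match.

    # Tag SMB Direct (port 445) traffic with priority 3 and reserve bandwidth for it
    New-NetQosPolicy 'SMB' -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    New-NetQosTrafficClass 'SMB' -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

    # Enable lossless behavior (PFC) only for the SMB priority
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Apply DCB to the RDMA-capable adapters and do not accept settings pushed by the switch
    Enable-NetAdapterQos -Name 'SLOT 3 Port 1','SLOT 3 Port 2'
    Set-NetQosDcbxSetting -Willing $false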
Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding, together with the latest hardware innovation like RDMA networking and NVMe drives, deliver unrivaled efficiency and performance.
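As a sense of how compact the deployment described above can be, the canonical flow is a cluster, one enable command, and a volume. This is a sketch with placeholder server and volume names; networking and validation details are omitted.

    # Validate and build the cluster from the candidate nodes
    Test-Cluster -Node 'node1','node2','node3','node4' -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'
    New-Cluster -Name 'S2D-Cluster' -Node 'node1','node2','node3','node4' -NoStorage

    # Claim the local drives in every node and build the pool, cache, and tiers automatically
    Enable-ClusterStorageSpacesDirect

    # Carve out a mirrored, cluster-shared volume from the pool
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -Size 1TB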
It is difficult to get a hardware compatibility certification for the solution. In comparison, the process is easy for VMware and vSAN.
The management tool within this solution could be improved. We would also like to be able to access services like Azure when using this solution.