What is our primary use case?
We use the solution as HA storage for our hypervisors (HVs).
We are an SMB that can't be bothered to pay tens of thousands of dollars just to get proper HA storage for a two-node failover cluster. Therefore, StarWind's vSAN was financially attractive from the start.
The reviews you can find all over the web encouraged us to research this solution more deeply (e.g., just check all the great posts from Kooler on Stack Overflow), leading to us actually implementing it.
We were coming from an S2D implementation that had already given us a ridiculous number of headaches (bugs, performance stalls, "we know what's best for you" automatisms), in addition to being slow and annoying to administer/debug, while also having the most annoying documentation ever created by mankind. Such solutions rush out some code, ship it, and never fix anything (but break it every few weeks with patches).
The initial tests were easy to set up (no begging sales folks for a POC, since a free version is available), and the documentation is something you actually like to read, because it was typed by a tech (and it isn't behind a pay/registration wall either).
As close to perfect as the documentation is, do read the blog posts on the two-node HA setup as well. Some minute details are only found in those.
There are no showstoppers, and not many things in general, just a few hints here and there.
The install itself is easy as pie. The config file is properly documented (you can do most things via the GUI; just a few things are set in the main config file).
Do help yourself to the iSCSI PowerShell commands (Windows defaults from MS) when implementing. That is way more attractive than clicking through the GUI.
(New-IscsiTargetPortal, Connect-IscsiTarget etc.)
Some things must be done via the GUI though, since iSCSI was implemented way back, when exposing everything to scripting wasn't yet widespread among developers. This is a Microsoft topic, not a StarWind one; they would have had to write their very own iSCSI implementation otherwise.
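As a minimal sketch of the PowerShell route mentioned above, using Microsoft's stock iSCSI cmdlets (the portal address and IQN below are placeholders, not values from our setup):

```powershell
# Register the StarWind node's iSCSI portal (address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.1"

# List the targets the portal exposes, to find the IQN to connect to
Get-IscsiTarget

# Connect persistently; the NodeAddress (IQN) is a placeholder.
# Persistence plus multipath is what you typically want for HA storage.
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-target1" `
    -IsPersistent $true -IsMultipathEnabled $true
```

Once connected, the disk shows up like any local one and can be brought online and added to the failover cluster as usual.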
For testing, you should use a proper tool like https://docs.microsoft.com/en-...
since "Windows copy from within the VM running on the test setup" can be flaky.
Not as in "the results aren't valid real-world performance if you check with a Windows copy within a VM," but rather "non-scientific," since you can't extract much data from that process aside from size/time.
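The link above is truncated, so the exact tool isn't named here; Microsoft's DiskSpd is a common choice for this kind of storage benchmarking, and an invocation could look like the following (test file path and parameters are illustrative, not from our setup):

```powershell
# 60-second run against a 10 GiB test file: 8 KiB blocks, random I/O,
# 30% writes, 4 threads, 8 outstanding I/Os per thread,
# software caching disabled (-Sh), latency statistics enabled (-L).
.\diskspd.exe -c10G -d60 -b8K -r -w30 -t4 -o8 -Sh -L X:\testfile.dat
```

Unlike a plain file copy, this gives you IOPS, throughput, and latency percentiles per thread, which is the kind of data you actually want when validating an HA storage layer.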
There is a visual bug with a specific part of iSCSI. It's Microsoft's fault, and, as usual, has never been fixed in over 15 years. Just don't panic if you experience it.
https://www.dell.com/support/k...
https://social.technet.microso...
How has it helped my organization?
Price-wise, this is very attractive. The support is great (though we needed little of it due to the good docs), and I would expect you to reach very good performance, just as we did.
The next-best solution from my research back then (one that you would actually want to use, so no S2D or anything like it) would have started at 7x or 8x the price. Since StarWind's solution has served us very well over the last two years already, I would recommend it.
To answer "how it has improved our organization" in a nutshell: it has provided very stable and performant storage-level HA, which allows for live failover of VMs (as long as the compute doesn't die as well).
What is most valuable?
We like that it can be installed and used on the same host as the hypervisor, so we get the maximum performance possible.
It is easy to set up and maintain, resulting in a happy admin and low TCO/good ROI.
It offers good performance and stability and also makes for a happy admin.
It provides HA for our failover cluster storage: anything can go wrong short of the compute itself, and the thing still chugs along like nothing ever happened.
The upgrades are also very easy to implement. It's basically "click, click, click" once everything has been shut down, and you're back up really quickly.
What needs improvement?
Feature-wise we are only waiting for the release of a "planned disaster" feature that would allow us to patch a hypervisor node without having to take the full storage offline.
At the moment (2022-06-09), this is still necessary, since taking a node offline without properly activating maintenance mode on the vSAN would trigger a full sync of the vSAN nodes.
This is fine and a good thing, since it ensures data integrity. However, there is something in the making that would ensure integrity without a full sync after a node goes down, which, as stated above, one could "abuse" to patch (and reboot = take down) the hypervisors during business hours 😁
Other than that, the thing is rock stable and chugging along without issues.
We are an SMB so we "only" have around 50 VMs on our FO cluster, which is a medium load for SSDs.
If you plan to go more to the "max" side of performance use, do proper testing!
For how long have I used the solution?
I've used the solution for two years.
What do I think about the stability of the solution?
It's rock-stable.
So far, we have only had visual issues in one specific place, due to Microsoft being a PITA and never fixing the bugs they introduce (this one being over 15 years old).
What do I think about the scalability of the solution?
It scales very easily.
Depending on your setup, their actual SAN product might be a better fit for you; however, that's for your very specific case to decide.
How are customer service and support?
We really only contacted support for the final checkup after setting everything up ourselves, so that they could verify it was all running as it is supposed to.
That went great, so our experience has been great overall.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We previously used Microsoft S2D. It had bad documentation, offered bad performance, and had bad GUI and bad CLI.
How was the initial setup?
The setup is very easy, and they offer very nice documentation and blog posts.
What about the implementation team?
We did the implementation all in-house, self-taught by their great docs.
What was our ROI?
Since our HVs already run Windows Datacenter, we had S2D included in the cost.
However, the operational costs were high. I'd easily say the time invested over a year with S2D was worth $20-30K, while StarWind's vSAN has taken maybe $500-750 (yes, less than $1K!) over two years. 😎
Even if you add the license cost (which is cheap even with our 3-year premium support package) it is a no-brainer.
What's my experience with pricing, setup cost, and licensing?
There is the "invisible" cost of you reading the docs, however, that comes with every solution. At least with StarWind, you have tech-to-tech documentation that you can actually use.
The pricing is very fair!
Which other solutions did I evaluate?
We looked into VMware and others. They usually had a way higher cost or worse performance or worse GUI/documentation, or all of those things together.
What other advice do I have?
I'd advise others to use the free license to test it. The documentation is also public. This is the way.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Excellent product at a fraction of the cost of a hardware SAN.
I required a solution for shared storage on a two-host cluster, and this software was perfect. It is easy to install, and there have been no issues since. Software upgrades are seamless too. I spoke with support a couple of times for a sanity check and always got instant answers and advice.
Would definitely recommend this software.