We use it as a primary storage for our Horizon View environment.
The product is great. It runs well.
It helped us survive power outages in one of our data centers, then continued to function without a hitch.
I would like a better Hardware Compatibility List (HCL). The HCL should be a little easier to deal with; making hardware compatibility less of an issue would be a good thing.
It scales well. We have plenty of room to grow. It should be a good long term solution for us.
Technical support has been fantastic. We always get answers quickly whenever we call.
We wanted to give more redundant access to the users' desktops than they previously had. Before, we were on a single SAN which was causing us issues if we had either an issue with the SAN or an issue with our environment when the SAN would go down. By using vSAN, it would allow us to spread our data across multiple data centers on our campus and be more fault tolerant.
It was really straightforward.
We had some help from Venture Technologies, who helped us get it going. They didn't really have to do too much. We figured it out.
We have increased our user productivity. However, being in Higher Education, we don't really measure it.
Give it a look. It will save you time and money.
The most valuable vSAN feature for us is that we are able to deploy vSAN clusters to remote locations very easily, at a fraction of the cost. This saves us time and money, and we don't have to worry about stability issues.
Support for iSCSI access would be great, but this may be supported in the latest versions of vSAN.
We have a few physical servers in our environment, and it would be great if these servers could also access the storage in vSAN. With vSAN iSCSI support, we would be able to connect our physical servers to vSAN as well.
We have been using this solution for two years.
In terms of stability, vSAN is very resilient, self-adapting, and self-healing. In the two years that I’ve worked with vSAN, I haven’t experienced any vSAN stability issues.
There haven't been any issues with scalability. Adding additional storage was as simple as inserting a hard drive into a hard drive bay or adding an additional server node to the data center cluster. That was all we had to do, and vSAN auto-configured everything.
We had a VMware vSAN engineer present to set up our very first vSAN cluster. There was nothing to it, but it was great to have an expert on-site for questions and to provide us with training. Other than that, we have never had to log a support request with VMware for vSAN.
We didn’t use a virtual SAN solution previously. We just used traditional, and very expensive, SAN storage arrays. We moved to vSAN because our budget wasn’t getting any bigger, but our storage requirements were increasing.
The setup was straightforward. It literally took a few mouse clicks to set up vSAN.
You get better value for your money with a vSAN solution than with a traditional SAN, along with a lower TCO.
We looked briefly at alternatives, but nothing stood out like vSAN. Nutanix was another solution, but surprisingly, it would have cost us more.
Get a vSAN specialist to come out and spec your vSAN cluster according to your requirements. Have them configure it and test that it is performing properly.
It's lowered our storage costs while still maintaining high availability, and installation was easy.
Expand the hardware compatibility list – it's pretty short. The diagnostic and monitoring tools could also be improved; that functionality is still very new.
We have been using it since it came out in March 2015.
So far so good.
Unsure – all I know is what I read, if it does what it says it does I'm very impressed.
Very good – quality support.
We have three hosts in a cluster, and it was surprisingly easy.
Try it out – that’s the best way to know whether it's right for your organization.
We just started implementation, so it's hard to give our perspective as we're still doing our evaluation. We purchased the product, and we have ten-fold service on it.
Storage is the most important element of our infrastructure. We're looking for a stable, high-performing solution, and we think this is it.
I'd like to see support for iSCSI. Right now it's all internal protocols, and they promise it in the next version. They also need to support more types of hardware – the list is too narrow.
It's stable, but it's really picky about hardware. We knew that going in, but the extent of it was a surprise; it's not as hardware-agnostic as we thought it would be. They have a list, and if you deviate even a bit, they won't support the environment. We had an issue where we deviated slightly, so going forward we will have to follow their hardware compatibility list.
Very scalable – it's one of the reasons we bought it. They are on v2.0, and we feel like it's mature now.
Support is generally good, but a little slow sometimes. You need to stick to their compatibility list if you want their full support.
We were using EMC and we knew we needed something new. Cost is important to look at, because we're nonprofit, as well as the integration with the other VMware products, and the stability of the product too.
Setup is very straightforward.
It's a good solution – the trend is going toward converged infrastructure. It's all policy-based: you can set a general policy and then trust vSAN to do everything else.
We use vSAN primarily for our VCF deployment. We run our production workloads on it, mostly Microsoft SQL databases and various WebSphere and web-based front-end applications.
It performs pretty well for the most part. The older versions had some issues, specifically regarding upgrade paths and the robustness of the product, but in the last two or three versions they've really addressed those issues and brought it up to speed and made it a real enterprise solution.
I would like to see more comprehensive lifecycle management. The current path and process for upgrading or updating the firmware, as well as the storage controller software that interacts with that firmware, is fairly manual and not very well documented. A little more time and effort spent on the documentation of lifecycle management for vSAN would be really great.
Currently, it's very stable. If you're still running one of the previous versions that are active out there, upgrade to the new version.
Scalability is slightly limited in that you're pinned by the physical disks in your hosts, but provided that your solution doesn't require you to have specific disk technology, you can get the size you need and expand it out as much as you need to.
I give technical support an A-plus, from my experience. It was perfect, it was awesome. They helped us recover from a very major outage and we would have been down for much longer had they not been involved.
We were on old hardware and we needed to move to a new solution.
It completely removes the need for a storage network and for a storage administrator and all of that infrastructure and the costs that are involved with them. That, right there, is a huge return.
It's great for DevTest and, as long as you're not going to be consuming data at huge rates, it's great for Prod too.
I would rate vSAN as six-and-a-half or seven out of ten, but only because of the major problems we experienced with them a few months ago that led to some big outages. From what I understand, the current version alleviates those issues. If we're evaluating the current version, I would give it an eight.
It would be a ten if there were more robust lifecycle management and a better-documented implementation within vSphere.
We had several servers we used in our VMware cluster, as well as a storage device. The implementation of vSAN reduced the rack space, since we no longer required several slots in the cabinet to rack a storage device. vSAN also made it very easy for us to scale out. Power consumption was also reduced within our datacentre.
I like the scalability and the fact that it reduces your total cost for storage over several years.
The only thing I can think of at this time is to improve the performance monitoring and performance visibility within the GUI. They have already made several improvements in vSAN 6.2, but there's always room for improvement when it comes to monitoring performance.
We had no stability issues.
We had no scalability issues.
VMware technical support provides a great service.
We switched to move towards a software-defined datacentre.
It is very easy to configure and set up. vSAN is already part of vSphere ESXi. You simply need to apply a license and do some minor configuration to get it working.
The first one to two years after purchasing vSAN will be expensive. Thereafter, the longer you run it, the more you will save.
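As a rough illustration of that cost curve, here is a break-even sketch. All of the dollar figures and the simple linear cost model are hypothetical assumptions, not numbers from any deployment:

```python
# Hypothetical break-even sketch: vSAN has a higher initial outlay
# (licensing plus server-local disks) but cheaper year-on-year growth
# than a traditional SAN. All figures are illustrative, not quotes.
def cumulative_cost(initial, yearly, years):
    """Total spend after the given number of years."""
    return initial + yearly * years

san_initial, san_yearly = 60_000, 25_000    # array purchase + support/expansion
vsan_initial, vsan_yearly = 90_000, 10_000  # licences + nodes, cheaper growth

for year in range(1, 6):
    san = cumulative_cost(san_initial, san_yearly, year)
    vsan = cumulative_cost(vsan_initial, vsan_yearly, year)
    print(f"year {year}: SAN ${san:,} vs vSAN ${vsan:,}")
# With these numbers, vSAN costs more up front, breaks even at year 2,
# and is cheaper from year 3 onward.
```

With these made-up inputs, the two curves cross at year two, matching the reviewer's "expensive at first, cheaper over time" experience.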
We looked into several other products, such as Pure Storage and Dell solutions.
Keep it simple, and don’t try and over-complicate things. Make sure to follow VMware best practices when it comes to implementing your vSAN solution. Read those whitepapers and make sure you understand how you want to implement it in your environment.
The reduction in the cost of storage: In my most recent deployment, we reduced cost from around $20,000 per TB (CapEx) to less than $1,000 per TB (CapEx). This does not take into account deduplication/compression, or the ability to add disks and scale vertically without incurring additional licensing costs, both of which would drive the cost down further.
Traditional SANs require large up-front costs, and with "forklift" upgrades, you end up spending a very large amount of money initially and then expect to recoup the costs over the lifetime of the array. This is not how vSAN – or any other HCI (hyperconverged infrastructure) product – works. The idea is to have a small initial investment and, with horizontal/vertical scaling, you can grow into the needs of your environment. This can be accomplished several ways, by either adding more disks to each host (vertical scaling) or by adding more nodes to the cluster (horizontal scaling). This allows for much greater flexibility with your storage. Before HCI, you were required to guess how much storage you were going to need, and were stuck with what you guessed at.
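The difference between forklift sizing and pay-as-you-grow can be made concrete. The sketch below uses the per-TB figures from the deployment described above ($20,000/TB traditional vs $1,000/TB with vSAN); the five-year capacity forecast is a hypothetical assumption:

```python
# Sketch of up-front vs. pay-as-you-grow storage spend. Per-TB prices
# are from the deployment described above; the capacity forecast is
# a made-up example.
SAN_PER_TB = 20_000   # traditional array, $/TB
VSAN_PER_TB = 1_000   # vSAN (server disks), $/TB

needed_tb = [20, 30, 45, 60, 80]  # hypothetical capacity needed each year

# Traditional SAN: size for the 5-year peak on day one ("forklift" sizing).
san_capex = max(needed_tb) * SAN_PER_TB

# HCI: add disks/nodes each year, paying only for the increment.
vsan_capex = needed_tb[0] * VSAN_PER_TB
for prev, cur in zip(needed_tb, needed_tb[1:]):
    vsan_capex += (cur - prev) * VSAN_PER_TB

print(f"SAN up-front:      ${san_capex:,}")   # $1,600,000
print(f"vSAN over 5 years: ${vsan_capex:,}")  # $80,000
```

The incremental total also spreads the spend over five years, whereas the traditional model pays for the year-five guess on day one.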
Upgrades are also much simpler. Because the system is software-defined, you simply upgrade the software rather than the entire hardware stack. If you want to upgrade the hardware, you would then simply add nodes in, and remove older nodes. It is also possible to create a new cluster and do a swing migration; however, this is similar to older-style upgrades. The point is that there are a lot of options available with HCI systems.
Management of the environments is overall simpler, allowing for during-hours patching with no downtime and little risk; also allowing us to stay more current with patching, reducing the overall risk of the environments.
The worst part of vSAN, as with most VMware products, is that you need to use the vSphere Web Client to interact with it. The vSphere Web Client is slow and clunky, making interacting with the system difficult and often painful. I have been told that the new version of the web client will be significantly better, but I don't have personal experience with it. Beyond being difficult to work with, it can cause outage scenarios to take significantly longer to troubleshoot, because you waste a lot of time waiting for the client to load information, or just to load in general. It is a huge drawback for an otherwise very good product.
I have used it in various deployment scenarios since 2015, or about 1.5 years.
I have observed no stability issues when the product is deployed as instructed. It can and will have stability issues if you do not follow the hardware compatibility list (HCL) or the vSAN Deployment and Sizing Guide.
The product scales easily – up more easily than down, due to the need to remove disks and migrate data off the nodes you wish to remove from the cluster.
Actual support engineers are excellent; however, opening cases is often difficult/frustrating.
In my current project, the customer previously used EMC VMAX arrays. As detailed elsewhere, the CapEx savings were incredible.
During my current project, the initial setup was very complex, though that was by our own choosing and needlessly so. In the past, setups were often very straightforward, though you need to verify your design properly, as mentioned.
VMware licensing for vSAN is per socket, like everything else. The platform is very flexible, so be sure to look at all your options.
I was not part of the evaluation process but cost was a major factor, as well as high availability.
Discuss the deployment with VMware sales; I've met several of them and they are generally smart people looking to help get you the best deployment possible.
Performance is the most valuable feature because you are moving the storage closer to the CPU. It’s also cheap. We also evaluated an all-flash array, but even a low-end flash is much more expensive. This is much cheaper.
Concrete benefits would be manageability; we don’t have a storage guy because there is less stuff to deal with.
The savings are not the issue; what matters is that I can scale my system. I'm building the node for 200 users, but all I will have to do is order another host configured exactly the same, and they are over-provisioned in terms of memory.
I have been using VMware since it was in beta.
I don’t know, but my gut feeling is that it distributes across the hosts, which should be very stable, and it’s all done at the hypervisor level. I don’t think we’ll have any issues.
I think it scales in a linear fashion. We outgrew our low-end SAN and hit a wall: we didn't have a storage guy, so when we hit 180 users it was thrashing the SAN. With vSAN – especially if you use the sizing tool – that kind of issue shouldn't arise; the tool says we should be more than fine. We're a small shop, so we don't have any doubt that it will scale to our size.
They are the best in class – I hold everyone else to their standard. They solve the problem and work the problem. I’m kind of spoiled because I also get federal support so I get especially good service. I have always found their support to be stellar.
I had an issue a few years ago where my hosts were dropping and I couldn't connect to them, so for three days I worked with VMware. I went through four shifts of support staff, and they stayed with me. It was a 72-hour outage, and when I got back around to my original guy, he figured it out. They are amazing. They don't point fingers – with IBM, they would hand it off from one guy to another and never tell you.
We replaced our infrastructure and did a proper POC. It’s cheap enough that we can still use the hosts and hook a SAN in, and everyone will get an SSD at their desks, so most of the cost is infrastructure. I loved it when I heard about it – virtualized storage and a distributed RAID. Makes total sense.
Their licensing gets a bit confusing, it’s hard to get the hang of that.