it_user614595 - PeerSpot reviewer
ICT Network Administrator at a maritime company with 501-1,000 employees
Real User
There is no need to manage separate storage areas in SAN/NAS environments. Storage management comes built-in.

What is most valuable?

The most important feature for us is the converged infrastructure, which is what this tool is all about. There is no need to manage separate storage areas in SAN/NAS environments; storage management comes built in with vSAN. Storage is managed via policies: define a policy, apply it to the datastore or virtual machine, and the software-defined storage does the rest.
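
To make the policy model concrete, here is a minimal Python sketch of how a policy's rules translate into raw capacity use. The attribute names (hostFailuresToTolerate, stripeWidth, proportionalCapacity, cacheReservation) are vSAN's actual policy rules, but the helper function and the numbers are illustrative assumptions, not VMware's implementation:

    # Hypothetical illustration of how vSAN policy rules drive capacity use.
    def raw_capacity_gb(vmdk_gb: float, policy: dict) -> float:
        """Rough raw-datastore footprint of one thickly reserved object."""
        replicas = policy["hostFailuresToTolerate"] + 1   # FTT=1 -> 2 mirror copies
        reservation = policy["proportionalCapacity"] / 100.0  # 100 = fully thick
        return vmdk_gb * replicas * reservation

    gold_policy = {
        "hostFailuresToTolerate": 1,   # survive one host/disk failure
        "stripeWidth": 2,              # spread each replica over two capacity devices
        "proportionalCapacity": 100,   # reserve the full object size up front
        "cacheReservation": 0,         # no dedicated flash read cache
    }

    print(raw_capacity_gb(100, gold_policy))  # a 100 GB VMDK consumes ~200 GB raw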

Scalability and future upgrades are a piece of cake. If you want more IOPS, then add disk groups and/or nodes on the fly. If you want to upgrade the hardware, then add new servers and retire the old ones. No service breaks at all.
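
As a rough illustration of that linear scaling, the back-of-the-envelope estimator below (with entirely hypothetical per-disk-group numbers) shows why adding disk groups or nodes adds IOPS:

    # Hypothetical estimator: vSAN performance grows roughly linearly with
    # the number of disk groups and nodes contributing storage.
    def estimated_iops(iops_per_disk_group: int, groups_per_node: int, nodes: int) -> int:
        return iops_per_disk_group * groups_per_node * nodes

    print(estimated_iops(15_000, 2, 4))  # 4 nodes x 2 groups -> ~120,000 IOPS
    print(estimated_iops(15_000, 3, 6))  # grow both -> ~270,000 IOPS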

The feature that we have not yet implemented but are looking at is the ability to extend the cluster to our other site in order to handle DR situations.

How has it helped my organization?

Provisioning virtual machines has been simplified, as there is no longer a separate storage layer to provision and manage.

What needs improvement?

The management client, i.e., the Flash-based client, is just not up to the mark. I’m really waiting for the HTML5 client to be fully ready, with all the features implemented in it. This, of course, is not a vSAN issue but a vSphere issue.

Of course, as vSAN is tightly embedded into vSphere, it is also managed by the same tool. vSphere management is done via a browser, and currently the only supported client is the Flash-based one. VMware is rolling out a new HTML5-based client, but that is a slow process. It began as a Fling, and there have been quite a number of releases since then as new features are added. Today it is quite usable, but still not complete.

There is also the C# client, also known as the fat client, which is installed on a management system. Recent versions of vSphere no longer support the C# client, so the browser is the only option with current versions.

So, my criticism is aimed at the current Flash-based client, which is utterly slow, with Flash itself being a deprecated technology. The sooner we can get rid of it, the happier we all will be.

For how long have I used the solution?

I have used this solution for around a year.

What do I think about the stability of the solution?

Stability has not been an issue for us. We have not run into any serious software faults. VMware ESXi is a mature product with very few problems and today, vSAN is also getting there.

What do I think about the scalability of the solution?

The scalability of the product is way beyond our needs.

How are customer service and support?

L1 technical support, which I have mostly been dealing with, has been pretty solid, especially the team in Ireland, who handle it well, both technically and from a customer-service standpoint.

Which solution did I use previously and why did I switch?

We did not have any comparable solution previously. We previously used traditional SAN/NAS environments from which storage areas were provisioned for the VMware clusters.

How was the initial setup?

The initial setup was quite straightforward. All in all, it took three days to complete the entire process; that included installation of the hardware itself, installation of ESXi onto the hardware, creating the data center and the cluster, configuring the networks and multicasting on the surrounding network infrastructure, defining all the disk groups and networks at the cluster, and finally turning the vSAN on. vSAN was the simplest part of the whole process.
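
For anyone repeating this process, a quick verification pass at the end helps. The sketch below assumes Python is available wherever the esxcli commands can be run against each host; the esxcli vsan namespace is real, but output formats vary by version, so the checks are deliberately coarse:

    import subprocess

    # Commands that confirm the host joined the cluster, has a vSAN-tagged
    # VMkernel interface, and has its disks claimed into disk groups.
    CHECKS = [
        ["esxcli", "vsan", "cluster", "get"],
        ["esxcli", "vsan", "network", "list"],
        ["esxcli", "vsan", "storage", "list"],
    ]

    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{' '.join(cmd)}: {status}")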

What's my experience with pricing, setup cost, and licensing?

As VMware products are licensed per number of sockets, you need to think this fully through. However, don’t go cheap on the number of hosts. You’ll thank me later.

Which other solutions did I evaluate?

We got presentations from both SimpliVity and Nutanix. No serious evaluation of other products was made. We did evaluate vSAN a couple of months before the purchase, so as to get familiar with it, and we now have a lab environment to play with.

In hindsight, we could have carried out a more thorough evaluation of vSAN to get a really good feel for it; maybe even run part of our actual production there for an extended period of time to see all the pros and cons.

What other advice do I have?

Study the VMware Hardware Compatibility List (HCL) carefully with your server hardware provider and make sure all the components/firmware versions are on the HCL; either that or buy predefined hardware, a.k.a. vSAN-ready nodes, from a certified vendor. Always make sure that the hardware and firmware levels are on par with the HCL. You may have to upgrade; for example, you may need to upgrade the disk controller firmware when the updates to ESXi are installed. VMware does a pretty good job here and vCenter tells you that there are inconsistencies. However, you should still be prepared for that in advance, before actually installing the updates.

Don’t go with the minimum number of (storage) nodes, as that won’t give you enough room for a hardware failure during a scheduled maintenance break. For a minimum setup in vSAN 6.5, without advanced options such as deduplication and compression, and with Failures to Tolerate (FTT) = 1, the required number of nodes is three. VMware’s best practices recommend a minimum of four nodes. Do yourself a favour and go with at least that; even five would be good.
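
The arithmetic behind that advice, as a small sketch (the spare-host margin is my assumption, matching the recommendation above):

    # With mirroring, tolerating FTT failures needs 2*FTT + 1 hosts
    # (FTT+1 data replicas plus witness components on distinct hosts).
    def recommended_hosts(ftt: int, maintenance_spare: int = 1) -> int:
        return 2 * ftt + 1 + maintenance_spare

    print(recommended_hosts(1))  # FTT=1 -> 3 required, 4 recommended
    print(recommended_hosts(2))  # FTT=2 -> 5 required, 6 recommended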

When designing disk groups, it is always better to have more, smaller disk groups than a few larger ones. This increases availability, decreases the time to heal from disk trouble, and gives you improved performance, as there are more cache devices.
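
A quick sketch of the trade-off, assuming one cache device per disk group (which is how vSAN disk groups are built) and a fixed pool of 16 capacity disks per host; the figures are illustrative:

    # Compare few-large vs. many-small disk-group layouts on one host.
    def layout(capacity_disks: int, disk_groups: int) -> dict:
        return {
            "cache_devices": disk_groups,  # one cache device per disk group
            "disks_per_group": capacity_disks // disk_groups,
            # Share of the host's capacity offline if one cache device fails:
            "failure_impact": f"{100 // disk_groups}%",
        }

    print(layout(16, 2))  # 2 groups: 2 caches, 50% of capacity behind each
    print(layout(16, 4))  # 4 groups: 4 caches, only 25% behind each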

If your budget allows it, then go with the all-flash storage. If not, go with even more disk groups. Our cluster has pretty good performance; although we have spinning disks, the read latency usually stays below 1ms and write latency stays below 2ms.

Plan your network infrastructure carefully, especially that part which handles the vSAN traffic. Go with separate 10G switches and dual interfaces for each server just for vSAN. Handle the virtual machine traffic, migration traffic and management traffic elsewhere. Go with 10G or faster, if you need that. Don’t use 1G for vSAN traffic, unless your environment is really small or is a lab.
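
As a sketch of what the per-host configuration amounts to, the commands below tag a dedicated VMkernel interface for vSAN traffic (vmk2 is a placeholder; the esxcli vsan network commands are real, though options can differ between versions):

    import subprocess

    VSAN_VMK = "vmk2"  # hypothetical interface living on the dedicated 10G switches

    # Tag the interface for vSAN traffic, then list it to verify.
    subprocess.run(["esxcli", "vsan", "network", "ipv4", "add", "-i", VSAN_VMK], check=True)
    subprocess.run(["esxcli", "vsan", "network", "list"], check=True)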

Plan your backup/restore strategy really well and test it thoroughly. Test restores periodically, both of full virtual machines and of single files inside virtual machines. Test restores are always important, but with vSAN even more so, as all your eggs are in the same basket and there are no traditional .vmdk files that you can fiddle with. A separate test/lab vSAN cluster is really useful for testing various things, such as installing updates, restoring backups, etc.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user574359 - PeerSpot reviewer
Engagement Cloud Solution Architect - Ericsson Cloud Services at a comms service provider with 11-50 employees
Real User
I can create my own storage policies and prioritize some apps over others.

What is most valuable?

Storage policies and I/O are the most valuable features. The storage policies are useful in my job: I can create my own policies, prioritize some apps over others, and create high availability for some virtual machines.

How has it helped my organization?

It increases the performance of the virtual machines and reduces the TCO for storage deployment.

What needs improvement?

Hardware compatibility needs to be increased to be able to use more RAID controllers available on the market.

For how long have I used the solution?

I have used it for three years.

What do I think about the stability of the solution?

I have not encountered any stability issues.

What do I think about the scalability of the solution?

I have not encountered any scalability issues.

How are customer service and technical support?

Technical support is 8/10.

Which solution did I use previously and why did I switch?

We previously used another solution. We switched because it reduced the TCO.

What's my experience with pricing, setup cost, and licensing?

Changes have been made in version 6.5.

Which other solutions did I evaluate?

Before choosing this product, we evaluated EMC ScaleIO.

What other advice do I have?

It is easy to design and easy to implement.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are an OEM partner.
PeerSpot user
PeerSpot user
IT Administrator and Sr. VMware Engineer at a retailer with 501-1,000 employees
Real User
It supports two architectures (hybrid and All-Flash), which is useful for all virtualized applications, including business-critical applications.

Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...

The second generation of Virtual SAN arrives with vSphere 6.0 and shares its version number. The jump from version 1.0 (shipped with vSphere 5.5) to 6.0 was really worth it, as this second generation of converged storage integrated into the VMware hypervisor significantly increases performance and scale for enterprise workloads, including business-critical and Tier 1 applications.

Virtual SAN 6.0 delivers a new architecture based entirely on flash that provides high performance and predictable response times below one millisecond for almost all business-critical applications. This version also doubles scalability, to 64 nodes per cluster and up to 200 VMs per host, and brings improvements in snapshot and cloning technology.

Performance characteristics

The hybrid architecture of Virtual SAN 6.0 delivers nearly double the performance of the previous version, and the all-flash architecture delivers four times the performance, measured by the IOPS obtained in clusters running similar workloads with predictable, low latency.

Because the hyperconverged architecture is embedded in the hypervisor, it handles I/O operations efficiently and dramatically minimizes the impact on the CPU, an advantage over products from other companies. The distributed, hypervisor-based architecture reduces bottlenecks, allowing Virtual SAN to move data and execute I/O in a much more streamlined way and at very low latencies, without compromising the platform's compute resources or its VM consolidation ratios. The Virtual SAN datastore is also highly resilient, preventing data loss in the event of a disk, host, network, or rack failure.

The Virtual SAN distributed architecture allows you to scale elastically, without interruption. Capacity and performance can be scaled together by adding a new host to the cluster, or scaled independently simply by adding disks to existing hosts.

New capabilities

The major new capabilities of Virtual SAN 6.0 include:

  • Virtual SAN All-Flash architecture: Virtual SAN 6.0 can be built in an all-flash architecture in which solid-state devices are used intelligently, with high-performance, write-intensive PCI-E devices serving as the write cache and economical flash storage devices providing data persistence at affordable cost

Virtual SAN 6.0 All-Flash achieves predictable performance of up to 100,000 IOPS per host with response times below one millisecond, making it ideal for critical workloads.

Doubling the scalability

This version doubles the limits of the previous version:

  • Scaling up to 64 nodes per cluster
  • Scaling up to 200 VMs per host, both hybrid and All-Flash architectures
  • Size of the virtual disks increased to 62TB

Performance improvements

  • Double the IOPS with the hybrid architecture: Virtual SAN 6.0 hybrid achieves more than 4 million IOPS for read-only workloads and 1.2 million IOPS for mixed workloads on a 32-host cluster
  • Quadruple the IOPS with the all-flash architecture: Virtual SAN 6.0 All-Flash achieves up to 100,000 IOPS per host
  • Virtual SAN File System: the new on-disk format enables more efficient operations and higher performance, and makes scaling much simpler
  • Virtual SAN snapshots and clones: highly efficient snapshots and clones are supported, with up to 32 snapshots per VM and 16,000 snapshots per cluster
  • Rack fault tolerance: Virtual SAN 6.0 Fault Domains provide tolerance of rack and power failures, in addition to disk, network, and host hardware failures
  • Support for high-density Direct-Attached JBOD disk systems: you can manage external disk enclosures and eliminate the costs associated with blade-based architectures
  • Capacity planning: you can run "what if" scenario analyses and generate reports on the usage and capacity of a Virtual SAN datastore when a virtual machine is created with associated storage policies
  • Hardware-based checksum support: limited support for hardware controller-based checksums is provided to detect corruption and data-integrity problems
  • Improved disk serviceability: troubleshooting services are added for the drives, giving customers the ability to identify and fix disks attached directly to hosts:
  • LED fault indicators: magnetic or solid-state devices with permanent faults light an LED so they can be identified quickly and easily
  • Manual LED operation: LEDs can be turned on or off manually to identify a particular device
  • Mark as SSD: devices not automatically recognized as SSDs can be marked as such
  • Mark as local disk: unrecognized flash devices can be marked as local disks so that vSphere hosts recognize them
  • Default storage policies: a default policy is automatically created when Virtual SAN is enabled in a cluster; it is used by VMs that have no storage policy assigned
  • Evacuation of disks and disk groups: data is evacuated from disks or disk groups before they are removed from the system, preventing data loss
  • Virtual SAN Health Service: designed to provide troubleshooting and health reports to vSphere administrators about Virtual SAN 6.0 subsystems and their dependencies, such as:
    • Cluster health
    • Network health
    • Data health
    • Limits health
    • Physical disk health


vSphere Requirements

Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of 3 vSphere hosts with local storage. This number is not arbitrary: it is what the cluster needs to meet the fault-tolerance requirement of surviving at least one host, disk, or network failure.

Storage System Requirements

Disk controllers

Each vSphere host contributing local storage to the Virtual SAN cluster requires a disk controller, which can be a SAS or SATA HBA or a RAID controller. A RAID controller, however, must operate in one of the following modes:

  • Pass-through
  • RAID 0

Pass-through (JBOD or HBA) mode is the preferred setting for Virtual SAN 6.0, as it lets Virtual SAN manage the RAID configuration according to the storage-policy attributes and performance requirements defined for a virtual machine.

Magnetic devices

When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS, or SATA magnetic disk in order to participate in the Virtual SAN cluster.

Flash devices

In Virtual SAN 6.0, flash devices can be used both as a caching layer and for persistent storage. In hybrid architectures, each host must have at least one flash device (SAS, SATA, or PCI-E) in order to participate in the Virtual SAN cluster.

In the all-flash architecture, each vSphere host must have at least one flash device marked for capacity and one for performance (cache) in order to participate in the Virtual SAN cluster.

Networking requirements

Network Interface Cards (NIC)

In hybrid Virtual SAN architectures, each vSphere host must have at least one 1Gb or 10Gb network adapter. VMware's recommendation is 10Gb.

All-flash architectures support only 10Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host; note that NIC teaming serves availability rather than link aggregation (performance).

Virtual Switches

Virtual SAN 6.0 is supported on both the VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.

VMkernel network

You must create a VMkernel port on each host, tagged for Virtual SAN traffic, for intra-cluster communication. This interface is also used for read and write operations whenever a vSphere host in the cluster owns a particular VM whose actual data blocks are housed on a remote host in the cluster.

In that case, the I/O operations must travel across the network between cluster hosts. If this network interface is created on a vDS, you can use the Network I/O Control feature to configure shares or reservations for Virtual SAN traffic.

Conclusion

This new second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and All-Flash), Virtual SAN 6.0 meets the demands of all virtualized applications, including business-critical applications.

Without doubt, Virtual SAN 6.0 is a storage solution that realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is certainly an architecture that will change how we view storage systems from now on.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
The solution is simple to manage but redirect-on-write snapshots is needed

Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.

For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three part series, we will take a look at Virtual SAN (VSAN).

So why VSAN?

Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).

Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.

There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.

Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.

VSAN Components

To create a VSAN you need:

  • From 3 to 32 vSphere 5.5 certified hosts
  • For each host a VSAN certified:
    • I/O controller
    • SSD drive or PCIe card
    • Hard disk drive
  • 4 GB to 8GB USB or SD card for ESXi boot
  • VSAN network – GbE or 10 GbE (preferred) for inter-host traffic
    • Layer 2 Multicast must be enabled on physical switches
  • A per socket license for VSAN (also includes licenses for Virtual Distributed Switch and Storage Policies) and vSphere

The host is configured as follows:

  • The controller should use pass-through mode (i.e. no RAID or caching)
  • Disk Groups are created which include one SSD and from 1 to 7 HDDs
  • Five Disk Groups can be configured per host (maximum of 40 drives)
  • The SSD is used as a read/write flash accelerator
  • The HDDs are used for persistent storage
  • The VSAN shared datastore is accessible to all hosts in the cluster

The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.

VSAN Configuration

The VSAN cluster can grow or shrink non-disruptively, with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PB. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a “hot spare”.

Per-Virtual Machine policies for Availability, Performance and Capacity can be configured as follows:

  • Number of failures to tolerate – how many replicas to keep (0 to 3 – Default 1, equivalent to a distributed RAID 1 mirror)
  • Number of disk stripes per object – the higher the number, the better the performance (1-12 – Default 1); together with the replica count this determines the object's component count, as the sketch after this list shows
  • Object space reservation – how thickly provisioned the disk is (0-100% – Default 0)
  • Flash read cache reservation – flash capacity reserved as read cache for the storage object (0-100% – Default 0)
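
A small sketch of how the first two policies multiply into on-disk components, assuming plain mirroring and ignoring witness components for clarity:

    # Each replica is a full copy; each replica is split into N stripes.
    def data_components(failures_to_tolerate: int, stripes: int) -> int:
        replicas = failures_to_tolerate + 1
        return replicas * stripes

    print(data_components(1, 1))  # defaults -> 2 components
    print(data_components(1, 4))  # wider striping -> 8 components
    print(data_components(2, 2))  # FTT=2, 2 stripes -> 6 components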

The Read/Write process

Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSDs, so there is no real-world difference between local and remote IO – the net result is a simplified architecture (which is always a good thing) that avoids the complexity and IO overhead of trying to keep compute and storage on the same host.

All writes are first written to the SSD and to maintain redundancy also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for Reads and 30% for Writes, so where possible reads are delivered from the SSD cache.
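
That 70/30 split is easy to turn into numbers. A sketch, assuming a hypothetical 400 GB cache SSD per disk group:

    CACHE_SSD_GB = 400  # hypothetical cache device size

    read_cache_gb = CACHE_SSD_GB * 0.70    # serves cache-hit reads
    write_buffer_gb = CACHE_SSD_GB * 0.30  # absorbs writes before de-staging to HDD

    print(f"read cache: {read_cache_gb:.0f} GB, write buffer: {write_buffer_gb:.0f} GB")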

So what improvements would we like to see in the future?

VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?

The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.

Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).

So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) where VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach, mixing VSAN with conventional storage arrays – in 5 years’ time who knows what that mix will be, but one thing is for sure: the percentage of storage delivered from the host is only going up.

Some final thoughts on EVO:RAIL

EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.

The downside … it goes against VMware’s and the industry’s move towards more Software-Defined solutions and all the benefits they provide.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
PeerSpot user
reviewer1089270 - PeerSpot reviewer
Solutions Architect at a computer software company with 501-1,000 employees
Real User
Total hyperconverged facility
Pros and Cons
  • "The valuable feature of the solution is the total hyperconverged facility."
  • "The solution functions as the marketing says, as long as you follow certain rules."

What is most valuable?

The valuable feature of the solution is the total hyperconverged facility, and the fact that it can run either hyperconverged or standalone with storage arrays.

What needs improvement?

From the implementer side, the solution is very comparable to Nutanix. The only difference is that VMware requires more initial nodes.

For how long have I used the solution?

I've been working with VMware for fifteen years.

What do I think about the scalability of the solution?

Regarding the scalability of the solution, you can have 64 nodes in a stretched cluster with VMware; Nutanix goes a little bit above that. The only problem is that, due to licensing considerations, such as when you run Oracle and similar products, you tend to build multiple clusters in order to avoid licensing costs.

The biggest network I have implemented was 16 nodes.

What other advice do I have?

My advice to others looking into implementing VMware vSAN is to stick to the rules. That's where the problem is. If you don't stick to the rules and prerequisites, you end up having a nightmare.

People have a tendency to take hyper-converged solutions for granted. They function as the marketing says, as long as you follow certain rules. If those rules are not followed, you end up with a slower infrastructure than you ever had before.

I would rate this solution an eight out of ten because it lacks flexibility. Those rules I was talking about, the prerequisites you have to follow, are well hidden, and they mean you can't do whatever you want. You don't have total freedom. You have to respect the rules, and respecting the rules is sometimes a burden.

They always recommend that nodes are the same type and have the same disk structure, and if you change the disk structure, you have to change it on all the nodes. Although it's somewhat understandable, it's a burden. It should not happen.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Head Of Products And Solutions Architect at a government with 201-500 employees
Real User
Responsive technical support, easy to use, and stable
Pros and Cons
  • "The solution is simple to use compared to other solutions, such as Hyperflex, VxRail, and Nutanix"
  • "VMware vSAN needs to improve its features because other solutions have more advanced features."

What is our primary use case?

VMware vSAN is a hyper-converged infrastructure and we use it as a software-defined storage solution for our customers.

What is most valuable?

The solution is simple to use compared to other solutions, such as Cisco Hyperflex, Dell VxRail, and Nutanix.

What needs improvement?

VMware vSAN needs to improve its features because other solutions have more advanced features.

For how long have I used the solution?

I have been using VMware vSAN for approximately four years.

What do I think about the stability of the solution?

The solution is quite stable in small and medium environments. However, I do not have experience using the solution in enterprise companies.

What do I think about the scalability of the solution?

We have 31 people in my organization using this solution.

How are customer service and technical support?

Technical support has been good in my experience, but they could improve.

What's my experience with pricing, setup cost, and licensing?

The price of VMware vSAN is expensive and there is an annual license required.

Which other solutions did I evaluate?

I have evaluated many other solutions, such as Cisco Hyperflex, Dell VxRail, and Nutanix.

What other advice do I have?

In my country, Myanmar, both VMware and Cisco are more trusted for networking and virtualization than other related solutions. Other vendors, such as Nutanix and SimpliVity, are still quite unfamiliar in our IT environments at this time.

I rate VMware vSAN an eight out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: partner
PeerSpot user
reviewer1390431 - PeerSpot reviewer
Head Of Network & Technical Support at a financial services firm with 501-1,000 employees
Real User
Good for applications and high availability, and possible to install on-premises yourself
Pros and Cons
  • "The high availability is very good."
  • "The stability needs to be improved."

What is our primary use case?

We primarily use the solution for our applications and its high availability.

What is most valuable?

The high availability is very good.

It's a good place to store our applications.

You can install the solution yourself.

What needs improvement?

The solution isn't as scalable as we would like it to be.

The stability needs to be improved.

The installation process is difficult.

For how long have I used the solution?

I've used the solution for about three months. I installed it around a year or so ago.

What do I think about the stability of the solution?

The stability could be better. We're not really happy with the reliability or performance.

What do I think about the scalability of the solution?

The scalability isn't ideal. A company might have trouble with this aspect of the solution.

We have about 500 users still using the solution.

How was the initial setup?

The installation process isn't easy. It's not straightforward. It's a bit difficult, actually. They could work to make it easier.

We have about 17 people on staff that can handle maintenance tasks.

What about the implementation team?

I handled the installation myself. I did not get help from a consultant or integrator. It was all handled in-house.

What's my experience with pricing, setup cost, and licensing?

We do not currently pay a license fee. I cannot speak to any costs related to having this product in the company.

What other advice do I have?

We're using version seven of the solution. I'm not sure if it is the latest version or not.

I'd rate the solution at a nine out of ten.

I would recommend the solution to other users.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
VDI Administrator at a healthcare company with 1,001-5,000 employees
Real User
Easy to predict IOPS needs and we can design for low latency using all-flash
Pros and Cons
  • "it's easy to scale, it's easy to predict IOP needs, and you can design for low latency using all-flash... Also, for setting up new clusters for VDI quickly, it's nice. You don't have to wait on an order for a storage vendor to ship you a system and help you configure it, you do it all yourself. And the sizing guides are pretty straightforward."
  • "I would like to see better performance graphs, maybe something that you can export outside to a different console, and maybe a little bit longer time period. The 18-hour maximum, or 24-hour maximum, is kind of short. Also, the hardware compatibility limitations are a little frustrating sometimes, but as everybody's starting to adopt vSAN more, you get more options for hardware."

What is our primary use case?

We use it for all our virtual desktop storage.

How has it helped my organization?

It's definitely cheaper to buy it piece by piece, instead of an entire shelf at a time.

What is most valuable?

  • It's easy to scale.
  • It's easy to predict IOPS needs.
  • You can design for low latency using all-flash.
  • The whole hyperconverged notion is pretty neat.

Also, for setting up new clusters for VDI quickly, it's nice. You don't have to wait on an order for a storage vendor to ship you a system and help you configure it, you do it all yourself. It's kind of convenient that way. And the sizing guides are pretty straightforward.
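
As an example of the kind of sizing arithmetic those guides walk you through, here is a sketch with hypothetical per-desktop numbers; the write-amplification term reflects that each write lands on FTT+1 replicas:

    # Estimate backend IOPS for a VDI cluster (illustrative figures only).
    def backend_iops(desktops: int, iops_per_desktop: int, write_ratio: float, ftt: int = 1) -> float:
        frontend = desktops * iops_per_desktop
        reads = frontend * (1 - write_ratio)
        writes = frontend * write_ratio * (ftt + 1)  # mirrored writes
        return reads + writes

    print(backend_iops(500, 20, 0.7))  # 500 desktops at 20 IOPS, 70% writes -> 17,000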

What needs improvement?

I would like to see better performance graphs, maybe something that you can export outside to a different console, and maybe a little bit longer time period. The 18-hour maximum, or 24-hour maximum, is kind of short.

Also, the hardware compatibility limitations are a little frustrating sometimes, but as everybody's starting to adopt vSAN more, you get more options for hardware.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

It's stable. We haven't had any major issues.

What do I think about the scalability of the solution?

Scalability is easy. You just buy a node and go.

How are customer service and technical support?

The vSAN technical support guys are great.

Which solution did I use previously and why did I switch?

We chose it because of cost considerations. We already had an enterprise agreement with VMware, so vSAN licensing was included.

How was the initial setup?

There was a small learning curve, but it's pretty straightforward once you understand the basics of how everything works.

Which other solutions did I evaluate?

We did evaluate other vendors initially but this was our second hyperconverged solution. We went with it because of the cost.

What other advice do I have?

Do your homework. Make sure you know what kind of IOPS and latency requirements you need to meet. Picking hardware is not hard anymore. Everybody has an HCL. vSAN has a great list. Just pick what you want and go, it's not that hard.

I rate it at eight out of 10 because nothing is perfect. I'm hard to please. I'm not saying there are growing pains, but vSAN was still new at the time. They didn't have dedupe and compression yet. The performance was pretty good. Most of it was hybrid in the beginning, but now with all-flash, it's speedy, when it needs to be. It's a young product and nobody gets a 10 out of the gate.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user