Presales engineering, Data center solution architect at SYSTEC TECHNOLOGY INC.
Reseller
It is easy to deploy and maintain
Pros and Cons
  • "vSAN has just one datastore, so customers do not need to think about where to put their VMs, or how to design the physical disk RAID, LUN sizes, and LUN mappings, as they must with NetApp, EMC, HDS, or other storage systems."
  • "vSAN helps customers save on storage system costs as well as labor costs."
  • "vSAN is easy to deploy and maintain, so some customers can service it themselves."
  • "vSAN's inline deduplication needs work. When inline dedupe is enabled, performance is lower than with it turned off."
  • "A virtual machine's disk cannot exceed the capacity of a single node. For a VDI user, a single node's storage space may not be large enough to hold a file server or an Exchange server."

What is our primary use case?

We use vSAN as the server virtualization solution for Dell installations across our customer base; it is our primary solution.

How has it helped my organization?

vSAN helps customers save on storage system costs as well as labor costs. For a systems integrator (SI) like us, vSAN saves technical service time because it is easy to deploy and maintain.

With a VMware vSphere plus vSAN HCI system, it is easy to train customers to operate the environment, whether or not they have prior VMware operations knowledge. Most customers save technical service time with vSAN, and because it is easy to deploy and maintain, some customers can service it themselves.

What is most valuable?

Simple management with only one datastore. vSAN has just one datastore, so customers do not need to think about where to put their VMs, or how to design the physical disk RAID, LUN sizes, and LUN mappings, as they must with NetApp, EMC, HDS, or other storage systems.

What needs improvement?

  • Online dedupe
  • VM disk size limitations

vSAN's inline deduplication needs work. When inline dedupe is enabled, performance is lower than with it turned off.

A virtual machine's disk cannot exceed the capacity of a single node. For a VDI user, a single node's storage space may not be large enough to hold a file server or an Exchange server.

Buyer's Guide
VMware vSAN
December 2024
Learn what your peers think about VMware vSAN. Get advice and tips from experienced pros sharing their opinions. Updated: December 2024.
831,265 professionals have used our research since 2012.

For how long have I used the solution?

One to three years.
Disclosure: My company has a business relationship with this vendor other than being a customer: My company is an SI.
PeerSpot user
Works at a tech services company with 10,001+ employees
Real User
Since the storage space is local to the hosts, it reduces the overall response time and improves the performance
Pros and Cons
  • "It is simple to manage, very easy to implement and troubleshoot in case of any failures."
  • "Since the storage space is local to the hosts, it reduces the overall response time and improves the performance."
  • "Some intelligence can be added to the newest version to provide more flexibility between storage tiers."

What is our primary use case?

Virtual desktop infrastructure (VDI) implementation on vSAN with an environment of about 2000 desktops and 1000 servers.

How has it helped my organization?

Dedicated teams to manage storage for the entire VDI infrastructure were no longer required after implementing the vSAN solution. Any seasoned VMware engineer can easily manage the whole vSAN environment without any issues.

It is simple to manage, very easy to implement and troubleshoot in case of any failures.

What is most valuable?

  • Hot add
  • Upgrades
  • Ease of management

Any VMware engineer can easily manage vSAN, troubleshoot issues, and perform an upgrade on the vSAN without any downtime. Since the storage space is local to the hosts, it reduces the overall response time and improves the performance.

What needs improvement?

Storage tiering options could be added, as in other mature storage systems. Intelligence could also be added in the newest version to provide more flexibility between storage tiers, as Nutanix does, to make this a true software-defined storage product.

For how long have I used the solution?

More than five years.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user610440 - PeerSpot reviewer
CEO at a tech services company with 51-200 employees
Consultant
Uses the same servers the hypervisor uses.

What is most valuable?

  • Converged solution for shared storage

When configuring an HA vSphere cluster, you need shared storage. Traditionally, you would need a SAN or NAS to provide this kind of HA. With vSAN, the same servers that run the hypervisor also provide the storage: no SAN or NAS is needed, and much less hardware is required for the same HA solution.

How has it helped my organization?

  • No need for additional storage
  • Hypervisor can provide storage as well
  • Integration in a virtualization stack

What needs improvement?

I would like to see improvement in monitoring and performance statistics. When installing the product, it has limited statistics. The default vCenter statistics are available, but deep IOPS/latency and block sizing is absent. You can connect vRealize Operations to vSAN, giving much more information, but this is not available by default.

For how long have I used the solution?

We have been using this solution for two years.

What do I think about the stability of the solution?

I did not encounter any issues with stability.

What do I think about the scalability of the solution?

I did not encounter any issues with scalability. I suggest starting with a four-node cluster.

How are customer service and technical support?

I would give technical support a rating of 7/10.

Which solution did I use previously and why did I switch?

We use this solution along with another solution, so there was no hard switch.

How was the initial setup?

It is easy for a VMware administrator to install.

What's my experience with pricing, setup cost, and licensing?

We use it in a cloud-provider model based on usage. The end user pricing is not known.

What other advice do I have?

Start with a four-node cluster.

Disclosure: My company has a business relationship with this vendor other than being a customer: Cloud Provider (customer using product in a usage model: vCAN)
PeerSpot user
it_user574359 - PeerSpot reviewer
Engagement Cloud Solution Architect - Ericsson Cloud Services at a comms service provider with 11-50 employees
Real User
I can create my own storage policies and prioritize some apps over others.

What is most valuable?

Storage policies and I/O are the most valuable features. The storage policies are useful in my job to create my own policies and prioritize some apps over others, and create high availability for some virtual machines.

How has it helped my organization?

It increases the performance of the virtual machines and reduces the TCO for storage deployment.

What needs improvement?

Hardware compatibility needs to be increased to be able to use more RAID controllers available on the market.

For how long have I used the solution?

I have used it for three years.

What do I think about the stability of the solution?

I have not encountered any stability issues.

What do I think about the scalability of the solution?

I have not encountered any scalability issues.

How are customer service and technical support?

Technical support is 8/10.

Which solution did I use previously and why did I switch?

We previously used another solution. We switched because it reduced the TCO.

What's my experience with pricing, setup cost, and licensing?

Changes have been made in version 6.5.

Which other solutions did I evaluate?

Before choosing this product, we evaluated EMC ScaleIO.

What other advice do I have?

It is easy to design and easy to implement.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are an OEM partner.
PeerSpot user
it_user315612 - PeerSpot reviewer
Cloud Architect Leader at a aerospace/defense firm with 10,001+ employees
Real User
We can scale as needed since we're not required to buy an entire monolithic solution up front, though I'd like to see software-based disk-level encryption in the next release.

Valuable Features

The ability to scale as you need – we can start with a very small footprint, as opposed to a monolithic storage solution where you buy the entire solution up front. We use everything – Hitachi, NetApp – but we're using vSAN more and more because we can start small and scale as needed. It's essentially cost saving.

Room for Improvement

I would like to see software-based disk-level encryption in the next release. We deal a lot with the Department of Defense and government-regulated arms and munitions work, so we would like to see more. From their roadmap, I see it's coming, but so far it has been an impediment.

Stability Issues

It's not quite there yet. We've had a few outages that were addressed. It's not 100% there yet; give it another six months.

Scalability Issues

Scalability is why we're using it – especially with v6. Any scalability issues we had were addressed.

Customer Service and Technical Support

It was excellent. The response time was great, and as we're a large customer, we had no issues.

Initial Setup

Initial setup was not difficult to do at all.

Implementation Team

We implemented on our own.

Other Solutions Considered

We have played with Nutanix, but it wasn't there yet – VSAN is more attractive because it operates at the kernel level, as opposed to Nutanix.

Picking a vendor also depends on which segment is looking – I run most of the IT stuff and to me peer reviews are very important. Others within our company look to Gartner.

Other Advice

I would say that the main reason it's attractive is that you can grow as you need. The other thing that makes it especially attractive is that, from an I/O perspective, VSAN can perform more efficiently because it operates within the hypervisor. It's VMware specific, so that can be a downside, but for pure VMware shops, VSAN is the best option in my opinion.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
IT Administrator and Sr. VMware Engineer at a retailer with 501-1,000 employees
Real User
It supports two architectures (hybrid and All-Flash), which is useful for all virtualized applications, including business-critical applications.

Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...

The second generation of Virtual SAN ships with vSphere 6.0 and shares its version number. The jump from version 1.0 (vSphere 5.5) to 6.0 is well worth it: this second generation of converged storage, integrated into the VMware hypervisor, significantly increases performance and adds features that support much larger, business-level workloads, including business-critical and Tier 1 applications.



Virtual SAN 6.0 delivers a new all-flash architecture that provides high performance and predictable sub-millisecond response times for almost all business-critical applications. This version also doubles scalability, to 64 nodes per cluster and up to 200 VMs per host, and improves snapshot and cloning technology.

Performance characteristics

The hybrid architecture of Virtual SAN 6.0 provides nearly double the performance of the previous version, and the all-flash architecture of Virtual SAN 6.0 provides four times the performance, measured by the IOPS obtained in comparable clusters with predictable workloads and low latency.

Because the hyperconverged architecture is embedded in the hypervisor, it efficiently optimizes I/O operations and dramatically minimizes CPU impact, in contrast to products from other companies. The hypervisor-based distributed architecture reduces bottlenecks, allowing Virtual SAN to move data and run I/O operations in a much more streamlined way with very low latency, without compromising the platform's compute resources or VM consolidation. The Virtual SAN datastore is also highly resilient, preventing data loss in the event of a physical failure of a disk, host, network, or rack.

The Virtual SAN distributed architecture allows you to scale elastically without interruption. Capacity and performance scale together when a new host is added to a cluster, and can also be scaled independently simply by adding disks to existing hosts.

New capabilities

The major new capabilities of Virtual SAN 6.0 include:

  • Virtual SAN All-Flash architecture: Virtual SAN 6.0 can create an all-flash architecture in which solid-state devices are used intelligently as the write cache. By pairing high-performance, read/write-intensive PCIe devices with economical flash storage devices, data persistence is achieved at affordable cost.

Virtual SAN 6.0 All-Flash achieves predictable performance of up to 100,000 IOPS per host with response times below one millisecond, making it ideal for critical workloads.

Doubling the scalability

This version doubles the limits of the previous version:

  • Scaling up to 64 nodes per cluster
  • Scaling up to 200 VMs per host, in both hybrid and All-Flash architectures
  • Size of the virtual disks increased to 62TB

Performance improvements

  • Double the IOPS with the hybrid architecture: Virtual SAN 6.0 Hybrid achieves more than 4 million IOPS for read-only workloads and 1.2 million IOPS for mixed workloads on a 32-host cluster
  • Quadruple the IOPS with the All-Flash architecture: Virtual SAN 6.0 All-Flash achieves up to 100,000 IOPS per host
  • Virtual SAN File System: the new on-disk format enables more efficient operations, higher performance, and much simpler scaling
  • Virtual SAN snapshots and clones: highly efficient snapshots and clones are supported, with up to 32 snapshots per VM and 16,000 snapshots per cluster
  • Rack fault tolerance: Virtual SAN 6.0 Fault Domains tolerate rack-level and power failures in addition to disk, network, and host hardware failures
  • Support for high-density Direct-Attached JBOD disk systems: you can manage external disk enclosures and eliminate the costs associated with blade-based architectures
  • Capacity planning: you can run "what if" scenario analyses and generate reports on the usage and capacity of a Virtual SAN datastore when a virtual machine is created with associated storage policies
  • Hardware-based checksum support: limited support for hardware-based checksum controllers is provided to detect corruption and data-integrity problems
  • Improved disk serviceability: troubleshooting services are added for the drives so customers can identify and fix disks attached directly to hosts:
  • LED fault indicators: magnetic or solid-state devices with permanent faults light an LED so they can be identified quickly and easily
  • Manual LED operation: LEDs can be turned on or off manually to identify a particular device
  • Mark as SSD: devices not automatically recognized as SSDs can be marked as such
  • Mark as local disk: unrecognized flash drives can be marked as local disks so that vSphere hosts recognize them
  • Default storage policy: automatically created when Virtual SAN is enabled in a cluster, and used by VMs that have no storage policy assigned
  • Disk and disk-group evacuation: data is evacuated from disks or disk groups before they are removed from the system, preventing data loss
  • Virtual SAN Health Services: this service is designed to troubleshoot and generate health reports for vSphere administrators about Virtual SAN 6.0 subsystems and their dependencies, such as:
    • Cluster health
    • Network health
    • Data health
    • Limits health
    • Physical disk health


vSphere requirements

Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of three vSphere hosts with local storage. This number is not arbitrary: it is the minimum needed for the cluster to tolerate the failure of at least one host, disk, or network link.
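The three-host minimum follows directly from vSAN's mirroring math: tolerating F failures requires two data copies plus a witness in the default case, i.e. 2F + 1 hosts. A minimal sketch of this rule (the helper name is my own invention, not a VMware API):

```python
# Minimum hosts for a mirrored vSAN object: tolerating F failures
# needs F + 1 data replicas plus F witness components, i.e. 2F + 1
# hosts in total. (Illustrative helper, not part of any VMware SDK.)
def min_hosts(failures_to_tolerate: int) -> int:
    return 2 * failures_to_tolerate + 1

# The default policy tolerates one failure: two data copies + one witness.
print(min_hosts(1))  # 3
print(min_hosts(2))  # 5
```

This is why a two-node configuration was not possible in this release: there is nowhere to place the witness.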

Storage System Requirements

Disk controllers

Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller, which can be a SAS or SATA HBA or a RAID controller. A RAID controller, however, must operate in one of the following modes:

  • Pass-through
  • RAID 0

Pass-through (JBOD or HBA) mode is preferred in Virtual SAN 6.0, as it lets Virtual SAN manage the disks directly through the storage-policy attributes and performance requirements defined for a virtual machine.

Magnetic devices

When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS, or SATA disk in order to participate in the Virtual SAN cluster.

Flash devices

In Virtual SAN 6.0, flash devices can be used as a caching layer as well as for persistent storage. In hybrid architectures, each host must have at least one flash-based device (SAS, SATA, or PCIe) in order to participate in the Virtual SAN cluster.

In the all-flash architecture, each vSphere host must have at least one flash device marked for capacity and one for performance in order to participate in the Virtual SAN cluster.

Networking requirements

Network Interface Cards (NIC)

In hybrid Virtual SAN architectures, each vSphere host must have at least one 1 Gb or 10 Gb network adapter. VMware's recommendation is 10 Gb.

All-flash architectures support only 10 Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host; NIC teaming is supported for availability, not for link aggregation (performance).

Virtual Switches

Virtual SAN 6.0 is supported by both VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.

VMkernel network

You must create a VMkernel port on each host, tagged for Virtual SAN traffic. This new interface is used for intra-cluster communication as well as for read and write operations whenever a vSphere host in the cluster owns a particular VM but the actual data blocks reside on a remote host in the cluster.

In this case, I/O operations must travel over the network between cluster hosts. If this interface is created on a vDS, you can use the Network I/O Control feature to configure shares or reservations for Virtual SAN traffic.

Conclusion

This new second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and all-flash), Virtual SAN 6.0 meets the demands of all virtualized applications, including business-critical applications.

Without doubt, Virtual SAN 6.0 is a storage solution that realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is an architecture that will change how storage systems are viewed from now on.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
The solution is simple to manage, but redirect-on-write snapshots are needed

Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.

For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three part series, we will take a look at Virtual SAN (VSAN).

So why VSAN?

Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).

Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.

There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.

Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.

VSAN Components

To create a VSAN you need:

  • From 3 to 32 vSphere 5.5 certified hosts
  • For each host a VSAN certified:
    • I/O controller
    • SSD drive or PCIe card
    • Hard disk drive
  • 4 GB to 8 GB USB or SD card for ESXi boot
  • VSAN network – GbE or 10 GbE (preferred) for inter-host traffic
    • Layer 2 Multicast must be enabled on physical switches
  • A per socket license for VSAN (also includes licenses for Virtual Distributed Switch and Storage Policies) and vSphere

The host is configured as follows:

  • The controller should use pass-through mode (i.e. no RAID or caching)
  • Disk Groups are created which include one SSD and from 1 to 7 HDDs
  • Five Disk Groups can be configured per host (maximum of 40 drives)
  • The SSD is used as a read/write flash accelerator
  • The HDDs are used for persistent storage
  • The VSAN shared datastore is accessible to all hosts in the cluster
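The disk-group limits above can be sanity-checked with a few lines of Python (an illustrative sketch with a made-up function name, not VMware tooling):

```python
# Validate a host's disk-group layout against the VSAN 5.5 limits
# described in the text: each group pairs exactly one SSD with 1-7
# HDDs, a host carries at most five groups, and at most 40 drives.
def valid_host_layout(disk_groups: list) -> bool:
    """Each entry is a (ssd_count, hdd_count) tuple for one disk group."""
    if len(disk_groups) > 5:
        return False
    total_drives = 0
    for ssds, hdds in disk_groups:
        if ssds != 1 or not 1 <= hdds <= 7:
            return False
        total_drives += ssds + hdds
    return total_drives <= 40

# Five full groups of 1 SSD + 7 HDDs = 40 drives, the maximum allowed.
print(valid_host_layout([(1, 7)] * 5))  # True
print(valid_host_layout([(1, 8)]))      # False (too many HDDs per group)
```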

The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.

VSAN Configuration

The VSAN cluster can grow or shrink non-disruptively, with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PBs. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a "hot spare".

Per-Virtual Machine policies for Availability, Performance and Capacity can be configured as follows:

  • Number of failures to tolerate – How many replicas (0 to 3 – Default 1 equivalent to a Distributed RAID 1 Mirror)
  • Number of disk stripes per object – The higher the number the better the performance (1-12 – Default 1)
  • Object space reservation – How Thickly provisioned the disk is (0-100% – Default 0)
  • Flash read cache reservation – Flash capacity reserved as read cache for the storage object (0-100% – Default 0)
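The "number of failures to tolerate" setting drives raw capacity consumption, since each tolerated failure adds a full mirror copy. A rough worked example (hypothetical helper, ignoring witness components and metadata overhead):

```python
# Raw datastore capacity consumed by a fully written VMDK under a
# mirrored policy: failures-to-tolerate (FTT) = n means n + 1 copies.
# Witness components and metadata overhead are ignored for simplicity.
def raw_capacity_gb(vmdk_gb, failures_to_tolerate):
    return (failures_to_tolerate + 1) * vmdk_gb

# Default policy (FTT = 1): a 100 GB disk consumes 200 GB of raw capacity.
print(raw_capacity_gb(100, 1))  # 200
```

This is worth keeping in mind when sizing a cluster: the usable-to-raw ratio is roughly 1:(FTT + 1).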

The Read/Write process

Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSDs, so there is no real-world difference between local and remote IO – the net result is a simplified architecture (which is always a good thing) that avoids the complexity and IO overhead of trying to keep compute and storage on the same host.

All writes are first written to the SSD and to maintain redundancy also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for Reads and 30% for Writes, so where possible reads are delivered from the SSD cache.
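Assuming the 70/30 split described above, the usable cache of a given SSD works out as follows (a back-of-the-envelope sketch, not a query against a live cluster):

```python
# Split a caching SSD per the 70% read / 30% write ratio cited above.
def cache_split_gb(ssd_gb):
    return {"read_cache_gb": ssd_gb * 0.70, "write_buffer_gb": ssd_gb * 0.30}

# A 400 GB SSD yields roughly 280 GB of read cache and 120 GB of write buffer.
split = cache_split_gb(400)
print(round(split["read_cache_gb"]), round(split["write_buffer_gb"]))  # 280 120
```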

So what improvements would we like to see in the future?

VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?

The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.

Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).

So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) where VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach, mixing VSAN with conventional storage arrays – in 5 years' time who knows what that mix will be, but one thing is for sure: the percentage of storage delivered from the host is only likely to go up.

Some final thoughts on EVO:RAIL

EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.

The downside … it goes against VMware's and the industry's move towards more Software-Defined solutions and all the benefits they provide.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
PeerSpot user
Technical Specialist at NTT Security
Real User
Top 20
Has worked well for two years, but requires a minimum number of nodes for maintenance mode
Pros and Cons
  • "The most valuable thing about vSAN is that all of its features have been working well for us for the past two years. We haven't had an issue with them."
  • "When designing the implementation for vSAN, I have noticed that it requires a minimum of six nodes, and this creates a problem when it comes to maintenance. If, out of the six nodes, I put one node in maintenance mode, then vSAN does not create other VM components."

What is our primary use case?

We are an implementation partner for VMware vSAN and we use it alongside our hyperconverged infrastructure solutions with products such as Nutanix, HyperFlex, and SimpliVity. It is currently implemented in key areas off-site for over seven customers.

What is most valuable?

The most valuable thing about vSAN is that all of its features have been working well for us for the past two years. We haven't had an issue with them.

What needs improvement?

When designing the implementation for vSAN, I have noticed that it requires a minimum of six nodes, and this creates a problem when it comes to maintenance. If, out of the six nodes, I put one node in maintenance mode, then vSAN does not create other VM components. I think the reason for this is that the minimum configuration is a six node arrangement. If any one of the six nodes is put into maintenance mode, we're simply unable to create a VM, but if there are seven nodes in that cluster, then we are able to put one under maintenance. That's one thing that should be looked at.

More generally, the features of vSAN as we see them are dependent on the quality of the storage, since each different storage technology has its own separate features that go along with it.

For how long have I used the solution?

I have been working with VMware vSAN for at least two years. 

What do I think about the stability of the solution?

It is a stable product, especially now that we have it fully implemented. However, if any two or three of the nodes go away, vSAN goes down. I think we've had a few VMs where the data has been lost for this reason. I guess that the way it works would be similar to other technologies, but that's what we have observed.

What do I think about the scalability of the solution?

You can increase the compute capability as well as the disk storage, so it is scalable.

How are customer service and support?

I've already escalated the issue regarding the six nodes, which I've mentioned. This has been escalated to VMware and they know that it is a limitation, because apparently it is normal behavior for any nodes that are put in maintenance mode.

How was the initial setup?

The setup is easy.

What other advice do I have?

We have been working with vSAN for the last two years, and we haven't seen too many issues overall, but because of the troubles we have faced with the fact that vSAN doesn't let you put a node in maintenance mode unless you have six or more nodes, I would rate VMware vSAN a six out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: Implementer
PeerSpot user
Buyer's Guide
Download our free VMware vSAN Report and get advice and tips from experienced pros sharing their opinions.
Updated: December 2024