VMware vSAN is a hyper-converged infrastructure and we use it as a software-defined storage solution for our customers.
IT Administrator and Sr. VMware Engineer at a retailer with 501-1,000 employees
It supports two architectures (hybrid and All-Flash), which makes it useful for all virtualized applications, including business-critical applications.
Originally posted in Spanish at https://www.rhpware.com/2015/02/introduccion-vmware...
The second generation of Virtual SAN ships with vSphere 6.0 and shares its version number. The jump from version 1.0 (vSphere 5.5) straight to 6.0 is well deserved: this second generation of converged storage, integrated into the VMware hypervisor, significantly increases performance and scale for larger enterprise workloads, including business-critical and Tier 1 applications.
Virtual SAN 6.0 introduces a new all-flash architecture that delivers high performance and predictable response times below one millisecond for almost all business-critical applications. This version also doubles scalability, to 64 nodes per cluster and up to 200 VMs per host, and improves snapshot and cloning technology.
Performance characteristics
The hybrid architecture of Virtual SAN 6.0 delivers nearly twice the performance of the previous version, and the all-flash architecture delivers four times the performance, measured in IOPS on clusters running similar workloads, with predictable, low latency.
Because the hyper-converged architecture is embedded in the hypervisor, it optimizes the I/O path and dramatically minimizes the CPU overhead that competing products incur. The hypervisor-based distributed architecture reduces bottlenecks, allowing Virtual SAN to move data and execute I/O operations in a much more streamlined way and at very low latency, without compromising the platform's compute resources or VM consolidation ratios. The Virtual SAN datastore is also highly resilient, preventing data loss in the event of a physical disk, host, network, or rack failure.
The Virtual SAN distributed architecture scales elastically without interruption. Capacity and performance can be scaled together by adding a new host to the cluster, or independently, simply by adding disks to existing hosts.
New capabilities
The major new capabilities of Virtual SAN 6.0 include:
- Virtual SAN All-Flash architecture: Virtual SAN 6.0 can create an all-flash architecture in which solid-state devices are used intelligently: high-performance, write-intensive PCIe devices serve as the write cache, while more economical flash storage devices provide data persistence at an affordable cost
Virtual SAN 6.0 All-Flash achieves predictable performance of up to 100,000 IOPS per host with response times below one millisecond, making it ideal for critical workloads.
Doubling the scalability
This version doubles the limits of the previous version:
- Scaling up to 64 nodes per cluster
- Scaling up to 200 VMs per host, in both hybrid and All-Flash architectures
- Size of the virtual disks increased to 62TB
Performance improvements
- Double the IOPS with the hybrid architecture: Virtual SAN 6.0 Hybrid achieves more than 4 million IOPS for read-only workloads and 1.2 million IOPS for mixed workloads on a 32-host cluster
- Quadruple the IOPS with the All-Flash architecture: Virtual SAN 6.0 All-Flash achieves up to 100,000 IOPS per host
- Virtual SAN File System: the new on-disk format enables more efficient operations, higher performance, and much simpler scaling
- Virtual SAN Snapshots and Clones: highly efficient snapshots and clones are supported, with up to 32 snapshots and clones per VM and 16,000 per cluster
- Rack fault tolerance: Virtual SAN 6.0 Fault Domains provide tolerance of rack-level and power failures, in addition to disk, network, and host hardware failures
- Support for high-density Direct-Attached JBOD disk systems: you can manage external storage enclosures and eliminate the costs associated with blade-based architectures
- Capacity planning: you can run "what if" scenario analyses and generate reports on the usage and capacity of a Virtual SAN datastore when a virtual machine is created with associated storage policies
- Support for hardware-based checksums: limited support is provided for hardware-based checksum drivers to detect data corruption and integrity problems
- Improved disk serviceability: troubleshooting services are added for the drives, giving customers the ability to identify and fix disks attached directly to hosts:
- LED fault indicators: magnetic or solid-state devices with permanent faults light their LEDs so they can be identified quickly and easily
- Manual operation of LED indicators: you can turn a device's LED on or off to identify a particular device
- Mark drives as SSD: you can mark devices that are not recognized as SSDs
- Mark flash drives as local disks: you can mark unrecognized flash drives as local disks so they are recognized by vSphere hosts
- Default storage policies: a default policy is created automatically when Virtual SAN is enabled on a cluster; it is used by any VM that has no storage policy assigned
- Disk and disk group evacuation: data is evacuated when disks or disk groups are removed from the system, preventing data loss
- Virtual SAN Health Services: this service is designed to help vSphere administrators troubleshoot and generate health reports about Virtual SAN 6.0 subsystems and their dependencies, such as:
- Cluster health
- Network health
- Data health
- Limits health
- Physical disk health
vSphere requirements
Virtual SAN 6.0 requires vCenter Server 6.0; both the Windows version and the vCenter Server Appliance can manage Virtual SAN. Virtual SAN 6.0 is configured and monitored exclusively through the vSphere Web Client. It also requires a minimum of three vSphere hosts with local storage. This number is not arbitrary: it allows the cluster to meet the fault-tolerance requirement of surviving at least one host, disk, or network failure.
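The three-host minimum falls out of the failures-to-tolerate arithmetic: tolerating n failures with mirroring requires n + 1 data replicas plus a witness. A quick sketch (the helper function is my own illustration, not part of any VMware tooling):

```python
def min_hosts(failures_to_tolerate: int) -> int:
    """Minimum vSphere hosts for a vSAN cluster to tolerate the given
    number of failures with mirrored replicas: one replica per failure
    plus the original copy, plus a witness, i.e. 2 * FTT + 1."""
    return 2 * failures_to_tolerate + 1

# The default policy (FTT=1) explains the three-host minimum:
print(min_hosts(1))  # 3: two data replicas plus a witness
print(min_hosts(2))  # 5
```

In practice VMware recommends more than the minimum so a failed host's data can be rebuilt elsewhere, but three hosts is the hard floor.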
Storage System Requirements
Disk controllers
Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller, which can be a SAS or SATA HBA or a RAID controller. A RAID controller, however, must operate in one of the following modes:
- Pass-through
- RAID 0
Pass-through mode (JBOD or HBA) is the preferred configuration for Virtual SAN 6.0, because it lets Virtual SAN manage RAID configuration according to the performance and availability attributes defined in a virtual machine's storage policies.
Magnetic devices
When the hybrid architecture of Virtual SAN 6.0 is used, each vSphere host must have at least one SAS, NL-SAS, or SATA disk in order to participate in the Virtual SAN cluster.
Flash devices
In the flash-based architectures of Virtual SAN 6.0, flash devices can be used as a caching layer as well as for persistent storage. In hybrid architectures, each host must have at least one flash-based device (SAS, SATA, or PCIe) in order to participate in the Virtual SAN cluster.
In the All-Flash architecture, each vSphere host must have at least one flash-based device marked as a capacity device and one as a cache (performance) device in order to participate in the Virtual SAN cluster.
Networking requirements
Network Interface Cards (NIC)
In hybrid Virtual SAN architectures, each vSphere host must have at least one 1Gb or 10Gb network adapter. VMware recommends 10Gb.
All-flash architectures support only 10Gb Ethernet NICs. For redundancy and high availability, you can configure NIC teaming per host; note that NIC teaming provides availability, not link aggregation (performance).
Virtual Switches
Virtual SAN 6.0 is supported by both VMware vSphere Distributed Switch (VDS) and the vSphere Standard Switch (VSS). Other virtual switches are not supported in this release.
VMkernel network
You must create a VMkernel port on each host, labelled for Virtual SAN traffic. This interface is used for intra-cluster communication as well as for read and write operations when a vSphere host in the cluster owns a particular VM but the VM's data blocks are housed on a remote host in the cluster.
In that case, I/O operations must travel the network between cluster hosts. If this network interface is created on a vDS, you can use the Network I/O Control feature to configure shares or reservations for Virtual SAN traffic.
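For illustration, the same VMkernel tagging can be done from the ESXi command line; a hedged sketch (the interface name, port group, and addresses are hypothetical, and the vSphere Web Client is the usual way to do this):

```shell
# Run on each ESXi 6.0 host. vmk2, "vsan-pg", and the addresses below
# are placeholders; adjust them to your environment.

# Create a VMkernel interface on an existing port group:
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vsan-pg
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.0.11 -N 255.255.255.0

# Tag the interface for Virtual SAN traffic:
esxcli vsan network ip add -i vmk2

# Confirm the interface is carrying vSAN traffic:
esxcli vsan network list
```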
Conclusion
This new second generation of Virtual SAN is an enterprise-class, hypervisor-level storage solution that combines the compute and storage resources of the hosts. With its two supported architectures (hybrid and All-Flash), Virtual SAN 6.0 meets the demands of all virtualized applications, including business-critical applications.
Without doubt, Virtual SAN 6.0 is a storage solution that realizes VMware's vision of Software-Defined Storage (SDS), offering great benefits both to customers and to the vSphere administrators who face new challenges and complexities every day. It is an architecture that will change how we view storage systems from now on.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Solutions Architect with 51-200 employees
The solution is simple to manage but redirect-on-write snapshots is needed
Over the past decade VMware has changed the way IT is provisioned through the use of Virtual Machines, but if we want a truly Software-Defined Data Centre we also need to virtualise the storage and the network.
For storage virtualisation VMware has introduced Virtual SAN and Virtual Volumes (expected to be available in 2015), and for network virtualisation NSX. In this, the first of a three-part series, we will take a look at Virtual SAN (VSAN).
So why VSAN?
Large Data Centres, built by the likes of Amazon, Google and Facebook, utilise commodity compute, storage and networking hardware (that scale-out rather than scale-up) and a proprietary software layer to massively drive down costs. The economics of IT hardware tend to be the inverse of economies of scale (i.e. the smaller the box you buy the less it costs per unit).
Most organisations, no matter their size, do not have the resources to build their own software layer like Amazon, so this is where VSAN (and vSphere and NSX) come in – VMware provides the software and you bring your hardware of choice.
There are a number of hyper-converged solutions on the market today that can combine compute and storage into a single host that can scale-out as required. None of these are Software-Defined (see What are the pros and cons of Software-Defined Storage?) and typically they use Linux Virtual Machines to provision the storage. VSAN is embedded into ESXi, so you now have the choice of having your hyper-converged storage provisioned from a Virtual Machine or integrated into the hypervisor – I know which I would prefer.
Typical use cases are VDI, Tier 2 and 3 applications, Test, Development and Staging environments, DMZ, Management Clusters, Backup and DR targets and Remote Offices.
VSAN Components
To create a VSAN you need:
- From 3 to 32 vSphere 5.5 certified hosts
- For each host a VSAN certified:
- I/O controller
- SSD drive or PCIe card
- Hard disk drive
- 4 GB to 8GB USB or SD card for ESXi boot
- VSAN network – GbE or 10 GbE (preferred) for inter-host traffic
- Layer 2 Multicast must be enabled on physical switches
- A per socket license for VSAN (also includes licenses for Virtual Distributed Switch and Storage Policies) and vSphere
The host is configured as follows:
- The controller should use pass-through mode (i.e. no RAID or caching)
- Disk Groups are created which include one SSD and from 1 to 7 HDDs
- Five Disk Groups can be configured per host (maximum of 40 drives)
- The SSD is used as a read/write flash accelerator
- The HDDs are used for persistent storage
- The VSAN shared datastore is accessible to all hosts in the cluster
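The disk-group limits above can be sanity-checked with a small helper (my own illustration, not a VMware tool):

```python
def host_drive_count(disk_groups: list) -> int:
    """Each entry is the HDD count of one disk group. Validates the
    vSphere 5.5 VSAN limits (1 SSD + 1-7 HDDs per group, up to 5 groups
    per host) and returns the total drive count, HDDs plus one SSD per
    group."""
    if not 1 <= len(disk_groups) <= 5:
        raise ValueError("a host supports 1 to 5 disk groups")
    for hdds in disk_groups:
        if not 1 <= hdds <= 7:
            raise ValueError("each disk group needs 1 to 7 HDDs")
    return sum(disk_groups) + len(disk_groups)  # HDDs + one SSD per group

# A fully populated host: 5 groups of (1 SSD + 7 HDDs) = 40 drives,
# matching the stated maximum of 40 drives per host.
print(host_drive_count([7, 7, 7, 7, 7]))  # 40
```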
The solution is simple to manage as it is tightly integrated into vSphere, highly resilient as there is zero data loss in the event of hardware failures and highly performant through the use of Read/Write flash acceleration.
VSAN Configuration
The VSAN cluster can grow or shrink non-disruptively with linear performance and capacity scaling – up to 32 hosts, 3,200 VMs, 2M IOPS and 4.4 PBs. Scaling is very granular, as single nodes or disks can be added, and there are no dedicated hot-spare disks; instead, the free space across the cluster acts as a “hot-spare”.
Per-Virtual Machine policies for Availability, Performance and Capacity can be configured as follows:
- Number of failures to tolerate – How many replicas (0 to 3 – Default 1 equivalent to a Distributed RAID 1 Mirror)
- Number of disk stripes per object – The higher the number the better the performance (1-12 – Default 1)
- Object space reservation – How thickly provisioned the disk is (0-100% – Default 0)
- Flash read cache reservation – Flash capacity reserved as read cache for the storage object (0-100% – Default 0)
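To see how these policies interact, here is a rough sketch (my own helper, not a VMware calculator) of the raw capacity a mirrored VMDK reserves up front; witness components are ignored for simplicity:

```python
def raw_capacity_gb(vmdk_gb: float, failures_to_tolerate: int = 1,
                    object_space_reservation_pct: int = 0) -> float:
    """FTT=n keeps n+1 full replicas of the object; the object space
    reservation controls what fraction of each replica is reserved
    (thick-provisioned) up front. Witness components ignored."""
    replicas = failures_to_tolerate + 1
    reserved_fraction = object_space_reservation_pct / 100
    return vmdk_gb * replicas * reserved_fraction

# A 100 GB VMDK with the defaults (FTT=1, reservation 0%) reserves
# nothing up front, but can consume up to 200 GB raw as it fills.
print(raw_capacity_gb(100, 1, 0))    # 0.0
print(raw_capacity_gb(100, 1, 100))  # 200.0 when fully thick
```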
The Read/Write process
Typically a VMDK will exist on two hosts, but the Virtual Machine may or may not be running on one of these. VSAN takes advantage of the fact that 10 GbE latency is an order of magnitude lower than even SSDs therefore there is no real world difference between local and remote IO – the net result is a simplified architecture (which is always a good thing) that does not have the complexity and IO overhead of trying to keep compute and storage on the same host.
All writes are first written to the SSD and to maintain redundancy also immediately written to an SSD in another host. A background process sequentially de-stages the data to the HDDs as efficiently as possible. 70% of the SSD cache is used for Reads and 30% for Writes, so where possible reads are delivered from the SSD cache.
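The 70/30 cache split above, combined with VMware's rule of thumb that flash should be roughly 10% of anticipated consumed capacity, can be sketched as follows (the helper is my own illustration; treat the figures as guidance rather than hard limits):

```python
def ssd_cache_layout(consumed_capacity_gb: float) -> dict:
    """Size the SSD tier at ~10% of anticipated consumed capacity,
    then apply VSAN's fixed split: 70% read cache, 30% write buffer."""
    flash_gb = consumed_capacity_gb * 0.10
    return {
        "flash_gb": flash_gb,
        "read_cache_gb": flash_gb * 0.70,    # serves reads from flash
        "write_buffer_gb": flash_gb * 0.30,  # absorbs writes before de-staging to HDD
    }

# For 2 TB of anticipated consumed capacity:
print(ssd_cache_layout(2000))
```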
So what improvements would we like to see in the future?
VSAN was released early this year after many years of development; the focus of the initial version is to get the core platform right and deliver a reliable, high-performance product. I am sure there is an aggressive road-map of product enhancements coming from VMware, but what would we like to see?
The top priorities have to be efficiency technologies like redirect-on-write snapshots, de-duplication and compression along with the ability to have an all-flash datastore with even higher-performance flash used for the cache – all of these would lower the cost of VDI storage even further.
Next up would be a two-node cluster, multiple flash drives per disk group, Parity RAID, and kernel modules for synchronous and asynchronous replication (today vSphere Replication is required which supports asynchronous replication only).
So are we about to see the death of the storage array? I doubt it very much, but there are going to be certain use cases (i.e. VDI) where VSAN is clearly the better option. For the foreseeable future I would expect many organisations to adopt a hybrid approach, mixing VSAN with conventional storage arrays – in five years' time who knows how that mix will look, but one thing is for sure: the percentage of storage delivered from the host is only likely to go up.
Some final thoughts on EVO:RAIL
EVO:RAIL is very similar in concept to the other hyper-converged appliances available today (i.e. it is not a Software-Defined solution). It is built on top of vSphere and VSAN so in essence it cannot do anything that you cannot do with VSAN. Its advantage is simplicity – you order an appliance, plug it in, power it on and you are then ready to start provisioning Virtual Machines.
The downside … it goes against VMware’s and the industry’s move towards more Software-Defined solutions and all the benefits they provide.
Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with VMware.
Buyer's Guide
VMware vSAN
November 2024
Learn what your peers think about VMware vSAN. Get advice and tips from experienced pros sharing their opinions. Updated: November 2024.
824,067 professionals have used our research since 2012.
Head Of Products And Solutions Architect at a government with 201-500 employees
Responsive technical support, easy to use, and stable
Pros and Cons
- "The solution is simple to use compared to other solutions, such as Hyperflex, VxRail, and Nutanix"
- "VMware vSAN needs to improve its features because other solutions have more advanced features."
What is our primary use case?
What is most valuable?
The solution is simple to use compared to other solutions, such as Cisco Hyperflex, Dell VxRail, and Nutanix
What needs improvement?
VMware vSAN needs to improve its features because other solutions have more advanced features.
For how long have I used the solution?
I have been using VMware vSAN for approximately four years.
What do I think about the stability of the solution?
The solution is quite stable in small and medium environments. However, I do not have experience using the solution in enterprise companies.
What do I think about the scalability of the solution?
We have 31 people in my organization using this solution.
How are customer service and technical support?
Technical support has been good in my experience, but they could improve.
What's my experience with pricing, setup cost, and licensing?
The price of VMware vSAN is expensive and there is an annual license required.
Which other solutions did I evaluate?
I have evaluated many other solutions, such as Cisco Hyperflex, Dell VxRail, and Nutanix
What other advice do I have?
In my country, Myanmar, VMware and Cisco are the most reliable solutions for networking and virtualization. Other vendors, such as Nutanix and SimpliVity, are still quite unfamiliar in our IT environments at this time.
I rate VMware vSAN an eight out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: partner
Head Of Network & Technical Support at a financial services firm with 501-1,000 employees
Good for applications and high availability, and possible to install on-premises yourself
Pros and Cons
- "The high availability is very good."
- "The stability needs to be improved."
What is our primary use case?
We primarily use the solution for our applications and its high availability.
What is most valuable?
The high availability is very good.
It's a good place to store our applications.
You can install the solution yourself.
What needs improvement?
The solution isn't as scalable as we would like it to be.
The stability needs to be improved.
The installation process is difficult.
For how long have I used the solution?
I've used the solution for about three months. I installed it around a year or so ago.
What do I think about the stability of the solution?
The stability could be better. We're not really happy with the reliability or performance.
What do I think about the scalability of the solution?
The scalability isn't ideal. A company might have trouble with this aspect of the solution.
We have about 500 users still using the solution.
How was the initial setup?
The installation process isn't easy. It's not straightforward. It's a bit difficult, actually. They could work to make it easier.
We have about 17 people on staff that can handle maintenance tasks.
What about the implementation team?
I handled the installation myself. I did not get help from a consultant or integrator. It was all handled in-house.
What's my experience with pricing, setup cost, and licensing?
We do not currently pay a license fee. I cannot speak to any costs related to having this product in the company.
What other advice do I have?
We're using version seven of the solution. I'm not sure if it is the latest version or not.
I'd rate the solution at a nine out of ten.
I would recommend the solution to other users.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
VDI Administrator at a healthcare company with 1,001-5,000 employees
Easy to predict IOPS needs and we can design for low latency using all-flash
Pros and Cons
- "it's easy to scale, it's easy to predict IOP needs, and you can design for low latency using all-flash... Also, for setting up new clusters for VDI quickly, it's nice. You don't have to wait on an order for a storage vendor to ship you a system and help you configure it, you do it all yourself. And the sizing guides are pretty straightforward."
- "I would like to see better performance graphs, maybe something that you can export outside to a different console, and maybe a little bit longer time period. The 18-hour maximum, or 24-hour maximum, is kind of short. Also, the hardware compatibility limitations are a little frustrating sometimes, but as everybody's starting to adopt vSAN more, you get more options for hardware."
What is our primary use case?
We use it for all our virtual desktop storage.
How has it helped my organization?
It's definitely cheaper to buy it piece by piece, instead of an entire shelf at a time.
What is most valuable?
- It's easy to scale.
- It's easy to predict IOPS needs.
- You can design for low latency using all-flash.
- The whole hyperconverged notion is pretty neat.
Also, for setting up new clusters for VDI quickly, it's nice. You don't have to wait on an order for a storage vendor to ship you a system and help you configure it, you do it all yourself. It's kind of convenient that way. And the sizing guides are pretty straightforward.
What needs improvement?
I would like to see better performance graphs, maybe something that you can export outside to a different console, and maybe a little bit longer time period. The 18-hour maximum, or 24-hour maximum, is kind of short.
Also, the hardware compatibility limitations are a little frustrating sometimes, but as everybody's starting to adopt vSAN more, you get more options for hardware.
For how long have I used the solution?
One to three years.
What do I think about the stability of the solution?
It's stable. We haven't had any major issues.
What do I think about the scalability of the solution?
Scalability is easy. You just buy a node and go.
How are customer service and technical support?
The vSAN technical support guys are great.
Which solution did I use previously and why did I switch?
We chose it because of cost considerations. We already had an enterprise agreement with VMware, so vSAN licensing was included.
How was the initial setup?
There was a small learning curve, but it's pretty straightforward once you understand the basics of how everything works.
Which other solutions did I evaluate?
We did evaluate other vendors initially but this was our second hyperconverged solution. We went with it because of the cost.
What other advice do I have?
Do your homework. Make sure you know what kind of IOPS and latency requirements you need to meet. Picking hardware is not hard anymore. Everybody has an HCL. vSAN has a great list. Just pick what you want and go, it's not that hard.
I rate it at eight out of 10 because nothing is perfect. I'm hard to please. I'm not saying there are growing pains, but vSAN was still new at the time. They didn't have dedupe and compression yet. The performance was pretty good. Most of it was hybrid in the beginning, but now with all-flash, it's speedy, when it needs to be. It's a young product and nobody gets a 10 out of the gate.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Great performance from all-flash, but scaling up or down is an involved process
Pros and Cons
- "I would like to see it be more hardware-agnostic. Other than that, the only other complication is - and it has gotten better with the newer versions - that lately, once you're running an all-flash, if you need to grow or scale down your infrastructure, it's a long process. You need to evacuate all data and make sure you have enough space on the host, then add more hosts or take out hosts. That process is a little bit complex. You cannot scale as needed or shrink as needed."
What is our primary use case?
The primary use of the product is for storage for VDI plus some other storage for file servers and the like. The performance is great. We use it on all-flash.
What is most valuable?
Performance and the ability to use all-flash.
What needs improvement?
I would like to see it be more hardware-agnostic.
Other than that, the only other complication is - and it has gotten better with the newer versions - that lately, once you're running an all-flash, if you need to grow or scale down your infrastructure, it's a long process. You need to evacuate all the data and make sure you have enough space on the host, then add more hosts or take out hosts. That process is a little bit complex. You cannot scale as needed or shrink as needed.
What do I think about the stability of the solution?
Right now, the stability is pretty good. It's getting a lot better.
What do I think about the scalability of the solution?
It has its quirks but the scalability is good. Given that you have to have the hardware, the right driver, the right framework, and so on, it's not easy to put it together, it's not a plug-and-play solution. But once you get all of that done, it becomes a good product.
How are customer service and technical support?
I have used the technical support, but most of the time it comes down to the manufacturer of the hardware; Cisco or whoever we're using for it. It's a compatibility type of thing. But tech support is okay.
Which solution did I use previously and why did I switch?
Our previous solution was SAN-based. I wanted to bring in something new and not only stay with the market, where it's going with the trends, but also to bring in something that is stable enough for production.
How was the initial setup?
Once we got all of the driver configurations done, etc., it was easy enough.
What was our ROI?
We have definitely seen value, especially in performance.
What other advice do I have?
Give it a try.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Supervisor at a energy/utilities company with 1,001-5,000 employees
With the Vx Rack and SDDC, everything is managed much more easily
Pros and Cons
- "I would like to see some of the more traditional SAN functions that are out there now. I can list them: being able to Snapshot on the back-end, better de-dupe, and better compression. Those are the major ones."
What is our primary use case?
We use it for all of our Production and it has been very effective.
How has it helped my organization?
It's more scalable and faster than what we had, and it's easier to support.
What is most valuable?
- The non-complexity
- The cost
What needs improvement?
I would like to see some of the more traditional SAN functions that are out there now. I can list them: being able to Snapshot on the back-end, better de-dupe, and better compression. Those are the major ones.
What do I think about the stability of the solution?
We haven't had any issues with the stability.
What do I think about the scalability of the solution?
The scalability is very good. You plug it in and it goes.
How are customer service and technical support?
We have not had to use technical support for vSAN yet.
Which solution did I use previously and why did I switch?
We knew we needed a new solution. The other one was too complex and too costly and was never really maintained properly. Too many teams had too many hands in it. With the new HCI solution with the Vx Rack and SDDC, everything is a lot more easily managed.
The most important criterion when selecting a vendor is reputation.
How was the initial setup?
The initial setup was straightforward.
What was our ROI?
It's a little hard to say what our ROI is because we bought it to replace an old, traditional setup. It was either pay for maintenance and the like, refresh it, or go to an HCI. We went to an HCI. I don't know what the cost to refresh the other environment was, so I don't know exact numbers for return on investment.
Which other solutions did I evaluate?
Our shortlist was really just EMC. That decision was made before I took over the project. We were always an EMC shop, so we moved away from Cisco and went to Dell EMC for it. I don't know why, exactly, but they said to me, "Here, make it work."
What other advice do I have?
Be careful of your FTT policies.
I rate it a nine out of ten. It would be a ten if it had better deduping, compression, and the ability to Snapshot volumes on the back-end.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Works at a university with 1,001-5,000 employees
It is easier to deploy than the traditional SAN
Pros and Cons
- "It is easier to deploy than the traditional SAN."
- "Dedupe on non-flash drives can be improved."
What is our primary use case?
We are thinking of using vSAN instead of the traditional SAN. We are just starting to explore how vSAN can benefit us.
How has it helped my organization?
This is not yet deployed; we are just starting to explore how vSAN can benefit us. It seems very expensive to obtain a vSAN license.
What is most valuable?
Based on my findings, it seems easier to deploy than the traditional SAN. I was told vSAN can be deployed in a few minutes.
What needs improvement?
Dedupe on non-flash drives can be improved. Also, with PFTT of two, only 67% of the raw capacity is usable.
For how long have I used the solution?
Trial/evaluations only.
Disclosure: I am a real user, and this review is based on my own experience and opinions.