IT Development Manager at a comms service provider with 10,001+ employees
Real User
Oct 9, 2019
On a basic, structural level, virtual networks aren't that different from physical networks.
In virtualization, virtual switches (vSwitches) are used to establish a connection between the virtual network and the physical network.
Once the vSwitch has bridged the connection between the virtual network and the physical network, the virtual machines residing on the host server can begin transferring data to, and receiving data from, all of the network-capable devices connected to the physical network. That is to say, the virtual machines are no longer limited to communicating solely across the virtual network.
What I want to say is that network performance depends on many factors other than the hypervisor itself. From my long experience in virtualization, having worked on VMware, OVM, KVM, Hyper-V, and Nutanix AHV, we can get the best performance from any of these hypervisors if we use the proper NIC card, physical server, and physical switches.
From my point of view, Nutanix can provide the best performance due to its data locality, which can offer more than 10 Gb/s to the hosted virtual machines.
But again, you can get the best performance from VMware if you have the best design.
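To make the vSwitch-to-physical-network bridging described above concrete, here is a minimal sketch of the equivalent plumbing on a Linux/KVM host, where a Linux bridge plays the role of the vSwitch. The interface names are placeholders, the commands assume root privileges and the iproute2 tools, and this is an illustration of the concept only, not a description of how ESXi or AHV configure it internally.

```python
import subprocess

def run(cmd):
    """Run an iproute2 command and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical interface names -- replace with the ones on your host.
BRIDGE = "br0"      # the "vSwitch" on a Linux/KVM host
UPLINK = "eno1"     # physical 10Gb NIC (the uplink to the physical switch)
VM_TAP = "vnet0"    # tap device backing a VM's virtual NIC

# 1. Create the bridge (the software switch).
run(["ip", "link", "add", BRIDGE, "type", "bridge"])
# 2. Attach the physical NIC so bridged VMs can reach the physical network.
run(["ip", "link", "set", UPLINK, "master", BRIDGE])
# 3. Attach the VM's tap interface to the same bridge.
run(["ip", "link", "set", VM_TAP, "master", BRIDGE])
# 4. Bring everything up.
for dev in (BRIDGE, UPLINK, VM_TAP):
    run(["ip", "link", "set", dev, "up"])
```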
Works at a marketing services firm with 201-500 employees
Real User
Oct 10, 2019
I felt the need to, again, make some remarks.
Left out of the discussion is the question of which architecture is planned for use and which OSes will run as guests. In the past, Intel CPUs on the x86 ISA were used almost exclusively, but that landscape is rapidly shifting.
There is a big change coming. Apart from the new x86 Epyc CPUs from AMD, which show much better gains on a lot of virtualization platforms, the latest developments are now pointing in the direction of other ISAs like ARM and RISC-V. Not for the faint of heart as of yet, but it is coming and it won't be stopped this time.
If you look carefully at the AMD Epyc CPU line, with lots of PCIe lanes and much better performance figures than can be obtained on the current Platinum and Gold editions of Intel CPUs, you quickly discover the benefits. And yes, this platform is rapidly maturing. This is something to consider when choosing the hypervisor; not all hypervisors perform equally well on those platforms. Initial testing I did with Epyc Rome suggests that the more mature Linux hypervisors are taking the lead.
It all depends on your particular needs for the 10Gbit speed you want to implement. Without further details, it is hard to offer good advice. If your workload is SQL Server, the backend plays a much more important role. In that respect, XenServer 8.0 on Epyc takes the crown, but only if your backend is of good quality too. All-flash backends are not always better for that particular network load and workload. I think that, if your wallet is deep enough, a full M.2 flash backend on Epyc CPUs is top of the line, no matter which hypervisor is chosen.
I have good experience with the VMware hypervisor over a 10G network; at the following link you can see all the information about the best practices: www.vmware.com
Hardware Networking Considerations
Before undertaking any network optimization effort, you should understand the physical aspects of the network. The following are just a few aspects of the physical layout that merit close consideration:
* Consider using server-class network interface cards (NICs) for the best performance.
* Make sure the network infrastructure between the source and destination NICs doesn't introduce bottlenecks. For example, if both NICs are 10Gb/s, make sure all cables and switches are capable of the same speed and that the switches are not configured to a lower speed.
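One practical way to act on the last point is to read the negotiated link speed the kernel reports for each uplink NIC. This is a hedged sketch for a Linux-based host; the interface names are placeholders, and on ESXi the same information is available through the vSphere client or esxcli instead.

```python
from pathlib import Path

EXPECTED_MBPS = 10_000  # the end-to-end speed we expect, i.e. 10Gb/s

def link_speed_mbps(iface: str) -> int:
    """Read the negotiated link speed (in Mb/s) from sysfs; -1 means link down/unknown."""
    try:
        return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())
    except (OSError, ValueError):
        return -1

# Hypothetical uplink names -- substitute the NICs behind your vSwitch.
for iface in ("eno1", "eno2"):
    speed = link_speed_mbps(iface)
    status = "OK" if speed >= EXPECTED_MBPS else "check cabling/switch port config"
    print(f"{iface}: {speed} Mb/s -> {status}")
```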
For the best networking performance, we recommend the use of network adapters that support the following hardware features:
* Checksum offload
* TCP segmentation offload (TSO)
* Ability to handle high-memory DMA (that is, 64-bit DMA addresses)
* Ability to handle multiple Scatter Gather elements per Tx frame
* Jumbo frames (JF)
* Large receive offload (LRO)
* When using a virtualization encapsulation protocol, such as VXLAN or GENEVE, the NICs should support offload of that protocol’s encapsulated packets.
* Receive Side Scaling (RSS)
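To check whether a given NIC and driver actually expose the offload features listed above, a sketch like the following can parse the output of `ethtool -k` on a Linux host. The interface name is a placeholder and the exact feature names can vary slightly between driver versions, so treat this as an illustration of the check rather than a definitive audit.

```python
import subprocess

# Offload features to verify, as they appear in "ethtool -k" output.
WANTED = (
    "rx-checksumming",
    "tx-checksumming",
    "scatter-gather",
    "tcp-segmentation-offload",
    "large-receive-offload",
)

def offload_report(iface: str) -> None:
    out = subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    ).stdout
    settings = {}
    for line in out.splitlines()[1:]:              # first line is a header
        if ":" in line:
            name, _, value = line.partition(":")
            settings[name.strip()] = value.split()[0]   # "on" or "off" ("[fixed]" is dropped)
    for feature in WANTED:
        print(f"{iface} {feature}: {settings.get(feature, 'not reported')}")

offload_report("eno1")  # hypothetical interface name
```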
Make sure network cards are installed in slots with enough bandwidth to support their maximum throughput. As described in the “Hardware Storage Considerations” section of that guide, be careful to distinguish between similar-sounding, but potentially incompatible, bus architectures.
Ideally, single-port 10Gb/s Ethernet network adapters should use PCIe x8 (or higher) or PCI-X 266 and dual-port 10Gb/s Ethernet network adapters should use PCIe x16 (or higher). There should preferably be no “bridge chip” (e.g., PCI-X to PCIe or PCIe to PCI-X) in the path to the actual Ethernet device (including any embedded bridge chip on the device itself), as these chips can reduce performance.
Ideally, 40Gb/s Ethernet network adapters should use PCIe Gen3 x8/x16 slots (or higher).
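As a back-of-the-envelope illustration of the slot-bandwidth point, the sketch below compares commonly quoted approximate usable PCIe bandwidth per lane against a NIC's aggregate line rate. The per-lane figures are rough values after link encoding overhead and ignore other protocol overheads, so this is only the arithmetic behind the guidance, not a precise sizing tool.

```python
# Approximate usable bandwidth per PCIe lane (GB/s), after encoding overhead.
PCIE_GBPS_PER_LANE = {"Gen1": 0.25, "Gen2": 0.5, "Gen3": 0.985, "Gen4": 1.969}

def slot_ok(gen: str, lanes: int, nic_gbit: float, ports: int = 1) -> bool:
    """Rough check: can the slot sustain the NIC's aggregate line rate?"""
    slot_gbytes = PCIE_GBPS_PER_LANE[gen] * lanes
    nic_gbytes = nic_gbit * ports / 8          # convert Gbit/s to GB/s
    print(f"PCIe {gen} x{lanes}: ~{slot_gbytes:.1f} GB/s vs "
          f"{ports} x {nic_gbit:.0f}Gb NIC needing ~{nic_gbytes:.2f} GB/s")
    return slot_gbytes >= nic_gbytes

slot_ok("Gen2", 8, 10, ports=2)   # dual-port 10Gb in a Gen2 x8 slot
slot_ok("Gen3", 8, 40, ports=1)   # single-port 40Gb in a Gen3 x8 slot
```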
Multiple physical network adapters between a single virtual switch (vSwitch) and the physical network constitute a NIC team. NIC teams can provide passive failover in the event of hardware failure or network outage and, in some configurations, can increase performance by distributing the traffic across those physical network adapters.
When using load balancing across multiple physical network adapters connected to one vSwitch, all the NICs should have the same line speed.
If the physical network switch (or switches) to which your physical NICs are connected support Link Aggregation Control Protocol (LACP), configuring both the physical network switches and the vSwitch to use this feature can increase throughput and availability.
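On a Linux/KVM host, the equivalent of such a NIC team is a bond, and the kernel exposes its LACP state under /proc/net/bonding. The sketch below prints the bonding mode and the per-slave link status; the bond name is a placeholder, and on ESXi the corresponding view is in the vSphere client or esxcli rather than this file.

```python
from pathlib import Path

def bond_summary(bond: str = "bond0") -> None:
    """Print the bonding mode and per-slave link status for a Linux bond."""
    text = Path(f"/proc/net/bonding/{bond}").read_text()
    slave = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            # "IEEE 802.3ad Dynamic link aggregation" indicates LACP.
            print(line)
        elif line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave:
            print(f"  {slave}: {line.split(':', 1)[1].strip()}")
            slave = None

bond_summary("bond0")  # hypothetical bond device name
```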
Owner at a tech services company with 51-200 employees
Real User
Oct 8, 2019
I have worked only with the VMware hypervisor and have seen that for most customers a 2 x 10Gbit connection works fine when used in combination with the distributed virtual switch (DVS) with a network profile (VMware QoS) on it.
Using Intel 520 and 540 network cards (I have not tested any 550 yet), Proxmox gets the best performance, but not by much; XEN and VMware come really close, so I do not think this can be the "deciding" factor. With Broadcom network cards the result changes a lot. In this case, Proxmox gets WAY better performance compared to XEN and VMware, but is a little slower than with Intel. I will not provide numbers because my tests are very informal and relaxed: just copying a big file, or opening a bunch of queries on an SQL server.
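For what it is worth, the informal "copy a big file" test can be made slightly more repeatable by timing the copy and reporting throughput. The paths below are placeholders (for example, a share exported by a VM on another host), and the result mixes storage and network performance, so treat it only as a rough sanity check rather than a benchmark.

```python
import shutil
import time
from pathlib import Path

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    size_mb = Path(src).stat().st_size / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Hypothetical paths: a big local file copied onto a share served by the remote VM.
rate = timed_copy("/var/tmp/bigfile.bin", "/mnt/remote-share/bigfile.bin")
print(f"~{rate:.0f} MB/s observed")
```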
Data Center Development at Telekomunikasi Indonesia
User
Oct 9, 2019
In my company, we use VMware vSphere and OpenStack as hypervisor platforms. We have tested VMware because it is the main platform for business-critical workloads. We tested using the iperf tool; with 2 x 10Gb/s NICs teamed to Cisco ACI, we can reach 18Gb/s, as expected.
The most important things to achieve high throughput are:
1. Make sure of software/firmware compatibility between the hypervisor version and the NIC card's firmware.
2. Load testing (sending a 1.5TB file) using 2 servers as clients/sources and 1 server as the target. Of course, before performing a test, all physical layers should already be error-free (optical cable, NIC, switch port).
Note: we found that errors (CRC errors) and packet drops appear if the firmware is not compatible.
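A throughput test like the iperf run described above can also be scripted so it is repeatable. The sketch below drives iperf3 with parallel streams and reads the aggregate rate from its JSON output; the server address, stream count, and duration are placeholders, and an iperf3 server must already be listening on the target (iperf3 -s).

```python
import json
import subprocess

def iperf3_gbps(server: str, streams: int = 4, seconds: int = 30) -> float:
    """Run iperf3 against `server` and return the aggregate receive rate in Gb/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

# Hypothetical target: the second test server on the teamed 2 x 10Gb/s links.
print(f"{iperf3_gbps('192.0.2.10'):.1f} Gb/s")
```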
Two answers:
1) Likely it is Nutanix, because of the data locality.
2) What workload is running that requires 10Gb throughput or IOPS? Because the reality is, most workloads are not taxing a 10Gb port (see the sketch below). If they are, then great: scale-out infrastructure like Nutanix can help distribute that workload, as can a number of modern DBs such as Mongo and other NoSQL stores.
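Whether a workload really taxes a 10Gb port can be checked directly on a Linux host by sampling the kernel's per-interface byte counters over a short interval. The interface name below is a placeholder; on other hypervisors the equivalent counters come from their own monitoring tools.

```python
import time
from pathlib import Path

LINE_RATE_GBPS = 10.0

def rx_tx_bytes(iface: str) -> tuple[int, int]:
    stats = Path(f"/sys/class/net/{iface}/statistics")
    return (int((stats / "rx_bytes").read_text()),
            int((stats / "tx_bytes").read_text()))

def utilization(iface: str, interval: float = 5.0) -> None:
    """Sample byte counters over `interval` seconds and report % of line rate."""
    rx0, tx0 = rx_tx_bytes(iface)
    time.sleep(interval)
    rx1, tx1 = rx_tx_bytes(iface)
    for label, delta in (("rx", rx1 - rx0), ("tx", tx1 - tx0)):
        gbps = delta * 8 / interval / 1e9
        print(f"{iface} {label}: {gbps:.2f} Gb/s "
              f"({100 * gbps / LINE_RATE_GBPS:.1f}% of line rate)")

utilization("eno1")  # hypothetical 10Gb uplink
```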
System Administrator at Bakhresa Group of companies
Real User
Oct 8, 2019
I found that VMware vSphere is far better equipped to meet the demands of an enterprise data center than other hypervisors, and it delivers the production-ready performance and scalability needed to implement an efficient and responsive data center.
Nutanix, for sure. Data locality makes a difference. For new-gen disk technologies, the network will be the bottleneck so the more you avoid networking traffic, the better.
Senior Strategic Technical Marketing Engineer at Nutanix
Real User
Oct 8, 2019
When you use Nutanix hyper-converged infrastructure (HCI), you can choose your own hypervisor: VMware ESXi, Hyper-V, or native Nutanix AHV. As you modernize your full stack, the less visible the underlying infrastructure, the easier it is for your end users. We recommend testing the configuration using our built-in X-Ray tool so that you can choose what is best for your environment. Network performance at 10Gb or higher may depend on factors other than just the choice of hypervisor.
Works at a marketing services firm with 201-500 employees
Real User
Oct 8, 2019
Apart from the question of which hypervisor to use, it boils down to the quality of the network adapters, the switching capacity, and the number of PCIe lanes your processor supports. Checksum offloading is another important topic. When all of this is done well, I measured performance gains approaching 8% or better with the Xen hypervisor compared to the VMware hypervisor on the same hardware. I am a real user.
Nevertheless, the hypervisor itself is just one part of the overall setup, and hence of the performance figures. It all starts with high-quality network adapters and good switches as the most important components for 10Gbit performance.
Storage Specialist at Informatics Services Corporation
Real User
Oct 8, 2019
VMware ESX is one of the best software products. Our experience with iSCSI has been this: we used an HPE ProLiant server and an HPE Ethernet 10Gb 2-port 560FLR-SFP+ adapter as the initiator, with a Cisco switch as the fabric layer, and the delay was acceptable and workable.
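Since "acceptable delay" here is mostly a latency question, the sketch below measures TCP connection round trips from the initiator side to the iSCSI portal (port 3260). The portal address is a placeholder, and this captures only network latency, not storage service time, so it is just a quick sanity check of the fabric.

```python
import socket
import statistics
import time

def connect_latency_ms(host: str, port: int = 3260, samples: int = 10) -> list[float]:
    """Time TCP connection setup to the iSCSI portal a few times, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000)
        time.sleep(0.1)
    return timings

# Hypothetical iSCSI target portal address.
lat = connect_latency_ms("192.0.2.50")
print(f"min {min(lat):.2f} ms / median {statistics.median(lat):.2f} ms / max {max(lat):.2f} ms")
```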
KVM/Proxmox uses the Linux kernel. VMware is best.