There’s an incorrect assumption that comes up from time to time, one that I shared for a while: that VMware ESXi virtual NIC (vNIC) interfaces are limited to their reported “speed”.

In my stand-alone ESXi 7.0 installation, I have two options for NICs: vmxnet3 and e1000. The vmxnet3 interface shows up as 10 Gigabit in the VM, and the e1000 shows up as a 1 Gigabit interface. Let’s test them both.
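
If you’re wondering where that choice lives, the adapter type is set per NIC in the VM’s .vmx file. A sketch of the relevant line, assuming the first NIC (ethernet0):

ethernet0.virtualDev = "vmxnet3"

The e1000-family adapters use "e1000" or "e1000e" here instead.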

One test system is a Rocky Linux installation, the other is CentOS 8 (RIP CentOS). They’re both on the same ESXi host and the same virtual switch. The test program is iperf3, installed from the default package repositories. If you want to test this yourself, it doesn’t really matter which OS you use, as long as it’s reasonably recent and the VMs are on the same vSwitch. I’m not optimizing for throughput, just applying enough load to try to exceed the reported link speed.
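
iperf3 is a one-line install on both distributions (assuming a dnf-based EL8 system like these two):

sudo dnf install -y iperf3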

The ESXi host runs 7.0 on an older Intel Xeon E3 with 4 cores (no hyperthreading).

First, the vmxnet3 interfaces, which we’ll run iperf3 over. dmesg shows the link coming up at 10 Gigabit on the Rocky VM:

[ 1.323917] vmxnet3 0000:0b:00.0 ens192: renamed from eth0
[ 4.599575] IPv6: ADDRCONF(NETDEV_UP): ens192: link is not ready
[ 4.602889] vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 5 vectors allocated
[ 4.604520] vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps

It also shows up as 10 Gigabit on the CentOS 8 VM:

[ 2.526942] vmxnet3 0000:0b:00.0 ens192: renamed from eth0
[ 7.715785] IPv6: ADDRCONF(NETDEV_UP): ens192: link is not ready
[ 7.719561] vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 5 vectors allocated
[ 7.720221] vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
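
You can confirm the reported speed without digging through dmesg, too. ethtool, assuming the ens192 interface name from the logs above, should report something like:

ethtool ens192 | grep -i speed
        Speed: 10000Mb/s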

I ran the iperf3 server on the CentOS box and the client on the Rocky box, though that shouldn’t matter much.
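
The exact commands were nothing fancy; roughly this, with 192.0.2.10 standing in for the CentOS VM’s address:

iperf3 -s               # on the CentOS VM (server)
iperf3 -c 192.0.2.10    # on the Rocky VM (client; 10-second test by default)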

vmxnet3 NIC

[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 2.38 GBytes 20.4 Gbits/sec 0 1004 KBytes
[ 5] 1.00-2.00 sec 2.63 GBytes 22.6 Gbits/sec 0 1.22 MBytes
[ 5] 2.00-3.00 sec 2.59 GBytes 22.3 Gbits/sec 0 1.22 MBytes
[ 5] 3.00-4.00 sec 2.56 GBytes 22.0 Gbits/sec 0 1.28 MBytes
[ 5] 4.00-5.00 sec 2.65 GBytes 22.7 Gbits/sec 0 1.28 MBytes
[ 5] 5.00-6.00 sec 2.60 GBytes 22.4 Gbits/sec 0 1.28 MBytes
[ 5] 6.00-7.00 sec 2.62 GBytes 22.5 Gbits/sec 0 1.28 MBytes
[ 5] 7.00-8.00 sec 2.55 GBytes 21.9 Gbits/sec 0 1.28 MBytes
[ 5] 8.00-9.00 sec 2.52 GBytes 21.6 Gbits/sec 0 1.28 MBytes
[ 5] 9.00-10.00 sec 2.46 GBytes 21.1 Gbits/sec 0 1.28 MBytes
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 25.6 GBytes 22.0 Gbits/sec 0 sender
[ 5] 0.00-10.04 sec 25.6 GBytes 21.9 Gbits/sec receiver

So around 22 Gigabits per second, VM to VM with vmxnet3 NICs that report as 10 Gigabit.

What about the e1000 NICs? They show up as 1 Gigabit (just showing one VM here, but both are the same):

[43830.168188] e1000e 0000:13:00.0 ens224: renamed from eth0
[43830.182559] IPv6: ADDRCONF(NETDEV_UP): ens224: link is not ready
[43830.245789] IPv6: ADDRCONF(NETDEV_UP): ens224: link is not ready
[43830.247271] IPv6: ADDRCONF(NETDEV_UP): ens224: link is not ready
[43830.247994] e1000e 0000:13:00.0 ens224: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[43830.249059] IPv6: ADDRCONF(NETDEV_CHANGE): ens224: link becomes ready

e1000 NIC

[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.42 GBytes 12.2 Gbits/sec 905 597 KBytes
[ 5] 1.00-2.00 sec 924 MBytes 7.75 Gbits/sec 87 607 KBytes
[ 5] 2.00-3.00 sec 842 MBytes 7.07 Gbits/sec 0 626 KBytes
[ 5] 3.00-4.00 sec 861 MBytes 7.22 Gbits/sec 0 638 KBytes
[ 5] 4.00-5.00 sec 849 MBytes 7.12 Gbits/sec 0 655 KBytes
[ 5] 5.00-6.00 sec 878 MBytes 7.36 Gbits/sec 0 679 KBytes
[ 5] 6.00-7.00 sec 862 MBytes 7.24 Gbits/sec 0 683 KBytes
[ 5] 7.00-8.00 sec 854 MBytes 7.16 Gbits/sec 0 690 KBytes
[ 5] 8.00-9.00 sec 874 MBytes 7.33 Gbits/sec 0 690 KBytes
[ 5] 9.00-10.00 sec 856 MBytes 7.18 Gbits/sec 197 608 KBytes
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 9.04 GBytes 7.76 Gbits/sec 1189 sender
[ 5] 0.00-10.04 sec 9.04 GBytes 7.73 Gbits/sec receiver

So I got about 7 Gigabits per second with the e1000 driver too, even though it shows up as 1 Gigabit. It makes sense that it doesn’t match the vmxnet3 NIC, since the e1000 is optimized for compatibility (looking like an Intel E1000 chipset to the VM) rather than performance, but still.
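
As an aside, lspci is a quick way to confirm which virtual NIC model the guest actually sees. On the vmxnet3 VM it should show something along these lines (note the PCI address matching the dmesg output above; the exact description string may vary):

lspci | grep -i ethernet
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)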

My ESXi host is older, with a CPU that’s about 9 years old, so with a faster CPU and more cores I could probably push even more than the 22 Gbit/s and 7 Gbit/s above. But it was still enough to demonstrate that VM transfer speeds are *not* limited by the reported vNIC interface speed.
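
If you try this on beefier hardware and a single stream tops out, iperf3 can also run parallel streams. A sketch, with the stream count just a guess for a 4-core host:

iperf3 -c 192.0.2.10 -P 4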

This is probably true for other hypervisors (KVM, Hyper-V, etc.), but I’m not sure. Let me know in the comments if you know.
