RSS and multiqueue support in Linux driver for VMXNET3


Article ID: 322369


Products

VMware vSphere ESXi

Issue/Introduction

Receive side scaling (RSS) and multiqueue support are included in the VMXNET3 Linux device driver. The VMXNET3 device has always supported multiple queues, but the Linux driver previously used one Rx and one Tx queue.
  • For the VMXNET3 driver shipped with VMware Tools, multiqueue support was introduced in vSphere 5.0.
  • For the VMXNET3 driver included with the Linux operating system, multiqueue support was introduced in Linux kernel version 2.6.37 and later.
Multiqueue is a technique designed to enhance networking performance by allowing the Tx and Rx queues to scale with the number of CPUs in multi-processor systems. Many physical network adapters support multiqueue, and if you have this technology enabled on your NIC, Linux virtual machines running on vSphere 5 can take advantage of it.

Recent versions of VMware Tools have multiqueue enabled by default. In earlier versions, however, you must enable it manually by using modprobe to set the number of transmit and receive queues for each adapter, as described in the Solution section of this article. If you need to change the RSS indirection table, which controls the Rx queue (and therefore the CPU) to which packets with a given RSS hash value are sent, use modprobe with rss_ind_table (for the VMware Tools version of the driver) or ethtool (for the Linux version).

For transmit, the Linux kernel selects the Tx queue on which each packet goes out. For receive, RSS computes a hash over the TCP source and destination ports and the IPv4 source and destination addresses, and uses that hash to dispatch packets from different flows to different queues.
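To confirm how many Tx and Rx queues the driver created, you can list the per-queue entries under sysfs and check the interrupt vectors the driver registered. A quick sketch, assuming the VMXNET3 interface is named eth0 and has four queue pairs (substitute your interface name; the ls output is illustrative):

# ls /sys/class/net/eth0/queues
rx-0  rx-1  rx-2  rx-3  tx-0  tx-1  tx-2  tx-3

# grep eth0 /proc/interrupts

Each queue appears as an rx-N or tx-N directory, and each IRQ vector allocated to the NIC shows up as a separate line in /proc/interrupts.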

This table summarizes multiqueue/RSS support:
 
                                     VMware Tools VMXNET3 driver          Linux VMXNET3 driver
Multiqueue/RSS support exists        1.0.15.0 and later                   1.0.16.0-k and later
                                     (introduced in vSphere 5.0)          (introduced in Linux kernel 2.6.37)
Support exists, but must be          1.0.15.0 to 1.0.23.x                 --
enabled manually
Enabled by default                   1.0.24.0 and later                   1.0.16.0-k and later


Environment

VMware vSphere ESXi 5.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 5.1
VMware vSphere ESXi 5.0

Resolution

How do I configure multiqueue and RSS for VMXNET3?
 
The multiqueue VMXNET3 driver can be configured using module parameters only in the version shipped with VMware Tools. For the version included in Linux, only the indirection table can be configured, using ethtool. The load-time parameters and their usage are listed below; an example of reloading the driver with these parameters follows the list.
  • num_tqs: Number of Tx queues for each adapter, given as a comma-separated list of integers, one per adapter. When set to 0, the number of Tx queues is made equal to the number of vCPUs. The default is 0.

    For example:

    If a virtual machine has three VMXNET3 adapters, this command configures the first VMXNET3 VNIC with four Tx queues, the second VMXNET3 VNIC with one Tx queue, and the third with two Tx queues:

    modprobe vmxnet3 num_tqs=4,1,2

    Note: If the kernel log shows the message vmxnet3: Unknown parameter `num_tqs', check the kernel version. It must be higher than 2.6.25 to use multiple Tx/Rx queues.
     
  • num_rqs: Number of Rx queues for each adapter. The default is 0.

    The usage is the same as for num_tqs.
     
  • rss_ind_table: Indirection table for RSS. There should be 32 entries per adapter. Each comma-separated entry is an Rx queue number (queue numbers start at 0). To configure multiple adapters, append another 32 entries for each additional NIC.

    For example:

    For a single adapter with four queues, the indirection table can be configured as:

    modprobe vmxnet3 rss_ind_table=0,1,2,3,0,1,2,0,0,1,2,1,0,1,2,2,0,1,2,3,0,1,2,0,0,1,2,1,0,1,2,2

    In this example, queue 3 appears only twice in the table; in the remaining slots, its place is taken by one of the other three queues (0, 1, 2). Therefore, fewer packets are dispatched to queue 3. By default, the indirection table entries are filled in round-robin fashion, with all queue numbers appearing uniformly.
     
  • share_tx_intr: Indicates whether or not all Tx queues share one IRQ. The default is off for all NICs.

    For example:

    To enable Tx interrupt sharing in the first adapter but not in the second one:

    modprobe vmxnet3 share_tx_intr=1,0
     
  • buddy_intr: When this is set, the driver pairs each Tx queue with a corresponding Rx queue and uses one IRQ vector for each pair. buddy_intr and share_tx_intr should not both be enabled for a NIC at the same time; share_tx_intr takes precedence in such cases. This option is on by default for all NICs.

    For example:

    This example disables sharing one IRQ between Tx and Rx queues:

    modprobe vmxnet3 buddy_intr=0
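
Because these are load-time parameters, changing them requires reloading the driver. A minimal sketch, with illustrative values (connectivity through all VMXNET3 NICs is interrupted while the module is unloaded, so run this from the virtual machine console rather than over the network):

# rmmod vmxnet3
# modprobe vmxnet3 num_tqs=4 num_rqs=4 share_tx_intr=0
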
Do you always get better performance by enabling multiple queues?
 
Multiple queues allow parallelism and can increase throughput, at the cost of higher CPU usage. If a single queue is sufficient to drive the throughput that you need, enabling multiple queues might not improve performance. In fact, it might result in worse overall performance because of the increased CPU usage.

What VMXNET3 driver version is needed for multiqueue support in Linux?
 
In the driver shipped with Linux, multiqueue is available and is the default choice from version 1.0.16.0-k. To find out which version you have, use this ethtool command:

ethtool -i ethX
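
The output is similar to this (illustrative; the version string depends on your kernel):

driver: vmxnet3
version: 1.0.16.0-k
firmware-version: N/A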

In the driver shipped with VMware Tools, multiqueue is available from VMXNET3 version 1.0.15.0, but you must use the num_tqs and num_rqs module parameters to enable it (as described above). Multiqueue is the default choice from VMXNET3 version 1.0.24.0.

For VMware Tools VMXNET3 driver versions 1.0.15.0 to 1.0.23.x, how can I make the multiqueue configuration the default?
 
You must use the module parameters described above to enable multiqueue. To have the configuration applied automatically on every boot, the parameters can be set in a modprobe configuration file, as sketched below.
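
A minimal sketch, with illustrative queue counts (on distributions that load vmxnet3 from the initramfs, the initramfs may also need to be regenerated after editing the file):

# cat /etc/modprobe.d/vmxnet3.conf
options vmxnet3 num_tqs=4 num_rqs=4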

How do I replace the VMXNET3 driver shipped with Linux with the VMware Tools version?
 
Updates of open-vm-tools are distributed with operating system updates and patches, as well as updates to virtual appliances. For more information, see VMware support for open-vm-tools (2073803).
 
If the version of VMware Tools on your ESXi host shipped more recently than your version of Linux, the driver included in Linux might be outdated.

Install VMware Tools in the Linux guest with the clobber option:

# ./vmware-install.pl --clobber-kernel-modules=vmxnet3
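
After the installation completes and the module has been reloaded (or the guest rebooted), you can verify which driver version is now active, for example:

# modinfo vmxnet3 | grep ^version
# ethtool -i ethX | grep ^version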

The version of the VMXNET3 driver included in Linux does not have module parameters. Does this mean we cannot configure the number of queues in the driver?
 
That is correct. With the driver included in Linux, the number of queues cannot be configured; only the RSS indirection table can be changed, using ethtool (see below).

Can I use ethtool to see or change the RSS indirection table?
 
Yes, you can use ethtool to see or change the RSS indirection table. For example:

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 1 1 1 1 1 1
16: 1 1 1 1 1 1 1 1
24: 1 1 1 1 1 1 1 1

# ethtool -X eth3 equal 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 1 0 1 0 1 0 1
8: 0 1 0 1 0 1 0 1
16: 0 1 0 1 0 1 0 1
24: 0 1 0 1 0 1 0 1

# ethtool -X eth3 weight 6 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 0 0 0 0 0 0
16: 0 0 0 0 0 0 0 0
24: 1 1 1 1 1 1 1 1

# ethtool -X eth3 weight 1 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 1 1 1 1 1 1
16: 1 1 1 1 1 1 1 1
24: 1 1 1 1 1 1 1 1
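
The arithmetic behind the weight examples: the weight option of ethtool -X allocates the 32 indirection-table entries in proportion to the given weights. With weight 6 2, queue 0 receives 6/(6+2) of the 32 entries (24) and queue 1 the remaining 8; with weight 1 2, queue 0 receives roughly a third (10 entries) and queue 1 the rest (22), which matches the tables above.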
