RSS and multiqueue support in Linux driver for VMXNET3 (2020567)

Details

Receive side scaling (RSS) and multiqueue support are included in the VMXNET3 Linux device driver. The VMXNET3 device always supported multiple queues, but the Linux driver used just one Rx and one Tx queue previously.
  • For the VMXNET3 driver shipped with VMware Tools, multiqueue support was introduced in vSphere 5.0.
  • For the VMXNET3 driver included with the Linux operating system, multiqueue support was introduced in Linux kernel version 2.6.37 and later.
Multiqueue is a technique designed to enhance networking performance by allowing the Tx and Rx queues to scale with the number of CPUs in multi-processor systems. Many physical network adapters support multiqueue, and if you have this technology enabled on your NIC, Linux virtual machines running on vSphere 5 can take advantage of it.

Recent versions of VMware Tools have multiqueuing enabled by default, but in earlier versions you must enable it manually. This is done by using modprobe to set the number of transmit and receive queues for each adapter, as described in the Solution section of this article. If you need to change the RSS indirection table, which controls the Rx queue (and therefore the CPU) to which each RSS hash value is dispatched, use modprobe with rss_ind_table (VMware Tools version) or ethtool (Linux version).

For transmit, the Linux kernel selects the Tx queue on which a packet goes out. For receive, RSS computes a hash over the TCP source and destination ports and the IPv4 source and destination addresses; this hash is used to dispatch packets from different flows to different queues.
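
On kernels and driver versions that support the query, the fields used for the receive hash can be inspected with ethtool. Whether the vmxnet3 driver answers this request depends on its version, and the interface name and output below are only illustrative:

# ethtool -n eth0 rx-flow-hash tcp4
TCP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]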

This table summarizes multiqueue/RSS support:

                                        VMXNET3 VMware Tools driver version    VMXNET3 Linux driver version
Multiqueue/RSS support exists           1.0.15.0 and later                      1.0.16.0-k and later
                                        (introduced in vSphere 5.0)             (introduced in Linux kernel 2.6.37)
Support exists, but must be enabled     1.0.15.0 to 1.0.23.x                    --
Enabled by default                      1.0.24.0 and later                      1.0.16.0-k and later

Solution

How do I configure multiqueue and RSS for VMXNET3?

The multiqueue VMXNET3 driver can be configured with module parameters only for the version shipped with VMware Tools; for the version included in Linux, only the indirection table can be configured, using ethtool. These are the load-time parameters and their usage:
  • num_tqs: Number of Tx queues for each adapter. Comma separated list of integers, one for each adapter. When set to 0, the number of Tx queues is made equal to the number of vCPUs. The default is 0.

    Example: If a virtual machine has three VMXNET3 adapters, this command will configure the first VMXNET3 VNIC with four Tx queues, the second VMXNET3 VNIC with one Tx queue, and the third with two Tx queues:

    modprobe vmxnet3 num_tqs=4,1,2

    Note: If the vdNet.log file shows the message vmxnet3: Unknown parameter `num_tqs', check the kernel version. It must be higher than 2.6.25 to use multiple Tx/Rx queues.

  • num_rqs: Number of Rx queues for each adapter. The default is 0.

    Example: The usage is the same as for num_tqs; a combined example follows this list.

  • rss_ind_table: Indirection table for RSS. There should be 32 entries per adapter. Each comma-separated entry is an Rx queue number (queue numbers start at 0). Repeat the same pattern for each additional NIC.

    Example: For a single adapter with four queues, the indirection table can be configured as:

    modprobe vmxnet3 rss_ind_table=0,1,2,3,0,1,2,0,0,1,2,1,0,1,2,2,0,1,2,3,0,1,2,0,0,1,2,1,0,1,2,2

    In this example, queue 3 appears only twice in the table; the remaining slots are taken by the other three queues (0, 1, 2), so fewer packets are dispatched to queue 3. By default, indirection table entries are filled in round-robin fashion so that all queue numbers appear uniformly.

  • share_tx_intr: Indicates whether or not all Tx queues share one IRQ. The default is off for all NICs.

    Example: To enable Tx interrupt sharing in the first adapter but not in the second one:

    modprobe vmxnet3 share_tx_intr=1,0

  • buddy_intr: When this is set, the driver pairs each Tx queue with a corresponding Rx queue and uses one IRQ vector for each pair. buddy_intr and share_tx_intr should not both be enabled for a NIC at the same time; if they are, share_tx_intr takes precedence. This option is on by default for all NICs.

    Example: This example will disable sharing one IRQ between Tx and Rx queues:

    modprobe vmxnet3 buddy_intr=0
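
Because these are load-time parameters, they take effect only when the module is loaded. A typical sequence, sketched below with example queue counts for a virtual machine with two VMXNET3 adapters, is to unload the driver and reload it with the desired settings (this briefly disconnects the vmxnet3 interfaces, so run it from the console or over a non-vmxnet3 connection):

modprobe -r vmxnet3
modprobe vmxnet3 num_tqs=4,2 num_rqs=4,2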

Do you always get better performance by enabling multiple queues?

Multiple queues allow parallelism and increase throughput at the cost of higher CPU usage. If a single queue is sufficient to drive the throughput that you need, enabling multiple queues might not improve performance; it might even reduce overall performance because of the added CPU overhead.

What VMXNET3 driver version is needed for multiqueue support in Linux?

In the driver shipped with Linux, multiqueue is available and is the default choice from version 1.0.16.0-k. To find out which version you have, use this ethtool command:

ethtool -i ethX
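
The output identifies the driver and its version; for example (illustrative values only, the interface name and version string will differ on your system):

driver: vmxnet3
version: 1.0.16.0-k
firmware-version:
bus-info: 0000:0b:00.0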

In the driver shipped with VMware Tools, multiqueue is available from VMXNET3 version 1.0.15.0 and later, but you must enable it with the num_tqs and num_rqs module parameters (as described above). Multiqueue is the default from VMXNET3 version 1.0.24.0.

For VMware Tools VMXNET3 driver versions 1.0.15.0 to 1.0.24.0, how can I make the multiqueue configuration the default?

You must use the module parameters described above to enable multiqueue.
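
To apply the parameters automatically at every boot, they can be placed in a modprobe configuration file. A minimal sketch, assuming the distribution reads /etc/modprobe.d/ (older distributions use /etc/modprobe.conf instead); the file name and queue counts are examples only:

# /etc/modprobe.d/vmxnet3.conf (example: 4 Tx and 4 Rx queues on each of two adapters)
options vmxnet3 num_tqs=4,4 num_rqs=4,4

If the vmxnet3 module is loaded from the initial ramdisk, the ramdisk may need to be regenerated for the options to take effect at boot.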

How do I replace the VMXNET3 driver shipped with Linux with the VMware Tools version?

If you are using a version of ESXi that was shipped more recently than your version of Linux, then the Linux driver might be outdated.

Install VMware Tools in the Linux guest with the clobber option:

# ./vmware-install.pl --clobber-kernel-modules=vmxnet3
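
One way to confirm that the VMware Tools build of the driver is now in place is to query the module information; the VMware Tools version lists the module parameters described above under its parm: entries. The listing below uses placeholders because the exact version strings and parameter descriptions vary:

# modinfo vmxnet3 | grep -E '^(version|parm)'
version:        <VMware Tools VMXNET3 driver version>
parm:           num_tqs:<description>
parm:           num_rqs:<description>
parm:           rss_ind_table:<description>
parm:           share_tx_intr:<description>
parm:           buddy_intr:<description>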

The version of the VMXNET3 driver included in Linux does not have module parameters. Does this mean we cannot configure the number of queues in the driver?

That is correct.

Can I use ethtool to see or change the RSS indirection table?

Yes, you can use ethtool to see or change the RSS indirection table. For example:

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 1 1 1 1 1 1
16: 1 1 1 1 1 1 1 1
24: 1 1 1 1 1 1 1 1

The following command spreads the indirection table entries equally across the two queues:

# ethtool -X eth3 equal 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 1 0 1 0 1 0 1
8: 0 1 0 1 0 1 0 1
16: 0 1 0 1 0 1 0 1
24: 0 1 0 1 0 1 0 1

The weight option distributes the entries in proportion to the given weights; here queue 0 receives 6/8 of the entries and queue 1 the remaining 2/8:

# ethtool -X eth3 weight 6 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 0 0 0 0 0 0
16: 0 0 0 0 0 0 0 0
24: 1 1 1 1 1 1 1 1

With weights 1 and 2, queue 0 receives roughly one third of the entries and queue 1 the rest:

# ethtool -X eth3 weight 1 2

# ethtool -x eth3
RX flow hash indirection table for eth3 with 2 RX ring(s):
0: 0 0 0 0 0 0 0 0
8: 0 0 1 1 1 1 1 1
16: 1 1 1 1 1 1 1 1
24: 1 1 1 1 1 1 1 1
