VMXNET3 resource considerations on a Linux virtual machine that has vSphere DirectPath I/O with vMotion enabled

Article ID: 339908


Products

VMware vSphere ESXi

Issue/Introduction

In a Cisco Unified Computing System (UCS) that has Cisco Virtual Machine Fabric Extender (VM-FEX) deployed, in an adapter policy you can configure resources and offload settings for VMXNET3 adapters and apply the policy to multiple hosts. The policy must have enough queue and interrupt resources to satisfy the VMXNET3 driver configuration in the guest operating system. If you use the vSphere DirectPath I/O with vMotion passthrough technology for high networking performance, the number of transmit (Tx) queues, receive (Rx) queues, completion queues (CQs), and interrupt vectors set in the VMWarePassThru adapter policy in the UCS Manager must match the numbers that are configured for the VMXNET3 driver in the guest operating system.

For information about configuring Cisco VM-FEX on vSphere, see the Cisco UCS B-Series Servers documentation.

For best practices in configuring Cisco VM-FEX and vSphere, including vSphere DirectPath I/O with vMotion, see the Cisco VM-FEX Best Practices for VMware ESX Environment Deployment Guide. For troubleshooting the Cisco VM-FEX switch on vSphere, see the Cisco VM-FEX Using VMware ESX Environment Troubleshooting Guide.

For information about the resource considerations for using VMXNET3 with vSphere DirectPath I/O with vMotion on Windows virtual machines, see VMXNET3 resource considerations on a Windows virtual machine that has vSphere DirectPath I/O with vMotion enabled (2061598).


Environment

VMware vSphere ESXi 5.0
VMware vSphere ESXi 5.1
VMware vSphere ESXi 5.5

Resolution

Resource allocation for VMXNET3 networking in vSphere DirectPath I/O with vMotion

Transmit, receive, and completion queues

For a multiprocessor system, such as an ESXi host, increasing the number of queues increases the amount of data that is transmitted and received in parallel. Transmitting and receiving data in parallel increases throughput and reduces latency. The packets in a particular transmit queue or receive queue can be processed by a specific virtual CPU. For improved performance, you can use as many queues as there are virtual CPUs in the virtual machine.
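To size the queue count against the vCPUs, you can check how many vCPUs the guest operating system sees, for example with the nproc console command:

nproc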

On a Linux guest operating system, you can examine and configure the number of Tx and Rx queues. You cannot see or change the number of completion queues, but you can use the number of Tx and Rx queues to calculate it:

Number of CQs = Number of Tx queues + Number of Rx queues 
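For example, a VMXNET3 adapter that is configured with 4 Tx queues and 4 Rx queues uses 4 + 4 = 8 completion queues, so the adapter policy in the UCS Manager must provide at least 8 CQs for that adapter.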

Interrupt vectors

To send or receive packets simultaneously in multiple queues on a host, processing of each queue must be triggered by an individual interrupt line. In a system that requires many interrupts, MSI or MSI-X interrupts must be used instead of legacy line-based (INTx) interrupts. The default interrupt mode for the VMXNET3 adapter in passthrough mode is MSI-X. In passthrough mode, VMXNET3 adapters do not work with INTx interrupts.

On the host, interrupt modes are represented as numbers.

Interrupt Mode   Number
MSI-X            3
MSI              2
INTx             1
auto             0

One MSI or MSI-X interrupt vector is typically assigned to one queue. When a VMXNET3 adapter that is used for vSphere DirectPath I/O with vMotion sends or receives data, the interrupt vectors that are assigned to the adapter are allocated directly on the physical host. An ESXi host has a finite number of interrupt vectors for I/O device operations. Under certain conditions, the number of vectors might be exhausted when a great number of virtual I/O device requests are generated, for example, in workloads with high packet rates and many queues.
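To see the interrupt vectors that a VMXNET3 interface has claimed inside the guest, you can inspect /proc/interrupts. This is a quick check, assuming the interface is named eth0 and that the kernel labels the vectors with the interface name:

cat /proc/interrupts | grep eth0

Each matching line corresponds to one allocated interrupt vector.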

Viewing the number of transmit and receive queues

You can determine the number of Tx and Rx queues allocated for a VMXNET3 adapter by running the ethtool console command in the Linux guest operating system:

ethtool -S ethX

where X in ethX stands for the number of the interface that is reserved for the VMXNET3 adapter.
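When the driver exposes per-queue statistics (as in Situation 2 below), you can count the Tx queues by filtering the same output; a count of 0 indicates the single-queue output format shown in Situation 1:

ethtool -S eth0 | grep -c "Tx Queue#"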

Situation 1: VMXNET3 adapter eth0 that has one Tx queue and one Rx queue

When you run the ethtool -S eth0 console command, you see an output of statistics similar to the following:

NIC statistics:
TSO pkts tx: 0
TSO bytes tx: 0
ucast pkts tx: 391
ucast bytes tx: 38413
mcast pkts tx: 44
mcast bytes tx: 7096
bcast pkts tx: 2
bcast bytes tx: 406
pkts tx err: 0
pkts tx discard: 0
drv dropped tx total: 0
too many frags: 0
giant hdr: 0
hdr err: 0
tso: 0
ring full: 0
pkts linearized: 0
hdr cloned: 0
giant hdr: 0
LRO pkts rx: 0
LRO byte rx: 0
ucast pkts rx: 329
ucast bytes rx: 36169
mcast pkts rx: 1200
mcast bytes rx: 257433
bcast pkts rx: 7976
bcast bytes rx: 294369
pkts rx out of buf: 12
pkts rx err: 0
drv dropped rx total: 0
err: 0
fcs: 0
rx buf alloc fail: 0
tx timeout count: 0

Situation 2: VMXNET3 adapter eth0 that has two Tx queues and two Rx queues

When you run the ethtool -S eth0 console command, you see the following statistics:

NIC statistics:
Tx Queue#: 0
TSO pkts tx: 0
TSO bytes tx: 0
ucast pkts tx: 49
ucast bytes tx: 6935
mcast pkts tx: 19
mcast bytes tx: 3627
bcast pkts tx: 1
bcast bytes tx: 342
pkts tx err: 0
pkts tx discard: 0
drv dropped tx total: 0
too many frags: 0
giant hdr: 0
hdr err: 0
tso: 0
ring full: 0
pkts linearized: 0
hdr cloned: 0
giant hdr: 0
Tx Queue#: 1
TSO pkts tx: 0
TSO bytes tx: 0
ucast pkts tx: 1
ucast bytes tx: 74
mcast pkts tx: 2
mcast bytes tx: 108
bcast pkts tx: 0
bcast bytes tx: 0
pkts tx err: 0
pkts tx discard: 0
drv dropped tx total: 0
too many frags: 0
giant hdr: 0
hdr err: 0
tso: 0
ring full: 0
pkts linearized: 0
hdr cloned: 0
giant hdr: 0
Rx Queue#: 0
LRO pkts rx: 0
LRO byte rx: 0
ucast pkts rx: 80
ucast bytes rx: 9045
mcast pkts rx: 0
mcast bytes rx: 0
bcast pkts rx: 402
bcast bytes rx: 110036
pkts rx out of buf: 0
pkts rx err: 0
drv dropped rx total: 0
err: 0
fcs: 0
rx buf alloc fail: 0
Rx Queue#: 1
LRO pkts rx: 0
LRO byte rx: 0
ucast pkts rx: 6
ucast bytes rx: 642
mcast pkts rx: 1
mcast bytes rx: 82
bcast pkts rx: 263
bcast bytes rx: 89464
pkts rx out of buf: 0
pkts rx err: 0
drv dropped rx total: 0
err: 0
fcs: 0
rx buf alloc fail: 0
tx timeout count: 0

Determining the interrupt mode and the number of used interrupt vectors

To determine the type of the interrupt vectors used by a VMXNET3 adapter, run the dmesg console command in the guest operating system and filter the output with grep for the interface that is associated with the adapter.

The output from the dmesg command contains the numeric representation of the interrupt mode.

Situation 1: VMXNET3 adapter eth2 that uses one interrupt vector of type MSI-X

When you run the console command

dmesg | grep eth2

you see output similar to the following:

[ 1.690164] eth2: NIC Link is Up 10000 Mbps
[ 29.290465] eth2: intr type 3, mode 0, 1 vectors allocated
[ 29.291314] eth2: NIC Link is Up 10000 Mbps
[ 88.063482] eth2: no IPv6 routers present

Situation 2: VMXNET3 adapter eth0 that uses one interrupt vector of type MSI

When you run the console command

dmesg | grep eth0

you see output similar to the following:

[ 7.076019] eth0: NIC Link is Up 10000 Mbps
[ 47.360588] eth0: intr type 2, mode 0, 1 vectors allocated
[ 47.372722] eth0: NIC Link is Up 10000 Mbps
[ 187.143713] eth0: no IPv6 routers present
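To list the interrupt mode of every interface at once, you can filter on the log text shown in the examples above:

dmesg | grep "intr type"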

Setting the number of Tx and Rx queues

To configure the number of Tx and Rx queues for the VMXNET3 adapters of a virtual machine, set the num_tqs and num_rqs parameters of the VMXNET3 driver module.

num_tqs=x,y,z
num_rqs=x,y,z

num_tqs and num_rqs are comma-separated lists of integer values where each entry is the number of Tx or Rx queues for an adapter. You can set 1, 2, 4, or another power of two, up to the number of vCPUs of the virtual machine. To make the number of Tx or Rx queues equal to the number of vCPUs, set a value of 0 for the adapter. The default value for an adapter is 0.

To set the num_tqs or num_rqs parameter of the VMXNET3 driver on the Linux guest operating system, run the modprobe command. For example, in a virtual machine that has three VMXNET3 adapters, to configure the number of Tx queues for each of them, run the following command:

modprobe vmxnet3 num_tqs=4,1,1 
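To make the setting persist across reboots, you can instead place the module options in a modprobe configuration file. A minimal sketch, assuming a distribution that reads /etc/modprobe.d/ and reusing the same example queue counts:

# /etc/modprobe.d/vmxnet3.conf
options vmxnet3 num_tqs=4,1,1 num_rqs=4,1,1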

Changing the interrupt mode

To change the interrupt mode for a VMXNET3 adapter, set the ethernetX.intrMode parameter in the virtual machine configuration file (.vmx) to the number of the relevant interrupt mode. The X next to ethernet stands for the sequence number of the adapter in the virtual machine.
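For example, to set the first adapter to MSI mode, add the following line to the .vmx file, using the number 2 from the interrupt mode table above (the adapter index 0 is only an illustration):

ethernet0.intrMode = "2"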

In MSI mode, a VMXNET3 adapter uses only one interrupt, and the multiqueue functionality is off.

You can edit the ethernetX.intrMode parameter by using the vSphere Web Client or the vSphere Client to modify the virtual machine settings, or directly in the configuration file. See the vSphere Virtual Machine Administration documentation.

Sharing interrupt resources between Tx and Rx queues

You can use one of the following methods for sharing interrupt vectors:

  • Shared Tx interrupt vector. Share an interrupt vector among all Tx queues of a VMXNET3 adapter.

    To enable or disable Tx queue interrupt vector sharing for one or more VMXNET3 adapters, edit the share_tx_intr parameter of the VMXNET3 driver module by running the modprobe console command. You must set a comma-separated list of 0 and 1 values to the share_tx_intr parameter. Each entry represents the shared Tx queue interrupt vector mode of an adapter. Set a value of 1 to enable Tx interrupt vector sharing, and 0 to disable it.

    For example, in a virtual machine that has two VMXNET3 adapters, to enable Tx interrupt sharing for the first adapter and to disable it for the second, run the following command:

    modprobe vmxnet3 share_tx_intr=1,0

  • Shared Tx-Rx interrupt vectors. Pair a Tx queue with a corresponding Rx queue and use a common interrupt vector for each pair.

    To enable or disable Tx-Rx interrupt vector sharing for one or more VMXNET3 adapters, edit the buddy_intr parameter of the VMXNET3 driver module by running the modprobe console command. You must set a comma-separated list of 0 and 1 values to the buddy_intr parameter. Each entry represents the shared Tx-Rx interrupt mode of an adapter. Set a value of 0 to disable Tx-Rx interrupt vector sharing, and 1 to enable it. By default, shared Tx-Rx interrupt vectors are enabled.
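    For example, in a virtual machine that has two VMXNET3 adapters, to disable Tx-Rx interrupt vector sharing for the first adapter and keep it enabled for the second, run the following command:

    modprobe vmxnet3 buddy_intr=0,1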

The final number of queues also depends on the number of vCPUs. Run the ethtool command in the guest operating system to verify the number of Tx and Rx queues.

Note: Do not activate shared Tx interrupt vectors and shared Tx-Rx interrupt vectors for a VMXNET3 adapter at the same time. If both are enabled, the share_tx_intr parameter takes precedence.

Resource requirements for specific VMXNET3 driver versions

VMXNET3 driver versions that support vSphere DirectPath I/O with vMotion require certain queue and interrupt resources on Linux guest operating systems.

Driver version: 1.0.14.0-k-NAPI or later
Operating systems: Red Hat Enterprise Linux 6; SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2, or 11 SP3
Interrupt mode: MSI-X
  • Number of Tx queues: 1
  • Number of Rx queues: 1
  • Number of interrupt vectors: 1

Driver version: 1.0.36.0-NAPI
Operating systems: Red Hat Enterprise Linux 6; SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2, or 11 SP3
Interrupt mode: MSI-X
  • Number of Tx queues: number of vCPUs
  • Number of Rx queues: number of vCPUs
  • Number of interrupt vectors: number of vCPUs + 1

Driver version: 1.0.13.0 or later
Operating systems: Red Hat Enterprise Linux 6; SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2, or 11 SP3
Interrupt mode: MSI
  • Number of Tx queues: 1
  • Number of Rx queues: 1
  • Number of interrupt vectors: 1
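To check which VMXNET3 driver version a guest is running, so that you can match it against the requirements above, query the driver information for the interface (eth0 here is an example interface name):

ethtool -i eth0

The output includes the driver name (vmxnet3) and its version string.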
