Configuring Mellanox RDMA I/O Drivers for ESXi 5.x (Partner Verified and Support) (2058261)

This article provides Partner Support instructions for Mellanox software drivers, covering:

  • Installation, configuration, and support of Mellanox Software and Hardware. For a full list of supported devices, see VMware Hardware Compatibility List (HCL).
  • Basic creation of virtual network (vNIC) and virtual HBA (vHBA) adapters on ESX 4.x, ESXi 4.x, and ESXi 5.x hypervisors.

Note: The Partner Verified and Supported Products (PVSP) policy implies that the solution is not directly supported by VMware. For issues with this configuration, contact Mellanox Technologies Inc. directly. For information on how partners can engage with VMware, see the VMware Compatibility Guide. It is the partner's responsibility to verify that the configuration functions with future vSphere major and minor releases, as VMware does not guarantee that compatibility with future releases is maintained.

Disclaimer: The partner product reference in this article is a software module that is developed and supported by a partner. Use of this product is also governed by the end user license agreement of the partner. You must obtain the application, support, and licensing for using this product from the partner. For more information, see

To contact Mellanox Technologies Technical Support directly:


Introduction to Mellanox Technologies Inc.

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at

Mellanox NIC adapters can run RDMA (Remote Direct Memory Access) over InfiniBand, as well as RDMA over Converged Ethernet (RoCE). This allows the hypervisor to run high-speed network and storage drivers for VMkernel services. In addition, future driver releases will support SR-IOV, which will enable virtual machines to use a wide variety of RDMA-based applications directly while bypassing hypervisor software overhead. Contact Mellanox directly to check the release dates.

Single-root I/O virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to virtual machines. vSphere 5.1 and later supports this technology, and it can be enabled on Mellanox NIC adapters, for both InfiniBand and Ethernet. This way, the virtual machine can run native RDMA applications (using InfiniBand or RoCE) and achieve near wire speed performance.
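As an illustrative sketch only, SR-IOV virtual functions are typically requested by setting a module parameter on the Mellanox driver module. The module name mlx4_core and its max_vfs parameter are assumptions here; verify the exact names against the Mellanox driver release notes.

```shell
# Hypothetical example: request 8 SR-IOV virtual functions from the
# mlx4_core module. Module name and parameter are assumptions; check
# the Mellanox driver documentation for your release.
esxcfg-module -s "max_vfs=8" mlx4_core

# Module parameters take effect only after the host is rebooted.
```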

Mellanox solutions include an IP-over-InfiniBand (IPoIB) driver, which spans an IP network on top of an InfiniBand high-speed network. This brings the advantages of InfiniBand technology to the standard Internet Protocol while keeping the same look-and-feel for IP-based applications. IPoIB registers high-performance "vmnic" interfaces, which can then be attached to the vSphere vSwitch and used for various VMkernel services, as well as virtual machine services.
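For example, an IPoIB vmnic can be attached as an uplink using standard esxcli networking commands. The interface and vSwitch names below are placeholders for illustration; use the names reported on your own host.

```shell
# List registered NICs; IPoIB interfaces appear as additional vmnics:
esxcli network nic list

# Attach an IPoIB vmnic as an uplink to an existing standard vSwitch
# (vmnic2 and vSwitch0 are placeholder names):
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch0
```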

RDMA is a natural fit for accelerating storage solutions. With the SRP and iSER drivers, the hypervisor can expose a standard SCSI device that connects to remote storage over RDMA. Other IP-based storage solutions, such as NFS, can also run on top of the IPoIB driver.
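Once the SRP or iSER driver is loaded, the RDMA-backed adapter should appear like any other SCSI HBA. A quick check, as a sketch:

```shell
# List all storage adapters; the SRP/iSER vmhba should appear here:
esxcli storage core adapter list

# List the SCSI devices (LUNs) discovered through those adapters:
esxcfg-scsidevs -c
```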

Note: To use Mellanox RDMA drivers, install vSphere Installation Bundle (VIB) from Mellanox website. The recommended Firmware/Driver compatibility matrix can be found here.

The SRP protocol runs on InfiniBand fabrics, while iSER runs on both InfiniBand and RoCE fabrics.

Installing the Mellanox Driver Bundle

To install the Mellanox driver bundle/vib onto the ESXi 5.x host:
  1. Access the ESXi console as the root user.
  2. From the console:
    1. Run this command to install the Mellanox driver bundle:

      esxcli software vib install -d <path_to_offline_bundle.zip>

    2. Run this command to install the Mellanox driver vib:

      esxcli software vib install -n mlxXXX

    3. Reboot the ESXi host.
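After the reboot, one way to confirm that the bundle installed and the driver registered its interfaces (the "mlx" name match is an assumption about the VIB naming):

```shell
# Confirm the Mellanox VIBs are installed:
esxcli software vib list | grep -i mlx

# Confirm the driver registered its network interfaces:
esxcli network nic list
```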

To update the Mellanox driver bundle/vib onto the ESXi 5.x host:

  1. Access the ESXi console as the root user.
  2. From the console:
    1. Run this command to update the Mellanox driver bundle:

      esxcli software vib update -d <path_to_offline_bundle.zip>

    2. Run this command to update the Mellanox driver vib:

      esxcli software vib update -n mlxXXX

    3. Reboot the ESXi host.
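
Before applying an update, esxcli can report what would change without modifying the host. A sketch (the bundle path is a placeholder):

```shell
# Preview the update without installing anything:
esxcli software vib update --dry-run -d <path_to_offline_bundle.zip>
```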

To enable or disable an RDMA module at boot time:
  1. Access the ESXi console as the root user.
  2. Run this command to list the loaded modules to show all modules names:

    esxcfg-module -l

  3. Run this command to enable the given module:

    esxcfg-module -e [module name]

  4. Run this command to disable the given module:

    esxcfg-module -d [module name]
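
The current state of a module can also be inspected. For example (mlx4_core is an assumed module name; use the names reported by the module list above):

```shell
# Show the configured options for a module:
esxcfg-module -g mlx4_core

# Check whether the module is currently loaded in the VMkernel:
vmkload_mod -l | grep -i mlx
```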

Getting logs from the ESXi host:

To gather diagnostic information, troubleshoot issues, and understand the setup from the support side, run this command:


This command gathers required logs from various files, core-dump (if present), and information on the state of the virtual machines.

Mellanox driver logs are located in:
/var/log/vmkernel.log
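
To pull only the Mellanox driver messages out of the VMkernel log, a simple filter works (the "mlx" pattern is an assumption about the driver's log prefix):

```shell
# Show Mellanox-related entries from the VMkernel log:
grep -i mlx /var/log/vmkernel.log
```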


