vmkapi version removal and Installing/upgrading implication with ESXi 7.0



Article ID: 318024


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:
From ESXi 7.0 GA onwards, if you attempt to upgrade a host from ESXi 6.5 or ESXi 6.7 to ESXi 7.0 while any of the async driver VIBs listed in Table-1 is installed, the upgrade fails with a vmkapi DependencyError.

If your host requires one of the impacted device/driver combinations, a fresh install can also be affected: attempting to install or update any of the 6.x driver VIBs listed in Table-1 on a host running ESXi 7.0 GA or later fails with the same vmkapi DependencyError.

If you attempt to upgrade a host from ESXi 6.5 or ESXi 6.7 to ESXi 7.0 while deprecated vmklinux driver VIBs without native driver replacements in the ESXi 7.0 image are installed, the upgrade fails with vmkapi and additional dependency errors.

Note: customers upgrading from ESXi 6.5 or ESXi 6.7 without the drivers listed in Table-1 are not affected by this issue when using a stock1 ESXi 7.0 image from VMware or a partner-supplied stock2 image. Only customers who use one or more of the async drivers from Table-1, and whose target 7.0 image does not include a higher version of each of those drivers, may be affected.
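Before starting the upgrade, you can check the host's installed VIBs against the driver names in Table-1. Below is a minimal shell sketch of that check; the sample inventory, the file paths, and the abbreviated driver list are illustrative, not output from a real host.

```shell
# Sketch: flag installed VIBs whose driver names appear in Table-1.
# On a live ESXi host you would capture the real inventory first:
#   esxcli software vib list > /tmp/vibs.txt
# The sample inventory below is illustrative so the logic can run anywhere.
cat > /tmp/vibs.txt <<'EOF'
qedi    2.10.15.0-1OEM.670.0.0.8169922  QLC  VMwareCertified  2019-06-01
ne1000  0.8.4-10vmw.670.0.0.8169922     VMW  VMwareCertified  2019-06-01
EOF

# Driver names taken from Table-1 (abbreviated here; see the full table).
IMPACTED="qfle3f qedi qedf sfvmk igbn ixgben i40en intel-nvme hinic lpnic smartpqi"

: > /tmp/impacted_vibs.txt
for drv in $IMPACTED; do
    # A VIB name at the start of a line, followed by whitespace, is a match.
    grep "^${drv}[[:space:]]" /tmp/vibs.txt >> /tmp/impacted_vibs.txt
done
cat /tmp/impacted_vibs.txt
```

Any line this prints corresponds to a driver you should look up in Table-1 before upgrading.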

The following are examples of the errors you may encounter.
  • Dependency errors due to vmkapi versions deprecated in vSphere 7.0:
    [DependencyError]
    ...
    VIB QLC_bootbank_qedi_2.10.15.0-1OEM.670.0.0.8169922 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile
    ...
    VIB QLC_bootbank_qedf_1.3.35.0-1OEM.600.0.0.2768847 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB QLC_bootbank_qedf_1.3.35.0-1OEM.600.0.0.2768847 requires vmkapi_2_3_0_0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB QLC_bootbank_qedf_1.3.35.0-1OEM.600.0.0.2768847 requires qedentv_ver = X.11.6.0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB QLC_bootbank_qedi_2.10.15.0-1OEM.670.0.0.8169922 requires qedentv_ver = X.11.6.0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB QLC_bootbank_qfle3f_1.0.68.0-1OEM.650.0.0.4598673 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB SFC_bootbank_sfvmk_2.3.3.1002-1OEM.650.0.0.4598673 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
  • Dependency errors due to deprecated vmklinux drivers without native replacement drivers:
    [DependencyError]
    ...
    VIB HPE_bootbank_scsi-hpdsa_5.5.0.54-1OEM.550.0.0.1331820 requires com.vmware.driverAPI-9.2.2.0, but the requirement cannot be satisfied within the ImageProfile.
    VIB HPE_bootbank_scsi-hpvsa_5.5.0.102-1OEM.550.0.0.1331820 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
    VIB HPE_bootbank_scsi-hpvsa_5.5.0.102-1OEM.550.0.0.1331820 requires com.vmware.driverAPI-9.2.2.0, but the requirement cannot be satisfied within the ImageProfile.
    VIB HPE_bootbank_scsi-hpdsa_5.5.0.54-1OEM.550.0.0.1331820 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
    ...
    VIB EMU_bootbank_scsi-be2iscsi_11.1.145.8-1OEM.600.0.0.2494585 requires com.vmware.driverAPI-9.2.3.0, but the requirement cannot be satisfied within the ImageProfile.
    VIB EMU_bootbank_scsi-be2iscsi_11.1.145.8-1OEM.600.0.0.2494585 requires vmkapi_2_3_0_0, but the requirement cannot be satisfied within the ImageProfile.
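If an upgrade has already failed, the driver names that Table-1 is keyed on can be pulled out of the captured DependencyError text. Below is a small sketch using the example errors above as input; the file path and the assumed VIB-ID layout (vendor_bootbank_driver_version, so the third underscore-separated field is the driver name) are inferred from the samples shown, not from a specification.

```shell
# Sketch: extract the impacted driver names from a captured DependencyError
# so they can be matched against Table-1. The log text is from the examples
# in this article.
cat > /tmp/dep_err.txt <<'EOF'
VIB QLC_bootbank_qedi_2.10.15.0-1OEM.670.0.0.8169922 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
VIB SFC_bootbank_sfvmk_2.3.3.1002-1OEM.650.0.0.4598673 requires vmkapi_2_2_0_0, but the requirement cannot be satisfied within the ImageProfile.
EOF

# VIB IDs look like <vendor>_bootbank_<driver>_<version>; field 3
# (underscore-separated) is the driver name used in Table-1.
sed -n 's/^VIB \([^ ]*\) requires .*/\1/p' /tmp/dep_err.txt |
    awk -F_ '{ print $3 }' | sort -u
```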


Environment

VMware vSphere ESXi 7.0.0

Cause

This issue occurs because the listed drivers were built with dependencies on older vmkapi versions (vmkapi_2_2_0_0 and vmkapi_2_3_0_0), and these vmkapi versions were removed in ESXi 7.0 GA and later.

Resolution

  • If you are using an OEM/partner-supplied ISO image, ensure that a higher version of the impacted driver is available in the partner-supplied ISO image.
  • If a higher version of the impacted driver is not available in the partner-supplied custom ISO, download the recommended driver version (per Table-1) and update the driver(s) on your host before attempting the upgrade/install to ESXi 7.0.
  • To choose the recommendation from Table-1 that meets your needs: Option 1 is highly recommended; use Option 2 or 3 only if Option 1 is not available for your server model or does not work.
  • If the impacted driver is a deprecated vmklinux driver and a replacement native driver is not available in the partner-supplied custom image, download the replacement native driver (if available) and update the native driver(s) on your host before attempting the upgrade to ESXi 7.0.
    • If no replacement native driver is available and the impacted vmklinux driver VIBs are not in use by the devices on the host, you can use the workaround solution in Table-1 to proceed with the upgrade.
    • If these old vmklinux drivers are in use, the devices using them are most likely no longer supported and you cannot upgrade to ESXi 7.0.
  • To check whether the required drivers are compatible with your adapters, see the VMware Compatibility Guide.
    • Download the required drivers from https://www.vmware.com . Click Downloads > vSphere. Select the applicable ESXi version. Click Drivers & Tools. Click Driver CDs. Find the required driver version. Click the GO TO DOWNLOADS link.
  • Check for and download the required OEM Custom Images from https://www.vmware.com. Click Downloads > vSphere. Select version 7.0. Click Custom ISOs & Addons. Click OEM Customized Installer CDs. Find the required OEM Custom Image version. Click the Go To Downloads link.
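When deciding whether an installed driver already meets a Table-1 recommendation, a dotted-version comparison is all that is needed. The sketch below is a hedged illustration: version_ge is a local helper (not an esxcli feature) that relies on GNU sort -V, and the sfvmk versions are taken from Table-1.

```shell
# Sketch: compare an installed driver version against the Table-1 minimum.
# version_ge is a local helper, not part of esxcli; requires GNU sort -V.
version_ge() {
    # true (exit 0) if $1 >= $2 under dotted-version ordering
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

installed="2.3.3.1002"   # e.g. parsed from: esxcli software vib get -n sfvmk
required="2.3.3.1016"    # minimum recommended sfvmk version per Table-1

if version_ge "$installed" "$required"; then
    echo "sfvmk $installed meets the recommendation"
else
    echo "sfvmk $installed is impacted; update to $required or later first"
fi
# prints: sfvmk 2.3.3.1002 is impacted; update to 2.3.3.1016 or later first
```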
Table-1: List of impacted drivers/VIBs with deprecated vmkAPIs

Driver Name

Driver Version(s) Impacted

Description

Vendor

Recommendations

qfle3f

1.1.3.0-1OEM or older

QLogic E3 native FCoE storage driver

Marvell/
QLogic

1. Use an ESXi 7.0 OEM Custom Image which includes QLogic Network/iSCSI/FCoE Driver Set v3.0.121.0 for ESXi 7.0 (contains qfle3f-2.1.6.0-1OEM) or a later version

2a. i) On ESXi-6.7x hosts, first update to the ESXi-6.7 qfle3f-1.1.6.0-1OEM driver (part of QLogic Network/iSCSI/FCoE Driver Set v2.0.119.0) or a later version.
       ii)  Upgrade to ESXi 7.0
2b. i) On ESXi-6.5x hosts, first update to the ESXi-6.5 qfle3f-1.1.6.0-1OEM driver (part of QLogic Network/iSCSI/FCoE Driver Set v1.0.119.0) or a later version.
       ii)  Upgrade to ESXi 7.0

3.  i)  Create an ESXi 7.0 Customized Installer ISO by adding QLogic Network/iSCSI/FCoE Driver Set v3.0.121.0 (contains qfle3f-2.1.6.0-1OEM) or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

qedi

2.10.19.0-1OEM or older

QLogic FastlinQ E4 native iSCSI storage driver

Marvell/
QLogic

1. Use an ESXi 7.0 OEM Custom Image which includes Marvell FastLinQ Network/iSCSI/FCoE Driver Set v5.0.189 (contains qedf-2.2.4.0-1OEM, qedi-2.19.5.0-1OEM) or a later version.

2.  i)  Create an ESXi 7.0 Customized Installer ISO by adding Marvell FastLinQ Network/iSCSI/FCoE Driver Set v5.0.189 (contains qedf-2.2.4.0-1OEM, qedi-2.19.5.0-1OEM) or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

3. Use Workaround-1 if the NIC/adapters on the host do not use these drivers

qedf

1.3.42.2-1OEM or older

QLogic FastLinQ E4 native FCoE storage driver

Marvell/
QLogic

1. Use an ESXi 7.0 OEM Custom Image which includes Marvell FastLinQ Network/iSCSI/FCoE Driver Set v5.0.189 (contains qedf-2.2.4.0-1OEM, qedi-2.19.5.0-1OEM) or a later version.

2.  i)  Create an ESXi 7.0 Customized Installer ISO by adding Marvell FastLinQ Network/iSCSI/FCoE Driver Set v5.0.189 (contains qedf-2.2.4.0-1OEM, qedi-2.19.5.0-1OEM) or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

3. Use Workaround-1 if the NIC/adapters on the host do not use these drivers

sfvmk

2.3.3.1002-1OEM or older

Solarflare XtremeScale Ethernet driver

Solarflare

1. Use an ESXi 7.0 OEM Custom Image which includes the Solarflare-NIC sfvmk-2.3.3.1016-1OEM 6.7 driver component or a later version

2a. i) On ESXi-6.7x hosts, first update to the ESXi-6.7 sfvmk-2.3.3.1016-1OEM driver (part of the Solarflare-NIC sfvmk-2.3.3.1016-1OEM driver component) or a later version.
       ii)  Upgrade to ESXi 7.0
2b. i) On ESXi-6.5x hosts, first update to the ESXi-6.5 sfvmk-2.3.3.1016-1OEM driver or a later version.
       ii)  Upgrade to ESXi 7.0

3.  i)  Create an ESXi 7.0 Customized Installer ISO by adding the Solarflare-NIC sfvmk-2.3.3.1016-1OEM 6.7 driver component or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

igbn

ESXi-6.0 1.4.10-1OEM or older

NIC Driver for Intel Ethernet Controllers 82580,I210,I350 and I354 family

Intel

1. Use ESXi 7.0 OEM Custom Image which includes  ESXi-7.0 igbn 1.4.11.0-1OEM NIC Driver or later version

2a. On ESXi-6.7x hosts, if installed igbn version  is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built igbn driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
2b. i) On ESXi-6.5x hosts, first update to the ESXi-6.5 built igbn-1.5.1.0-1OEM driver or a later version.
       ii)  Upgrade to ESXi 7.0

3.  i)  Create an ESXi 7.0 Customized Installer ISO by adding the ESXi-7.0 igbn 1.4.11.0-1OEM NIC Driver or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

ixgben

ESXi-6.0 1.7.20-1OEM or older

NIC Driver for Intel Ethernet Controllers 82599, x520, x540, x550, x552 and x553 family

Intel

1. Use ESXi 7.0 OEM Custom Image which includes ESXi-7.0 ixgben 1.8.9.0-1OEM NIC Driver or later version

2a. On ESXi-6.7x hosts, if installed ixgben version  is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built ixgben driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
2b. On ESXi-6.5x hosts, if installed ixgben version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built ixgben driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

3.  i)  Create an ESXi 7.0 Customized Installer ISO by adding the ESXi-7.0 ixgben 1.8.9.0-1OEM NIC Driver or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

i40en

ESXi-6.0 1.9.5-1OEM or older

NIC Driver for Intel Ethernet Controllers X710, XL710, XXV710,and X722 family

Intel

1. Use ESXi 7.0 OEM Custom Image which includes ESXi-7.0 i40en 1.10.9.0-1OEM NIC Driver or later version

2a. On ESXi-6.7x hosts, if installed i40en version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built i40en driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
2b. On ESXi-6.5x hosts, if installed i40en version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built i40en driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

3.  i)  Create an ESXi 7.0 Customized Installer ISO by adding the ESXi-7.0 i40en 1.10.9.0-1OEM NIC Driver or a later version
     ii) Upgrade to ESXi 7.0 using this Custom Image/ISO

intel-nvme

ESXi-6.0 1.3.2.8-1OEM or older

NVMe Driver for Intel(R) DC Series NVM Express Solid-State Drives

Intel

1. On ESXi-6.7x or 6.5x hosts, if installed intel-nvme version  is  *.650, you are not impacted. You can proceed to step ii)
        i) If the installed version is *-1OEM.600. or *-1OEM.550., first update to an ESXi-6.5 built intel-nvme driver version >= the currently installed version
        ii)  Upgrade to ESXi 7.0

hiodriver

ESXi-6.0 5.0.2.3-1OEM or older

NVMe Driver for Huawei SS8210 PCIe SSD

Huawei

1. Use Workaround-2, or
2. Fresh install of ESXi 7.0

hinic

ESXi-6.0 1.8.2.8-1OEM or older

NIC Driver for Huawei IN200 4*25Gb Ethernet Controller

Huawei

1a. On ESXi-6.7x hosts, if installed hinic version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built hinic driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed hinic version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built hinic driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

native-shannon

ESXi-6.0 2.9.0-1OEM or older

SCSI driver for Shannon Direct-IO PCIe SSD adapters

Shannon 

1. Use Workaround-2 or
2. Fresh install of ESXi 7.0

iomemory-vsl4

ESXi-6.0 4.3.7.1205-1OEM or older

SCSI Driver for ioMemory adapters

Fusion-IO
(SanDisk, WD)

1a. On ESXi-6.7x hosts, if installed iomemory-vsl4 version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built iomemory-vsl4 driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed iomemory-vsl4 version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built iomemory-vsl4 driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

scsi-celerity16fc

ESXi 5.5(6.0) 2.11-1OEM or older

Celerity FC-16XX/32XX Fibre Channel Host Bus Adapters

ATTO

1. On ESXi-6.7x or 6.5x hosts, if installed scsi-celerity16fc version  is  *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is  *-1OEM.550., first update ESXi-6.5 built scsi-celerity16fc driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

cxl

ESXi-6.0 1.1.0.11-1OEM or older

NIC Driver for Chelsio 1/10/25/40/50/100Gb PCI Express Ethernet adapters

Chelsio

1a. On ESXi-6.7x hosts, if installed cxl version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built cxl driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed cxl version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built cxl driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

ftsys-msgpt3

ESXi-6.0 6.0.2.283-1OEM or older

SAS Driver for Stratus hardened LSI Logic Fusion-MPT 12G

Stratus

1a. On ESXi-6.7x hosts, if installed ftsys-msgpt3 version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built ftsys-msgpt3 driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed ftsys-msgpt3 version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built ftsys-msgpt3 driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

ftsys-qlnativefc

ESXi-6.0 6.0.2.280-1OEM or older

FC Driver for Stratus hardened QLogic FC HBA

Stratus

1a. On ESXi-6.7x hosts, if installed ftsys-qlnativefc version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600. or *-1OEM.550., first update ESXi-6.7 built ftsys-qlnativefc driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed ftsys-qlnativefc version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600. or *-1OEM.550., first update ESXi-6.5 built ftsys-qlnativefc driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

liquidio

ESXi-6.0 1.7.0.1-1OEM or older

NIC Driver for Cavium LiquidIO 23XX 25/10G Ethernet Controllers

Cavium 

1. On ESXi-6.7x or 6.5x hosts, if installed liquidio version  is  *.650, you are not impacted. You can proceed to step ii)
        i-a) If installed version is  *-1OEM.600. and its version is <= latest 6.5 version, first update ESXi-6.5 built liquidio driver version >= current installed version
        i-b) If installed version is  *-1OEM.600. and its version is >  latest version 6.5 driver, go to option 2 or 3.
        ii)  Upgrade to ESXi 7.0

2a. Use Workaround-1 if the NIC/adapters on the host do not use these drivers
2b. Use Workaround-2

3. Fresh Install of ESXi 7.0

lpnic

ESXi-6.0 1.4.385.0-1OEM

Broadcom/Emulex LP16x NIC driver

Broadcom/ Emulex

1a. On ESXi-6.7x hosts, if installed lpnic version is *.670. or *.650, you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.7 built lpnic driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if installed lpnic version  is *.650. , you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600., first update ESXi-6.5 built lpnic driver version >= current installed version
        ii)  Upgrade to ESXi 7.0

smartpqi

ESXi-6.5 1.0.4.3017-1OEM
ESXi-6.0 1.0.4.3017-1OEM

Microsemi Storage Controller driver

Microchip/ Microsemi

1a. On ESXi-6.7x hosts, if installed smartpqi version is *.670., you are not impacted. You can proceed to step ii)
        i) If installed version is *-1OEM.600. or *-1OEM.650., first update ESXi-6.7 built smartpqi driver version >= current installed version
        ii)  Upgrade to ESXi 7.0
1b. On ESXi-6.5x hosts, if the installed smartpqi version is <= 1.0.4.3011, you are not impacted. You can proceed to step ii)
        i) If the installed version is >= 1.0.4.3017-1OEM and this driver is used by a storage controller on your host, you CANNOT upgrade to 7.0. Use option 3
        ii)  Upgrade to ESXi 7.0

2. Use Workaround-1 if the storage controllers/devices on the host do not use these drivers

3. Fresh install of ESXi 7.0


Notes
  • The above list may not be exhaustive.
  • If ESXi release is not specified, impacted driver versions are from all of the applicable 6.x releases.


Workaround:
On your ESXi 6.x hosts, check whether the impacted drivers are in use with the following commands.

For storage drivers, use:
# esxcli storage core adapter list |grep -i driver_name

For network drivers, use:
# esxcli network nic list |grep -i driver_name

If the command output lists the driver(s), they are IN-USE by devices on the host. If the command returns nothing, the driver(s) are NOT in use.
If you are not sure, you can also run these commands without the grep filter and compare the drivers in the output with Table-1 under "Resolution":
# esxcli storage core adapter list
# esxcli network nic list
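The two checks above can be combined into a single pass over all impacted drivers. Below is a sketch that works on captured command output so its logic runs anywhere; the sample adapter and NIC lines, file paths, and the abbreviated driver list are illustrative, not real host output.

```shell
# Sketch: classify each impacted driver as IN-USE or not, from captured output.
# On a live ESXi host, replace the here-docs with the real commands above:
#   esxcli storage core adapter list > /tmp/adapters.txt
#   esxcli network nic list > /tmp/nics.txt
cat > /tmp/adapters.txt <<'EOF'
vmhba0  qedf  link-up  fc.20:00:00:24:ff:7f:00:01  (0000:3b:00.2)
EOF
cat > /tmp/nics.txt <<'EOF'
vmnic0  0000:3b:00.0  ixgben  Up  Up  10000  Full  ...
EOF

: > /tmp/driver_use.txt
for drv in qedf qedi ixgben sfvmk; do    # subset of Table-1, for illustration
    # -w matches the driver name as a whole word in either listing
    if grep -qw "$drv" /tmp/adapters.txt /tmp/nics.txt; then
        echo "$drv: IN-USE" >> /tmp/driver_use.txt
    else
        echo "$drv: not in use" >> /tmp/driver_use.txt
    fi
done
cat /tmp/driver_use.txt
```

Drivers reported as "not in use" are candidates for Workaround 1; IN-USE drivers must be updated per Table-1 instead.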


Workaround 1:
  1. If the impacted drivers are not in use, you can remove the relevant driver VIBs to proceed with the upgrade. You may need to reboot the host for some VIB removals to complete:
    # esxcli software vib list | grep driver_name
    # esxcli software vib remove -n driver_VIB_name
  2. Run the upgrade to ESXi 7.0

Note: If the adapter/devices using the driver are in use, do not use the above workaround, as you may lose the relevant configuration and device access.

Workaround 2:

If no replacement or fixed driver version is available for ESXi 6.x or ESXi 7.0, and you do not need to retain the configuration and/or device access, you can use one of the following alternatives:
  1. Perform a fresh install of ESXi 7.0, or
  2. Remove the VIBs and upgrade the host:
    1. Properly clean up the configuration of the devices using these drivers and migrate to alternate applicable storage or network controllers. For example: move the host management interface and VM networks to other supported NICs, and move the VMs to datastores on other supported storage devices.
    2. Remove the relevant driver VIBs to proceed with the upgrade. You may need to reboot the host for some VIB removals:
    # esxcli software vib list | grep driver_name
    # esxcli software vib remove -n driver_VIB_name
    3. Run the upgrade to ESXi 7.0
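The removal sequence above can be previewed before touching a live host. Below is a sketch with a hypothetical DRYRUN wrapper (not an esxcli feature) that only prints each command; the VIB names are taken from the example errors earlier in this article.

```shell
# Sketch: the Workaround 2 removal sequence, wrapped so it can be previewed.
# DRYRUN=1 only prints the commands; set DRYRUN=0 on a real ESXi host once
# the devices have been migrated off these drivers.
DRYRUN=1
run() {
    if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

for vib in scsi-hpdsa scsi-hpvsa; do    # example VIB names from the errors above
    run esxcli software vib remove -n "$vib"
done
run reboot
```

With DRYRUN=1 this prints the three commands prefixed with "+" instead of executing them, so the exact removal plan can be reviewed first.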

Important Note: You may permanently lose access to the relevant adapters/devices and their configuration. Device access and/or the relevant configuration are non-recoverable.
 
1 “Stock” refers to the unaltered image provided by VMware.
2 “Stock” refers to the unaltered custom image provided by VMware OEM partners.
3 Before you install or upgrade ESXi 7.0, consult the KB article: Important information before upgrading to vSphere 7.0