VMware ESXi 6.5, Patch ESXi-6.5.0-20170702001-standard


Article ID: 326752


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-6.5.0-20170702001-standard
Build: For build information, see KB 2149910.
Vendor: VMware, Inc.
Release Date: July 27, 2017
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs
  • VMware_bootbank_esx-base_6.5.0-1.26.5969303
  • VMware_bootbank_vsanhealth_6.5.0-1.26.5912974
  • VMware_bootbank_vsan_6.5.0-1.26.5912915
  • VMware_bootbank_esx-tboot_6.5.0-1.26.5969303
  • VMW_bootbank_vmkusb_0.1-1vmw.650.1.26.5969303
  • VMW_bootbank_misc-drivers_6.5.0-1.26.5969303
  • VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303
  • VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303
  • VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303
  • VMW_bootbank_vmkata_0.1-1vmw.650.1.26.5969303
  • VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303
  • VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303
  • VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303
  • VMW_bootbank_nvme_1.2.0.32-4vmw.650.1.26.5969303
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303
  • VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303
  • VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303
  • VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303
  • VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303
  • VMW_bootbank_igbn_0.1.0.0-14vmw.650.1.26.5969303
  • VMW_bootbank_ntg3_4.1.2.0-1vmw.650.1.26.5969303
  • VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303
  • VMware_bootbank_esx-ui_1.21.0-5724747
PRs Fixed
1226067, 1751189, 1759620, 1760761, 1765298, 1765391, 1768396, 1776389, 1779570, 1780524, 1782442, 1782479, 1783152, 1783750, 1784077, 1784393, 1792472, 1793824, 1796643, 1796948, 1798656, 1799704, 1800745, 1800903, 1805207, 1805383, 1807033, 1808089, 1808532, 1812751, 1818459, 1818649, 1818887, 1820504, 1821649, 1821937, 1822996, 1823028, 1824285, 1825994, 1827738, 1829739, 1831235, 1831499, 1832222, 1832242, 1832318, 1832940, 1835341, 1836212, 1838786, 1839087, 1839380, 1840639, 1842944, 1843621, 1843743, 1846540, 1850988, 1851787, 1852419, 1859128, 1859906, 1863888, 1868876, 1869230, 1876149, 1876880, 1876972, 1881001, 1812407, 1819968, 1824983, 1832972, 1849883, 1791204, 1812539, 1760451, 1770770, 1829440, 1836397, 1846297
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 6.5

Resolution

Summaries and Symptoms

This patch updates the esx-base, esx-tboot, vsan, vsanhealth, vmkusb, misc-drivers, ne1000, ixgben, usbcore-usb, vmkata, qlnativefc, vmw-ahci, sata-ahci, lsu-hp-hpsa-plugin, nvme, vmware-esx-esxcli-nvme-plugin, i40en, lsu-lsi-lsi-mr3-plugin, lsu-lsi-lsi-msgpt3-plugin, lsu-lsi-megaraid-sas-plugin, lsu-lsi-mpt2sas-plugin, igbn, ntg3, pvscsi, and esx-ui VIBs to resolve the following issues:

  • Dell 13G Servers use DDR4 memory modules. These modules are displayed with Unknown status on the Hardware Health Status Page in vCenter Server.
  • The performance counter cpu.system is incorrectly calculated for a virtual machine. The value of the counter is always 0 (zero) and never changes, which makes it impossible to do any kind of data analysis on it.
  • When you hot-add two or more hard disks to a VMware PVSCSI controller in a single operation, the guest OS can see only one of them.
  • An ESXi host might fail with a purple screen because of a race condition when multiple multipathing plugins (MPPs) try to claim paths.
  • A major upgrade of a dd-image-booted ESXi host to version 6.5 by using vSphere Update Manager fails with the error Cannot execute upgrade script on host.
  • The vmswapcleanup jumpstart plugin fails to start. The syslog contains the following line:

    jumpstart[XXXX]: execution of '--plugin-dir /usr/lib/vmware/esxcli/int/ systemInternal vmswapcleanup cleanup' failed : Host Local Swap Location has not been enabled

  • When a virtual machine has Fault Tolerance enabled and its Secondary VM is powered on from an ESXi host with insufficient memory, the Secondary VM cannot power on and the ESXi host might fail with a purple screen.
  • You can copy all the VMware Tools data into its own ramdisk. As a result, the data can be read from the flash media only once per boot. All other reads will go to the ramdisk. vCenter Server Agent (vpxa) accesses this data through the /vmimages directory which has symlinks that point to productLocker.

    To activate this feature, follow the steps:

    1. Use the command to set the advanced ToolsRamdisk option to 1:
      esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1
    2. Reboot the host.
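
    To confirm the setting after the reboot, you can read the option back with the corresponding list command (a hedged example; the exact output fields can vary between builds):
      esxcli system settings advanced list -o /UserVars/ToolsRamdisk
    An Int Value of 1 indicates that the VMware Tools data is served from the ramdisk.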
  • If a VVol VASA Provider returns an error during a storage profile change operation, vSphere tries to undo the operation, but the profile ID gets corrupted in the process.
  • The original packet buffer can be shared across multiple destination ports if the packet is forwarded to multiple ports (for example, a broadcast packet). If VLAN offloading is disabled and you modify the original packet buffer, the VLAN tag is inserted into the packet buffer before it is forwarded to the guest VLAN. The other port then detects packet corruption and drops the packet.
  • If the active memory of a virtual machine that runs on an ESXi host falls under 1% and drops to zero, the host might start reclaiming memory even if the host has enough free memory.
  • The previous software profile version of an ESXi host is displayed in the esxcli software profile get command output after execution of an esxcli software profile update command. Also, the software profile name is not marked as Updated in the esxcli software profile get output after an ISO upgrade.
  • When vSphere FT is enabled on a vSphere HA-protected VM where the vSphere Guest Application Monitor is installed, the vSphere Guest Application Monitoring SDK might fail.
  • If installation of VMware Tools requires a reboot to complete, the guest variable guestinfo.toolsInstallErrCode is set to 1603. The variable is not cleared by rebooting the Guest OS.
  • The ESXi 6.5 host fails to join the Active Directory domain and the process might become unresponsive for an hour before returning the Operation timed out error, if the host uses only an IPv4 address and the domain has an IPv6 or mixed IPv4 and IPv6 setup.
  • Per host Read/Write latency displayed for VVol datastores in the vSphere Web Client is incorrect.
  • The NFS v3 client does not properly handle a case where NFS server returns an invalid filetype as part of File attributes, which causes the ESXi host to fail with a purple screen.
  • For a Pure Storage FlashArray device, you have to manually add the SATP rule to set the SATP, PSP, and IOPS. A new SATP rule is added to ESXi to set the SATP to VMW_SATP_ALUA, the PSP to VMW_PSP_RR, and IOPS to 1 for all Pure Storage FlashArray models.

    Note: In case of a stateless ESXi installation, if an old host profile is applied, it overwrites the new rules after upgrade.
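
    The rule is added automatically by this patch. For reference, an equivalent claim rule can be created manually on earlier builds along these lines (a hedged sketch; the vendor and model strings PURE and FlashArray are assumptions and should be verified against the inquiry data your array reports):
      esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "PURE" -M "FlashArray" -P VMW_PSP_RR -O "iops=1"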
  • A virtual machine configured to use EFI firmware fails to obtain an IP address when trying to PXE boot if the DHCP environment responds by IP unicast. The EFI firmware was not capable of receiving a DHCP reply sent by IP unicast.
  • The Simple Network Management Protocol (SNMP) agent reports the same value for both the ifOutErrors and ifOutOctets counters, when they should be different.
  • An ESXi host might lose network connectivity when performing a stateless boot from Auto Deploy if the management vmkernel NIC has static IP and is connected to a Distributed Virtual Switch.
  • PCI passthru does not support devices with MMIO located above 16 TB where MPNs are wider than 32 bits.
  • Although the ESXi host no longer supports running hardware version 3 virtual machines, it does allow registering these legacy virtual machines in order to upgrade them to a newer, supported, version. A recent regression caused the ESXi hostd service to disconnect from vCenter Server during the registration process. This would prevent the virtual machine registration from succeeding.
  • UTF-8 characters are not handled properly before being passed on to a VVol VASA Provider. As a result, VM storage profiles that use international characters are either not recognized by the VASA Provider, or are treated or displayed incorrectly by it.
  • During snapshot consolidation, a precise calculation might be performed to determine the storage space required to perform the consolidation. This precise calculation can cause the virtual machine to stop responding, because it takes a long time to complete.
  • By default, each ESXi host has one virtual switch, vSwitch0. During the installation of ESXi, the first physical NIC is chosen as the default uplink for vSwitch0. If that NIC's link state is down, the ESXi host might have no network connection, even though other NICs have their link state up and network access.
  • An ESXi host might fail with purple diagnostic screen when collecting performance snapshots with vm-support due to calls for memory access after the data structure has already been freed.

    An error message similar to the following is displayed:

    @BlueScreen: #PF Exception 14 in world 75561:Default-drai IP 0xxxxxxxxxxxxaddr 0xxxxxxxxxxxx
    PTEs:0xXXXXXXXXXXXXX;0xYYYYYYYYYYYYYY;0xZZZZZZZZZZZZZZ;0x0;
    [0xxxxxxxxxxxx]SPLockIRQWork@vmkernel#nover+0x26 stack: 0xxxxxxxxxxxx
    [0xxxxxxxxxxxx]VMKStatsDrainWorldLoop@vmkernel#nover+0x90 stack: 0x17
  • During normal virtual machine operation VMware Tools (version 9.10.0 and later) services create vSocket connections to exchange data with the ESXi host. When a large number of such connections have been made, the ESXi host might run out of lock serial numbers, causing the virtual machine to shut down automatically with the MXUserAllocSerialNumber: too many locks error. If the workaround from KB article 2149941 has been used, please re-enable Guest RPC communication over vSockets by removing the line: guest_rpc.rpci.usevsocket = "FALSE"
  • A preallocated VM allocates all its memory at power-on time. However, the scheduler might pick a wrong NUMA node for this initial allocation. In particular, the numa.nodeAffinity vmx option might not be honored. The guest OS might see a performance degradation because of this. Virtual machines configured with latency sensitivity set to high or with a passthrough device are typically preallocated.
  • Existing VMs using Instant Clone and new VMs, created with or without Instant Clone, lose connection with the Guest Introspection host module. As a result, the VMs are not protected and no new Guest Introspection configurations can be forwarded to the ESXi host. You are also presented with a Guest introspection not ready warning in the vCenter Server UI.
  • When you take a snapshot of a virtual machine, the virtual machine might become unresponsive.
  • You cannot collect a vm-support bundle from an ESXi 6.5 host because, when generating logs in ESXi 6.5 through the vSphere Web Client, the select specific logs to export text box is blank. The options network, storage, fault tolerance, hardware, and so on are blank as well. This issue occurs because the rhttpproxy port for /cgi-bin has a value different from 8303.
  • An ESXi host might fail with a purple screen on shutdown if IPv6 MLD is used, because of a race condition in the TCP/IP stack.
  • If you disconnect your ESXi host from vCenter Server while some of the virtual machines on that host are using LAG, your ESXi host might become unresponsive when you reconnect it to vCenter Server after recreating the same LAG on the vCenter Server side, and you might see an error such as the following:

    0x439116e1aeb0:[0x418004878a9c]LACPScheduler@#+0x3c stack: 0x417fcfa00040
    0x439116e1aed0:[0x418003df5a26]Net_TeamScheduler@vmkernel#nover+0x7a stack: 0x43070000003c
    0x439116e1af30:[0x4180044f5004]TeamES_Output@#+0x410 stack: 0x4302c435d958
    0x439116e1afb0:[0x4180044e27a7]EtherswitchPortDispatch@#+0x633 stack: 0x0
  • An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error, when you reboot the ESXi host under the following conditions:
     
    • You use the vSphere Network Appliance (DVFilter) in an NSX environment
    • You migrate a virtual machine with vMotion under DVFilter control
  • When you hot-add an existing or new virtual disk to a CBT-enabled VM residing on a VVol datastore, the guest operating system might stop responding until the hot-add process completes. The VM unresponsiveness depends on the size of the virtual disk being added. The VM automatically recovers once the hot-add completes.
  • A Windows 2012 terminal server running VMware Tools 10.1.0 on ESXi 6.5 stops responding when many users are logged in.

    The vmware.log file shows messages similar to:

    2017-03-02T02:03:24.921Z| vmx| I125: GuestRpc: Too many RPCI vsocket channels opened.
    2017-03-02T02:03:24.921Z| vmx| E105: PANIC: ASSERT bora/lib/asyncsocket/asyncsocket.c:5217
    2017-03-02T02:03:28.920Z| vmx| W115: A core file is available in "/vmfs/volumes/515c94fa-d9ff4c34-ecd3-001b210c52a3/h8-ubuntu12.04x64/vmx-debug-zdump.001"
    2017-03-02T02:03:28.921Z| mks| W115: Panic in progress... ungrabbing

  • Due to a memory leak in the LVM module, the LVM driver might run out of memory under certain conditions, causing the ESXi host to lose access to the VMFS datastore.
  • You are prompted for a password twice when connecting to an ESXi host through SSH if the ESXi host is upgraded from vSphere version 5.5 to 6.0 while being part of a domain.
  • In the host profile section, a compliance error on Security.PasswordQualityControl is observed when the PAM password setting in the PAM password profile differs from the advanced configuration option Security.PasswordQualityControl. Because the advanced configuration option Security.PasswordQualityControl is unavailable for the host profile in this release, use the Requisite option in the Password PAM Configuration to change the password policy instead.
  • For a VM with e1000/e1000e vNIC, when the e1000/e1000e driver tells the e1000/e1000e VMkernel emulation to skip a descriptor (the transmit descriptor address and length are 0), a loss of network connectivity might occur.
  • Couldn't enable keep alive warnings occur during VMware NSX and partner solutions communication through a VMCI socket (vsock). The VMkernel log now omits these repeated warnings because they can be safely ignored.
  • The installation of ESXi 6.5 hangs on a system with the TPM 1.2 chip. If tbootdebug is specified as a command line parameter, the last log message is:

    Relocating modules and starting up the kernel...
    TBOOT: **************TBOOT ******************
    TBOOT: TPM family: 1.2
  • A host scan operation fails with a RuntimeError in the ImageProfile module if the module contains VIBs for a specific hardware combination. The failure is caused by a code transition issue between Python 2 and Python 3.
  • The destination virtual machine of a vSphere Storage vMotion is incorrectly stopped by a periodic configuration validation for the virtual machine. A vSphere Storage vMotion that takes more than 5 minutes fails with the message The source detected that the destination failed to resume. The VMkernel log from the ESXi host contains the message D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details.
  • Virtual machines with a paravirtual RDMA (PVRDMA) device run RDMA applications to communicate with the peer queue pairs. If an RDMA application attempts to communicate with a non-existing peer queue number, the PVRDMA device might wait for a response from the peer indefinitely. As a result, the virtual machine becomes inaccessible if during a snapshot operation or migration, the RDMA application is still running.
  • The RDMA communication between two virtual machines that reside on a host with an active RDMA uplink occasionally triggers spurious completion entries in the guest VMkernel applications. The completion entries are incorrectly triggered by non-signaled fast-register work requests that are issued by the kernel-level RDMA Upper Layer Protocol (ULP) of a guest. This can cause completion queue overflows in the kernel ULP.
  • When the PVRDMA driver is installed on a guest OS that supports the PVRDMA device, the PVRDMA driver might fail to load properly when the guest OS is powered on. The device remains unavailable in link-down state until you manually reload the PVRDMA driver.
  • When you use vSphere Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified.
  • If the mandatory field in the VMODL object of the profile path is left unset, a serialization issue might occur during the answer file validation for network configuration, resulting in a vpxd service failure.
  • Attaching a disk in a different folder than the virtual machine's folder while it is powered on might fail, if it is initiated by using the vSphere API directly with a ConfigSpec that specifies the disk backing file using the generic vim.vm.Device.VirtualDevice.FileBackingInfo class, instead of the disk type specific backing class, such as vim.vm.Device.VirtualDisk.FlatVer2BackingInfo, vim.vm.Device.VirtualDisk.SeSparseBackingInfo.
  • You cannot change some ESXi advanced settings, such as /Net/NetPktSlabFreePercentThreshold, because of a wrong default value. This problem is resolved by changing the default value.
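
    With the corrected default in place, the option can be inspected and, if required, adjusted through the standard advanced settings commands, for example (a hedged sketch; substitute a value appropriate for your environment):
      esxcli system settings advanced list -o /Net/NetPktSlabFreePercentThreshold
      esxcli system settings advanced set -o /Net/NetPktSlabFreePercentThreshold -i <value>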
  • Setting the /Power/PerfBias advanced configuration option is not available. Any attempt to set it to a value returns an error.
  • An ESXi host might stop responding if LUNs with running I/O are unmapped on the storage array side while they are connected to the ESXi host through a Broadcom/Emulex Fibre Channel adapter (lpfc driver).
  • When a VMFS-6 volume is opened, it allocates a journal block. Upon successful allocation, a background thread is started. If there is no space on the volume for the journal, the volume is opened in read-only mode and no background thread is initiated. Any attempt to close the volume then results in attempts to wake up a nonexistent thread, which causes the ESXi host to fail.
  • When you use the vSphere Web Client to attempt to change the value of the Syslog.global.logDirUnique option, the option appears grayed out and cannot be modified.
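
    As an alternative to the vSphere Web Client, the same behavior can also be configured from the command line (a hedged sketch based on the standard esxcli syslog namespace):
      esxcli system syslog config set --logdir-unique=true
      esxcli system syslog reload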
  • When virtual machines use the SPC-4 Get LBA Status command to query thin-provisioning features of large attached vRDMs, the processing of this command might run for a long time in the ESXi kernel without relinquishing the CPU. The high CPU usage can cause the CPU heartbeat watchdog to deem the process hung, and the ESXi host might stop responding.
  • If a virtual machine has a driver (especially a graphics driver) or an application that pins too much memory, it creates a sticky page in the VM. When such a VM is about to be migrated with vMotion to another host, the migration process is suspended and later fails because of incorrect pending of I/O computations.
  • vSphere 6.5 does not support disjointed Active Directory domain. The disjoint namespace is a scenario in which a computer's primary domain name system (DNS) suffix does not match the DNS domain name where that computer resides.
  • A VMDK file might reside on a VMFS6 datastore that is mounted on multiple ESXi hosts (for example, two hosts, ESXi host1 and ESXi host2). If the VMFS6 datastore capacity is increased from ESXi host1 while the datastore is also mounted on ESXi host2, and the disk.vmdk file has file blocks allocated from the increased portion of the datastore on ESXi host1, then, when the disk.vmdk file is accessed from ESXi host2 and file blocks are allocated to it from ESXi host2, ESXi host2 might fail with a purple screen.
  • During the boot of an ESXi host, error messages related to execution of the jumpstart plug-ins iodm and vmci are observed in the jumpstart logs.
  • Entering maintenance mode times out after 30 minutes, even if the specified timeout is larger than 30 minutes.
  • If the paths to a LUN have different LUN IDs in a multipathing configuration, the LUN is not registered by PSA and end users cannot see it.
  • The recompose operation in Horizon View might fail for desktop virtual machines residing on NFS datastores with stale NFS file handle errors, because of the way virtual disk descriptors are written to NFS datastores.
  • Performance issues might occur when unaligned unmap requests are received from the guest OS under certain conditions. Depending on the size and number of the unaligned unmaps, this might occur when a large number of small files (less than 1 MB in size) are deleted from the guest OS.
  • The ESXi functionality that allows unaligned unmap requests did not account for the fact that the unmap request may occur in a non-blocking context. If the unmap request is unaligned, and the requesting context is non-blocking, it could result in a purple screen. Common unaligned unmap requests in non-blocking context typically occur in HBR environments.
  • An ESXi host might fail with a purple screen because of a CPU heartbeat failure when SEsparse is used for creating snapshots and clones of virtual machines. The use of SEsparse might lead to CPU lockups, with the following warning message in the VMkernel logs, followed by a purple screen:

    PCPU <cpu-num> didn't have a heartbeat for <seconds> seconds; *may* be locked up.
  • If a pNIC is disconnected and connected to a virtual switch, the VMware NetQueue load balancer must identify it and pause the ongoing balancing work. In some cases, the load balancer might not detect this and access wrong data structures. As a result, you might see a purple screen.
  • The frequent lookup of a vSAN metadata directory (.upit) on virtual volume datastores can impact their performance. The .upit directory is not applicable to virtual volume datastores. The change disables the lookup of the .upit directory.
  • When all NFS datastores are disabled in the Host Profile document extracted from a reference host, the host profile remediation might fail with compliance errors, and existing datastores are removed or new ones are added during the remediation.
  • The reconfigure operation stops responding with SystemError under the following conditions:
     
    • the virtual machine is powered on
    • the ConfigSpec includes an extraConfig option with an integer value

    The SystemError is triggered by a TypeMismatchException, which can be seen in the hostd log on the ESXi host with the message:

    Unexpected exception during reconfigure: (vim.vm.ConfigSpec) { <snip> } Type Mismatch: expected: N5Vmomi9PrimitiveISsEE, found: N5Vmomi9PrimitiveIiEE.

  • For latency-sensitive virtual machines, the netqueue load balancer can try to reserve an exclusive Rx queue. If the driver provides queue preemption, the netqueue load balancer uses it to get an exclusive queue for latency-sensitive virtual machines. The netqueue load balancer holds a lock and executes the queue-preemption callback of the driver. With some drivers, this might result in a purple screen on the ESXi host, especially if the driver implementation can sleep.
     
  • The ESXi host might fail with a purple screen when booting on the following Oracle servers: X6-2, X5-2, and X4-2, and the backtrace shows that the failure is caused by a pcidrv_alloc_resource error. The issue occurs because the system cannot recognize or reserve resources for the USB device.
     
  • When a keyboard is configured with a layout other than the U.S. default and is later unplugged and plugged back into the ESXi host, the newly connected keyboard is assigned the U.S. default layout instead of the user-selected layout.
     
  • On some servers, a USB network device is integrated into the IMM or iLO to manage the server. When you reboot the IMM by using the vSphere Web Client or an IMM or iLO command, the transaction on the USB network device is lost.
     
  • The XHCI related platform error messages, such as xHCI Host Controller USB 2.0 Control Transfer may cause IN Data to be dropped and xHCI controller Parity Error response bit set to avoid parity error on poison packet, are reported in ESXi VMkernel logs.
     
  • An ESXi host might become unresponsive with no heartbeat NMI state on AMD machines with OHCI USB Host Controllers.
     
  • When the TSO capability is enabled in the NE1000 driver, I218 NICs reset frequently under heavy traffic because of an I218 hardware issue, so the NE1000 TSO capability for I218 NICs should be disabled.
     
  • When the physical link status of a vmnic changes, for example when the cable is unplugged or the switch port is shut down, the output of the esxcli command might show a wrong link status on Intel 82574L based NICs (Intel Gigabit Desktop CT/CT2). You must manually restart the NIC to get the actual link status.
     
  • ESXi 5.5 and 6.x hosts stop responding after running for 85 days. In the /var/log/vmkernel log file you see entries similar to:

    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved a PUREX IOCB woh oo
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved the PUREX IOCB.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): sizeof(struct rdp_rsp_payload) = 0x88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674qlnativefc: vmhba2(5:0.0): transceiver_codes[0] = 0x3
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0,1] = 0x3, 0x40
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Stats Mailbox successful.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Sending the Response to the RDP packet
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 53 01 00 00 00 00 00 00 00 00 04 00 01 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) c0 1d 13 00 00 00 18 00 01 fc ff 00 00 00 00 20
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 88 00 00 00 b0 d6 97 3c 01 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 02 00 00 00 00 00 00 80 00 00 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 18 00 00 00 00 01 00 00 00 00 00 0c 1e 94 86 08
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0e 81 13 ec 0e 81 00 51 00 01 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 2c 00 04 00 00 01 00 02 00 00 00 1c 00 00 00 01
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 40 00 00 00 00 01 00 03 00 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)50 01 43 80 23 18 a8 89 50 01 43 80 23 18 a8 88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 01 00 03 00 00 00 10 10 00 50 eb 1a da a1 8f

    This is a firmware problem caused when the Read Diagnostic Parameters (RDP) exchange between the Fibre Channel (FC) switch and the Host Bus Adapter (HBA) fails 2048 times. The HBA adapter stops responding, and because of this the virtual machine and/or the ESXi host might fail. By default, the RDP routine is initiated by the FC switch and occurs once every hour, so the 2048 limit is reached in approximately 85 days.

  • When you run the following commands to turn LEDs on or off

    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>,


    they might fail in the HBA mode of some HP Smart Array controllers, for example, P440ar and HP H240. In addition, the controller might stop responding, causing the following management commands to fail:

    LED management:
    esxcli storage core device set -l locator -d <device id>
    esxcli storage core device set -l error -d <device id>
    esxcli storage core device set -l off -d <device id>


    Get disk location:
    esxcli storage core device physical get -d <device id>

    This problem is firmware specific and it is triggered only by LED management commands in the HBA mode. There is no such issue in the RAID mode.

    Workaround: Retry the management command until success.
     
  • Some Intel devices, for example the P3700 and P3600, have a vendor-specific limitation in their firmware or hardware. Due to this limitation, all I/Os that cross the stripe size (or boundary) and are delivered to the NVMe device can suffer a significant performance drop. This problem is resolved in the driver by checking all I/Os and splitting a command if it crosses a stripe boundary on the device.
     
  • The lsi_mr3 driver allocates memory from the address space below 4 GB. The vSAN disk serviceability plugin lsu-lsi-lsi-mr3-plugin and the lsi_mr3 driver communicate with each other. The driver might stop responding during the memory allocation when handling the IOCTL event from storelib. The hostd process might fail even after a restart of hostd.

    This issue is resolved in this release with a code change in the lsu-lsi-lsi-mr3-plugin for the lsi_mr3 driver that sets a timeout value of 3 seconds for getting the device information, to avoid plugin and hostd failures.
     
  • NICs using the ntg3 driver might experience an unexpected loss of connectivity. The network connection cannot be restored until you reboot the ESXi host. The affected devices are Broadcom NetXtreme I 5717, 5718, 5719, 5720, 5725, and 5727 Ethernet adapters. The problem is related to certain malformed TSO packets sent by VMs such as (but not limited to) F5 BIG-IP virtual appliances. The ntg3 driver version 4.1.2.0 resolves this problem.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see Installing and Administering VMware vSphere Update Manager.
 
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
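
For example, after downloading the offline bundle and copying it to a datastore, the image profile listed above can be applied with a command along these lines (a hedged sketch; the datastore path and ZIP file name are placeholders for the bundle you actually download):

    esxcli software profile update -d /vmfs/volumes/<datastore>/<patch-bundle>.zip -p ESXi-6.5.0-20170702001-standard

Alternatively, the individual VIBs in the bundle can be applied with:

    esxcli software vib update -d /vmfs/volumes/<datastore>/<patch-bundle>.zip

A reboot of the host is typically required after the update.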