
VMware ESXi 6.5, Patch ESXi-6.5.0-20170304001-standard (2148987)

Profile Name: ESXi-6.5.0-20170304001-standard
Build: For build information, see KB 2148989.
Vendor: VMware, Inc.
Release Date: Mar 9, 2017
Acceptance Level
Affected Hardware
Affected Software
Affected VIBs
  • VMware_bootbank_esx-base_6.5.0-0.14.5146846
  • VMware_bootbank_vsan_6.5.0-0.14.5146846
  • VMware_bootbank_vsanhealth_6.5.0-0.14.5146846
  • VMW_bootbank_vmkusb_0.1-1vmw.650.0.14.5146846
  • VMW_bootbank_misc-drivers_6.5.0-0.14.5146846
  • VMW_bootbank_vmw-ahci_1.0.0-34vmw.650.0.14.5146846
  • VMW_bootbank_ne1000_0.8.0-11vmw.650.0.14.5146846
  • VMware_bootbank_esx-ui_1.15.0-5069532
  • VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846
  • VMW_bootbank_ixgben_1.0.0.0-9vmw.650.0.14.5146846
PRs Fixed
1759763, 1763297, 1764075, 1767820, 1768709, 1772892, 1777234, 1778524, 1782130, 1797111, 1798703, 1791807, 1713093, 1791413
Related CVE numbers


Summaries and Symptoms

This patch resolves the following issues:

  • If you hot-add a child disk to a virtual machine, and the path to the parent disk is different from the virtual machine's home directory, the virtual machine unexpectedly stops working, powers off, and you cannot power it on again. The issue does not occur if you hot-add a child disk without specifying the path, or if you specify only the datastore to which the child disk belongs.

  • When the dump file is set multiple times in parallel by using esxcfg-dumppart or other commands, an ESXi host might stop responding and display a purple diagnostic screen with entries similar to the following, as a result of a race condition while the dump block map is freed:

    @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4907 - Corruption in dlmalloc
    Code start: 0xnnnnnnnnnnnn VMK uptime: 234:01:32:49.087
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x4d stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0x6c7 stack: 0x8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xb9 stack: 0xbad000e
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Dump_SetFile@vmkernel#nover+0x155 stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SystemVsi_DumpFileSet@vmkernel#nover+0x4b stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x41f stack: 0x4fc
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@ # +0x394 stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@ # +0xb4 stack: 0xffb0b9c8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0
  • Tools in the guest operating system might send unmap requests that are not aligned to the VMFS unmap granularity. Such requests are not passed to the storage array for space reclamation. As a result, you might not be able to free space on the storage array.
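The alignment constraint above can be sketched in a few lines: only the portion of an unmap request whose start and end both fall on granularity boundaries can be passed down, so an unaligned request shrinks, and a request smaller than one granule cannot be reclaimed at all. The 1 MiB granularity is an assumed example value, not taken from the article.

```python
GRANULARITY = 1024 * 1024  # assumed VMFS unmap granularity in bytes (illustrative)

def aligned_unmap_range(offset, length, granularity=GRANULARITY):
    """Shrink [offset, offset+length) to the largest inner range whose
    start and end both lie on granularity boundaries. Returns None when
    no whole aligned block fits, i.e. nothing can be reclaimed."""
    start = -(-offset // granularity) * granularity          # round start up
    end = ((offset + length) // granularity) * granularity   # round end down
    if end <= start:
        return None
    return start, end - start

# An unaligned 3 MiB request loses its ragged edges and reclaims only 2 MiB:
print(aligned_unmap_range(512, 3 * 1024 * 1024))  # (1048576, 2097152)
# A sub-granule request cannot be passed to the array at all:
print(aligned_unmap_range(512, 4096))  # None
```

This is only a model of the constraint, not ESXi code; the fix in this patch changes how such requests are handled so that reclaimable space is not silently lost.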

  • Slow device discovery can cause the bootbank/scratch partition to not get mounted because the LUN is not yet available. A setup-specific issue, for instance slow login to target ports or fabric delay, might cause the slow discovery. To compensate for possible delays, you can now configure the device path discovery wait time. For more information, see Knowledge Base Article 2149444.
  • A newly added physical NIC might not have an entry in the esx.conf file even after a host reboot. As a result, the NIC uses the virtual MAC address 00:00:00:00:00:00 during communication.
  • Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.
  • When a VMDK is configured with I/O filters and the guest OS issues a SCSI UNMAP command, the command might be reported as successful even though one of the I/O filters fails the operation. As a result, the state of the VMDK and the filter diverge, which can result in data corruption.
  • The ESXi SNMP agent crashes randomly and triggers false host reboot alarms in the SNMP monitoring station. For stateful ESXi hosts, the core dump is located in the /var/core directory, and the syslog.log file contains out-of-memory error messages. For stateless ESXi hosts, see Knowledge Base Article 1032051. The issue results in loss of monitoring of the host.
  • In an environment with Nutanix NFS storage, the secondary Fault Tolerance VM fails to take over when the primary Fault Tolerance VM is down. The issue occurs when an ESXi host does not receive a response to a CREATE call within a timeout period. After you apply this patch, you can configure the CreateRPCTimeout parameter by running the following command:

    esxcfg-advcfg -s 20 /NFS/CreateRPCTimeout

    Note: As the host profile does not capture the CreateRPCTimeout parameter, the value of CreateRPCTimeout is not persistent in stateless environments.
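On a patched host, the current value can be read back with the get flag of the same tool, as a quick sanity check after setting it (these commands run on the ESXi host shell):

```shell
# Read back the current NFS CreateRPCTimeout value on the ESXi host
esxcfg-advcfg -g /NFS/CreateRPCTimeout
```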
  • A VM with SEsparse-based snapshots might hang during I/O operations when running a specific type of I/O workload in multiple threads, because of interlocking SEsparse metadata resources.
  • To update the last-seen time stamp for each LUN on an ESXi host, a process has to acquire a lock on the /etc/vmware/lunTimestamps.log file. In each process, the lock is held for longer than necessary. If too many such processes try to update the /etc/vmware/lunTimestamps.log file, this might result in lock contention on the file. If hostd is one of the processes trying to acquire the lock, the ESXi host might get disconnected from the vCenter Server or become unresponsive, with lock contention error messages (on the lunTimestamps.log file) in the hostd logs. You might see an error message similar to the following:

    Error interacting with configuration file /etc/vmware/lunTimestamps.log: Timeout while waiting for lock, /etc/vmware/lunTimestamps.log.LOCK, to be released. Another process has kept this file locked for more than 30 seconds. The process currently holding the lock is <process_name>(<PID>). This is likely a temporary condition. Please try your operation again.

    • process_name is the process or service currently holding the lock on /etc/vmware/lunTimestamps.log, for example smartd, esxcfg-scsidevs, or localcli.
    • PID is the process ID for any of these services.
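The locking pattern involved can be sketched as follows: each process takes an exclusive lock on the timestamp file before updating it, and contention stays low only if every holder releases the lock as soon as its write completes. The file path and record layout here are illustrative, not the actual lunTimestamps.log format, and this is a generic POSIX flock sketch rather than ESXi code.

```python
import fcntl
import os
import tempfile
import time

def update_lun_timestamp(path, lun_id):
    """Append a last-seen record for a LUN, holding the exclusive
    lock only for the duration of the write."""
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks while another process holds it
        try:
            f.write("%s %d\n" % (lun_id, int(time.time())))
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release promptly to avoid contention

# Illustrative use with a temporary file and made-up LUN identifiers:
path = os.path.join(tempfile.mkdtemp(), "lunTimestamps.log")
update_lun_timestamp(path, "naa.600508b1001c")
update_lun_timestamp(path, "naa.600508b1001d")
print(open(path).read().count("\n"))  # 2 records written
```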
  • If you have more than one USB keyboard connected to an ESXi host and you configure all of them with a layout different from the default, only one keyboard's layout is updated. The other USB keyboards keep the default layout.
  • When you are logged in to the VMware Host Client, you cannot view the health status reported by various supported storage hardware.
  • There is no option to add an RDM disk to a virtual machine in the VMware Host Client UI.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated with the image profile by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the VMware vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command.
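As a sketch of the image-profile method: after copying the downloaded bundle to a datastore reachable from the host, the profile named in this article's title can be applied with a command along these lines. The bundle file name and datastore path below are placeholders, not values from this article.

```shell
# Apply the image profile from the offline bundle (run on the ESXi host).
# Bundle file name and datastore path are placeholders for illustration.
esxcli software profile update \
    -d /vmfs/volumes/datastore1/ESXi-patch-bundle.zip \
    -p ESXi-6.5.0-20170304001-standard

# Alternatively, install the individual VIBs from the same depot:
# esxcli software vib update -d /vmfs/volumes/datastore1/ESXi-patch-bundle.zip
```

A reboot is required after the update, as noted in the deployment considerations above.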

Request a Product Feature

To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.

