
VMware ESXi 6.0, Patch ESXi-6.0.0-20170202001-standard (2148166)

Profile Name: ESXi-6.0.0-20170202001-standard
Build: For build information, see KB 2148155.
Vendor: VMware, Inc.
Release Date: Feb 24, 2017
Acceptance Level
Affected Hardware
Affected Software
Affected VIBs
  • VMware_bootbank_esx-base_6.0.0-3.57.5050593   
  • VMware_bootbank_vsan_6.0.0-3.57.5050595      
  • VMware_bootbank_vsanhealth_6.0.0-3000000.
  • VMware_bootbank_sata-ahci_3.0-26vmw.600.3.57.5050593
  • VMware_bootbank_net-e1000e_3.2.2.1-2vmw.600.3.57.5050593
  • VMware_bootbank_net-ixgbe_3.
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-3vmw.600.3.57.5050593
  • VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-3vmw.600.3.57.5050593
  • VMware_bootbank_misc-drivers_6.0.0-3.57.5050593
  • VMware_bootbank_esx-ui_1.14.0-4940836
  • VMware_locker_tools-light_6.0.0-3.57.5050593
  • VMware_bootbank_esx-tboot_6.0.0-3.57.5050593
  • VMware_bootbank_qlnativefc_2.1.50.0-1vmw.600.3.57.5050593
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-4vmw.600.3.57.5050593
PRs Fixed
1454324, 1467698, 1600197, 1611644, 1621742, 1645959, 1662466, 1667992, 1671787, 1677237, 1687609, 1705594, 1706102, 1706150, 1715318, 1718878, 1723002, 1727390, 1729643, 1731046, 1739958, 1739979,  1744682, 1747992, 1753438, 1753809, 1754060, 1759826, 1763536, 1764752, 1765418, 1768693, 1770561, 1774793, 1777279, 1760450, 1796649
Related CVE numbers


Summaries and Symptoms

  • Attempts to use the esxcli storage vvol storagecontainer abandonedvvol command to clean up a stale Virtual Volume and the VMDK files that remain on the Virtual Volume datastore are unsuccessful.

  • When symmetric multiprocessor (SMP) Fault Tolerance is enabled on a VM, the VM network latency might increase significantly, both in its average and in its variation. The increased latency might result in significant performance degradation or instability for VM workloads that are sensitive to such increases.

  • Under certain conditions while the ESXi installer reads the installation script during the boot stage, an error message similar to the following is displayed:

VmkNicImpl::DisableInternal:: Deleting vmk0 Management Interface, so setting advlface to NULL

  • An ESXi patch update installation might fail if the size of the target image profile is larger than 239 MB. This can happen when you upgrade the system by using an ISO, which can produce an image profile larger than 239 MB without any warning message. This prevents any additional VIBs from being installed on the system.
  • ESXi hosts with a large number of physical CPUs might stop responding during statistics collection. This issue occurs when the collection process attempts to access pages that lie beyond the range initially assigned to it.
  • Log messages related to Virtual SAN, similar to the following, are logged in the hostd.log file every 90 seconds even when Virtual SAN is not enabled:

YYYY-MM-DDT06:50:01.923Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd2fe8] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: ( {
YYYY-MM-DDT06:51:33.449Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd3009] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: ( {
YYYY-MM-DDT06:53:04.978Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd3030] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: ( {
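
As a quick way to confirm this symptom, the interval between successive entries can be measured directly from the log timestamps. The sketch below embeds sample lines in place of real /var/log/hostd.log output (an assumed path); on a host you would grep the live log instead of using printf:

```shell
# Measure the gap in seconds between successive VsanSystemVmkProvider entries.
# The sample lines stand in for real hostd.log output.
printf '%s\n' \
  '2017-02-24T06:50:01.923Z info hostd[12345678] VsanSystemVmkProvider : GetRuntimeInfo: Complete' \
  '2017-02-24T06:51:33.449Z info hostd[12345678] VsanSystemVmkProvider : GetRuntimeInfo: Complete' \
  '2017-02-24T06:53:04.978Z info hostd[12345678] VsanSystemVmkProvider : GetRuntimeInfo: Complete' |
grep 'VsanSystemVmkProvider : GetRuntimeInfo' |
awk -F'[T:.]' '{ t = $2*3600 + $3*60 + $4; if (prev) print t - prev "s"; prev = t }'
# → 92s and 91s, matching the roughly 90-second cadence described above
```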

  • A userworld dump might fail when a user process runs out of memory. The error message, Unable to allocate memory, is displayed.
  • When vSphere FT is enabled on a vSphere HA-protected VM where the vSphere Guest Application Monitor is installed, the vSphere Guest Application Monitoring SDK might fail.
  • Attempts to upgrade VMware Tools on multiple VMs simultaneously through Update Manager might fail. If this issue occurs, VMware Tools for some VMs might not be upgraded.

  • A Horizon View recompose operation might fail for a few desktop VMs residing on an NFS datastore, with a Stale NFS file handle error.

  • When you use the esxcli software profile update command to apply a new image profile, the displayed image profile name does not change to the new image profile name. Also, when you use an ISO to perform the upgrade, the new image profile name is not marked as Updated.

  • The vmkernel.log file is flooded with multiple USB resumed and suspended events similar to the following:

YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: resumed
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: suspended
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: resumed
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: suspended
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: resumed
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: suspended
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: resumed
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: suspended
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: resumed
YYYY-MM-DDT<TIME> cpu28:33398)<6>hub 2-1.7:1.0: suspended
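
To gauge how noisy a given hub is, the repeated events can be counted per device. The pipeline below runs on embedded sample lines standing in for real /var/log/vmkernel.log output (an assumed path):

```shell
# Count resumed/suspended events per USB hub from vmkernel.log-style lines.
printf '%s\n' \
  'YYYY-MM-DDT10:00:01Z cpu28:33398)<6>hub 2-1.7:1.0: resumed' \
  'YYYY-MM-DDT10:00:02Z cpu28:33398)<6>hub 2-1.7:1.0: suspended' \
  'YYYY-MM-DDT10:00:03Z cpu28:33398)<6>hub 2-1.7:1.0: resumed' \
  'YYYY-MM-DDT10:00:04Z cpu28:33398)<6>hub 2-1.7:1.0: suspended' |
grep -Eo 'hub [0-9.:-]+: (resumed|suspended)' | sort | uniq -c
```

A hub that dominates the counts is the one generating the log churn.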

  • When a large number of processes are present in a guest operating system, the ListProcesses operation is invoked more than once and the data from VMware Tools arrives in multiple chunks. When the multiple ListProcesses calls to the guest OS (one for every chunk) are assembled, the implementation creates a conflict: each ListProcesses call detects when all the data has arrived and invokes an internal callback handler. Calling the handler twice results in the failure of hostd.

  • In ESXi 5.5, HostMultipathStateInfoPath provided path information in the format HostWWN-ArrayWWN-LUN_ID, for example, sas.500605b0072b6550-sas.500c0ff1b10ea000-naa.600c0ff0001a20bd1887345701000000. However, in ESXi 6.0, the path value appears as vmhbaX:CX:TX:LX, which might impact users who rely on the HostMultipathStateInfoPath object to retrieve information such as HostWWN and ArrayWWN.

  • VMFS uses the SCSI compare-and-write command, also called ATS, for periodic heartbeating. Any miscompare error during ATS command execution is treated as a lost heartbeat and the datastore initiates a recovery action. To prevent corruption, all I/O operations on the device are canceled. When the underlying storage erroneously reports miscompare errors during VMFS heartbeating, the datastore initiates an unnecessary recovery action.

  • When you reboot a VM with PCI Passthru multiple times, the ESXi host might stop responding and display a purple diagnostic screen with messages similar to the following in the vmware.log file:

XXXXXXXXXXXXXXX| vcpu-0| W110: A core file is available in "<path_to_VM>/vmx-debug-zdump.000"
XXXXXXXXXXXXXXX| vcpu-0| I120: Msg_Post: Error
XXXXXXXXXXXXXXX| vcpu-0| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-0)

XXXXXXXXXXXXXXX| vcpu-0| I120+ vcpu-7:ASSERT vmcore/vmm/intr/intr.c:459

  • When you boot a virtual machine for Citrix VDI, the physical switch is flooded with RARP packets (over 1000), which might cause network connections to drop and a momentary outage.

  • When you perform file operations such as mounting large files present on a datastore, these operations might fail on an ESXi host. This situation can occur when a memory leak in the buffer cache causes the ESXi host to run out of memory, for example, when a non-zero copy of data results in buffers not getting freed. An error message similar to the following is displayed on the virtual machine:
The operation on file /vmfs/volumes/5f64675f-169dc0cb/CloudSetup_20160608.iso failed. If the file resides on a remote file system, make sure that the network connection and the server where this disk resides are functioning properly. If the file resides on removable media, reattach the media. Select Retry to attempt the operation again. Select Cancel to end this session. Select Continue to forward the error to the guest operating system.

  • When a DVFilter_TxCompletionCB() operation attempts to complete a dvfilter shared memory packet, it frees the IO completion data member stored inside the packet. In some cases, this data member is 0, causing a NULL pointer exception. An error message similar to the following is displayed:

YYYY-MM-DDT04:11:05.134Z cpu24:33420)@BlueScreen: #PF Exception 14 in world 33420:vmnic4-pollW IP 0x41800147d76d addr 0x28
YYYY-MM-DDT04:11:05.134Z cpu24:33420)Code start: 0x418000800000 VMK uptime: 23:18:59:55.570

YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461bdd0:[0x41800147d76d]DVFilterShmPacket_TxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0x3d sta
YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461be00:[0x41800146eaa2]DVFilterTxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0xbe stack: 0x0

YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461be70:[0x418000931688]Port_IOCompleteList@vmkernel#nover+0x40 stack: 0x0
YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bef0:[0x4180009228ac]PktListIOCompleteInt@vmkernel#nover+0x158 stack: 0x0
YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bf60:[0x4180009d9cf5]NetPollWorldCallback@vmkernel#nover+0xbd stack: 0x14
YYYY-MM-DDT04:11:05.137Z cpu24:33420)0x43915461bfd0:[0x418000a149ee]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0

  • ESXi hosts with vFlash configured might fail with a purple diagnostic screen and an error message similar to PSOD: @BlueScreen: #PF Exception 14 in world 500252:vmx-vcpu-0:V.

  • After the server exits lockdown mode, the VMware provider method returns a value that is not compatible with the value before the server entered lockdown mode. As a result, the VMware provider method that validates user permissions does not work with the same username and password as it did before lockdown mode.

  • An ESXi host displays a purple diagnostic screen when it encounters a device that is registered, but whose paths are claimed by two multipath plugins, for example, EMC PowerPath and the Native Multipathing Plugin (NMP). This type of conflict occurs when a plugin claim rule fails to claim the path and NMP claims the path by default. NMP tries to register the device, but because the device is already registered by the other plugin, a race condition occurs and triggers an ESXi host failure.

  • Attempts to run a failover for a VM might fail with an error message similar to the following during the synchronize storage operation:

An error occurred while communicating with the remote host.

The following messages are logged in the HBRsrv.log file:

YYYY-MM-DDT13:48:46.305Z info hbrsrv[nnnnnnnnnnnn] [Originator@6876 sub=Host] Heartbeat handler detected dead connection for host: host-9
YYYY-MM-DDT13:48:46.305Z warning hbrsrv[nnnnnnnnnnnn] [Originator@6876 sub=PropertyCollector] Got WaitForUpdatesEx exception: Server closed connection after 0 response bytes read; 171:53410'>, >)>

Also on the ESXi host, the hostd service might stop responding with messages similar to the following:

YYYY-MM-DDT13:48:38.388Z panic hostd[468C2B70] [Originator@6876 sub=Default]
--> Panic: Assert Failed: "progress >= 0 && progress <= 100" @ bora/vim/hostd/vimsvc/HaTaskImpl.cpp:557
--> Backtrace:

The hostd service might fail when performing a quiesced snapshot operation during the replication process. An error message similar to the following appears in the hostd.log file:

2016-06-10T22:00:08.582Z [37181B70 info 'Hbrsvc'] ReplicationGroup will retry failed quiesce attempt for VM (vmID=37)
2016-06-10T22:00:08.583Z [37181B70 panic 'Default']
-->Panic: Assert Failed: "0" @ bora/vim/hostd/hbrsvc/ReplicationGroup.cpp:2779

  • In ESXi 6.0 Update 2, hosts with the Dell CIM provider can have their syslog.log file flooded with Unknown error messages if the Dell CIM provider is disabled or in an idle state. Also, when the ESXi 6.0 Update 2 host reboots, the syslog.log file might intermittently log error messages with Unknown entries.
  • ARP request packets between two VMs might be dropped if one VM is configured with guest VLAN tagging and the other VM is configured with virtual switch VLAN tagging, and VLAN offload is turned off on the VMs.
  • The VMware Tools installation error code indicates that a reboot is required, even after the VM has been rebooted following the VMware Tools installation. The guestInfo.toolsInstallErrCode variable on the virtual machine executable (VMX) side is not cleared when VMware Tools is successfully installed and a reboot occurs. This causes vSphere Update Manager to send incorrect reminders to reboot VMware Tools.

  • An ESXi host might stop responding if it encounters storage I/O failures for a VM provisioned with an LSI virtual controller and memory is overcommitted on the ESXi host.

  • When the Dump file set operation is invoked multiple times in parallel by using esxcfg-dumppart or other commands, an ESXi host might stop responding and display a purple diagnostic screen with entries similar to the following, as a result of a race condition while the dump block map is freed:

@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4907 - Corruption in dlmalloc
Code start: 0xnnnnnnnnnnnn VMK uptime: 234:01:32:49.087
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn

0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x4d stack: 0xnnnnnnnnnnnn
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0x6c7 stack: 0x8

0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xb9 stack: 0xbad000e
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Dump_SetFile@vmkernel#nover+0x155 stack: 0xnnnnnnnnnnnn
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SystemVsi_DumpFileSet@vmkernel#nover+0x4b stack: 0x0
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x41f stack: 0x4fc
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0x394 stack: 0x0
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0xb4 stack: 0xffb0b9c8
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0

0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0

  • In VDI environments, the high read load on the VMware Tools images can result in corruption of the flash media.

  • Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.

  • An ESXi host might fail with a purple diagnostic screen when a Fault Tolerance Secondary VM fails to power on.

  • A newly added physical NIC might not have an entry in the esx.conf file after a host reboot, resulting in a virtual MAC address of 00:00:00:00:00:00 being listed for the physical NIC during communication.
  • When a host profile with vmknic adapters in both vSphere Standard Switch and vSphere Distributed Switch is applied to an ESXi host, it might remove the vmknic adapter vmk0 (the management interface) from the vSphere Standard Switch, which could result in the host being disconnected from vCenter Server.

  • When a VM virtual disk is configured with IO filters and the guest OS issues SCSI unmap commands, the SCSI unmap commands might succeed even when one of the configured IO filters fails the operation. As a result, the state reflected in the VMDK diverges from that of the IO filter, and data corruption or loss might be visible to the guest OS.
  • ESXi 6.x hosts stop responding after running for 85 days. When this problem occurs, the /var/log/vmkernel log file displays entries similar to the following:

YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved a PUREX IOCB woh oo
YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved the PUREX IOCB.
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): sizeof(struct rdp_rsp_payload) = 0x88

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0] = 0x3
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0,1] = 0x3, 0x40

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Stats Mailbox successful.
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Sending the Response to the RDP packet

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 53 01 00 00 00 00 00 00 00 00 04 00 01 00 00 10
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) c0 1d 13 00 00 00 18 00 01 fc ff 00 00 00 00 20

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 88 00 00 00 b0 d6 97 3c 01 00 00 00
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 02 00 00 00 00 00 00 80 00 00 00 01 00 00 00 04

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 18 00 00 00 00 01 00 00 00 00 00 0c 1e 94 86 08
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0e 81 13 ec 0e 81 00 51 00 01 00 01 00 00 00 04

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 2c 00 04 00 00 01 00 02 00 00 00 1c 00 00 00 01
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 40 00 00 00 00 01 00 03 00 00 00 10
YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 50 01 43 80 23 18 a8 89 50 01 43 80 23 18 a8 88

YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 01 00 03 00 00 00 10 10 00 50 eb 1a da a1 8f

This issue is caused by a qlnativefc driver bug that sends a Read Diagnostic Parameters (RDP) response to the HBA adapter with an incorrect transfer length. As a result, the HBA adapter firmware does not free the buffer pool space. Once the buffer pool is exhausted, the HBA adapter cannot process any further requests and becomes unavailable. By default, the RDP routine is initiated by the FC switch once every hour, so under normal circumstances the buffer pool is exhausted in approximately 80 to 85 days.
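
The 80-to-85-day figure follows from simple arithmetic. The pool size used below (2040 buffers) is an illustrative assumption, not a value stated in this article; the one-leak-per-hourly-RDP rate is from the description above:

```shell
# Back-of-the-envelope check of the 80-85 day exhaustion window.
# pool_buffers is an assumed illustrative value, not from this article.
pool_buffers=2040
leaks_per_day=24                          # FC switch sends RDP once per hour
echo $(( pool_buffers / leaks_per_day ))  # → 85 days until exhaustion
```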

  • Hostd fails when you upgrade ESXi 5.5.x hosts to ESXi 6.0.x with the ESXi 6.0 patch ESXi600-201611011 or later.

    Although ESXi supports getting HPSA disk location information in HBA mode, problems might occur when one of the following conditions is met:

    • You installed an old hpssacli utility, version or older.
    • You used an external array to connect the HPSA controller.

    These problems lead to hostd failures and the host becoming unreachable by vSphere Client and vCenter Server.
  • When you compile HPE ESXi WBEM provider with the 6.0 CIMPDK and install it on an ESXi 6.0 Update 3 system, enumeration of SMX provider classes might fail.

    This failure might result from enumeration of the following SMX classes, among others:

    • SMX_EthernetPort
    • SMX_Fan
    • SMX_PowerSupply
    • SMX_SAMediaAccessStatData

    The following error is displayed by the sfcbd for these SMX classes:

    # enum_instances SMX_EthernetPort root/hpq error: enumInstances Server returned nothing (no headers, no data)

    Providers respond to enumeration queries and successfully deliver responses to the sfcbd. There are no provider restarts or provider core dumps. Each enumeration produces sfcbd CIMXML core dumps, such as sfcb-CIMXML-Pro-zdump.000.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated by using the image profile, with the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples documentation and the VMware vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command.
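
As a sketch of the two update paths described above, the commands run in an ESXi shell would look similar to the following. The datastore path and bundle filename are placeholders, not values from this article:

```shell
# Sketch only; run on the ESXi host after uploading the patch bundle.
# <patch-bundle>.zip and the datastore1 path are placeholders.

# List the image profiles contained in the downloaded patch bundle:
esxcli software sources profile list -d /vmfs/volumes/datastore1/<patch-bundle>.zip

# Image-profile path: apply the profile named in this article:
esxcli software profile update -d /vmfs/volumes/datastore1/<patch-bundle>.zip \
    -p ESXi-6.0.0-20170202001-standard

# VIB path: update the individual VIBs from the same bundle:
esxcli software vib update -d /vmfs/volumes/datastore1/<patch-bundle>.zip
```

Reboot the host after either path completes, per the deployment considerations above.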


Request a Product Feature

To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.

