
VMware ESXi-5.5, Patch ESXi-5.5.0-20161204001-no-tools (2147799)



Release date: December 20, 2016

Profile Name: ESXi-5.5.0-20161204001-no-tools
Build: For build information, see KB 2147788.
Vendor: VMware, Inc.
Release Date: December 20, 2016
Acceptance Level:
Affected Hardware:
Affected Software:
Affected VIBs:
  • VMware:esx-base:5.5.0:3.100.4722766
  • VMware:esx-tboot:5.5.0:3.100.4722766
PRs Fixed: 1501552, 1576412, 1600575, 1667754, 1678407, 1695012, 1702363, 1718503, 1720353, 1729991, 1754931
Related CVE numbers:

Summaries and Symptoms

  • Snapshot consolidation might fail and be aborted on an ESXi 5.5 host even though sufficient free space is available on the datastore. This happens because the available space is miscalculated.
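
    The KB does not publish the faulty code, but a common way such a space miscalculation arises is arithmetic that wraps in 32 bits on large datastores. The following is a minimal illustrative sketch (the function names and the sector-based interface are assumptions, not VMware's actual implementation):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: a free-space calculation that multiplies two
     * 32-bit quantities wraps before the result is widened, so a large
     * datastore can appear to have (almost) no free space. */
    static uint64_t free_bytes_buggy(uint32_t free_sectors, uint32_t sector_size)
    {
        /* the product is computed in 32 bits and wraps */
        return (uint32_t)(free_sectors * sector_size);
    }

    static uint64_t free_bytes_fixed(uint32_t free_sectors, uint32_t sector_size)
    {
        /* widen one operand first so the multiply happens in 64 bits */
        return (uint64_t)free_sectors * sector_size;
    }
    ```

    With 2^24 free sectors of 512 bytes (an 8 TB datastore), the buggy variant reports 0 bytes free while the fixed variant reports 2^33 bytes.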

  • An ESXi 5.5 host might fail with a purple diagnostic screen during path state updates. This happens due to a race condition. An error message similar to the following is displayed:
    mm-dd-yyT18:20:06.400Z cpu3:32849)World: 8780: CR0 0x8001003d CR3 0x6a8a6000 CR4 0x216c
    mm-dd-yyT18:20:06.442Z cpu3:32849)Backtrace for current CPU #3, worldID=32849, ebp=0x41238145dd30
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145dd30:[0x418002b0e47c]vmk_ScsiGetDeviceState@vmkernel#nover+0x4c stack: 0x41238145dd70, 0x
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145de00:[0x418003144516]nmp_DeviceUpdatePathStates@com.vmware.vmkapi#v2_2_0_0+0x4e2 stack: 0
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145de30:[0x418003139a37]nmpPathProbe@com.vmware.vmkapi#v2_2_0_0+0x3b stack: 0x410910782d80,
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145de60:[0x418002b3ff03]SCSIPathProbe@vmkernel#nover+0x5f stack: 0x41238145de90, 0x96, 0x412
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145ded0:[0x418002b4448d]SCSIPathSCN@vmkernel#nover+0x1f1 stack: 0x0, 0x410919909f40, 0x7ff9d
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145df30:[0x418002b44542]SCSIPathSCNHelper@vmkernel#nover+0x52 stack: 0x0, 0x412381467000, 0x
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145dfd0:[0x418002861435]helpFunc@vmkernel#nover+0x6a1 stack: 0x0, 0x0, 0x0, 0x0, 0x0
    mm-dd-yyT18:20:06.442Z cpu3:32849)0x41238145dff0:[0x418002a56b6a]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0, 0
    mm-dd-yyT18:20:06.459Z cpu3:32849)VMware ESXi 5.5.0 [Releasebuild-3116895 x86_64]
    #PF Exception 14 in world 32849:helper8-0 IP 0x418002b0e47c addr 0x18

  • An ESXi host might fail with a purple diagnostic screen. When DVFilter_TxCompletionCB() is called to complete a dvfilter shared memory packet, it frees the I/O completion data stored inside the packet. Sometimes this data is 0, causing a NULL pointer exception. An error message similar to the following is displayed:
    YYYY-MM-DDT04:11:05.134Z cpu24:33420)@BlueScreen: #PF Exception 14 in world 33420:vmnic4-pollW IP 0x41800147d76d addr 0x28
    YYYY-MM-DDT04:11:05.134Z cpu24:33420)Code start: 0x418000800000 VMK uptime: 23:18:59:55.570
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461bdd0:[0x41800147d76d]DVFilterShmPacket_TxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0x3d stack
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461be00:[0x41800146eaa2]DVFilterTxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0xbe stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461be70:[0x418000931688]Port_IOCompleteList@vmkernel#nover+0x40 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bef0:[0x4180009228ac]PktListIOCompleteInt@vmkernel#nover+0x158 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bf60:[0x4180009d9cf5]NetPollWorldCallback@vmkernel#nover+0xbd stack: 0x14
    YYYY-MM-DDT04:11:05.137Z cpu24:33420)0x43915461bfd0:[0x418000a149ee]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0
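
    The defensive pattern that prevents this class of crash can be sketched as follows. This is a hypothetical illustration; `pkt_t`, `io_done_data`, and `tx_completion_cb` stand in for the real vmkernel structures and are not VMware's actual code:

    ```c
    #include <stddef.h>
    #include <stdlib.h>
    #include <assert.h>

    /* Hypothetical packet: the completion callback owns io_done_data. */
    typedef struct { void *io_done_data; } pkt_t;

    static int tx_completion_cb(pkt_t *pkt)
    {
        /* Tolerate a packet whose completion data was never attached
         * (or was already released) instead of dereferencing NULL. */
        if (pkt == NULL || pkt->io_done_data == NULL)
            return -1;
        free(pkt->io_done_data);
        pkt->io_done_data = NULL; /* guard against a second completion */
        return 0;
    }
    ```

    The NULL check turns the fatal dereference into a recoverable no-op, and clearing the pointer after the free also protects against double-completion of the same packet.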

  • The LLDP multicast MAC address might be missing from the management port's FRP, which can cause LLDP to stop receiving traffic. This happens when LLDP is disabled on the portset, which removes the LLDP multicast MAC address from the management port. The host then no longer receives LLDP packets and stops displaying LLDP information.

  • An ESXi host might fail with a purple diagnostic screen when you use NSX distributed firewall. This happens when the same dvfilter entry gets added to the port input or output chain twice, due to the distributed firewall deferring a prior dvfilter destroy call. An error message similar to the following is displayed:
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b520:[0x418004efc397]DVFilterInputOutputIOChainCB@com.vmware.vmkapi#v2_3_0_0+0x3b stack:
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b550:[0x41800454e998]IOChain_Resume@vmkernel#nover+0x210 stack: 0x43026b276d18, 0x4391174
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b5f0:[0x41800453226e]PortOutput@vmkernel#nover+0xae stack: 0x439e18e16b00, 0x439e019ad138
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b630:[0x418004bbf2f2]EtherswitchForwardLeafPortsQuick@#+0x136 stack: 0x439117
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b680:[0x418004bc0760]EtherswitchPortDispatch@#+0x5ec stack: 0x43026b276780, 0
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b870:[0x4180045324d3]Port_InputResume@vmkernel#nover+0x17b stack: 0x418004bc0174, 0x41800
    YYYY-MM-DDT07:35:05.555Z cpu15:33512)0x43911741b8d0:[0x418004532621]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x439e18e16b00, 0x43
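
    The root cause described above, the same dvfilter entry being chained twice, is a classic duplicate-insert bug. A minimal sketch of the guard that prevents it is below; `chain_t`, the fixed-size array, and `chain_add_filter` are illustrative stand-ins, not the real dvfilter IO-chain structures:

    ```c
    #include <assert.h>

    /* Hypothetical IO chain holding filter IDs. */
    #define CHAIN_MAX 8
    typedef struct { int ids[CHAIN_MAX]; int len; } chain_t;

    static int chain_add_filter(chain_t *c, int filter_id)
    {
        /* Reject a filter that is already in the chain: inserting the
         * same entry twice is what led to the crash described above. */
        for (int i = 0; i < c->len; i++)
            if (c->ids[i] == filter_id)
                return -1;
        if (c->len == CHAIN_MAX)
            return -1; /* chain full */
        c->ids[c->len++] = filter_id;
        return 0;
    }
    ```

    With a deferred destroy call, a re-add can race ahead of the removal; checking membership before insertion keeps the chain consistent either way.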

  • The ESXi hostd service might stop responding when there is I/O load on VMs with an LSI virtual SCSI controller in a memory overcommit situation.

  • Attempts to perform vMotion fail if the physical NICs of the Link Aggregation Group (LAG) uplinks are moved to regular distributed switch uplinks.

  • The hostd service might fail when a quiesced snapshot operation is reset during the replication process. An error message similar to the following might be written to the hostd.log file:
    YYYY-MM-DDT22:00:08.582Z [37181B70 info 'Hbrsvc'] ReplicationGroup will retry failed quiesce attempt for VM (vmID=37)
    YYYY-MM-DDT22:00:08.583Z [37181B70 panic 'Default']
    -->--> Panic: Assert Failed: "0" @ bora/vim/hostd/hbrsvc/ReplicationGroup.cpp:2779

  • When a driver or module requests a memory allocation, the ESXi host might fail with a purple screen that displays the following messages, due to a physical CPU lockup:
    cpu14:1234)WARNING: Heartbeat: 796: PCPU 9 didn't have a heartbeat for 50 seconds; *may* be locked up.
    cpu14:1234)World: 9729: PRDA 0xnnnnnnnnnnnn ss 0x0 ds 0x10b es 0x10b fs 0x0 gs 0x13b
    cpu14:1234)World: 9731: TR 0xnnnn GDT 0xnnnnnnnnnnnn (0x402f) IDT 0xnnnnnnnnnnnn (0xfff)
    cpu14:1234)World: 9732: CR0 0xnnnnnnnn CR3 0xnnnnnnnnn CR4 0xnnnnn....
    cpu9:1234)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MemNode_NUMANodeMask2MemNodeMask@vmkernel#nover+0x5b stack: 0xnnnnnn
    cpu9:1234)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MemDistributeNUMAPolicy@vmkernel#nover+0x27a stack: 0xnn
    cpu9:1234)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MemDistribute_Alloc@vmkernel#nover+0x299 stack: 0xnnnnnn
    cpu9:1234)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_LinuxSyscallHandler@vmkernel#nover+0x1d stack: 0x0
    cpu9:1234)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0

  • During vMotion of the NSX Edge VM, the ESXi host might fail with a purple screen that contains messages similar to the following:
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]MCSLockWithFlagsWork@vmkernel#nover+0x1 stack: 0xnnnnnnnnnnnn, 0x0,
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DVFilterRefFilter@com.vmware.vmkapi#v2_3_0_0+0x44 stack: 0x0, 0x4180
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DVFilterCheckpointGetSize@com.vmware.vmkapi#v2_3_0_0+0x78 stack: 0xb
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMotionRecv_GetDVFilterState@esx#nover+0x26 stack: 0xnnnnnnnnnnnn, 0
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMotionRecv_ExecHandler@esx#nover+0x12b stack: 0x14d, 0xnnnnnnnnnnnn
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VMotionRecv_Helper@esx#nover+0x170 stack: 0x0, 0x77dc, 0x0, 0x0, 0x0
    cpu14:)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0,

  • Virtual machines that run SAP applications might randomly fail, producing a vmx.zdump file, when too many VMware Tools stats commands are run inside the VM. An error message similar to the following is displayed:
    CoreDump error line 2160, error Cannot allocate memory.

