VMware ESXi 5.5, Patch ESXi-5.5.0-20141004001-standard

Article ID: 334658

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-5.5.0-20141004001-standard
Build: For build information, see KB 2087358.
Vendor: VMware, Inc.
Release Date: October 15, 2014
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base:5.5.0-2.39.2143827
  • VMware:misc-drivers:5.5.0-2.39.2143827
  • VMware:sata-ahci:3.0-21vmw.550.2.39.2143827
  • VMware:xhci-xhci:1.0-2vmw.550.2.39.2143827
  • VMware:tools-light:5.5.0-2.39.2143827
  • VMware:net-vmxnet3:1.1.3.0-3vmw.550.2.39.2143827
PRs Fixed: 840974, 1110520, 1141670, 1160571, 1199247, 1218945, 1225735, 1248778, 1249986, 1252100, 1255076, 1255435, 1262151, 1262828, 1263543, 1264733, 1266338, 1267354, 1267756, 1272013, 1277054, 1278416, 1280226, 1281169, 1286928, 1296629, 1298521, 1318391, 1224800, 1254961, 1293691, 1250079
Related CVE numbers: N/A

For information on patch and update classification, see KB 2014447.

To search for available patches, see the Patch Manager Download Portal.


Environment

VMware vSphere ESXi 5.5

Resolution

Summaries and Symptoms

This patch resolves the following issues:

  • After you upgrade to ESXi 5.5 patch ESXi550-201410001, all existing host profiles might fail the Host Profile compliance check because the Native Multipathing Plugin (NMP) device information is not listed as required by the Host Profile compliance checker. The following compliance error messages are displayed:

    Specification state absent from the host: SATP configuration for device naa.XXXXX needs to be updated
    Host state doesn't match specification: SATP configuration for device naa.XXXXXXX needs to be updated

    To resolve the issue, see KB 2032822.

  • When the Direct Console User Interface (DCUI) is accessed from a serial console, the arrow keys function as the Esc key while you navigate the menu, and you are forcibly logged out of the DCUI configuration screen.
  • After you upgrade from ESXi 5.1.x to 5.5, vSphere Replication might fail if the vSphere Replication traffic checkbox under the Networking tab was enabled in the vSphere Web Client on ESXi 5.1.x.

    Error messages similar to the following are logged in the vmkernel.log file:

    2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 549: Connection failed to 10.1.253.51 (groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
    2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 4521: Failed to establish connection to [10.1.253.51]:31031(groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout

    When you select the virtual machine's Summary tab, the following error message might be displayed:

    No Connection to VR server: Not responding

    Attempts to start synchronization might also fail with the following error message:

    An ongoing synchronization task already exists

    The Tasks & Events tab might display the state of the virtual machine as Invalid.

  • Attempts to perform cold migration or Storage vMotion fail with the following error message if the VMDK name begins with "core" (with a lowercase c):

    A general system error occurred: Error naming or renaming a VM file.

  • When you reboot the ESXi host, Simple Network Management Protocol (SNMP) traps might not be initiated for certain hardware events such as power supply interruption, disk drive failure, and so on, even if the SNMP agent is enabled.
  • If you change the MTU size in an environment where the IPV6_PKTINFO socket option is enabled, the ESXi host might fail with a purple diagnostic screen displaying a backtrace similar to the following:

    xxxx-06-20T07:13:33.477Z cpu5:4562)@BlueScreen: #GP Exception 13 in world 4562:iscsi_trans_ @ 0x418018b64da0
    xxxx-06-20T07:13:33.477Z cpu5:4562)Code start: 0x418018400000 VMK uptime: 0:00:06:53.788
    xxxx-06-20T07:13:33.477Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]ip6_output@<None>#<None>+0xe7f stack: 0x41001906d910
    xxxx-06-20T07:13:33.478Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]tcp_output@<None>#<None>+0x148b stack: 0x417fc9008e60
    xxxx-06-20T07:13:33.479Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]tcp_usr_send@<None>#<None>+0x1ca stack: 0x41220749b99c
    xxxx-06-20T07:13:33.479Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]sosend_generic@<None>#<None>+0x3f2 stack: 0x417fc9008e60
    xxxx-06-20T07:13:33.480Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]vmk_sosendMBuf@<None>#<None>+0x25 stack: 0x41220749b9a0
    xxxx-06-20T07:13:33.481Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]iscsivmk_ProcessWorldletConnTxEvent@<None>#<None>+0x380 stack: 0x412
    xxxx-06-20T07:13:33.482Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]WorldletProcessQueue@vmkernel#nover+0x4b0 stack: 0x41220749bc90
    xxxx-06-20T07:13:33.482Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x100000000000000
    xxxx-06-20T07:13:33.483Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]BH_Check@vmkernel#nover+0x185 stack: 0x41220749bda0
    xxxx-06-20T07:13:33.484Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]CpuSchedIdleLoopInt@vmkernel#nover+0x13b stack: 0x20749bde0
    xxxx-06-20T07:13:33.484Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]CpuSchedDispatch@vmkernel#nover+0xabe stack: 0x0
    xxxx-06-20T07:13:33.485Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]CpuSchedWait@vmkernel#nover+0x242 stack: 0x410000000000
    xxxx-06-20T07:13:33.486Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]CpuSched_WaitIRQ@vmkernel#nover+0x1a stack: 0x417fc48014b8
    xxxx-06-20T07:13:33.487Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx][email protected]#v2_1_0_0+0x46 stack: 0x4
    xxxx-06-20T07:13:33.487Z cpu5:4562)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]vmkWorldFunc@vmkernel#nover+0x52 stack: 0x0
    xxxx-06-20T07:13:33.488Z cpu5:4562)xxxxxxxxxxxxxx:[0x0] <UNKNOWN> stack: 0x0
    xxxx-06-20T07:13:33.490Z cpu5:4562)base fs=0x0 gs=0xxxxxxxxxxxxx Kgs=0x0
    xxxx-06-20T07:13:33.490Z cpu5:4562)vmkernel 0x0 .data 0x0 .bss 0x0

  • An ESXi host might fail with a purple diagnostic screen indicating that a PCPU did not receive a heartbeat. This occurs when the host experiences a memory overload due to a burst of memory allocation requests. A backtrace similar to the following is displayed:

    LPageMgrUnlockPoolList@esx#nover+0x75
    LPage_ReapPools@esx#nover+0x1ba
    MemDistributeNUMAPolicyMigration@esx#nover+0x156
    MemDistributeAllocateAndTestPages@esx#nover+0x1c2

    This patch resolves the issue by optimizing the memory allocation path when a large page (lpage) memory pool is utilized.

  • When you upgrade to ESXi 5.5 with multipathing modules written for vmkapi versions earlier than v2_2_0_0, for example PowerPath 5.9, the ESXi host might report dead paths for the gatekeeper LUN.

    This patch resolves the issue by claiming all the paths in such scenarios.

  • On an ESXi host, a virtual machine might fail when a suspend operation or a checkpoint process is in progress concurrently with activities such as collecting vm-support data. Alternatively, a VM might fail while running vm-support with HungVM:Suspend_VM, as described in KB 2005831. The vmware.log file might contain a message similar to the following:

    |vcpu-0| I120: ASSERT bora/vmcore/vmx/main/monitorAction.c: <XXX>

  • When the NetFlow collector and the ESXi host are on different networks and a firewall between them drops packets with UDP source port 0, the NetFlow packets are not delivered to the collector.

    This patch resolves the issue by changing the source port to 12055.

  • Storage adapter rescans that you run from the vCenter Server or ESXCLI might take a long time to complete if the ESXi host has a large number of VMFS datastores.

    This patch improves performance for various storage operations such as storage adapter rescans and VMFS datastore rescans. In addition, performance of operations such as List VMFS Snapshots and Devices in the Create Datastore wizard has been improved.

  • When you simultaneously trigger a network transaction and an NFS datastore create operation, the ESXi host might stop responding due to an internal deadlock.
  • With the Link Aggregation Control Protocol (LACP) configured, the health status for VLAN and MTU is displayed as Unknown if the Health Check feature is enabled before uplinks are added to the Distributed Virtual Switch (DVS).
  • The Direct Console User Interface (DCUI) might become unresponsive if the vim-cmd command that you are running takes a long time to complete.

    This patch resolves the issue by implementing a timeout mechanism for vim-cmd commands that take a long time to complete.

  • When you attempt to get the statistics using the get /vmkModules/vsan/dom/schedStats command, an incorrect value is displayed for the Cumulated number of bytes through VMDisk IO queue.
  • An ESXi 5.5 host might become unresponsive with a purple diagnostic screen and report the error #PF esxcfg-info in Heap_VSIList due to corruption of "heapSetup.heapList". You see a backtrace similar to the following:

    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:6690925)@BlueScreen: #PF Exception 14 in world 6690925:esxcfg-info IP 0xxxxxxxxxxxxx addr 0x10PTEs:0xxxxxxxxxx;0xxxxxxxxxx;0x0;
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)Code start: 0xxxxxxxxxxxxx VMK uptime: 13:23:16:07.415
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]Printf_WithFunc@vmkernel#nover+0xe10 stack: 0xxxxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]vsnprintf@vmkernel#nover+0x37 stack: 0xxxxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]vmk_StringFormat@vmkernel#nover+0x46 stack: 0xxxxxxxxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]Heap_VSIList@vmkernel#nover+0xba stack: 0xxxxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]VSI_GetListInfo@vmkernel#nover+0x244 stack: 0x6d0
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]UWVMKSyscallUnpackVSI_GetList@<None>#<None>+0xca stack: 0xxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]User_UWVMKSyscallHandler@<None>#<None>+0x246 stack: 0xxxxxxxxxxxxx
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0xff9b82e8
    xxxx-xx-xxTxx:xx:xx.xxxx cpu41:xxxxxxx)0xxxxxxxxxxxxx:[0xxxxxxxxxxxxx]gate_entry@vmkernel#nover+0x64 stack: 0x0

  • High CPU usage might be displayed in the vCenter Server performance chart, whereas such high values are not displayed when you run the esxtop command with the -c option at the same time. This issue occurs only if hyper-threading is enabled on the ESXi host.
  • An ESXi host might generate excessive Address Resolution Protocol (ARP) packets if the NFS export is removed or taken offline on the storage array without unmounting the corresponding NFS datastore from the ESXi host.
  • Attempts to create a host profile might fail due to a duplicated vmhba device alias, with the following error message:

    Found Device Alias vmhbaX duplicated within the profile

    NOTE: If an async driver is installed after you upgrade to ESXi 5.5 patch ESXi550-201410001, the old host profile might not work due to a change in the vmhba name, and you might need to create new host profiles.

  • If you set the Fast TimeOut (FTO) flag on the Link Aggregation Group (LAG) by using esxcli network vswitch dvs vmware lacp timeout set -l 0 -s dvs-51 -t 1, the FTO setting does not persist after you disconnect and reconnect a vmnic to the vSphere Distributed Switch. If you add a new vmnic to the LAG, the new port also does not have FTO set.
  • Under a highly specific and detailed set of internal timing conditions, the AMD Family 10h processor in 64-bit mode may incorrectly update the stack pointer due to erratum 721. The incorrect stack pointer might cause unpredictable program or system behavior. For further information, see KB 2061211.
  • Virtual machines that use an E1000e virtual network adapter might fail with a purple diagnostic screen if Receive Side Scaling is enabled on the virtual machine. The purple diagnostic screen or backtrace contains entries similar to the following:

    E1000PollRxRing@vmkernel#nover+0xeb7
    E1000DevRx@vmkernel#nover+0x18a
    IOChain_Resume@vmkernel#nover+0x247
    PortOutput@vmkernel#nover+0xe3
    EtherswitchForwardLeafPortsQuick@<None>#<None>+0xd6
    EtherswitchPortDispatch@<None>#<None>+0x13bb
    Port_InputResume@vmkernel#nover+0x146
    Net_AcceptRxList@vmkernel#nover+0x157
    NetPollWorldletCallback@vmkernel#nover+0x5c
    WorldletProcessQueue@vmkernel#nover+0x488
    WorldletBHHandler@vmkernel#nover+0x60
    BH_Check@vmkernel#nover+0x185
    IDT_IntrHandler@vmkernel#nover+0x1fc
    gate_entry@vmkernel#nover+0x63

    For additional details, see KB 2079094.
  • This patch introduces additional Transparent Page Sharing (TPS) management capabilities. For further details, refer to KB 2091682.
  • On an ESXi 5.5 host connected to a SCSI enclosure device, log spew related to SCSI Mode Sense command (0x1a) failures, similar to the following example, might be logged in the vmkernel.log file every 5 minutes.

    2014-05-28T12:50:55.450Z cpu3:1000014134)ScsiDeviceIO: SCSICompleteDeviceCommand:xxxx: Cmd(xxxxxxxxxxxxxx) 0x1a, CmdSN 0x26 from world 0 to dev "naa.xxxxxxxxxxxxxxxx" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

  • When you run the ESXCLI command esxcli storage vmfs unmap on a large VMFS datastore that is nearly full, the ESXi host might display a purple diagnostic screen with an error message similar to the following (a sample unmap invocation is sketched after this issue list):

    @BlueScreen: PCPU 12: no heartbeat (2/2 IPIs received).
  • When you perform virtual machine operations such as powering a VM on or off, suspending or resuming a VM, or changing port groups, or when you deploy a virtual machine, the ESXi host might become unresponsive and display a purple diagnostic screen with the following backtrace:

    xxxx-xx-xxT12:31:27.968Z cpu17:1582040)WARNING: Heartbeat: 785: PCPU 8 didn't have a heartbeat for 7 seconds; *may* be locked up.
    2014-07-21T12:31:27.968Z cpu8:121984)ALERT: NMI: 709: NMI IPI received. Was eip(base):ebp:cs [0x76959(0x418008200000):0xxxxxxxxxdbf0:0x4010](Src 0x1, CPU8)
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxdbf0:[0x418008276959]MCSWaitForUnblock@vmkernel#nover+0x119 stack: 0xxxxxxxxxdcd0
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxddc0:[0x4180082773a0]MCSLockRWContended@vmkernel#nover+0x188 stack: 0x101000100000000
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxde00:[0x418008277bdd]MCS_DoAcqWriteLockWithRA@vmkernel#nover+0x135 stack: 0x4109aa41fef0
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxde60:[0x41800841db93]vmk_PortsetAcquireByPortID@vmkernel#nover+0x3b3 stack: 0xxxxxxxxxdec
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxdec0:[0x418008c62b8c][email protected]#0.0.0.1+0xe4 stack: 0xxxxxfbe7c4
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxdf30:[0x41800837fa57]PortsetFireEventCB@vmkernel#nover+0x18f stack: 0x0
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxdfd0:[0x41800826125a]helpFunc@vmkernel#nover+0x6b6 stack: 0x0
    xxxx-xx-xxT12:31:27.968Z cpu8:121984)0xxxxxxxxxdff0:[0x418008453502]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0

  • When you run Solaris 10 or 11 virtual machines using the e1000 adapter, the network throughput within the Solaris guest OS is very poor.
  • When you deploy Windows Server 2008, 2012, or 2012 R2, the Windows Failover Cluster might fail to pass one or more of the following storage validation tests:
    • Validate Disk Arbitration
    • Validate Disk Failover
    • Validate Simultaneous Failover
    • Validate SCSI-3 Persistent Reservation
    • Validate Disk Access Latency

    The validation report, located at <systemroot>\Cluster\Reports, contains entries similar to:

    Failed to arbitrate for cluster disk 0 from node node1.local, failure reason: The request could not be performed because of an I/O device error.

    Failed to access Test Disk 1, or disk access latency of 539311 milliseconds from node node1.local is more than the acceptable limit of 3000 milliseconds status The request could not be performed because of an I/O device error.

    Failure issuing call to Persistent Reservation REGISTER AND IGNORE EXISTING on Test Disk 1 from node node1.local when the disk has no existing registration. It is expected to succeed. The request could not be performed because of an I/O device error.

  • An ESXi 5.5 host that uses an AMD Opteron Series 63xx processor might become unresponsive with a purple diagnostic screen. The purple screen mentions the text IDT_HandleInterrupt or IDT_VMMForwardIntr followed by an unexpected function, as described in KB 2061211.
  • When you perform operations such as resetting a physical NIC, the ESXi 5.5 host might become unresponsive with a purple diagnostic screen and report the error #PF Exception 14 in Uplink_SysinfoPNicPropertiesGet@vmkernel. You see a backtrace similar to the following:

    #PF Exception 14 in world 34727:net-lbt Uplink_SysinfoPNicPropertiesGet@vmkernel
    VSI_GetInfo@vmkernel
    UWVMKSyscallUnpackVSI_Get@<None>
    User_UWVMKSyscallHandler@<None>
    User_UWVMKSyscallHandler@vmkernel
    gate_entry@vmkernel

  • When you perform an operation that results in a set coalesce call while the driver does not implement the get coalesce callback function, the ESXi 5.5 host might become unresponsive with a purple diagnostic screen and report the error #PF Exception 14 in UplinkAsyncProcessCallsHelperCB. You see a backtrace similar to the following:

    2014-05-02T09:56:09.478Z cpu7:32820)@BlueScreen: #PF Exception 14 in world 32820:helper11-0 IP 0x0 addr 0x0 PTEs:0x15c96d027;0x15d143027;0x0;
    2014-05-02T09:56:09.478Z cpu7:32820)Code start: 0x41800b000000 VMK uptime: 0:00:07:03.811
    2014-05-02T09:56:09.479Z cpu7:32820)xxxxxxxxxxxxxx:[0x0] <UNKNOWN> stack: xxxxxxxxxxxxxx
    2014-05-02T09:56:09.480Z cpu7:32820)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0x223 stack: 0x0
    2014-05-02T09:56:09.481Z cpu7:32820)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]helpFunc@vmkernel#nover+0x6b6 stack: 0x0
    2014-05-02T09:56:09.482Z cpu7:32820)xxxxxxxxxxxxxx:[xxxxxxxxxxxxxx]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0

  • Windows 2008 R2 virtual machines might stop responding and display a blue screen when the vShield Endpoint Thin Agent driver tries to log a filename.
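
For the VMFS unmap issue referenced above, the following is a minimal sketch of the esxcli storage vmfs unmap invocation involved; the datastore label datastore1 and the reclaim-unit value are illustrative placeholders only:

    # Reclaim unused blocks on a thin-provisioned VMFS datastore (illustrative values).
    # -l / --volume-label selects the VMFS datastore by label; -n / --reclaim-unit sets
    # the number of VMFS blocks reclaimed per iteration.
    esxcli storage vmfs unmap -l datastore1 -n 200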

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated with the image profile by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples documentation and the vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command. An example of both approaches is sketched below.
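
The following is a minimal sketch of both approaches, assuming the downloaded patch bundle has been copied to a datastore path such as /vmfs/volumes/datastore1/ESXi550-201410001.zip (the path and bundle file name are illustrative placeholders; place the host in maintenance mode before applying the update):

    # Apply the full image profile contained in the patch bundle (hypothetical depot path):
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201410001.zip -p ESXi-5.5.0-20141004001-standard

    # Alternatively, update only the VIBs contained in the bundle:
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi550-201410001.zip

    # Reboot the host to complete the installation.
    reboot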