VMware ESXi 5.5, Patch ESXi-5.5.0-20150104001-standard

Article ID: 334361

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-5.5.0-20150104001-standard
Build: For build information, see KB 2099265.
Vendor: VMware, Inc.
Release Date: Jan 27, 2015
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base:5.5.0-2.54.2403361
  • VMware:tools-light:5.5.0-2.54.2403361
  • VMware:misc-drivers:5.5.0-2.54.2403361
  • VMware:sata-ahci:3.0-21vmw.550.2.54.2403361
  • VMware:net-igb:5.0.5.1.1-1vmw.550.2.54.2403361
PRs Fixed: 1135831, 1259109, 1260558, 1265765, 1269213, 1274146, 1283323, 1291922, 1295573, 1296944, 1296974, 1305481, 1307463, 1308058, 1310591, 1314102, 1325127, 1325137, 1326902, 1333296, 1334211, 1334989, 1336909, 1343586, 1358087, 1358622, 1361033, 1361838, 1364747, 1364807, 1372041, 1375773, 1330567, 1367234
Related CVE numbers: N/A

For information on patch and update classification, see KB 2014447.

To search for available patches, see the Patch Manager Download Portal.


Environment

VMware vSphere ESXi 5.5

Resolution

Summaries and Symptoms

This patch resolves the following issues:

  • Storage vMotion of a thin-provisioned virtual machine disk (VMDK) takes longer than Storage vMotion of a Lazy Zeroed Thick (LZT) disk.

  • The VMware Tools status for a Solaris virtual machine might report an Out of date status error in vSphere Client for new VMware Tools. This occurs because ESXi has no AVR manifest for Solaris.

  • When you configure network interface cards (NICs) as uplink ports on a dvSwitch, hostd might stop responding after a reboot or PXE boot of the host, or when the vsish node is not ready, and messages similar to the following are displayed:

    #0 0x19f97092 in _dl_sysinfo_int80 () from /tmp/debug-uw.NDJcc0Mc/lib/ld-linux.so.2
    #1 0x1ec48ce5 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:67
    #2 0x1ec4a523 in abort () at abort.c:92
    #3 0x1a520bd8 in Vmacore::System::SignalTerminateHandler (info=0x3693ecac, ctx=0x3693ed30) at bora/vim/lib/vmacore/posix/defSigHandlers.cpp:65
    #4 0x00284002 in ?? ()
    #5 0x1bddb465 in StatsRegistry::VSIStats::GetObjectType (statId=84, attr=0) at bora/lib/vmkstats/vsiwrapper/vsiStatsImpl.cpp:245
    #6 0x1bddde52 in StatsRegistry::VSIStatsImpl::GetStatValue (this=0x37eefab8, statIds=..., statId=84, nodeId=1867, attrValues=..., curAttrValues=..., curAttrIndex=0, values=...) at bora/lib/vmkstats/vsiwrapper/vsiStatsImpl.cpp:500
    #7 0x1bddec56 in StatsRegistry::VSIStatsImpl::GetStatValues (this=0x37eefab8, instances=..., values=...) at bora/lib/vmkstats/vsiwrapper/vsiStatsImpl.cpp:450

  • ESXi hosts running virtual machines with vFlash configured might stop responding and display a purple screen with a backtrace similar to the following:

    2014-06-02T17:04:41.439Z cpu8:33632)@BlueScreen: #PF Exception 14 in world 33632:Cmpl-vmhba1- IP 0xnnnnnnnnnnnn addr 0x20
    PTEs:0xnnnnnnnnn;0xnnnnnnnnnn;0x0;
    2014-06-02T17:04:41.439Z cpu8:33632)Code start: 0x41803ca00000 VMK uptime: 0:00:25:49.169
    2014-06-02T17:04:41.439Z cpu8:33632)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]BitVector_SetExtent@vmkernel#nover+0x14 stack: 0xnnnnnnnnnnnn
    2014-06-02T17:04:41.440Z cpu8:33632)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFC_DeleteData@<None>#<None>+0x310 stack: 0x0
    2014-06-02T17:04:41.440Z cpu8:33632)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFC_TreePurgeWrite@<None>#<None>+0x7b stack: 0xnnnnnnnnnnnn
    2014-06-02T17:04:41.441Z cpu8:33632)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VFCRW_WriteThruIOCompletion@<None>#<None>+0x18b stack: 0xnnnnnnnnnnnn
    2014-06-02T17:04:41.441Z cpu8:33632)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn][email protected]#v2_2_0_0+0x2af stack: 0x412e

  • In rare cases, when you enable IPv6 and change the IPv6 address while traffic is flowing, the ESXi host might fail and display a purple screen with messages similar to the following:

    2014-05-29T14:50:44.583Z cpu11:39053530)Backtrace for current CPU #11, worldID=39053530, ebp=0xnnnnnnnnnnnn
    2014-05-29T14:50:44.583Z cpu11:39053530)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]ip6_input@<None>#<None>+0xdd8 stack: 0xnnnnnnnnnnnn, 0xnnnnnnnnnnnn,
    2014-05-29T14:50:44.583Z cpu11:39053530)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]netisr_check@<None>#<None>+0x167 stack: 0xnnnnnnnnnnnn, 0xnnnnnnnnnnnn
    2014-05-29T14:50:44.583Z cpu11:39053530)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]TcpipIsrCheck@<None>#<None>+0x51 stack: 0x4, 0xnnnnnnnnnnnn, 0x41256
    ......

  • Virtual machines with PVSCSI controllers might intermittently stop responding or fail.

  • In VI Client, the End VMware Tools Install option remains enabled even after a successful installation of VMware Tools on Linux, FreeBSD, or Solaris virtual machines. The VMware Tools installation might not stop when you click the End VMware Tools Install option.

    Note: The same option in vSphere Web Client is seen as Unmount VMware Tools Installer.

  • Attempts to upload a VMware Open Virtualization Format (OVF) file in vCloud Director fail. An error message similar to the following might be displayed in hostd.log:

    Error during content upload: The operation could not be performed because the device spec is invalid. The device '3' is referring to a nonexisting controller '-,101'.

  • ESXi hosts might fail with a purple screen and display error messages similar to the following when you perform vMotion:

    @BlueScreen: #PF Exception 14 in world 249122:vmotionStrea IP 0xnnnnnnnnnnnn addr 0xnnnnnnnnnnnn
    2014-07-16T07:09:57.026Z cpu62:249122)Code start: 0xnnnnnnnnnnnn VMK uptime: 0:19:41:16.346
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NUMALatency_GetNodeHops@vmkernel#nover+0x22 stack: 0xnnnnnnnnnnnn


    This issue occurs when the number of NUMA clients calculated exceeds the maximum number of supported NUMA clients based on the virtual machine configurations.

  • After you perform a successful snapshot consolidation, a Windows virtual machine might stop responding. Error messages similar to the following might be written to the vmware.log file:

    I120: DISKUTIL: scsi0:3 : geometry=522/255/63
    I120: VMXNET3 user: Ethernet0 Driver Info: version = 1107087 gosBits = 1 gosType = 2, gosVer = 21024, gosMisc = 161
    I120: SnapshotVMXConsolidateOnlineCB: nextState = 6 uid 0
    I120: Vix: [35900 mainDispatch.c:3964]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1881, success=1 additionalError=0
    I120: SnapshotVMXConsolidateOnlineCB: Destroying thread 9
    I120: Turning off snapshot info cache.
    I120: Turning off snapshot disk cache.
    I120: SnapshotVMXConsolidateOnlineCB: Done with consolidate
    I120: GuestRpcSendTimedOut: message to toolbox timed out.
    I120: GuestRpcSendTimedOut: message to toolbox timed out.
    I120: GuestRpc: app toolbox's second ping timeout; assuming app is down
    I120: GuestRpc: Reinitializing Channel 0(toolbox)
    I120: GuestMsg: Channel 0, Cannot unpost because the previous post is already completed
    I120: GuestRpc: Channel 0 reinitialized.
    I120: GuestRpc: Channel 0 reinitialized.
    I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.
    I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.
    I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.
    I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.
    I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.

  • Under certain conditions, a userworld program might not function as expected and might lead to the accumulation of a large number of zombie processes in the system. As a result, the globalCartel heap might become exhausted, causing operations such as vMotion and SSH to fail because new processes cannot be forked while the ESXi host is in this heap exhaustion state. The host might not exit this state until you reboot it.
    Warning messages similar to the following might be written to the vmkernel.log file:

    2014-07-31T23:58:01.400Z cpu16:3256397)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.
    2014-08-01T00:10:01.734Z cpu54:3256532)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.
    2014-08-01T00:20:25.165Z cpu45:3256670)WARNING: Heap: 2677: Heap globalCartel-1 already at its maximum size. Cannot expand.


  • When you enable the Bridge Protocol Data Unit (BPDU) filter setting or select the Net.BlockGuestBPDU option on an ESXi host, IS-IS frames from the virtual machine might be dropped.
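
    For reference, the BPDU filter mentioned in this item is controlled by the Net.BlockGuestBPDU advanced setting. A minimal, hedged sketch of checking and enabling it from the ESXi Shell:

    # Show the current value of the guest BPDU filter (0 = disabled, 1 = enabled)
    esxcli system settings advanced list -o /Net/BlockGuestBPDU
    # Enable the guest BPDU filter
    esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1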

  • After upgrading firmware, false alarms appear in the Hardware Status tab of the vSphere Client even if the system has been idle for two to three days. Error messages similar to the following might be logged in the /var/log/syslog.log file:

    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    2014-10-17T08:51:19Z sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007



  • When you reboot ESXi 5.x hosts that have access to RDM LUNs used by MSCS nodes, hosts deployed with the auto deploy option might take a long time to boot. This happens even when the perennially reserved flag is set for the LUNs by using a host profile. The boot time depends on the number of RDMs attached to the ESXi host. For more information on the perennially reserved flag, see KB 1016106.
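
    As background for the perennially reserved flag, a minimal sketch of marking an RDM LUN as perennially reserved from the ESXi Shell (the naa identifier below is a placeholder; see KB 1016106 for the complete procedure):

    # Mark the RDM LUN as perennially reserved so boot-time device scanning skips it
    esxcli storage core device setconfig -d naa.60060160a32cxxxxxxxxxxxxxxxxxxxx --perennially-reserved=true
    # Confirm the setting for the device
    esxcli storage core device list -d naa.60060160a32cxxxxxxxxxxxxxxxxxxxx | grep -i perennially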

  • Due to storage connectivity issues on an ESXi host, newly created virtual machines might fail to power on, and warning messages similar to the following might be written to the VMkernel log:

    WARNING: Swap: 6925: Failed to create swap file '/vmfs/volumes/datastorename/vmname.vswp' : Host doesn't have a journal
    WARNING: Swap: vm 474867: 7727: Swap initialization failed Host doesn't have a journal


  • On an ESXi 5.5 host, the NFS volumes might not be restored after the host reboots. This issue occurs if there is a delay in resolving the NFS server host name to an IP address.
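
    As a diagnostic aid, the mount state of NFS volumes can be checked from the ESXi Shell; a minimal, hedged sketch (server, export, and volume names are placeholders):

    # List NFS datastores and whether they are mounted and accessible
    esxcli storage nfs list
    # Re-add a missing volume once the NFS server host name resolves
    esxcli storage nfs add -H nfs-server.example.com -s /export/datastore1 -v datastore1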

  • Attempts to power on a virtual machine that is configured with a passthrough device might fail and result in error messages similar to the following:

    PCIPassthru: PCIPassthruAttachDev:182: Attached to PCI device at 0000:04:00.4
    Device: 355: Found driver PCIPassthru for device 0x4a2441095a358448
    PCIPassthru: PCIPassthruStartDev:320: Received start request for PCI device at 0000:04:00.4
    2014-05-09T18:35:15.193Z cpu3:33506)PCIPassthru: PCIPassthruScanDev:224: Received scan request for PCI device at 0000:04:00.4
    VMKPCIPassthru: 2525: Device 04:00.4 cannot be used for passthrough because of a conflicting memory region with VM's memory

    This issue occurs due to a memory overlap caused by the incorrect mapping in the physical memory between the virtual machine memory regions and the RMRR regions of the adapter.

  • An ESXi 5.5.x host might fail with a purple screen with error messages similar to the following:

    @BlueScreen: #PF Exception 14 in world 32805:helper1-2 IP 0x418037069486 addr 0x189PTEs:0x185b628027;0x12102d027;0x0;
    Code start: 0x418037000000 VMK uptime: 0:00:00:46.945
    0x41238095df00:[0x418037069486]IOMMUDoReportFault@vmkernel#nover+0x1d6 stack: 0x200000404
    0x41238095df30:[0x418037069643]IOMMUProcessFaults@vmkernel#nover+0x1f stack: 0x0
    0x41238095dfd0:[0x4180370612ca]helpFunc@vmkernel#nover+0x6b6 stack: 0x0
    0x41238095dff0:[0x418037255452]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0

    This issue occurs due to the improper handling of the Input Output Memory Management Unit (IOMMU) fault reporting.

  • ESXi hosts might fail with a purple screen with error messages similar to the following:

    @BlueScreen: #PF Exception 14 in world 34370:hostd-worker IP 0x41803a8b3ef1 addr 0x18
    PTEs:0x133632027;0x806241d027;0x0;
    0xxxxxxxxx:[0xyyyyyyyyyy]VmkDrv_WakeupAllWaiters@vmkernel#nover+0x125 stack: 0xxxxxxxxxxxxx

    This issue occurs due to excessive polling requests that result in memory leakage.

  • SNMP version 2 64-bit counters might be reset under high I/O on the interface when their values reach 2^32 = 4294967296, the maximum value of a 32-bit counter.
    The 64-bit counters and their object identifiers (OIDs) are listed below; a hedged query example follows the list:
    ifHCInOctets 1.3.6.1.2.1.31.1.1.1.6
    ifHCInUcastPkts 1.3.6.1.2.1.31.1.1.1.7
    ifHCInMulticastPkts 1.3.6.1.2.1.31.1.1.1.8
    ifHCInBroadcastPkts 1.3.6.1.2.1.31.1.1.1.9
    ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10
    ifHCOutUcastPkts 1.3.6.1.2.1.31.1.1.1.11
    ifHCOutMulticastPkts 1.3.6.1.2.1.31.1.1.1.12
    ifHCOutBroadcastPkts 1.3.6.1.2.1.31.1.1.1.13
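
    A hedged example of querying one of these counters with the standard net-snmp tools, assuming SNMP is already enabled on the host (host name and community string are placeholders):

    # Walk the 64-bit IF-MIB ifHCInOctets counters on the ESXi host
    snmpwalk -v2c -c public esxi-host.example.com 1.3.6.1.2.1.31.1.1.1.6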


  • During file transmission between an ESXi host and NetApp Network Attached Storage (NAS) using the Network File System (NFS) protocol, the network bandwidth might be reduced when the network interface controller (NIC) cable is disconnected and reconnected. This patch resolves the issue by allowing the VSI node to control the retransmission.

  • When you use backup software that uses the Virtual Disk Development Kit (VDDK) API call QueryChangedDiskAreas(), the returned list of allocated disk sectors might be incorrect and incremental backups might appear to be corrupt or missing. A message similar to the following is written to vmware.log:

    DISKLIB-CTK: Resized change tracking block size from XXX to YYY

    For more information, see KB 2090639. A hedged check of the underlying change tracking configuration appears below.
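
    The change tracking referenced above is Changed Block Tracking (CBT). A minimal, hedged way to confirm that CBT is enabled for a virtual machine is to inspect its .vmx file from the ESXi Shell (datastore and virtual machine names are placeholders):

    # CBT is enabled when ctkEnabled and the per-disk scsiX:Y.ctkEnabled options are set to TRUE
    grep -i ctkEnabled /vmfs/volumes/datastore1/vmname/vmname.vmx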

  • In ESXi 5.5, virtual machines with vShield Endpoint enabled might stop responding shortly after snapshot creation or consolidation due to an internal conflict.

  • Heap memory exhaustion might cause the ESXi host to fail and display a purple diagnostic screen. Warning messages similar to the following might be displayed:

    cpu20:32991)WARNING: NMP: nmp_SatpClaimPath:2093: SATP "VMW_SATP_CX" could not add path "vmhba6:C0:T3:L101" for device "Unregistered Device". Error Out of memoryESC[0m
    cpu20:32991)WARNING: NMP: nmp_DeviceAlloc:1233: nmp_AddPathToDevice failed Out of memory (195887124).ESC[0m
    cpu20:32991)WARNING: NMP: nmp_DeviceAlloc:1242: Could not allocate NMP device.ESC[0m
    cpu20:32991)WARNING: ScsiPath: 4693: Plugin 'NMP' had an error (Out of memory) while claiming path 'vmhba6:C0:T3:L101'. Skipping the path.ESC[0m
    cpu20:32991)ScsiClaimrule: 1362: Plugin NMP specified by claimrule 65535 was not able to claim path vmhba6:C0:T3:L101. Busy
    cpu20:32991)ScsiClaimrule: 1594: Error claiming path vmhba6:C0:T3:L101. Failure.
    cpu20:32991)VMKAPICore: 123: Failed to allocate memory for spinlock (bad0014)
    cpu20:32991)WARNING: VMW_SATP_CX: satp_cx_alloc:137: vmk_SpinlockCreate failed Out of memory (0xbad0014).ESC[0m
    cpu20:32991)WARNING: VMKAPICore: 198: Corrupted lock magic for 0x410b840b6db0 (840b6da0)ESC[0m

  • When you create a host profile from an ESXi 5.5 host with a custom firewall rule for the SSH service, the firewall rule is not captured in the host profile. Even if you modify the host profile to add a custom firewall ruleset for the SSH service, it does not get applied on the ESXi host.
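
    For reference, host firewall rulesets can be inspected from the ESXi Shell; a minimal, hedged sketch (on ESXi 5.x, custom rulesets are defined in XML files under /etc/vmware/firewall):

    # List all firewall rulesets and whether they are enabled
    esxcli network firewall ruleset list
    # Show the allowed IP list for the built-in SSH server ruleset
    esxcli network firewall ruleset allowedip list --ruleset-id sshServer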

  • When you configure tracenet using VI Client or run the tracenet command in ESXi Shell (tracenet <ip>), the ESXi host might experience a purple screen with a backtrace similar to the following:

    0xnnnnnnnnnnnn VMK uptime: 7:07:26:48.004
    [0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x49 stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Pkt_AttrFree@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Pkt_FreeHandleAndBuf@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Pkt_DropOrComplete@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    [0xnnnnnnnnnnnn]Pkt_ClearAndRelease@vmkernel#nover+0xnn stack: 0xn
    [0xnnnnnnnnnnnn]PktCapWorldCB@vmkernel#nover+0xnnn stack: 0xn
    [0xnnnnnnnnnnnn]CpuSched_StartWorld@vmkernel#nover+0xnn stack: 0xn


  • In an iSCSI environment, the read data might sometimes get corrupted during continuous read and write I/O operations from a virtual machine to an RDM LUN or a virtual disk (VMDK).
    The corruption is due to a race condition in the send and receive path logic.

  • A partition table checker module has been added in vSphere On-disk Metadata Analyzer (VOMA) to check if any VMFS partitions are missing from the partition table on the disk.
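
    VOMA is run from the ESXi Shell against an unmounted VMFS partition. A minimal, hedged sketch of the existing VMFS check mode is shown below (the device name is a placeholder; the module name used by the new partition table checker is not shown here because it may differ):

    # Check VMFS metadata on the first partition of the device
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:1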

  • Due to corruption of VMFS metadata, an ESXi host might fail and display a purple diagnostic screen. An error message similar to the following might be written to the VMkernel log:
    #DE Exception 0 in world 35322:hostd-worker @ 0x41801a5f79f6

  • Virtual machines using VMXNET3 virtual adapter might fail when attempting to PXE boot from Microsoft Windows Deployment Services (WDS).

  • Frequent application failure might occur when you enable the Notify Switches option in the NIC teaming configuration. This is due to an excessive delay in ESXi Reverse Address Resolution Protocol (RARP) packet transmission.

  • Health Status in vSphere Client displays all sensor information as Unknown one hour after the ESXi host is booted.

  • An ESXi host with more than 2 vmknics stops responding when all the vmknics are deleted and recreated.

  • When multiple users log in to virtual machines running Windows operating systems using the Windows Remote Desktop Protocol (RDP), VMware Tools might fail with an error similar to the following:

    VMware Tools unrecoverable error: (vthread-3)
    Exception 0xc0000005 (access violation) has occured


  • When a quiesced snapshot is created on a Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, or a Windows Server 2012 virtual machine, duplicate disks might be created in the virtual machine.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated with the image profile by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command. A hedged command sketch follows.
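
A hedged example of both methods from the ESXi Shell, assuming the patch bundle has already been uploaded to a datastore (the datastore path and bundle file name are placeholders):

    # Apply the complete image profile from the offline bundle
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201501001.zip -p ESXi-5.5.0-20150104001-standard
    # Alternatively, install only the updated VIBs from the same bundle
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi550-201501001.zip
    # Reboot the host to complete the update
    reboot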