VMware ESXi-5.5, Patch ESXi-5.5.0-20151204001-standard

Article ID: 334389

Products

VMware vSphere ESXi

Issue/Introduction

Important: Always upgrade vCenter Server to version 5.5 Update 3b before you update ESXi to 5.5 Update 3b, to avoid issues caused by the interoperability implications of SSLv3 disablement.

vCenter Server cannot manage ESXi 5.5 Update 3b hosts if you update the ESXi hosts before updating vCenter Server to version 5.5 Update 3b. VMware best practice states that vCenter Server should be upgraded before the managed ESXi hosts. For more information, see ESXi 5.5 Update 3b and later hosts are no longer manageable after upgrade (2140304).
For more information on the order in which to upgrade your vSphere environment, see Update sequence for VMware vSphere 5.5 and its compatible VMware products (2057795).

For more information about the SSLv3 disablement, see the VMware ESXi 5.5 Update 3b Release Notes and the VMware vCenter Server 5.5 Update 3b Release Notes.

Profile Name: ESXi-5.5.0-20151204001-standard
Build: For build information, see KB 2135410.
Vendor: VMware, Inc.
Release Date: December 08, 2015
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base: 5.5.0-3.78.3248547
  • VMware:ehci-ehci-hcd: 550.3.78.3248547
  • VMware:xhci-xhci: 1.0-3vmw.550.3.78.3248547
  • VMware:misc-drivers: 5.5.0-3.78.3248547
  • VMware:net-e1000e: 3.2.2.1-2vmw.550.3.78.3248547
  • VMware:tools-light: 5.5.0-3.78.3248547
  • VMware:lsi-msgpt3: 00.255.03.03-2vmw.550.3.78.3248547
PRs Fixed: 1202095, 1269418, 1290241, 1333534, 1375919, 1381829, 1389825, 1392047, 1410356, 1417988, 1420551, 1421357, 1422937, 1423640, 1425201, 1428638, 1430313, 1430767, 1431058, 1436122, 1448051, 1450142, 1452693, 1453696, 1457775, 1460113, 1466478, 1466858, 1471675, 1474737, 1475185, 1478379, 1482497, 1487823, 1504118, 1516475, 1535777, 1514513, 471039
Related CVE numbers: N/A
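
To apply this image profile from the ESXi Shell, a typical esxcli invocation looks like the following; the depot path below is an example only, so substitute the actual location of the downloaded patch bundle:

esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201512001.zip -p ESXi-5.5.0-20151204001-standard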


Environment

VMware vSphere ESXi 5.5

Resolution

  • An ESXi host might stop responding and disconnect from vCenter Server. As a result, the host also cannot be connected to directly through the vSphere Web Client. This happens due to insufficient memory allocation for Likewise components.
  • The esxtop utility reports incorrect statistics on VAAI-supported LUNs for the average device latency per command and the average ESXi VMkernel latency per command, due to an incorrect calculation. Note: This issue might also impact the ESXi statistics in vCenter Server and vRealize Operations (vROps).
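    To verify these latency values after applying the patch, you can capture esxtop in batch mode and review the DAVG/cmd and KAVG/cmd counters; the output file path below is an example only:
    esxtop -b -d 5 -n 12 > /tmp/esxtop-stats.csv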
  • A virtual machine might fail to power on, or the guest OS might stop responding, if an NVMe XS1715 SSD controller is attached in passthrough mode. An error message similar to the following is displayed:
    PCI passthrough device ID(0x-57e0) is invalid
  • A message similar to the following is written to the syslog.log file at /var/log/ on the ESXi 5.x host if multiple instances of the lsassd, netlogond or lwiod daemons are running at the same time:
    lsassd[<value>]: <value>:Terminating on fatal IPC exception
    This issue might occur during the ESXi host upgrade process. For more information, see KB 2051707.
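    To check whether multiple instances of these daemons are running, you can list them from the ESXi Shell; this is a minimal check using the BusyBox tools available on the host:
    ps | grep -E 'lsassd|netlogond|lwiod'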
  • Attempts to install Linux might fail when you add an e1000e network adapter to a virtual machine and power on the virtual machine.
    A log similar to the following is written to the vmkernel.log file:
    [ 1498.266938] Call Trace:
    [ 1498.266950] [<FFFFFFFF810CA68A>] timecounter_init+0x1a/0x30
    [ 1498.266973] [<FFFFFFFFA066D687>] e1000e_config_hwtstamp+0x247/0x420 [e1000e]
    [ 1498.266994] [<FFFFFFFFA066DD75>] e1000e_reset+0x285/0x620 [e1000e]
    [ 1498.267012] [<FFFFFFFFA066F04A>] e1000_probe+0xbaa/0xee0 [e1000e]
    [ 1498.267021] [<FFFFFFFF81306F75>] local_pci_probe+0x45/0xa0
    [ 1498.267029] [<FFFFFFFF813083E5>] ? pci_match_device+0xc5/0xd0
    [ 1498.267036] [<FFFFFFFF81308529>] pci_device_probe+0xf9/0x150
    [ 1498.267046] [<FFFFFFFF813D22B7>] driver_probe_device+0x87/0x390
    [ 1498.267054] [<FFFFFFFF813D25C0>] ? driver_probe_device+0x390/0x390
    [ 1498.267062] [<FFFFFFFF813D25FB>] __device_attach+0x3b/0x40
    [ 1498.267070] [<FFFFFFFF813D011B>] bus_for_each_drv+0x6b/0xb0
    [ 1498.267077] [<FFFFFFFF813D21B8>] device_attach+0x88/0xa0

  • When an ESXi host loses connectivity to the remote syslog server, the GeneralHostWarningEvent and AlarmStatusChangedEvent events are logged indefinitely as alert messages, causing the vpx_event and vpx_event_arg tables to fill up the vCenter Server database. This results in extreme vCenter latency and eventually causes vCenter Server to stop responding.
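    To verify the configured remote syslog target and force the host to reconnect to it, you can run the following from the ESXi Shell:
    esxcli system syslog config get
    esxcli system syslog reload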
  • Migrating data from a physical NAS to a virtual machine's file server on vSAN might fail. An error message similar to the following might be displayed:
    File not found
  • When an ESXi host is re-installed, its host UUID changes. When such an ESXi host is added back to the vSAN cluster, the disks and other virtual machine components that belong to this host might continue to show up as Absent or Unhealthy.
  • Attempts to reboot an ESXi host in maintenance mode might fail with a purple diagnostic screen. An error message similar to the following might be displayed in the vmkwarning.log file:
    2015-03-03T08:40:37.994Z cpu4:32783)WARNING: LSOM: LSOMEventNotify:4571: VSAN device 523eca86-a913-55d4-915e-f89bdc9fab46 is under permanent error.
    2015-03-03T08:40:37.994Z cpu1:32967)WARNING: LSOMCommon: IORETRYCompleteSplitIO:577: Throttled: max retries reached Maximum kernel-level retries exceeded
    2015-03-03T08:40:39.006Z cpu6:32795)WARNING: LSOMCommon: IORETRYParentIODoneCB:1043: Throttled: split status Maximum kernel-level retries exceeded
    2015-03-03T08:40:39.006Z cpu6:32795)WARNING: PLOG: PLOGElevWriteMDDone:255: MD UUID 523eca86-a913-55d4-915e-f89bdc9fab46 write failed Maximum kernel-level retries exceeded
    2015-03-03T08:41:44.217Z cpu1:34228)WARNING: LSOM: LSOMEventNotify:4571: VSAN device 52ed79c6-b64e-3f60-289f-5870e19a85f0 is under permanent error.

  • Attempts to consolidate disks in VMware vSAN might fail even when the vSanDatastore or the disk drive has sufficient space. An error message similar to the following is displayed:
    An error occurred while consolidating disks: msg.disklib.NOSPACE
  • Attempts to resume a virtual machine from a suspended state, or to resume a virtual machine during vMotion/Storage vMotion, might fail. An error message similar to the following might be displayed:
    The virtual machine cannot be powered on
    You can also check the vmware.log file and search for an error containing msg.checkpoint.PASizeMismatch.
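    For example, assuming the datastore and virtual machine directory names below, you can search for the error from the ESXi Shell:
    grep msg.checkpoint.PASizeMismatch /vmfs/volumes/<datastore>/<vm_name>/vmware.log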
  • Virtual machines using the VMXNET3 virtual adapter might fail when attempting to boot from iPXE (open-source boot firmware).
  • When hostd is down, you are unable to extract the VSI nodes, which makes it difficult to analyze issues in the field. A lightweight observer program, vsanObserver, has been added that allows the observer to run without depending on hostd.
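    VSI nodes can be browsed with the vsish utility from the ESXi Shell; for example, the following lists the top-level nodes (deeper node paths vary by build):
    vsish -e ls /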
  • Enabling vMotion on vmk10 or higher might cause vmk1 to have vMotion enabled when the ESXi host reboots. This issue can cause excessive traffic over vmk1 and result in network issues.
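    You can inspect and, if needed, correct the vMotion tagging on an affected vmknic with esxcli; the interface name vmk1 below matches the example in this issue, and the VMotion tag name assumes the ESXi 5.5 defaults:
    esxcli network ip interface tag get -i vmk1
    esxcli network ip interface tag remove -i vmk1 -t VMotion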
  • After an ESXi host failure, when HA attempts to start the affected VMs on other hosts, some of the VMs might stop responding while booting.
  • An ESXi 5.x host might get disconnected from vCenter Server when inodes are exhausted by the small footprint CIM broker daemon (sfcbd) service. After the ESXi 5.x host enters this state, it cannot be reconnected to vCenter Server.

    A log similar to the following is reported in /var/log/hostd.log indicating that the ESXi 5.x host is out of space:
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device

    A message similar to the following is written in /var/log/vmkernel.log indicating that the ESXi 5.x host is out of inodes:
    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
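
    You can check visorfs inode usage from the ESXi Shell before the table fills up; both commands below are standard on ESXi 5.x (stat is provided by BusyBox):
    stat -f /
    esxcli system visorfs ramdisk list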


  • The ESXi 5.5 host might fail with a purple diagnostic screen after a virtual machine stops responding, due to a race window in the swap code that breaks large pages. An error message similar to the following might be displayed:
    #PF Exception 14 in world 32856:helper14 IP 0x41801faf6560 addr 0x410e86868680
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dc10:[0x41801faf6560]Alloc_Dealloc@vmkernel#nover+0x12c stack: 0x41238161dc60, 0x41801fa
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dc80:[0x41801fc6874d]MemSched_WorldCleanup@vmkernel#nover+0x205 stack: 0x41238161dd00, 0x
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161df30:[0x41801fae317e]WorldCleanup@vmkernel#nover+0x1ce stack: 0x0, 0x412381627000, 0x4123
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dfd0:[0x41801fa6133a]helpFunc@vmkernel#nover+0x6b6 stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dff0:[0x41801fc56872]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0,
  • Changed Block Tracking might intermittently reset the changeTrackingSupported flag value from true to false. As a result, you might be unable to back up the virtual machine.
  • Attempts to set the resource pool to a user-defined resource pool might fail. This happens because the QoS priority tag in user-defined network resource pools does not take effect.
  • When you disable VDS vmknics in a host profile, they are not ignored during the compliance check or while applying the profile to a host. This can cause NSX preparation for stateless ESXi hosts to fail.
  • When the ServerView CIM Provider and the Emulex CIM Provider coexist on the same ESXi host, the Emulex CIM Provider, sfcb-emulex_ucn, might fail to respond, resulting in a failure to monitor hardware status.
  • When virtual machines are migrated or check-pointed along with VMM swapping, the host might stop responding due to excessive logging similar to the following:
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0

  • The hostd service might stop responding and fail with an error. This happens due to unavailability of memory in vmkctl, which causes hostd to take up more memory. An error message similar to the following might be displayed:
    Memory exceeds hard limit
  • After upgrading an ESXi host with 1.5 TB of memory from 5.1 to 6.0 on an HP server with an AMD processor, the host might unexpectedly stop responding or reboot. You will also see Uncorrectable Machine Check Exceptions (UMCEs) similar to the following in the Integrated Management Log (IML) file:
    Critical","CPU","##/##/2014 15:41","##/##/2014 15:41","1","Uncorrectable Machine Check Exception (Board 0, Processor 3, APIC ID 0x00000060, Bank 0x00000004, Status 0xF6000000'00070F0F, Address 0x00000050'61EA3B28, Misc 0x00000000'00000000)",
    Mode of failure: unexpected reboot. The IML shows that a UMCE occurred.

  • Attempts to remove a disk from a VMware vSAN diskgroup might result in a purple diagnostic screen as the diskgroup validation fails due to invalid metadata of the SSD/MD disk partition.
  • When applying a host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as the management vmknic.
  • When you attempt to upgrade VMware Tools for a virtual machine running on VMware ESXi 5.5 Update 3b, the auto-upgrade fails. An error message similar to the following is displayed:
    vix error code = 21009
    Note: The issue occurs if the following guest files exist on the virtual machine:
    Microsoft Windows VM:
    C:\Windows\Temp\vmware-SYSTEM\VMwareToolsUpgrader.exe
    Red Hat Enterprise Linux VM:
    /tmp/vmware-root
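
    A possible workaround, assuming these stale upgrader files are safe to remove in your environment, is to delete them from within the guest before retrying the upgrade:
    Windows: del C:\Windows\Temp\vmware-SYSTEM\VMwareToolsUpgrader.exe
    Linux: rm -rf /tmp/vmware-root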

  • A VMFS volume on an ESXi host might remain locked due to failed metadata operations. An error message similar to the following is observed in the vmkernel.log file:
    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.
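    If the volume remains locked, the lock can be released from the ESXi Shell with vmkfstools; use this with caution and only when no other host is performing operations on the volume. The device name below is taken from the example message:
    vmkfstools -B /vmfs/devices/disks/naa.50002ac002ba0956:1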
  • After you rename a powered-on virtual machine, if you run the esxcli vm process list command to get the list of running virtual machines from the host, the list might display the old virtual machine name.
  • You might receive a spurious warning when you start a virtual machine running OS X 10.9. A message similar to the following might be displayed:
    Your Mac OS guest might run unreliably with more than one virtual core. It is recommended that you power off the virtual machine, and set its number of virtual cores to one before powering it on again. If you continue, your guest might panic and you might lose data.

    The warning was intended to appear only for older versions of OS X and does not apply to OS X 10.9, which requires at least two processor cores to operate reliably. You can safely ignore the spurious warning.


  • An orphaned LSOM object might be retained after an All Paths Down (APD) disk error is observed on the Solid State Disk (SSD).
  • You might experience hardware monitoring failure when you use the third-party ServerView RAID Manager software. The sfcb-vmware_aux process stops responding due to a race condition.
  • When you collect the ESXi log bundle, the lldpnetmap command enables LLDP; however, LLDP can only be set to Both mode, and LLDP packets are sent out by the ESXi host. These packets might cause the FCoE link to go down.
  • Limiting the IOPS value for a virtual machine disk might result in achieved IOPS lower than the configured read/write operations limit. This issue occurs when the size of a read/write operation (I/O) equals the ESX I/O scheduler's cost unit size, which causes the scheduler to count a single I/O as multiple I/Os and throttle the I/Os. For example, if each I/O is counted as two I/Os, a disk limited to 1000 IOPS would be throttled to roughly 500 actual IOPS.
  • Enhancements to the xhci driver for USB 3.0 controllers on Apple MacPro 6.1 servers. Support for USB 3.0 controllers is also available on servers based on the Intel Xeon E3 v5 (E3-1200/E3-1500 v5) processor series, code-named Skylake-S. To enable the USB 3.0 functionality, you need to install all the VIBs packaged in this bulletin.
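    To confirm the updated USB controller driver VIBs after applying this bulletin, you can list them from the ESXi Shell:
    esxcli software vib list | grep -E 'xhci|ehci'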
  • The LOM names of the Intel SunrisePt NICs, Intel I219-V and I219-LM, have been corrected on Intel Skylake-S based servers.
  • This patch updates the tools-light VIB to include VMware Tools version 10.0.0. See the VMware Tools 10.0.0 Release Notes for the issues resolved in this release.
  • vSphere might not detect all 18 drives in the system, because the lsi_msgpt3 driver is unable to detect a single drive per HBA when there are multiple HBAs in the system.
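    After applying the patch, you can confirm that all expected drives are visible to the host from the ESXi Shell:
    esxcli storage core device list | grep -i 'Display Name'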