
VMware ESXi 5.5, Patch ESXi550-201709401-BG: Updates esx-base VIB (2150883)


Details

Release date: September 14, 2017

Patch Category: Bugfix
Patch Severity: Important
Build: For build information, see KB 2150882.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_esx-base_5.5.0-3.106.6480324
PRs Fixed: 1511965, 1668304, 1680529, 1735061, 1749444, 1755575, 1757454, 1760273, 1761626, 1767785, 1768112, 1791036, 1791732, 1794683, 1798929, 1800029, 1808211, 1830621, 1833401, 1848578, 1853014, 1855335, 1872521, 1911169, 1798824, 1825988, 1819724, 1699831, 1412562, 1615627
Related CVE numbers: N/A


Solution

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

· A newly added physical network interface controller (PNIC) might use the virtual MAC address 00:00:00:00:00:00 during communication, even after a host reboot, because the PNIC might not have an entry in the esx.conf file (see the diagnostic sketch below).
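A minimal diagnostic sketch for checking whether a PNIC has been recorded in esx.conf; the /net/pnic key prefix shown is an assumption about the file layout, not taken from this article:

  # From the ESXi Shell: look for per-PNIC entries (including MAC addresses)
  # recorded in esx.conf; a missing entry matches the symptom above.
  grep -A 3 "/net/pnic" /etc/vmware/esx.conf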

· The Service Location Protocol Daemon (SLPD) might fail when the last octet of an ESXi IP address is set to 127, for example xxx.xxx.xxx.127.

· In some cases, the vmx process might become unresponsive before a vMotion migration completes. Restarting the hostd process might then cause hostd to fail and the host to be disconnected from vCenter Server.

· When the hardware detects certain corrected machine check errors, the ESXi host might fail with a purple diagnostic screen.

· When a virtual machine has multiple disks with different absolute paths but the same file name, and the parent of any delta disk is inaccessible, a snapshot consolidation might reparent the delta disk to a wrong parent, breaking the disk chain and powering off the virtual machine.

· While retrieving storage devices registered with the Pluggable Storage Architecture (PSA) by using the esxcli storage core device list command (shown below), hostd might fail, because memory leaks can push hostd memory usage over the configured memory hard limit.
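For reference, this is the affected command as you would run it from the ESXi Shell; on a patched host it simply lists the PSA-registered devices:

  # List all storage devices registered with PSA; repeated use of this
  # command exposed the hostd memory leak fixed by this patch.
  esxcli storage core device list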

· An ESXi host might stop responding or even fail if LUNs connected to the host through a Broadcom/Emulex Fibre Channel adapter (lpfc driver) are unmapped on the storage array side while I/O to those LUNs is still running.

· The Guest OS might appear to slow down, or might experience a CPU spike that disappears after address space layout randomization (ASLR) is disabled in the Guest OS and a fast suspend and resume (FSR) is performed on the virtual machine.
Such behavior might occur if both of the following conditions are present:
1. The translation cache fills with translations for numerous user-level CPUID/RDTSC instructions that are encountered at different virtual addresses in the Guest OS.
2. The virtual machine monitor uses a hash function with poor dispersion when checking for existing translations.
Disabling ASLR resolves the issue temporarily, until the ESXi host is upgraded to a version that contains the fix.

· A successful snapshot disk consolidation of one or more disks, followed by an unsuccessful snapshot disk consolidation, can lead to an incorrect Virtual Machine eXecutable (VMX) configuration. This can cause a virtual machine power-on failure.

· On an ESXi host running a debug build, if dvPorts or dvPort groups that are part of a distributed virtual switch (DVS) have Internet Protocol Flow Information eXport (IPFIX) in NetFlow disabled, and no other dvPorts or dvPort groups with IPFIX enabled on the same DVS remain on the host, the host might hit a lock rank violation and fail with a purple diagnostic screen.

· While you take a snapshot of a virtual machine (VM), the VM might become unresponsive.

· When you add an ESXi host to a vSphere Distributed Switch, the hostd.log file might get flooded with the error message: failed to get vsi stat set: Sysinfo error on operation returned status : Not found.

· In some cases, migrating a virtual machine with vMotion from an ESXi host with the ESXi550-201505002 patch to a host with a later version, for example one with the ESXi550-201609001 patch, might cause the Guest OS to fail.

· When you use the ESXi command-line utility to run multiple requests for the host configuration or advanced options with the vim-cmd hostsvc/hostconfig command (see the sketch below), a memory leak might occur. The leak can be observed in hostd.log, where each occurrence of the message Failed to read advanced option subtree UserVars corresponds to one leaked instance of the AdvancedUserOptionBranchImpl object. As a result, the hostd service might stop responding when hostd memory usage exceeds the configurable hard limit.
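A minimal sketch of the kind of repeated invocation that exposed the leak; the loop itself is illustrative and not taken from this article:

  # From the ESXi Shell: issue the affected request repeatedly. Before this
  # fix, each call could leak one AdvancedUserOptionBranchImpl object in hostd.
  for i in $(seq 1 100); do
      vim-cmd hostsvc/hostconfig > /dev/null
  done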

· When the dump file set operation is invoked through esxcfg-dumppart or other commands multiple times in parallel, an ESXi host might stop responding and display a purple diagnostic screen with entries similar to the following, as a result of a race condition while the dump block map is freed (a reproduction sketch follows the trace):

@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4907 - Corruption in dlmalloc
Code start: 0xnnnnnnnnnnnn VMK uptime: 234:01:32:49.087
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x4d stack: 0xnnnnnnnnnnnn
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0x6c7 stack: 0x8
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xb9 stack: 0xbad000e
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Dump_SetFile@vmkernel#nover+0x155 stack: 0xnnnnnnnnnnnn
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SystemVsi_DumpFileSet@vmkernel#nover+0x4b stack: 0x0
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x41f stack: 0x4fc
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0x394 stack: 0x0
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0xb4 stack: 0xffb0b9c8
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0
0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0
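A hedged sketch of the parallel-invocation pattern described above. The race was in the dump file set path; the read-only -l (list) query shown here is only a safe stand-in for running esxcfg-dumppart concurrently, since this article does not give the exact triggering sequence:

  # Invoke esxcfg-dumppart concurrently; before this patch, parallel dump
  # file set operations could race while the dump block map was being freed.
  # (-l is a read-only stand-in; the actual race involved set operations.)
  esxcfg-dumppart -l &
  esxcfg-dumppart -l &
  wait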


· The legacy Fault Tolerance (FT) feature requires the Primary and Secondary virtual machines to run on ESXi hosts with compatible versions. If the protected virtual machine runs on an ESXi host with a newer version, such as ESXi 5.5 Update 3a or later, than the host meant for the duplicate virtual machine, for example ESXi 5.5, legacy FT might fail.

· The CBRC filter uses 32-bit arithmetic to compute the completion percentage returned for each digest recompute request. For large disks, the number of hashes is large enough to overflow the 32-bit calculation, resulting in an incorrect completion percentage.

· The VMXNET3 network adapter might hang if network traffic is running inside a virtual machine at the same time that VMXNET3 performs a connect or disconnect operation. This might cause the virtual machine to lose network connectivity.

· An ESXi host that uses a vSphere Distributed Switch (VDS) might fail if a corrupted port data file is created when a virtual machine connects to a port on the distributed switch.

· Multicast and broadcast request packets between two virtual machines might be dropped if one virtual machine is configured with guest VLAN tagging, the other is configured with virtual switch VLAN tagging, and VLAN offload is turned off on the virtual machines.

· For a virtual machine (VM) with an e1000/e1000e vNIC, a loss of network connectivity might occur when the e1000/e1000e driver tells the VMkernel e1000/e1000e emulation to skip a descriptor, that is, when the transmit descriptor address and length are 0.

· If the active memory of a virtual machine that runs on an ESXi host drops to zero, the host might start reclaiming memory even if the host has enough free memory.

· The ESXi550-201612001 patch includes a change that causes the ESXi host to disable the Intel® IOMMU (also known as VT-d) interrupt remapper functionality. On HPE ProLiant Gen8 servers, disabling this functionality causes PCI errors. As a result, the platform generates an NMI that causes the ESXi host to fail with a purple diagnostic screen. This fix re-enables the Intel® IOMMU interrupt remapper functionality by default to avoid the failures on HPE ProLiant Gen8 servers.

· A driver that creates millions of heap chunks might hog a CPU while heap statistics are collected. Using the vm-support command for heap statistics collection might then cause the ESXi host to fail with a purple screen and a panic message such as "TLB invalidation timeout", for example: "@BlueScreen: PCPU 40 locked up. Failed to ack TLB invalidate".

· When you run XvMotion across both host and datastore and the source host does not have access to the storage of the target, poor datastore performance or networking on the source host might cause the migration to fail, because reading or sending virtual disk content might fail. Workaround: Improve the storage and network performance of the source host.

· An ESXi host might fail with purple diagnostic screen due to a race condition when handling the SCSI-3 persistent reservations command "REGISTER AND IGNORE EXISTING KEY".

· Failures of the MODE_SENSE(0x1a) SCSI command could cause a log spew in the VMkernel logs. With this fix, the spew is no longer observed.

· If you try to configure CIM indications through the VMware vSphere API, the hardware monitoring service sfcbd might fail or create invalid pointers. You might observe zdump core files, as well as diagnostic messages on the Direct Console User Interface (DCUI) and in log files. To avoid these failures, use host profiles to manage CIM indications instead of the API.

· When an ESXi host has a large number of LUNs, scripted installation of ESXi and stateless caching through vSphere Auto Deploy take more time to scan all disks, which delays the ESXi host boot. This fix improves the performance of both Auto Deploy stateless caching and scripted ESXi installation.

· The OpenSSL package is updated to version openssl-1.0.2k.

· The ESXi Network Time Protocol (NTP) package version is updated to ntp-4.2.8p9.

· The libPNG library is updated to libpng-1.6.29.

· The Python library is updated to version 2.7.13.

· The Pixman library is updated to version 0.35.1.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command (see the sketch below). Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
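A minimal sketch of the manual workflow from the ESXi Shell, assuming the offline bundle has already been uploaded to a datastore; the datastore path and bundle file name below are placeholders, not values from this article:

  # Enter maintenance mode before patching (a host reboot is required for this patch).
  esxcli system maintenanceMode set --enable true
  # Install the VIBs from the downloaded offline bundle; "update" only
  # replaces VIBs that are older than those in the bundle.
  esxcli software vib update -d /vmfs/volumes/datastore1/<patch-bundle>.zip
  # Alternatively, apply a full image profile from the same bundle:
  # esxcli software profile update -d /vmfs/volumes/datastore1/<patch-bundle>.zip -p <image-profile-name>
  reboot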

