VMware ESXi 5.1, Patch ESXi-5.1.0-20151004001-standard

Article ID: 334329

Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-5.1.0-20151004001-standard
Build: For build information, see KB 2114860.
Vendor: VMware, Inc.
Release Date: October 01, 2015
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base:5.1.0-3.60.3070626
  • VMware:tools-light:5.1.0-3.60.3070626
PRs Fixed: 976312, 1005080, 1339681, 1378455, 1413747, 1424604, 1427488, 1431335, 1439183, 1443723, 1446167, 1453058, 1455442, 1455512, 1467027, 1467661, 1508384, 1333432, 1346116
Related CVE numbers: N/A

For information on patch and update classification, see KB 2014447.

To search for available patches, see the Patch Manager Download Portal.


Environment

VMware vSphere ESXi 5.1

Resolution

Summaries and Symptoms

This patch resolves the following issues:

  • The hostd.log file on an ESXi 5.1.x host might continuously log the following error:

    [54A08B90 info 'ha-eventmgr'] Event 6428 : Network memory pool free size dropped below 2 percent.

    The vSphere Client instance connected either directly to the host or to vCenter Server displays the event every 5 seconds. The event might be a false positive, and you might not notice any abnormal behavior in the system. This issue is resolved by accurately calculating the memory pool usage. For more information, see KB 2049815.

  • After you rename a powered-on virtual machine, running the esxcli vm process list command to list the running VMs on the host might still display the old name of the VM.
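
    The check can be reproduced from the ESXi Shell with the command named above, shown here for reference:

    # List the worlds of all running VMs; before this fix, the name
    # reported here could still show the VM's pre-rename name
    esxcli vm process list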

  • Attempts to delete a desktop pool for which the VDI environment is enabled might cause the VMDK files of another desktop pool to be deleted.

    An error message similar to the following is displayed:

    2014-07-23T14:08:03.408-07:00 [06072 info 'Default' opID=30545c60] [VpxLRO] -- ERROR task-19533 -- vm-1382 -- vim.ManagedEntity.destroy: vim.fault.FileNotFound:
    --> Result:
    --> (vim.fault.FileNotFound) {
    --> dynamicType = <unset>,
    --> faultCause = (vmodl.MethodFault) null,
    --> file = "[cntr-1] guest1-vm-4-vdm-user-disk-D-c7a28539-3d34-4771-9bcb-cb2386ec2016.vmdk",
    --> msg = "File [cntr-1] guest1-vm-4-vdm-user-disk-D-c7a28539-3d34-4771-9bcb-cb2386ec2016.vmdk was not found",
    --> }
    --> Args

    VMDK deletion occurs when the OS of a particular virtual machine and its user data disk are spread across different datastores. This issue is not visible when all virtual machine files reside in the same datastore.

  • Attempts to install or upgrade a Linux guest OS running kernel 3.10.0-210 or later might fail if the virtual machine is configured to use an E1000E network adapter.

    An error message similar to the following is written to vmcore-dmesg.txt by kdump in the Linux guest OS, or to the serial console logs:

    [ 1498.263326] divide error: 0000 [#1] SMP
    [ 1498.263340] Modules linked in: e1000e ptp pps_core bnep bluetooth rfkill fuse btrfs zlib_deflate raid6_pq xor vfat msdos fat ext4 mbcache jbd2 binfmt_misc ip6t_rpfilter ip6t_REJECT ipt_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw iptable_filter ip_tables ppdev vmw_balloon pcspkr serio_raw parport_pc parport shpchp i2c_piix4 vmw_vmci uinput xfs libcrc32c sr_mod cdrom ata_generic pata_acpi sd_mod crc_t10dif crct10dif_common vmwgfx drm_kms_helper ttm drm ata_piix ahci libahci vmw_pvscsi i2c_core libata floppy dm_mirror
    [ 1498.266670] dm_region_hash dm_log dm_mod
    [ 1498.266682] CPU: 0 PID: 4988 Comm: kworker/0:2 Not tainted 3.10.0-210.el7.x86_64 #1
    [ 1498.266689] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/30/2013
    [ 1498.266704] Workqueue: pciehp-192 pciehp_power_thread
    [ 1498.266728] task: ffff88004bd1db00 ti: ffff88006a8c4000 task.ti: ffff88006a8c4000
    [ 1498.266733] RIP: 0010:[<ffffffffa0663ed7>] [<ffffffffa0663ed7>] e1000e_cyclecounter_read+0xa7/0xc0 [e1000e]
    [ 1498.266764] RSP: 0000:ffff88006a8c7ae8 EFLAGS: 00010246
    [ 1498.266769] RAX: 0000000000000000 RBX: ffff8800368e37e0 RCX: 0000000000000000
    [ 1498.266773] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8800368e37c8
    [ 1498.266778] RBP: ffff88006a8c7ae8 R08: 0000000000000032 R09: 0000000000000000
    [ 1498.266782] R10: 00000007ffffffff R11: ffff88007a513000 R12: 13b0c54b2f2762ed
    [ 1498.266786] R13: 0000000000000000 R14: ffff8800368e0db8 R15: 0000000000000000
    [ 1498.266793] FS: 00007f8e68859a00(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
    [ 1498.266798] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    [ 1498.266802] CR2: 00007feb4a53b020 CR3: 00000000596b7000 CR4: 00000000000007f0
    [ 1498.266885] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 1498.266907] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [ 1498.266911] Stack:
    [ 1498.266914] ffff88006a8c7b08 ffffffff810ca68a ffff8800368e08c0 0000000001a00000
    [ 1498.266923] ffff88006a8c7b60 ffffffffa066d687 0000000000000286 ffff88006a8c7b38
    [ 1498.266930] 01a0000000000000 00000000b957e74c ffff8800368e08c0 ffff8800368e0db8
    [ 1498.266938] Call Trace:
    [ 1498.266950] [<ffffffff810ca68a>] timecounter_init+0x1a/0x30
    [ 1498.266973] [<ffffffffa066d687>] e1000e_config_hwtstamp+0x247/0x420 [e1000e]
    [ 1498.266994] [<ffffffffa066dd75>] e1000e_reset+0x285/0x620 [e1000e]
    [ 1498.267012] [<ffffffffa066f04a>] e1000_probe+0xbaa/0xee0 [e1000e]
    [ 1498.267021] [<ffffffff81306f75>] local_pci_probe+0x45/0xa0
    [ 1498.267029] [<ffffffff813083e5>] ? pci_match_device+0xc5/0xd0
    [ 1498.267036] [<ffffffff81308529>] pci_device_probe+0xf9/0x150
    [ 1498.267046] [<ffffffff813d22b7>] driver_probe_device+0x87/0x390

    For more information, see the corresponding Red Hat bug report.
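
    A quick way to confirm which virtual NIC a VM is configured with is to inspect its .vmx file from the ESXi Shell; the datastore path below is a placeholder:

    # Show the type of the VM's first virtual NIC; a value of "e1000e"
    # indicates the affected E1000E adapter (path is an example)
    grep -i 'ethernet0.virtualDev' /vmfs/volumes/datastore1/guest1/guest1.vmx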

  • A message similar to the following is written to the syslog.log file at /var/log/ on the ESXi 5.x host if multiple instances of the lsassd, netlogond or lwiod daemons are running at the same time:

    lsassd[<value>]: <value>:Terminating on fatal IPC exception

    This issue might occur during the ESXi host upgrade process.

    For more information, see KB 2051707.
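
    To check whether a host is hitting this condition, you can search the log mentioned above from the ESXi Shell:

    # Count occurrences of the fatal IPC exception in the current syslog
    grep -c 'Terminating on fatal IPC exception' /var/log/syslog.log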

  • When ServerView and the Emulex CIM Provider coexist on the same ESXi host, the Emulex CIM Provider (sfcb-emulex_ucn) might stop responding. As a result, the CIM Provider might fail to monitor the hardware status.

  • Applying a host profile to a stateless ESXi host connected to a large number of storage LUNs might increase the host's reboot time when stateless caching is enabled with 'esx' as the first disk argument. This happens when you manually apply the host profile or when the host reboots.

  • When an ESXi 5.1 host receives a nonmaskable interrupt (NMI) triggered due to a hardware error, the host might fail without a purple screen.

    The ESXi host displays an error similar to the following:

    YYYY-MM-DDT03:12:21.546Z cpu0:16969934)@BlueScreen: LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed. This may be a hardware problem; please contact your hardware vendor.

  • An ESXi 5.1 host might not receive SNMPv3 traps if you use a third-party management tool to collect SNMP data.
    Entries similar to the following are written to the syslog file:

    <yyyy-mm-dd>T<time> snmpd: snmpd: snmp_main: rx packet size=151 from: 172.20.58.220:59313
    <yyyy-mm-dd>T<time> snmpd: snmpd: SrParseV3SnmpMessage: authSnmpEngineBoots(0) same as 0, authSnmpEngineTime(2772) within 0 +- 150 not in time window
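
    The host's SNMP agent is configured through esxcli. A minimal sketch follows; the receiver host, port, and remote user name are placeholders, and the exact --v3targets string format should be verified against the vSphere 5.1 SNMP documentation:

    # Review the current agent configuration
    esxcli system snmp get
    # Point the agent at an SNMPv3 trap target (hypothetical receiver/user)
    esxcli system snmp set --v3targets nms.example.com@162/monuser/priv/trap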

  • When multiple LUNs are connected to an ESXi host, the snmpwalk command might time out after a LUN rescan, and the ESXi host might stop responding when SNMP queries are sent using a MIB file.

    Logs similar to the following are written to the syslog file:

    YYYY-MM-DDT20:09:03Z snmpd: snmpd: Searching for requested instance of hrStorageDescr
    YYYY-MM-DDT20:09:20Z snmpd: GetStorageInfo: cache miss, loaded 56 entries into storage cache
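
    The hrStorageDescr object referenced in the log is part of the standard HOST-RESOURCES-MIB, so the query can be reproduced from a management station running net-snmp; the host name and community string are placeholders:

    # Walk the storage description table that triggers the storage
    # cache reload on the host
    snmpwalk -v 2c -c public esxi-host.example.com HOST-RESOURCES-MIB::hrStorageDescr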

  • When multiple virtual machines share storage space, PowerCLI output might be incorrect, and the vSphere Client and vSphere Web Client summary pages might display incorrect values for the following:

    * Not-shared Storage on the VM Summary page
    * Provisioned Space on the datastore Summary page
    * Used Space on the VM tab of the host

  • A virtual machine configured with a VMXNET3 virtual adapter might fail when you boot the VM from iPXE, an open-source boot firmware.

  • An ESXi 5.x host might get disconnected from vCenter Server when the Small Footprint CIM Broker daemon (sfcbd) service exhausts all inodes. The host might fail to reconnect to vCenter Server after entering this state.

    A message similar to the following is reported in /var/log/hostd.log, indicating that the ESXi 5.x host is out of space:

    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device

    A message similar to the following is written to /var/log/vmkernel.log indicating that the ESXi 5.x host is out of inodes:

    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
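
    To check whether a host is approaching this state, inspect the visorfs inode usage from the ESXi Shell; a minimal sketch, assuming the BusyBox stat applet on the host supports -f as it does on 5.x builds:

    # Show free vs. total inodes on the in-memory root file system
    stat -f /
    # Per-ramdisk view of the same resources
    esxcli system visorfs ramdisk list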

  • An ESXi host might get disconnected from the Active Directory domain and lose connection with vCenter Server when the cache consumes the entire memory allocated to lsassd. This issue is resolved by limiting the cache size to a default value of 10 MB.

  • A VMFS volume on an ESXi host might remain locked due to failed metadata operations.

    An error message similar to the following is written to vmkernel.log:

    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.
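
    If the volume stays locked after the remote host has recovered, vmkfstools offers a break-lock operation. A hedged sketch using the device named in the message above; run it only when you are certain no other host is actively using the device:

    # Break the stale device lock left behind by the failed operation
    vmkfstools -B /vmfs/devices/disks/naa.50002ac002ba0956:1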

  • Under memory overload, an ESXi host might experience a purple diagnostic screen and fail to respond.

    A trace similar to the following is displayed on the host:

    @BlueScreen: PCPU 4: no heartbeat (2/2 IPIs received)
    Code start: 0x418023200000 VMK uptime: 0:00:58:25.198
    Saved backtrace from: pcpu 4 Heartbeat NMI
    SP_WaitLockIRQ@vmkernel#nover+0x168 stack: 0x1fe00000
    LPage_ReapPools@vmkernel#nover+0x14e stack: 0x1
    MemDistributeAllocateAndTestPages@vmkernel#nover+0x229 stack: 0x4123
    MemDistributeAllocPagesWait@vmkernel#nover+0x2b stack: 0x4123648a500
    MemDistributeAllocPagesWaitLegacy@vmkernel#nover+0xae stack: 0x0
    MemDistributeAllocVMPageWaitInt@vmkernel#nover+0xd9 stack: 0x0
    MemDistribute_AllocSharedVMPageWait@vmkernel#nover+0x25 stack: 0x418
    VmMem_AllocSharedPage@vmkernel#nover+0xf3 stack: 0x1
    VmMemCowSharePageInt@vmkernel#nover+0x12e stack: 0x0
    VmMemCow_SharePages@vmkernel#nover+0x341 stack: 0x410077a032a4
    VMMVMKCall_Call@vmkernel#nover+0x186 stack: 0x0

  • IPv6 Router Advertisements (RA) might not function as expected on a Linux virtual machine when a VMXNET3 adapter uses 802.1q VLAN tagging. This occurs when the IPv6 RA address intended for the VLAN interface is delivered to the base interface.

  • When you attempt to create a quiesced snapshot of a Linux virtual machine, the VM might fail after the snapshot operation and require a reboot.

    An error message similar to the following is written to vmware.log:

    <YYYY-MM-DD>T<time>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: done with snapshot 'smvi_UUID': 0
    <YYYY-MM-DD>T<time>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).
    <YYYY-MM-DD>T<time>Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    <YYYY-MM-DD>T<time>Z| vmx| I120: Vix: [18631 guestCommands.c:1926]: Error VIX_E_TOOLS_NOT_RUNNING in MAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest

    For more information, see KB 2116120.
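
    To reproduce a quiesced snapshot from the ESXi Shell, vim-cmd can request one directly; the VM ID below is hypothetical and comes from vmsvc/getallvms:

    # Find the VM ID
    vim-cmd vmsvc/getallvms
    # Take a snapshot with includeMemory=0 and quiesce=1 (ID 42 is an example)
    vim-cmd vmsvc/snapshot.create 42 testsnap "quiesce test" 0 1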

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs with the esxcli software vib command.
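
A minimal sketch of the manual path from the ESXi Shell, assuming the downloaded depot ZIP has been copied to a datastore (the file name and datastore are placeholders):

    # Verify the image profile contained in the depot
    esxcli software sources profile list -d /vmfs/volumes/datastore1/patch-depot.zip
    # Apply the profile from this bulletin, then reboot the host
    esxcli software profile update -d /vmfs/volumes/datastore1/patch-depot.zip -p ESXi-5.1.0-20151004001-standard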