VMware ESXi 6.0, Patch ESXi-6.0.0-20160804001-standard

Article ID: 334516

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-6.0.0-20160804001-standard
Build: For build information, see KB 2145663.
Vendor: VMware, Inc.
Release Date: Aug 4, 2016
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base:6.0.0-2.43.4192238
  • VMware:vsanhealth:6.0.0-3000000.3.0.2.43.4064824
  • VMware:vsan:6.0.0-2.43.4097166
  • VMware:net-vmxnet3:1.1.3.0-3vmw.600.2.43.4192238
  • VMware:tools-light:6.0.0-2.43.4192238
  • VMware:esx-ui:1.4.0-3959074
  • VMware:misc-drivers:6.0.0-2.43.4192238
PRs Fixed: 912743, 1390800, 1464308, 1464736, 1492308, 1503784, 1509011, 1569235, 1570739, 1584180, 1584677, 1585409, 1585860, 1587367, 1588508, 1588592, 1594808, 1597745, 1598668, 1600954, 1603451, 1603974, 1604145, 1604946, 1607871, 1613520, 1614135, 1625507, 1626207, 1633201, 1634222, 1636703, 1638971, 1640412, 1648907, 1649204, 1652951, 1653937, 1659449, 1660217, 1673256, 1626609
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 6.0

Resolution

Summaries and Symptoms

This patch resolves the following issues:

  • In some cases, the configuration files are not preserved when you update the VIBs and are not removed when you remove the VIBs.

  • The Network Performance Charts in vCenter Server 6.0 show dropped packets under the Receive Packets Dropped and Transmit Packets Dropped counters because packets filtered by the I/O chain are incorrectly recorded as dropped packets. This is a reporting issue; the packets are not dropped. For additional details, see KB 2052917.

  • When you migrate virtual machines off an ESXi host that is entering maintenance mode, the ESXi host stops responding and displays a purple diagnostic screen with messages similar to the following:

    cpu14:33266)@BlueScreen: NOT_IMPLEMENTED bora/vmkernel/sched/cpusched.c:9556
    cpu14:33266)Code start: 0xnnnnnnnnnnnn VMK uptime: 4:18:56:59.724
    cpu14:33266)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn
    ...
    ...
    cpu14:33266)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CpuSchedIdleLoopInt@vmkernel#nover+0x22d stack: 0x4000
    cpu14:33266)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CpuSchedDispatch@vmkernel#nover+0x1576 stack: 0xnnnnnnnnnnnn

  • In a VVol environment, a VM deployed from an OVF template that was created from a VM with snapshots fails to boot the guest OS.

  • Attempts to start vApps after deployment might fail with error messages similar to the following:

    [ nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn ] Unable to start vApp "QA-2016ohno--nnnnnnnn".
    - A general system error occurred: DVS error: see faultCause
    vCenter Server task (moref: task-24910) failed in vCenter Server 'uss1-2-vc1' (nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn).
    - A general system error occurred: DVS error: see faultCause

    The issue occurs due to a race condition in which concurrent applyDV* API calls from multiple threads attempt add, edit, or remove operations on individual ports or port groups on the same host.

  • When you take a quiesced snapshot of a Fault Tolerance protected Windows 2008 (or later) virtual machine using the VMware Tools VSS provider, the Fault Tolerance state changes from "Protected" to "Unprotected."

  • After you connect to a local CD/DVD ISO image on VMware Remote Console (VMRC), there is no option to disconnect the mounted ISO image other than closing the VMRC window.

  • ESXi does not support the descriptor format for SCSI sense data, so ESXi hosts cannot detect drives larger than 2TB that return sense data in the descriptor format.

    This patch resolves the issue by converting the descriptor format sense data to fixed format sense data.

  • ESXi 6.0 hosts report a warning message similar to the following when you upgrade the VMFS volume(s) from VMFS3 to VMFS5:

    Deprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version.
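
    If you want to confirm the actual VMFS version of the volumes mounted on the host when you see this warning, you can list the file systems (a hedged example; the output columns may vary by build):

    esxcli storage filesystem list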

  • The vSAN device queue depth as reported through the VMware Sysinfo Interface (VSI) and system logs might be inaccurate when compared to the actual device queue depth.

  • Attempts to apply a host profile on an Auto Deployed ESXi host during the first boot might fail. The issue occurs because the host is unable to enter maintenance mode if the base image used for Auto Deploy contains an expired evaluation license. For additional details, see KB 2116320.

  • ESXCLI commands network vm port list, network vm list, network nic vlan stats get, network port filter stats get, and network port stats get do not support retrieving data from opaque switches.
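
    As a point of reference, a hedged example of how two of the affected commands are typically invoked (the world ID placeholder comes from the output of the first command):

    esxcli network vm list
    esxcli network vm port list -w <world-id>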

  • When you perform vMotion to migrate a hardware version 11 VM from one ESXi host to another, the vMotion takes approximately 60 seconds and the VM stops responding. Messages similar to the following are logged in the hostd.log file:

    succeeded with result: (vim.host.VMotionManager.SrcVMotionResult) {
    --> vmDowntime = 16676,
    --> vmPrecopyStunTime = 44100209,
    --> vmPrecopyBandwidth = 10095931
    --> }


    Here, the high vmPrecopyStunTime value indicates that installing the memory trace took longer than expected.

  • The vendor name of an image profile changes to the hostname if the VIBs of the image profile are modified.

  • When the Path Selection Policy is set to Round Robin, performing vMotion of the active node causes the SQL database to become inaccessible. Attempts to write to the database fail and the cluster resources appear failed. In addition, NTFS error 140 and cluster event ID 1069 are logged.

  • When Fault Tolerance is enabled, the FT protocol might encounter a race condition that results in large increases in guest VM network latency and a decrease in VM network bandwidth.

  • ESXi generates shadow NIC virtual MAC addresses that are duplicated across multiple ESXi hosts in the environment. The issue impacts the healthCheck functionality.

    This patch resolves the issue; however, you need to run the following steps when you upgrade the ESXi host for the changes to take effect (a sketch for step 2 appears after the steps):

    1. Upgrade to ESXi patch release, ESXi600-201608001.

    2. Remove all virtualMac entries from /etc/vmware/esx.conf file. The entries are similar to:

      /net/pnic/child[0001]/virtualMac = "00:50:56:df:ff:06"

    3. Run the esxcfg-nics -r command.

    4. Reboot the ESXi host.
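
    For step 2, a minimal, hedged sketch of one way to remove the virtualMac entries while keeping a backup copy of the file (the backup file name is an arbitrary choice):

      cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
      grep -v 'virtualMac' /etc/vmware/esx.conf.bak > /etc/vmware/esx.conf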

    You can also edit the virtualMAC address using the following CLI command:

    #esxcli network nic set -n vmnic0 -V "00:50:56:59:82:e0"

  • Attempts to deploy a large number of VMs by creating linked clones using Horizon View 6.1.1 across multiple hosts might fail. Error messages similar to the following are logged in the hostd.log file:

    FileIOErrno2Result: Unexpected errno=116, Stale NFS file handle

    and

    Reconfigure failed: vim.fault.FileLocked

  • This patch introduces the ability to detect newer versions of VMFS, thereby preventing accidental overwrites.

  • An ESXi host might stop responding and display a purple diagnostic screen due to certain memory conditions combined with the Virtual Machine Manager (VMM) leader exiting during asynchronous I/O operations in an IOFilter environment. A backtrace similar to the following is displayed:

    cpu6:97457)Backtrace for current CPU #6, worldID=97457, rbp=0xnnnnnnnn
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UserPTE_GetMPN@<None>#<None>+0xa stack: 0x0, 0x0, 0xnnnnnnnnnnnn, 0x
    ...
    ...
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UnpinPageCount@<None>#<None>+0x5a stack: 0xnnnnnnnnnnnn, 0x439d
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UserAIOUnpinUserBuffer@<None>#<None>+0xa0 stack: 0x0, 0xnnnnnnnnnnnn
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UserAIODestroyIO@<None>#<None>+0xb7 stack: 0xnnnnnnnnnnnn, 0xnnnnnnnn
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UserAIOCleanupContext@<None>#<None>+0x113 stack: 0x0, 0x0, 0x0, 0x41
    cpu6:97457)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UserAIO_EarlyCartelCleanup@<None>#<None>+0x47 stack: 0x3, 0xnnnnnnn
    ...

  • During disk group creation or initial setup, the queue depth is set for a device; however, the device UUID is not recorded in a way that correlates with the log message, so you cannot see for which device the queue depth is being set. You can see log messages similar to the following in the vmkernel.log file:

    cpu6:32924)LSOMCommon: IORETRYQDepthChangeCB:1869: New Queue Depth for device nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn set to 180
    cpu6:32924)LSOMCommon: IORETRYQDepthChangeCB:1869: New Queue Depth for device nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn set to 180

  • An ESXi host displays a purple diagnostic screen when a node in the fcntl locklist is freed without being removed from the list, leading to fcntl heap exhaustion.

  • ESXi 6.0 hosts that use mpt2sas or mptsas drivers might fail with a purple diagnostic screen containing entries similar to the following:

    @BlueScreen: #PF Exception 14 in world 33513:tq:tracing IP 0xnnnnnnnn addr 0xnnnnnnnn
    0xnnnnnnnn:[0xnnnnnnnn]Panic_Exception@vmkernel#nover+0x258
    0xnnnnnnnn:[0xnnnnnnnn]IDTReturnPrepare@vmkernel#nover+0x174
    0xnnnnnnnn:[0xnnnnnnnn]gate_entry_@vmkernel#nover+0x0
    0xnnnnnnnn:[0xnnnnnnnn]VmkDrv_WakeupAllWaiters@vmkernel#nover+0xc6
    0xnnnnnnnn:[0xnnnnnnnn]VmkTimerQueueWorldFunc@vmkernel#nover+0x21f
    0xnnnnnnnn:[0xnnnnnnnn]CpuSched_StartWorld@vmkernel#nover+0x

  • Virtual machines running OpenServer 6.0.0V for VMware on ESXi 6.0 with an E1000 network adapter show CPU usage that increases to 100% and stop responding after some time.

  • The ESXi host smartd daemon reports an incorrect SMART result for disk temperature on the HGST HUS724030AL and HITACHI HDT721075SLA360 disks, thereby triggering alerts every 30 minutes. An error log similar to the following is logged in the syslog.log file:

    nnnn-nn-nnTnn:nn:nnZ smartd: [warn] naa.5000cca22cc043f6: above TEMPERATURE threshold (176 > 0)

    For additional details, see KB 2146233.
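
    To inspect the SMART data that smartd reads for a disk, you can query the device directly; this hedged example reuses the device identifier from the log line above (substitute your own device):

    esxcli storage core device smart get -d naa.5000cca22cc043f6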

  • This patch sets Round Robin (PSP_RR) with iops=1 as the default Path Selection Policy for the XtremIO array, model XtremApp.

  • In ESXi 6.0, the hostname command, uname -n, and gethostname() return only the DNS short name (hostname) instead of the FQDN (hostname.domainname).
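
    A hedged way to compare the names the host reports is to check them directly on the host:

    hostname
    esxcli system hostname get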

  • Attempts to create a VM on vSAN will fail if you create disk groups on a non-vSAN ESXi host by running the esxcli vsan storage add command before adding the host to the vSAN cluster. The issue occurs because the vSAN disk management layer creates some components at disk group creation time only when vSAN is enabled, and not during esxcli command execution (see the sketch below).
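
    A minimal, hedged sketch of the working order, using placeholder identifiers: add the host to the vSAN cluster first, then create the disk group:

    esxcli vsan cluster join -u <cluster-uuid>
    esxcli vsan storage add -s <ssd-device> -d <capacity-device>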

  • ESXi 6.0 hosts joined to an Active Directory domain become unresponsive and disconnect from vCenter Server. You can see entries similar to the following in the /var/log/vmkernel.log file:

    ALERT: hostd detected to be non-responsive

  • Multiple ESXi hosts might stop responding with a purple diagnostic screen, with a backtrace similar to the following in a multicast snooping world, if there is a multicast packet storm:

    (gdb) bt
    #0 RefCountIsSumZero (refCounter=0xnnnnnnnnnnnn, ra=<value optimized out>) at bora/vmkernel/main/refCount.c:355
    #1 RefCountBlockSpin (refCounter=0xnnnnnnnnnnnn, ra=<value optimized out>) at bora/vmkernel/main/refCount.c:571
    #2 RefCountBlock (refCounter=0xnnnnnnnnnnnn, ra=<value optimized out>) at bora/vmkernel/main/refCount.c:616
    #3 0xnnnnnnnnnnnnnnnn in RefCount_BlockWithRA (data=<value optimized out>) at bora/vmkernel/private/refCount.h:740
    #4 Portset_LockExclWithRA (data=<value optimized out>) at bora/vmkernel/net/portset.h:862
    #5 Portset_LockExcl (data=<value optimized out>) at bora/vmkernel/net/portset.h:870
    #6 McastFilter_ProcessIGMPQuery (data=<value optimized out>) at bora/modules/vmkernel/etherswitch/mcastFilterES.c:646
    #7 McastFilter_ProcessIGMPList (data=<value optimized out>) at bora/modules/vmkernel/etherswitch/mcastFilterES.c:1097
    #8 McastFilterSnoopingWorldCB (data=<value optimized out>) at bora/modules/vmkernel/etherswitch/mcastFilterES.c:1602
    #9 0xnnnnnnnnnnnnnnnn in CpuSched_StartWorld (destWorld=<value optimized out>, previous=0xnnnnnnnnnnnn) at bora/vmkernel/sched/cpusched.c:10756
    #10 0x0000000000000000 in ?? ()

  • Fault Tolerance protected virtual machines with Distributed Virtual Switch (DVS) backed NICs might lose network connectivity on failover after a secondary vMotion.

  • Memory statistics cannot be queried using SNMP because the Real Memory entry is missing from the hrStorageTable, leaving a gap in SNMP-based monitoring of the environment.
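
    For context, a hedged example of how the hrStorageTable is typically polled from a management station with net-snmp tools (the community string and host name are placeholders, and the HOST-RESOURCES-MIB must be available on the polling side):

    snmpwalk -v 2c -c public esxi-host.example.com HOST-RESOURCES-MIB::hrStorageTable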

  • Deleting device nodes (/dev/XXX) in ESXi manually or using scripts causes the host to stop responding and display a purple diagnostic screen with a backtrace similar to the following:

    cpu21:nnnnn)@BlueScreen: #GP Exception 13 in world nnnnn:sshd @ 0xnnnnnnnnnnnn
    cpu21:nnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 0:00:35:50.862
    cpu21:nnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SPLockWork@vmkernel#nover+0x19 stack: 0xnnnnnnnnnnnn
    cpu21:nnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SemaphoreBeginReadInt@vmkernel#nover+0x1c stack: 0xnnnnnnnnnnnn
    cpu21:nnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]FSS_BeginIOShared@vmkernel#nover+0x11 stack: 0xnnnnnnnnnnnn
    cpu21:nnnnn)base fs=0x0 gs=0xnnnnnnnnnnnn Kgs=0x0

  • When you configure GRUB in a RHEL VM to use the at_keyboard input (to allow configuration of the AZERTY keymap), the countdown timer fails to activate on boot.

  • vCenter Server currently displays the VVol container capacity based only on the physical total (physicalTotalMB). When the logical space limit (logicalLimitMB) field is non-zero, logicalLimitMB should supersede physicalTotalMB as the container capacity, but this does not happen.

    physicalTotalMB - Total storage space in the storage container or storage profile's allocation pool, in megabytes (MB).
    logicalLimitMB - Total logical space limit set by the storage administrator for the storage container or storage profile's allocation pool, in megabytes (MB).

    This patch resolves the issue by deriving container capacity from the logicalLimitMB if it is non-zero and from physicalTotalMB otherwise.

  • Attempts to log in with certain Active Directory users might fail with an error message similar to the following in the syslog file:

    encoded packet size too big (11796 > 4096)

  • In the vSphere Client, the End VMware Tools Install option displayed during the VMware Tools installation process on Linux, FreeBSD, or Solaris virtual machines persists even after a successful installation. The option does not revert to the Install/Upgrade VMware Tools option as expected, either automatically or when you click End VMware Tools Install.

    The issue prevents vMotion because the VMware Tools installation is considered to still be in progress.

    Note: In the vSphere Web Client, the same option appears as Unmount VMware Tools Installer.

  • When you create a VM with the CPU limit and reservation set to maximum and sched.cpu.latencySensitivity is set to high, the exclusive affinity for the vCPUs might not get enabled.

    In earlier releases, the VM did not display a warning message when the CPU was not fully reserved.

  • The cluster summary displays a false alarm for the object version compliance check when an object is in an inaccessible state. An alarm message similar to the following is displayed:

    there are old version Virtual SAN objects, upgrade to version 3 suggested

  • On an ESXi 6.0 host, the NFS volumes might not be restored after a reboot. This issue occurs if there is a delay in resolving the NFS server hostname to an IP address.

  • The ESXi host might stop responding and display a purple diagnostic screen with LINT1/NMI (motherboard nonmaskable interrupt) if the device memory is dumped in the event of a vmx failure. A backtrace similar to the following is displayed:

    UserMem: nnnn: Failed to allocate pagetables for mmInfo: 0xnnnnnnnnnnnn, startAddr: nnnnnnnnn, length: nnnnnnn, pagePool: nnnnnnnnnnnnnnnnnnnn, status: Out of memory^
    @BlueScreen: LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed. This may be a hardware problem; please contact your hardware vendor.
    PanicvPanicInt
    Panic_NoSave
    NMICheckLint1Bottom
    BH_DrainAndDisableInterrupts
    VMMVMKCall_Call

  • During the VM heartbeat lost and recovered events of a VM, SNMP traps are sent with different vmwVmIDs for the same VM. The vmwVmID mismatch might result in the VM heartbeat alerts not being cleared by some third-party monitoring tools.

  • An ESXi host might get temporarily disconnected when the Fault Tolerance (FT) operation is turned off. The issue occurs when:

    - The FT tie-breaker file is placed on a datastore not being used for primary and/or secondary configuration or disk files.
    - Both the Fault Tolerant primary and secondary hosts lose access to the datastore with the FT tie-breaker file.

  • NFS v4.1 datastores report frequent All Paths Down (APD) messages even after recovering from the APD. The issue occurs due to a problem in the heartbeat calculation after recovering from the APD.

  • Creating an application-consistent quiesced snapshot on VMs with a Windows 2008 or later guest OS might result in Changed Block Tracking (CBT) incorrectly returning all sectors as modified. As a result, the backup time and the amount of data backed up during every incremental backup cycle are equivalent to a full backup, so incremental backups are effectively the same as full backups. No data is lost or corrupted.

  • This patch updates the net-vmxnet3 VIB to resolve an issue where decreased traffic throughput is observed in a Red Hat Enterprise Linux (RHEL) VM that uses an e1000 network adapter in a nested ESXi environment that uses VMXNET3 as the adapter.

    The issue is resolved by disabling Large Receive Offload (LRO) for the VMXNET3 VMkernel driver by default.

    Note: To enable LRO, run the following command and reboot:

    esxcli system module parameters set -m vmxnet3 -p disable_lro=0
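
    To confirm the parameter value after the reboot, you can list the module parameters (a hedged companion check):

    esxcli system module parameters list -m vmxnet3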

  • This patch updates the tools-light VIB to include the VMware Tools version 10.0.9. Refer to the VMware Tools 10.0.9 Release Notes to see the issues resolved in this release.

    NOTE: Attempts to take a quiesced snapshot in a Windows 2003 Guest OS using the VMware Tools VSS provider might fail with an error similar to the following:

    nnnn-nn-nnT03:05:22.371Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    nnnn-nn-nnT03:05:22.371Z| vmx| I120: Msg_Post: Warning
    nnnn-nn-nnT03:05:22.371Z| vmx| I120: [msg.snapshot.quiesce.rpc_timeout] A timeout occurred while communicating with VMware Tools in the virtual machine.
    nnnn-nn-nnT03:05:22.371Z| vmx| I120: ----------------------------------------
    nnnn-nn-nnT03:05:22.374Z| vmx| I120: ToolsBackup: changing quiesce state: IDLE -> DONE
    nnnn-nn-nnT03:05:22.374Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Done with snapshot 'afea': 0
    nnnn-nn-nnT03:05:22.374Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).
    nnnn-nn-nnT03:05:33.304Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    nnnn-nn-nnT03:05:40.550Z| vmx| I120: Tools: Tools heartbeat timeout.
    nnnn-nn-nnT03:05:43.300Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox-dnd timed out.
    nnnn-nn-nnT03:05:48.306Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.


    This issue is observed only with Windows 2003 Guest OS.

    Workaround: Use an older version of VMware Tools. You can uninstall VMware Tools 10.0.9 and install VMware Tools 10.0.8.

  • This patch updates the esx-ui VIB to introduce a new version of VMware Host Client. For more information about VMware Host Client, see VMware Host Client documentation.

    The following Host Client binaries are attached in the attachment section of this KB:

    • esxui-offline-bundle-6.x-3959074.zip - VMware Host Client offline bundle for ESXi 6.0 (the offline bundle can be used with VMware Update Manager to install the VIB on multiple hosts).
    • esxui-signed-3959074.vib - VMware Host Client standalone VIB (the standalone VIB can be installed directly on an ESXi 6.0 host).

  • This patch updates the misc-drivers VIB.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated using the image profile by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the VMware vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIB by using the esxcli software vib command. Hedged examples of both methods are sketched below.
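
For illustration, the image profile method and the VIB method might look like the following; the depot path and bundle file name are assumptions based on the patch release name, and a reboot is required after the update:

    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201608001.zip -p ESXi-6.0.0-20160804001-standard

    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi600-201608001.zip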

Attachments

esxui-offline-bundle-6.x-3959074.zip
esxui-signed-3959074.vib