
VMware ESXi 6.0, Patch ESXi-6.0.0-20160104001-standard (2135120)


Details

Profile Name: ESXi-6.0.0-20160104001-standard
Build: For build information, see KB 2135114.
Vendor: VMware, Inc.
Release Date: January 07, 2016
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware:esx-base:6.0.0-1.26.3380124
  • VMware:tools-light:6.0.0-1.26.3380124
  • VMware:ehci-ehci-hcd:1.0-3vmw.600.1.26.3380124
  • VMware:misc-drivers:6.0.0-1.26.3380124
  • VMware:xhci-xhci:1.0-3vmw.600.1.26.3380124
  • VMware:net-tg3:3.131d.v60.4-2vmw.600.1.26.3380124
  • VMware:net-e1000e:3.2.2.1-1vmw.600.1.26.3380124
PRs Fixed: 1415954, 1432369, 1442513, 1443203, 1464133, 1466860, 1478381, 1482287, 1492021, 1492316, 1492988, 1493131, 1493547, 1495939, 1498069, 1498306, 1501929, 1505266, 1507055, 1509286, 1513846, 1519896, 1523756, 1528371, 1533994, 1534378, 1538038, 1543130
Related CVE numbers: N/A

Solution

Summaries and Symptoms

This patch resolves the following issues:

  • An ESXi host might take a long time to boot and fail to load the VMW_SATP_ALUA Storage Array Type Plug-In (SATP) module because of stale entries in the esx.conf file for LUNs that have gone into a Permanent Device Loss (PDL) state.
  • Virtual machines protected by vShield App Firewall appliances intermittently lose network connectivity on ESXi 6.0 hosts. You will see log messages similar to the following in the vShield App Firewall logs:

    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: tx hang
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: resetting
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: intr type 3, mode 0, 3 vectors allocated
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: RSS indirection table :

    See Knowledge base article 2128069 for further details.

  • When you attempt to assign permissions on an ESXi host, only the Global Security Groups are displayed; the Universal Security Groups do not appear in the Select Users and Groups list.

  • After you upgrade an ESXi host with 1.5 TB of memory from 5.1 to 6.0 on an HP server with an AMD processor, the host might unexpectedly stop responding or reboot. You will also see Uncorrectable Machine Check Exceptions (UMCEs) similar to the following in the Integrated Management Log (IML) file:
    Critical","CPU","##/##/2014 15:41","##/##/2014 15:41","1","Uncorrectable Machine Check Exception (Board 0, Processor 3, APIC ID 0xnnnnnnnn, Bank 0xnnnnnnnn, Status 0xnnnnnnnn'nnnnnnnn, Address 0xnnnnnnnn'nnnnnnnn, Misc 0xnnnnnnnn'nnnnnnnn)",

    Mode of failure: the host unexpectedly reboots, and the IML shows that a UMCE occurred.
  • When you consolidate a virtual machine snapshot hosted on an ESXi 6.0 host, the VM stops responding and creates a vmx-zdump file in the VM's working directory. The vmkernel.log file, located at /var/log/vmkernel.log, displays messages similar to the following:
    cpu17:xxxxxxxx)SVM: xxxx: Error destroying device xxxxxxxx-xxxxxxxx-svmmirror Busy)
    cpu2:xxxxx)FSS: xxxx: No FS driver claimed device 'xxxxxxxx-xxxxxxxx-svmmirror': No filesystem on the device
    cpu2:xxxxx)FSS: xxxx: No FS driver claimed device 'control': No filesystem on the device


    See Knowledge base article 2135631 for further details.
  • After you rename a powered on virtual machine, if you run the esxcli vm process list command to get the list of running VMs from the host, the list might display the old VM name.
  • Attempts to switch mode from emulation to Universal Pass-through (UPT) might fail in ESXi 6.0. The vCenter Server displays the following message to indicate that Direct Path I/O is disabled:
    The virtual machine does not have full memory reservation which is required to activate DirectPath I/O

    You can also see the following message logged in the vmkernel.log file:
    YYYY-MM-DDTHH:MM:SS.820Z cpu11:1000046564)Vmxnet3: VMKDevCanEnterPT:3193: port 0x3000007: memory reservation 0 smaller than guest memory size 262144
  • When you attempt to upgrade VMware Tools for a virtual machine running on VMware ESXi 6.0, the auto-upgrade fails with an error message similar to the following:
    vix error code = 21009

    The issue occurs if the following guest files exist on the virtual machine:

    Windows VM:
    C:\Windows\Temp\vmware-SYSTEM\VMwareToolsUpgrader.exe

    Red Hat Enterprise Linux VM:
    /tmp/vmware-root

  • The vm-support command takes a long time to collect logs because it gathers unnecessary .dvsData files from every .dvsData folder on every datastore it can access.
  • An orphaned vSAN object might be retained after an All Paths Down (APD) disk error is observed on a solid-state disk (SSD).
  • When an ESXi host loses connectivity to the remote syslog server, the GeneralHostWarningEvent and AlarmStatusChangedEvent events log alert messages indefinitely. As a result, the vpx_event and vpx_event_arg tables fill up the vCenter Server database, causing high vCenter latency and eventually causing vCenter Server to stop responding.
  • After you configure four Network Interface Controllers (NICs) for the vNetwork Distributed Virtual Switch (dvSwitch), hostd repeatedly stops responding with a Signal 11 error.
  • Attempts to remove a disk from a vSAN diskgroup might result in a purple diagnostic screen because diskgroup validation fails due to invalid metadata in the SSD/MD disk partition. Error messages similar to the following are logged in the vmkernel.log file:

    YYYY-MM-DDTHH:MM:SS.772Z cpu13:xxxxx)PLOG: PLOGRelogCleanup:445: RELOG complete uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx lsn xxxxxxx numRelogged 0
    YYYY-MM-DDTHH:MM:SS.772Z cpu33:xxxxx opID=xxxxxxxx)World: 9728: PRDA 0xnnnnnnnnnnnn ss 0x0 ds 0x10b es 0x10b fs 0x0 gs 0x13b
    YYYY-MM-DDTHH:MM:SS.772Z cpu33:xxxxx opID=xxxxxxxx)World: 9730: TR 0xnnnn GDT 0xnnnnnnnnnnnn (0xnnnn) IDT 0xnnnnnnnnnnnn (0xfff)
    .
    .
    .

  • Attempts to power on VMs with a higher display resolution or a multiple-monitor setup might cause several warning messages similar to the following to be written to the vmkernel.log file and might cause the host to fail due to the excessive logging load:

    XXXX-XX-XXTXX:XX:XX.XXXZ cpuXX:XXXXXXX)WARNING: VmMemPf: vm XXXXXXX: 654: COW copy failed: pgNum=0xXXXXX, mpn=0x3fffffffff
    XXXX-XX-XXTXX:XX:XX.XXXZ cpuXX:XXXXXXX)WARNING: VmMemPf: vm XXXXXXX: 654: COW copy failed: pgNum=0xXXXXX, mpn=0x3fffffffff
    XXXX-XX-XXTXX:XX:XX.XXXZ cpuXX:XXXXXXX)WARNING: VmMemPf: vm XXXXXXX: 654: COW copy failed: pgNum=0xXXXXX, mpn=0x3fffffffff

  • Attempts to join an ESXi host with a large number of CPUs to an Active Directory domain might fail. See Knowledge base article 2130395 for further details.
  • After an ESXi host failure, when vSphere HA attempts to restart the affected VMs on other hosts, some of the VMs might stop responding while booting up.
  • Attempts to perform a Storage vMotion or cold migration of a VM with a name beginning with "core" might fail. The vSphere Web Client displays error messages similar to the following:
    Relocate virtual machine core01 Cannot complete the operation because the file or folder
    ds:///vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/core01/core01-xxxxxxxx.hlog already exists
    Checking destination files for conflicts Administrator vCenter_Server_name


    See Knowledge base article 2130819 for further details.
  • During I/O on a component, new space might be added to thin component files. For component files on Virsto volumes, the disk capacity is not recalculated, and the old value is published until a new component is created.
  • The esxtop utility reports incorrect statistics on VAAI supported LUNs for DAVG/cmd (average device latency per command) and KAVG/cmd (average ESXi VMkernel latency per command) due to an incorrect calculation.
  • The vmx process of a Windows 10 VM fails, and an error message similar to the following is displayed in the vmware.log file:

    NOT_REACHED bora/devices/ahci/ahci_user.c:1530
  • Virtual machines within a vSAN cluster might underperform due to a solid-state disk (SSD) log congestion problem. The VMkernel System Information (VSI) nodes for the SSD indicate that the log space consumed is very high and does not decrease.
  • VMX might fail when running some 3D applications. Error messages similar to the following are logged in the vmware.log file:
    xxxx-xx-xxTxx:xx:xx.xxxZ| svga| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (svga)
    xxxx-xx-xxTxx:xx:xx.xxxZ| svga| I120+ NOT_REACHED bora/mks/hostops/shim/shimOps.c:674

  • In ESXi 6.0, VMDKs of the eager-zeroed type are expanded in the eager-zeroed format, which takes a long time and might result in the VM becoming inaccessible.
  • Virtual machines that run SAP might randomly fail, producing a vmx-zdump file, with an error message similar to the following in the vmware.log file, when too many VMware Tools statistics commands are executed inside the VM:
    CoreDump error line 2160, error Cannot allocate memory.

    See Knowledge base article 2137310 for further details.
  • After you apply the ESXi patch release ESXi600-201510001, you might encounter sys-alert messages similar to the following due to an invalid vector, which might generate several events in the vCenter Server:
    cpu48:88874)ALERT: IntrCookie: 3411: Interrupt received on invalid vector (cpu 48, vector 73); ignoring it.
  • In ESXi 6.0, when you run virtual machine backups that use Changed Block Tracking (CBT), the CBT API call QueryDiskChangedAreas() might return incorrect changed sectors, resulting in inconsistent incremental virtual machine backups. The issue occurs because CBT fails to track changed blocks on VMs that have I/O during snapshot consolidation. See Knowledge base article 2136854 for further details.
  • vCenter Server prevents the creation of a Virtual Flash File System (VFFS) datastore with an error message similar to the following:
    License not available to perform the operation. Feature 'vSphere Flash Read Cache' is not licensed with this edition

    The issue occurs due to an incorrect check of the vSphere Flash Read Cache (VFRC) permissions.
  • This patch updates the tools-light VIB to include the VMware Tools version 10.0.0. Refer to the VMware Tools 10.0.0 Release Notes to see the issues resolved in this release.
  • This patch updates the ehci-ehci-hcd, misc-drivers, and xhci-xhci VIBs.
  • This patch updates the net-tg3 VIB.
  • This patch updates the net-e1000e VIB.
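
For the VMware Tools auto-upgrade failure above (vix error code = 21009), one possible cleanup is to remove the leftover upgrader files before retrying the upgrade. The following is a minimal sketch for a Red Hat Enterprise Linux guest, assuming the /tmp/vmware-root path from the symptom description and that deleting it is acceptable in your environment; in a Windows guest, you would analogously remove the stale VMwareToolsUpgrader.exe file.

```shell
#!/bin/sh
# Sketch: remove the stale VMware Tools upgrader directory left behind by a
# failed auto-upgrade in a Linux guest (path taken from the symptom above).
STALE_DIR=/tmp/vmware-root

if [ -d "$STALE_DIR" ]; then
    rm -rf "$STALE_DIR"
    echo "Removed stale upgrader directory: $STALE_DIR"
else
    echo "No stale upgrader directory found."
fi
```

After the stale files are removed, retry the VMware Tools auto-upgrade.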

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

You can update an ESXi system with the image profile by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples documentation and the VMware vSphere Upgrade Guide. You can also update ESXi hosts by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command.
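
As an illustration only, the image-profile method might look like the following in an ESXi Shell or SSH session on the host. The datastore path and ZIP file name below are placeholder assumptions to be replaced with your actual download location; the profile name is the one listed in the Details section.

```shell
# Sketch: apply this patch from an offline bundle on an ESXi 6.0 host.
# The depot path and ZIP name are placeholders, not the real bundle name.
DEPOT=/vmfs/volumes/datastore1/patches/ESXi600-patch-bundle.zip
PROFILE=ESXi-6.0.0-20160104001-standard

# Optional pre-check: confirm which esx-base VIB is currently installed.
esxcli software vib list | grep esx-base

# Enter maintenance mode, apply the image profile, then reboot the host.
esxcli system maintenanceMode set --enable true
esxcli software profile update -d "$DEPOT" -p "$PROFILE"
reboot
```

These commands run only on an ESXi host. For the per-VIB alternative mentioned above, esxcli software vib update -d "$DEPOT" installs individual VIBs from the same offline bundle.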
