VMware ESXi 6.0, Patch ESXi-6.0.0-20150902001-standard (2124725)

Profile Name: ESXi-6.0.0-20150902001-standard
Build: For build information, see KB 2124715.
Vendor: VMware, Inc.
Release Date: Sep 10, 2015
Acceptance Level:
Affected Hardware:
Affected Software:
Affected VIBs
  • VMware:esx-base:6.0.0-1.17.3029758
  • VMware:tools-light:6.0.0-1.17.3029758
  • VMware:lsu-lsi-mpt2sas-plugin:1.0.0-4vmw.600.1.17.3029758
  • VMware:misc-drivers:6.0.0-1.17.3029758
  • VMware:xhci-xhci:1.0-2vmw.600.1.17.3029758
  • VMware:sata-ahci:3.0-22vmw.600.1.17.3029758
  • VMware:lsi-msgpt3:
  • VMware:lsi-mr3:6.605.08.00-7vmw.600.1.17.3029758
  • VMware:vsanhealth:6.0.0-3000000.
  • VMware:nvme:1.0e.0.35-1vmw.600.1.17.3029758
PRs Fixed
1370778, 1374957, 1378321, 1397891, 1398409, 1412807, 1429541, 1431568, 1433014, 1438192, 1442807, 1443749, 1445823, 1446166, 1446597, 1447259, 1447333, 1453060, 1453162, 1454166, 1457323, 1459527, 1460217, 1460630, 1464230, 1466235, 1468905, 1474510, 1475186, 1481577, 1488335, 1491513, 1419826, 1473444, 1443078, 1440001
Related CVE numbers


Summaries and Symptoms

This patch resolves the following issues:

  • On HP systems with ESXi 6.0, you might see excessive logging of VmkAccess messages in vmkernel.log for the following system commands that are executed during runtime:

    • esxcfg-scsidevs
    • localcli storage core path list
    • localcli storage core device list

    Excessive log messages similar to the following are logged in the VmkAccess logs:

    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu0:36129)VmkAccess: 637: esxcfg-scsidevs: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)

  • This release introduces a new VMX option, sched.cpu.latencySensitivity.sysContexts, to address issues on vSphere 6.0 where most system contexts are still worldlets. For each virtual machine, the Scheduler uses this option to automatically identify the set of system contexts that might be involved in latency-sensitive workloads, and grants each of those contexts exclusive affinity to one dedicated physical core. The value of sched.cpu.latencySensitivity.sysContexts denotes how many exclusive cores a low-latency VM can get for its system contexts.
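
    A minimal sketch of how this option would appear in a virtual machine's .vmx file (the value "1" is an illustrative assumption; check the vSphere documentation for the supported range on your build):

```
# .vmx fragment (sketch; values are illustrative assumptions)
# Mark the VM as latency sensitive so the scheduler applies exclusive affinity
sched.cpu.latencySensitivity = "high"
# Number of exclusive physical cores granted for the VM's system contexts
sched.cpu.latencySensitivity.sysContexts = "1"
```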

  • A Linux guest OS booted on EFI firmware might fail to respond to the keyboard and mouse input if any motion of the mouse occurs during the short window of time when the OS boots up.

  • While the PCI information for a device is collected for passthrough, error reporting for that device is disabled.

    This issue is resolved in this release by providing the VMkernel boot option pcipDisablePciErrReporting, which enables PCI passthrough devices to report errors. By default the option is set to TRUE, meaning error reporting remains disabled.
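
    As a sketch, the boot option can be inspected and changed with esxcli (the option name is taken from this article; a reboot is required for the change to take effect):

```shell
# Show the current value of the VMkernel boot option
esxcli system settings kernel list -o pcipDisablePciErrReporting

# Set it to FALSE to enable error reporting for PCI passthrough devices
esxcli system settings kernel set -s pcipDisablePciErrReporting -v FALSE
```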

  • The vFlash cache metric counters such as FlashCacheIOPs, FlashCacheLatency, FlashCacheThroughput might not be available when CBT is enabled on a virtual disk. Error messages similar to the following might be logged in the stats.log file:

    xxxx-xx-xxTxx:xx:xx.200Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] CollectVmVdiskStats : Failed to get VFlash Cache stats for vscsi id scsi0:0 for vm 3
    xxxx-xx-xxTxx:xx:xx.189Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] GetVirtualDiskVFCStats: Failed to get VFlash Cache stat values for vmdk scsi0:0. Exception VFlash Cache filename not found!

  • Applying a host profile to a stateless ESXi host with a large number of storage LUNs might make the host take a long time to reboot when you enable stateless caching with esx as the first disk argument. This happens when you apply the host profile manually or during a reboot of the host.

  • After you upgrade from ESXi 5.5 to 6.0, attempts to add a vmnic to a VMware ESXi host connected to a vSphere Distributed Switch (VDS) might fail. The issue occurs when ipfix is enabled and IPv6 is disabled.

    In the /var/log/vmkernel.log file on the affected ESXi host, you see entries similar to:

    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixActivate:xxx: Activation failed for 'DvsPortset-1': Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixDVPortParamWrite:xxx: Configuration failed for switch DvsPortset-1 port xxxxxxxx : Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetDVS: xxxx: failed to init client for data com.vmware.etherswitch.port.ipfix on port xxx
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetPort: xxxx: failed to enable port 0x4000002: Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)NetPort: xxxx: disabled port 0x4000002
    cpu10:xxxxx opID=xxxxxxxx)Uplink: xxxx: vmnic2: Failed to enable the uplink port 0x4000002: Unsupported address family

  • You cannot view the real-time network performance graph of a virtual machine configured with a VMXNET3 adapter in the VMware vSphere Client 6.0, because the option is not available in the Switch to drop-down list.

  • An ESXi 6.0 host might fail with a purple diagnostic screen when multiple vSCSI filters are attached to a VM disk. The purple diagnostic screen or backtrace contains entries similar to the following:

    cpu24:nnnnnn opID=nnnnnnnn)@BlueScreen: #PF Exception 14 in world 103492:hostd-worker IP 0x41802c2c094d addr 0x30
    cpu24:nnnnnn opID=nnnnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 21:06:32:38.296
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_GetFilterPrivateData@vmkernel#nover+0x1 stack: 0xnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_IssueInternalCommand@vmkernel#nover+0xc3 stack: 0xnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FileSyncRead@<None>#<None>+0xb1 stack: 0x0
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_DigestRecompute@<None>#<None>+0xnnn stack: 0xnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FilterDigestRecompute@<None>#<None> +0x36 stack: 0x20
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x322 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0xef stack: 0x41245111df10
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn] User_UWVMKSyscallHandler@<None>#<None>+0x243 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0xnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry@vmkernel#nover+0x64 stack: 0x0

  • When some VIBs are installed on the system, esxupdate constructs a new image in /altbootbank and changes the /altbootbank boot.cfg bootstate to updated. When a live installable VIB is installed, the system saves the configuration change to /altbootbank. A stage operation deletes the contents of /altbootbank unless you perform a remediate operation afterward, so the VIB installation might be lost if you reboot the host after a stage operation without remediating.

  • vSphere APIs for I/O Filtering (VAIO) provide a framework that allows third parties to create software components called I/O filters. The filters can be installed on ESXi hosts and can offer additional data services to virtual machines by processing I/O requests that move between the guest operating system of a virtual machine and virtual disks.

  • Attempts to launch virtual machines with a higher display resolution and a multiple-monitor setup from VDI using PCoIP solutions might fail. The VMs fail on launch and go into a powered-off state. In the /var/log/vmkwarning.log file, you see entries similar to:

    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 12276: vmm0:VDI-STD-005:vmk: vcpu-0:p2m update buffer full
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 652: COW copy failed: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 2626: PhysPageFault failed Failure: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: UserMem: 10592: PF failed to handle a fault on mmInfo at va 0x60ee6000: Failure. Terminating...
    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu7:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu0:xxxxx)WARNING: World: vm xxxxxx: 9604: Panic'd VMM world being reaped, but no core dumped.

  • A new Host Profile plugin is now available to collect the DEBUG log and enable the trace log of the ESXi host profile engine when the host is booted through Active Directory.

  • When multiple virtual machines share storage space, the vSphere Client summary page might display incorrect values for the following:

    • Not-shared Storage in the VM Summary page
    • Provisioned Space in the datastore Summary page
    • Used Space in the VM tab of the host

  • An ESXi host might stop responding and its virtual machines become inaccessible. The ESXi host might also lose its connection to vCenter Server due to a deadlock during storage hiccups on non-ATS VMFS datastores.

  • When you reset a large number of virtual machines with NVIDIA GRID vGPU devices at the same time, some of the virtual machines might fail to reboot. An error message similar to the following might be displayed:

    VMIOP: no graphics device is available for vGPU grid_k100.

  • Enabling vMotion on vmk10 or higher might cause vmk1 to have vMotion enabled on reboot of the ESXi host. This issue can cause excessive traffic over vmk1 and result in network issues.
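
    One way to check and correct the vmknic tagging after a reboot is with the esxcli interface tagging commands (the interface names here are examples):

```shell
# List the services tagged on the interface that should not carry vMotion
esxcli network ip interface tag get -i vmk1

# Remove an unintended vMotion tag from vmk1...
esxcli network ip interface tag remove -i vmk1 -t VMotion
# ...and re-tag the intended interface (vmk10 in this example)
esxcli network ip interface tag add -i vmk10 -t VMotion
```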

  • Virtual machines using VMXNET3 virtual adapter might fail when attempting to boot from iPXE (open source boot firmware).

  • When ServerView CIM Provider and Emulex CIM Provider are installed on the same ESXi host, the Emulex CIM Provider (sfcb-emulex_ucn) might fail to respond resulting in failure to monitor hardware status.

  • Attempts to modify the storage policies of a powered-on virtual machine created from a linked clone might fail in vCenter Server with an error message similar to the following:

    The scheduling parameter change failed.

  • Host Profiles become non-compliant after a simple change to the SNMP syscontact or syslocation. The issue occurs because the SNMP host profile plugin applies only a single value to all hosts attached to the host profile. An error message similar to the following might be displayed:

    SNMP Agent Configuration differs

    This issue is resolved in this release by enabling per-host value settings for certain parameters such as syslocation, syscontact, v3targets, v3users, and engineid.

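
    For example, host-specific SNMP values can be set directly on each host with esxcli (the contact and location strings below are placeholders):

```shell
# Set per-host SNMP system contact and location (placeholder values)
esxcli system snmp set --syscontact "admin@example.com" --syslocation "Rack 12, DC-West"

# Verify the resulting agent configuration
esxcli system snmp get
```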
  • When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.

  • Attempts to migrate a secondary VM enabled with fault tolerance might fail and the VM might become unresponsive under heavy workload.

  • Unnecessary periodic device and file system rescan triggered by vSAN might cause the ESXi host and virtual machines within the environment to randomly stop responding.

  • When you deploy a large number of VMs on an NFS datastore, the VM deployment might fail due to a race condition, with an error message similar to the following:

    Stale NFS file handle

    See Knowledge Base article 2130593 for details.

  • When you clone a virtual machine across different storage containers, the VMId of the source Virtual Volume (VVol) is taken as the initial value for the cloned VVol's VMId.

  • The sfcb-vmware_raw provider might fail because the maximum default plugin resource group memory allocated is not enough.

    Add the UserVars CIMOemPluginsRPMemMax option to set memory limits for sfcbd plugins by using the following command, and restart sfcbd for the new value to take effect:

    esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default XXX --add-type int --add-min 175 --add-max 500

    where XXX is the memory limit you want to allocate. This value must be within the minimum (175) and maximum (500) values.
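
    For instance, assuming a 300 MB limit and the standard /UserVars path under which the -A command above creates the option (both are assumptions to verify on your host):

```shell
# Set the plugin resource-group memory limit to 300
esxcfg-advcfg -s 300 /UserVars/CIMOemPluginsRPMemMax

# Restart the CIM broker so the new limit takes effect
/etc/init.d/sfcbd-watchdog restart
```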


  • When applying host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as management vmknic.

  • A VMFS volume on an ESXi host might remain locked due to failed metadata operations. An error message similar to the following is logged in the vmkernel.log file:

    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.

  • When an All Paths Down (APD) event occurs, LUNs connected to ESXi might remain inaccessible after paths to the LUNs recover. You see the following events in sequence in the /var/log/vmkernel.log:

    1. Device enters APD.
    2. Device exits APD.
    3. Heartbeat recovery and file system operations on the device fail with 'not found' errors.
    4. The APD timeout expires despite the fact that the device exited APD previously.

  • Storage performance might be slow on virtual machines running on VSA-provisioned NFS storage. This is due to delayed acknowledgements from the ESXi host for NFS read responses.

    This issue is resolved in this release by disabling delayed acks for LRO TCP packets.

  • Attempts to install or upgrade to VMware Tools version 9.10.0, available in ESXi 6.0, might fail on the Dutch version of Windows Server 2008 R2. An error message similar to the following is displayed:

    VMware Tools Setup Wizard ended prematurely

  • When you compile and install open-vm-tools 9.10, you might encounter multiple errors because vmxnet.c fails to compile with kernel version 3.3.0 or later. This issue is resolved in open-vm-tools 9.10.2, which you can install with any kernel version.

    To install open-vm-tools 9.10, exclude the vmxnet module from the build, for example by running ./configure with the --without-kernel-modules option.

  • When you perform a quiesced snapshot of a Linux virtual machine, the VM might fail after the snapshot operation. The following error messages are logged in the vmware.log file:

    <YYYY-MM-DD>T<TIME>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: done with snapshot 'smvi_UUID': 0
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: Vix: [18631 guestCommands.c:1926]: Error VIX_E_TOOLS_NOT_RUNNING in MAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest

  • The lsu-lsi-mpt2sas-plugin VIB is updated.

  • The misc-drivers VIB is updated.

  • The xhci-xhci VIB is updated.

  • The sata-ahci VIB is updated.

  • The lsi-msgpt3 VIB is updated to resolve an issue where vSphere might not detect all 18 drives in a system, because the lsi_msgpt3 driver is unable to detect one drive per HBA when there are multiple HBAs in the system.

  • The lsi-mr3 VIB is updated to resolve an issue where the lsi_mr3 driver does not detect SAS drives larger than 2 TB. Error messages similar to the following are logged:

    cpu35:33396)WARNING: ScsiDeviceIO: 8469: Plugin entry point isSSD() failed on device naa.5000c50057b932a7 from plugin NMP: Failure
    cpu35:33396)ScsiDevice: 3025: Failing registration of device 'naa.5000c50057b932a7': failed to get device I/O error attributes.
    cpu35:33396)ScsiEvents: 545: Event Subsystem: Device Events, Destroyed!
    cpu35:33396)WARNING: NMP: nmp_RegisterDevice:673: Registration of NMP device with primary uid 'naa.5000c50057b932a7' failed. I/O error

  • The vsanhealth VIB is updated.

  • The nvme VIB is updated.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated by applying the image profile with the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the VMware vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command.
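
A worked example of the image-profile method (the datastore path and ZIP file name are assumed examples; the profile name matches this patch):

```shell
# Preview the VIBs that would change (assumed bundle location)
esxcli software profile update --dry-run \
    -d /vmfs/volumes/datastore1/ESXi600-201509001.zip \
    -p ESXi-6.0.0-20150902001-standard

# Apply the image profile, then reboot the host
esxcli software profile update \
    -d /vmfs/volumes/datastore1/ESXi600-201509001.zip \
    -p ESXi-6.0.0-20150902001-standard
reboot
```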

Request a Product Feature

To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.

