VMware ESX 4.1 Patch ESX410-201010401-SG: Updates vmkernel64, VMX, CIM (1027013)
Release date: November 15, 2010
|Build||For build information, see KB 1027027.|
|Host Reboot Required||Yes|
|Virtual Machine Migration or Shutdown Required||Yes|
|PRs Fixed||600953, 554166, 514442, 583503, 582904, 582445, 586758, 612450, 579077, 601775, 606217, and 581205|
|VIBs Included||vmware-esx-apps, vmware-esx-backuptools, vmware-esx-cim, vmware-esx-esxcli, vmware-esx-iscsi, vmware-esx-lsi, vmware-esx-nmp, vmware-esx-perftools, vmware-esx-scripts, vmware-esx-srvrmgmt, vmware-esx-uwlibs, vmware-esx-vmkctl, vmware-esx-vmkernel64, vmware-esx-vmnixmod, vmware-esx-vmwauth, vmware-esx-vmx, vmware-hostd-esx, kernel, omc, and vmwprovider|
|Related CVE numbers||CVE-2010-0415, CVE-2010-0307, CVE-2010-0291, CVE-2010-0622, CVE-2010-1087, CVE-2010-1437, and CVE-2010-1088|
Summaries and Symptoms
This patch updates the service console kernel to fix multiple security issues. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-0415, CVE-2010-0307, CVE-2010-0291, CVE-2010-0622, CVE-2010-1087, CVE-2010-1437, and CVE-2010-1088 to these issues.
In addition, this patch fixes the following issues:
- When a user who is a member of more than 32 groups attempts to log in to the service console of an ESX host by using KVM or SSH, any one of the following issues might occur:
- ESX host restarts
- ESX host becomes unresponsive
- ESX host displays a purple screen
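Before this patch is applied, one hedged way to check whether an account is exposed to this issue is to count its supplementary groups from any shell. The sketch below uses only standard `id` and `wc`, so it is not ESX-specific; the 32-group threshold is taken from the symptom description above:

```shell
# Count how many groups the current user belongs to.
# The issue described above can occur when this count exceeds 32
# on the ESX 4.1 service console.
group_count=$(id -G | wc -w)
echo "member of ${group_count} groups"
if [ "${group_count}" -gt 32 ]; then
    echo "WARNING: more than 32 groups; see KB 1027013"
fi
```

Run the check as the same user that logs in to the service console, since group membership differs per account.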
- The Health Status tab shows false alerts and zero readings for voltage and temperature sensors when you connect a vSphere Client to the vCenter Server that manages ESX hosts running on Unisys ES7000 Model 7600R Enterprise Server or NEC Express5800/A1040.
- When a storage controller fails, an ESX software iSCSI initiator instance with default settings takes about 45 seconds to detect the problem and inform the ESX storage stack to initiate storage failover. For some array models, such as those that use LSI controllers, this problem results in storage failover taking more than 60 seconds to complete after the I/O is sent from the virtual machines. This can cause I/O errors to be reported by applications and the guest operating system running in the virtual machine. This patch resolves this issue.
After installing this patch, you can configure the Noop Interval and Noop Timeout parameters by using either vCenter Server or the vmkiscsi-tool in the service console. These parameters enable you to reduce the timeout values based on your storage arrays, so that the software iSCSI initiator can detect changes in path state faster and initiate storage failover sooner. The default values are 40 for Noop Interval and 10 for Noop Timeout.
- Virtual machines are suspended and multiple storage alerts are raised on multiple ESX hosts resulting in an all-paths-down state. The VMkernel log file contains the following message:
NMP: nmp_DeviceAttemptFailover: Retry world failover device. "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" - failed to issue command due to Not found (APD), try again...
This issue occurs because ESX does not accept new paths that are exported from EMC Symmetrix Storage after an ESX host boot-up or an all-paths-down state.
- The esxcfg-volume utility might fail to mount VMFS volumes with snapshots, and displays an error message similar to the following:
Error: Unable to resignature this VMFS3 volume due to duplicate extents found
If you dynamically add capacity from the storage device to the VMFS datastore and perform a VMFS rescan operation, the VMFS volumes on ESX hosts might not mount under /vmfs/volumes/ when you use the esxcfg-volume utility. This issue might occur because of dynamic expansion of snapshot volumes on storage that has multiple extents. With this patch, ESX improves the handling of multiple partitions or devices in VMFS volumes.
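For reference, the snapshot volumes on an affected host can be inspected and mounted from the service console. The sketch below shows a typical sequence using the esxcfg-volume options available in ESX 4.x; `datastore1` is a placeholder label, and the commands run only on the ESX host itself:

```shell
# List volumes that ESX detects as snapshots or replicas,
# along with their VMFS UUIDs and labels.
esxcfg-volume -l

# Mount a detected snapshot volume persistently, by label or UUID.
# "datastore1" is a placeholder for your volume's label.
esxcfg-volume -M datastore1
```

Use `esxcfg-volume -m` instead of `-M` if the mount should not persist across reboots.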
- The Network window in the vSphere 4 esxtop tool incorrectly reports a large number of dropped receive-packets (%DRPRX) for virtual machines that are using the E1000 virtual NIC with multicast traffic.
- After an ESX host is upgraded to ESX 4.1 or after ESX 4.1 is installed, you might experience the following symptoms:
- vSphere Client might display incorrect values for the number of processor sockets and cores per socket available in the ESX system. For example, for an ESX system that has 2 processor sockets and 6 cores per socket, the vSphere Client might display 4 processor sockets and 3 cores per socket.
- Some ESX 4.1 systems might use double the number of licenses required.
- Some ESX/ESXi hosts might lose their license.
- A potential race condition between the destroying slowpath agent function and the socket wakeup callback function might cause an ESX host to stop responding and display a purple screen.
- If a VMware Tools version from ESX 3.5 or later is installed on Windows virtual machines that are configured with the automatic VMware Tools upgrade option, the automatic upgrade to the ESX 4.1 VMware Tools on these virtual machines might fail with an error message similar to the following:
Error upgrading VMware Tools.
- If you start a Microsoft Windows Server 2003 32-bit virtual machine with the /3GB switch defined in the boot.ini file on VMware ESX 4.1, you might see the following symptoms:
- Read or Write memory errors occur in the guest operating system.
- A Remote Procedure Call (RPC) error is reported and the virtual machine is frequently forced to reboot.
- A stop code of type 0x000000F4 occurs.
- Microsoft .NET or Java applications might fail with memory errors.
- The Microsoft Windows Event log might contain error messages similar to the following:
Event Type: Error
Event Source: .NET Runtime
Event Category: None
Description: .NET Runtime version 2.0.50727.3615 - Fatal Execution Engine Error (7A0979AE) (80131506)
- For ESX running on BULL servers, vSphere Client displays the Processor and Power sensor names starting at 96 on the Configuration tab, for example, Processor 96 or PowerSupply 97.
With this patch, the Processor and Power sensor names start at Processor 0 and PowerSupply 1.
The required patch bundles and reboot information are listed in the table above.
To configure the Noop Interval and Noop Timeout parameters in vCenter Server:
- Log in to the vCenter Server as administrator by using the vSphere Client.
- Select the Configuration tab.
- Click the Storage Adapters link in the Hardware panel.
- Select the vmhba for the iSCSI software adapter.
- Click the Properties link in the Details panel.
- Click Advanced in the iSCSI Initiator Properties window.
- Configure the required values for the Noop Interval and Noop Timeout parameters.
The default values are 40 for Noop Interval and 10 for Noop Timeout.
To configure the Noop Interval and Noop Timeout parameters by using the vmkiscsi-tool:
- Log in as root in the ESX service console.
- Run the following command to set the Noop Interval:
vmkiscsi-tool -W -a "noop_out_interval=15" vmhba<nn>
- Enter the following command to set the Noop Timeout:
vmkiscsi-tool -W -a "noop_out_timeout=10" vmhba<nn>
- Enter the following command to view the updated values:
vmkiscsi-tool -W -l vmhba<nn>
Patch Download and Installation
See the VMware vCenter Update Manager Administration Guide for instructions on using Update Manager to download and install patches to automatically update ESX 4.1 hosts.
To update ESX 4.1 hosts without using Update Manager, download the patch ZIP file from http://support.vmware.com/selfsupport/download/ and install the bulletin by using esxupdate from the command line of the host. For more information, see the ESX 4 Patch Management Guide.
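The command-line installation described above can be sketched as follows, assuming the downloaded ZIP file has been copied to the host and that you run from the ESX service console as root (the bundle file name matches this patch, and the host must be in maintenance mode with all virtual machines shut down or migrated):

```shell
# Install the patch bundle with esxupdate from the service console.
# A host reboot is required after installation, per the table above.
esxupdate --bundle=ESX410-201010401-SG.zip update

# Verify that the bulletin is installed.
esxupdate query | grep ESX410-201010401-SG
```

See the ESX 4 Patch Management Guide for the full set of esxupdate options and preconditions.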