VMware ESXi 4.1, Patch ESXi410-201101201-SG: Updates ESXi 4.1 Firmware (1027919)

Details

Product Versions: ESXi 4.1
Build: For build information, see KB 1029354.
Patch Classification: Security
Severity: Security
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
PRs Fixed: 537454, 403507, 575461, 540127, 566832, 562591, 580267, 505822, 572236, 580556, 558789, 574606, 583300, 572715, 587097, 539300, 562258, 591531, 596380, 587086, 582477, 599505, 619021, 619566, 574292, 562329, 568277, 583324, 605032, 581798, 607208, 635872, 637551, 560056, 563954, 544667, 598481, 631793, 600299, 583324, and 636892
Affected Hardware: N/A
Affected Software: N/A
Related CVE numbers: CVE-2010-0734, CVE-2010-4573, CVE-2010-3864 and CVE-2010-2939


Solution

Summaries and Symptoms

This patch resolves the following security issues:
  • The userworld version of cURL is updated. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-0734 to the issue addressed in this update.

  • The ESXi 4.1 update installer might introduce an SFCB authentication flaw. Under certain conditions, the ESXi 4.1 installer that upgrades an ESXi 3.5 or ESXi 4.0 host to an ESXi 4.1 host incorrectly handles the SFCB authentication mode. As a result, SFCB authentication can allow login with any combination of username and password.
    An ESXi 4.1 host is affected if all of the following conditions apply (see the quick check below):
    • The ESXi 4.1 host was upgraded from an ESXi 3.5 or an ESXi 4.0 host.
    • The SFCB configuration file at /etc/sfcb/sfcb.cfg was modified before the upgrade.
    • The sfcbd daemon is running (sfcbd runs by default).
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-4573 to the issue addressed in this update.
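    To check the last two conditions, you can inspect the SFCB configuration file's modification time and the process list from the host console; a minimal check, assuming Tech Support Mode access and the BusyBox utilities available on ESXi:
    # ls -l /etc/sfcb/sfcb.cfg
    # ps | grep sfcbd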
  • The ESXi userworld OpenSSL library is updated to version 0.9.8p. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-3864 and CVE-2010-2939 to the issues addressed in this update.

This patch resolves the following issues:

  • In the esxupdate utility, dependency resolution does not match the bulletin building policy. While installing bulletins on ESXi hosts, you do not see any errors or warning messages. The bulletin delivery policy should use the lowest version of the VIB needed, so that you can control which fixes you install and avoid unknown or unexpected updates to ESXi hosts.

  • SCSI warnings similar to the following are written to /var/log/vmkernel:
    Apr 29 04:10:55 localhost vmkernel: 0:00:01:08.161 cpu0:4096)WARNING: ScsiHost: 797: SCSI command failed on handle 1072: Not supported.
    You can ignore these messages. They appear because certain SCSI commands are not supported by the storage array. With this patch, such warning messages are suppressed in /var/log/vmkwarning to reduce support calls.

  • ESXi hosts cannot revert virtual machines to an earlier snapshot after you upgrade from ESXi 3.5 Update 4 to ESXi 4.1 Update 1. The following message might be displayed in vCenter Server: The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features. This issue might occur when you create virtual machines on ESX 3.0 hosts, perform vMotion and suspend virtual machines on ESXi 3.5 hosts, and resume them on ESXi 4.x hosts. With this patch, you can revert to snapshots created on ESXi 3.5 hosts, and resume the virtual machines on ESXi 4.x hosts.

  • Target information for LUNs is sometimes not displayed in the vCenter Server UI. To view this information in the Configuration tab, perform the following steps:
    1. Click Storage Adapters under Hardware.
    2. Click iSCSI Host Bus Adapter in the Storage Adapters pane.
    3. Click Paths in the Details pane.
    In releases earlier than ESXi 4.1 Update 1, some iSCSI LUNs do not show the target information. The issue is resolved by applying this patch.

  • An error message similar to the following is displayed in /var/log/messages of ESXi: CPU10:4118 - intr vector: 290:out of interrupt vectors. Before this fix, bnx2 devices in MSI-X mode with a jumbo frame configuration support only 6 ports. The issue is resolved by applying this patch. With this patch, the bnx2 driver allocates only one RX queue in MSI-X mode, supports 16 ports, and saves memory resources.

  • When you create a virtual disk (.vmdk file) with a large size, for example, more than 1TB, on NFS storage, the creation process might fail with an error: A general system error occurred: Failed to create disk: Error creating disk. This issue occurs when the NFS client does not wait long enough for the NFS storage array to initialize the virtual disk after the RPC parameter of the NFS client times out. By default, the timeout value is 10 seconds. This fix provides a configuration option to tune the RPC timeout parameter by using the esxcfg-advcfg -s <Timeout> /NFS/SetAttrRPCTimeout command.
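    For example, to raise the timeout to 30 seconds (an illustrative value, not a recommendation from this patch) and verify the setting, run the following on the host:
    # esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout
    # esxcfg-advcfg -g /NFS/SetAttrRPCTimeout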

  • ESXi hosts might log messages similar to the following in the VMkernel log files for LUNs not mapped to ESXi hosts: 0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported. Such messages are logged when ESXi hosts start, when you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESXi hosts boot. With this patch, the messages are no longer logged.

  • The VMW_PSP_RR policy is set as the default path selection policy for NetApp storage arrays that support SATP_ALUA. You can set this policy by using vCenter Server or through esxcli. For information on setting the policy, see the Deployment Considerations section.

  • The VMW_PSP_RR policy is set as the default path selection policy for IBM 2810XIV storage arrays. You can set this policy by using vCenter Server or through esxcli. For information on setting the policy, see the Deployment Considerations section.

  • Software running on guest operating systems might use CPUID information to determine characteristics of the underlying (virtual or physical) CPU hardware. In some instances, CPUID information returned by virtual hardware differs from that of physical hardware. Based on these differences, certain components of guest software might malfunction. The fix in this patch causes certain CPUID responses to more closely match what physical hardware would return.

  • ESXi hosts might fail with a NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from iSCSI storage array through vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if the tcp.window.size parameter in /etc/vmware/vmkiscsid/iscsid.conf is modified manually. This patch resolves the issue, and also logs warning messages in /var/log/messages for ESXi, if the tcp.window.size parameter is modified to a value lower than its default.
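    To see whether this parameter has been changed on a host, you can inspect the iSCSI daemon configuration; a minimal check, assuming Tech Support Mode access:
    # grep tcp.window.size /etc/vmware/vmkiscsid/iscsid.conf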

  • ESXi might fail on some HP systems, such as the DL 980 G7 with HP NC522SFP Dual Port 10GbE Server Adapters, with a message similar to loading 32.networking-drivers. This issue occurs when ESXi loads the NetXen driver, usually during boot up of ESXi hosts or installation of network drivers, and depends on the HP system configuration.

  • During a storage rescan operation, some virtual machines stop responding when any LUN on the host is in an all-paths-down (APD) state. For more information, see KB 1016626. The workaround in that KB is to manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before issuing the rescan, and then reset it to 0 after the rescan operation completes. The issue is resolved in this patch: you no longer need to apply the workaround of setting and resetting the advanced configuration option around the rescan operation. Virtual machines on non-APD volumes no longer fail during a rescan operation, even if some LUNs are in an all-paths-down state.
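    For reference, the pre-patch workaround can be applied and reverted with esxcfg-advcfg; these steps are unnecessary after you apply this patch:
    # esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
    (perform the storage rescan)
    # esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD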

  • If Tboot fails to boot an ESXi host in secure mode, an error similar to the following is displayed:
    Intel TXT boot failed on a previous boot attempt. TXT.ERRORCODE:<error code>. <description>

  • Cancelling a storage vMotion task while relocating a powered-on virtual machine that has multiple disks on the same datastore to a different datastore on the same host might cause the ESXi 4.1 host to fail with the following error: Exception: NOT_IMPLEMENTED bora/lib/pollDefault/pollDefault.c:2059.

  • For blade servers running ESXi, vCenter Server incorrectly reports the service tag of the blade chassis instead of that of the blade. On a Dell or IBM blade server managed by vCenter Server, the service tag number is listed in the System section of the Configuration tab, under Processors. This issue occurs due to an incorrect value for the SerialNumber property of the fixed CIM instance OMC_Chassis. The issue is resolved by applying this patch.

  • Warning messages similar to the following might appear in /var/log/messages when an ESXi host starts:
    WARNING: AcpiShared: 194 SSDTd+: Table length 11108 > 4096
    This warning is generated when an ESXi host reads an ACPI table from the BIOS and the table size is more than 4096 bytes. The warning is harmless and you can ignore it. This fix downgrades the warning to a log message.

  • When using a NetXen 1G NX3031 device or multiple 10G NX2031 devices, you might see an error message similar to the following written to the logs of an ESXi 4.1 host after you upgrade from ESXi 4.0: Out of Interrupt vectors. On ESXi hosts where NetXen 1G and NX2031 10G devices do not support NetQueue, the host might run out of MSI-X interrupt vectors. This issue can render ESXi hosts unbootable or make other devices (such as storage devices) inaccessible. The issue is resolved by applying this patch.

  • Powering on a virtual machine on an ESXi 4.1 host fails and logs an Insufficient COS swap to power on error message in /var/log/vmware/hostd.log even though free space is available. After applying this patch, you can power on such virtual machines.

  • In previous patches, Intel e1000e 1.1.2-NAPI driver was not bundled with ESXi but provided separately for download. In this patch, e1000e 1.1.2-NAPI driver is bundled with ESXi.

  • After a virtual machine running with a memory reservation is moved to a different datastore by using storage vMotion, the virtual machine has a swap file equal in size to its configured memory. Messages similar to the following might be logged in the vmware.log file of the virtual machine:
    May 25 16:42:38.756: vmx| FSR: Decreasing CPU reservation by 750 MHz, due to atomic CPU reservation transfer of that amount. New reservation is 0 MHz.FSR: Decreasing memory reservation by 20480 MB, due to atomic memory reservation transfer of that amount. New reservation is 0 pages. CreateVM: Swap: generating normal swap file name.
    When ESXi hosts perform storage vMotion, the swap file size of virtual machines increases to memsize. After applying this patch, the swap file size remains the same after storage vMotion.

  • A SCSI WRITE_SAME CDB issued from a guest operating system fails even though the storage array supports the CDB. Applications running on guest operating systems on an ESXi 4.1 host fail after displaying error messages. This issue occurs only on ESXi 4.1 when applications use the SCSI WRITE_SAME CDB; performance is degraded when applications use an alternate write command. The issue is resolved by applying this patch.

  • When DOS-based client software (for example, Altiris Deployment Solution) uses PXE to boot a DOS image, the PXE boot sequence might fail while bootstrapping the image from the server, and the boot loader might display the Status: 0xc0000001 error. The issue is resolved by applying this patch.

  • When you try to hot-add memory or CPU to a virtual machine whose reserved memory or CPU is more than half the available physical memory or CPU of the host machine, the operation might fail. After applying this fix, hot-add might still fail when the available physical memory on an ESXi host is less than twice the overhead memory of the virtual machine.

    • To view the available physical memory or CPU in vSphere Client, select an ESXi host and click the Resource Allocation link. The available physical memory is displayed under Memory > Available Capacity. The available CPU is displayed under CPU > Available Capacity.
    • To view the overhead memory of a virtual machine, select the virtual machine and click the Resource Allocation link. The overhead memory is displayed under Memory > Overhead.

  • The stacked per-virtual-machine performance chart for networking displays incorrect data. You can access the chart from Chart Options in the Advanced settings on the Performance tab. The network transmit and receive statistics of a virtual machine connected to a Distributed Virtual Switch (DVS) are interchanged and therefore displayed incorrectly. The fix in this patch ensures that the host agent on ESXi hosts collects the correct statistics and passes them to the performance charts UI. This fix also corrects the receive, transmit, and usage network statistics at the host level; before this fix, the values reported for each of these statistics were zero.

  • Data loss might occur on ESXi hosts using LSI SAS HBAs connected to SATA disks. This issue occurs when the maximum I/O size is set to more than 64KB in the mptsas driver and LSI SAS HBAs are connected to SATA disks. The issue is resolved by applying this patch.

  • Windows guest operating systems installed with VMware Windows XP display driver model (XPDM) driver might fail with a vmx_fb.dll error and display a blue screen. The issue is resolved by applying this patch.

  • In rare cases, the system clock on ESXi hosts computes the incorrect time. The issue is resolved by applying this patch.

  • The minimum, default, and maximum recommended memory sizes in the virtual machine default settings for RHEL 32-bit and 64-bit guest operating systems are updated as per the latest RHEL 6 operating system specifications at http://www.redhat.com/.

  • The vSphere Client displays incorrect BIOS version and release date on the Processors page in the Configuration tab. The issue is resolved by applying this patch.

  • Powering on and powering off a single virtual machine, or running I/O on the virtual machine, causes a consistent decrease in the available memory, and the VMkernel log contains memory allocation error messages. The issue is resolved by applying this patch.

  • In this patch, the Windows Display Driver Model (WDDM) driver is updated to fix some infrequent issues where a Windows virtual machine fails and displays a blue screen.

  • A warning message similar to the following is logged in the VMkernel log:
    x:x:x.x: Cannot change ownership to PASSTHRU (non-ACS capable switch in hierarchy) where x:x:x.x is the PCI device address
    This warning message is logged because certain devices cannot perform pass-through when you directly assign a device to a virtual machine. Access Control Services (ACS) was introduced by the PCI SIG to address potential data corruption with direct assignment of devices. In this release, pass-through of devices that are behind PCI Express (PCIe) switches and lack ACS capability is not allowed.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

To set VMW_PSP_RR as the path selection policy for NetApp storage arrays that support SATP_ALUA, through vCenter Server:
  1. Click the Configuration tab.
  2. In the left panel under Hardware, select Storage Adapters.
  3. On the right panel, select the vmhba that connects to the NetApp LUNs.
  4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
  5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.
To set this policy through esxcli, run the following commands from a remote host set up for vSphere remote management:
# esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="NETAPP" --description="NetApp arrays with ALUA support"
# esxcli <conn_options> corestorage claimrule load
# esxcli <conn_options> corestorage claimrule run
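To confirm that the new rule is present, you can list the SATP rules; a minimal check, assuming the vSphere 4.1 esxcli namespace:
# esxcli <conn_options> nmp satp listrules --satp="VMW_SATP_ALUA"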

To set VMW_PSP_RR as the path selection policy for IBM 2810XIV storage arrays, through vCenter Server:
  1. Click the Configuration tab.
  2. In the left panel under Hardware, select Storage Adapters.
  3. On the right panel, select the vmhba that connects to the IBM LUNs.
  4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
  5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.

To set this policy through esxcli, run the following commands from a remote host set up for vSphere remote management:
# esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays with ALUA support"
# esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_DEFAULT_AA" --psp="VMW_PSP_RR" --claim-option="tpgs_off" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays without ALUA support"
# esxcli <conn_options> corestorage claimrule load
# esxcli <conn_options> corestorage claimrule run
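
After the claim rules are loaded and run, you can verify the path selection policy that is reported for each device; a minimal check, assuming the vSphere 4.1 esxcli namespace:
# esxcli <conn_options> nmp device list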

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware vCenter Update Manager. For details, see the VMware vCenter Update Manager Administration Guide.

Download the patch zip file from http://support.vmware.com/selfsupport/download/ and install the bulletin using the vihostupdate command through the vSphere CLI. For more information, see the vSphere Command-Line Interface Installation and Scripting Guide and the vSphere Upgrade Guide.
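
For example, a vihostupdate session from the vSphere CLI might look like the following; the host name and bundle file name are placeholders, and the bulletin ID is taken from this article:
# vihostupdate --server <esxi_host> --query
# vihostupdate --server <esxi_host> --install --bundle <patch_bundle>.zip --bulletin ESXi410-201101201-SG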
