VMware ESX 3.5, Patch ESX350-200901401-SG: Updates VMkernel, VMX, and hostd (1006651)

Details

Release Date: Jan. 30, 2009

Download Size: 125 MB
Download Filename: ESX350-200901401-SG.zip
md5sum: 2769ac30078656b01ca1e2fdfa3230e9
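Before installing, you can confirm that the downloaded bundle matches the md5sum above. A minimal sketch for the service console follows; the `verify_md5` helper is illustrative, not a VMware tool.

```shell
# Verify the patch bundle against the published md5sum.
# verify_md5 is an illustrative helper, not a VMware utility.
verify_md5() {
  actual=$(md5sum "$1" 2>/dev/null | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "checksum OK"
  else
    echo "checksum MISMATCH"
  fi
}

verify_md5 ESX350-200901401-SG.zip 2769ac30078656b01ca1e2fdfa3230e9
```

If the checksum does not match, download the bundle again before proceeding.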


Product Versions: ESX 3.5
Patch Classification: Security
Supersedes:
  ESX350-200712409-BG
  ESX350-200712410-BG
  ESX350-200802401-BG
  ESX350-200802411-BG
  ESX350-200802412-BG
  ESX350-200804401-BG
  ESX350-200804402-BG
  ESX350-200804403-BG
  ESX350-200806401-BG
  ESX350-200806405-BG
  ESX350-200808203-UG
  ESX350-200806812-BG
Requires:
  ESX350-200810201-UG
  ESX350-200811401-SG
  ESX350-200901402-SG
Virtual Machine Migration or Shutdown Required: Yes
Host Reboot Required: Yes
PRs Fixed: 244262, 296430, 314375, 316863, 322967, 324029, 340775, 342355, 345656, 356915
Affected Hardware: N/A
Affected Software: N/A
RPMs Included:
  VMware-esx-backuptools
  VMware-esx-vmkernel
  VMware-esx-vmx
  VMware-hostd-esx
Build: 143198
Related CVE numbers: CVE-2008-4914

Also see KB 1001179.



Solution

Summaries and Symptoms

This patch fixes the following security issue:
  • If the VMDK delta disk of a snapshot is corrupt, an ESX host might crash when the corrupted disk is loaded. The handling of corrupt VMDK delta disks is now more robust.

    VMDK delta files exist for virtual machines with one or more snapshots. This change ensures that a corrupt VMDK delta file cannot be used to crash ESX hosts.

    The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2008-4914 to this issue.
 
This patch also fixes the following issues:
  • In some cases, a guest operating system using the E1000 virtual NIC sends packets that contain an unusually large number of scatter-gather elements, which causes the VMkernel to panic. As a result, the ESX host crashes with a pink screen (PSOD). This change allows the VMkernel to drop these packets instead of crashing.

  • When you use the VI Client to add or remove an NTP server for an ESX host, the clock settings are not preserved: the timezone setting in /etc/sysconfig/clock changes unexpectedly from UTC=false to UTC=true.
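    For reference, /etc/sysconfig/clock on the service console typically contains entries like the following. This is a representative fragment, not the exact contents of any particular host; the ZONE value is an example.

    ```
    ZONE="America/Los_Angeles"   # example timezone value
    UTC=false                    # without the patch, an NTP change can flip this to UTC=true
    ARC=false
    ```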

  • A virtual machine with a raw disk mapping (RDM) attached and configured for NPIV fails to power on after removing the virtual machine's WWN assignments. Error messages include:

    Failed to open disk scsi0:0: Bad wwnn format!
    Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/<path to .vmdk>' Failed to configure scsi0.

    With this patch, the virtual machine powers on, even if the WWN assignments are removed.

  • On VMware ESX and ESXi 3.5 U3 hosts, I/O to SAN LUNs might fail and the LUN queue might remain blocked indefinitely. For a full description of this issue, see http://kb.vmware.com/kb/1008130.

  • This change fixes problems with network connectivity in which vmnics go offline, or repeatedly cycle between online and offline. The problem affects ESX systems with the following configuration:

    • 64-bit SLES9 SP3 virtual machines with e1000 drivers
    • Broadcom BCM5700 NICs (bnx2)

    VMkernel logs messages similar to the following:

    Jun 27 17:53:53 RHBESX12 vmkernel: 1:01:31:59.547 cpu3:1301)WARNING: LinNet: 4288: Watchdog timeout for device vmnic1
    Jun 27 17:53:53 RHBESX12 vmkernel: 1:01:31:59.647 cpu2:1063)<3>bnx2: vmnic1 NIC Copper Link is Down


  • When the watchdog timer resets a NIC using the tg3 driver, the reset process sometimes fails to complete initialization and the NIC is left in an unresponsive state. When this happens, the following message appears in the VMkernel log file:

    tg3_init_rings failed, err -12 for device vmnicx

    Symptoms seen without this patch: NICs using the tg3 driver randomly become unresponsive. To work around this issue, you need to unload and reload the tg3 driver, or reboot the server.

    Note: This issue might affect other types of NICs, such as those using the e1000 and bnx2 drivers.
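    The unload-and-reload workaround above can be sketched as follows. This is an illustration only: vmkload_mod is the ESX module loader, and the DRY_RUN guard prints the commands instead of executing them, so the sketch is safe to inspect on a non-ESX system. The `reload_tg3` helper name is illustrative.

    ```shell
    # Sketch of the documented workaround: unload and reload the tg3 module.
    # DRY_RUN=1 prints the commands instead of running them.
    reload_tg3() {
      for cmd in "vmkload_mod -u tg3" "vmkload_mod tg3"; do
        if [ -n "$DRY_RUN" ]; then
          echo "$cmd"
        else
          $cmd
        fi
      done
    }

    DRY_RUN=1 reload_tg3
    ```

    Rebooting the server also clears the condition, as noted above.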

  • Test or real failover using Site Recovery Manager (SRM) 1.0 Update 1 might be slow, with guest OS heartbeat timeouts occurring within SRM. This problem occurred because the default value of heartbeatDelayInSecs, found in /etc/vmware/hostd/config.xml, was set to 300 seconds (5 minutes) or 600 seconds (10 minutes). The default value is now 40 seconds, which gives VMware Tools enough time to return a heartbeat without slowing down SRM failover. For a full description of this problem, see http://kb.vmware.com/kb/1008059.
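    The setting lives in /etc/vmware/hostd/config.xml. An illustrative fragment follows; the surrounding element nesting may differ on your host, so check the file itself before editing it.

    ```
    <config>
      <!-- ... other hostd settings ... -->
      <heartbeatDelayInSecs>40</heartbeatDelayInSecs>
    </config>
    ```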

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

See the VMware Update Manager Administration Guide for instructions on using Update Manager to download and install patches to automatically update ESX Server 3.5 hosts.

To update ESX Server 3.5 hosts when not using Update Manager, download the most recent patch bundle from http://support.vmware.com/selfsupport/download/ and install the bundle using esxupdate from the command line of the host. For more information, see the ESX Server 3 Patch Management Guide.
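The manual flow described above amounts to a few steps: verify the download, extract the bundle, and run esxupdate from the service console. The sketch below only prints the planned commands; confirm the exact esxupdate invocation against the ESX Server 3 Patch Management Guide before running it, as the syntax shown is illustrative.

```shell
# Print the manual patch-installation steps for this bundle (sketch only).
bundle=ESX350-200901401-SG.zip
plan=$(cat <<EOF
md5sum $bundle
unzip $bundle -d ${bundle%.zip}
cd ${bundle%.zip}
esxupdate update
EOF
)
echo "$plan"
```

Remember that this patch requires a host reboot and virtual machine migration or shutdown, per the table above.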

