VMware ESXi, Patch ESXe350-200901401-I-SG: Firmware Update (1006661)

Details

Release Date: Jan. 30, 2009

 

Download Size: 212MB
Download Filename: ESXe350-200901401-O-SG.zip
md5sum: 588dc7bfdee4e4c5ac626906c37fc784

Note: The three ESXi patches for Firmware ("I"), VMware Tools ("T"), and the VI Client ("C") are contained in a single offline ("O") download file.
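
To verify the download before installing, compare the file's md5 checksum with the value listed above, for example on a Linux management workstation:

    md5sum ESXe350-200901401-O-SG.zip
    # expected output:
    # 588dc7bfdee4e4c5ac626906c37fc784  ESXe350-200901401-O-SG.zip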


Product Versions: ESXi 3.5
Build: 143129
Patch Classification: Security
ESX Host Reboot Required: Yes
Maintenance Mode Required (Power Off or Migrate Virtual Machines): Yes
PRs Fixed: 244262, 277100, 290564, 296430, 301443, 314375, 316863, 322967, 324029, 324902, 334941, 340775, 342355, 345656, 350002, 354841, 356915, 357511, 359666
Affected Hardware:
  • E1000 NICs
  • Broadcom HT-1000 SATA controllers
  • SATA CD/DVD drives
  • SanDisk SD-USB cards
  • 64-bit SLES9 SP3 virtual machines with e1000 drivers
  • Broadcom BCM5700 NICs
Affected Software: N/A
Related CVE numbers: CVE-2008-4914

Solution

Summaries and Symptoms

This patch fixes the following security issues:

  • If the VMDK delta disk of a snapshot is corrupt, an ESX host might crash when the corrupted disk is loaded. The handling of corrupt VMDK delta disks is now more robust.

    VMDK delta files exist for virtual machines with one or more snapshots. This change ensures that a corrupt VMDK delta file cannot be used to crash ESX hosts.

    The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2008-4914 to this issue.

  • This update improves the way the vm-support script collects data. VMware Technical Support might ask you to run this script to collect logs and other configuration files from your ESX hosts to assist in troubleshooting problems with your system; a sample invocation appears below.

    For more information, see "Data Security Best Practices: SSL keys for communicating with VirtualCenter and other applications" at http://kb.vmware.com/kb/1008166.
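
    If Technical Support asks for a log bundle, a typical invocation from the host console is simply the following (a sketch; on ESXi the script is run from Tech Support Mode, and the archive name and location vary by release):

        # collect logs and configuration files into a compressed archive
        vm-support
        # the script reports the name of the archive it creates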

This patch also fixes the following issues:

  • In some cases, a guest using the E1000 virtual NIC sends packets that contain an unusually large number of scatter-gather elements, which causes the VMkernel to panic. As a result, the ESX host crashes with a purple screen (PSOD). This change allows the VMkernel to drop these packets instead of crashing.

  • Changes to the iSCSI LUN path policy were not persistent across reboots. This fix makes sure the multipath configurations are refreshed after iSCSI volumes are discovered, so changes persist after rebooting the ESX system. A verification sketch follows this item.

    Symptoms seen without this patch: Changes to pathing or policy preferences on the iSCSI software initiator return to their defaults after rebooting the ESX host. (Unlike iSCSI hardware initiator and Fibre Channel controller settings, which are saved across reboots.)
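
    One way to confirm the fix took effect is to record the path configuration before and after a reboot and compare the two (a sketch; the esxcfg-mpath flag shown is an assumption that can vary by release, and on ESXi the equivalent command is run through the RCLI):

        # record the multipath policy for each LUN, reboot, then compare
        esxcfg-mpath -l > /tmp/mpath-before.txt
        # ...reboot the host...
        esxcfg-mpath -l > /tmp/mpath-after.txt
        diff /tmp/mpath-before.txt /tmp/mpath-after.txt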


  • This fix changes the sata_svw driver to disable DMA and use PIO instead for ATAPI devices.

    Symptoms seen without this patch: Certain ESX 3.5 systems might stall while loading the sata_svw.o module when the system is rebooted. The affected ESX 3.5 systems are configured with:

    • an onboard (built-in) Broadcom HT-1000 SATA controller
    • a SATA CD/DVD drive connected


    If you try to install ESX 3.5 software from a CD-ROM on hosts with this hardware configuration, the ESX installer cannot detect the CD/DVD drive and installation cannot continue.


  • When you use the VI Client to add or remove an NTP server for an ESX host, the clock settings are not preserved: the timezone setting changes unexpectedly from UTC=false to UTC=true in /etc/sysconfig/clock, as illustrated below.
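
    For reference, the affected file looks like the following; the ZONE value here is only an illustration:

        # /etc/sysconfig/clock
        UTC=false          # the unpatched VI Client flips this to UTC=true
        ZONE="US/Pacific"  # example timezone; your value will differ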

  • A virtual machine with a raw disk mapping (RDM) attached and configured for NPIV fails to power on after removing the virtual machine's WWN assignments. Error messages include:

    Failed to open disk scsi0:0: Bad wwnn format!
    Unable to create virtual SCSI device for scsi0:0, '/vmfs/volumes/<path to .vmdk>'
    Failed to configure scsi0.

    With this patch, the virtual machine powers on, even if the WWN assignments are removed.

  • ESXi Embedded failed to install from a recovery CD-ROM on certain SanDisk SD-USB cards. This change updates usb-storage.o, the USB storage driver.


  • SRM creates a large number of transient targets. The iSCSI daemon remembers targets that were discovered earlier (but are no longer present) and issues input/output controls (ioctls) to get the kernel driver to connect to them. Because the targets are no longer present, the ioctls fail. On ESXi, this results in a large number of simultaneous opens of /dev/vmkiscsictl. When the number of simultaneous opens exceeds 32, valid targets are not discovered.

    To fix this issue, the limit on simultaneous opens of /dev/vmkiscsictl is raised to 256. Absent targets are also removed from the iSCSI daemon's target list.

    Symptoms seen without this patch: Snapshot LUNs on an array are not found during SRM recovery.


  • VMware ESX and ESXi 3.5 U3 hosts experience I/O failures on SAN LUNs, and the LUN queue is blocked indefinitely. For a full description of this issue, see http://kb.vmware.com/kb/1008130.


  • Problems were reported with network connectivity and with vmnics going offline, or repeatedly cycling offline and online. This problem affected ESX systems with the following configuration:

    • 64-bit SLES9 SP3 virtual machines with e1000 drivers
    • Broadcom BCM5700 NICs (bnx2)

  • When the watchdog timer resets a NIC using the tg3 driver, the reset process sometimes fails to complete initialization and the NIC is left in an unresponsive state. When this happens, the following message appears in the VMkernel log file:

    tg3_init_rings failed, err -12 for device vmnicx

    Symptoms seen without this patch: NICs using the tg3 driver randomly become unresponsive. To work around this issue, unload and reload the tg3 driver (see the sketch below), or reboot the server.

    Note: This issue might affect other types of NICs, such as those using the e1000 and bnx2 drivers.
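
    A sketch of the unload/reload workaround from the host console (assuming the NIC can be taken out of service briefly; vmkload_mod behavior can vary by release):

        # unload the wedged tg3 driver, then reload it
        vmkload_mod -u tg3
        vmkload_mod tg3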


  • Test or real failover using Site Recovery Manager 1.0 Update 1 might be slow, and OS heartbeat timeouts occur within SRM. This problem existed because the default value of heartbeatDelayInSecs, found in /etc/vmware/hostd/config.xml, was set to 300 seconds (5 minutes) or 600 seconds (10 minutes). The default value is now 40 seconds, which allows VMware Tools enough time to return a heartbeat without slowing down SRM failover; an illustrative excerpt follows. For a full description of this problem, see http://kb.vmware.com/kb/1008959.
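
    If you cannot apply the patch immediately, the value can be changed by editing /etc/vmware/hostd/config.xml and restarting the host management agents. The excerpt below is illustrative only; the exact element nesting in config.xml is an assumption, so locate the existing heartbeatDelayInSecs entry in your file rather than copying this verbatim:

        <!-- /etc/vmware/hostd/config.xml (excerpt; nesting illustrative) -->
        <config>
            <heartbeatDelayInSecs>40</heartbeatDelayInSecs>
        </config>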

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware Update Manager. For details, see the VMware Update Manager Administration Guide.

ESXi hosts can also be updated by downloading the most recent "O" (offline) patch bundle from http://support.vmware.com/selfsupport/download/ and installing it with VMware Infrastructure Update or with the vihostupdate command through the Remote Command Line Interface (RCLI); a sample session follows. For details, see the ESX Server 3i Configuration Guide and the ESX Server 3i Embedded Setup Guide (Chapter 10, Maintaining ESX Server 3i and the VI Client) or the ESX Server 3i Installable Setup Guide (Chapter 11, Maintaining ESX Server 3i and the VI Client).
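
A minimal RCLI session might look like the following; the hostname is a placeholder, and the vihostupdate option spellings vary between RCLI releases, so verify the flags against your RCLI documentation before relying on them:

    # check which bulletins are installed, then apply the offline bundle
    vihostupdate --server esxhost.example.com --username root --query
    vihostupdate --server esxhost.example.com --username root \
        --install --bundle ESXe350-200901401-O-SG.zip

Reboot the host manually after the install completes (see the note below).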

Note: ESXi hosts do not reboot automatically when you patch with the offline bundle.
