
VMware ESXi 4.0, Patch ESXi400-201006201-UG: Updates Firmware (1017739)


Details

Product Versions: ESXi 4.0
Build: 261974
Also see: KB 1012514
Patch Classification: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
PRs Fixed: 492664, 488535, 513434, 422364, 455718, 451788, 448767, 393056, 474105, 425374, 482833, 457752, 429039, 489354, 497446, 488910, 478813, 482616, 500082, 481716, 511174, 505842, 489765, 509602, 421829, 505222, 522529, 514702, 521848, 533532, 530815, 516038, 541677, 462110, 468183, 502904, 517176, 532808, 482172, 490318, 495397, 502860, 496809, 479067
Affected Hardware: NetXen NX3031 NIC, NC375T, SGI InfiniteStorage 4000, SGI InfiniteStorage 4100, SGI InfiniteStorage 4600, HP modular storage array
Affected Software: wsman, Snapshot Manager, Small-Footprint CIM Broker daemon, vmkiscsi-tool utility
Related CVE numbers: N/A

Solution

Summaries and Symptoms

This patch fixes the following issues:

  • This bulletin updates the Firmware component of VMware ESXi 4.0 to improve host stability, and may provide minor enhancements to functionality.
  • When the NetXen NX3031 NIC resides on a relatively high memory location (possibly above 512GB), the system becomes unresponsive as it tries to access an address that is out of range. For example, this issue might occur if the system has a physical memory of 96GB but the memory map configuration generates addresses beyond the 512GB memory location.

    A message similar to the following is logged before the system becomes unresponsive:
    <3>nommu_map_single: overflow a31407ad5a+54 of device mask 7fffffffff

    The following is an example of a backtrace of this issue:
    0x4100c00e7338:[0x41801d1984bf]nommu_map_single+0x4e stack: 0x0
    0x4100c00e7598:[0x41801d26f745]unm_nic_xmit_frame+0x4d8 stack: 0x4100c00e75c8
    0x4100c00e7688:[0x41801d1a1609]process_tx_queue+0x874 stack: 0x41000b884600

  • If a virtual machine accesses a CD drive that has no media inserted, the vmware.log file is flooded with redundant entries similar to the following:
    VIDE: ATAPI DMA 0x43 Failed: key 0x2, asc 0x3a, ascq 0x0
  • Changes to vDS virtual machine ports are persisted only once every five minutes. Because port changes are not persisted immediately, virtual machines can lose network connectivity after a failover event if they were configured to link to the vDS less than five minutes before the failover.
  • The suppression of Internet Group Management Protocol (IGMPv1 or IGMPv2) membership reports can result in virtual machines dropping off the multicast group. This occurs when the NIC teaming policy is set to load balancing based on Source Port ID or Source MAC Address.
  • To conform with the DMTF Profile Registration DSP1033 standard, the Base Server role is changed from Antecedent to Dependent in the OMC_ReferencedBaseServerProfile class.
  • In previous releases, wsman indication subscriptions did not persist across a system reboot. In this release, wsman indication subscriptions remain persistent after a system reboot.
  • The VIX C API CopyFileFromHostToGuest() and CopyFileFromGuestToHost() functions do not preserve any permission bits when copying a file between the host and guest operating systems. The permissions on the destination file are set to the owner's default values, which typically allow only read and write by the owner. For example, if the execute bit is set for the file owner on the Linux host, and you use CopyFileFromHostToGuest() to copy the file to a Linux guest operating system, the copy in the guest operating system has the execute bit cleared.
    This release fixes the issue of not preserving file access permissions when files are copied from Windows or Linux hosts to guest operating systems or from guest operating systems to Windows or Linux hosts.
  • The mouse wheel might not scroll up in FreeBSD virtual machines.
  • Storage Array Type Plug-in (SATP) rules now exist that allow VMW_SATP_ALUA plug-ins to own RSSM devices with TPGS capability and VMW_SATP_DEFAULT_AA plug-ins to own RSSM devices without TPGS capability.
  • The esxtop and resxtop utilities do not display various logical CPU power state statistics. This issue is resolved in this release. A new Power screen that displays logical CPU statistics is accessible with the esxtop utility (supported on ESX) and the resxtop utility (supported on ESX and ESXi). To switch to the Power screen, press y at the esxtop or resxtop screen, as shown in the sketch below.
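    For example, a minimal sketch of opening the Power screen remotely; the host name is a placeholder:
    # Connect resxtop to the host, then press y in the interactive display
    # to switch to the new Power screen of logical CPU power statistics.
    resxtop --server esxi01.example.com --username root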
  • When the speed and duplex settings for a NIC are manually set and the ESX/ESXi host is rebooted, the values set for the speed and duplex settings might not be retained. After reboot, the network adapter might auto-negotiate its speed and duplex settings.
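    As a workaround sketch, the settings can be reapplied after a reboot; this example assumes vmnic0 is the affected adapter and uses vicfg-nics, the vSphere CLI counterpart of esxcfg-nics:
    # Force the adapter to 1000 Mbps full duplex instead of auto-negotiation.
    vicfg-nics --server esxi01.example.com -s 1000 -d full vmnic0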
  • When using the Delete All option in Snapshot Manager, the snapshot farthest from the base disk is committed to its parent, causing that parent snapshot to grow. When the commit is complete, that snapshot is removed and the process repeats, committing the newly updated snapshot to its parent. This continues until every snapshot has been committed.

    This method can be relatively slow, because data farthest from the base disk might be copied several times. More importantly, it can aggressively consume disk space if the snapshots are large, which is especially problematic when little space is available on the datastore. The space issue is particularly troublesome because deleting snapshots is often exactly what you do to free up storage.

    This issue is resolved in this release: the order of snapshot consolidation now starts with the snapshot closest to the base disk instead of the farthest, so the same data is not copied repeatedly.
  • Missing return statements in the user world copy utility cause double copying of data to the user world buffer. When return statements are missing after the data is copied to the user world buffer once, a second copy writes data to a VMkernel buffer that does not exist. This might lead to memory corruption or server unresponsiveness.
  • In previous ESXi 4.0 releases, Small-Footprint CIM Broker daemon (sfcbd) trace is not enabled.
    This issue is resolved in this release. The sfcbd trace is enabled by default in ESXi 4.0 Update 2 and later releases.
  • Both the esxcfg-nics -l command (supported on ESX) and the vSphere Client fail to display some ports on quad-port 1G NICs. This issue is mostly seen on HP-branded NetXen cards, such as the NC375T. These NetXen devices and ports are recognized by the hardware and are listed by the lspci command (supported on ESX), but the driver fails to create or probe some ports. This issue is resolved in this release. The fix addresses how the driver creates devices.
  • The vmkiscsi-tool utility reads and displays all the attributes of the target except the log-in status data. Troubleshooting without log-in status data can be complex and cumbersome. This issue is resolved in this release. The vmkiscsi-tool utility now lists log-in errors as well.
  • Consider a vSwitch that has more than one uplink and has promiscuous mode enabled. Some of the packets that come in from uplinks not currently used by the promiscuous port are not discarded. This behavior might mislead some applications, such as a CARP protocol instance.
    This issue is resolved in this release. Starting with this release, the Net.ReversePathFwdCheckPromisc configuration option is provided to explicitly discard, for the promiscuous port, all packets coming in from the currently unused uplinks.
    Note: If the value of the Net.ReversePathFwdCheckPromisc configuration option is changed while the ESXi instance is running, you need to enable or re-enable promiscuous mode for the configuration change to take effect.
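    For example, a minimal sketch of enabling the option from the vSphere CLI, assuming the standard advanced-option path form and a placeholder host name:
    # Discard packets arriving on uplinks not used by the promiscuous port (0 disables the check).
    vicfg-advcfg --server esxi01.example.com -s 1 /Net/ReversePathFwdCheckPromisc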
  • A Physical Address Extension (PAE) enabled multiprocessor Windows 2000 virtual machine might stop responding on reboot, or fail randomly.
  • In this release, the LSI storelib library is updated, resolving the following two issues:
    • VMware_HHRCAlertIndication classes are not getting generated after rebooting ESXi hosts.
    • IR card storelib indication displays incorrect timestamp settings.
  • If a device that is controlled by the roundrobin PSP is configured to use the --iops option, the value set for the --iops option is not retained if the ESXi Server is rebooted.
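    As a workaround sketch, the value can be reapplied after the reboot; the device identifier below is a placeholder:
    # Switch paths after every 100 I/O operations on the round robin device.
    esxcli --server esxi01.example.com nmp roundrobin setconfig --device naa.600601601234 --type iops --iops 100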
  • This patch provides support for the following storage arrays from SGI: SGI InfiniteStorage 4000, SGI InfiniteStorage 4100, and SGI InfiniteStorage 4600. These SGI controllers are managed by LSI Storage Array Type Plugin of the VMware Native Multipath Plugin (NMP).
  • The HP modular storage array has a failover feature that allows the active port to assume the identity of another port including the other port's World Wide Port Name (WWPN). When this happens, the target port has the same destination ID (DID), but a different WWPN. This differentiation causes the driver to mark one of the two target ports as NPR with a port login (PLOGI) that is outstanding while the other target port proceeds normally with the identity it assumed from the first target port. The fix detects this condition and reissues the PLOGI to correctly rediscover the targets.
  • Storage arrays returning less data than expected (an underrun condition) are incorrectly reported by the Qlogic driver as having dropped frames with a SCSI status of Busy instead of an underrun condition with the original SCSI status.
  • Autostart and autostop functionality breaks when lockdown mode is enabled in ESXi. However, code changes in ESXi 4.0 U2 prevent this issue from occurring when a vpxuser account exists on the ESXi host. The vpxuser account exists when the host is managed by vCenter Server.
  • VMotion fails after a third-party security tool performs a port scan of the ESX/ESXi hosts (KB 1010672)
  • HaltingIdleMsecPenalty Parameter: Guidance for Modifying vSphere's Fairness/Throughput Balance (KB 1020233)
  • Rebooting the ESXi host when Storage vMotion is in progress (approximately 50% complete) might result in vSphere Client displaying two virtual machines with the same name and configuration. However, only one of the virtual machines boots properly.

    Though an aspect of this issue is resolved in this release, be aware that you might still see two virtual machines. Now, ESXi prevents you from powering on the wrong virtual machine and issues an error message indicating that you should power on the other virtual machine.
  • A virtual machine fails to respond or power on when its network interface card (NIC) is attached to a port group whose name exceeds 50 characters. This issue is resolved in this release. Port group names can no longer be changed to a name that exceeds 50 characters. Port groups whose previously assigned names exceed 50 characters remain inaccessible to virtual machines.
  • An invalid pin number in the BIOS descriptions of interrupt routing entries causes undefined ESX/ESXi behavior.
  • The vDS state and the proxy switch are saved in separate files and are processed independently. When vDS is created, the same maxPorts setting is used for both. However, when you change the maxPorts setting, only the proxy switch configuration is updated. Because vDS continues to use the old maxPorts value, it reports an error when adding the new port because the total number of the proxy ports is greater than the old maxPorts value.
  • When a virtual machine with a Linux guest operating system has been running for 30 days or more, rebooting the guest might cause the virtual machine to power off. In such a case, an error message similar to the following is logged in the vmware.log file:

    Jun 10 09:57:40.347: vcpu-0| MONITOR PANIC: vcpu-0:VMM fault: regs=0x2f94, exc=0, eip=0x84c91
  • The CPU frequency reported by 64-bit Windows 2003 and 64-bit Windows XP guest operating systems when run in a virtual machine might be inaccurate. In the virtual machine, QueryPerformanceCounter might run at a rate that is higher than the frequency reported by QueryPerformanceFrequency.
  • Disks connected using PVSCSI controllers are accessed through both the BIOS and the PVSCSI driver. When you boot a virtual machine from a PVSCSI disk, the BIOS is utilized initially. Under certain conditions, the virtual machine boot process might encounter stale data, resulting in guest operating system misbehavior or an unresponsive guest operating system.
  • At shutdown, or possibly at other times if certain conditions apply, a vmx process might hang, causing the respective virtual machine to hang.
  • When a guest operating system lacks an APIC and the virtual machine configuration file does not disable APIC, the virtual machine might stop responding when it resumes from the S1 sleep state.
  • This issue has been observed after a vMotion migration when a virtual machine has been up for a relatively long time, such as for one hundred days.
  • If a USB device is unplugged and plugged back into the same USB port of an ESXi system, a virtual machine might fail to detect the reconnected USB device.
  • The following assertion, which indicates that a block free operation failed, is logged for this issue: WARNING: J3: 1644: Error freeing journal block (returned 0) for 4ac5183a-d1b537f3-2627-00237dce6676: Lock was not free
  • When read lengths are not 4-byte aligned, the RPC reply from the ESXi host has padding bytes to 4-byte align the message. This incorrectly fills the SG array with padding bytes, and might cause ESXi to stop responding when the SG array does not have space for the padding bytes.
  • When a Cisco M81KR virtual interface card is added to a host in an Inter-VSAN Routing (IVR) zone with the target on a Cisco MDS switch, a log-in attempt to the target might time out and the host might not see the target storage LUNs.
  • In this release, to describe the power supply health status in the Health Status tab of the vSphere Client, the phrase "failure detected" is replaced with the phrase "failure status".
  • This bulletin updates the VMware ESXi 4.0 ixgbe device driver to version 2.0.38.2.5-2vmw.
  • vSphere 4.0 U2 includes an enhancement of the performance monitoring utility, resxtop. The resxtop utility now provides visibility into the performance of NFS datastores by displaying the following statistics for them: Reads/s, writes/s, MBreads/s, MBwrtn/s, cmds/s, GAVG/s (guest latency).

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware Update Manager. For details, see the VMware vCenter Update Manager Administration Guide.

ESXi hosts can also be updated using vSphere Host Update Utility or by manually downloading the patch zip file from http://support.vmware.com/selfsupport/download/ and installing the bulletin by using the vihostupdate command through the vSphere CLI. For details, see the vSphere CLI Installation and Reference Guide and the vSphere Upgrade Guide.
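For example, a minimal sketch of a manual update with the vSphere CLI vihostupdate command, assuming the patch bundle has been downloaded to the current directory (the host name and bundle file name are placeholders):

    # List the bulletins already installed on the host.
    vihostupdate --server esxi01.example.com --query
    # Install the bulletin from the downloaded offline bundle; reboot the host afterward.
    vihostupdate --server esxi01.example.com --install --bundle ESXi400-201006001.zip --bulletin ESXi400-201006201-UG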
