VMware ESXi 4.0, Patch ESXi400-201105201-UG: Updates Firmware (1031738)

Details

Release date: May 05, 2011

Patch Classification

Critical

Build Information

For build information, see KB 1031736.
Also see KB 1012514.

Host Reboot Required

Yes

Virtual Machine Migration or Shutdown Required

Yes

PRs Fixed

353177, 446450, 454804, 469941, 471479, 472943, 474939, 484269, 488010, 492120, 510150, 515093, 515397, 515772, 517392, 529302, 530420, 531651, 532369, 532459, 532876, 534682, 537552, 545093, 547440, 548060, 548419, 549357, 551064, 551252, 554097, 556538, 558155, 560002, 564152, 565001, 567922, 572093, 572180, 572868, 578895, 582476, 585115, 593507, 593672, 596439, 598817, 605493, 605963, 606647, 610111, 614612, 614821, 615891, 616721, 619252, 627780, 629160, 629294, 630687, 635873, 637636, 651578, 671643

Affected Hardware

N/A

Affected Software

N/A

Related CVE numbers

N/A

 

 

 

Solution

Summaries and Symptoms

This patch resolves the following issues:
  • If your ESXi Server system is connected to a tape library with an Adaptec HBA (for example: AHA-39320A) that uses the aic79xx driver, you might encounter a server failure when the driver tries to access a freed memory area. Such a condition is accompanied by an error message similar to:

    Loop 1 frame=0x4100c059f950 ip=0x418030a936d9 cr2=0x0 cr3=0x400b9000.

  • An ESXi host might fail with a purple diagnostic screen while mounting a virtual CD-ROM drive from the server's Remote Supervisor Adapter (RSA).


  • In the performance chart, the network transmit and receive statistics of a virtual machine connected to a Distributed Virtual Switch (DVS) are interchanged and therefore displayed incorrectly.


  • If you configure the traffic shaping value on a vDS or vSwitch to a value greater than 4Gbps, the value is reset to below 4Gbps after you restart the ESXi host. This issue causes the traffic shaper to shape traffic using much lower values, resulting in very low network bandwidth. For example, if you set the traffic shaping to a maximum bandwidth of 6Gbps, the value changes to about 1.9Gbps after you restart the ESXi host.

  • By default, all snapshots are created in the virtual machine's directory. However, if you specify a custom directory for snapshots, snapshot delta files might remain in this directory when you delete the snapshots. These redundant files eventually fill up disk space and must be deleted manually.

  • An ESXi host might fail with a purple diagnostic screen and display error messages similar to the following:

    21:04:02:05.579 cpu10:4119)WARNING: Fil3: 10730: Found invalid object on 4a818bab-b4240ea4-5b2f-00237de12408 expected
    21:04:02:05.579 cpu10:4119)FSS: 662: Failed to get object f530 28 2 4a818bab b4240ea4 23005b2f 824e17d 4 1 0 0 0 0 0 :Not found
    21:04:02:05.579 cpu0:4096)VMNIX: VMKFS: 2521: status = -2

    This issue occurs when a VMFS volume has a corrupt address in the file descriptor.

  • A virtual machine command such as PowerOn that you issue through hostd immediately after a virtual machine powers off might fail with an error message similar to the following:
    A specified parameter was not correct

    An error message similar to the following might be written to the vCenter log:
    [2009-11-16 15:06:09.266 01756 error 'App'] [vm.powerOn] Received unexpected exception

  • Under certain conditions the QLogic iSCSI HBA driver might block interrupts for a long time and cause an ESXi host to fail with a purple screen.

  • Resetting the storage processor of the HP MSA2012fc storage array causes the ESX/ESXi native multipath driver (NMP) module to send alerts or critical entries to vmkernel logs. These alert messages indicate that the physical media has changed for the device. However, these messages do not apply to all LUN types. They are only critical for data LUNs but do not apply to management LUNs.

  • On blade servers running with ESX, vCenter Server incorrectly reports the service tag of the blade chassis instead of the blade server's service tag. When a blade server is managed by vCenter Server, the service tag number is listed in the System Section in vCenter Server > Configuration tab > Processors. This issue is reported on Dell and IBM blade servers.

  • During storage rescan operations, some virtual machines stop responding when any LUN on the host is in an APD state. For more information, see KB 1016626. To work around the issue described in the KB article while using an earlier version of ESXi host, you must manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before rescanning, and then reset it to 0 after completing the rescan. This issue is resolved by applying this patch. Now you need not apply the workaround of setting and resetting the advanced configuration option while starting the rescan operation. Virtual machines on non-APD volumes do not fail during a rescan operation, even if some LUNs are in an all-paths-down state.
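    For example, on hosts without this patch, the workaround can be applied from the host command line with esxcfg-advcfg (a minimal sketch based on the option named above; run the storage rescan between the two commands):

    esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD    # set before starting the rescan
    esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD    # reset after the rescan completes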

  • Target information for LUNs is sometimes not displayed in the vCenter Server UI. In releases earlier than ESXi 4.0 Update 3, some iSCSI LUNs do not show the target information. To view this information in the Configuration tab, perform the following steps:
    1. Click Storage Adapters under Hardware.
    2. Click iSCSI Host Bus Adapter in the Storage Adapters pane.
    3. Click Paths in the Details pane.

  • ESXi hosts might log messages similar to the following in the VMkernel log files for LUNs not mapped to ESXi hosts: 0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported. Such messages are logged either when ESXi hosts start, or when you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESXi hosts boot.

  • Performing a downgrade or upgrade by using the Dell Update Package (DUP) utility might fail on ESXi 4.0 hosts.

  • An ESXi host does not consider the memory reservation while calculating the provisioned space of a powered-off virtual machine. As a result, the vSphere Client might display a discrepancy in the provisioned space values while the virtual machine is powered-on or powered-off.

  • An ESXi host might become unresponsive if you unexpectedly remove one of the mirrored installation drives from the server that is connected to an LSI SAS controller. The ESXi host shows messages similar to the following:

    [329125.915302] sd 0:0:0:0: still retrying 0 after 360 s
    [329175.056594] sd 0:0:0:0: still retrying 0 after 360 s
    [329226.201904] sd 0:0:0:0: still retrying 0 after 360 s
    [329276.339208] sd 0:0:0:0: still retrying 0 after 360 s
    [329326.478513] sd 0:0:0:0: still retrying 0 after 360 s
  • When you create a virtual disk (.vmdk file) with a large size, for example, more than 1TB, on NFS storage, the creation process might fail with an error: A general system error occurred: Failed to create disk: Error creating disk. This issue occurs when the NFS client does not wait long enough for the NFS storage array to initialize the virtual disk, because the RPC parameter of the NFS client times out. By default, the timeout value is 10 seconds. This patch provides a configuration option to tune the RPC timeout parameter by using the esxcfg-advcfg -s <Timeout> /NFS/SetAttrRPCTimeout command, as shown in the example below.
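    For example, the current timeout can be displayed and then raised (a minimal sketch; the 30-second value is only an illustration):

    esxcfg-advcfg -g /NFS/SetAttrRPCTimeout     # display the current RPC timeout, in seconds
    esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout  # raise the timeout, for example to 30 seconds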

  • If you remove a virtual machine snapshot, the VMware host agent service might fail and display a backtrace similar to the following:
    [2010-02-23 09:26:36.463 F6525B90 error 'App']
    Exception: Assert Failed: "_config != __null && _view != __null" @ bora/vim/hostd/vmsvc/vmSnapshot.cpp:1494

    This occurs because the <vm_name>-aux.xml file, located in the same directory as the virtual machine configuration file, is empty. When a virtual machine is created or registered on a host, the contents of the <vm_name>-aux.xml file are read and the _view object is populated. If the XML file is empty, the _view object is not populated, which results in an error when the snapshot is consolidated.

  • ESXi hosts cannot revert to a previous snapshot after upgrading from ESXi 3.5 Update 4 to ESXi 4.0 Update 3, and the following message might be displayed in vCenter Server: The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features.
    This issue might occur when you create virtual machines on ESXi 3.0 hosts, perform vMotion and suspend virtual machines on ESXi 3.5 hosts, and resume them on ESXi 4.x hosts. This issue is resolved by applying this patch. The error message does not appear. You can revert to snapshots created on ESXi 3.5 hosts, and resume the virtual machines on ESXi 4.x hosts.

  • During storage LUN path failover, if you perform any virtual machine operation that causes delta disk meta data updates such as creating or deleting snapshots, the ESXi host might fail with a purple diagnostic screen.

  • An issue in the Virtual Machine Interface (VMI) timer causes timer interrupts to be delivered to the guest operating system at an excessive rate. This issue might occur after a vMotion migration of a virtual machine that was up for a relatively long time, such as for one hundred days.

  • Virtual machines sometimes fail to power on even though swap space exists on the ESXi host. Powering on a virtual machine fails with an Insufficient COS swap to power on error in /var/log/vmware/hostd.log, even though the host has 800MB of swap enabled and running the free -m command on the host shows more than 20MB free. This fix enables virtual machines to power on when swap space exists on ESXi hosts.

  • On a system under memory constraints, a memory allocation failure while allocating an Async_Token for handling I/O causes the ESXi host to fail with a purple screen and display an error message similar to the following:

    Unhandled Async_Token ENOMEM Condition
  • A bnx2 NIC reset might fail due to firmware synchronization timeout, and in turn cause the ESXi host to fail with a purple screen. The following is an example of a backtrace of this issue:

    0x4100c178f7f8:[0x41802686d9f4]bnx2_poll+0x167 stack: 0x4100c178f838
    0x4100c178f878:[0x4180267a3ec6]napi_poll+0xed stack: 0x4100c178f898
    0x4100c178f938:[0x41802642abaf]WorldletBHHandler+0x426 stack: 0x417fe726c680
    0x4100c178f9a8:[0x4180264218f7]BHCallHandlersInt+0x106 stack: 0x4100c178f9f8
    0x4100c178f9f8:[0x418026421dc1]BH_Check+0x144 stack: 0x4100c178fae0
    0x4100c178fa28:[0x41802642e524]IDT_HandleInterrupt+0x12b stack: 0x418040000000
    0x4100c178fa48:[0x41802642e9f2]IDT_IntrHandler+0x91 stack: 0x0
    0x4100c178fb28:[0x4180264a9b16]gate_entry+0x25 stack: 0x1


    This issue is resolved by applying this patch. The fix prevents the ESXi host from failing, by forcing the NIC to a link down state when the firmware synchronization times out. The following message is written to the VMkernel log:
    bnx2: Resetting... NIC initialization failed: vmnicX.

  • Memory hot-add fails if the assigned virtual machine memory equals the size of its memory reservation. An error message similar to the following is displayed in the vSphere Client:
    Hot-add of memory failed. Failed to resume destination VM: Bad parameter. Hotplug operation failed

    Messages similar to the following are written to /var/log/vmkernel.log on the ESXi host:
    WARNING: FSR: 2804: 1270734344 D: Received invalid swap bitmap lengths: source 0, destination 32768! Failing migration.
    WARNING: FSR: 3425: 1270734344 D: Failed to transfer swap state from source VM: Bad parameter
    WARNING: FSR: 4006: 1270734344 D: Failed to transfer the swap file from source VM to destination VM.
    WARNING: Migrate: 295: 1270734344 D: Failed: Bad parameter (0xbad0007) @0x41800847ae89
    WARNING: Migrate: 295: 1270734344 S: Failed: Bad parameter (0xbad0007) @0x4180084784ba


    This issue occurs if the Fast Suspend Resume (FSR) fails during the hot-add because the source does not have a swap file but the destination does. This issue only applies to the FSR on memory hot-add. vMotion and Storage vMotion are not affected. This issue is resolved by applying this patch.

  • If you do not set up the Syslog settings on an ESXi host, no alarm or configuration error message is generated. This issue is resolved by applying this patch. Now a warning message similar to the following appears in the Summary tab of the ESXi host if you do not configure Syslog: Configuration Issues
    Issue detected on [host name] in : Warning: Syslog not configured.Please check Syslog options under Configuration.Software.AdvancedSettings in vSphere Client
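    For example, remote logging can be configured from the host command line (a minimal sketch; the option paths /Syslog/Remote/Hostname and /Syslog/Remote/Port are assumed from the Advanced Settings path named in the warning, and loghost.example.com is a placeholder):

    esxcfg-advcfg -s loghost.example.com /Syslog/Remote/Hostname   # assumed option: remote syslog host
    esxcfg-advcfg -s 514 /Syslog/Remote/Port                       # assumed option: remote syslog port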

  • When you connect USB devices (including baseboard management controllers (BMC) such as iLo or DRAC-based USB devices) to EHCI controllers, an issue with memory corruption might sometimes cause an ESXi host to fail with a purple screen and display error messages similar to the following:
    2010-12-11T10:11:30.683Z cpu1:2606)@BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4803 - Usage error in dlmalloc
    2010-12-11T10:11:30.684Z cpu1:2606)Code start: 0x418031000000 VMK uptime: 0:03:35:09.476
    2010-12-11T10:11:30.685Z cpu1:2606)0x412208b87b10:[0x418031069302]Panic@vmkernel#nover+0xa9 stack: 0x410014b12e80
    2010-12-11T10:11:30.687Z cpu1:2606)0x412208b87b30:[0x4180310285eb]DLM_free@vmkernel#nover+0x602 stack: 0x1
    2010-12-11T10:11:30.688Z cpu1:2606)0x412208b87b70:[0x418031038ef1]Heap_Free@vmkernel#nover+0x164 stack: 0x3e8


  • When using QLogic HBAs, ESXi 4.x hosts might become unresponsive because of heap memory depletion. The hosts are disconnected in vCenter Server and are inaccessible through SSH or vSphere Client. Error messages similar to the following are written to the VMkernel log:
    vmkernel: 17:12:38:35.647 cpu14:6799)WARNING: Heap: 1435: Heap qla2xxx already at its maximumSize. Cannot expand.
    vmkernel: 17:12:38:35.647 cpu14:6799)WARNING: Heap: 1645: Heap_Align(qla2xxx, 96/96 bytes, 64 align) failed. caller: 0x418011ea149b


  • ESXi host fails with a purple diagnostic screen due to a race in Dentrycache initialization.

  • An ESXi host might fail with a purple diagnostic screen that displays an error message similar to the following when VMFS snapshot volumes are exposed to multiple hosts in a vCenter server cluster.
    WARNING: LVM: 8703: arrIdx (1024) out of bounds

  • When you perform a RevertSnapshot or RevertToCurrentSnapshot operation, the VMware Host Agent fails and vCenter Server displays the ESXi host as disconnected.

  • Canceling a storage vMotion task when relocating a powered-on virtual machine containing multiple disks on the same datastore to a different datastore on the same host might cause the ESXi 4.0 hosts to fail with the following error: Exception: NOT_IMPLEMENTED bora/lib/pollDefault/pollDefault.c:2059.

  • Reverting a snapshot for a virtual machine that has Changed Block Tracking (CBT) enabled to a snapshot older than its last incremental backup can cause inconsistencies in incremental backups of that virtual machine.

  • Stopping VMware Tools while taking a quiesced snapshot of a virtual machine causes hostd to fail. After applying this patch, the quiesced snapshot operation exits gracefully if you stop VMware Tools.

  • When Fault Tolerance is enabled, you cannot hot-remove devices such as NICs and SCSI controllers from the vSphere Client. However, these appear as removable devices in the Windows system tray of the virtual machine and you can remove them from within the guest operating system. This issue is resolved by applying this patch. Now you cannot remove devices from the virtual machine's system tray when Fault Tolerance is enabled.

  • A networking issue might cause an ESXi host to fail with a purple diagnostic screen that displays an error message similar to the following:
    Spin count exceeded (rentry) -possible deadlock with PCPU6
    This issue occurs if the system is sending traffic and modifying the routing table at the same time.

  • While performing snapshot operations, if you simultaneously perform another task such as browsing a datastore, the virtual machine might sometimes be abruptly powered off. Error messages similar to the following are written to vmware.log:
    vmx| [msg.disk.configureDiskError] Reason: Failed to lock the file
    vmx| Msg_Post: Error
    vmx| [msg.checkpoint.continuesync.fail] Error encountered while restarting virtual machine after taking snapshot. The virtual machine will be powered off.

    The issue occurs when a file required by the virtual machine for one operation is accessed by another process.

  • Virtual machines configured with CPU limits running on ESXi 4.x hosts experience performance degradation.

  • Guest software might use CPUID information to determine characteristics of underlying (virtual or physical) CPU hardware. In some instances, CPUID information returned by virtual hardware differs from physical hardware. Based upon these differences, certain components of guest software might malfunction. This issue is resolved by applying this patch. The fix causes certain CPUID responses to more closely match that which physical hardware would return.

  • FalconStor IPStor failover results in an APD (All Paths Down) state in ESXi 4.x when using QLogic FC HBAs.
    Messages similar to the following are written to /var/log/vmkernel:
    vmkernel: 1:10:57:57.524 cpu4:4219)<3>rport-4:0-0: blocked FC remote port time out: saving binding
    vmkernel: 1:10:57:57.712 cpu2:4206)<3>rport-3:0-1: blocked FC remote port time out: saving binding


  • ESXi hosts might fail with a NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from an iSCSI storage array by using the esxcfg-swiscsi command from the service console or through the vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if the tcp.window.size parameter in /etc/vmware/vmkiscsid/iscsid.conf is modified manually. Applying this patch resolves this issue. Warning messages are now logged in /var/log/vmkiscsid.log for ESXi if the tcp.window.size parameter is modified to a value lower than its default.
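    For reference, the parameter can be inspected directly in the initiator configuration file (a minimal sketch; the value shown is hypothetical):

    ~ # grep tcp.window.size /etc/vmware/vmkiscsid/iscsid.conf
    tcp.window.size = 131072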

  • In this patch release, the Intel e1000e 1.1.2-NAPI driver is bundled with ESXi. In earlier releases, the driver was not bundled with ESXi but was provided separately for download.

  • Rescan or add-storage operations that you run from the vSphere Client might take a long time to complete or fail with a timeout, and a large number of messages similar to the following are written to /var/log/vmkernel: Jul 15 07:09:30 <vmkernel_name>: 29:18:55:59.297 <cpu id>ScsiDeviceToken: 293: Sync IO 0x2a to device "naa.60060480000190101672533030334542" failed: I/O error H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel name]: 29:18:55:59.298 cpu29:4356)NMP: nmp_CompleteCommandForPath: Command 0x2a (0x4100b20eb140) to NMP device "naa.60060480000190101672533030334542" failed on physical path "vmhba1:C0:T0:L100" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel_name]: 29:18:55:59.298 cpu29:4356)ScsiDeviceIO: 747: Command 0x2a to device "naa.60060480000190101672533030334542" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.

    VMFS continues trying to mount the volume even if the LUN is read-only. This issue is resolved by applying this patch. Now VMFS does not attempt to mount the volume when it receives the read-only status.

  • Connecting certain USB storage devices might cause an ESXi host to fail with a purple screen and display an error message similar to the following:

    @BlueScreen: #UD Exception(6) in world 1037009:usb-storage @ 0x41803a844eb6 Code starts at 0x41803a400000 0x4100c168fe90:[0x41803a844eb6]usb_stor_invoke_transport+0x73d stack: 0x10 0x4100c168feb0:[0x41803a842158]usb_stor_transparent_scsi_command+0x1b stack: 0x7d41af17b64564
    0x4100c168ff30:[0x41803a8467b9]usb_stor_control_thread+0x7c4 stack: 0x4100b0fdbaa0
    0x4100c168ff60:[0x41803a7c51f2]kthread+0x79 stack: 0x41000000000e
    0x4100c168ffa0:[0x41803a7c2b62]LinuxStartFunc+0x51 stack: 0xe
    0x4100c168fff0:[0x41803a49870b]vmkWorldFunc+0x52 stack: 0x0

    ESXi now detects these protocol violations and handles them.
  • This release provides an updated version of the PVSCSI driver, which enables you to install the Windows XP guest operating system.

  • When you install new NetXen NICs on an ESXi 4.0 host or when you upgrade from ESXi 3.5 to ESXi 4.0, you might see an error message similar to the following on the ESXi 4.0 host: Out of Interrupt vectors. On ESXi hosts, where NetXen 1G and NX2031 10G devices do not support NetQueue, the ESXi host might run out of MSI-X interrupt vectors. ESXi hosts might not start or other devices (such as storage devices) might become inaccessible because of this issue.
  • ESXi hosts using software iSCSI initiators might fail with a purple diagnostic screen that displays iscsi_vmk messages similar to the following:

    @BlueScreen: #PF Exception(14) in world 4254:iscsi_trans_ ip 0x41800965fddb addr 0x8
    Code starts at 0x418009000000
    0x4100c04f7e50:[0x41800965fddb]iscsivmk_ConnShutdown+0x486 stack: 0x410000000000
    0x4100c04f7eb0:[0x418009665e93]iscsivmk_StopConnection+0x286 stack: 0x4100c04f7ef0
    0x4100c04f7ef0:[0x418009663e4c]iscsivmk_TransportStopConn+0x12b stack: 0x4100c04f7f6c
    0x4100c04f7fa0:[0x418009481654]iscsitrans_VmklinkTxWorld+0x36f stack: 0x1d
    0x4100c04f7ff0:[0x41800909870b]vmkWorldFunc+0x52 stack: 0x0
    0x4100c04f7ff8:[0x0]Unknown stack: 0x0

    This issue is known to occur due to I/O delays that cause I/O requests to time out and abort.

  • The minimum and default recommended memory sizes in the virtual machine default settings for RHEL 32-bit and 64-bit guest operating systems are updated as follows:
    For RHEL 6 32-bit, minimum memory is updated from 1GB to 512MB, default recommended memory is updated from 2GB to 1GB, maximum recommended memory is updated from 64GB to 16GB, and hard disk size is updated from 8GB to 16GB.
    For RHEL 6 64-bit, default recommended memory is updated from 2GB to 1GB, and hard disk size is updated from 8GB to 16GB.
  • VMFS volumes might write misleading error messages similar to the following, which indicate disk corruption instead of a benign uninitialized log buffer:
    Aug 4 21:45:43 esx18-m1f4 vmkernel: 114:02:53:33.345 cpu9:21627)FS3: 3833: FS3DiskLock for [type bb9c7cd0 offset 13516784692132593920 v 13514140778984636416, hb offset 16640
    Aug 4 21:45:43 esx18-m1f4 vmkernel: gen 0, mode 16640, owner 00000006-4cd3bbfe-fece-e61f133cdd37 mtime 35821792] failed at 60866560 on volume 'QC_DS6_R1


  • The minimum recommended memory size in the virtual machine default settings for Ubuntu 32-bit and 64-bit guest operating systems is updated from 64MB to 256MB.

  • A vMotion operation might fail if the NSCD (Linux Name Service Cache Daemon) that runs in the service console is not able to resolve the FQDN and LDAP names.

  • An ESXi host connected to an NFS datastore might fail with a purple diagnostic screen that displays error messages similar to the following:
    Saved backtrace from: pcpu 16 SpinLock spin out NMI
    0x4100c00875f8:[0x41801d228ac8]ProcessReply+0x223 stack: 0x4100c008761c
    0x4100c0087648:[0x41801d18163c]vmk_receive_rpc_callback+0x327 stack: 0x4100c0087678
    0x4100c0087678:[0x41801d228141]RPCReceiveCallback+0x60 stack: 0x4100a00ac940
    0x4100c00876b8:[0x41801d174b93]sowakeup+0x10e stack: 0x4100a004b510
    0x4100c00877d8:[0x41801d167be6]tcp_input+0x24b1 stack: 0x1
    0x4100c00878d8:[0x41801d16097d]ip_input+0xb24 stack: 0x4100a05b9e00
    0x4100c0087918:[0x41801d14bd56]ether_demux+0x25d stack: 0x4100a05b9e00
    0x4100c0087948:[0x41801d14c0e7]ether_input+0x2a6 stack: 0x2336
    0x4100c0087978:[0x41801d17df3d]recv_callback+0xe8 stack: 0x4100c0087a58
    0x4100c0087a08:[0x41801d141abc]TcpipRxDataCB+0x2d7 stack: 0x41000f03ae80
    0x4100c0087a28:[0x41801d13fcc1]TcpipRxDispatch+0x20 stack: 0x4100c0087a58

    This issue might occur due to a corrupted response received from the NFS server for any read operation that you perform on the NFS datastore.

  • An ESXi host with Broadcom bnx2x driver might exhibit the following symptoms:
    • The ESXi host might frequently disconnect from the network.
    • The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:

      [0x41802834f9c0]bnx2x_rx_int@esx:nover: 0x184f stack: 0x580067b28, 0x417f80067b97, 0x
      [0x418028361880]bnx2x_poll@esx:nover: 0x1cf stack: 0x417f80067c64, 0x4100bc410628, 0x
      [0x41802825013a]napi_poll@esx:nover: 0x10d stack: 0x417fe8686478, 0x41000eac2b90, 0x4
    • The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:

      0:18:56:51.183 cu10:4106)0x417f80057838:[0x4180016e7793]PktContainerGetPkt@vmkernel:nover+0xde stack: 0x1
      0:18:56:51.184 pu10:4106)0x417f80057868:[0x4180016e78d2]Pkt_SlabAlloc@vmkernel:nover+0x81 stack: 0x417f800578d8
      0:18:56:51.184 cpu10:4106)0x417f80057888:[0x4180016e7acc]Pkt_AllocWithUseSizeNFlags@vmkernel:nover+0x17 stack: 0x417f800578b8
      0:18:56:51.185 cpu10:4106)0x417f800578b8:[0x41800175aa9d]vmk_PktAllocWithFlags@vmkernel:nover+0x6c stack: 0x1
      0:18:56:51.185 cpu10:4106)0x417f800578f8:[0x418001a63e45]vmklnx_dev_alloc_skb@esx:nover+0x9c stack: 0x4100aea1e988
      0:18:56:51.185 cpu10:4106)0x417f80057918:[0x418001a423da]__netdev_alloc_skb@esx:nover+0x1d stack: 0x417f800579a8
      0:18:56:51.186 cpu10:4106)0x417f80057b08:[0x418001b6c0cf]bnx2x_rx_int@esx:nover+0xf5e stack: 0x0
      0:18:56:51.186 cpu10:4106)0x417f80057b48:[0x418001b7e880]bnx2x_poll@esx:nover+0x1cf stack: 0x417f80057c64
      0:18:56:51.187 cpu10:4106)0x417f80057bc8:[0x418001a6513a]napi_poll@esx:nover+0x10d stack: 0x417fc1f0d078
    • The bnx2x driver or firmware sends panic messages and writes a backtrace with messages similar to the following in the /var/log/vmkernel log file:

      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3379(vmnic0)]MC assert!
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3384(vmnic0)]driver assert
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_panic_dump:634(vmnic0)]begin crash dump


  • If you configure NIC teaming port group policies for parameters such as load balancing, network failover detection, notify switches, or failback, and then restart the ESXi host, the host might send traffic through only one physical NIC. This issue is resolved by applying this patch.
  • This patch upgrades HPSA driver on ESXi 4.0 Update 3. This driver supports products based on HP Smart Array controllers.
  • If you perform operations that utilize the Logical Volume Manager (LVM) such as write operations, volume re-signature, volume tt, or volume growth, the ESXi host might fail with a purple diagnostic screen. Error messages similar to the following might be written to the logs: 63:05:21:52.692 cpu1:4135)OC: 941: Could not get object from FS driver: Permission denied
    63:05:21:52.692 cpu1:4135)WARNING: Fil3: 1930: Failed to reserve volume f530 28 1 4be17337 9c7dae2 23004d45 22b547d 0 0 0 0 0 0 0
    63:05:21:52.692 cpu1:4135)FSS: 666: Failed to get object f530 28 2 4be17337 9c7dae2 23004d45 22b547d 4 1 0 0 0 0 0 :Permission denied
    63:05:21:52.706 cpu1:4135)WARNING: LVM: 2305: [naa.60060e80054402000000440200000908:1] Disk block size mismatch (actual 512 bytes, stored 0 bytes)

  • An ESXi host might fail with a purple screen that displays an error message similar to the following when multiple threads trying to use the same TTY cause a race condition.

    ASSERT bora/vmkernel/user/userTeletype.c:969
    cr2=0xff9fcfec cr3=0xa114f000 cr4=0x128


  • When an I/O failure occurs in the megaraid_sas driver because of a device error, the sense buffer is not filled properly. This causes the ESXi host to fail.

  • You might not be able to view the CDP network location information by using the ESXi command line or through the vSphere Client.

  • If the NFS volume hosting a virtual machine encounters errors, the NVRAM file of the virtual machine might become corrupted and grow in size from the default 8K up to a few gigabytes. At this time, if you perform a vMotion or a suspend operation, the virtual machine fails with an error message similar to the following:
    unrecoverable memory allocation failures at bora/lib/snapshot/snapshotUtil.c:856

  • The syslog service does not start if it is configured to log to a remote host that cannot be resolved through DNS when the syslogd daemon starts. This causes both remote and local logging to fail.
    This issue is resolved by applying this patch. Now, if the remote host cannot be resolved, local logging is unaffected, and the syslogd daemon retries resolving and connecting to the remote host every 10 minutes.

  • Snapshots of VMFS3 volumes that were upgraded from VMFS2 and have a block size greater than 1MB might fail to mount when presented to ESXi 4.x hosts. Running the esxcfg-volume -l command to list the detected VMFS snapshot volumes fails with the following error message:
    ~ # esxcfg-volume -l
    Error: No filesystem on the device

    This issue is resolved by applying this patch. Now you can mount or re-signature snapshots of VMFS3 volumes upgraded from VMFS2.
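    After applying the patch, such a volume can be listed and then mounted or resignatured, for example (a minimal sketch; the volume label is hypothetical):

    ~ # esxcfg-volume -l                     # list detected VMFS snapshot volumes
    ~ # esxcfg-volume -M old_vmfs2_volume    # mount the snapshot volume persistently, or
    ~ # esxcfg-volume -r old_vmfs2_volume    # assign a new signature to the volume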

Deployment Considerations

None beyond the required patch bundles and reboot information listed above.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through the VMware Update Manager. For details, see the VMware vCenter Update Manager Administration Guide.

ESXi hosts can also be updated using vSphere Host Update Utility or by manually downloading the patch zip file from http://support.vmware.com/selfsupport/download/ and installing the bulletin by using the vihostupdate command through the vSphere CLI. For details, see the vSphere CLI Installation and Reference Guide and the vSphere Upgrade Guide.
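For example, a command-line installation with the vSphere CLI might look like the following (a minimal sketch; the bundle file name is an assumption, so substitute the actual patch download):

    vihostupdate --server <esxi_host> --query          # list bulletins already installed on the host
    vihostupdate --server <esxi_host> --install --bundle ESXi400-201105001.zip --bulletin ESXi400-201105201-UG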
