VMware ESXi 5.0 Patch Image Profile ESXi-5.0.0-20131002001-no-tools (2055561)

Details

Release date: October 17, 2013

 
Profile Name ESXi-5.0.0-20131002001-no-tools
Build For build information, see KB 2055559.
Vendor VMware, Inc.
Release Date October 17, 2013
Acceptance Level PartnerSupported
Affected Hardware N/A
Affected Software N/A
Affected VIBs VMware: esx-base_5.0.0-3.41.1311175
VMware: misc-drivers_5.0.0-3.41.1311175
VMware: scsi-hpsa_5.0.0-17vmw.500.3.41.1311175
PRs Fixed 751684, 1000945, 959867, 957231, 1005946, 962663, 922238, 884916, 948809, 995516, 917813, 938064, 934743, 917514, 878758, 987104, 976053, 950260, 984095, 1052280, 912673, 987134, 989317, 1044343, 992794, 993325, 994312, 1043831, 979008, 995822, 978685, 999754, 1000151, 1000497, 1000746, 1034105, 978538, 1006824, 1007171, 912574, 868845, 977045, 1025853, 974386, 973244, 1019918, 973050, 932484, 899557, 1021078, 1019019, 971056, 1013285, 1026980, 1012906, 967743, 923580, 1036718, 1040423, 1012453, 1011366, 1010969, 964970, 923427, 890342, 816439, 922606, 981030, 962267
Related CVE numbers N/A

 

 
For information on patch and update classification, see KB 2014447.
 

Solution

Summaries and Symptoms

This patch updates the esx-base, misc-drivers, and scsi-hpsa VIBs to resolve the following issues:

  • PR 751684: When a VMkernel interface has acquired its IP lease from a DHCP server in another subnet, the DHCP client (dhclient-uw) on the host might repeatedly log messages similar to the following:
    2012-08-29T21:36:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:36:35Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:36:49Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:08Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:39Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:37:52Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:01Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:19Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:29Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:41Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:38:53Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:39:09Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    2012-08-29T21:39:24Z dhclient-uw[4884]: DHCPREQUEST on vmk1 to 192.168.2.210 port 67
    This issue is resolved by providing the DHCP server an interface on the same subnet as the VMkernel port so that DHCP renewal can complete.
  • PR 1000945: On an ESXi host, the Small-Footprint CIM Broker daemon (sfcbd) might stop responding frequently when a CIM provider fails during an idle timeout.
  • PR 959867: When you run the esxcfg-nas -l command, the ESXi host displays a warning message similar to the following:
    PREF Warning: PreferenceGet(libdir) before Preference_Init, do you really want to use default?
  • PR 957231: The OMC_MCFirmwareIdentity object path is not consistent for CIM gi/ei/ein operations on systems with an Intelligent Platform Management Interface (IPMI) Baseboard Management Controller (BMC) sensor. As a result, a WS-Management GetInstance() action against "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_SoftwareIdentity?InstanceID=46.10000" might return a wsa:DestinationUnreachable fault on some ESXi servers.
  • PR 1005946: When you attempt to retrieve the value for the resourceCpuAllocMax and resourceMemAllocMax system counters against the host system, the ESXi host returns incorrect values. This issue is observed on a vSphere client connected to a vCenter server.
  • PR 962663: A page fault in the virtual machine monitor might cause an ESXi host to fail with a purple diagnostic screen and report a page fault exception error similar to the following in vmware.log:
    2013-05-15T12:48:25.195Z| vcpu-1| W110: A core file is available in "/vmfs/volumes/5088c935-f71201bf-d750-90b11c033174/BF-TS5/vmx-zdump.000"
    2013-05-15T12:48:25.196Z| vcpu-1| I120: Msg_Post: Error
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-05-15T12:48:25.196Z| vcpu-1| I120+ vcpu-1:VMM fault 14: src=MONITOR rip=0xfffffffffc243748 regs=0xfffffffffc008e98
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5088c935-f71201bf-d750-90b11c033174/BF-TS5/vmware.log".
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-05-15T12:48:25.196Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-05-15T12:48:25.196Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.
    2013-05-15T12:48:25.196Z| vcpu-1| I120:

  • PR 922238: When you add a new host to the cluster and reconfigure High Availability, the ESXi host fails with a purple diagnostic screen. This issue occurs when you change certain NFS parameters.
  • PR 884916: When you view the power usage performance chart for an IBM System x iDataPlex dx360 M3, the chart shows a constant 0 watts. This issue occurs due to a change in the IPMI sensor IDs used by IBM System x iDataPlex dx360 M3 servers.
  • PR 948809: In ESXi 4.x, the Syslog.Local.DatastorePath setting is stored in the /etc/syslog.conf file.
    In ESXi 5.x, the /etc/syslog.conf file is replaced by the /etc/vmsyslog.conf file, and the Syslog.global.logDir setting is stored in /etc/vmsyslog.conf.
    As a result, the logfile and loghost attributes configured in /etc/syslog.conf are not migrated to the logdir and loghost attributes in the new /etc/vmsyslog.conf file. Consequently, when upgrading multiple ESXi 4.x servers to ESXi 5.x, you need to manually configure the Syslog.global.logDir directory on each host after the upgrade is complete.

    This issue is resolved in this release by updating the attributes in the following ways:
    1. The loghost attribute in the /etc/syslog.conf file is retained in the new /etc/vmsyslog.conf file.
    2. The logfile attribute is no longer valid. It is migrated to the logdir attribute in the new /etc/vmsyslog.conf file; the value of logdir is the directory name of the old logfile value. The migration happens only when that directory is still a valid directory on the upgraded system. A sketch for verifying the setting after an upgrade follows.
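    After an upgrade, the active syslog settings can be verified and, if needed, set again from the ESXi Shell. A minimal sketch using the ESXi 5.x esxcli syslog namespace; the datastore path shown is only an example:
    # Show the current syslog configuration, including logdir and loghost
    esxcli system syslog config get
    # Point logdir at a persistent datastore location (example path) and reload the configuration
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
    esxcli system syslog reload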
  • PR 995516: When you take a quiesced snapshot of a virtual machine, the .vmx file is not updated until the next power on. The .vmx configuration is therefore outdated and points to the original .vmdk. If the virtual machine fails between the snapshot and the next power on, data loss and an orphaned .vmdk can result.
  • PR 917813: The migration rates of some lazy zeroed thick disks might be slow on some ESXi servers when compared to other disk transfers of the same size. This issue occurs due to higher latencies while accessing some memory regions by the file system cache (buffer cache) component. As a result, the migration rates become slow for ESXi servers.
  • PR 938064: NetApp has requested an update to the SATP claim rule which prevents the reservation conflict for a Logical Unit Number (LUN). The updated SATP claim rule uses the reset option to clear the reservation from the LUN and allows other users to set the reservation option.
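    To confirm that the updated NetApp claim rule is present on a patched host, the SATP rules can be listed from the ESXi Shell; a minimal sketch (the exact rule name and options depend on the array model):
    # List SATP claim rules and filter for the NetApp entry
    esxcli storage nmp satp rule list | grep -i netapp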
  • PR 934743: When you perform certain virtual machine operations, an issue related to metadata corruption of LUNs might sometimes cause an ESXi host to fail with a purple screen and display error messages similar to the following:
    @BlueScreen: #DE Exception 0 in world 4277:helper23-15 @ 0x41801edccb6e
    3:21:13:31.624 cpu7:4277)Code start: 0x41801e600000 VMK uptime: 3:21:13:31.624
    3:21:13:31.625 cpu7:4277)0x417f805afed0:[0x41801edccb6e]Fil3_DirIoctl@esx:nover+0x389 stack: 0x410007741f60
    3:21:13:31.625 cpu7:4277)0x417f805aff10:[0x41801e820f99]FSS_Ioctl@vmkernel:nover+0x5c stack: 0x2001cf530
    3:21:13:31.625 cpu7:4277)0x417f805aff90:[0x41801e6dcf03]HostFileIoctlFn@vmkernel:nover+0xe2 stack: 0x417f805afff0
    3:21:13:31.625 cpu7:4277)0x417f805afff0:[0x41801e629a5a]helpFunc@vmkernel:nover+0x501 stack: 0x0
    3:21:13:31.626 cpu7:4277)0x417f805afff8:[0x0] stack: 0x0


    Metadata corruption of LUNs will now result in an error message.
  • PR 917514: A RedHat Enterprise Linux 4.8 32-bit virtual machine with a workload that is mostly idle with intermittent or simultaneous wakeup of multiple tasks, might show a higher load average on ESXi 5.0 as compared to ESX/ESXi 4.0.
  • PR 878758: The LSI CIM provider (one of the sfcb processes) leaks file descriptors. This can cause sfcb-hhrc to stop working and sfcbd to restart. The syslog file might display messages similar to the following:

    sfcb-LSIESG_SMIS13_HHR[ ]: Error opening socket pair for getProviderContext: Too many open files
    sfcb-LSIESG_SMIS13_HHR[ ]: Failed to set recv timeout (30) for socket -1. Errno = 9
    ...
    ...

    sfcb-hhrc[ ]: Timeout or other socket error
    sfcb-hhrc[ ]: TIMEOUT DOING SHARED SOCKET RECV RESULT ( )

  • PR 987104, 976053: When a stateless ESXi host is rebooted and the host is configured to obtain its DNS configuration and host name from a DHCP server, the syslog file displays the host's name as localhost instead of the host name obtained from the DHCP server. As a result, for a remote syslog collector, all ESXi hosts appear to have the same host name.
  • PR 950260: Upgrading an ESXi host in a High Availability (HA) cluster by using vCenter Update Manager (VUM) might fail with an error message similar to the following:
    the host returned esx update error code 7
    This issue occurs when multiple staging operations are performed with different baselines in Update Manager.
  • PR 984095: When you use the virtual disk metric to view performance charts, you only have the option of viewing the virtual disk performance charts for the available virtual disk objects.

    This patch allows you to view virtual disk performance charts for virtual machine objects as well. This is useful when you need to trigger alarms based on virtual disk usage by virtual machines.
  • PR 1052280: When ESXi hosts fail with a purple screen, the memory controller error messages might be incorrectly reported as Translation Look-aside Buffer (TLB) error messages, Level 2 TLB Error.
  • PR 912673: When you boot from a SAN, the boot device discovery process might take a long time to complete. This issue is resolved by allowing users to pass a rescan timeout parameter on the ESX command line before the boot process to configure the timeout value.
  • PR 987134: The netlogond process might consume a large amount of memory in an Active Directory environment with multiple unreachable domain controllers. As a result, netlogond might fail and the ESXi host might lose Active Directory functionality.
  • PR 989317: When you create a thick-provisioned virtual machine disk file (VMDK) with a size of 2 TB, the datastore browser incorrectly reports the disk size as 0.00 Bytes.
  • PR 1044343: ESXi hosts might be disconnected from the vCenter Server when hostd fails with error messages similar to the following:
    2013-06-04T11:47:30.242Z [6AF85B90 info 'ha-eventmgr'] Event 110 : /sbin/hostd crashed (1 time(s) so far) and a core file might have been created at/var/core/hostd-worker-zdump.000. This might have caused connections to the host to be dropped.
    This issue occurs when a check is performed to ensure correct cache configuration.
  • PR 992794: If an ESX host is not configured with a core dump partition and is not configured to direct core dumps to a dump collector service, important troubleshooting information might be lost. This issue is resolved by adding a check for the core dump configuration and core dump partition when the hostd service starts. The configuration can also be checked manually, as shown in the sketch below.
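    A minimal sketch for checking the core dump configuration from the ESXi Shell:
    # Show whether a local core dump partition is configured and active
    esxcli system coredump partition get
    # Show whether a network dump collector is configured and enabled
    esxcli system coredump network get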
  • PR 993325: When you execute ESXCLI commands or if you use monitoring tools that rely on data from the SNMP agent in ESXi, the connection to the ESXi host might be lost due to failure of the hostd service.
  • PR 994312: You might not be able to enable High Availability (HA) on a cluster after a single host of that HA cluster is placed in maintenance mode. This issue occurs when the value of the inode descriptor number is not correctly set in the ESX root file system (VisorFS), and as a result the stat calls on those inodes fail.
  • PR 1043831: You might be unable to mount an NFS datastore that has a remote path name of 115 characters or more, and an error message similar to the following is displayed:
    Unable to get Console path for Mount
    The ESXi host identifies a NAS volume as a combination of the NFS server IP address and the complete path name of the exported share. This issue occurs when this combination exceeds 128 characters.

    This issue is resolved in this patch by increasing the maximum length of the NAS volume identifier to 1024 characters.
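    For reference, an NFS datastore is mounted by specifying the server and the exported path separately, and it is the combined length of these two values that is subject to the limit. A minimal sketch (server name, export path, and volume name are examples only):
    # Mount an NFS export; the host records "<server> + <export path>" as the volume identity
    esxcli storage nfs add -H nfs-server.example.com -s /vol/exports/very/long/path/to/datastore -v nfs-ds01
    # List mounted NFS volumes to confirm the mount
    esxcli storage nfs list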
  • PR 979008: ESXi hosts disconnect from vCenter Server and you cannot reconnect the host to vCenter Server. This issue is caused by the hardware monitoring service (sfcbd) populating the /var/run/sfcb directory with over 5000 files.

    The hostd.log file located at /var/log/ indicates that the host is out of space:
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device


    The vmkernel.log file located at /var/log indicates that it is out of inodes:
    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
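    To check whether a host is approaching this condition, the files created under /var/run/sfcb can be counted and the hardware monitoring service restarted from the ESXi Shell; a minimal illustrative check, not part of the fix itself:
    # Count the files that sfcbd has created under /var/run/sfcb
    ls /var/run/sfcb | wc -l
    # Restart the hardware monitoring service to release stale files
    /etc/init.d/sfcbd-watchdog restart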

  • PR 995822: You might be unable to clone and perform cold migration of virtual machines with large virtual machine disk (VMDK) and snapshot files to other datastores. This issue occurs when the vpxa process exceeds its memory allocation limit during cold migration. As a result, the ESXi host loses its connection to vCenter Server and the migration fails.

  • PR 978685: When using vShield Endpoint and Deep Security, an issue with the DvFilter module might lead to netGPHeap depletion and a memory leak. This might cause virtual machines to disconnect from the network after they are restarted or migrated by using vMotion.
    The log files might display messages similar to the following:
    2012-11-30T11:29:06.368Z cpu18:923386)WARNING: E1000: 8526: failed to enable port 0x30001b6 on vSwitch1: Out of memory
  • PR 999754: When there is an invalid CIM subscription in the system and you perform a host profile compliance check against a host, an error message similar to the following might be displayed:

    Error extracting indication configuration: (6, u'The requested object could not be found')

    You cannot apply the host profile on the host.

    With this fix, you can apply host profiles even if there is an invalid indication in the host profile.
  • PR 1000151: Insufficient memory allocation to the logging resource pool might cause ESXi to stop logging messages in the log files. Messages similar to the following are displayed in the log files:
    <TimeStamp> vmsyslog.main : ERROR ] Watchdog 2625 fired (child 2626 died with status 256)!
    <TimeStamp> vmsyslog : CRITICAL] vmsyslogd daemon starting (69267)
    <TimeStamp> vmsyslog.main : ERROR ] Watchdog 2625 exiting
    <TimeStamp> vmsyslog.loggers.file : ERROR ] Gzip logfile /scratch/log/hostd0.gz failed
    <TimeStamp> vmsyslog.main : ERROR ] failed to write log, disabling
    <TimeStamp> vmsyslog.main : ERROR ] failed to send vob: [Errno 28] No space left on device


    The logging memory pool limit is increased to 48 MB.
  • PR 1000497: When VMX passes a VPN value to read a page, the VMkernel fails to find a valid machine page number for that VPN value, which results in the host failing with a purple diagnostic screen.

  • PR 1000746: ESXi hosts might fail with a purple screen and display error messages similar to the following:
    2013-02-22T15:33:14.296Z cpu8:4104)@BlueScreen: #PF Exception 14 in world 4104:idle8 IP 0x4180083e796b addr 0x1
    2013-02-22T15:33:14.296Z cpu8:4104)Code start: 0x418007c00000 VMK uptime: 58:11:48:48.394
    2013-02-22T15:33:14.298Z cpu8:4104)0x412200207778:[0x4180083e796b]ether_output@ # +0x4e stack: 0x41000d44f360
    2013-02-22T15:33:14.299Z cpu8:4104)0x4122002078b8:[0x4180083f759d]arpintr@ # +0xa9c stack: 0x4100241a4e00

    This issue occurs due to race conditions in ESXi TCP/IP stack.

  • PR 1034105: Incorrect error messages similar to the following might be displayed by CIM providers:
    "Request Header Id (886262) != Response Header reqId (0) in request to provider 429 in process 5. Drop response."
    This issue is resolved in this release by updating the error log and restarting the sfcbd management agent to display correct error messages similar to the following:
    Header Id (373) Request to provider 1 in process 0 failed. Error:Timeout (or other socket error) waiting for response from provider.

  • PR 978538: On an ESXi host, a virtual machine fails to power on if its VMDK file is not accessible and its VMX file has disk.powerOnWithDisabledDisk set to TRUE and answer.msg.disk.notConfigured set to Yes. The following error message is displayed:
    The system cannot find the file specified.

  • PR 1006824: Under certain conditions, the provisioned space value for an NFS datastore might be calculated incorrectly, which can generate false alarms.

  • PR 1007171: After you install VMware Tools, Windows Server 2008 virtual machines might stop responding on a subsequent restart from the system login page. This issue occurs when the default settings of the SVGA drivers installed with VMware Tools are incorrect. The virtual machines might also stop responding if you move the mouse and press any key during the restart process.

  • PR 912574: Datastore names with spaces or parentheses are handled incorrectly by the esxcli command. This issue is observed when you attempt an ESXi upgrade by using the esxcli command.

  • PR 868845: When virtual machines are configured with e1000 network adapters, ESXi hosts might fail with a purple diagnostic screen and display messages similar to the following:
    @BlueScreen: #PF Exception 14 in world 8229:idle37 IP 0x418038769f23 addr 0xc
    0x418038600000 VMK uptime: 1:13:10:39.757
    0x412240947898:[0x418038769f23]E1000FinalizeZeroCopyPktForTx@vmkernel#nover+0x1d6 stack: 0x41220000
    0x412240947ad8:[0x41803877001e]E1000PollTxRing@vmkernel#nover+0xeb9 stack: 0x41000c958000
    0x412240947b48:[0x418038771136]E1000DevAsyncTx@vmkernel#nover+0xa9 stack: 0x412240947bf8
    0x412240947b98:[0x41803872b5f3]NetWorldletPerVMCB@vmkernel#nover+0x8e stack: 0x412240947cc0
    0x412240947c48:[0x4180386ed4f1]WorldletProcessQueue@vmkernel#nover+0x398 stack: 0x0
    0x412240947c88:[0x4180386eda29]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x2
    0x412240947ce8:[0x4180386182fc]BHCallHandlers@vmkernel#nover+0xbb stack: 0x100410000000000
    0x412240947d28:[0x4180386187eb]BH_Check@vmkernel#nover+0xde stack: 0xf2d814f856ba
    0x412240947e58:[0x4180387efb41]CpuSchedIdleLoopInt@vmkernel#nover+0x84 stack: 0x412240947e98
    0x412240947e68:[0x4180387f75f6]CpuSched_IdleLoop@vmkernel#nover+0x15 stack: 0x70
    0x412240947e98:[0x4180386460ea]Init_SlaveIdle@vmkernel#nover+0x13d stack: 0x0
    0x412240947fe8:[0x4180389063d9]SMPSlaveIdle@vmkernel#nover+0x310 stack: 0x0

  • PR 977045: ESX hosts might fail with a purple diagnostic screen due to a memory allocation failure from the world heap for firing traces, a mechanism used by the VMkernel to batch up after-write traces on guest pages. This issue occurs because the memory allocation failure is not handled properly.

  • PR 1025853: If the vSphere Client network connection is interrupted when a virtual machine is using a client device CD-ROM, the virtual machine might stop responding and might not be accessible on the network for some time.

  • PR 974386: If you use virtual network adapters in promiscuous mode to track network activity, a certain issue with the port mirroring feature can disable a mirror port and cause virtual machines to stop tracking the outbound network traffic.

  • PR 973244: When IBM Systems Director (ISD) is used to monitor the ESX server, the CIM server returns an incorrect PerceivedSeverity value for indications sent to ISD. This issue is resolved by correcting the sensor type and the PerceivedSeverity return value.

  • PR 1019918: During hostd performance tests, virtual machine operations such as Create nVMs, Reconfig nVMs, and Clean nVMs might result in performance regressions. This happens because a datastore refresh call is processed for every vdiskupdate message. This issue is resolved by modifying the datastore refresh logic.

  • PR 973050: When you take a snapshot of a virtual machine with a vmxnet3 NIC, the virtual machine's network interface is disconnected and reconnected, which resets the broadcast counter and results in an incorrect representation of network statistics.

  • PR 932484: When a virtual disk with Thick Provision Lazy zeroed format is created on a VAAI supported NAS in a VAAI enabled ESXi host, the provisioned space for the corresponding virtual machine and datastore might be displayed incorrectly.

  • PR 899557: When vpxa writes syslog entries longer than 1024 characters, it categorizes the part of the message body beyond 1024 characters as Unknown and writes it to the syslog.log file instead of the vpxa.log file. As a result, the ESXi host displays storage-related Unknown messages in syslog.log files. This issue is resolved by increasing the line buffer limit to a greater value.

  • PR 1021078: A virtual machine might fail and display error messages similar to the following in the vmware.log file when a guest operating system with the e1000 NIC driver is placed in D3 suspended mode:
    2013-08-20T10:14:35.121Z[+13.605]| vcpu-0| SymBacktrace[2] 000003ffff023be0 rip=000000000039d00f
    2013-08-20T10:14:35.121Z[+13.606]| vcpu-0| Unexpected signal: 11

    This issue occurs for virtual machines that use IP aliasing and have more than 10 IP addresses.

  • PR 1019019: When you apply a complex host profile, for example, one with a large number of portgroups and datastores, the operation might time out with an error message similar to the following:
    2013-04-09T15:27:38.562Z [4048CB90 info 'Default' opID=12DA4C3C-0000057F-ee] [VpxLRO] -- ERROR task-302 -- -- vim.profile.host.profileEngine.HostProfileManager.applyHostConfig: vmodl.fault.SystemError:
    --> Result:
    --> (vmodl.fault.SystemError) {
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> reason = "",
    --> msg = "A general system error occurred: ",
    --> }
    The hostd default timeout is 10 minutes. As applyHostConfig is not a progressive task, the hostd service is unable to distinguish between a failed task and a long-running task during the hostd timeout. As a result, the hostd service reports that applyHostConfig has failed.

    This issue is resolved in this release by setting a 30-minute timeout as part of the HostProfileManager managed object. However, this issue might still occur when you attempt to apply a large host profile and the task exceeds the 30-minute timeout limit. To work around this issue, re-apply the host profile.

  • PR 971056: If two dvfilter processes attempt to manage a single configuration variable at the same time, with one process freeing the filter configuration while the other process attempts to lock it, an ESXi host failure might result. This issue occurs when the guest operating system is shut down during the dvfilter cleanup process.
  • PR 1013285: When NFS datastores are connected through a Layer 3 routed network and the NFS vmknic is in a different subnet than the NFS filer, the datastores might exhibit high guest average latency (GAVG) for virtual machines running on them at low I/O operations per second (IOPS). For IOPS values of 1 or less, the NFS datastores might exhibit a GAVG as high as 40 ms. Under heavy I/O load, the GAVG values for the NFS datastores decrease.

  • PR 1026980: The hostd process might stop responding and display error messages similar to the following during the creation of a VMFS datastore:
    Panic: Assert Failed: "matchingPart != __null"
    This issue occurs when the partition alignment during the datastore creation requires a change in the startSector data object and the corresponding block range is not adjusted properly.
  • PR 1012906: When the sfcbd service is enabled in trace mode and the service stops running, the Hardware Status tab of an ESXi host might report an error, and third-party tools might not be able to monitor the ESXi host's hardware status.

  • PR 967743: Multiple virtual machines might fail to reboot and generate VMX core file after reboot. This issue is seen with virtual machines that are migrated from ESX/ESXi 3.5 hosts to ESX/ESXi hosts with versions ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1 Update 2, ESXi 5.0 Update 2, and above by using vMotion.
    This issue is resolved for ESXi 5.0 hosts in this release.

  • PR 923580: When you enable Changed Block Tracking (CBT) on a virtual machine and perform QueryChangedDiskAreas after moving a virtual machine disk (VMDK) to a different volume by using vMotion, the incremental backup might fail with a FileFault error similar to the following:
    2012-09-04T11:56:17.846+02:00 [03616 info 'Default' opID=52dc4afb] [VpxLRO] -- ERROR task-internal-4118 -- vm-26512 -- vim.VirtualMachine.queryChangedDiskAreas: vim.fault.FileFault:
    --> Result:
    --> (vim.fault.FileFault) {
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> file = "/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> msg = "Error caused by file /vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> }

    This issue occurs when a library function incorrectly reinitializes the disk change tracking facility.

  • PR 1036718: Microsoft failover cluster I/O might fail to survive Storage Fault Tolerance, and I/O might fail with a reservation conflict. This issue occurs when two failover cluster virtual machines are placed on two different ESXi hosts in an ALUA configuration.

  • PR 1040423: When a floppy image is attached to a virtual machine, an attempt to install a Linux operating system on it might fail.
    The vmware.log file might display entries similar to the following:
    RemoteFloppyVMX: Remote cmd uid 0 timed out.
    | vcpu-3| Caught signal 11 -- tid 115057
    | vcpu-3| SIGNAL: eip 0x1f5c2eca esp 0xcaf10da0 ebp 0xcaf10e00
    | vcpu-3| SIGNAL: eax 0x0 ebx 0x201826a0 ecx 0x593faa10 edx 0x201d63b0 esi 0x0 edi 0x593faa00
    | vcpu-3| r8 0x593faa00 r9 0x0 r10 0x1fd79f87 r11 0x293 r12 0x2022d000 r13 0x0 r14 0x0 r15 0x1fd6eba0
    | vcpu-3| Backtrace:
    | vcpu-3| Backtrace[0] 00000000caf109a0 rip=000000001f8caf9e rbx=000000001f8cad70 rbp=00000000caf109c0 r12=0000000000000000 r13=00000000caf198c8 r14=00000000caf10b50 r15=0000000000000080
    ....
    | vcpu-3| SymBacktrace[2] 00000000caf10ad0 rip=000000000038c00f
    | vcpu-3| Unexpected signal: 11.
    | vcpu-3| Writing monitor corefile "/vmfs/volumes/519f119b-e52d3cf3-6825-001999db3236/EMS/vmmcores.gz"


  • PR 1012453: When you attempt to shut down or reboot an ESXi host through the Direct Console User Interface (DCUI), the host freezes and you are unable to complete the shutdown process.

  • PR 1011366: When you use Storage vMotion to migrate virtual machines with 2 TB of storage (for example, two 1 TB disks), an error message similar to the following might be displayed:
    A general system error occurred: Source detected that destination failed to resume.


    The virtual machine fails to start on the destination host and an error message similar to the following is displayed:
    Error: "VMware ESX unrecoverable error: (vmx) Unexpected signal 8".

  • PR 1010969: The Hardware Status tab fails to display health statuses and displays an error message similar to the following:

    Hardware monitoring service on this host is not responding or not available.

    The hardware monitoring service (sfcbd) stops and the syslog file might display entries similar to the following:

    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 6750210)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 6 payLoadSize 19 chunkSize 0 from 12 resp 6750210
    sfcb-smx[xxxxxx]: spRecvReq returned error -1. Skipping message.
    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 4)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 220 payLoadSize 116 chunkSize 104 from 12 resp 4
    ...
    ...
    sfcb-vmware_int[xxxxxx]: spGetMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: rcvMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: Timeout or other socket error

  • PR 964970: When you attempt to add an ESXi server to vCenter Server, multiple ESXi servers might stop responding and an error message similar to the following might be displayed:
    Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.
    This issue occurs when a high volume of HTTP URL requests are sent to hostd and the hostd service fails.

  • PR 923427: When High Availability (HA) is enabled on the ESXi host and vMotion is performed, the failover of a virtual machine to a designated failover host might not be successful.
    This issue occurs when the virtual machine swap files (.vswp files) are locked, and as a result the Fault Domain Manager (FDM) agents for HA cannot fail over the virtual machine to the designated host.

  • PR 890342: If you disable a DHCP enabled port group that contains the default gateway, the default gateway is left blank. When you re-enable the port group, the default gateway is still left blank.
    With this patch the default gateway is updated and not left blank.
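    The VMkernel default gateway can be verified and, if it has been cleared, restored from the ESXi Shell; a minimal sketch (the gateway address shown is only an example):
    # List the current VMkernel routes, including the default gateway
    esxcfg-route -l
    # Restore the default gateway if it is blank (example address)
    esxcfg-route 192.168.2.1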

  • PR 816439: After joining an ESXi 5.0 host to an Active Directory (AD) domain, an attempt to assign permission to AD users and groups might fail. You are unable to view the domain to which you have joined the host in the dropdown for adding permissions to AD users and groups. This issue occurs as the lsassd service on the host stops. The lsassd.log file displays entries similar to the following:
    20111209140859:DEBUG:0xff92a440:[AD_DsEnumerateDomainTrusts() /build/mts/release/bora-396388/likewise/esxi-esxi/src/linux/lsass/server/auth-providers/ad-provider/adnetapi.c:1127] Failed to enumerate trusts at host.your.domain.name.net (error 59)
    20111209140859:DEBUG:0xff92a440:[AD_DsEnumerateDomainTrusts() /build/mts/release/bora-396388/likewise/esxi-esxi/src/linux/lsass/server/auth-providers/ad-provider/adnetapi.c:1141] Error code: 40096 (symbol: LW_ERROR_ENUM_DOMAIN_TRUSTS_FAILED)

  • PR 922606: When network traffic passes through the bnx2x device driver and vmklinux receives the Large Receive Offload (LRO) generated packets, the network packets might be dropped, resulting in the failure of the ESXi host with a purple screen.
    The ESXi host experiences a divide-by-zero exception during the TSO split, which finally results in the failure of the host. This issue occurs when the bnx2x driver sends the LRO packet with a TCP Segmentation Offload (TSO) MSS value set to zero.
    The ESXi host also fails when the received packet is invalid for any one of the following reasons:
    • The GSO size is zero
    • The GSO type is not supported
    • The VLAN ID is incorrect

  • PR 981030: When VMKlinux incorrectly sets the device status, false Device Busy (D:0x8) status messages similar to the following are displayed in VMkernel log files:
    2013-04-04T17:56:47.668Z cpu0:4012)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x412441541f80) 0x16, CmdSN 0x1c9f from world 0 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x0 D:0x8 P:0x0 Possible sense data: 0x0 0x0 0x0
    This generates false alarms as the storage array does not send any Device Busy status message for SCSI commands.
    This issue is resolved in this release by correctly pointing to Host Bus Busy (H:0x2) status messages for issues in the device drivers similar to the following:
    2013-04-04T13:16:27.300Z cpu12:4008)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x4124819c2f00) 0x2a, CmdSN 0xfffffa80043a2350 from world 4697 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0

  • PR 962267: When you execute the cat command on an HP Smart Array controller with about 40 or more logical unit numbers, the ESXi host fails with a purple diagnostic screen. This happens because of a buffer overflow and data truncation in the hpsc handler.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated using the image profile, by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide. For information about image profiles and how they apply to ESXi 5.0 hosts, see Image Profiles of ESXi 5.0 Hosts (KB 2009231). ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. A command sketch follows.
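
For example, an update by using the image profile from the downloaded offline bundle might look like the following sketch; the depot path and ZIP file name are placeholders, and the profile name is the one listed above. Run the commands from the ESXi Shell with the host in maintenance mode, and reboot the host afterward:

    # Apply the image profile from the downloaded offline bundle (path and file name are examples)
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi500-201310001.zip -p ESXi-5.0.0-20131002001-no-tools

    # Alternatively, install the VIBs directly from the same bundle
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201310001.zip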
