VMware ESXi 5.1 Patch Image Profile ESXi-5.1.0-20140102001-no-tools (2062311)

Details

Release date: January 16, 2014

Profile Name
ESXi-5.1.0-20140102001-no-tools
Build
For build information, see KB 2062314.
Vendor
VMware, Inc.
Release Date
January 16, 2014
Acceptance Level
PartnerSupported
Affected Hardware
N/A
Affected Software
N/A
Affected VIBs
esx-base, net-tg3, net-e1000e, scsi-rste, scsi-mpt2sas, sata-ata-piix, sata-ahci
PRs Fixed
926769, 938066, 940710, 949248, 952465, 957286, 959763, 962122, 966990, 969068, 970635, 975744, 976683, 979390, 981031, 984056, 985864, 986734, 987135, 987620, 990632, 990770, 994313, 996536, 998315, 1000989, 1005376, 1006825, 1006959, 1007170, 1007520, 1008254, 1008830, 1008892, 1011061, 1018995, 1019015, 1020453, 1026207, 1026618, 1027224, 1027949, 1029010, 1033086, 1034629, 1035097, 1035388, 1037369, 1038151, 1040222, 1042045, 1042668, 1044408, 1044919, 1045271, 1045434, 1046589, 1051263, 1051674, 1051735, 1053197, 1053550, 1054945, 1056515, 1059000, 1059043, 1061459, 1067458, 1069332, 1070074, 1070930, 1074681, 1074989, 1078765, 1078870, 1079558, 1082095, 1083962, 1087008, 1087441, 1087546, 1098119, 1098296, 1114090, 1119268, 1121196, 1122123, 1001868, 1038121, 1032019, 1074562, 1040433, 895622
Related CVE numbers
N/A

For more information on patch and update classification, see KB 2014447.


Solution

Summaries and Symptoms

This patch updates the esx-base, net-tg3, net-e1000e, scsi-rste, scsi-mpt2sas, sata-ata-piix, and sata-ahci VIBs to resolve the following issues:

  • PR 926769: In a LUN RESET task with NetApp targets, the LUN RESET might fail. The fc_fcp_resp() function does not complete the LUN RESET task because it assumes that FCP_RSP_INFO is 8 bytes, including the 4-byte reserved field; however, NetApp targets respond to a LUN RESET with only 4 bytes of FCP_RSP_INFO. This causes fc_fcp_resp() to error out without completing the task. Reset the host to recover from this issue.

  • PR 938066: NetApp has requested an update to the SATP claim rule to prevent reservation conflicts on a Logical Unit Number (LUN). The updated SATP claim rule uses the reset option to clear the reservation from the LUN and allow other users to set a reservation.
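SATP claim rules of this kind are managed with esxcli on the ESXi host. The sketch below shows how such a rule can be inspected and added; the option string and description are illustrative assumptions, not the exact rule shipped in this patch.

```shell
# List existing SATP claim rules and filter for NetApp entries.
esxcli storage nmp satp rule list | grep -i netapp

# Illustrative sketch of adding a vendor rule that uses the reset option;
# the SATP name, option string, and description are assumptions here.
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V NETAPP \
  -o reset_on_attempted_reserve -e "NetApp arrays, clear reservations"
```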

  • PR 940710: Some 3D applications display misplaced geometry on a Windows 7 or Windows 8 virtual machine if the Enable 3D support option is selected.

  • PR 949248: You might be unable to clone and perform cold migration of virtual machines with large virtual machine disk (VMDK) and snapshot files to other datastores. This issue occurs when the vpxa process exceeds the limit of memory allocation during cold migration. As a result, the ESXi host loses the connection from the vCenter Server and the migration fails.

  • PR 952465: If you enable TPM/TXT in system BIOS (1.41 version) mode of an ESXi host, then both Tboot and ESXi host might fail to boot. This issue is observed on IBM 3650 M4 server.

  • PR 957286: When you configure a host (Host1) with the Path Selection Policy (PSP) set to Fixed and a preferred path configured, then extract a host profile from Host1 and apply it to another host (Host2), the initial compliance check of the profile on Host2 might fail in the Host Profile Plugin module with an Invalid Path Value error.

  • PR 959763: When you boot from a SAN and the boot device discovery process takes a long time to complete, the ESXi host's /bootbank points to /tmp. This release adds a bootDeviceRescanTimeout parameter that can be passed on the ESXi boot command line before the boot process to configure the timeout value and resolve the issue.
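One way to make a boot option of this kind persistent is to append it to the kernel options in the host's boot configuration. This is a sketch under assumptions: the 60-second value and the sed-based edit of /bootbank/boot.cfg are illustrative, not a documented procedure for this parameter.

```shell
# Append the timeout to the kernel options in the active boot configuration.
# The 60-second value is an illustrative assumption.
sed -i 's/^kernelopt=.*/& bootDeviceRescanTimeout=60/' /bootbank/boot.cfg
grep '^kernelopt=' /bootbank/boot.cfg   # verify the option was added
```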

  • PR 962122: ESXi hosts disconnect from vCenter Server and cannot be reconnected. This issue is caused by the hardware monitoring service (sfcbd) populating the /var/run/sfcb directory with over 5000 files. The hostd.log file located at /var/log/ indicates that the host is out of space:

    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device

    The vmkernel.log file located at /var/log indicates that the host is out of inodes:
    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.


  • PR 966990: When you attempt to remove multiple NFS datastores using the Host Profile option, an error occurs because there might be a datastore that has already been deleted.
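As a workaround, NFS datastores can also be listed and removed individually with esxcli rather than through the host profile; the volume name below is a placeholder.

```shell
# Show currently mounted NFS datastores, then unmount one by volume name.
esxcli storage nfs list
esxcli storage nfs remove -v my_nfs_datastore   # placeholder volume name
```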

  • PR 969068: When you attach a host profile to a host, vSwitch properties such as Failback and Failover Order that are extracted from a reference host are not applied to the host.

  • PR 970635: If NetFlow monitoring is enabled on a distributed port group, the Internet Protocol Flow Information Export (IPFIX) capability is used to monitor switch traffic. A race condition might occur in the IPFIX filter function when records with the same key coexist in the hash table, causing ESXi hosts to stop responding with a backtrace similar to the following:

    cpu8:42450)Panic: 766: Panic from another CPU (cpu 8, world 42450): ip=0x41801d47b266:
    #GP Exception 13 in world 42450:vmm0:Datacen @ 0x41801d4b7aaf
    cpu8:42450)Backtrace for current CPU #8, worldID=42450, ebp=0x41221749b390
    cpu8:42450)0x41221749b390:[0x41801d4b7aaf]vmk_SPLock@vmkernel#nover+0x12 stack: 0x41221749b3f0, 0xe00000000000
    cpu8:42450)0x41221749b4a0:[0x41801db01409]IpfixFilterPacket@#+0x7e8 stack: 0x4100102205c0, 0x1, 0x
    cpu8:42450)0x41221749b4e0:[0x41801db01f36]IpfixFilter@#+0x41 stack: 0x41801db01458, 0x4100101a2e58


  • PR 975744: If an ESXi host is neither configured with a core dump partition nor configured to direct core dumps to a dump collector server, important troubleshooting information might be lost when the host panics. This release adds a check for this configuration in the host agent, so the host Summary tab and Events tab now report the issue if no core dump partition or dump collector service is configured.

  • PR 976683: When you install a Windows Presentation Foundation (WPF) application on a virtual machine and enable the 3D software rendering option, the WPF application might display some images inconsistently.

  • PR 979390: If USB controllers are added as passthrough devices for virtual machines configured with DirectPath and if the host running the virtual machine is restarted, the USB controllers lose their passthrough status and are no longer available for direct access to virtual machines.

  • PR 981031: When VMKlinux incorrectly sets the device status, false Device Busy (D:0x8) status messages similar to the following are displayed in VMkernel log files:

    2013-04-04T17:56:47.668Z cpu0:4012)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x412441541f80) 0x16, CmdSN 0x1c9f from world 0 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x0 D:0x8 P:0x0 Possible sense data: 0x0 0x0 0x0

    This generates false alarms, as the storage array does not send any Device Busy status message for SCSI commands. This issue is resolved in this release by correctly pointing to Host Bus Busy (H:0x2) status messages for issues in the device drivers similar to the following:

    2013-04-04T13:16:27.300Z cpu12:4008)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x4124819c2f00) 0x2a, CmdSN 0xfffffa80043a2350 from world 4697 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0

  • PR 984056: The ESXi host might incorrectly display the last login time for the root user as 1970. This information is displayed for ha-eventmgr in the ESXi host's Web client. In this release, the login time is calculated using the system time, which resolves the issue.

  • PR 985864: The tcpdump-uw frame capture utility bundled with ESXi can only capture packets smaller than 8138 bytes. This issue occurs because the socket buffer size is set to 8KB in VMware's implementation of tcpdump-uw: the buffer is 8192 bytes and approximately 54 bytes are needed for control data, making 8138 bytes the maximum that can be captured. The default buffer size is increased to 64KB to resolve the issue.
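For example, a capture of full-sized Ethernet frames with tcpdump-uw looks like the following; the interface name, snap length, and output path are illustrative.

```shell
# Capture on the management vmkernel interface with an explicit snap length
# of one full Ethernet frame (1514 bytes), writing to a pcap file.
tcpdump-uw -i vmk0 -s 1514 -w /tmp/vmk0-capture.pcap
```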

  • PR 986734: If a vmknic other than vmk0 is used for management traffic on ESXi 5.0 and the host is upgraded to ESXi 5.1, management traffic incorrectly defaults to vmk0.

  • PR 987135: Netlogond might consume a large amount of memory in an Active Directory environment with multiple unreachable domain controllers. As a result, Netlogond might fail and the ESXi host might lose Active Directory functionality.

  • PR 987620: Attempts to consolidate snapshots on vCenter Server might fail with an error message similar to the following:

    hierarchy is too deep

    This issue occurs when the virtual machine has 255 or more snapshots created by vSphere replication.

  • PR 990632: When a SCSI Request Sense command is sent to Raw Device Mapping (RDM) in Physical Mode from a guest operating system, sometimes the returned sense data is NULL (zeroed). The issue only occurs when the command is sent from the guest operating system.

  • PR 990770: Incorrect error messages similar to the following might be displayed by CIM providers:

    Request Header Id (886262) != Response Header reqId (0) in request to provider 429 in process 5. Drop response.

    This issue is resolved in this release by updating the error log and restarting the sfcbd management agent to display correct error messages similar to the following:

    Header Id (373) Request to provider 1 in process 0 failed. Error:Timeout (or other socket error) waiting for response from provider.

  • PR 994313: You might be unable to enable a High Availability (HA) cluster after a single host in that cluster is placed in maintenance mode. This issue occurs when the inode descriptor number is not set correctly in the ESXi root file system (VisorFS), causing the stat calls on those inodes to fail.

  • PR 996536: vCenter Server or vSphere Client might get disconnected from the ESXi host during VMFS datastore creation. This issue occurs when hostd fails with an error message similar to the following in the hostd log:

    Panic: Assert Failed: "matchingPart != __null".

    The hostd service fails during VMFS datastore creation on disks with certain partition configurations that require partition alignment.

  • PR 998315: While applying a host profile to an ESXi host, if you attempt to change the network failover detection to beacon probing, the ESXi host fails with an error message similar to the following:

    Associated host profile contains NIC failure criteria settings that cannot be applied to the host

  • PR 1000989: The Hardware Status tab fails to display health statuses and displays an error message similar to the following:

    Hardware monitoring service on this host is not responding or not available.

    The hardware monitoring service (sfcdb) stops and the syslog file contains entries similar to the following:

    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 6750210)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 6 payLoadSize 19 chunkSize 0 from 12 resp 6750210
    sfcb-smx[xxxxxx]: spRecvReq returned error -1. Skipping message.
    sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 4)
    sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 220 payLoadSize 116 chunkSize 104 from 12 resp 4
    ...
    ...
    sfcb-vmware_int[xxxxxx]: spGetMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: rcvMsg receiving from 40 419746-11 Resource temporarily unavailable
    sfcb-vmware_int[xxxxxx]: Timeout or other socket error


  • PR 1005376: When you use the virtual disk metric to view performance charts, you only have the option of viewing the virtual disk performance charts for the available virtual disk objects. This release allows you to view virtual disk performance charts for virtual machine objects as well. This is useful when you need to trigger alarms based on virtual disk usage by virtual machines.

  • PR 1006825: Under certain conditions, the provisioned space value for an NFS datastore might be incorrectly calculated and false alarms might be generated.

  • PR 1006959: When two virtual machines configured with the e1000 driver run on the same vSwitch on a host, the network traffic between them might show significant packet drops in esxtop. This happens because split packets are not accounted for during reporting when TSO is enabled in the guest.

  • PR 1007170: After you install VMware Tools, Windows 8 virtual machines might stop responding when the guest operating system is restarted.

  • PR 1007520: When there is an invalid CIM subscription in the system and you perform a host profile compliance check against a host, an error message similar to the following might be displayed:

    Error extracting indication configuration: (6, u'The requested object could not be found')

    You cannot apply the host profile on the host.

    By resolving this issue, you can apply host profiles even when there is an invalid indication in the host profile.

  • PR 1008254: A WS-Management GetInstance() action against http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_SoftwareIdentity?InstanceID=46.10000 might issue a wsa:DestinationUnreachable fault on some ESXi servers. The OMC_MCFirmwareIdentity object path is not consistent for CIM gi/ei/ein operations on systems with an Intelligent Platform Management Interface (IPMI) Baseboard Management Controller (BMC) sensor. As a result, the WS-Management GetInstance() action issues a wsa:DestinationUnreachable fault on the ESXi server.

    This issue is resolved in this release.

  • PR 1008830: On an ESXi host, when you read the Field Replaceable Unit (FRU) inventory data using the Intelligent Platform Management Interface (IPMI) tool, the vmklinux_9:ipmi_thread of vmkapimod incorrectly displays the CPU usage as 100 percent. This is because the IPMI tool uses the Read FRU Data command multiple times to read the inventory data.

  • PR 1008892: When you run the snmpwalk command for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10, an error message similar to the following is displayed:

    No Such Instance currently exists at this OID for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10
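The failing query can be reproduced from a management station with a standard snmpwalk against the IF-MIB 64-bit output counter; the community string and hostname below are placeholders.

```shell
# Walk IF-MIB::ifHCOutOctets (.1.3.6.1.2.1.31.1.1.1.10) on the ESXi host.
snmpwalk -v 2c -c public esxi-host.example.com 1.3.6.1.2.1.31.1.1.1.10
```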

  • PR 1011061: When the SFCBD service is enabled in the trace mode and the service stops running, the Hardware Status tab for an ESXi host might report an error. Any third-party tool might not be able to monitor the ESXi host's hardware status.

  • PR 1018995: When you use Auto Deploy with stateless caching, the host profile might not be correctly applied to the ESXi host. As a result, the host does not become compliant when it joins vCenter Server. This issue occurs when there are approximately 30 or more VMFS datastores. This issue does not occur when you manually apply the host profile.

  • PR 1019015: If stateless caching is enabled on an ESXi 5.1 host and a host profile is applied again to the cached image after configuration changes, the newly made host profile configurations are not applied.

  • PR 1020453: The host IPMI System Event Log is not cleared in a cluster environment. This issue is resolved by adding new CLI support to clear the IPMI SEL.
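A sketch of how the new CLI support might be invoked; the exact esxcli namespace for clearing the SEL is an assumption here and is not confirmed by this article, so verify it on your build.

```shell
# Assumed command shape for clearing the IPMI System Event Log;
# the "sel clear" sub-namespace is an assumption, not documented above.
esxcli hardware ipmi sel clear
```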

  • PR 1026207: When you install Solaris 10 on a virtual machine and run the command smbios -t 4 to check the CPU version, the virtual machine returns a blank version, and if you attempt to install JDK 1.6, the installation fails.

  • PR 1026618: After a Permanent Device Loss (PDL), you are unable to bring datastores back online due to open file handles on the volume.

  • PR 1027224: The LSI CIM provider (one of the sfcb processes) leaks file descriptors. This might cause sfcb-hhrc to stop and sfcbd to restart. The syslog file might log messages similar to the following:

    sfcb-LSIESG_SMIS13_HHR[ ]: Error opening socket pair for getProviderContext: Too many open files
    sfcb-LSIESG_SMIS13_HHR[ ]: Failed to set recv timeout (30) for socket -1. Errno = 9
    ...
    ...
    sfcb-hhrc[ ]: Timeout or other socket error
    sfcb-hhrc[ ]: TIMEOUT DOING SHARED SOCKET RECV RESULT ( )


  • PR 1027949: When you upgrade ESXi 5.0.x to ESXi 5.1.x using vCenter Update Manager, the ESXi host loses the NAS datastore entries containing the string Symantec. In this release, the script is modified to remove unnecessary entries from the configuration files during upgrade, which resolves this issue.

  • PR 1029010: On an ESXi host where bandwidthCap and throughputCap are set at the same time, the I/O throttling option might not work on virtual machines. This happens because of an incorrect logical comparison while setting the throttle option in the SCSI scheduler.

  • PR 1033086: When you perform an SNMP query to the ESXi host for the CPU load average, hrProcessorLoad is calculated over the entire lifetime of the host instead of the past minute. As a result, the host reports an incorrect CPU load average value.
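The affected counter can be queried from a management station with snmpwalk; the community string and hostname below are placeholders.

```shell
# Walk HOST-RESOURCES-MIB::hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2),
# the per-processor load average over the last minute.
snmpwalk -v 2c -c public esxi-host.example.com 1.3.6.1.2.1.25.3.3.1.2
```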

  • PR 1034629: When multi-writer option is used on shared virtual disks along with the Oracle Real Application Cluster (RAC) option of Oracle Clusterware software, attempts to take snapshots of virtual machines might fail and the virtual machines might stop running. Log files might contain entries similar to the following:

    Resuming virtual disk scsi1:5 failed. The disk has been modified since a snapshot was taken or the virtual machine was suspended.

    This issue might be observed with other cluster management software as well.

  • PR 1035097: During the ESXi boot process, loading the nfsclient module might hold a lock on the esx.conf file for a long time if there is a delay in host name resolution for Network File System (NFS) mount points. This might cause other module loads, such as migrate or ipmi, to fail.

  • PR 1035388: When you attempt to retrieve the value for the resourceCpuAllocMax and resourceMemAllocMax system counters against the host system, the ESXi host returns incorrect values. This issue is observed on a vSphere Client connected to a vCenter server.


  • PR 1037369: You might find the Virtual Machine File System (VMFS) datastore missing from the vCenter Server's Datastore tab or an event similar to the following displayed in the Events tab:

    XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

    The following log message is displayed in the VMkernel log:

    [lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295ESC[7m2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372ESC[0$

    You might also see the following message logged in the vmkernel.log:

    2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1

  • PR 1038151: Every time you add text to Annotations.WelcomeMessage, create an ESXi host profile, and apply that host profile to other hosts, the other hosts report an error message similar to the following:

    Option Annotaitons.WelcomeMessage does not match the specified Criteria

  • PR 1040222: On an ESXi host where bandwidthCap and throughputCap are set at the same time, the I/O throttling option might not work on virtual machines. This happens because of an incorrect logical comparison while setting the throttle option in the SCSI scheduler.

  • PR 1042045: The ESXi host fails with a purple diagnostic screen showing errors for E1000PollRxRing and E1000DevRx when the rxRing buffer fills up and the maximum Rx ring is set to more than 2. The next Rx packet received, which is handled by the second ring, is NULL, causing a processing error. The purple diagnostic screen or backtrace contains entries similar to the following:

    @BlueScreen: #PF Exception 14 in world 63406:vmast.63405 IP 0x41801cd9c266 addr 0x0
    PTEs:0x8442d5027;0x383f35027;0x0;
    Code start: 0x41801cc00000 VMK uptime: 1:08:27:56.829
    0x41229eb9b590:[0x41801cd9c266]E1000PollRxRing@vmkernel#nover+0xdb9 stack: 0x410015264580
    0x41229eb9b600:[0x41801cd9fc73]E1000DevRx@vmkernel#nover+0x18a stack: 0x41229eb9b630
    0x41229eb9b6a0:[0x41801cd3ced0]IOChain_Resume@vmkernel#nover+0x247 stack: 0x41229eb9b6e0
    0x41229eb9b6f0:[0x41801cd2c0e4]PortOutput@vmkernel#nover+0xe3 stack: 0x410012375940
    0x41229eb9b750:[0x41801d1e476f]EtherswitchForwardLeafPortsQuick@ # +0xd6 stack: 0x31200f9
    0x41229eb9b950:[0x41801d1e5fd8]EtherswitchPortDispatch@ # +0x13bb stack: 0x412200000015
    0x41229eb9b9c0:[0x41801cd2b2c7]Port_InputResume@vmkernel#nover+0x146 stack: 0x412445c34cc0
    0x41229eb9ba10:[0x41801cd2ca42]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x41001203aa01
    0x41229eb9ba70:[0x41801cd99a05]E1000DevAsyncTx@vmkernel#nover+0x190 stack: 0x41229eb9bab0
    0x41229eb9bae0:[0x41801cd51813]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0x2
    0x41229eb9bc60:[0x41801cd0b21b]WorldletProcessQueue@vmkernel#nover+0x486 stack: 0x41229eb9bd10
    0x41229eb9bca0:[0x41801cd0b895]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x10041229eb9bd20
    0x41229eb9bd20:[0x41801cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x41229eb9be20
    0x41229eb9be20:[0x41801cdbc9bc]CpuSchedIdleLoopInt@vmkernel#nover+0x13b stack: 0x29eb9bfa0
    0x41229eb9bf10:[0x41801cdc4c1f]CpuSchedDispatch@vmkernel#nover+0xabe stack: 0x0
    0x41229eb9bf80:[0x41801cdc5f4f]CpuSchedWait@vmkernel#nover+0x242 stack: 0x412200000000
    0x41229eb9bfa0:[0x41801cdc659e]CpuSched_Wait@vmkernel#nover+0x1d stack: 0x41229eb9bff0
    0x41229eb9bff0:[0x41801ccb1a3a]VmAssistantProcessTask@vmkernel#nover+0x445 stack: 0x0
    0x41229eb9bff8:[0x0] stack: 0x0


  • PR 1042668: When a virtual machine is replicated and you perform Storage vMotion on the virtual machine, the hostd service might stop responding. As a result, the ESXi host might lose its connection to vCenter Server. Error messages similar to the following might be written to the hostd log files:

    2013-08-28T14:23:10.985Z [FFDF8D20 info 'TagExtractor'] 9: Rule type=[N5Hostd6Common31MethodNameBasedTagExtractorRuleE:0x4adddd0], id=rule[VMThrottledOpRule], tag=IsVMThrottledOp, regex=vim\.VirtualMachine\.(reconfigure|removeAllShapshots)|vim\.ManagedEntity\.destroy|vim\.Folder\.createVm|vim\.host\.LowLevelProvisioningManager\.(createVM|reconfigVM|consolidateDisks)|vim\.vm\.Snapshot\.remove - Identifies Virtual Machine operations that need additional throttling2013-08-28T14:23:10.988Z [FFDF8D20 info 'Default'] hostd-9055-1021289.txt time=2013-08-28 14:12:05.000--> Crash Report build=1021289--> Non-signal terminationBacktrace:--> Backtrace[0] 0x6897fb78 eip 0x1a231b70--> Backtrace[1] 0x6897fbb8 eip 0x1a2320f9

    This issue occurs if there is a configuration issue during virtual machine replication.

  • PR 1044408: When you install Windows 2000 Server on a virtual machine, the guest operating system fails with a blue diagnostic screen and a PAGE_FAULT_IN_NONPAGED_AREA error message. This issue is observed in Windows 2000 virtual machines with eight or more virtual CPUs.

  • PR 1044919: When a virtual disk in Thick Provision Lazy Zeroed format is created on an ESXi host with VAAI NAS, the Provisioned Space displayed on the Summary tab in the Datastores and Datastore Clusters view might be double the provisioned storage set for the virtual disk. For example, if the provisioned storage is 75 GB, the Provisioned Space displayed might be around 150 GB.

  • PR 1045271: When a high volume of parallel HTTP GET /folder URL requests is sent to hostd, the hostd service fails. This prevents the host from being added back to vCenter Server. An error message similar to the following might be displayed:

    Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.

  • PR 1045434: If the timer fires before ScsiMidlayerFrame is initialized with the vmkCmd option in the SCSI function, the ESXi host might fail with a purple diagnostic screen and an error message similar to the following might be displayed:

    @BlueScreen: #PF Exception 14 in world 619851:PathTaskmgmt IP 0x418022c898f6 addr 0x48
    Code start: 0x418022a00000 VMK uptime: 9:08:16:00.066
    0x4122552dbfb0:[0x418022c898f6]SCSIPathTimeoutHandlerFn@vmkernel#nover+0x195 stack:
    0x4122552e7000 0x4122552dbff0:[0x418022c944dd]SCSITaskMgmtWorldFunc@vmkernel#nover+0xf0 stack: 0x0


  • PR 1046589: When you add shared nonpersistent read-only disks to a virtual machine, the virtual machine might stop responding. This happens because the virtual machine opens the read-only disks in exclusive mode.

  • PR 1051263: When multiple threads try to close the same device at the same time, ESXi 5.1 hosts might fail with a purple diagnostic screen and a backtrace similar to the following:

    cpu1:16423)@BlueScreen: #PF Exception 14 in world 16423:helper1-0 IP 0x41801ac50e3e addr 0x18PTEs:0x0;
    cpu1:16423)Code start: 0x41801aa00000 VMK uptime: 0:09:28:51.434
    cpu1:16423)0x4122009dbd70:[0x41801ac50e3e]FDS_CloseDevice@vmkernel#nover+0x9 stack: 0x4122009dbdd0
    cpu1:16423)0x4122009dbdd0:[0x41801ac497b4]DevFSFileClose@vmkernel#nover+0xf7 stack: 0x41000ff3ca98
    cpu1:16423)0x4122009dbe20:[0x41801ac2f701]FSS2CloseFile@vmkernel#nover+0x130 stack: 0x4122009dbe80
    cpu1:16423)0x4122009dbe50:[0x41801ac2f829]FSS2_CloseFile@vmkernel#nover+0xe0 stack: 0x41000fe9a5f0
    cpu1:16423)0x4122009dbe80:[0x41801ac2f89e]FSS_CloseFile@vmkernel#nover+0x31 stack: 0x1
    cpu1:16423)0x4122009dbec0:[0x41801b22d148]CBT_RemoveDev@ # +0x83 stack: 0x41000ff3ca60
    cpu1:16423)0x4122009dbef0:[0x41801ac51a24]FDS_RemoveDev@vmkernel#nover+0xdb stack: 0x4122009dbf60
    cpu1:16423)0x4122009dbf40:[0x41801ac4a188]DevFSUnlinkObj@vmkernel#nover+0xdf stack: 0x0
    cpu1:16423)0x4122009dbf60:[0x41801ac4a2ee]DevFSHelperUnlink@vmkernel#nover+0x51 stack: 0xfffffffffffffff1
    cpu1:16423)0x4122009dbff0:[0x41801aa48418]helpFunc@vmkernel#nover+0x517 stack: 0x0
    cpu1:16423)0x4122009dbff8:[0x0] stack: 0x0
    cpu1:16423)base fs=0x0 gs=0x418040400000 Kgs=0x0
    cpu1:16423)vmkernel 0x0 .data 0x0 .bss 0x0
    cpu1:16423)chardevs 0x41801ae70000 .data 0x417fc0000000 .bss 0x417fc00008a0


  • PR 1051674: When you run the unmap command, the ESXi host retrieves the maximum LBA count and maximum unmap descriptor count when the disk is opened, and caches them. The host uses this information to validate requests from the virtual SCSI layer. Previously, the host failed to retrieve the required information.
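On ESXi 5.0/5.1, manual space reclamation that issues SCSI UNMAP commands is driven with vmkfstools from the datastore's root directory; the datastore name and percentage below are illustrative.

```shell
# Reclaim up to 60% of the free space on the datastore by issuing
# SCSI UNMAP commands; the name and percentage are placeholders.
cd /vmfs/volumes/my_vmfs_datastore
vmkfstools -y 60
```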

  • PR 1051735: When the VMM passes a VPN value to read a page, the VMkernel fails to find a valid machine page number for that VPN value, and the host fails with a purple diagnostic screen. This issue happens when the VMM sends a bad VPN while performing a monitor core dump during a VMM failure.

  • PR 1053197: Virtual machines lose network connectivity and do not respond during snapshot consolidations after a virtual machine backup. The VMs are reported as not busy during this period.

  • PR 1053550: The sfcb-CIMXML-Pro process freezes and causes a core dump when the sfcb sub-process for HTTP processing incorrectly calls atexit and exit following a fork.

  • PR 1054945: You might not be able to add permissions for AD users or groups because the domain name is not available for selection in the Domain drop-down menu.

  • PR 1056515: When the Network Healthcheck feature is enabled, the L2Echo function might not be able to handle high network traffic and the ESXi hosts might stop responding with a purple diagnostic screen and a backtrace similar to the following:

    cpu4:8196)@BlueScreen: PCPU 1: no heartbeat (2/2 IPIs received)
    cpu4:8196)Code start: 0x418024600000 VMK uptime: 44:20:54:02.516
    cpu4:8196)Saved backtrace from: pcpu 1 Heartbeat NMI
    cpu4:8196)0x41220781b480:[0x41802468ded2]SP_WaitLockIRQ@vmkernel#nover+0x199 stack: 0x3b
    cpu4:8196)0x41220781b4a0:[0x4180247f0253]Sched_TreeLockMemAdmit@vmkernel#nover+0x5e stack: 0x20
    cpu4:8196)0x41220781b4c0:[0x4180247d0100]MemSched_ConsumeManagedKernelMemory@vmkernel#nover+0x1b stack: 0x0
    cpu4:8196)0x41220781b500:[0x418024806ac5]SchedKmem_Alloc@vmkernel#nover+0x40 stack: 0x41220781b690
    ...
    cpu4:8196)0x41220781bbb0:[0x4180247a0b13]vmk_PortOutput@vmkernel#nover+0x4a stack: 0x100
    cpu4:8196)0x41220781bc20:[0x418024c65fb2]L2EchoSendPkt@com.vmware.net.healthchk#1.0.0.0+0x85 stack: 0x4100000
    cpu4:8196)0x41220781bcf0:[0x418024c6648e]L2EchoSendPort@com.vmware.net.healthchk#1.0.0.0+0x4b1 stack: 0x0
    cpu4:8196)0x41220781bfa0:[0x418024c685d9]L2EchoRxWorldFn@com.vmware.net.healthchk#1.0.0.0+0x7f8 stack: 0x4122
    cpu4:8196)0x41220781bff0:[0x4180246b6c8f]vmkWorldFunc@vmkernel#nover+0x52 stack: 0x0


  • PR 1059000: When the monitored hardware status changes on an ESXi host that has SNMP enabled and third-party CIM providers installed, you might not receive an SNMP trap. Messages similar to the following are logged in the syslog:

    2013-07-11T05:24:39Z snmpd: to_sr_type: unable to convert varbind type '71'
    2013-07-11T05:24:39Z snmpd: convert_value: unknown SR type value 0
    2013-07-11T05:24:39Z snmpd: parse_varbind: invalid varbind with type 0 and value: '2'
    2013-07-11T05:24:39Z snmpd: forward_notifications: parse file '/var/spool/snmp/1373520279_6_1_3582.trp' failed, ignored


  • PR 1059043: A virtual machine might fail, writing error messages similar to the following to the vmware.log file, when a guest operating system with the e1000 NIC driver is placed in suspended mode:

    2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: Msg_Post: Error
    2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120+ Unexpected signal: 11.


    This issue occurs for virtual machines that use IP aliasing and have more than 10 IP addresses.

  • PR 1061459: Whenever you reset a management network where the management traffic is enabled on multiple VMkernel ports, the IP address displayed on the Direct Console User Interface (DCUI) changes.

  • PR 1067458: When you perform certain operations that directly or indirectly involve Virtual Machine File System (VMFS), usually in an ESXi cluster environment sharing a large number of VMFS datastores, you might encounter the following problems:
    • ESXi host stops responding intermittently
    • ESXi host gets disconnected from the vCenter Server
    • Virtual machine stops responding intermittently
    • The virtual machines that are part of Microsoft Cluster Service (MSCS) stop responding, resulting in failover
    • Host communication errors occur during Site Recovery Manager (SRM) data recovery and failover tests

  • PR 1069332: If two DvFilter processes attempt to manage a single configuration variable at the same time, one freeing the filter configuration while the other attempts to lock it, the ESXi host might fail.

  • PR 1070074: On a virtual machine with Changed Block Tracking (CBT) enabled, running QueryChangedDiskAreas after moving a virtual machine disk (VMDK) to a different volume with Storage vMotion might result in a FileFault error similar to the following, because the CBT information is re-initialized without discarding all ChangeID references:

    2012-09-04T11:56:17.846+02:00 [03616 info 'Default' opID=52dc4afb] [VpxLRO] -- ERROR task-internal-4118 -- vm-26512 --
    vim.VirtualMachine.queryChangedDiskAreas: vim.fault.FileFault:
    --> Result:
    --> (vim.fault.FileFault) {
    --> dynamicType = <unset>,
    --> faultCause = (vmodl.MethodFault) null,
    --> file = "/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> msg = "Error caused by file /vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
    --> }

    To resolve the issue, after moving a virtual machine disk (VMDK) to a different datastore with Storage vMotion, deactivate and reactivate CBT, discard all changeId references, and take a full backup before using CBT for further incremental backups. This issue occurs because a library function incorrectly re-initializes the disk change tracking facility.
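The backup-side precaution can be sketched as follows. This is an illustrative shell function only, not part of any VMware tool or API; the function name and arguments are hypothetical:

```shell
# Hypothetical backup-side guard: after a Storage vMotion, a stored changeId
# is no longer trustworthy, so discard it and take a full backup instead of
# an incremental one.  Names and values here are purely illustrative.
plan_backup() {
  stored_change_id="$1"
  disk_moved="$2"   # "yes" if the VMDK was moved since the last backup
  if [ -z "$stored_change_id" ] || [ "$disk_moved" = "yes" ]; then
    echo "full"          # no valid baseline: discard changeId, full backup
  else
    echo "incremental"   # baseline intact: incremental against changeId
  fi
}

plan_backup "id1" yes   # prints: full
plan_backup "id1" no    # prints: incremental
```

Once a full backup has re-established a baseline, subsequent incremental backups can again rely on CBT.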

  • PR 1070930: ESXi hosts with DvFilter module might display a purple diagnostic screen. A backtrace similar to the following is displayed:

    2013-07-18T06:41:39.699Z cpu12:10669)0x412266b5bbe8:[0x41800d50b532]DVFilterDispatchMessage@com.vmware.vmkapi#v2_1_0_0+0x92d stack: 0x10
    2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bc68:[0x41800d505521]DVFilterCommBHDispatch@com.vmware.vmkapi#v2_1_0_0+0x394 stack: 0x100
    2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bce8:[0x41800cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x412266b5bde8, 0x412266b5bd88,


  • PR 1074681: During vMotion of a virtual machine that has a vNIC connected to a VDS, the port files on the vMotion source host are not cleared from the .dvsData directory, even after some time.

  • PR 1074989: When you periodically run the esxcli hardware ipmi sdr list command, the IPMI data repository might stop working.

  • PR 1078765: A VMware ESXi 5.x host configured with TCP or SSL remote syslog stops sending syslog messages to the remote log server after the network connection to the server is interrupted and restored. This issue is resolved by adding a Default Network Retry Timeout; the host retries sending syslog messages after this timeout elapses. The default value is 180 seconds and can be changed with the esxcli system syslog config set --default-timeout= command.
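For example, run the following from the ESXi Shell to change the retry timeout; the 300-second value is illustrative:

```shell
# Raise the network retry timeout from the default 180 seconds to 300 seconds
# (value is illustrative; pick one that fits your log server's recovery time).
esxcli system syslog config set --default-timeout=300

# Verify the change and reload the syslog agent so it takes effect.
esxcli system syslog config get
esxcli system syslog reload
```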

  • PR 1078870: When the lsilogic virtual adapter performs a target reset, it waits for in-flight commands on all targets on the adapter. This causes the target reset to block the virtual machine longer than necessary.

  • PR 1079558: Attempts to create a quiesced snapshot of a virtual machine with Paravirtual SCSI (PVSCSI) disks might fail with error messages similar to the following:

    2013-07-15T08:12:05.026Z| vcpu-0| ToolsBackup: changing quiesce state: IDLE -> STARTED
    2013-07-15T08:12:28.863Z| vcpu-1| Msg_Post: Warning
    2013-07-15T08:12:28.863Z| vcpu-1| [msg.snapshot.quiesce.vmerr] The guest OS has reported an error during quiescing.
    2013-07-15T08:12:28.863Z| vcpu-1| --> The error code was: 5
    2013-07-15T08:12:28.863Z| vcpu-1| --> The error message was: 'VssSyncStart' operation failed: IDispatch error #8455 (0x80042307)
    2013-07-15T08:12:28.863Z| vcpu-1| ----------------------------------------
    2013-07-15T08:12:28.872Z| vcpu-1| ToolsBackup: changing quiesce state: STARTED -> ERROR_WAIT
    2013-07-15T08:12:30.889Z| vcpu-0| ToolsBackup: changing quiesce state: ERROR_WAIT -> IDLE
    2013-07-15T08:12:30.889Z| vcpu-0| ToolsBackup: changing quiesce state: IDLE -> DONE
    2013-07-15T08:12:30.889Z| vcpu-0| SnapshotVMXTakeSnapshotComplete done with snapshot 'clone-temp-1373875924577206': 0
    2013-07-15T08:12:30.889Z| vcpu-0| SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine. (40).


    This issue occurs when the total number of slots required for temporary snapshots is greater than or equal to the number of slots available on the scsi0 adapter.
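The condition above amounts to simple arithmetic, sketched below. The 15-slot figure is an assumption about usable targets on a virtual SCSI adapter; the real limit depends on the adapter type:

```shell
# Illustrative check for the quiesced-snapshot failure condition.
# adapter_slots=15 is an assumption, not a documented limit.
quiesce_may_fail() {
  adapter_slots=15
  disks_on_scsi0="$1"
  temp_slots_needed="$2"
  free_slots=$((adapter_slots - disks_on_scsi0))
  if [ "$temp_slots_needed" -ge "$free_slots" ]; then
    echo "yes"   # temporary snapshot disks cannot all fit on scsi0
  else
    echo "no"
  fi
}

quiesce_may_fail 10 5   # 5 free slots, 5 needed -> prints: yes
quiesce_may_fail 10 4   # 5 free slots, 4 needed -> prints: no
```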

  • PR 1082095: An ESXi host creating virtual machines on a remotely mounted vSphere Storage Appliance (VSA) datastore might stop responding. This issue occurs when communication errors cause the underlying function to return a NULL pointer, which the VSA Manager plugin does not handle correctly.

  • PR 1083962: When an ESXi host boots from a Fibre Channel over Ethernet (FCoE) Storage Area Network, you might not be able to disable FCoE on ports that are not used to present the FCoE boot LUN.

  • PR 1087008: The ESXi 5.1 host might generate a hostd-worker dump after you detach a software iSCSI disk that is connected through a vmknic on a vDS. This issue occurs when you attempt to retrieve the latest information on the host.

  • PR 1087441: You might find the Virtual Machine File System (VMFS) datastore missing from the vCenter Server's Datastore tab or an event similar to the following displayed in the Events tab:

    XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume ({2}). Other regions of the volume might be damaged too.

    The following log message is displayed in the VMkernel log:

    [lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295
    2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372

    You might also see the following message logged in the vmkernel.log:

    2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1

  • PR 1087546: In the vCenter Server performance chart, the net.throughput.usage value is expressed in kilobytes, but the same value is returned in bytes by the VMkernel. This leads to incorrect representation of values in the performance chart.

  • PR 1098119: The ESXi host might become unresponsive during vMotion, with a backtrace similar to the following:

    2013-07-18T17:55:06.693Z cpu24:694725)0x4122e7147330:[0x418039b5d12e]Migrate_NiceAllocPage@esx#nover+0x75 stack: 0x4122e7147350
    2013-07-18T17:55:06.694Z cpu24:694725)0x4122e71473d0:[0x418039b5f673]Migrate_HeapCreate@esx#nover+0x1ba stack: 0x4122e714742c
    2013-07-18T17:55:06.694Z cpu24:694725)0x4122e7147460:[0x418039b5a7ef]MigrateInfo_Alloc@esx#nover+0x156 stack: 0x4122e71474f0
    2013-07-18T17:55:06.695Z cpu24:694725)0x4122e7147560:[0x418039b5df17]Migrate_InitMigration@esx#nover+0x1c2 stack: 0xe845962100000001
    ...
    2013-07-18T17:55:07.714Z cpu25:694288)WARNING: Heartbeat: 646: PCPU 27 didn't have a heartbeat for 7 seconds; *may* be locked up.
    2013-07-18T17:55:07.714Z cpu27:694729)ALERT: NMI: 1944: NMI IPI received. Was eip(base):ebp:cs


    This occurs while running vMotion on a host under memory overload.

  • PR 1098296: Starting with this release, changes to the software iSCSI session and connection parameters are logged in the syslog.log file, including the previous and new parameter values.

  • PR 1114090: A virtual machine becomes unresponsive due to a signal 11 error while executing SVGA code in svga2_map_surface.

  • PR 1119268: When you provision and customize a virtual machine from a template on a vDS with ephemeral ports, the virtual machine might lose network connectivity. Error messages similar to the following might be written to the log files:

    2013-08-05T06:33:33.990Z| vcpu-1| VMXNET3 user: Ethernet1 Driver Info: version = 16847360 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 0
    2013-08-05T06:33:35.679Z| vmx| Msg_Post: Error
    2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetPortID] Unable to get dvs.portId for ethernet0
    2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetNetworkName] Unable to get networkName or devName for ethernet0
    2013-08-05T06:33:35.679Z| vmx| [msg.device.badconnect] Failed to connect virtual device Ethernet0.


  • PR 1121196: You might be unable to replicate a virtual machine after you upgrade from ESXi 5.1.x to ESXi 5.5. This issue occurs when the vSphere Replication traffic checkbox is selected under the Networking option in the vSphere Web Client on ESXi 5.1.x. Error messages similar to the following are written to the vmkernel.log file:

    2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 549: Connection failed to 10.1.253.51 (groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
    2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 4521: Failed to establish connection to [10.1.253.51]:31031(groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout


    When you select virtual machine Summary tab, error messages similar to the following might be displayed:
    No Connection to VR server: Not responding

    Attempts to start synchronization might also fail with error message similar to the following:
    An ongoing synchronization task already exists

    The Tasks & Events tab might display the state of the virtual machine as Invalid.

  • PR 1122123: After the ESXi host encounters a storage issue, such as a full disk, the image utility function might fail to clear the tempFileName files from vCloud Director.

  • PR 1001868, 1038121: While creating a snapshot for a virtual machine, hostd might fail with an error.

  • PR 1032019: The tg3 inbox network driver 3.123b.v50.1 does not automatically load the tg3 VMkernel module during initial boot-up for installation on Apple Mac mini (Late 2012) which includes models Macmini6,1 and Macmini6,2.

  • PR 1086705: Intel e1000e network interface driver might stop responding on received (RX) traffic.

  • PR 1074562: RSTe (Rapid Storage Technology enterprise) driver does not validate the NAA (Network Address Authority) ID reported by the disk and forwards an invalid ID to the upper layer.

  • PR 1040433: When you install or reboot after installing the mpt2sas driver on Atmos hardware (Intel s2600jf, LSI 9200-8e) with ESXi 5.1, it might result in a purple diagnostic screen.

  • PR 895622: This patch updates the sata-ata-piix and sata-ahci VIBs to update the ahci and ata_piix drivers.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

The typical way to apply a patch bulletin to ESXi hosts is through VMware Update Manager. For details, see Installing and Administering VMware vSphere Update Manager.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples guide and the vSphere Upgrade Guide.
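As a sketch of the manual workflow (the depot path and file name are placeholders; the image profile name matches this patch, and the host should be in maintenance mode before patching):

```shell
# Place the host into maintenance mode before patching (migrate or power off
# running virtual machines first).
esxcli system maintenanceMode set --enable true

# Option 1: update the affected VIBs from the downloaded patch depot
# (the depot path is a placeholder).
esxcli software vib update --depot=/vmfs/volumes/datastore1/patch-depot.zip

# Option 2: apply the complete image profile from the same depot.
esxcli software profile update --depot=/vmfs/volumes/datastore1/patch-depot.zip \
    --profile=ESXi-5.1.0-20140102001-no-tools

# Reboot to complete the update, then exit maintenance mode after the host
# comes back up.
reboot
```

Note that esxcli software vib update only updates VIBs already installed on the host, while esxcli software profile update brings the entire image to the level defined by the profile.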
