VMware ESXi 5.5, Patch ESXi-5.5.0-20150902001-standard

Article ID: 334693

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name
ESXi-5.5.0-20150902001-standard
Build
For build information, see KB 2110231.
Vendor
VMware, Inc.
Release Date
Sep 16, 2015
Acceptance Level
PartnerSupported
Affected Hardware
N/A
Affected Software
N/A
Affected VIBs
  • VMware:esx-base:5.5.0-3.68.3029944
  • VMware:tools-light:5.5.0-3.68.3029944
  • VMware:misc-drivers:5.5.0-3.68.3029944
  • VMware:scsi-megaraid-sas:5.34-9vmw.550.3.68.3029944
  • VMware:scsi-mpt2sas:14.00.00.00-3vmw.550.3.68.3029944
  • VMware:scsi-mptsas:4.23.01.00-9vmw.550.3.68.3029944
  • VMware:scsi-mptspi:4.23.01.00-9vmw.550.3.68.3029944
  • VMware:scsi-qla2xxx:902.k1.1-12vmw.550.3.68.3029944
  • VMware:lsi-mr3:0.255.03.01-2vmw.550.3.68.3029944
  • VMware:net-e1000e:3.2.2.1-2vmw.550.3.68.3029944
  • VMware:sata-ahci:3.0-22vmw.550.3.68.3029944
  • VMware:elxnet:10.2.309.6v-1vmw.550.3.68.3029944
  • VMware:xhci-xhci:1.0-2vmw.550.3.68.3029944
PRs Fixed
628968, 1196286, 1213076, 1218410, 1220926, 1233904, 1245831, 1248859, 1265277, 1271164, 1271188, 1273989, 1274831, 1275540, 1277053, 1278119, 1280937, 1283459, 1283868, 1283916, 1289069, 1296290, 1298493, 1298743, 1302316, 1303243, 1303815, 1307220, 1309295, 1310956, 1314236, 1316283, 1319810, 1325428, 1328298, 1329448, 1338475, 1340668, 1344481, 1349878, 1351054, 1351249, 1354934, 1355473, 1361735, 1373179, 1373450, 1397031, 1397957, 1398378, 1401737, 1412064, 1416811, 1418886, 1420828, 1431387, 1444249, 1444581, 1450682, 1455290, 1493982, 1500821, 1088789, 1214453, 1214632, 1218198, 1261811, 1319645, 1320301, 1326580, 1352240, 1412799
Related CVE numbers
N/A

For information on patch and update classification, see KB 2014447.

To search for available patches, see the Patch Manager Download Portal.


Environment

VMware vSphere ESXi 5.5

Resolution

Summaries and Symptoms

This patch resolves the following issues:

  • During a High Availability failover or host crash, the .vswp files of powered ON virtual machines on that host might be left behind on the storage. When many such failovers or crashes occur, the storage capacity might become full.

  • An error message similar to the following, with tracebacks, is observed on the boot screen when an ESXi 5.5 host boots from Auto Deploy Stateless Caching. The error is caused by an unexpectedly short message of fewer than four characters in the syslog network.py script.

    IndexError: string index out of range

  • The syslog is flooded when the System Event Log (SEL) is full and indication subscriptions exist. Messages similar to the following are logged rapidly:

    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default
    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default
    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default

  • When any inbox or third-party drivers do not have their SCSI transport-specific interfaces defined, the ESXi host might stop responding and display a purple diagnostic screen. The issue occurs during collection of vm-support log bundles or when you run I/O Device Management (IODM) Command-Line Interfaces (CLI) such as:

    • esxcli storage san sas list
    • esxcli storage san sas stats get


  • When you create a FIFO and attempt to write data to /tmp/dpafifo, a purple diagnostic screen might be displayed under certain conditions.

  • Helper world opID tagging generates a large number of log messages that rapidly fill the VMkernel log. Messages similar to the following are logged in the VMkernel log:

    cpu16:nnnnn)World: nnnnn: VC opID hostd-60f4 maps to vmkernel opID nnnnnnnn
    cpu16:nnnnn)World: nnnnn: VC opID HB-host-nnn@nnn-nnnnnnn-nn maps to vmkernel opID nnnnnnnn
    cpu8:nnnnn)World: nnnnn: VC opID SWI-nnnnnnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu22:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu4:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn

  • When you perform Fast Suspend and Resume (FSR) or Storage vMotion on preallocated virtual machines, the operation might fail as the reservation validation fails during reservation transfer from the source to the destination virtual machine.

  • After you create a quiesced snapshot and browse the Managed Object Browser (MOB) of the virtual machine, the value of the currentSnapshot field appears unset. To view the currentSnapshot field, navigate to Content -> root folder -> datacenter -> vmFolder -> vmname -> snapshot -> currentSnapshot.

  • If you rename a datastore on which replication source virtual machines are running, replication sync operations for these virtual machines fail with an error message similar to the following:

    VRM Server runtime error. Please check the documentation for any troubleshooting information.
    The detailed exception is: 'Invalid datastore format '<Datastore Name>'

  • When heavy I/O load is in progress, the SFQ network scheduler might reset the physical NIC while switching network schedulers. This might result in an unrecoverable transmit state in which no packets are passed to the driver.

  • When you rescan a VMFS datastore with multiple extents, the following log message might be written in the VMkernel log even without any issues from storage connectivity:

    Number of PEs for volume <VMFS UUID> changed from 3 to 1. A VMFS volume rescan may be needed to use this volume.
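
    For context, such a rescan is typically triggered with commands like the following. This is only an illustrative sketch; the message above can appear regardless of how the rescan is started.

    esxcli storage core adapter rescan --all   # rescan all HBAs for new devices
    vmkfstools -V                              # re-read VMFS volume metadata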

  • If the sfcbd service stops running, the CIM indications in the host profile cannot be applied successfully.

  • After you upgrade an ESXi host from ESXi 5.1 to 5.5 and import the latest MIB module, third-party monitoring software returns an "unknown(1)" status when querying Fibre Channel (FC) Host Bus Adapters (HBAs).

  • The status of some disks on an ESXi 5.5 host might be displayed as UNCONFIGURED GOOD instead of ONLINE. This issue occurs on LSI controllers that use the LSI CIM provider.

  • During transient error conditions such as BUS BUSY, QFULL, HOST ABORTS, and HOST RETRY, commands might be retried repeatedly on the current path without failing over to another path, even after a reasonable amount of time.

    With this fix, if the path remains busy after a few retries during such transient errors, the path state is changed to DEAD. As a result, a failover is triggered and an alternate working path to the device is used to send I/Os.

  • Unisphere Storage Management software registers the initiator IQN when software iSCSI is first enabled. During stateless boot, the registered IQN does not change to the name defined in the host profile. You are required to manually remove the initiators from the array and add them again under the new IQN.

    This issue is resolved by adding a new parameter to the software iSCSI enable command so that Unisphere registers the initiator under the name defined in the host profile. The command line to set the IQN during software iSCSI enablement is:

    esxcli <conn_options> iscsi software set --enabled=true --name iqn.xyz
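
    As an illustration only (the server name, credentials, and IQN below are placeholders, not values from this article), a remote invocation might look like:

    esxcli --server=esx01.example.com --username=root iscsi software set --enabled=true --name iqn.1998-01.com.vmware:esx01
    esxcli --server=esx01.example.com --username=root iscsi adapter list   # verify the software iSCSI adapter and its IQN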

  • When you perform storage device rescan operations, hostd might fail because multiple threads attempt to modify the same object. You might see error messages similar to the following in the vmkwarning.log file:

    cpu43:nnnnnnn)ALERT: hostd detected to be non-responsive
    cpu20:nnnnnnn)ALERT: hostd detected to be non-responsive

  • If the CIM client sends two requests of Delete Instance to the same CIM indication subscription, the sfcb-vmware_int might stop responding due to memory contention. You might not be able to monitor the Hardware Status with the vCenter Server and ESXi.

  • In a nested ESXi environment, implementation of CpuSchedAfterSwitch() results in a race condition in the scheduler code and a purple diagnostic screen with Page Fault exception is displayed.

  • A PCIe serial port redirection card might not function properly when connected to an Industry Standard Architecture (ISA) Interrupt Request (IRQ) (0-15 decimal) on an Advanced Programmable Interrupt Controller (APIC), because it is unable to have its interrupts received by the CPU. To allow these and other PCI devices connected to ISA IRQs to function, the VMkernel now allows level-triggered interrupts on ISA IRQs.

  • When an existing ESXi host profile is applied to a newly installed ESXi host, the profile compliance status might show as noncompliant. This happens when the host profile is created from a host with a VXLAN interface configured; the compliance check against the previously created host profile fails on the new host. An error message similar to the following is displayed:

    IP route configuration doesn't match the specification

  • ESXi hosts with virtual machines that use the e1000 or e1000e vNIC driver might fail with a purple diagnostic screen when you enable TCP Segmentation Offload (TSO). Error messages similar to the following might be written to the log files:

    cpu7:nnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 9:21:12:17.991
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000TxTSOSend@vmkernel#nover+0x65b stack: 0xnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000PollTxRing@vmkernel#nover+0x18ab stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000DevAsyncTx@vmkernel#nover+0xa2 stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]WorldletProcessQueue@vmkernel#nover+0x488 stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]WorldletBHHandler@vmkernel#nover+0x60 stack: 0xnnnnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]BH_Check@vmkernel#nover+0x185 stack: 0xnnnnnnnnnnnn

  • The LoadModule command might fail when using the CIM interface client to load the kernel module. An error message similar to the following is displayed:

    Access denied by VMkernel access control policy.

  • The keyboard layout option for the Direct Console User Interface (DCUI) and host profile user interface might incorrectly appear as Czechoslovakian. This option is displayed during ESXi installation and also in the DCUI after installation.

    This issue is resolved in this release by renaming the keyboard layout option to Czech.

  • Attempts to clone a CBT-enabled virtual machine template simultaneously from two different ESXi 5.5 hosts might fail. An error message similar to the following is displayed:

    Failed to open 'VM_template.vmdk': Could not open/create change tracking file (2108).

  • Attempts to query the hardware status on the vSphere Client might fail. An error message similar to the following is displayed in the /var/log/syslog.log file in the ESXi host:

    TIMEOUT DOING SHARED SOCKET RECV RESULT (1138472) Timeout (or other socket error) waiting for response from provider Header Id (16040) Request to provider 111 in process 4 failed. Error:Timeout (or other socket error) waiting for response from provider Dropped response operation details -- nameSpace: root/cimv2, className: OMC_RawIpmiSensor, Type: 0

  • Attempts to restore a virtual machine on an ESXi host using vSphere Data Protection might fail and display an error message similar to the following:

    Unexpected exception received during reconfigure

  • Performing Storage vMotion on a virtual machine might fail if you have configured local host swap and set a value for checkpoint.cptConfigName in the VMX file. An error message similar to the following might be displayed:

    xxxx-xx-xxT00:xx:xx.808Z| vmx| I120: VMXVmdbVmVmxMigrateGetParam: type: 2 srcIp=<127.0.0.1> dstIp=<127.0.0.1> mid=xxxxxxxxxxxxx uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx priority=none checksumMemory=no maxDowntime=0 encrypted=0 resumeDuringPageIn=no latencyAware=no diskOpFile=
    <snip>
    xxxx-xx-xxT00:xx:xx.812Z| vmx| I120: VMXVmdb_SetMigrationHostLogState: hostlog state transits to failure for migrate 'to' mid xxxxxxxxxxxxxxxx


  • When you add an ESXi host to vCenter Server and create a VMkernel interface for vMotion, the following message might be displayed in quick succession (log spew) in the hostd.log file:

    Failed to find vds Id for portset vSwitch0

  • When you create a virtual machine with sched.cpu.latencySensitivity set to high and power it on, the exclusive affinity for the vCPUs might not get enabled if the VM does not have a full CPU reservation.

    In earlier releases, the virtual machine did not display a warning message when the CPU was not fully reserved. For more information, see Knowledge Base article 2087525.
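
    As a minimal sketch of the relevant configuration (assuming sched.cpu.min carries the CPU reservation in MHz; the values below are illustrative, not from this article), the .vmx file of such a virtual machine would contain entries like:

    sched.cpu.latencySensitivity = "high"
    sched.cpu.min = "4400"

    Here 4400 MHz would correspond to a full reservation for, for example, 2 vCPUs on 2200 MHz cores; the exact value must match the VM's vCPU count and the host's core clock speed for exclusive affinity to be granted.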

  • Attempts to PXE boot virtual machines that use the VMXNET3 network adapter by using the Microsoft Windows Deployment Services (WDS) might fail with messages similar to the following:

    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
    1. Insert your Windows installation disc and restart your computer.
    2. Choose your language setting, and then click Next.
    3. Click Repair your computer.
    If you do not have the disc, contact your system administrator or computer manufacturer for assistance.

    Status: 0xc0000001

    Info: The boot selection failed because a required device is inaccessible.

  • Monitoring an ESXi 5.5 host with Dell OpenManage might fail due to an openwsmand error. An error message similar to the following might be reported in the syslog.log file:

    Failed to map segment from shared object: No space left on device

  • After a reboot, Windows 8 and Windows Server 2012 virtual machines might become unresponsive when the Microsoft Windows boot splash screen appears. For more information, see Knowledge Base article 2092807.

  • ESXi might send duplicate events to the management software when an Intelligent Platform Management Interface (IPMI) sensor event is triggered on the ESXi Host.

  • On Lenovo systems, the power usage and power cap values are not available in the esxtop command.

  • An ESXi server might experience a purple diagnostic screen when using DvFilter with a NetQueue supported uplink connected to a vSwitch or a vSphere Distributed Switch (VDS). The ESXi host might report a backtrace similar to the following:

    pcpu:22 world:4118 name:"idle22" (IS)
    pcpu:23 world:2592367 name:"vmm1:S10274-AAG" (V)
    @BlueScreen: Spin count exceeded (^P) - possible deadlock
    Code start: 0xnnnnnnnnnnnn VMK uptime: 57:09:18:15.770
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SP_WaitLock@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetSchedFIFOInput@vmkernel#nover+0xnnn stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetSchedInput@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]IOChain_Resume@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PortOutput@vmkernel#nover+0xnn stack: 0xnn

  • In an environment with Nutanix NFS storage, the secondary FT virtual machine fails to take over when the primary FT virtual machine is down (KB 2096296).

  • The sfcbd service might stop responding and you might find the following error message in the syslog file:

    spSendReq/spSendMsg failed to send on 7 (-1)
    Error getting provider context from provider manager: 11

    This issue occurs when there is a contention for semaphore between the CIM server and the providers.

  • The hostd service might fail on an ESXi 5.x host when the acquireLeaseExt API is executed on a snapshot disk that has gone offline. The snapshot disk may be on an extent that has gone offline. The API caller may be a third-party backup solution. An error message similar to the following is displayed in vmkernel.log:

    cpu4:4739)LVM: 11729: Some trailing extents missing (498, 696).

  • Cold migration between different datastores does not support CBT reset for the virtual Raw Device Mapping (RDM) disks.

  • The Virtual Serial Port Concentrator (vSPC) or NFS client service might not function on the ESXi platform. This happens when the ruleset order differs, allowing ports 0-65535 depending on the enabling sequence. As a result, vSPC or NFS client packets are dropped unexpectedly even if the allowed IP is specified on the corresponding ruleset.



  • An ESXi 5.5 host might fail with a purple diagnostic screen similar to the following when multiple vSCSI filters are attached to a VM disk.

    cpu24:103492 opID=nnnnnnnn)@BlueScreen: #PF Exception 14 in world 103492:hostd-worker IP 0xnnnnnnnnnnnn addr 0x30
    PTEs:0xnnnnnnnnnn;0xnnnnnnnnnn;0x0;
    cpu24:103492 opID=nnnnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 21:06:32:38.296
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_GetFilterPrivateData@vmkernel#nover+0x1 stack: 0x4136c7d
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_IssueInternalCommand@vmkernel#nover+0xc3 stack: 0x410961
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FileSyncRead@<None>#<None>+0xb1 stack: 0x0
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_DigestRecompute@<None>#<None>+0x291 stack: 0x1391
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FilterDigestRecompute@<None>#<None>+0x36 stack: 0x20
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x322 stack: 0x411424b18120
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0xef stack: 0x41245111df10
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0x243 stack: 0x41245111df20
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x275c3918
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry@vmkernel#nover+0x64 stack: 0x0

  • An ESXi host might fail when you attempt to expand a VMFS5 datastore beyond 16 TB. Error messages similar to the following are written to the vmkernel.log file:

    cpu38:34276)LVM: 2907: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu38:34276)LVM: 2907: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu47:34276)LVM: 11172: LVM device naa.600000e00d280000002800c000010000:1 successfully expanded (new size: 31314089590272)
    cpu47:34276)Vol3: 661: Unable to register file system ds02 for APD timeout notifications: Already exists
    cpu47:34276)LVM: 7877: Using all available space (15657303277568).
    cpu7:34276)LVM: 7785: Error adding space (0) on device naa.600000e00d280000002800c000010000:1 to volume 52f05483-52ea4568-ce0e-901b0e0cd0f0: No space left on device
    cpu7:34276)LVM: 5424: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol 52f05483-52ea4568-ce0e-901b0e0cd0f0: Limit exceeded
    cpu7:34276)LVM: 7133: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded
    cpu7:34276)LVM: 7805: LVMProbeDevice failed for device naa.600000e00d280000002800c000010000:1: Limit exceeded
    cpu32:38063)<3>ata1.00: bad CDB len=16, scsi_op=0x9e, max=12
    cpu30:38063)LVM: 5424: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol 52f05483-52ea4568-ce0e-901b0e0cd0f0: Limit exceeded
    cpu30:38063)LVM: 7133: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded
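
    For reference, a datastore extent is typically inspected and grown with commands like the following. This is only a hedged sketch; the device and partition identifiers are placeholders and must be replaced with the values reported on your host.

    esxcli storage vmfs extent list   # list VMFS datastores and their backing devices/partitions
    vmkfstools --growfs /vmfs/devices/disks/naa.xxxx:1 /vmfs/devices/disks/naa.xxxx:1   # grow the VMFS extent into the expanded partition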

  • An ESXi host might fail with a PF exception 14 purple diagnostic screen when the Netflow feature of vSphere Distributed Switch gets deactivated. The issue occurs due to a timer synchronization problem.

  • After you upgrade Integrated Lights Out (iLO) firmware on HP DL980 G7, false alarms appear in the Hardware Status tab of the vSphere Client. Error messages similar to the following might be logged in the /var/log/syslog.log file:

    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007

  • When a virtual machine is deployed or cloned with guest customization and the VMware Tools Upgrade Policy is set to allow the virtual machine to automatically upgrade VMware Tools at next power-on, VMware Tools might fail to automatically upgrade during the first power-on operation of the virtual machine.

  • The vmkping command with Jumbo Frames might fail after the MTU of one vmknic among many on the same switch is changed. An error message similar to the following is displayed:

    sendto() failed (Message too long)
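
    As a hedged illustration (the interface name and destination IP below are placeholders), end-to-end jumbo-frame connectivity is commonly verified with a non-fragmenting vmkping using an 8972-byte payload (9000-byte MTU minus 28 bytes of IP/ICMP headers):

    vmkping -I vmk1 -d -s 8972 192.168.10.20   # -d sets don't-fragment, -s sets the ICMP payload size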

  • The PCPU UTIL/CORE UTIL counters in the esxtop utility incorrectly display CPU utilization as 100% if PcpuMigrateIdlePcpus is set to 0.

  • When you execute multiple enumerate queries on the VMware Ethernet port class using the CBEnumInstances method, servers running ESXi 5.5 might report an error message similar to the following:

    CIM error: enumInstances Class not found

    This issue occurs when the management software fails to retrieve information provided by the VMware_EthernetPort() class. When the issue occurs, a query on memstats might display the following error message:

    MemStatsTraverseGroups: VSI_GetInstanceListAlloc failure: Not found.

  • An ESXi host might report an error in the Hardware Status tab due to the unresponsive hardware monitoring service (sfcbd). An error similar to the following is written to syslog.log:

    sfcb-hhrc[5149608]: spGetMsg receiving from 65 5149608-11 Resource temporarily unavailable
    sfcb-hhrc[5149608]: rcvMsg receiving from 65 5149608-11 Resource temporarily unavailable
    sfcb-hhrc[5149608]: Timeout or other socket error
    sfcb-LSIESG_SMIS13_HHR[6064161]: spGetMsg receiving from 51 6064161-11 Resource temporarily unavailable
    sfcb-LSIESG_SMIS13_HHR[6064161]: rcvMsg receiving from 51 6064161-11 Resource temporarily unavailable
    sfcb-LSIESG_SMIS13_HHR[6064161]: Timeout or other socket error
    sfcb-kmoduleprovider[6064189]: spGetMsg receiving from 57 6064189-11 Resource temporarily unavailable
    sfcb-kmoduleprovider[6064189]: rcvMsg receiving from 57 6064189-11 Resource temporarily unavailable
    sfcb-kmoduleprovider[6064189]: Timeout or other socket error

    The following debug-level syslog entry indicates that invalid data (0x3c) is sent by IPMI when the expected value is 0x01:

    sfcb-vmware_raw[35704]: IpmiIfcRhFruInv: fru.header.version: 0x3c

    This issue occurs when the sfcb-vmware_raw provider receives invalid data from the Intelligent Platform Management Interface (IPMI) tool while reading the Field Replaceable Unit (FRU) inventory data.

  • You can now specify an iSCSI initiator name to the esxcli iscsi software set command.

  • Applying a host profile to a stateless ESXi host with a large number of storage LUNs might take a long time during reboot when you enable stateless caching with esx as the first disk argument. This happens when you apply the host profile manually or during a reboot of the host.

  • A vSAN cluster check might fail due to unexpected network partitioning in which the IGMP v3 query is not reported if the system is in V2 mode.

  • The snmpd service might start automatically after you upgrade the ESXi host to 5.5 Update 2.

  • An ESXi host might stop responding and its virtual machines become inaccessible. The ESXi host might also lose its connection to vCenter Server due to a deadlock during storage hiccups on non-ATS VMFS datastores.

  • Attempts to log in to an ESXi host might fail after the host successfully joins an Active Directory domain. This occurs when a user from one domain attempts to log in from another trusted domain that is not present at the ESXi client site. An error similar to the following is written to the sys.log/netlogon.log file:

    netlogond[17229]: [LWNetDnsQueryWithBuffer() /build/mts/release/bora-1028347/likewise/esxi-esxi/src/linux/netlogon/utils/lwnet-dns.c:1185] DNS lookup for '_ldap._tcp.<domain details>' failed with errno 0, h_errno = 1

  • When some VIBs are installed on the system, esxupdate constructs a new image in /altbootbank and changes the /altbootbank boot.cfg bootstate to updated. When a live installable VIB is installed, the system saves the configuration change to /altbootbank. The stage operation deletes the contents of /altbootbank unless you perform a remediate operation after the stage operation. The VIB installation might be lost if you reboot the host after a stage operation.

  • An ESXi host might lose network connectivity and experience stability issues when multiple error messages similar to the following are logged:

    WARNING: Heartbeat: 785: PCPU 63 didn't have a heartbeat for 7 seconds; *may* be locked up.

  • vSAN does not gracefully handle disks with extremely high latency that are about to fail. Such a dying disk might cause I/O backlogs, and the vSAN cluster nodes might become unresponsive in vCenter Server.

    This issue is resolved in this release with a new feature, Dying Disk Handling (DDH), which provides a latency monitoring framework in the kernel, a daemon to detect high-latency periods, and a mechanism to unmount individual disks and disk groups.

  • When applying a host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC is tagged as the management vmknic.

  • Host profiles become noncompliant after a simple change to SNMP syscontact or syslocation. The issue occurs because the SNMP host profile plug-in applies only a single value to all hosts attached to the host profile. An error message similar to the following might be displayed:

    SNMP Agent Configuration differs

    This issue is resolved in this release by enabling per-host value settings for certain parameters such as syslocation, syscontact, v3targets, v3users, and engineid.
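
    As an illustrative sketch only (the contact and location strings below are placeholders), the per-host values can be set and checked from the command line with:

    esxcli system snmp set --syscontact "ops@example.com" --syslocation "Rack 12, Building A"
    esxcli system snmp get   # confirm the agent configuration on this host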

  • On a Linux virtual machine, the VMware Tools service, vmtoolsd, might fail when you shut down the guest operating system.

  • Attempts to upgrade VMware Tools on a Windows 2000 virtual machine might fail with an error message similar to the following written to the vmmsi.log file:

    Invoking remote custom action. DLL: C:\WINNT\Installer\MSI12.tmp, Entrypoint: VMRun
    VM_CacheMod. Return value 3.
    PROPERTY CHANGE: Deleting RESUME property. Its current value is '1'.
    INSTALL. Return value 3.

  • On an ESXi 5.5 host, some of the drivers installed on a Solaris 11 guest operating system might be from Solaris 10. As a result, those drivers might not work as expected.

  • When you attempt to configure VMware Tools for a new kernel using the /usr/bin/vmware-config-tools.pl -k <kernel version> script after the kernel has been updated with Dracut, the driver list in the add_drivers entry of the /etc/dracut.conf.d/vmware-tools.conf file gets truncated. This issue occurs when the VMware Tools drivers are upstreamed in the kernel.

  • A Linux virtual machine with Large Receive Offload (LRO) enabled on a VMXNET3 device might experience packet drops on the receive side when Rx Ring #2 runs out of memory, because the size of Rx Ring #2 could not previously be configured.

  • When you upgrade VMware Tools in a 64-bit Windows guest operating system, the tools.conf file is removed automatically. From the ESXi 5.5 Update 3 release onwards, the tools.conf file is retained by default.

  • After installing VMware Tools on a Windows 8 or Windows Server 2012 guest operating system, attempts to open a telnet session using the start telnet://xx.xx.xx.xx command fail with the following error message:

    Make sure the virtual machine's configuration allows the guest to open host applications

  • When you power off a virtual machine immediately after an install, upgrade, or uninstall of VMware Tools in a Linux environment (RHEL or CentOS 6), the guest OS might fail during the next reboot due to a corrupted RAMDISK image file. The guest OS reports an error similar to the following:

    RAMDISK: incomplete write (31522 != 32768)
    write error
    Kernel panic - not syncing : VFS: Unable to mount root fs on unknown-block(0,0)


    This release ensures that the initramfs file is created completely during an install, upgrade, or uninstall of VMware Tools.

    A guest OS with a corrupted RAMDISK image file can be rescued to a complete boot state. For more information, see Knowledge Base article 2086520.

  • IPv6 Router Advertisements (RA) do not function as expected with 802.1q tagging on VMXNET3 adapters in a Linux virtual machine, because the IPv6 RA address intended for the VLAN interface is delivered to the base interface.

  • Updates to the misc-drivers VIB.

  • Updates to the scsi-megaraid-sas VIB.

  • Updates to the scsi-mpt2sas VIB.

  • Updates to the scsi-mptsas VIB.

  • Updates to the scsi-mptspi VIB.

  • Updates to the scsi-qla2xxx VIB.

  • Updates to the lsi-mr3 VIB.

  • Updates to the net-e1000e VIB.

  • Updates to the sata-ahci VIB.

  • Updates to the elxnet VIB.

  • Updates to the xhci-xhci VIB.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the table above.

Patch Download and Installation

An ESXi system can be updated using the image profile, by using the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples documentation and the vSphere Upgrade Guide. ESXi hosts can also be updated by manually downloading the patch ZIP file from the Patch Manager Download Portal and installing the VIBs by using the esxcli software vib command, as illustrated in the sketch below.
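
For example, after uploading the offline bundle to a datastore, the image profile can be applied from the ESXi Shell roughly as follows. This is a hedged sketch: the bundle file name and datastore path are placeholders, and the host must be placed in maintenance mode and rebooted afterwards.

    esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi550-201509001.zip   # list the image profiles contained in the bundle
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201509001.zip -p ESXi-5.5.0-20150902001-standard

Alternatively, the individual VIBs in the same bundle can be applied with esxcli software vib update -d <path to the ZIP file>.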