Release date: January 16, 2014
Bulletin ID | ESXi510-201401201-UG
Patch Category | Bugfix
Patch Severity | Critical
Build | For build information, see KB 2062314.
Host Reboot Required | Yes
Virtual Machine Migration or Shutdown Required | Yes
Affected Hardware | N/A
Affected Software | N/A
VIBs Included | esx-base
PRs Fixed | 926769, 938066, 940710, 949248, 952465, 957286, 959763, 962122, 966990, 969068, 970635, 975744, 976683, 979390, 981031, 984056, 985864, 986734, 987135, 987620, 990632, 990770, 994313, 996536, 998315, 1000989, 1005376, 1006825, 1006959, 1007170, 1007520, 1008254, 1008830, 1008892, 1011061, 1018995, 1019015, 1020453, 1026207, 1026618, 1027224, 1027949, 1029010, 1033086, 1034629, 1035097, 1035388, 1037369, 1038151, 1040222, 1042045, 1042668, 1044408, 1044919, 1045271, 1045434, 1046589, 1051263, 1051674, 1051735, 1053197, 1053550, 1054945, 1056515, 1059000, 1059043, 1061459, 1067458, 1069332, 1070074, 1070930, 1074681, 1074989, 1078765, 1078870, 1079558, 1082095, 1083962, 1087008, 1087441, 1087546, 1098119, 1098296, 1114090, 1119268, 1121196, 1122123, 1001868, 1038121
Related CVE numbers | N/A
For more information on patch and update classification, see KB 2014447.
This patch updates the esx-base VIB to resolve the following issues:
fc_fcp_resp() does not complete the LUN RESET task because fc_fcp_resp() assumes that FCP_RSP_INFO is 8 bytes long with a 4-byte reserved field; however, for NetApp targets the FCP_RSP to a LUN RESET contains only 4 bytes of FCP_RSP_INFO. As a result, fc_fcp_resp() errors out without completing the task. Reset the host to fix the issue.

/bootbank might incorrectly point to /tmp. With this release, a bootDeviceRescanTimeout parameter is provided on the ESXi boot command line; passing it before the boot process configures the timeout value and resolves the issue.

The /var/run/sfcb
directory might fill up with over 5000 files. The hostd.log file located at /var/log/ indicates that the host is out of space:

VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
The vmkernel.log file located at /var/log indicates that the host is out of inodes:
cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
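Since each entry under /var/run/sfcb consumes a visorfs inode, a quick way to spot this condition is to count the files there. A minimal sketch, assuming shell or script access to the host (the 5000-file figure comes from the description above):

```python
import os

def entry_count(path="/var/run/sfcb"):
    """Count directory entries; each one consumes a visorfs inode."""
    try:
        return len(os.listdir(path))
    except FileNotFoundError:
        return 0

if __name__ == "__main__":
    n = entry_count()
    if n > 5000:
        print("WARNING: %d files in /var/run/sfcb; visorfs inodes may be exhausted" % n)
```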
cpu8:42450)Panic: 766: Panic from another CPU (cpu 8, world 42450): ip=0x41801d47b266:
#GP Exception 13 in world 42450:vmm0:Datacen @ 0x41801d4b7aaf
cpu8:42450)Backtrace for current CPU #8, worldID=42450, ebp=0x41221749b390
cpu8:42450)0x41221749b390:[0x41801d4b7aaf]vmk_SPLock@vmkernel#nover+0x12 stack: 0x41221749b3f0, 0xe00000000000
cpu8:42450)0x41221749b4a0:[0x41801db01409]IpfixFilterPacket@#+0x7e8 stack: 0x4100102205c0, 0x1, 0x
cpu8:42450)0x41221749b4e0:[0x41801db01f36]IpfixFilter@#+0x41 stack: 0x41801db01458, 0x4100101a2e58
2013-04-04T17:56:47.668Z cpu0:4012)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x412441541f80) 0x16, CmdSN 0x1c9f from world 0 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x0 D:0x8 P:0x0 Possible sense data: 0x0 0x0 0x0
2013-04-04T13:16:27.300Z cpu12:4008)ScsiDeviceIO: SCSICompleteDeviceCommand:2311: Cmd(0x4124819c2f00) 0x2a, CmdSN 0xfffffa80043a2350 from world 4697 to dev "naa.600601601fb12d00565065c6b381e211" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0
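In these messages the H:/D:/P: triplet encodes host, device, and plugin status. The device byte is the standard SCSI status byte (so D:0x8 above is BUSY); the host and plugin codes are VMkernel-specific and are deliberately left undecoded in this sketch:

```python
import re

# Standard SCSI status byte values (per the SCSI Architecture Model).
SCSI_DEVICE_STATUS = {
    0x00: "GOOD",
    0x02: "CHECK CONDITION",
    0x08: "BUSY",
    0x18: "RESERVATION CONFLICT",
    0x28: "TASK SET FULL",
}

def parse_scsi_status(line):
    """Pull the H:/D:/P: status triplet out of a vmkernel SCSI log line."""
    m = re.search(r"H:0x([0-9a-fA-F]+) D:0x([0-9a-fA-F]+) P:0x([0-9a-fA-F]+)", line)
    if m is None:
        return None
    host, dev, plugin = (int(g, 16) for g in m.groups())
    return {
        "host": host,
        "device": dev,
        "device_status": SCSI_DEVICE_STATUS.get(dev, "VENDOR/UNKNOWN"),
        "plugin": plugin,
    }
```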
ha-eventmgr
on the ESXi host's Web client. In this release, the login time is calculated using the system time, which resolves the issue.

hierarchy is too deep
Request Sense
command is sent to a Raw Device Mapping (RDM) in physical mode from a guest operating system, the returned sense data is sometimes NULL (zeroed). The issue occurs only when the command is sent from the guest operating system.

"Request Header Id (886262) != Response Header reqId (0) in request to provider 429 in process 5. Drop response."

Header Id (373) Request to provider 1 in process 0 failed. Error: Timeout (or other socket error) waiting for response from provider.
Panic: Assert Failed: "matchingPart != __null".
Associated host profile contains NIC failure criteria settings that cannot be applied to the host
Hardware monitoring service on this host is not responding or not available.
syslog
file contains entries similar to the following:

sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 6750210)
sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 6 payLoadSize 19 chunkSize 0 from 12 resp 6750210
sfcb-smx[xxxxxx]: spRecvReq returned error -1. Skipping message.
sfcb-smx[xxxxxx]: spRcvMsg Receive message from 12 (return socket: 4)
sfcb-smx[xxxxxx]: --- spRcvMsg drop bogus request chunking 220 payLoadSize 116 chunkSize 104 from 12 resp 4
...
...
sfcb-vmware_int[xxxxxx]: spGetMsg receiving from 40 419746-11 Resource temporarily unavailable
sfcb-vmware_int[xxxxxx]: rcvMsg receiving from 40 419746-11 Resource temporarily unavailable
sfcb-vmware_int[xxxxxx]: Timeout or other socket error
TSO
from the guest.

Error extracting indication configuration: (6, u'The requested object could not be found')
WS-Management GetInstance ()
action against http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_SoftwareIdentity?InstanceID=46.10000
might issue a wsa:DestinationUnreachable
fault on some ESXi servers. The OMC_MCFirmwareIdentity
object path is not consistent for CIM gi/ei/ein
operations on the system with Intelligent Platform Management Interface (IPMI) Baseboard Management Controller (BMC) sensor. As a result, WS-Management GetInstance ()
action issues a wsa:DestinationUnreachable
fault on the ESXi server.

vmklinux_9:ipmi_thread of vmkapimod displays the CPU usage as one hundred percent. This happens because the IPMI tool uses the Read FRU Data command multiple times to read the inventory data.

snmpwalk
command for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10
, an error message similar to the following is displayed:

No Such Instance currently exists at this OID for ifHCOutOctets 1.3.6.1.2.1.31.1.1.1.10
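For reference, the failing query can be reproduced with a standard Net-SNMP walk of the IF-MIB 64-bit output-octets column; the host name and community string below are placeholders:

```shell
# Walk ifHCOutOctets (IF-MIB) against an ESXi host's SNMP agent.
# "esxi-host" and "public" are placeholder values.
snmpwalk -v 2c -c public esxi-host 1.3.6.1.2.1.31.1.1.1.10
```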
smbios -t 4
to check the CPU version, the virtual machine returns a blank version, and if you attempt to install JDK 1.6, the installation fails.

syslog file might log messages similar to the following:

sfcb-LSIESG_SMIS13_HHR[ ]: Error opening socket pair for getProviderContext: Too many open files
sfcb-LSIESG_SMIS13_HHR[ ]: Failed to set recv timeout (30) for socket -1. Errno = 9
...
...
sfcb-hhrc[ ]: Timeout or other socket error
sfcb-hhrc[ ]: TIMEOUT DOING SHARED SOCKET RECV RESULT ( )
Resuming virtual disk scsi1:5 failed. The disk has been modified since a snapshot was taken or the virtual machine was suspended.
resourceCpuAllocMax
and resourceMemAllocMax
system counters against the host system, the ESXi host returns incorrect values. This issue is observed on a vSphere Client connected to a vCenter Server.

XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
[lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295ESC[7m2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372ESC[0$
2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1
Option Annotations.WelcomeMessage does not match the specified Criteria
@BlueScreen: #PF Exception 14 in world 63406:vmast.63405 IP 0x41801cd9c266 addr 0x0
PTEs:0x8442d5027;0x383f35027;0x0;
Code start: 0x41801cc00000 VMK uptime: 1:08:27:56.829
0x41229eb9b590:[0x41801cd9c266]E1000PollRxRing@vmkernel#nover+0xdb9 stack: 0x410015264580
0x41229eb9b600:[0x41801cd9fc73]E1000DevRx@vmkernel#nover+0x18a stack: 0x41229eb9b630
0x41229eb9b6a0:[0x41801cd3ced0]IOChain_Resume@vmkernel#nover+0x247 stack: 0x41229eb9b6e0
0x41229eb9b6f0:[0x41801cd2c0e4]PortOutput@vmkernel#nover+0xe3 stack: 0x410012375940
0x41229eb9b750:[0x41801d1e476f]EtherswitchForwardLeafPortsQuick@ # +0xd6 stack: 0x31200f9
0x41229eb9b950:[0x41801d1e5fd8]EtherswitchPortDispatch@ # +0x13bb stack: 0x412200000015
0x41229eb9b9c0:[0x41801cd2b2c7]Port_InputResume@vmkernel#nover+0x146 stack: 0x412445c34cc0
0x41229eb9ba10:[0x41801cd2ca42]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x41001203aa01
0x41229eb9ba70:[0x41801cd99a05]E1000DevAsyncTx@vmkernel#nover+0x190 stack: 0x41229eb9bab0
0x41229eb9bae0:[0x41801cd51813]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0x2
0x41229eb9bc60:[0x41801cd0b21b]WorldletProcessQueue@vmkernel#nover+0x486 stack: 0x41229eb9bd10
0x41229eb9bca0:[0x41801cd0b895]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x10041229eb9bd20
0x41229eb9bd20:[0x41801cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x41229eb9be20
0x41229eb9be20:[0x41801cdbc9bc]CpuSchedIdleLoopInt@vmkernel#nover+0x13b stack: 0x29eb9bfa0
0x41229eb9bf10:[0x41801cdc4c1f]CpuSchedDispatch@vmkernel#nover+0xabe stack: 0x0
0x41229eb9bf80:[0x41801cdc5f4f]CpuSchedWait@vmkernel#nover+0x242 stack: 0x412200000000
0x41229eb9bfa0:[0x41801cdc659e]CpuSched_Wait@vmkernel#nover+0x1d stack: 0x41229eb9bff0
0x41229eb9bff0:[0x41801ccb1a3a]VmAssistantProcessTask@vmkernel#nover+0x445 stack: 0x0
0x41229eb9bff8:[0x0] stack: 0x0
hostd log
files:

2013-08-28T14:23:10.985Z [FFDF8D20 info 'TagExtractor'] 9: Rule type=[N5Hostd6Common31MethodNameBasedTagExtractorRuleE:0x4adddd0], id=rule[VMThrottledOpRule], tag=IsVMThrottledOp, regex=vim\.VirtualMachine\.(reconfigure|removeAllShapshots)|vim\.ManagedEntity\.destroy|vim\.Folder\.createVm|vim\.host\.LowLevelProvisioningManager\.(createVM|reconfigVM|consolidateDisks)|vim\.vm\.Snapshot\.remove - Identifies Virtual Machine operations that need additional throttling
2013-08-28T14:23:10.988Z [FFDF8D20 info 'Default'] hostd-9055-1021289.txt time=2013-08-28 14:12:05.000
--> Crash Report build=1021289
--> Non-signal termination
Backtrace:
--> Backtrace[0] 0x6897fb78 eip 0x1a231b70
--> Backtrace[1] 0x6897fbb8 eip 0x1a2320f9
PAGE_FAULT_IN_NONPAGED_AREA
error message. This issue is observed in Windows 2000 virtual machines with eight or more virtual CPUs.

When HTTP GET /folder URL requests are sent to hostd, the hostd service fails. This prevents the host from being added back to the vCenter Server. An error message similar to the following might be displayed:

Unable to access the specified host, either it doesn't exist, the server software is not responding, or there is a network problem.
ScsiMidlayerFrame
gets initialized with the vmkCmd option in the SCSI function, the ESXi host might fail with a purple diagnostic screen and an error message similar to the following might be displayed:

@BlueScreen: #PF Exception 14 in world 619851:PathTaskmgmt IP 0x418022c898f6 addr 0x48
Code start: 0x418022a00000 VMK uptime: 9:08:16:00.066
0x4122552dbfb0:[0x418022c898f6]SCSIPathTimeoutHandlerFn@vmkernel#nover+0x195 stack:
0x4122552e7000 0x4122552dbff0:[0x418022c944dd]SCSITaskMgmtWorldFunc@vmkernel#nover+0xf0 stack: 0x0
cpu1:16423)@BlueScreen: #PF Exception 14 in world 16423:helper1-0 IP 0x41801ac50e3e addr 0x18PTEs:0x0;
cpu1:16423)Code start: 0x41801aa00000 VMK uptime: 0:09:28:51.434
cpu1:16423)0x4122009dbd70:[0x41801ac50e3e]FDS_CloseDevice@vmkernel#nover+0x9 stack: 0x4122009dbdd0
cpu1:16423)0x4122009dbdd0:[0x41801ac497b4]DevFSFileClose@vmkernel#nover+0xf7 stack: 0x41000ff3ca98
cpu1:16423)0x4122009dbe20:[0x41801ac2f701]FSS2CloseFile@vmkernel#nover+0x130 stack: 0x4122009dbe80
cpu1:16423)0x4122009dbe50:[0x41801ac2f829]FSS2_CloseFile@vmkernel#nover+0xe0 stack: 0x41000fe9a5f0
cpu1:16423)0x4122009dbe80:[0x41801ac2f89e]FSS_CloseFile@vmkernel#nover+0x31 stack: 0x1
cpu1:16423)0x4122009dbec0:[0x41801b22d148]CBT_RemoveDev@ # +0x83 stack: 0x41000ff3ca60
cpu1:16423)0x4122009dbef0:[0x41801ac51a24]FDS_RemoveDev@vmkernel#nover+0xdb stack: 0x4122009dbf60
cpu1:16423)0x4122009dbf40:[0x41801ac4a188]DevFSUnlinkObj@vmkernel#nover+0xdf stack: 0x0
cpu1:16423)0x4122009dbf60:[0x41801ac4a2ee]DevFSHelperUnlink@vmkernel#nover+0x51 stack: 0xfffffffffffffff1
cpu1:16423)0x4122009dbff0:[0x41801aa48418]helpFunc@vmkernel#nover+0x517 stack: 0x0
cpu1:16423)0x4122009dbff8:[0x0] stack: 0x0
cpu1:16423)base fs=0x0 gs=0x418040400000 Kgs=0x0
cpu1:16423)vmkernel 0x0 .data 0x0 .bss 0x0
cpu1:16423)chardevs 0x41801ae70000 .data 0x417fc0000000 .bss 0x417fc00008a0
unmap
command, the ESXi host retrieves the maximum LBA count and maximum unmap descriptor count when the disk is opened, and caches them. The host uses this information to validate requests from virtual SCSI. Previously, the host failed to retrieve the required information.

cpu4:8196)@BlueScreen: PCPU 1: no heartbeat (2/2 IPIs received)
cpu4:8196)Code start: 0x418024600000 VMK uptime: 44:20:54:02.516
cpu4:8196)Saved backtrace from: pcpu 1 Heartbeat NMI
cpu4:8196)0x41220781b480:[0x41802468ded2]SP_WaitLockIRQ@vmkernel#nover+0x199 stack: 0x3b
cpu4:8196)0x41220781b4a0:[0x4180247f0253]Sched_TreeLockMemAdmit@vmkernel#nover+0x5e stack: 0x20
cpu4:8196)0x41220781b4c0:[0x4180247d0100]MemSched_ConsumeManagedKernelMemory@vmkernel#nover+0x1b stack: 0x0
cpu4:8196)0x41220781b500:[0x418024806ac5]SchedKmem_Alloc@vmkernel#nover+0x40 stack: 0x41220781b690
...
cpu4:8196)0x41220781bbb0:[0x4180247a0b13]vmk_PortOutput@vmkernel#nover+0x4a stack: 0x100
cpu4:8196)0x41220781bc20:[0x418024c65fb2][email protected]#1.0.0.0+0x85 stack: 0x4100000
cpu4:8196)0x41220781bcf0:[0x418024c6648e][email protected]#1.0.0.0+0x4b1 stack: 0x0
cpu4:8196)0x41220781bfa0:[0x418024c685d9][email protected]#1.0.0.0+0x7f8 stack: 0x4122
cpu4:8196)0x41220781bff0:[0x4180246b6c8f]vmkWorldFunc@vmkernel#nover+0x52 stack: 0x0
2013-07-11T05:24:39Z snmpd: to_sr_type: unable to convert varbind type '71'
2013-07-11T05:24:39Z snmpd: convert_value: unknown SR type value 0
2013-07-11T05:24:39Z snmpd: parse_varbind: invalid varbind with type 0 and value: '2'
2013-07-11T05:24:39Z snmpd: forward_notifications: parse file '/var/spool/snmp/1373520279_6_1_3582.trp' failed, ignored
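Hosts hitting this problem accumulate unparsed trap files under /var/spool/snmp (the path in the last message above). A minimal sketch for listing what is still queued:

```python
import glob
import os

def queued_trap_files(spool="/var/spool/snmp"):
    """Return SNMP trap (.trp) files still queued in the spool directory."""
    return sorted(glob.glob(os.path.join(spool, "*.trp")))

if __name__ == "__main__":
    for path in queued_trap_files():
        print(path)
```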
vmware.log
file similar to the following when a guest operating system with the e1000 NIC driver is placed in suspended mode:

2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: Msg_Post: Error
2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
2013-08-02T05:28:48Z[+11.453]| vcpu-1| I120+ Unexpected signal: 11.
2012-09-04T11:56:17.846+02:00 [03616 info 'Default' opID=52dc4afb] [VpxLRO] -- ERROR task-internal-4118 -- vm-26512 --
vim.VirtualMachine.queryChangedDiskAreas: vim.fault.FileFault:
--> Result:
--> (vim.fault.FileFault) {
--> dynamicType = <unset>,
--> faultCause = (vmodl.MethodFault) null, file =
--> "/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
--> msg = "Error caused by file
/vmfs/volumes/4ff2b68a-8b657a8e-a151-3cd92b04ecdc/VM/VM.vmdk",
--> }
To resolve the issue, CBT is deactivated and reactivated after a virtual machine disk (VMDK) is moved to a different datastore using Storage vMotion; backup software must then discard all changeId references and take a full backup before using CBT for further incremental backups. This issue occurs when a library function incorrectly re-initializes the disk change tracking facility.
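The rule this fix implies for backup software can be summarized: once a disk has moved datastores (so CBT has been reset), any stored changeId is stale and only a full backup is safe. A sketch of that decision logic; the function and parameter names here are illustrative, not a VMware API:

```python
def next_backup_type(stored_change_id, disk_was_relocated):
    """Decide between incremental and full backup after a possible CBT reset.

    stored_change_id: changeId saved from the previous backup, or None.
    disk_was_relocated: True if the VMDK moved datastores (e.g. via Storage vMotion).
    """
    if disk_was_relocated or stored_change_id is None:
        # changeId references are invalid after CBT is deactivated/reactivated.
        return "full"
    return "incremental"
```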
DvFilter
module might display a purple diagnostic screen. A backtrace similar to the following is displayed:

2013-07-18T06:41:39.699Z cpu12:10669)0x412266b5bbe8:[0x41800d50b532][email protected]#v2_1_0_0+0x92d stack: 0x10
2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bc68:[0x41800d505521][email protected]#v2_1_0_0+0x394 stack: 0x100
2013-07-18T06:41:39.700Z cpu12:10669)0x412266b5bce8:[0x41800cc2083a]BH_Check@vmkernel#nover+0x185 stack: 0x412266b5bde8, 0x412266b5bd88,
.dvsData
directory even after a while.

When you run the esxcli hardware ipmi sdr list command, the IPMI data repository might stop working.

The host retries to send syslog messages after the Default Network Retry Timeout. The default value of Default Network Retry Timeout is 180 seconds. The command esxcli system syslog config set --default-timeout= can be used to change the default value.

2013-07-15T08:12:05.026Z| vcpu-0| ToolsBackup: changing quiesce state: IDLE -> STARTED
2013-07-15T08:12:28.863Z| vcpu-1| Msg_Post: Warning
2013-07-15T08:12:28.863Z| vcpu-1| [msg.snapshot.quiesce.vmerr] The guest OS has reported an error during quiescing.
2013-07-15T08:12:28.863Z| vcpu-1| --> The error code was: 5
2013-07-15T08:12:28.863Z| vcpu-1| --> The error message was: 'VssSyncStart' operation failed: IDispatch error #8455 (0x80042307)
2013-07-15T08:12:28.863Z| vcpu-1| ----------------------------------------
2013-07-15T08:12:28.872Z| vcpu-1| ToolsBackup: changing quiesce state: STARTED -> ERROR_WAIT
2013-07-15T08:12:30.889Z| vcpu-0| ToolsBackup: changing quiesce state: ERROR_WAIT -> IDLE
2013-07-15T08:12:30.889Z| vcpu-0| ToolsBackup: changing quiesce state: IDLE -> DONE
2013-07-15T08:12:30.889Z| vcpu-0| SnapshotVMXTakeSnapshotComplete done with snapshot 'clone-temp-1373875924577206': 0
2013-07-15T08:12:30.889Z| vcpu-0| SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine. (40).
XXX esx.problem.vmfs.lock.corruptondisk.v2 XXX or At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.
[lockAddr 36149248] Invalid state: Owner 00000000-00000000-0000-000000000000 mode 0 numHolders 0 gblNumHolders 4294967295ESC[7m2013-05-12T19:49:11.617Z cpu16:372715)WARNING: DLX: 908: Volume 4e15b3f1-d166c8f8-9bbd-14feb5c781cf ("XXXXXXXXX") might be damaged on the disk. Corrupt lock detected at offset 2279800: [type 10c00001 offset 36149248 v 6231, hb offset 372ESC[0$
2013-07-24T05:00:43.171Z cpu13:11547)WARNING: Vol3: ValidateFS:2267: XXXXXX/51c27b20-4974c2d8-edad-b8ac6f8710c7: Non-zero generation of VMFS3 volume: 1
net.throughput.usage
related value is in kilobytes, but the same value is returned in bytes by the vmkernel. This leads to an incorrect representation of values in the vCenter performance chart.

2013-07-18T17:55:06.693Z cpu24:694725)0x4122e7147330:[0x418039b5d12e]Migrate_NiceAllocPage@esx#nover+0x75 stack: 0x4122e7147350
2013-07-18T17:55:06.694Z cpu24:694725)0x4122e71473d0:[0x418039b5f673]Migrate_HeapCreate@esx#nover+0x1ba stack: 0x4122e714742c
2013-07-18T17:55:06.694Z cpu24:694725)0x4122e7147460:[0x418039b5a7ef]MigrateInfo_Alloc@esx#nover+0x156 stack: 0x4122e71474f0
2013-07-18T17:55:06.695Z cpu24:694725)0x4122e7147560:[0x418039b5df17]Migrate_InitMigration@esx#nover+0x1c2 stack: 0xe845962100000001
...
2013-07-18T17:55:07.714Z cpu25:694288)WARNING: Heartbeat: 646: PCPU 27 didn't have a heartbeat for 7 seconds; *may* be locked up.
2013-07-18T17:55:07.714Z cpu27:694729)ALERT: NMI: 1944: NMI IPI received. Was eip(base):ebp:cs
signal 11
error while executing SVGA code in svga2_map_surface.
2013-08-05T06:33:33.990Z| vcpu-1| VMXNET3 user: Ethernet1 Driver Info: version = 16847360 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 0
2013-08-05T06:33:35.679Z| vmx| Msg_Post: Error
2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetPortID] Unable to get dvs.portId for ethernet0
2013-08-05T06:33:35.679Z| vmx| [msg.mac.cantGetNetworkName] Unable to get networkName or devName for ethernet0
2013-08-05T06:33:35.679Z| vmx| [msg.device.badconnect] Failed to connect virtual device Ethernet0.
vmkernel.log
file:

2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 549: Connection failed to 10.1.253.51 (groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
2013-10-16T15:42:48.169Z cpu5:69024)WARNING: Hbr: 4521: Failed to establish connection to [10.1.253.51]:31031(groupID=GID-46415ddc-2aef-4e9f-a173-49cc80854682): Timeout
No Connection to VR server: Not responding
An ongoing synchronization task already exists
None beyond the required patch bundles and reboot information listed in the table above.
The typical way to apply a patch bulletin to ESXi hosts is through VMware Update Manager. For details, see Installing and Administering VMware vSphere Update Manager.
ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
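As a sketch of the manual method, assuming the patch ZIP has been copied to a datastore (the path and profile name below are placeholders; list the bundle's profiles first to get the real name):

```shell
# Install all VIBs in the downloaded patch bundle:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-201401001.zip

# Or update via the image profile contained in the bundle;
# list the available profiles first, then apply one:
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi510-201401001.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi510-201401001.zip -p <profile-name>
```

Place the host in maintenance mode before updating, and reboot afterward as noted in the table above.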