VMware ESXi 6.0, Patch Release ESXi-6.0.0-20180704001-standard

Article ID: 319542

Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-6.0.0-20180704001-standard
Build: For build information, see KB 53627.
Vendor: VMware, Inc.
Release Date: July 26, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware_bootbank_vsan_6.0.0-3.96.8924611
  • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.96.8820019
  • VMware_bootbank_esx-base_6.0.0-3.96.9239799
  • VMware_bootbank_ipmi-ipmi-si-drv_39.1-5vmw.600.3.96.9239799
  • VMware_locker_tools-light_6.0.0-3.93.9239792
  • VMware_bootbank_misc-drivers_6.0.0-3.96.9239799
PRs Fixed: 2064914, 2068973, 2017378, 2017382, 1468750, 1713632, 1731807, 1832907, 1866865, 1872507, 1876652, 1888629, 1907074, 1911577, 1924211, 1924482, 1925600, 1930484, 1930886, 1931589, 1934055, 1936211, 1936849, 1938632, 1943039, 1944320, 1947933, 1948165, 1949427, 1950334, 1950763, 1953335, 1953849, 1954363, 1959407, 1962859, 1963034, 1968575, 1972097, 1972581, 1975333, 1976582, 1982289, 1989239, 1989781, 1990733, 1991901, 1999298, 2001675, 2003149, 2007036, 2011424, 2014839, 2020786, 2032049, 2036647, 2037864, 2037927, 2040328, 2041252, 2046193, 2048692, 2050957, 2068407, 2069819, 2071816, 2075070, 2075725, 2082941, 2085547, 2090290, 2093604, 2094371, 2099230, 2113324, 2119608, 2017378, 2017382, 1910544, 2053706, 2060499, 1940175, 2045868, 2125157, 2069364
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 6.0

Resolution

Summaries and Symptoms

This patch updates the following issues:

  • The ESXi host daemon, hostd, might stop responding while attempting to reconfigure a virtual machine if the ConfigSpec object that encapsulates the settings for creating and configuring virtual machines contains a config entry with an unset value.

  • In some cases, an ESXi host might fail with a purple diagnostic screen and NULL pointer value in the process of collecting information for statistical purposes. This is due to stale TCP connections.

  • The I/O Filter Daemon might stop responding if a polling implementation fails and stops the execution of pending callbacks.

  • An ESXi host might stop responding during the boot sequence with Loading state.tgz displayed as the last line on the screen.

  • In configurations with multiple storage arrays on one host bus adapter (HBA) using a vmklinux driver, loss of connection to one of the arrays might affect I/O operations of other arrays. This can cause the ESXi host to lose connectivity to VMFS volumes.

  • SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa, which translates to the alert LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION. This causes a premature all-paths-down state that might lead to timeouts in the VMware vSphere VMFS heartbeat and ultimately to performance drops. You can use the advanced configuration option /Nmp/NmpSatpAluaCmdRetryTime to increase the timeout value to up to 50 seconds, as shown below.
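
    A minimal sketch of raising the timeout from the ESXi Shell, assuming the advanced option is exposed at the path /Nmp/NmpSatpAluaCmdRetryTime on your build:

    esxcli system settings advanced set -o /Nmp/NmpSatpAluaCmdRetryTime -i 50   # set the retry window to 50 seconds
    esxcli system settings advanced list -o /Nmp/NmpSatpAluaCmdRetryTime        # confirm the current and default values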

  • vSphere vMotion might fail due to a timeout, because virtual machines might need a larger buffer to store the shared memory page details used by 3D or graphics devices. The process stops responding and fails. The issue is observed mainly in virtual machines that are configured with 3D, run graphics-intensive loads, or run in virtual desktop infrastructure environments. With this fix, the buffer size for 3D virtual machines is increased four times. If the increased capacity still does not satisfy the P2M buffer requirement, the virtual machine is powered off.

  • PXE boot of virtual machines with 64-bit EFI might fail when booting from Windows Deployment Services on Windows Server 2008, because PXE boot discovery requests might contain an incorrect PXE architecture ID. This fix sets a default architecture ID in line with the processor architecture identifier for x64 UEFI that the Internet Engineering Task Force (IETF) stipulates. This release also adds a new advanced configuration option, efi.pxe.architectureID = <integer>, to facilitate backwards compatibility. For instance, if your environment is configured to use architecture ID 9, you can add efi.pxe.architectureID = 9 to the advanced configuration options of the virtual machine.
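
    For example, a minimal sketch of pinning the architecture ID to 9 for one virtual machine, either through its advanced configuration parameters in the vSphere Web Client or, with the virtual machine powered off, by adding the following line to its .vmx file:

    efi.pxe.architectureID = "9"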

  • vSphere Virtual Volumes reports for space allocation might not display unshared resources in storage arrays as allocated. With this fix, reports produced through the vSphere APIs for Storage Awareness (VASA) package display the correct allocations.

  • If a backend failover of a SAN storage controller takes place after a vSphere vMotion operation of MSCS clustered virtual machines, shared raw device mapping devices might go offline.

  • vSphere vApp or virtual machine power-on operations might fail in a DRS-managed cluster if the vApp or the virtual machines use a non-expandable memory reservation resource pool. The following error message is displayed: The available Memory resources in the parent resource pool are insufficient for the operation.

  • If an ESXi host enters an all-paths-down state while you mount a Network File System (NFS) datastore with protocol version 4.1, the host might fail with a purple diagnostic screen when connectivity is restored. With this fix, if connectivity is lost, mount operations are canceled to avoid host failures.

  • Backup solutions might produce a high rate of Map Disk Region tasks, causing the hostd process to run out of memory and fail.

  • If you export a host profile from one vCenter Server instance and import it into another vCenter Server instance, the compliance check of the host profile might result in status Unknown even after host remediation. This issue occurs if the host profile contains VMware vSphere Distributed Virtual Switch configuration settings and the host does not.

  • An invalid network input in the error message handling of vSphere Fault Tolerance causes a message type overflow that might result in invalid memory access. The ESXi host might fail with a purple diagnostic screen and an Unexpected message type error.

  • When multiple virtual machines that use vSphere FT and PVSCSI storage adapters run on a single host and have high storage I/O workloads, the virtual machines might stop responding. This might cause an unexpected vSphere FT failover.

  • When you upgrade an ESXi host or you update the Network Time Protocol (NTP) configuration by using the user interface, one of the restrict lines generated in the ntp.conf file is the following: restrict default kod nomodify notrap nopeer noquery. Upon a hostd restart, the nopeer option is missing and the restrict line changes to: restrict default kod nomodify notrap noquery.

  • If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration and display the storage allocation of the virtual machine unchanged.

  • An ESXi host might fail with a purple diagnostic screen due to a rare race condition in iSCSI sessions while processing the scheduled queue and running queue during connection shutdown. The connection lock might be freed to terminate a task while the shutdown is still in progress. The next node pointer would then be incorrect and cause the failure. With this fix, during an iSCSI session, all tasks in a queue first get a reference and are then terminated.

  • When you power off a virtual machine, other virtual machines that are in the same VLAN on the ESXi host lose network connectivity. This happens because powering off the virtual machine removes its virtual port ID and the VLAN bitmap of the uplink port group is reset to zero for all the connected ports.

  • ESXi hosts with virtual machines that use vmxnet3 virtual NICs might fail if a transmission queue index passed by a guest driver is greater than the configured number of transmission queues and is equal to or less than eight, which might result in invalid memory access or a null pointer reference. This patch fixes the issue by validating values passed by guest drivers against the configured number of transmission queues.

  • If you simultaneously start multiple IP addresses validations, the validations might fail with a timeout error.

  • Virtual machines that run 3D or heavy graphics might fail due to the size limit of the VMX process with an error in the vmware.log file similar to:

    2017-09-14T02:06:37Z[+2699.219]| svga| I125: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (svga)
    2017-09-14T02:06:37Z[+2699.219]| svga| I125+ Unexpected signal: 11.

  • An ESXi host might fail with an error kbdmode_set:519: invalid keyboard mode 4: Not supported due to an invalid keyboard input during the shutdown process.

  • An ESXi host might fail with a purple diagnostic screen after a fast suspend and resume operation on a virtual machine. The error might happen when a virtual machine or resource pool is configured with a reservation, you apply changes to the resource pool, and run the fast suspend and resume operation at the same time.

  • The netdump IP address might be lost after your upgrade from ESXi 5.5 Update 3 to ESXi600-201807401.

  • A high rate of InternalStatsCollector query tasks might cause hostd to run out of memory and fail.

  • An ESXi host might fail with a purple diagnostic screen error due to a physical disk log not initializing. You might see the following backtrace:

    [email protected]#
    LSOMExecuteDiskAttrCMD@LSOMCommon#
    LSOM_ExecuteDiskAttrEvent@LSOMCommon#
    LSOMGetSMARTAttribute@LSOMCommon#
    LSOM_ExecuteDiskAttrEvent@LSOMCommon#
    [email protected]#

  • When you create virtual machine disks with the VMware Integrated OpenStack block storage service (Cinder), the disks get a random UUID. In around 0.5% of the cases, virtual machines do not recognize these UUIDs. This might result in unexpected behavior of the virtual machines.

  • In vSAN environments, VMkernel logs, which record activities related to virtual machines and ESXi, might be flooded with the message Unable to register file system.

  • Due to a race condition between a parent and a child during a fork() system call, a file open issued by the child in Exclusive mode might fail with a Device or resource busy error.

  • If a virtual machine fails to power on, the page cache data structure might be corrupted and release back memory on a failed path. As a result, the host fails with the following backtrace:

    PageCacheRemoveFirstPageLocked@vmkernel#nover+0x2f stack: 0x4305dd30
    PageCacheAdjustSize@vmkernel#nover+0x260 stack: 0x0, 0x3cb418317a909
    CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0, 0x0, 0x0, 0x0, 0

    OR

    UserMemSwapRemovePage@<None>#<None>+0x60 stack: 0x418036643392
    UserMemSwap_RemoveSwappablePage@<None>#<None>+0x36 stack: 0x2a2d56
    UserMem_MakeUnswappable@<None>#<None>+0x2f stack: 0x2a2d56
    UserMemMapInfoUnmapPrep@<None>#<None>+0x431 stack: 0x32000
    UserMemUnmapLocked@<None>#<None>+0x1f7 stack: 0x439232a1bbc8
    UserMemUnmap@<None>#<None>+0x82 stack: 0x4314c5dcfc61
    UserMem_Unmap@<None>#<None>+0xec stack: 0x4305d52e1060
    UserMem_EarlyCartelCleanup@<None>#<None>+0x20 stack: 0x430ae677efc8
    UserModuleTableRun@<None>#<None>+0x6b stack: 0x4314c5dd6001
    User_ExitDeadThread@<None>#<None>+0xe5 stack: 0x439232a1be84
    UserKernelExit@<None>#<None>+0xb9 stack: 0x80

  • If the scratch partition of an ESXi host is located on a vSAN datastore, the host might become unresponsive. This might happen if the datastore is temporarily inaccessible, which might block userworld daemons, such as hostd, if they try to access this datastore at the same time. With this fix, scratch partitions on a vSAN datastore are no longer configurable by using the API. Manual edits to the /etc/vmware/locker.conf file might also cause ESXi host updates to fail.
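
    To check where the scratch partition of a host currently points, a minimal sketch using the standard /ScratchConfig advanced options (option paths as documented for ESXi 6.0):

    esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation      # location in use since the last boot
    esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation   # location that takes effect after the next reboot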

  • When you use the pktcap-uw utility together with other programs in a pipeline and those programs exit earlier than the pktcap-uw process, pktcap-uw might not close correctly. As a result, new pktcap-uw instances might not work as expected.

  • vSphere Distributed Switch (VDS) might not pass OSPFv3 multicast packets with IPsec Security Associations and IPsec Authentication Header.

  • Long-running virtual machines with vSphere Fault Tolerance (vSphere FT) might power off during failover. A memory leak during checkpointing causes the failure. Since vSphere FT relies on checkpointing, the issue affects virtual machines with vSphere FT.

  • Hostd might crash if an HTTP request is made while hostd starts up.

  • When you create and add a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections and reach the limit of available processes. As a result, the host becomes unresponsive and unable to create new SSH processes for incoming connections. Any subsequent operations fail.

  • Due to a memory corruption, an ESXi host might fail with an error @BlueScreen: Spin count exceeded - possible deadlock with PCPU XXX and a backtrace similar to:
    PShare_RemoveHint@vmkernel
    VmMemCow_PShareRemoveHint@vmkernel
    VmMemCowPFrameRemoveHint@vmkernel
    VmMemCowPShareFn@vmkernel
    VmAssistantProcessTasks@vmkernel
    CpuSched_StartWorld@vmkernel

    Before the failure, you might see similar logs:
    04:40:54.761Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc795 status: "Invalid address" (bad0026)
    04:40:54.763Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc7b5 status: "Invalid address" (bad0026)
    04:40:54.764Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc7d5 status: "Invalid address" (bad0026)
    04:40:54.765Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc7f5 status: "Invalid address" (bad0026)
    04:40:54.766Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc815 status: "Invalid address" (bad0026)
    04:40:54.768Z cpu21:14392120)WARNING: UserMem: 14034: vmx-vthread-6: vpn 0xa00bc835 status: "Invalid address" (bad0026)

  • If you clone a virtual machine that has vSphere Virtual Volumes snapshots, the cloned virtual machine might not be bootable due to disk corruption.

  • An ESXi host might fail with a purple diagnostic screen due to a race condition in the NFSv4.1 Client.

  • Due to a deadlock caused when a large number of LUNs are unmapped from an array, hostd might stop responding or disconnect from the vCenter Server system.

  • System identification information consists of asset tags, service tags, and OEM strings. In earlier releases, this information comes from the Common Information Model (CIM) service, but in ESXi600-201807401, it comes directly from the SMBIOS.

  • A new SATP claim rule of the Native Multipathing Plugin (NMP) for Huawei XSG1 arrays achieves optimal performance as it sets SATP to VMW_SATP_ALUA, PSP to VMW_PSP_RR and selects tpgs_on as default.

  • The sfcb-vmware_bas process might fail due to major memory leaks caused by VmkCtl::Hardware::PciDeviceImpl.

  • When you enable the Network Health Check of VDS, some ESXi hosts might fail with a purple diagnostic screen. 

  • If you set the memory of a virtual machine to full reservation mode during a hot plug and the hot plug fails, the memory reservation mode might not revert to its previous state. As a result, you might not migrate the virtual machine to a host that has less memory than the source host.

  • ESXi hosts might lose connectivity due to an I/O exception in a driver of Cisco Unified Computing System Virtual Interface Card Fibre Channel over Ethernet Host Bus Adapters (Cisco VIC FCoE HBA) that might cause the ESXi host to lose all paths on the associated HBA.

  • When you configure an ESXi host to use an IPv6 DNS server, but later you disable the IPv6 protocol on the host, the user world processes dependent on name server queries, such as the vmsyslogd service, might fail.

  • One or more device drivers might fail to load, and the affected devices cannot be used. The vmkernel.log file contains an error message similar to VMK_ISA: 55: failed to map isaIRQ 7 to a intrCookie.

  • The following warning message might continue to appear after each host restart:

    No coredump target has been configured. Host core dumps cannot be saved.

    The message might appear even after you disable the core dump warning with an advanced setting configuration UserVars.SuppressCoredumpWarning=1.
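
    A minimal sketch of suppressing the warning from the ESXi Shell, assuming the advanced option path /UserVars/SuppressCoredumpWarning:

    esxcli system settings advanced set -o /UserVars/SuppressCoredumpWarning -i 1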

  • If an NFS datastore is configured as a syslog datastore and if the ESXi host disconnects from it, logging to the datastore stops and the ESXi host might become unresponsive.

  • This fix sets SATP to VMW_SATP_DEFAULT_AA and PSP to VMW_PSP_RR, with the Round Robin limit set to 10 I/O operations, as default for SolidFire SSD SAN array models.

  • If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or I/O filters enabled, the migration might cause issues with any of these features. During the migration, the file device filters might not be transferred correctly to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption when cache I/O filters are configured in write-back mode.

  • If you manually add settings to the NTP configuration and you update NTP by using the vSphere Web Client, the manually configured settings might be deleted from the ntp.conf file. With this fix, NTP updates preserve all settings, including restrict options, the driftfile, and manually added servers, in the ntp.conf file. If you manually modify the ntp.conf file, you must restart hostd to propagate the updates.
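
    For example, a minimal sketch of the manual workflow from the ESXi Shell:

    vi /etc/ntp.conf            # apply the manual changes to the NTP configuration
    /etc/init.d/hostd restart   # restart hostd so that the manual edits are propagated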

  • If you use the vSphere Web Client to track which virtual disks are associated with virtual machine snapshots, the search might display incorrect backing information for virtual disks associated with one or more virtual machine snapshots. The virtual disk backing file names might refer to a non-existent VMDK.

  • If multiple targets are connected to one HBA port and one of the targets loses connectivity, commands by the same HBA to other targets might also fail. This might cause downtime on volumes from other arrays.

  • Virtual machines configured with one or more PCI passthrough devices and more than 1 TB of memory might fail to power on due to a memory heap limit. The problem also occurs when multiple virtual machines, each with less than 1 TB of memory and one or more PCI passthrough devices, are powered on concurrently and the sum of the memory configured across all virtual machines exceeds 1 TB.

  • If the cluster primary host in the preferred fault domain goes down, the witness host leaves the cluster until the backup primary host assumes the role of the cluster primary. During this period, virtual machines running on the cluster might temporarily become inaccessible.

  • A race condition might occur while replacing a magnetic disk, and if it is followed by a rescan operation, the ESXi host might fail. You might see a purple diagnostic screen on the host.

  • If the ESXi syslog service vmsyslogd fails to reach the remote syslog server at start or restart, vmsyslogd might not forward logs to the remote server when it becomes available, unless you manually restart the syslog service.
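
    One way to perform the manual restart, as a minimal sketch from the ESXi Shell:

    esxcli system syslog reload   # reinitialize vmsyslogd so that it reconnects to the remote syslog server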

  • While repairing non-compliant objects, vSAN does not reuse the disk that holds the absent components, even when the disk has become healthy. vSAN assumes that an absent component must reside on an unhealthy disk. This might not be true if the issue is with the disk driver or firmware, which can be resolved after an upgrade. vSAN does not use such disks to place the replacement components created as part of the repair.

  • An ESXi host might disconnect from the vCenter Server system due to a hostd failure caused by problems with the sfcc library. The library exits or cancels the process when it cannot handle some data.

  • If you attempt to add a host with 192 or more CPUs, the calculated heap size might exceed the VMkernel limit and cause the heap creation to fail. The heap calculation in the Distributed Object Manager (DOM) is based on the number of CPUs available in the system. When DOM cannot allocate heap, vSAN initialization on that host fails.

    You might see the following message: Out of memory error.

  • When you expand the size of a VMDK on a host in a vSAN cluster, DOM might spend too much processing time on objects with concatenated components. This can cause delays in running other operations, and you might experience very high I/O latency.

  • A snapshot VMDK in a virtual machine disk chain can be deleted when the virtual machine is in powered-on state, causing data loss. If this happens, the virtual machine fails, even when you power off and then power on the virtual machine. Snapshot VMDKs that might be deleted are not the topmost ones in the snapshot chain.

  • vSAN is designed for best performance with I/O workloads aligned to a 4 KB offset and length. Some unavoidable overheads can occur when I/Os are not aligned properly. In some specific cases, unaligned sequential write operations might lead to interdependency between I/Os and cause some inefficiency.

  • When you log off from VMware Horizon, if you have a vSphere configuration with vCenter Server version 6.5.x and ESXi version 6.0.x, you might see some idle virtual machine folders or vmsn files in instant clone pools.

  • The cluster Profile Compliance status for a vSAN cluster is always listed as Not Compliant. The logic behind the cluster profile compliance check fails to treat the vSAN datastore as a shared datastore.

  • During the initial phase of a resync operation for an object, if the disk that stores metadata for the object goes bad, the host might fail with a purple diagnostic screen.

  • Virtual machines with brackets ([]) in their name might not deploy successfully. For more information, see VMware Knowledge Base article 53648.

  • The SNMP agent delivers an unnecessary vmwVmPoweredOn trap when a virtual machine is selected under the Summary tab in the vSphere Web Client. For more information, see VMware Knowledge Base article 54778.

  • If you enable the NetFlow network analysis tool and set the Sampling rate to 0 to sample every packet on a vSphere Distributed Switch port group, network latency might reach 1000 ms when flows exceed 1 million. This issue is resolved in this release, but you can further optimize NetFlow performance by setting the ipfixHashTableSize parameter of the ipfix module to 65536 by using the CLI, as shown below. You must reboot the ESXi host to complete the task.
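
    A minimal sketch of applying the module parameter from the ESXi Shell, assuming the module name ipfix as given above (the host must be rebooted for the change to take effect):

    esxcli system module parameters set -m ipfix -p "ipfixHashTableSize=65536"
    esxcli system module parameters list -m ipfix   # confirm the value that takes effect after the reboot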

  • This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.
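
    To confirm which claim rule applies to these arrays after patching, a minimal sketch (the grep filter is illustrative):

    esxcli storage nmp satp rule list | grep -i dell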

  • An ESXi host might fail with a purple screen during shutdown or power off of a virtual machine if you use EMC RecoverPoint because of a race condition in the vSCSI filter tool.

  • A stateless ESXi host booted by vSphere Auto Deploy with a Host Profile that contains configuration for a vSphere Distributed Switch, Active Directory permission and enabled Stateless Caching to USB, might fail to perform stateless caching to a USB.

  • During a non-disruptive upgrade (NDU), some paths might enter a permanent device loss (PDL) state. Such paths remain unavailable even after the upgrade. As a result, the device might lose connectivity.

  • The code to compute cache reservation does not provide accurate space reservation accounting. In some cases, when you increase the cache reservation, the cache lines are not reserved. This causes an underflow in read cache reservation, as components are being deleted. The vSAN health service reports very high read cache reservation values.

  • The vCenter Server Agent (vpxa) service might fail during a copy operation of the NFC protocol due to an invalid error code.

  • If you add a passthrough device to a virtual machine and select the Expose hardware assisted virtualization to OS guest option in the virtual machine settings, when the virtual machine boots, the ESXi host might fail without an error message.
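
    For reference, a sketch of the virtual machine configuration entry that corresponds to this option; you can review it in the advanced configuration parameters or, with the virtual machine powered off, in the .vmx file:

    vhv.enable = "TRUE"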

  • If the Intelligent Platform Management Interface (IPMI) becomes unresponsive, the hardware status monitoring tab in the Host Client UI might display a blank screen. While the IPMI driver is unresponsive, the corresponding heap memory might be exhausted.

  • A vmklinux driver that creates millions of heap chunks might overload CPUs while collecting heap stats. This might cause the ESXi host to fail with a purple diagnostic screen. If you use the vm-support command for heap stats collection, this might also cause the ESXi host to fail with a purple diagnostic screen and a panic message TLB invalidation timeout. For example, @BlueScreen: PCPU 40 locked up. Failed to ack TLB invalidate.

  • When you press Enter to reboot an ESXi host and complete a new installation from a USB media drive on some HP servers, you might see a purple diagnostic screen due to a rare race condition.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

You can also update ESXi hosts by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, you can update the system by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
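
For example, a minimal sketch of an offline-bundle update from the ESXi Shell, assuming the downloaded depot file is named ESXi600-201807001.zip and is stored on datastore1 (place the host in maintenance mode first and reboot afterwards):

    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi600-201807001.zip

Or, to apply the complete image profile named in this article:

    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201807001.zip -p ESXi-6.0.0-20180704001-standard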