VMware ESXi 6.5, Patch Release ESXi-6.5.0-20180502001-standard

Article ID: 325314

Products

VMware vCenter Server

Issue/Introduction

Profile Name: ESXi-6.5.0-20180502001-standard
Build: For build information, see KB 53097.
Vendor: VMware, Inc.
Release Date: May 3, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware_bootbank_vsan_6.5.0-2.50.8064065
  • VMware_bootbank_vsanhealth_6.5.0-2.50.8143339
  • VMware_bootbank_esx-base_6.5.0-2.50.8294253
  • VMware_bootbank_esx-tboot_6.5.0-2.50.8294253
  • VMW_bootbank_qedentv_2.0.6.4-8vmw.650.2.50.8294253
  • VMW_bootbank_vmkusb_0.1-1vmw.650.2.50.8294253
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.650.2.50.8294253
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.650.2.50.8294253
  • VMW_bootbank_vmw-ahci_1.1.1-1vmw.650.2.50.8294253
  • VMW_bootbank_nhpsa_2.0.22-3vmw.650.2.50.8294253
  • VMW_bootbank_nvme_1.2.1.34-1vmw.650.2.50.8294253
  • VMW_bootbank_smartpqi_1.0.1.553-10vmw.650.2.50.8294253
  • VMW_bootbank_lsi-msgpt35_03.00.01.00-9vmw.650.2.50.8294253
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.650.2.50.8294253
  • VMW_bootbank_lsi-mr3_7.702.13.00-3vmw.650.2.50.8294253
  • VMW_bootbank_bnxtnet_20.6.101.7-11vmw.650.2.50.8294253
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-2.50.8294253
  • VMW_bootbank_i40en_1.3.1-19vmw.650.2.50.8294253
  • VMW_bootbank_misc-drivers_6.5.0-2.50.8294253
  • VMW_bootbank_lsi-msgpt2_20.00.01.00-4vmw.650.2.50.8294253
  • VMW_bootbank_ne1000_0.8.3-7vmw.650.2.50.8294253
  • VMW_bootbank_ixgben_1.4.1-12vmw.650.2.50.8294253
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-6vmw.650.2.50.8294253
  • VMW_bootbank_lpfc_11.4.33.1-6vmw.650.2.50.8294253
  • VMW_bootbank_brcmfcoe_11.4.1078.0-8vmw.650.2.50.8294253
  • VMW_bootbank_usbcore-usb_1.0-3vmw.650.2.50.8294253
  • VMware_bootbank_esx-xserver_6.5.0-2.50.8294253
  • VMware_locker_tools-light_6.5.0-1.47.8285314
PRs Fixed:
1882261, 1907025, 1911721, 1913198, 1913483, 1919300, 1923173, 1937646, 1947334, 1950336, 1950800, 1954364, 1958600, 1962224, 1962739, 1971168, 1972487, 1975787, 1977715, 1982291, 1983123, 1983511, 1986020, 1987996, 1988248, 1995640, 1996632, 1997995, 1998437, 1999439, 2004751, 2006929, 2007700, 2008663, 2011392, 2012602, 2015132, 2016411, 2016413, 2018046, 2019185, 2019578, 2020615, 2020785, 2021819, 2023740, 2023766, 2023858, 2024647, 2028736, 2029476, 2032048, 2033471, 2034452, 2034625, 2034772, 2034800, 2034930, 2039373, 2039896, 2046087, 2048702, 2051775, 2052300, 2053426, 2054940, 2055746, 2057189, 2057918, 2058697, 2059300, 2060589, 2061175, 2064616, 2065000, 1998108, 1948936, 2004950, 2004948, 2004949, 2004957, 2004956, 2004947, 1491595, 2004946, 2004949, 2015606, 2036299, 2053073, 2053073, 2053703, 2051280, 2019185, 1994373
Related CVE numbers: N/A


Environment

VMware vCenter Server 6.5.x

Resolution

Summaries and Symptoms

This patch updates the following issues:

  • Third-party storage array user interfaces might not properly display relationships between virtual machine disks and virtual machines when one disk is attached to many virtual machines, due to insufficient metadata.

  • Sometimes, when you generate a support bundle for an ESXi host by using the Host Client, the support bundle might not contain all log files.

  • You cannot modify the vSAN iSCSI parameter VSAN-iSCSI.ctlWorkerThreads, because the parameter is designed as an internal parameter and is not subject to change. This fix hides the internal vSAN iSCSI parameters to avoid confusion.

  • If you export a Host Profile from one vCenter Server system and import it to another vCenter Server system, compliance checks might show a status Unknown.

  • In the vSphere Web Client, the option to enable or disable checks during unresolved volume queries with the VMFS.UnresolvedVolumeLiveCheck option, under Advanced Settings > VMFS, might be grayed out.

  • If the scratch partition of an ESXi host is located on a vSAN datastore, the host might become unresponsive: if the datastore becomes temporarily inaccessible, userworld daemons such as hostd might be blocked while trying to access it. With this fix, scratch partitions on a vSAN datastore can no longer be configured through the API or the user interface.

  • Hostd might stop responding when attempting to reconfigure virtual machines. The issue is caused by configuration entries with an unset value in the ConfigSpec object, which encapsulates the configuration settings used to create and configure virtual machines.

  • vSphere vMotion might fail due to a timeout, because virtual machines might need a larger buffer to store the shared memory page details used by 3D or graphics devices. The issue is observed mainly on virtual machines running graphics-intensive workloads or working in virtual desktop infrastructure environments. With this fix, the buffer size is increased four times for 3D virtual machines, and if the increased capacity is still not enough, the virtual machine is powered off.

  • The netdump IP address might be lost after you upgrade from ESXi 5.5 Update 3b to ESXi 6.0 Update 3 or ESXi 6.5 Update 2.

  • System identification information consists of asset tags, service tags, and OEM strings. In earlier releases, this information comes from the Common Information Model (CIM) service. Starting with vSphere 6.5, the CIM service is turned off by default unless third-party CIM providers are installed, which prevents ESXi system identification information from being displayed in the vSphere Web Client.
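
    On a host where this information is needed before the fix is applied, the CIM (WBEM) service can be turned back on manually. A minimal sketch, assuming ESXi Shell or SSH access to the host; command names are as exposed by esxcli in ESXi 6.5:

    # Check whether the CIM (WBEM) service is currently enabled
    esxcli system wbem get

    # Enable and start the service so system identification data is available again
    esxcli system wbem set --enable true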

  • When you create virtual machine disks with the VMware Integrated OpenStack block storage service (Cinder), the disks get random UUIDs. In around 0.5% of the cases, the virtual machines do not recognize these UUIDs, which results in unexpected behavior of the virtual machines.

  • When you upgrade a virtual machine to macOS 10.13, the virtual machine might fail to locate a bootable operating system. If the macOS 10.13 installer determines that the system volume is located on a Solid State Disk (SSD), during the OS upgrade, the installer converts the Hierarchical File System Extended (HFS+/HFSX) to the new Apple File System (APFS) format. As a result, the virtual machine might fail to boot.

  • You might not be able to apply a host profile, if the host profile contains a DNS configuration for a VMkernel network adapter that is associated to a vSphere Distributed Switch.

  • If you enable VLAN and maximum transmission unit (MTU) checks in vSphere Distributed Switches, your ESXi hosts might fail with a purple diagnostic screen due to a possible deadlock.  

  • An ESXi host might fail with a purple diagnostic screen due to invalid network input in the error message handling of vSphere Fault Tolerance, which results in an Unexpected message type error.

  • When you run vSphere Fault Tolerance in vSphere 6.5 and vSphere 6.0, virtual machines might fail with an error such as Unexpected FT failover due to PANIC error: VERIFY bora/vmcore/vmx/main/monitorAction.c:598. The error occurs when virtual machines use PVSCSI disks and multiple virtual machines using vSphere FT are running on the same host.

  • An ESXi host might fail with a purple diagnostic screen if the file system journal allocation fails while opening a volume, due to a space limitation or an I/O timeout, and the volume is opened as read-only.

  • When you add more than 128 multicast IP addresses to an ESXi host and try to retrieve information about the multicast group, the ESXi host might fail with a purple diagnostic screen and an error message such as: PANIC bora/vmkernel/main/dlmalloc.c:4924 - Usage error in dlmalloc.

  • Backup solutions might produce a high rate of Map Disk Region tasks, causing hostd to run out of memory and fail.

  • An ESXi host might fail with a purple diagnostic screen due to a rare race condition between file allocation paths and volume extension operations.

  • If you use AUTH_SYS, KRB5 or KRB5i authentication to mount NFS 4.1 shares, the mount might fail. This happens when the Generic Security Services type is not the same value for all the NFS requests generated during the mount session. Some hosts, which are less tolerant of such inconsistencies, fail the mount requests.

  • For some devices, such as CD-ROM drives, where the maximum queue depth is less than the default value of the number of outstanding I/Os with competing worlds, the command esxcli storage core device list might show the maximum number of outstanding I/O requests as exceeding the maximum queue depth. With this fix, the maximum value of No of outstanding IOs with competing worlds is restricted to the maximum queue depth of the storage device on which you make the change, as shown in the sketch below.
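
    A minimal sketch of inspecting and changing the per-device value with esxcli, assuming a device identifier of naa.xxxxxxxx taken from the list output:

    # Show devices together with their current "No of outstanding IOs with competing worlds" value
    esxcli storage core device list

    # Change the value for one device; with this fix the effective value is capped
    # at the device's maximum queue depth
    esxcli storage core device set --device naa.xxxxxxxx --sched-num-req-outstanding 32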

  • The Small-Footprint CIM Broker (SFCB) might fail during large-scale snapshot operations, producing a core dump file similar to sfcb-vmware_bas-zdump.XXX in /var/core/ and causing ESXi hosts to become unresponsive.

  • An ESXi host might fail with an error kbdmode_set:519: invalid keyboard mode 4: Not supported due to an invalid keyboard input during the shutdown process.

  • The sfcb-hhrc process might stop working, causing the hardware monitoring service sfcbd on the ESXi host to become unavailable.

  • When you add an ESXi host to an Active Directory domain, hostd might become unresponsive due to Likewise services running out of memory and the ESXi host disconnects from the vCenter Server system.

  • PXE boot of virtual machines with 64-bit Extensible Firmware Interface (EFI) might fail when booting from Windows Deployment Services on Windows Server 2008, because PXE boot discovery requests might contain an incorrect PXE architecture ID. This fix sets a default architecture ID in line with the processor architecture identifier for x64 UEFI that the Internet Engineering Task Force (IETF) stipulates. This release also adds a new advanced configuration option, efi.pxe.architectureID = <integer>, to facilitate backwards compatibility. For instance, if your environment is configured to use architecture ID 9, you can add efi.pxe.architectureID = 9 to the advanced configuration options of the virtual machine.
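
    A minimal sketch of the configuration entry as it would appear in the virtual machine's .vmx file (or as a key/value pair under VM Options > Advanced > Configuration Parameters in the vSphere Web Client), assuming architecture ID 9 as in the example above:

    efi.pxe.architectureID = "9"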

  • If a backend failover of a SAN storage controller takes place after a vMotion operation of MSCS clustered virtual machines, shared raw device mapping devices might go offline.

  • An ESXi host might fail with a purple diagnostic screen in the process of collecting information for statistical purposes due to stale TCP connections.

  • ESXi VMFS5 and VMFS6 file systems with certain volume sizes might fail to expand due to an issue with the logical volume manager (LVM). A VMFS6 volume fails to expand with a size greater than 16 TB.

  • The Syslog Service of ESXi hosts might stop transferring logs to remote log servers when a remote server is down, and does not resume transfers after the server is back up; see the workaround sketch below.
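
    On builds without this fix, a commonly used workaround once the remote log server is reachable again is to reload the syslog service; a minimal sketch:

    # Reload the ESXi syslog daemon so it re-establishes connections to remote log servers
    esxcli system syslog reload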

  • SATP ALUA might not properly handle SCSI sense code 0x2 0x4 0xa, which translates to the alert LOGICAL UNIT NOT ACCESSIBLE, ASYMMETRIC ACCESS STATE TRANSITION. This causes a premature all-paths-down state that might lead to VMFS heartbeat timeouts and ultimately to performance drops. With this fix, you can use the advanced configuration option /Nmp/Nmp Satp Alua Cmd RetryTime and increase the timeout value to 50 seconds, as in the sketch below.
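
    A minimal sketch of setting the option with esxcli, assuming the option path is the name quoted above written without spaces (/Nmp/NmpSatpAluaCmdRetryTime); confirm the exact path on your host before setting it:

    # Confirm the exact advanced option path on the host
    esxcli system settings advanced list | grep -i AluaCmdRetry

    # Increase the retry timeout to 50 seconds (option path assumed, see note above)
    esxcli system settings advanced set -o /Nmp/NmpSatpAluaCmdRetryTime -i 50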

  • EtherSwitch service modules might run out of memory when you configure an L3 mirror session with Encapsulated Remote Switched Port Analyzer (ERSPAN) type II or type III.

  • If an ESXi host is configured to use an Active Directory domain with more than 10,000 accounts, the host might intermittently lose connectivity.

  • An ESXi host might fail with a purple diagnostic screen with an error for locked PCPUs, caused by a spin count fault.

    You might see a log similar to this:

    2017-08-05T21:51:50.370Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T21:56:55.398Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T22:02:00.418Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T22:07:05.438Z cpu4:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T22:12:10.453Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T22:17:15.472Z cpu2:134108)WARNING: Hbr: 873: Failed to receive from 172.16.30.42 (groupID=GID-d6bb98fb-ace7-4793-9ee5-30545518a5a3): Broken pipe

    2017-08-05T16:32:08.539Z cpu67:67618 opID=da84f05c)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_NETSERVER-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

    2017-08-05T16:32:45.177Z cpu61:67618 opID=389fe17f)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_Way4SRV-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

    2017-08-05T16:33:16.529Z cpu66:68466 opID=db8b1ddc)FSS: 6229: Conflict between buffered and unbuffered open (file 'DC_UAT_FIleServer-000001-sesparse.vmdk'):flags 0x4002, requested flags 0x8

  • The VMkernel module that manages PCI passthrough devices has a dedicated memory heap that does not support virtual machines configured with more than 1 TB of memory and one or more PCI passthrough devices. Such virtual machines might fail to power on with a warning in the ESXi vmkernel log (/var/log/vmkernel.log) similar to WARNING: Heap: 3729: Heap mainHeap (2618200/31462232): Maximum allowed growth (28844032) too small for size (32673792). The issue also occurs when multiple virtual machines, each with less than 1 TB of memory and one or more PCI passthrough devices, are powered on concurrently and the sum of the memory configured across all virtual machines exceeds 1 TB.

  • When vSAN health service runs a Hardware compatibility check, the following warning message might appear: Timed out in querying HCI info. This warning is caused by slow retrieval of hardware information. The check times out after 20 seconds, which is less than the time required to retrieve information from the disks.

  • The syslog.log file is repeatedly populated with messages related to ImageConfigManager.

  • You might not be able to monitor the hardware health of an ESXi host through the CIM interface, because some processes in SFCB might stop working. This fix provides code enhancements to the sfcb-hhrc process, which manages third-party Host Hardware RAID Controller (HHRC) CIM providers and provides hardware health monitoring, and to the sfcb-vmware_base process.

  • HotAdd backup of a Windows virtual machine disk might fail if both the target virtual machine and the proxy virtual machine are CBRC-enabled, because the snapshot disk of the target virtual machine cannot be added to the backup proxy virtual machine.

  • In vSAN environments, the VMkernel logs, which record activities related to virtual machines and ESXi, might be flooded with the message Unable to register file system.

  • Attempts to create a host profile might fail with an error NoneType object has no attribute lower if you pass a null value parameter in the iSCSI configuration.

  • You might see packet loss of up to 15% and poor application performance across an L2 VPN network if a sink port is configured on the vSphere Distributed Switch but no overlay network is available.

  • The esxtop utility reports incorrect statistics for the average device latency per command (DAVG/cmd) and average ESXi VMkernel latency per command (KAVG/cmd) on VAAI-supported LUNs due to an incorrect calculation.

  • Distributed Object Manager processing operations might take too much time when an object has concatenated components. This might cause delays in other operations, which leads to high I/O latency.

  • Applications running parallel file I/O operations on a VMFS datastore might cause the ESXi host to stop responding.

  • When you customize a Host Profile to attach it to a host, some parameter values, including host name and MAC address, might be lost.

  • An ESXi host might fail with an error PF Exception 14 in world 67048:Res6Affinity IP due to a race condition between resource allocations and VMFS6 volume closing operation.

  • If VMware vSphere Network I/O Control is not enabled, packets from third-party networking modules might cause an ESXi host to fail with a purple diagnostic screen.

  • With ESXi 6.5 Update 2, you can use PacketCapture, a lightweight tcpdump utility implemented in the Reverse HTTP Proxy service, to capture and store only the minimum amount of data to diagnose a networking issue, saving CPU and storage. For more information, see KB 52843.

  • After an ESXi host boots, the VMkernel adapter that connects to virtual switches with an Emulex physical NIC might be unable to communicate through the NIC, and the VMkernel adapter might not get a DHCP IP address.

  • If you use vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration and display the storage allocation of the virtual machine as unchanged.

  • An ESXi host might not have access to the WSMan protocol due to failing CIM ticket authentication.

  • The ESXi daemon hostd might fail during backup due to a race condition.

  • Deleting files from a content library might cause a failure of the vCenter Server Agent (vpxa) service which results in failure of the delete operation.

  • Dell system management software might not display the BIOS attributes of ESXi hosts on Dell OEM servers, because the ESXi kernel module fails to load automatically during ESXi boot. This is due to different vendor names in the SMBIOS System Information of Dell and Dell OEM servers.

  • Under certain circumstances, the memory of hostd might exceed the hard limit and the agent might fail.

  • An ESXi host might fail with a purple diagnostic screen due to a rare race condition in iSCSI sessions while processing the scheduled queue and running queue during connection shutdown. The connection lock might be freed to terminate a task while the shutdown is still in progress, leaving the next node pointer invalid and causing the failure.

  • If a device in a disk group encounters a permanent disk loss, the physical log (PLOG) in the cache device might grow to its maximum threshold. This problem is due to faulty cleanup of log data bound to the lost disk. The log entries build up over time, leading to log congestion, which can cause virtual machines to become inaccessible.

  • If you are using HP controllers, vSAN health service might be unable to read the controller firmware information. The health check might issue a warning, with the firmware version listed as N/A.

  • When a vSAN cluster is managed by vRealize Operations, you might observe intermittent ping health check failures.

  • After vRealize Operations integration with the vSAN Management Pack is enabled, the vSAN health service might become less responsive. This problem affects large-scale clusters, such as those with 24 hosts, each with 10 disks. The vSAN health service uses an inefficient log filter for parsing some specific log patterns and can consume a large amount of CPU resources while communicating with vRealize Operations.

  • When adding the unicast configuration of cluster hosts, if the host's own local configuration is added, the host experiences a purple diagnostic screen error.

  • For each iSCSI target, the initiator establishes one or more connections, depending on whether multipathing is configured. For each connection, if there is only one outstanding I/O at a time and the network is very fast (ping time less than 1 ms), the IOPS on an iSCSI LUN might be much lower than on a native vSAN LUN.

  • When the instantiation of a vSAN object fails due to memory constraints, the host might experience a purple diagnostic screen error.

  • After a host loses its connection with vCenter Server, and without a manual retest, the vSAN health service might take 5 to 10 minutes to report the loss of connection.

  • A false alarm is reported for the cluster configuration consistency health check when vSAN encryption is enabled with two or more KMS servers configured. The KMS certificates fetched from the host and from vCenter Server do not match due to improper ordering.

  • During an upgrade to vCenter Server 6.5 Update 2, when vSAN applies the unicast configuration of other hosts in the cluster, a local configuration might be added on the host, resulting in a host failure with a purple diagnostic screen.

  • When you perform a vSphere vMotion operation on a virtual machine with more than 64 GB of memory (vRAM) on vSAN, the task might fail due to an error in swap file initialization on the destination.

  • When there are many vSAN components on disks, publishing vSAN storage during reboot might take a long time. The host is delayed from joining the vSAN cluster, so it might remain partitioned for a long time.

  • When an object spans across all available fault domains, vSAN is unable to replace a component that is impacted by a maintenance mode operation or a repair operation. If this happens, vSAN rebuilds the entire object, which can result in large amounts of resync traffic.

  • When vSAN Observer is running, memory leaks can occur in the host init group. Other operations that run under the init group might fail until you reboot the host.

  • Under certain conditions, when I/Os from the Distributed Object Manager (DOM) fail and are retried, the cluster might experience high latencies and slow performance. You might see the following trace logs:
    [33675821] [cpu66] [6bea734c OWNER writeWithBlkAttr5 VMDISK] DOMTraceOperationNeedsRetry:3630: {'op': 0x43a67f18c6c0, 'obj': 0x439e90cb0c00, 'objUuid': '79ec2259-fd98-ef85-0e1a-1402ec44eac0', 'status': 'VMK_STORAGE_RETRY_OPERATION'}

  • I/O performance of PCIe devices on ESXi might drop when the AMD IOMMU driver is enabled.

  • Virtual machines with brackets ([]) in their names might not deploy successfully. For more information, see KB 53648.

  • An ESXi host might fail with a purple diagnostic screen and a Page Fault error message when you attach a USB device. This happens because some USB devices report an invalid number of interface descriptors to the host.

  • Third-party CIM providers or agents might cause memory corruption that destabilizes the kernel or the drivers of an ESXi host, and the host might fail with a purple diagnostic screen. The memory corruption becomes visible at a later point, not at the exact time of the corruption.

  • The VMware Advanced Host Controller Interface (AHCI) driver is updated to version 1.1.1-1 to fix issues with memory allocation, the Marvell 9230 AHCI controller, and Intel Cannon Lake PCH-H SATA AHCI controller.

  • The nhpsa driver is updated to 2.0.22-1vmw.

  • ESXi 6.5 Update 2 adds management of multiple namespaces compatible with the NVMe 1.2 specification and enhanced diagnostic log.

  • ESXi 6.5 Update 2 enables smartpqi driver support for the HPE ProLiant Gen10 Smart Array Controller.

  • ESXi 6.5 Update 2 enables a new native driver to support the Broadcom SAS 3.5 IT/IR controllers with devices including combinations of NVMe, SAS, and SATA drives. The HBA device IDs are: SAS3516(00AA), SAS3516_1(00AB), SAS3416(00AC), SAS3508(00AD), SAS3508_1(00AE), SAS3408(00AF), SAS3716(00D0), SAS3616(00D1), SAS3708(00D2).

  • The lsi_msgpt3 driver is updated to version 16.00.01.00-1vmw.

  • The lsi_mr3 driver is updated to enable support for Broadcom SAS 3.5 RAID controllers. It also contains significant task management enhancements and critical bug fixes.

  • ESXi native drivers might fail to support more than eight message-signaled interrupts, even on devices with MSI-X, which allows a device to allocate up to 2048 interrupts. This issue is observed on newer AMD-based platforms and is due to default limits in the native lsi_msgpt2 and lsi_msgpt3 drivers.

  • The disk serviceability plug-in of the ESXi native driver for HPE Smart Array controllers, nhpsa, now works with an extended list of expander devices to enable compatibility with HPE Gen9 configurations.

  • In ESXi 6.5 Update 2, LightPulse and OneConnect adapters are supported by separate default drivers. The brcmfcoe driver supports OneConnect adapters and the lpfc driver supports only LightPulse adapters. Previously, the lpfc driver supported both OneConnect and LightPulse adapters.

  • When you press Enter to reboot an ESXi host and complete a new installation from a USB media drive on some HP servers, you might see an error on a purple diagnostic screen due to a rare race condition.

  • Some SR-IOV graphics configurations running ESXi 6.5 with GPU statistics enabled might cause virtual machines on the ESXi host to become unresponsive after extended host uptime, due to a memory leak. In some cases, the ESXi host might become unresponsive as well.

  • In VMFS-6 datastores, if you use the esxcli storage vmfs unmap command for manual reclamation of free space, it might only reclaim space from small file blocks and not process large file block resources.
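
    A minimal sketch of the manual reclamation command, assuming a datastore labeled Datastore01 (a hypothetical name); the reclaim unit count is optional:

    # Reclaim unused blocks on a VMFS datastore by volume label
    esxcli storage vmfs unmap -l Datastore01

    # Optionally control how many VMFS blocks are unmapped per iteration
    esxcli storage vmfs unmap -l Datastore01 -n 200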

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see Installing and Administering VMware vSphere Update Manager.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
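
As a minimal sketch of the image-profile method, assuming the downloaded patch bundle has been copied to a datastore path such as /vmfs/volumes/datastore1/ (the ZIP file name below is a placeholder; use the actual file name from the download page):

    # Place the host in maintenance mode before patching
    esxcli system maintenanceMode set --enable true

    # List the image profiles contained in the downloaded bundle
    esxcli software sources profile list -d /vmfs/volumes/datastore1/<patch-bundle>.zip

    # Apply the standard image profile from the bundle, then reboot the host
    esxcli software profile update -d /vmfs/volumes/datastore1/<patch-bundle>.zip -p ESXi-6.5.0-20180502001-standard
    reboot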