vSphere SSD and Flash Device Support (2145210)

This article provides guidance for SSD and flash device usage with vSphere: the basic requirements that must be met, as well as recommendations for specific use cases.

Target Audience
  • Customer: Ensure that vSphere hosts are populated with flash and SSD devices that meet the required size and endurance criteria set forth in Table 1 for the various use cases. For devices used for the coredump use case on hosts with a large amount of system memory, or when vSAN is in use, also ensure that the device size matches the guidance in Table 2 and that the actual size of the coredump partition is adequate.

  • System Vendor: Ensure that servers certified for vSphere use supported flash and SSD devices that meet the required size and endurance criteria set forth in Table 1 for the various use cases. For systems with large memory, as well as for vSAN Ready Nodes, vendors should size flash or SSD devices used for the coredump use case as specified in Table 2 to ensure adequate operation in the event of a system crash, and should ensure that the coredump partition is correctly sized, as default settings may need to be overridden.

  • SSD Vendor: Ensure that your SSD and flash drive types meet the required endurance criteria set forth in Table 1 for the various use cases.


VMware vSphere ESXi can use locally attached SSDs (solid-state disks) and flash devices in multiple ways. Since SSDs offer much higher throughput and much lower latency than traditional magnetic hard disks, the benefits are clear. While offering lower throughput and higher latency, flash devices such as USB or SATADOM can also be appropriate for some use cases. The potential drawback to using SSD and flash device storage is that its endurance can be significantly less than that of traditional magnetic disks, and endurance can vary based on the workload type as well as factors such as the drive capacity, the underlying flash technology, and so on.

This KB outlines the minimum SSD and flash device recommendations based on different technologies and use case scenarios.

SSD and Flash Device Use Cases

A non-exhaustive survey of usage models in a vSphere environment is listed below.
  • Host swap cache
    • This usage model has been supported since vSphere 5.1 for SATA and SCSI connected SSDs. USB and low end SATA or SCSI flash devices are not supported.
    • The workload is heavily influenced by the degree of host memory over commitment.
  • Regular datastore
    • A (local) SSD is used instead of a hard disk drive.
    • This usage model has been supported since vSphere 6.0 for SATA and SCSI connected SSDs.
    • There is currently no support for USB connected SSDs or for low end flash devices regardless of connection type.

  • vSphere Flash Read Cache (aka Virtual Flash)
    • This usage model has been supported since vSphere 5.5 for SATA and SCSI connected SSDs.
    • There is no support for USB connected SSDs or for low end flash devices.

  • vSAN
  • vSphere ESXi Boot Disk
    • A USB flash drive, SATADOM, or local SSD can be chosen as the installation target for ESXi, the vSphere hypervisor, which then boots from the flash device.
    • This usage model has been supported since vSphere 3.5 for USB flash devices and vSphere 4.0 for SCSI/SATA connected devices.
    • Installation to SATA and SCSI connected SSD, SATADOM and flash devices creates a full install image which includes a logging partition (see below) whereas installation to a USB device creates a boot disk image without a logging partition.

  • vSphere ESXi Coredump device
    • The default size for the coredump partition is 2.5 GiB, which is about 2.7 GB, and the installer creates a coredump partition on the boot device for vSphere 5.5 and above. After installation the partition can be resized if necessary using partedUtil. For more information, see the vSphere documentation.
    • Any SATADOM or SATA/SCSI SSD may be configured with a coredump partition.
    • This usage model has been supported since vSphere 3.5 for boot USB flash devices and since vSphere 4.0 for any local SATA or SCSI connected SSD.
    • This usage model also applies to Autodeploy hosts which have no boot disk.

  • vSphere ESXi Logging device
    • A SATADOM or local SATA/SCSI SSD is chosen as the location for the vSphere logging partition (/scratch partition). This partition may be, but need not be, on the boot disk; this also applies to Autodeploy hosts, which lack a boot disk.
    • This usage model has been supported since vSphere 6.0 for any SATA or SCSI connected SSD that is local. SATADOMs that meet the requirement set forth in Table 1 are also supported.
    • This usage model can be supported in a future release of vSphere for USB flash devices that meet the requirement set forth in Table 1.

SSD Endurance Criteria

The flash industry often uses Terabytes Written (TBW) as a benchmark for SSD endurance. TBW is the number of terabytes that can be written to the device over its useful life. Most devices have distinct TBW ratings for sequential and random IO workloads, with the latter being much lower due to the Write Amplification Factor (WAF), defined below. Other commonly used measures of endurance are DWPD (Drive Writes Per Day) and P/E (Program/Erase) cycles.

Conversion formulas are provided here:
  • Converting DWPD (Drive Writes Per Day) to TBW (Terabytes Written):
    • TBW = DWPD * Warranty (in Years) * 365 * Capacity (in GB) / 1,000 (GB per TB)

  • Converting Flash P/E Cycles per Cell to TBW (Terabytes Written):
    • TBW = Capacity (in GB) * (P/E Cycles per Cell) / (1,000 (GB per TB) * WAF)

WAF is a measure of the induced writes caused by inherent properties of flash technology. Due to the difference between the storage block size (512 bytes), the flash cell size (typically 4 KiB or 8 KiB), and the minimum flash erase size of many cells, one write can force a number of induced writes due to copies, garbage collection, and so on. For sequential workloads, typical WAFs fall in the range of single digits, while for random workloads WAFs can approach or even exceed 100. Table 1 contains workload characterizations for the various workloads, excepting the Datastore and vSphere Flash Read Cache workloads, which depend on the characteristics of the virtual machine workloads being run and thus cannot be characterized here. A WAF from the table can be used with the above P/E-to-TBW formula.
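As an illustration, the two conversion formulas above can be expressed directly in code. The drive ratings in the example are hypothetical and are not taken from this article:

```python
def dwpd_to_tbw(dwpd, warranty_years, capacity_gb):
    """TBW = DWPD * warranty (years) * 365 * capacity (GB) / 1,000 (GB per TB)."""
    return dwpd * warranty_years * 365 * capacity_gb / 1000.0

def pe_cycles_to_tbw(capacity_gb, pe_cycles, waf):
    """TBW = capacity (GB) * P/E cycles per cell / (1,000 (GB per TB) * WAF)."""
    return capacity_gb * pe_cycles / (1000.0 * waf)

# Hypothetical 480 GB drive rated for 1 DWPD over a 5-year warranty:
print(dwpd_to_tbw(1, 5, 480))          # 876.0 TBW

# The same capacity with 3,000 P/E cycles per cell under a random
# workload with an assumed WAF of 10:
print(pe_cycles_to_tbw(480, 3000, 10)) # 144.0 TBW
```

Note how strongly the assumed WAF drives the random-workload result; this is why the random TBW rating is so much lower than the sequential one.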

SSD Selection Requirements

Performance and endurance are critical factors when selecting SSDs. For each of the above use cases, the amount and frequency of data written to the SSD or flash device determines the minimum performance and endurance required by ESXi. In general, SSDs can be deployed in all of the above use cases, but (low end) flash devices, including SATADOM, can only be deployed in some. In the table below, ESXi write endurance requirements are stated in terms of Terabytes Written (TBW) for a JEDEC random workload. There are no specific ESXi performance requirements, but products built on top of ESXi, such as vSAN, may have their own requirements.

Table 1: SSD/Flash Endurance Requirements

SSD/Flash Device Use Case | JEDEC Endurance Requirement | Workload Characterization | Notes
Host Swap Cache | 365 TBW or better | Random, infrequent writes | Host memory rarely overcommitted
Host Swap Cache | 3650 TBW or better | Random, frequent writes | Host memory routinely overcommitted
Regular Datastore | 3650 TBW or better [1] | Virtual machine workload dependent | Size >= 1 TB needs more endurance
vSphere Flash Read Cache (VFlash) | 365 TBW or better | Virtual machine workload dependent | Size <= 4 TB
ESXi Boot Device | 0.5 TBW minimum [2], 2 TBW recommended [2][6] | Sequential (WAF < 10) | Size >= 4 GB [3]
ESXi Coredump Device | 0.1 TBW minimum [2][4] | Extremely sequential (WAF ~1) | Size >= 4 GB [3][4]
ESXi Logging Device | 64 TBW (dedicated device), 128 TBW (co-located) [2][5] | WAF < 100 block mode, WAF < 10 page mode | Size >= 4 GB [2][3]
    1. For SSD sizes over 1 TB, the endurance should grow proportionally (e.g., 7300 TBW for 2 TB).
    2. Endurance requirement normalized to JEDEC random for an inherently sequential workload.
    3. Only 4 GB of the device is used, so a 16 GB device need only support 25% as many P/E cycles.
    4. The default coredump partition size is 2.7 GB. See Table 2 for detailed size requirements. When boot and coredump devices are co-located, the boot device endurance requirement suffices.
    5. Failure of the ESXi boot and/or coredump devices is catastrophic for vSphere, hence the higher requirement as an extra margin of safety when the logging device is co-located with one or both [a].
    6. A future release of vSphere may require higher TBW for its boot device. A 2 TBW endurance requirement is highly recommended for forward-looking systems.
    7. For specific details of the ESXi logging workload characteristics and its normalization to the JESD219 random workload, contact VMware.
    Important: ALL of the TBW requirements in Table 1 are stated in terms of the JEDEC Enterprise Random Workload [b] because vendors commonly publish only a single endurance number, the random TBW. Vendors may provide a sequential number on request; that number, together with a measured or worst-case WAF, can be used to calculate an alternative sequential TBW if the total workload writes over 5 years are known [c]. Failure of the boot or coredump device is catastrophic for vSphere, so VMware requires use of the random TBW requirement for the boot and coredump use cases.
    a. Co-located refers to the case where two use cases are partitions on the same device, thereby sharing flash cells.
    b. See JESD218A and JESD219 for the Endurance Test Method and Enterprise Workload definitions, respectively.
    c. Contact VMware for detailed instructions to compute the alternative sequential TBW for the logging use case.
ESXi Coredump Device Usage Model

The size requirement for the ESXi coredump device scales with the size of host DRAM and with the usage of vSAN. vSphere ESXi installations with an available local datastore are advised to use dump-to-file, which automatically reserves the needed space on the local datastore. However, flash media in general, and installations using vSAN in particular, often lack a local datastore and thus require a coredump device. While the default size of 2560 MiB suffices for a host with 1 TiB of DRAM not running vSAN, the default size is almost always insufficient when vSAN is in use.

Table 2 gives the recommended partition size in units of MiB and the corresponding flash drive size recommendation. If these recommendations are ignored and ESXi crashes, the coredump may be truncated. The footnotes explain the calculation; note that when using vSAN, the values from the right side of the table must be added to those from the left side. To override the default or to change the coredump partition size after installation, consult the vSphere release documentation or contact VMware.

Table 2: Coredump Partition Size Parameter and Size Requirement [4] as a Function of both Host DRAM Size and (if applicable) vSAN Caching Tier Size

Base vSphere ESXi system:
DRAM Size | Partition Size Parameter (MiB) | Flash Size Requirement [2]
1 TiB | 2560 [1] | 4 GB
2 TiB | 5120 | 8 GB
4 TiB | 10240 | 16 GB
6 TiB | 15360 | 32 GB
8 TiB | 20480 | 32 GB
12 TiB | 30720 | 64 GB

Addition when using vSAN [1]:
Caching Tier Size | Partition Size Parameter Delta (MiB) [3] | Flash Size Requirement Delta [2][3]
500 GB | +2048 [1] | +4 GB
1 TB | +2048 | +4 GB
2 TB | +3072 | +4 GB
4 TB | +5120 | +8 GB
8 TB | +9216 | +16 GB
16 TB | +17408 | +32 GB

      1. 2560 is the default, so no parameter is required for systems without vSAN with up to 1 TiB of DRAM, or with vSAN with up to 512 GiB of DRAM and 250 GB of SSDs in the Caching Tier.
      2. Due to GiB to GB conversion, the 6 and 12 TiB DRAM sizes require the next larger flash device to accommodate the coredump partition. The provided sizes will also accommodate co-locating the boot device and the coredump device on the same physical flash drive.
      3. Sizes in these columns must be added to sizes from the left-hand side of the table. For example, a host with 4 TiB of DRAM and 4 TB of SSD in the vSAN Caching Tier requires a flash device size of at least 24 GB (16 GB + 8 GB) and a coredump partition size of 15360 MiB (10240 + 5120).
      4. Coredump device usage is very infrequent, so the TBW requirement is unchanged from Table 1.
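The additive sizing rule described in footnote 3 can be sketched as a simple lookup over the Table 2 values. The function name and dictionary layout here are illustrative, not part of any VMware tooling:

```python
# Table 2 values: (coredump partition size in MiB, flash device size in GB).
# Base table is keyed by host DRAM size in TiB; the vSAN delta table is
# keyed by caching tier size in TB (0.5 means 500 GB).
BASE = {1: (2560, 4), 2: (5120, 8), 4: (10240, 16),
        6: (15360, 32), 8: (20480, 32), 12: (30720, 64)}
VSAN_DELTA = {0.5: (2048, 4), 1: (2048, 4), 2: (3072, 4),
              4: (5120, 8), 8: (9216, 16), 16: (17408, 32)}

def coredump_sizing(dram_tib, vsan_cache_tb=None):
    """Return (partition size in MiB, flash size in GB) per Table 2.

    Per footnote 3, the vSAN deltas are added to the base values.
    """
    part_mib, flash_gb = BASE[dram_tib]
    if vsan_cache_tb is not None:
        delta_mib, delta_gb = VSAN_DELTA[vsan_cache_tb]
        part_mib += delta_mib
        flash_gb += delta_gb
    return part_mib, flash_gb

# Footnote 3's worked example: 4 TiB DRAM + 4 TB vSAN caching tier.
print(coredump_sizing(4, 4))  # (15360, 24)
```

For DRAM or caching tier sizes between table rows, rounding up to the next listed row is the conservative choice.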

VMware Support Policy

In general, if the SSD's host controller interface is supported by a certified IOVP driver, then the SSD is supported for ESXi, provided that the media meets the endurance requirements above. There are therefore no specific vSphere restrictions against SATADOM and M.2 devices, provided, again, that they adhere to the endurance requirements set forth in Table 1 above.

For USB storage devices (such as flash drives, SD cards plus readers, and external disks of any kind), the drive vendor must work directly with system manufacturers to ensure that the drives are supported for those systems. USB flash devices and SD cards plus readers are qualified pairwise with USB host controllers, and it is possible for a device to fail certification with one host controller but pass with another. VMware strongly recommends that customers who do not have a preinstalled system either obtain a USB flash drive directly from their OEM vendor or purchase a model that has been certified for use with their server.

Update History

01/23/2017 - Added vSphere 6.5 products

