Understanding Virtual Volumes (VVols) in VMware vSphere 6.0 (2113013)

This article provides a brief outline and overview of VMware's approach to Virtual Volumes (VVols) in vSphere 6.0.



Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual machine granularity.

With Virtual Volumes (VVols), VMware offers a new paradigm in which an individual virtual machine and its disks, rather than a LUN, becomes the unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.


Virtual Volumes (VVols) are VMDK-granular storage entities exported by storage arrays. Virtual volumes are exported to the ESXi host through a small set of Protocol Endpoints (PE). Protocol Endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective virtual volumes on demand. Storage systems enable data services on virtual volumes; the results of these data services are new virtual volumes. Data services, configuration, and management of virtual volume systems are performed exclusively out-of-band with respect to the data path. Virtual volumes can be grouped into logical entities called storage containers (SC) for management purposes. The existence of storage containers is limited to the out-of-band management channel.

Virtual volumes (VVols) and Storage Containers (SC) form the virtual storage fabric. Protocol Endpoints (PE) are part of the physical storage fabric. 

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the virtual volumes and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish a two-way out-of-band communication channel to perform data services and offload certain virtual machine operations to the storage system. For example, operations such as snapshots and clones can be offloaded.

For in-band communication with Virtual Volumes storage systems, vSphere continues to use standard SCSI and NFS protocols. As a result, Virtual Volumes supports any type of storage, including iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), and NFS.
    • Virtual Volumes represent the virtual disks of a virtual machine as abstract objects, each identified by a 128-bit GUID and managed entirely by the storage hardware.
    • The management model changes from managing space inside datastores to managing abstract storage objects handled by storage arrays.
    • The storage hardware gains complete control over virtual disk content, layout, and management.
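The object model described above can be sketched in a few lines of Python. This is purely an illustrative model, not a VMware API: the class and inventory dictionary are invented for this sketch, and only show that each virtual disk becomes an array-managed object keyed by a 128-bit GUID rather than a file inside a datastore.

```python
import uuid
from dataclasses import dataclass, field

# Illustrative model only: a VVol-backed virtual disk is addressed by a
# 128-bit GUID assigned by the array, not by a file path on a datastore.
@dataclass
class VirtualVolume:
    size_gb: int
    # 128-bit identifier, shown here with Python's uuid module
    guid: uuid.UUID = field(default_factory=uuid.uuid4)

# A hypothetical array keeps the authoritative mapping from GUID to volume.
array_inventory = {}

def provision_vvol(size_gb):
    """Create a new volume on the 'array' and return its GUID."""
    vvol = VirtualVolume(size_gb=size_gb)
    array_inventory[vvol.guid] = vvol
    return vvol.guid

disk_id = provision_vvol(40)
print(disk_id, array_inventory[disk_id].size_gb)
```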
Storage partners have already started adding VVols support to their arrays. For end-to-end VVols support, HBA drivers need to support VVols-based devices. This requires an API that the SCSI drivers can use to obtain the second-level LUN ID (SLLID).

Drivers that support I/O to VVols must advertise their second-level (SLLID) addressing capability to the ESXi storage stack at the time of adapter registration. For more information, see VMware ESXi 6.0 I/O driver information: certified 5.5 I/O drivers are compatible with vSphere 6.0 (2111492). This is required so that the ESXi host storage stack can determine whether the adapter can handle VVols I/O.

When checking the I/O Devices section of the VMware Compatibility Guide, you will see a new entry called Secondary LUNID (Enables VVols). The ESXCLI infrastructure can display the SLLID capability of an HBA with this command: esxcli storage core adapter list
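As a rough sketch of how that check can be scripted, the snippet below filters sample output of esxcli storage core adapter list for the Second Level Lun ID capability. The sample text and its column layout are illustrative only, not captured from a real host; real output varies by host and driver.

```python
# Hedged sketch: parse illustrative `esxcli storage core adapter list` output
# to find which HBAs advertise the "Second Level Lun ID" capability.
SAMPLE_OUTPUT = """\
HBA Name  Driver      Link State  UID          Capabilities         Description
--------  ----------  ----------  -----------  -------------------  -----------
vmhba0    ahci        link-n/a    sata.vmhba0                       (0:0:31.2) Intel AHCI
vmhba2    qlnativefc  link-up     fc.20000000  Second Level Lun ID  (0:5:0.0) QLogic FC HBA
"""

def adapters_with_sllid(output):
    """Return HBA names whose Capabilities column mentions Second Level Lun ID."""
    return [line.split()[0]
            for line in output.splitlines()[2:]   # skip the two header rows
            if "Second Level Lun ID" in line]

print(adapters_with_sllid(SAMPLE_OUTPUT))  # ['vmhba2']
```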

Virtual Volumes Component Overview

Virtual Volumes consists of five major components: the VVol device, Protocol Endpoint, Storage Container, VASA Provider, and the array itself. All of these components are managed and used by different components in the vSphere stack, such as vCenter Server (VASA, SPBM) and ESXi (hostd, vvold, the VVol FDS driver). It therefore becomes necessary to take a holistic view of the environment and configuration.

Characteristics of Virtual Volumes (VVols):
    • No file system
    • ESXi manages the array through vSphere APIs for Storage Awareness (VASA)
    • Arrays are logically partitioned into containers, called Storage Containers
    • Virtual machine disks, called Virtual Volumes, are stored natively on the Storage Containers
    • I/O from the ESXi host to the storage array is addressed through an access point called a Protocol Endpoint (PE)
    • Data services such as snapshots, replication, and encryption are offloaded to the array
    • Managed through the Storage Policy-Based Management (SPBM) framework
VASA Provider (VP)

A Virtual Volumes storage provider, also called a VASA provider is a software component that acts as a storage awareness service for vSphere. The provider mediates out-of-band communication between the vCenter Server and ESXi hosts on one side and a storage system on the other. For more information, see vSphere Storage APIs - Storage Awareness FAQ (2004098).
    • A software component developed by storage array vendors
    • ESXi hosts and vCenter Server connect to the VASA Provider
    • Provides storage awareness services
    • A single VASA Provider can manage multiple arrays
    • Supports the VASA APIs exported by the ESXi host
    • Can be implemented within the array’s management server or firmware
    • Responsible for creating Virtual Volumes
Storage Containers  (SC)

Unlike traditional LUN- and NFS-based vSphere storage, the Virtual Volumes functionality does not require pre-configured volumes on the storage side. Instead, Virtual Volumes uses a storage container, which is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
    • Logical storage constructs for grouping virtual volumes
    • Logically partition or isolate virtual machines with diverse storage needs and requirements
    • A single Storage Container can be accessed simultaneously through multiple Protocol Endpoints
    • Desired capabilities are applied to the Storage Containers
    • The VASA Provider discovers Storage Containers and reports them to vCenter Server
    • Newly created virtual machines are subsequently provisioned in the Storage Container
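As a hedged sketch of container discovery from the host side, the snippet below picks the accessible containers out of sample output from the esxcli storage vvol storagecontainer list command available on ESXi 6.0 hosts. The sample text and field names are illustrative only; the exact output format depends on the ESXi build and array vendor.

```python
# Hedged sketch: parse illustrative `esxcli storage vvol storagecontainer list`
# output and report which containers are currently accessible.
SAMPLE_OUTPUT = """\
gold-container
   StorageContainer Name: gold-container
   UUID: vvol:51af7f2cf3aa4d22-a0d5ccf21111e2f3
   Accessible: true
silver-container
   StorageContainer Name: silver-container
   UUID: vvol:61bf8e3da4bb5e33-b1e6ddf32222f3a4
   Accessible: false
"""

def accessible_containers(output):
    """Return names of containers whose Accessible flag is true."""
    names, current = [], None
    for line in output.splitlines():
        if line.startswith("   StorageContainer Name:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("   Accessible:") and "true" in line:
            names.append(current)
    return names

print(accessible_containers(SAMPLE_OUTPUT))  # ['gold-container']
```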
Protocol Endpoint (PE)

Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the Protocol Endpoint (PE), to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate. ESXi uses Protocol Endpoints (PE) to establish a data path on demand from virtual machines to their respective virtual volumes. 
    • Separate the access points from the storage itself
    • Fewer access points are required than with traditional LUN-based storage
    • Existing multipathing policies and NFS topology requirements can be applied to the PE
    • Access points that enable communication between ESXi hosts and storage array systems
    • Compatible with all SAN and NAS protocols: iSCSI, NFS v3, FC, FCoE
    • A Protocol Endpoint supports only one of these protocols at a given time
VVols Objects

A virtual datastore represents a storage container in vCenter Server and the vSphere Web Client. Virtual volumes are encapsulations of virtual machine files, virtual disks, and their derivatives. 
    • Virtual machine objects stored natively on the array storage containers.
    • There are five recognized types of Virtual Volumes:
      1. Config-VVol - metadata
      2. Data-VVol - VMDKs
      3. Mem-VVol - memory snapshots
      4. Swap-VVol - swap files
      5. Other-VVol - vendor-solution specific
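The mapping above can be illustrated with a small classifier. The table and helper below are an illustrative sketch only: the file-extension mapping is a simplification invented for this example, real placement is decided by the host and array, and an unrecognized file is not necessarily vendor-specific.

```python
# Illustrative mapping only: which of a VM's on-disk artifacts typically
# correspond to which of the five recognized VVol types.
VVOL_TYPE_FOR_FILE = {
    ".vmx":  "Config-VVol",   # VM configuration and other metadata
    ".vmdk": "Data-VVol",     # virtual disks
    ".vmem": "Mem-VVol",      # memory state captured with snapshots
    ".vswp": "Swap-VVol",     # swap file, created when the VM powers on
}

def vvol_type(filename):
    """Classify a VM file by extension; treat anything else as Other-VVol."""
    for ext, vvol in VVOL_TYPE_FOR_FILE.items():
        if filename.endswith(ext):
            return vvol
    return "Other-VVol"

print(vvol_type("web01.vmdk"))  # Data-VVol
print(vvol_type("web01.vswp"))  # Swap-VVol
```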

Guidelines when Using Virtual Volumes 

The Virtual Volumes functionality offers several benefits. When you work with Virtual Volumes, you must follow specific guidelines:

Virtual Volumes has the following characteristics: 
    • Virtual Volumes supports offloading a number of operations to storage hardware. These operations include snapshotting, cloning, and Storage DRS.
    • With Virtual Volumes, you can use advanced storage services that include replication, encryption, deduplication, and compression on individual virtual disks.
    • Virtual Volumes supports such vSphere features as vMotion, Storage vMotion, snapshots, linked clones, Flash Read Cache, and DRS.
    • With Virtual Volumes, storage vendors can use native snapshot facilities to improve performance of vSphere snapshots.
    • You can use Virtual Volumes with storage arrays that support vSphere APIs for Array Integration (VAAI).
    • Virtual Volumes supports backup software that uses vSphere APIs for Data Protection (VADP).

Virtual Volumes Guidelines and Limitations

Follow these guidelines when using Virtual Volumes. 
    • Because the Virtual Volumes environment requires the vCenter Server, you cannot use Virtual Volumes with a standalone ESXi host.
    • Virtual Volumes does not support Raw Device Mappings (RDMs).
    • A Virtual Volumes storage container cannot span across different physical arrays.
    • Host profiles that contain virtual datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host. 
Key benefits of Virtual Volumes:
    • Operational transformation, with data services enabled at the application level
    • Improved storage utilization through granular provisioning
    • Common management through Storage Policy-Based Management


