DataCore Metro Storage Solutions (2149740)

This document demonstrates how combining DataCore™ SANsymphony™ and HVSAN storage servers in a Stretched Cluster configuration can ensure that VMware virtual machines maintain continuous access to primary storage. In this configuration, ESXi hosts are configured in a VMware Metro Storage Cluster with a highly available storage system and data at multiple sites. To verify this behavior, common failure scenarios are enacted in a test environment to observe the reaction to each failure and confirm continuous access to storage.

This document is intended for VMware administrators to promote an understanding of stretched-cluster compatibility between VMware vSphere and DataCore Software-Defined Storage. This article assumes that the reader is familiar with VMware vSphere, VMware vCenter Server, VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (vSphere DRS), VMware vSphere Storage DRS, and replication and storage clustering technology and terminology.
This solution is partner supported. For more information, see Partner Verified and Supported Products (PVSP).


What are SANsymphony™ and DataCore™ Hyper-converged Virtual SAN (HVSAN)?

DataCore storage products provide comprehensive and universal storage services that extend the capabilities of the storage devices managed by SANsymphony and HVSAN software. The technical description here is limited to the synchronous mirroring feature, ALUA, and their interoperation with VMware Native Multipathing (NMP), since these features are most directly related to vSphere Metro Storage Cluster (vMSC).

As a software package, the solution can run on dedicated x86 servers with SANsymphony software or as a virtual machine (VM) on the hypervisor host with HVSAN software. When running SANsymphony software, the solution presents any number of shared, highly available, multi-pathed SCSI (block) disks to the hypervisor clusters over conventional Fibre Channel (FC) or iSCSI SAN networks. For iSCSI, either the native kernel iSCSI initiator or an iSCSI HBA may be used by the initiating host. The iSCSI target driver on the storage server follows the SCSI specifications.
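As a sketch of the host side of this arrangement, the ESXi software iSCSI initiator can be enabled and pointed at a DataCore front-end port with esxcli. The adapter name (vmhba64) and portal address (10.0.1.10) below are placeholders for the values in your environment:

```shell
# Enable the ESXi software iSCSI initiator (the adapter name varies per host).
esxcli iscsi software set --enabled=true

# Point dynamic (SendTargets) discovery at a DataCore front-end iSCSI port.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.1.10:3260

# Rescan the adapter to pick up the presented DataCore virtual disks.
esxcli storage core adapter rescan --adapter=vmhba64
```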

Similarly, when run in a VM as DataCore™ Hyper-converged Virtual SAN (HVSAN), the software presents any number of shared, highly available, multi-pathed iSCSI (block) disks to the local and remote hypervisor hosts over iSCSI networks. In either configuration, the logical diagram is the same. For a given DataCore disk, the two active software instances act as a pair of well-behaved logical disk controllers that present active/active block SCSI LUNs to the clustered hypervisor hosts.
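Whether the presented LUNs actually appear active/active on a host can be spot-checked with the NMP commands below. The device identifier is a placeholder; substitute the naa.* ID of an actual DataCore virtual disk:

```shell
# List the devices claimed by VMware NMP, with the Storage Array Type Plugin
# (SATP) and path selection policy applied to each.
esxcli storage nmp device list

# Show every path to one DataCore virtual disk and its group state; replace
# the placeholder with the naa.* ID reported by the previous command.
esxcli storage nmp path list -d naa.60030d90xxxxxxxx
```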

Single datacenter between racks

Multiple datacenters across distance

The two block diagrams above illustrate the configurations conceptually. Redundant physical and logical paths are assumed but not drawn, for simplicity.

For configuration details, see the DataCore documentation.

Configuration Requirements
  • Two or more physically independent links between sites with a round-trip latency of five milliseconds or less. One link provides connectivity for management (TCP/IP); the additional links provide SCSI connectivity (iSCSI or Fibre Channel) for synchronous mirroring.

  • For management and vMotion traffic, the ESXi hosts in both datacenters must share a private network on the same IP subnet and broadcast domain. Preferably, management and vMotion traffic are carried on separate networks.

  • Any IP subnet used by a virtual machine must be accessible from ESXi hosts in both datacenters. This requirement ensures that clients accessing virtual machines running on ESXi hosts on either side continue to function smoothly after any vSphere HA-triggered virtual machine restart.

  • The data storage locations, including the boot device used by the virtual machines, must be active and accessible from ESXi hosts in both datacenters.

  • The VMFS-5 file system must be used and must not have been upgraded from a previous VMFS version.

  • vCenter Server must be able to connect to ESXi hosts in both datacenters.

  • The VMware datastores for the virtual machines running in the ESXi cluster must be provisioned on shared virtual volumes, or on VVols if the vSphere environment is configured to use them.

  • When setting up a VMware Fault Tolerance or High Availability cluster in which virtual disks are shared between two or more ESXi hosts, make sure that no host connection to a DataCore Server front-end (FE) port shares a physical link with the mirror (MR) connections between DataCore Servers.

  • When using the ESXi software iSCSI initiator, do not configure VMkernel iSCSI port binding, because the DataCore iSCSI target does not support multi-session iSCSI.

  • For more information, see the VMware ESXi Configuration Guide.
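Two of the requirements above can be spot-checked from an ESXi shell. This is a sketch only; the VMkernel interface (vmk1), portal address (10.0.1.10), and adapter name (vmhba64) are placeholders for your environment:

```shell
# Round-trip latency to the remote DataCore Server's iSCSI portal must be
# 5 ms or less; vmkping reports min/avg/max RTT over the chosen VMkernel port.
vmkping -I vmk1 -c 5 10.0.1.10

# VMkernel iSCSI port binding must not be configured on the software iSCSI
# adapter; this listing should come back empty for DataCore targets.
esxcli iscsi networkportal list --adapter=vmhba64
```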
See Also:
Request a Product Feature

To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.

