
DataCore Metro Storage Solutions (2149740)


Purpose

This document demonstrates how DataCore™ SANsymphony™ or DataCore™ Hyperconverged Virtual SAN (HVSAN) storage nodes should be configured in a stretched cluster so that VMware virtual machines maintain continuous access to primary storage. In this configuration, ESXi hosts are arranged in a VMware Metro Storage Cluster with a highly available storage system and data at multiple sites. To verify this behavior, common failure scenarios are enacted in a test environment, and the reaction to each failure is checked to confirm continuous access to storage.

This document is intended for VMware administrators to promote an understanding of stretched-cluster compatibility between VMware vSphere and DataCore Software Storage. This article assumes that the reader is familiar with VMware vSphere, VMware vCenter Server, VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (vSphere DRS), VMware vSphere Storage DRS, and replication and storage clustering technology and terminology.
This solution is partner supported. For more information, see Partner Verified and Supported Products (PVSP).

Resolution

What are SANsymphony™ and DataCore™ Hyperconverged Virtual SAN (HVSAN)

DataCore storage products provide comprehensive and universal storage services that extend the capabilities of the storage devices managed by SANsymphony and HVSAN software. The technical description here is limited to the synchronous mirroring feature, ALUA, and their interoperation with Native Multi-Pathing (NMP), since these features are most directly related to vSphere Metro Storage Cluster (vMSC).

As a software package, the solution can run on dedicated x86 servers or as a virtual machine (VM) on the hypervisor host. With SANsymphony software, the software presents any number of shared, highly available, multi-pathed SCSI (block) disks to the hypervisor clusters over conventional Fibre Channel (FC) or iSCSI SAN networks. For iSCSI, the initiating host may use the native kernel iSCSI initiator, an iSCSI HBA, or both. The iSCSI target driver on the storage server follows the SCSI specifications.
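As a sketch of the iSCSI case, the ESXi software initiator can be enabled and pointed at a DataCore front-end port with esxcli. The adapter name (vmhba64) and target address (10.0.0.10) below are placeholders, not values from this article; substitute the ones from your environment.

```shell
# Enable the software iSCSI initiator on the ESXi host (no effect if already enabled).
esxcli iscsi software set --enabled=true

# Find the vmhba name that was assigned to the software iSCSI adapter.
esxcli iscsi adapter list

# Add the DataCore front-end port as a dynamic discovery (Send Targets) address.
# vmhba64 and 10.0.0.10:3260 are placeholders for your adapter and target portal.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.10:3260

# Rescan the adapter so the newly presented DataCore LUNs are discovered.
esxcli storage core adapter rescan --adapter=vmhba64
```

These are host-configuration commands; run them on each ESXi host (or through a host profile) rather than once per cluster.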

When running in a VM, the software presents any number of shared, highly available, multi-pathed SCSI (block) disks to the local and remote hypervisor hosts over iSCSI networks. In either configuration, the logical diagram is the same. For a given DataCore disk, the two active software instances act as a pair of well-behaved, logical disk controllers that present active/active block SCSI LUNs to the clustered hypervisor hosts.
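One way to confirm from the ESXi side that the DataCore LUNs are being treated as active/active ALUA devices is to inspect how NMP has claimed them. A minimal sketch, assuming a DataCore virtual disk is already presented (the naa identifier below is a placeholder):

```shell
# List all NMP-claimed devices. For DataCore LUNs, expect
# "Storage Array Type: VMW_SATP_ALUA" and the configured Path Selection Policy.
esxcli storage nmp device list

# Show the individual paths for one DataCore LUN and their ALUA group state:
# active/optimized via the preferred DataCore Server, active/unoptimized via the other.
# naa.60030d90xxxxxxxx is a placeholder device identifier; use one from the list above.
esxcli storage nmp path list --device=naa.60030d90xxxxxxxx
```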


Single datacenter between racks

Multiple datacenters across distance

The two block diagrams above illustrate the configurations conceptually. Redundant physical and logical paths are assumed but not drawn, for simplicity. Refer to the DataCore document VMware ESXi Configuration Guide and, if appropriate, the SANsymphony and HVSAN Hyperconverged Virtual SAN Best Practice Guides for configuration details.

Configuration Requirements
  • An independent link between sites with a maximum round-trip latency of five milliseconds provides connectivity for management (TCP/IP) traffic.

  • Two additional independent links, each with a maximum round-trip latency of five milliseconds, provide redundant SCSI (iSCSI or Fibre Channel) connectivity for synchronous mirroring.

  • For management and vMotion traffic, the ESXi hosts in both datacenters must have a private network on the same IP subnet and broadcast domain. Preferably, management and vMotion traffic are on separate networks.

  • Any IP subnet on which a virtual machine resides must be accessible from ESXi hosts in both datacenters. This requirement is important so that clients accessing virtual machines running on ESXi hosts on either side continue to function smoothly after any vSphere HA triggered virtual machine restart event.

  • The data storage locations, including the boot device used by the virtual machines, must be active and accessible from ESXi hosts in both datacenters.

  • The VMFS-5 file system must be used and not upgraded from a previous VMFS file system version.

  • vCenter Server must be able to connect to ESXi hosts in both datacenters.

  • For VMs on vSphere nodes separated by distance, each node should access a local datastore (mirrored virtual disk). Select ALUA and Round Robin, and specify the local DataCore Server node as the Preferred Server for that mirrored virtual disk. This keeps performance high and prevents cross-link access and mirroring.

  • The VMware datastores for the virtual machines running in the ESXi cluster are provisioned on shared virtual disks, or on VVols if the vSphere environment is configured to use them.

  • When setting up a VMware Fault Tolerance or High Availability cluster where virtual disks are shared between two or more ESXi hosts, ensure that host connections to any DataCore Server front-end (FE) port do not share physical links with the mirror (MR) connections between DataCore Servers.

  • When using the ESXi software iSCSI initiator, do not configure VMkernel iSCSI port binding, as the DataCore iSCSI target does not support multi-session iSCSI.

  • iSCSI port binding is supported in release PSP7 and later.

  • For more information, see the VMware ESXi Configuration Guide.
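The ALUA and Round Robin requirement above can be applied automatically with an NMP claim rule, sketched below. The vendor and model strings ("DataCore" / "Virtual Disk") are assumptions about how DataCore virtual disks identify themselves; verify them on your hosts with `esxcli storage core device list` before adding the rule.

```shell
# Claim rule so that newly discovered DataCore LUNs are handled by the ALUA SATP
# with Round Robin path selection, per the requirements above.
# Vendor/model strings are assumptions; confirm with: esxcli storage core device list
esxcli storage nmp satp rule add \
    --satp=VMW_SATP_ALUA \
    --vendor="DataCore" \
    --model="Virtual Disk" \
    --psp=VMW_PSP_RR \
    --description="DataCore ALUA with Round Robin"

# Confirm the rule was added.
esxcli storage nmp satp rule list --satp=VMW_SATP_ALUA

# Check the VMFS version of an existing datastore (requirement: native VMFS-5,
# not upgraded from an earlier version). <datastore-name> is a placeholder.
vmkfstools -Ph /vmfs/volumes/<datastore-name> | grep "VMFS"
```

The claim rule only affects devices claimed after it is added; existing devices must be unclaimed and reclaimed, or the host rebooted, for it to take effect.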
See Also
Request a Product Feature

To request a new product feature or to provide feedback on a VMware product, please visit the Request a Product Feature page.
