vSphere 5.x and 6.0 support with NetApp MetroCluster (2031038)

Purpose

This article provides information about deploying a vSphere Metro Storage Cluster (vMSC) across two datacenters or sites using the NetApp MetroCluster solution with vSphere 5.0, 5.1, 5.5, or 6.0 (ESXi 5.0, 5.1, 5.5, or 6.0). The article applies to FC, iSCSI, and NFS implementations of MetroCluster on clustered Data ONTAP 8.3.

Resolution

What is vMSC?

A VMware vSphere Metro Storage Cluster configuration is a VMware vSphere certified solution that combines synchronous replication with array-based clustering. These solutions are implemented with the goal of providing the same benefits that high-availability clusters provide to a local site, but in a geographically dispersed model with two datacenters in different locations. At its core, a VMware vMSC infrastructure is a stretched cluster. The architecture is built on the idea of extending what is defined as “local” in terms of network and storage, which enables these subsystems to span geographies and present a single, common set of base infrastructure resources to the vSphere cluster at both sites. In essence, it stretches the network and storage between the sites. All supported storage devices are listed on the VMware Storage Compatibility Guide.

What is a NetApp MetroCluster?

NetApp® MetroCluster™ is a highly cost-effective, synchronous replication solution for combining high availability and disaster recovery in a campus or metropolitan area to protect against both site disasters and hardware outages. MetroCluster configurations protect data by using two physically separated, mirrored clusters. Each cluster synchronously mirrors the data and Storage Virtual Machine (SVM) configuration of the other. When a disaster occurs at one site, an administrator can activate the mirrored SVM and begin serving the mirrored data from the surviving site. Additionally, the nodes in each cluster are configured as an HA pair, providing a level of local failover.

NetApp MetroCluster software provides continuous data availability across geographically separated data centers for mission-critical applications. MetroCluster high-availability and disaster recovery software runs on the clustered Data ONTAP® OS starting with Data ONTAP 8.3.0.

What is MetroCluster TieBreaker?

The MetroCluster Tiebreaker (MCTB) solution is a plug-in that runs in the background as a Windows service or Unix daemon on an OnCommand Unified Manager (OC UM) host. The OC UM host can be a physical machine or a virtual machine. MCTB provides automated switchover in scenarios where the MetroCluster clusters cannot initiate failover on their own, such as the failure of an entire site.

The Tiebreaker software monitors relevant objects at the node, HA-pair, and cluster level to create an aggregated logical view of a site's availability. It uses a combination of direct and indirect checks on the cluster hardware and links to update its state, depending on whether it has detected an HA takeover event, a site failure, or a failure of all inter-site links. A direct check is an SSH connection to a node's management LIF; failure of all direct links to a cluster indicates a site failure, characterized by the cluster no longer serving any data (all SVMs are down). An indirect check determines whether a cluster can reach its peer over any of the inter-site (FC-VI) links or intercluster LIFs. If the indirect links between the clusters fail while the direct links to the nodes succeed, only the inter-site links are down.
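
As an illustration of how these checks combine, the following is a minimal Python sketch of the aggregation logic described above. It is not NetApp's Tiebreaker code; the ClusterChecks fields and the classify function are hypothetical names, and the inputs stand in for the results of the SSH (direct) and FC-VI/intercluster-LIF (indirect) probes.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Verdict(Enum):
        HEALTHY = auto()       # both clusters reachable and serving data
        ISL_FAILURE = auto()   # only the inter-site links are down
        SITE_FAILURE = auto()  # one cluster is unreachable and serving no data

    @dataclass
    class ClusterChecks:
        # Hypothetical probe results for one cluster, aggregated by the monitor.
        direct_links_ok: bool   # at least one SSH probe to a node management LIF succeeded
        serving_data: bool      # at least one SVM on the cluster is still serving data
        peer_reachable: bool    # the cluster can reach its peer via FC-VI links or intercluster LIFs

    def classify(cluster_a: ClusterChecks, cluster_b: ClusterChecks) -> Verdict:
        """Combine direct and indirect checks into a single availability verdict."""
        for cluster in (cluster_a, cluster_b):
            # Failure of all direct links to a cluster, with the cluster serving
            # no data, indicates a site failure.
            if not cluster.direct_links_ok and not cluster.serving_data:
                return Verdict.SITE_FAILURE
        # Direct links still succeed but neither cluster can reach its peer:
        # only the inter-site links are down.
        if not cluster_a.peer_reachable and not cluster_b.peer_reachable:
            return Verdict.ISL_FAILURE
        return Verdict.HEALTHY

    # Example: site B is dark while site A remains healthy -> SITE_FAILURE
    print(classify(ClusterChecks(True, True, False), ClusterChecks(False, False, False)))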


Configuration Requirements

MetroCluster is a combined hardware and software solution. Specific hardware is required to create the shared storage fabric and inter-site links; consult the NetApp Interoperability Matrix for supported hardware. On the software side, MetroCluster is completely integrated into Data ONTAP, and no separate tools or interfaces are required. Once the MetroCluster relationships have been established, data and configuration are automatically and continuously replicated between the sites, so no manual effort is required to establish replication of newly provisioned workloads. This not only reduces the administrative effort required, but also eliminates the possibility of forgetting to replicate critical workloads.

These requirements must be satisfied to support this configuration (the numeric limits are cross-checked in the sketch after this list):
  • The maximum distance between the two sites is 200 km.
  • The maximum round-trip latency between the two sites must be less than 10 ms for Ethernet networks and less than 3 ms for SyncMirror replication.
  • The storage ISL connectivity between the two sites must provide a minimum of 4 Gbps of throughput.
  • ESXi hosts in the vMSC configuration should be configured with at least two different IP networks: one for storage, and one for management and virtual machine traffic. The storage network handles NFS and iSCSI traffic between the ESXi hosts and the NetApp controllers. The second network (VM network) carries virtual machine traffic as well as management functions for the ESXi hosts. End users can choose to configure additional networks for other functionality such as vMotion and Fault Tolerance. VMware recommends this as a best practice, but it is not a strict requirement for a vMSC configuration.
  • FC switches are used for vMSC configurations where datastores are accessed via the FC protocol; ESX management traffic remains on an IP network. End users can choose to configure additional networks for other functionality such as vMotion and Fault Tolerance. This is recommended as a best practice, but it is not a strict requirement for a vMSC configuration.
  • For NFS/iSCSI configurations, a minimum of two uplinks per controller must be used. An interface group (ifgrp) should be created from the two uplinks in a multimode configuration.
  • The VMware datastores and NFS volumes configured for the ESX servers must be provisioned on mirrored aggregates.
  • vCenter Server must be able to connect to the ESX servers at both sites.
  • An HA cluster must not exceed 32 hosts.
  • There is no specific license for MetroCluster functionality, including SyncMirror; it is included in the basic Data ONTAP license. Protocols and other features such as SnapMirror require licenses if used in the cluster. Licenses must be symmetrical across both sites; for example, SMB, if used, must be licensed in both clusters. Switchover will not work unless both sites have the same licenses.
  • All nodes should be licensed for the same node-locked features.
  • Infinite Volumes are not supported in a MetroCluster configuration.
  • All 4 nodes in the DR group must be the same FAS or FlexArray model (for example, four FAS8020 or four FAS8060). It is not supported to mix FAS and FlexArray controllers (even of the same model number) in the same MetroCluster DR group.
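
As a quick cross-check of the numeric limits listed above, the following is a minimal Python sketch. The SiteLinkSpec fields and the check_vmsc_limits helper are hypothetical names used for illustration; only the thresholds come from this article.

    from dataclasses import dataclass

    @dataclass
    class SiteLinkSpec:
        distance_km: float           # physical distance between the two sites
        ethernet_rtt_ms: float       # round-trip latency of the Ethernet networks
        syncmirror_rtt_ms: float     # round-trip latency for SyncMirror replication
        isl_throughput_gbps: float   # storage ISL throughput between the sites
        ha_cluster_hosts: int        # number of ESXi hosts in the HA cluster

    def check_vmsc_limits(spec: SiteLinkSpec) -> list:
        """Return the requirements that the given configuration violates."""
        problems = []
        if spec.distance_km > 200:
            problems.append("inter-site distance exceeds 200 km")
        if spec.ethernet_rtt_ms >= 10:
            problems.append("Ethernet round-trip latency must be less than 10 ms")
        if spec.syncmirror_rtt_ms >= 3:
            problems.append("SyncMirror round-trip latency must be less than 3 ms")
        if spec.isl_throughput_gbps < 4:
            problems.append("storage ISL throughput must be at least 4 Gbps")
        if spec.ha_cluster_hosts > 32:
            problems.append("HA cluster must not exceed 32 hosts")
        return problems

    # Example: a 180 km stretch with 6.5 ms / 2 ms latency, 8 Gbps ISLs, 16 hosts
    print(check_vmsc_limits(SiteLinkSpec(180, 6.5, 2.0, 8.0, 16)) or "all limits satisfied")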

Solution Overview

MetroCluster configurations protect data by using two physically separated, mirrored clusters, separated by a distance of up to 200 km. Each cluster synchronously mirrors the data and Storage Virtual Machine (SVM) configuration of the other. When a disaster occurs at one site, an administrator can activate the mirrored SVM and begin serving the mirrored data from the surviving site. In Data ONTAP 8.3.0, MetroCluster consists of a 2-node HA pair at each site, allowing the majority of planned and unplanned events to be handled by a simple failover and giveback within the local cluster. Only in the event of a disaster (or for testing purposes) is a full switchover to the other site required. The switchover and corresponding switchback operations transfer the entire clustered workload between the sites.
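
The switchover, heal, and switchback steps are driven from the Data ONTAP CLI. The following is a minimal Python sketch of invoking them over SSH to the surviving cluster's management LIF, assuming the paramiko library; the hostname, credentials, and run() helper are placeholders, and the exact procedure for a real event should be taken from the NetApp MetroCluster documentation.

    import paramiko

    def run(client, command, confirm=False):
        """Execute one ONTAP CLI command; optionally answer its confirmation prompt."""
        stdin, stdout, _ = client.exec_command(command)
        if confirm:
            stdin.write("y\n")   # switchover/switchback prompt for confirmation
            stdin.flush()
        return stdout.read().decode()

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("cluster-b-mgmt.example.com", username="admin", password="***")

    # Move the entire clustered workload to the surviving site.
    print(run(client, "metrocluster switchover", confirm=True))
    print(run(client, "metrocluster operation show"))
    # After the failed site has been repaired, heal and switch back.
    print(run(client, "metrocluster heal -phase aggregates"))
    print(run(client, "metrocluster heal -phase root-aggregates"))
    print(run(client, "metrocluster switchback", confirm=True))
    client.close()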

This figure shows the basic MetroCluster configuration. There are two data centers, A and B, separated by a distance of up to 200 km, with ISLs running over dedicated fibre links. Each site has a cluster in place, consisting of two nodes in an HA pair. In this example, cluster A at site A consists of nodes A1 and A2, and cluster B at site B consists of nodes B1 and B2. The two clusters and sites are connected via two separate networks, which provide the replication transport. The cluster peering network is an IP network used to replicate cluster configuration information between the sites. The shared storage fabric is an FC connection and is used for storage and NVRAM synchronous replication between the two clusters. All storage is visible to all controllers via the shared storage fabric.


Note: This illustration is a simplified representation and does not indicate the redundant front-end components, such as Ethernet and fibre channel switches.

The vMSC configuration used in this certification program was configured with Uniform Host Access mode. In this configuration, the ESX hosts from a single site are configured to access storage from both sites.

In cases where RDMs are configured for virtual machines residing on NFS volumes, a separate LUN must be configured to hold the RDM mapping files. Ensure you present this LUN to all the ESX hosts.
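
One way to confirm that the RDM-mapping LUN is visible to every host is to compare its NAA ID against each host's device list, for example with pyVmomi. This is an illustrative sketch only; the vCenter hostname, credentials, and NAA ID below are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def hosts_missing_lun(si, naa_id):
        """Return the names of ESXi hosts that do not see the given device."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        missing = []
        for host in view.view:
            luns = host.config.storageDevice.scsiLun or []
            if not any(lun.canonicalName == naa_id for lun in luns):
                missing.append(host.name)
        view.Destroy()
        return missing

    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="***", sslContext=context)
    try:
        print(hosts_missing_lun(si, "naa.<rdm-mapping-lun-id>"))  # placeholder NAA ID
    finally:
        Disconnect(si)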

vMSC test scenarios

The following scenarios were tested. Each entry lists the NetApp controller behavior and the VMware HA behavior:

Scenario: Controller single path failure
NetApp Controllers Behavior: Controller path failover occurs. All LUNs and volumes remain connected. For FC datastores, path failover is triggered from the host and the next available path to the same controller becomes active. All ESXi iSCSI/NFS sessions remain active in multimode configurations with two or more network interfaces.
VMware HA Behavior: No impact.

Scenario: ESXi single storage path failure
NetApp Controllers Behavior: No impact on LUN and volume availability. The ESXi storage path fails over to the alternative path. All sessions remain active.
VMware HA Behavior: No impact.

Scenario: Site 1 or Site 2 single storage node failure
NetApp Controllers Behavior: Because there is an HA pair at each site, a failure of one node transparently and automatically triggers failover to the other node.
VMware HA Behavior: No impact.

Scenario: MCTB VM failure
NetApp Controllers Behavior: No impact on LUN and volume availability. All sessions remain active.
VMware HA Behavior: No impact.

Scenario: MCTB VM single link failure
NetApp Controllers Behavior: No impact. Controllers continue to function normally.
VMware HA Behavior: No impact.

Scenario: Complete Site 1 failure, including ESXi and controller
NetApp Controllers Behavior: In the case of a site-wide issue, the MetroCluster switchover operation allows immediate resumption of service by moving storage and client access from the Site 1 cluster to Site 2. The Site 2 partner nodes begin serving data from the mirrored plexes and the sync-destination Storage Virtual Machine (SVM).
VMware HA Behavior: Virtual machines on the failed Site 1 ESXi nodes fail. HA restarts the failed virtual machines on ESXi hosts at Site 2.

Scenario: Complete Site 2 failure, including ESXi and controller
NetApp Controllers Behavior: In the case of a site-wide issue, the MetroCluster switchover operation allows immediate resumption of service by moving storage and client access from the Site 2 cluster to Site 1. The Site 1 partner nodes begin serving data from the mirrored plexes and the sync-destination Storage Virtual Machine (SVM).
VMware HA Behavior: Virtual machines on the failed Site 2 ESXi nodes fail. HA restarts the failed virtual machines on ESXi hosts at Site 1.

Scenario: Single ESXi failure (shutdown)
NetApp Controllers Behavior: No impact. Controllers continue to function normally.
VMware HA Behavior: Virtual machines on the failed ESXi node fail. HA restarts the failed virtual machines on the surviving ESXi hosts.

Scenario: Multiple ESXi host management network failure
NetApp Controllers Behavior: No impact. Controllers continue to function normally.
VMware HA Behavior: A new master is elected within the network partition. Virtual machines remain running and do not need to be restarted.

Scenario: Site 1 and Site 2 simultaneous failure (shutdown) and restoration
NetApp Controllers Behavior: Controllers boot up and resync. All LUNs and volumes become available. All iSCSI sessions and FC paths to the ESXi hosts are re-established, and virtual machines restart successfully. As a best practice, power on the NetApp controllers first and allow the LUNs/volumes to become available before powering on the ESXi hosts.
VMware HA Behavior: No impact.

Scenario: ESXi management network all ISL links failure
NetApp Controllers Behavior: No impact to controllers. LUNs and volumes remain available.
VMware HA Behavior: If the HA host isolation response is set to Leave Powered On, virtual machines at each site continue to run because the storage heartbeat is still active. Partitioned hosts at the site without a Fault Domain Manager elect a new master.

Scenario: All storage ISL links failure
NetApp Controllers Behavior: No impact to controllers. LUNs and volumes remain available. When the ISL links come back online, the aggregates resync.
VMware HA Behavior: No impact.

Scenario: System Manager - Management Server failure
NetApp Controllers Behavior: No impact. Controllers continue to function normally. The NetApp controllers can be managed using the command line.
VMware HA Behavior: No impact.

Scenario: vCenter Server failure
NetApp Controllers Behavior: No impact. Controllers continue to function normally.
VMware HA Behavior: No impact on HA. However, DRS rules cannot be applied.

Update History

08/25/2015 - Added updates for vSphere 6.0.
