VMware vSphere 5.x and 6.0 support with NetApp MetroCluster (2031038)
What is vMSC?

A VMware vSphere Metro Storage Cluster (vMSC) configuration is a VMware vSphere certified solution that combines synchronous replication with array-based clustering. These solutions are implemented with the goal of reaping the same benefits that high-availability clusters provide to a local site, but in a geographically dispersed model with two data centers in different locations. At its core, a VMware vMSC infrastructure is a stretched cluster. The architecture is built on the idea of extending what is defined as local in terms of network and storage, enabling these subsystems to span geographies and present a single, common set of base infrastructure resources to the vSphere cluster at both sites. In essence, a vMSC stretches the network and storage between the sites. All supported storage devices are listed on the VMware Storage Compatibility Guide.
What is NetApp MetroCluster?

NetApp® MetroCluster™ is a highly cost-effective, synchronous replication solution for combining high availability and disaster recovery in a campus or metropolitan area to protect against both site disasters and hardware outages. MetroCluster configurations protect data by using two physically separated, mirrored clusters. Each cluster synchronously mirrors the data and Storage Virtual Machine (SVM) configuration of the other. When a disaster occurs at one site, an administrator can activate the mirrored SVM and begin serving the mirrored data from the surviving site. Additionally, the nodes in each cluster are configured as an HA pair, providing a level of local failover.
NetApp MetroCluster software provides continuous data availability across geographically separated data centers for mission-critical applications. MetroCluster high-availability and disaster recovery software runs on the clustered Data ONTAP® OS starting with Data ONTAP 8.3.0.
What is MetroCluster TieBreaker?

MetroCluster TieBreaker (MCTB) is a plug-in that runs in the background as a Windows service or Unix daemon on an OnCommand Unified Manager (OC UM) host. The OC UM host can be a physical machine or a virtual machine. MCTB provides automated switchover in a MetroCluster configuration in scenarios where automatic local failover is not possible, such as the failure of an entire site.
The TieBreaker software monitors relevant objects at the node, HA pair, and cluster level to create an aggregated logical view of a site's availability. It uses a variety of direct and indirect checks on the cluster hardware and links to update its state, depending on whether it has detected an HA takeover event, a site failure, or the failure of all inter-site links. A direct check is an SSH connection to a node's management LIF; failure of all direct links to a cluster indicates a site failure, characterized by the cluster no longer serving any data (all SVMs are down). An indirect check determines whether a cluster can reach its peer over any of the inter-site (FC-VI) links or intercluster LIFs. If the indirect links between the clusters fail while the direct links to the nodes succeed, the inter-site links are down.
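The decision logic described above can be sketched roughly as follows. This is a minimal illustration of how the direct and indirect checks combine, not NetApp's actual implementation; the function name and return values are hypothetical:

```python
def classify_failure(direct_link_up: bool, indirect_link_up: bool) -> str:
    """Aggregate TieBreaker observations for one cluster into a site state.

    direct_link_up   -- True if any SSH check against a node management LIF
                        in the cluster succeeds.
    indirect_link_up -- True if the peer cluster can reach this cluster over
                        any inter-site (FC-VI) link or intercluster LIF.
    """
    if not direct_link_up and not indirect_link_up:
        # All direct links are down: the cluster has stopped serving data
        # (all SVMs down), which characterizes a site failure.
        return "site failure"
    if direct_link_up and not indirect_link_up:
        # Nodes still answer directly, but the clusters cannot reach each
        # other: all inter-site links are down.
        return "ISL failure"
    # Direct links (and possibly indirect links) are healthy.
    return "ok"
```

Only the first state would warrant a switchover; an ISL failure alone leaves both clusters serving data locally.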
Configuration Requirements

MetroCluster is a combined hardware and software solution. Specific hardware is required to create the shared storage fabric and inter-site links; consult the NetApp Interoperability Matrix for supported hardware. On the software side, MetroCluster is completely integrated into Data ONTAP, so no separate tools or interfaces are required. After the MetroCluster relationships are established, data and configuration are automatically and continuously replicated between the sites, so no manual effort is required to establish replication of newly provisioned workloads. This not only reduces the administrative effort required, it also eliminates the possibility of forgetting to replicate critical workloads.
These requirements must be satisfied to support this configuration:
- The maximum distance between two sites is 200 km.
- The maximum round-trip latency between the two sites must be less than 10 ms for Ethernet networks and less than 3 ms for SyncMirror replication.
- The storage network must be a minimum of 4 Gbps throughput between the two sites for ISL connectivity.
- ESXi hosts in the vMSC configuration should be configured with at least two different IP networks, one for storage and the other for management and virtual machine traffic. The Storage network handles NFS and iSCSI traffic between ESXi hosts and NetApp Controllers. The second network (VM Network) supports virtual machine traffic as well as management functions for the ESXi hosts. End users can choose to configure additional networks for other functionality such as vMotion/Fault Tolerance. VMware recommends this as a best practice, but it is not a strict requirement for a vMSC configuration.
- FC Switches are used for vMSC configurations where datastores are accessed through FC protocol, and ESX management traffic is on an IP network. End users can choose to configure additional networks for other functionality such as vMotion/Fault Tolerance. This is recommended as a best practice but is not a strict requirement for a vMSC configuration.
- For NFS/iSCSI configurations, a minimum of two uplinks for the controllers must be used. An interface group (ifgroup) should be created using the two uplinks in multimode configurations.
- The VMware datastores and NFS volumes configured for the ESX servers are provisioned on mirrored aggregates.
- vCenter Server must be able to connect to ESX servers at both sites.
- An HA cluster must not exceed 32 hosts.
- There is no specific license for MetroCluster functionality, including SyncMirror; it is included in the basic Data ONTAP license. Protocols and other features, such as SnapMirror, require licenses if used in the cluster. Licenses must be symmetrical across both sites. For example, SMB, if used, must be licensed in both clusters. Switchover does not work unless both sites have the same licenses.
- All nodes should be licensed for the same node-locked features.
- Infinite Volumes are not supported in a MetroCluster configuration.
- All 4 nodes in the DR group must be the same FAS or FlexArray model (for example, four FAS8020 or four FAS8060). It is not supported to mix FAS and FlexArray controllers (even of the same model number) in the same MetroCluster DR group.
- A MetroCluster TieBreaker machine should be deployed at a third site and must be able to access the storage controllers at Site 1 and Site 2 so that it can initiate a switchover in case of an entire site failure.
- vMSC certification testing was conducted on vSphere 5.0 and NetApp Data ONTAP version 8.1 operating in 7-Mode. For ESXi 5.5, vMSC certification testing was successfully completed on vSphere 5.5 and NetApp Data ONTAP version 8.3.
- For more information on NetApp MetroCluster design and implementation, see the Clustered Data ONTAP 8.3 MetroCluster Installation and Configuration Guide.
- For information on deploying vSphere 6 on NetApp MetroCluster, see vSphere 6 on NetApp MetroCluster 8.3.
- For information about NetApp in a vSphere environment, see NetApp Storage Best Practices for VMware vSphere.
- For more information on NetApp MetroCluster best practices, see the NetApp Technical Report, Best Practices for MetroCluster Design and Implementation.
Note: The preceding links were correct as of September 4, 2015. If you find a broken link, provide feedback and a VMware employee will update it.
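As an illustration of the interface-group requirement above, a multimode ifgroup might be created with the clustered Data ONTAP CLI along these lines. The node, ifgroup, and port names here are placeholders; consult the NetApp documentation for the exact procedure for your hardware:

```
cluster_A::> network port ifgrp create -node nodeA1 -ifgrp a0a -distr-func ip -mode multimode
cluster_A::> network port ifgrp add-port -node nodeA1 -ifgrp a0a -port e0c
cluster_A::> network port ifgrp add-port -node nodeA1 -ifgrp a0a -port e0d
```

License symmetry between the two clusters can then be verified by running `system license show` on each cluster and comparing the output.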
MetroCluster configurations protect data by using two physically separated, mirrored clusters, separated by a distance of up to 200 km. Each cluster synchronously mirrors the data and Storage Virtual Machine (SVM) configuration of the other. When a disaster occurs at one site, an administrator can activate the mirrored SVM and begin serving the mirrored data from the surviving site. In Data ONTAP 8.3.0, MetroCluster consists of a two-node HA pair at each site, allowing the majority of planned and unplanned events to be handled by a simple failover and giveback within the local cluster. Only in the event of a disaster (or for testing purposes) is a full switchover to the other site required. The switchover and corresponding switchback operations transfer the entire clustered workload between the sites.
This figure shows the basic MetroCluster configuration. There are two data centers, A and B, separated by a distance of up to 200 km, with ISLs running over dedicated fibre links. Each site has a cluster in place consisting of two nodes in an HA pair. In this example, cluster A at site A consists of nodes A1 and A2, and cluster B at site B consists of nodes B1 and B2. The two clusters and sites are connected through two separate networks that provide the replication transport. The cluster peering network is an IP network used to replicate cluster configuration information between the sites. The shared storage fabric is an FC connection used for storage and NVRAM synchronous replication between the two clusters. All storage is visible to all controllers through the shared storage fabric.
Note: This illustration is a simplified representation and does not show the redundant front-end components, such as Ethernet and Fibre Channel switches.
The vMSC configuration used in this certification program was configured with Uniform Host Access mode. In this configuration, the ESX hosts from a single site are configured to access storage from both sites.
In cases where RDMs are configured for virtual machines residing on NFS volumes, a separate LUN must be configured to hold the RDM mapping files. Ensure you present this LUN to all the ESX hosts.
vMSC test scenarios

This table outlines the vMSC test scenarios:

| Scenario | NetApp Controllers Behavior | VMware HA Behavior |
| --- | --- | --- |
| Controller single path failure | Controller path failover occurs. All LUNs and volumes remain connected. For FC datastores, path failover is triggered from the host, and the next available path to the same controller becomes active. All ESXi iSCSI/NFS sessions remain active in multimode configurations of two or more network interfaces. | No impact |
| ESXi single storage path failure | No impact on LUN and volume availability. The ESXi storage path fails over to the alternative path. All sessions remain active. | No impact |
| Site 1 or Site 2 single storage node failure | Because there is an HA pair at each site, the failure of one node transparently and automatically triggers failover to the other node. | No impact |
| MCTB VM failure | No impact on LUN and volume availability. All sessions remain active. | No impact |
| MCTB VM single link failure | No impact. Controllers continue to function normally. | No impact |
| Complete Site 1 failure, including ESXi and controller | In the case of a site-wide issue, the MetroCluster switchover operation allows immediate resumption of service by moving storage and client access from the Site 1 cluster to Site 2. The Site 2 partner nodes begin serving data from the mirrored plexes and the sync-destination Storage Virtual Machine (SVM). | Virtual machines on failed Site 1 ESXi nodes fail. HA restarts the failed virtual machines on ESXi hosts at Site 2. |
| Complete Site 2 failure, including ESXi and controller | In the case of a site-wide issue, the MetroCluster switchover operation allows immediate resumption of service by moving storage and client access from the Site 2 cluster to Site 1. The Site 1 partner nodes begin serving data from the mirrored plexes and the sync-destination Storage Virtual Machine (SVM). | Virtual machines on failed Site 2 ESXi nodes fail. HA restarts the failed virtual machines on ESXi hosts at Site 1. |
| Single ESXi failure (shutdown) | No impact. Controllers continue to function normally. | Virtual machines on the failed ESXi node fail. HA restarts the failed virtual machines on the surviving ESXi hosts. |
| Multiple ESXi host management network failure | No impact. Controllers continue to function normally. | A new master is selected within the network partition. Virtual machines remain running and do not need to be restarted. |
| Site 1 and Site 2 simultaneous failure (shutdown) and restoration | Controllers boot up and resync. All LUNs and volumes become available. All iSCSI sessions and FC paths to ESXi hosts are re-established, and virtual machines restart successfully. As a best practice, power on the NetApp controllers first and allow the LUNs/volumes to become available before powering on the ESXi hosts. | No impact |
| ESXi management network all ISL links failure | No impact to controllers. LUNs and volumes remain available. | If the HA host isolation response is set to Leave Powered On, virtual machines at each site continue to run because the storage heartbeat is still active. Partitioned hosts at the site without a Fault Domain Manager elect a new master. |
| All storage ISL links failure | No impact to controllers. LUNs and volumes remain available. When the ISL links are back online, the aggregates resync. | No impact |
| System Manager - Management Server failure | No impact. Controllers continue to function normally. NetApp controllers can be managed through the command line. | No impact |
| vCenter Server failure | No impact. Controllers continue to function normally. | No impact on HA. However, DRS rules cannot be applied. |