VMware support with NetApp MetroCluster (1001783)

Details

This article provides information about using NetApp MetroCluster with VMware HA and vMotion. This configuration was verified by NetApp and is directly supported by NetApp.
 
Note: This solution is not directly supported by VMware. For issues with this configuration, contact NetApp directly. VMware offers best effort support to the partner to fix vSphere-related problems that may be found in the field. It is the partner's responsibility to verify that the configuration functions with future vSphere major and minor releases, as VMware does not guarantee that compatibility with future releases is maintained.

Solution

What is a MetroCluster?

MetroCluster allows synchronous mirroring of volumes between two storage controllers, providing storage high availability and disaster recovery. A MetroCluster configuration consists of two NetApp FAS controllers, residing either in the same datacenter or at two different physical locations, clustered together. It provides recovery from any single storage component failure and from certain multiple-point failures, as well as single-command recovery in the event of a complete site disaster. For additional information, such as the maximum supported distance and configuration requirements, contact NetApp.
 
The following graphics show the validated configurations for using VMware HA and FT with NetApp MetroCluster:
 
Stretched MetroCluster
 
 
Fabric MetroCluster
 

What happens to an ESX host in the event of a single storage component failure?

Note: LUN 1 is used as an example in a MetroCluster configuration with two storage controllers, each with two ports. The paths to LUN 1 appear as follows:

vmhba1:0:1 (FAS controller 1, port 1)
vmhba1:1:1 (FAS controller 2, port 2)
vmhba2:0:1 (FAS controller 1, port 2)
vmhba2:1:1 (FAS controller 2, port 1)

  • Storage controller failure (disk shelves have not failed)

    For ESX hosts accessing the NetApp FAS storage controller via the iSCSI or NFS protocol, the surviving storage controller performs a "takeover": the target IP address (used as the iSCSI target or for NFS datastore mounting) is brought up on the surviving storage controller. No manual intervention is required on the ESX hosts, and there is no disruption to data availability.

    For ESX hosts accessing the NetApp FAS storage controller via the FCP protocol, the HBAs see the two clustered FAS storage controller nodes as one storage array, with the same WWNN (World Wide Node Name). A properly configured LUN sees paths as listed in the example above.

    Under normal operation, LUN 1 is active on the vmhba1:0:1 path. In the event of a storage controller failure, the path vmhba1:0:1 becomes unavailable and the active path fails over to vmhba1:1:1. With the appropriate multipathing policy, no manual intervention is required on the ESX hosts, and there is again no disruption to data availability.
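    As a sketch, the path states for a LUN can be inspected from the ESX 3.x service console with esxcfg-mpath. The device and path names below match the LUN 1 example; the output shown is illustrative, not captured from a live system:

    ```shell
    # List all paths and their states for each LUN (ESX 3.x service console).
    esxcfg-mpath -l

    # Illustrative output for the LUN 1 example: four paths, with
    # vmhba1:0:1 active and the remaining paths available for failover.
    #   Disk vmhba1:0:1 has 4 paths and policy of Most Recently Used
    #    FC ... vmhba1:0:1 On active preferred
    #    FC ... vmhba1:1:1 On
    #    FC ... vmhba2:0:1 On
    #    FC ... vmhba2:1:1 On
    ```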

  • Disk Shelf failure (Storage controller has not failed)

    In the event of an entire disk shelf failure, the storage controller accesses the mirrored copy of the data on the interconnected enclosure. The ESX host continues to use the same HBA/NIC to access the same storage controller and port, so no manual intervention is required for ESX hosts accessing the NetApp FAS storage array via the NFS, iSCSI, or FCP protocol.

What happens to an ESX host in the event of complete filer/array failure?

In the event of a complete storage failure at one site (the storage controller and its associated local disk shelves), you must perform a manual failover of the MetroCluster. Contact NetApp for documentation and detailed steps, or see NetApp Technical Report TR-3788. Additional steps are required on the ESX hosts, depending on the version of NetApp Data ONTAP running on the FAS storage controllers.
  • For NetApp FAS storage controllers running Data ONTAP 7.2.4 or later:

    After you perform a manual MetroCluster failover, the UUIDs of the mirrored LUNs are retained. Perform a rescan on each ESX host to detect the VMFS volumes residing on the mirrored LUNs. When the VMFS volumes are detected, power on the virtual machines.
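    As a sketch, the rescan can be performed from the ESX 3.x service console with the following commands (the HBA name is illustrative; repeat for each HBA on each host):

    ```shell
    # Rescan the HBA for new or changed LUNs (ESX 3.x service console).
    esxcfg-rescan vmhba1

    # Refresh the VMFS volume list so the detected volumes are mounted.
    vmkfstools -V
    ```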

  • For NetApp FAS storage controllers running Data ONTAP releases older than 7.2.4:

    After you perform a manual MetroCluster failover, the mirrored LUNs do not retain the same LUN UUIDs as the original LUNs. When these LUNs contain a VMFS-3 file system, ESX 3.x detects the volumes as residing on snapshot LUNs. Similarly, if a raw LUN mapped as an RDM (Raw Device Mapping) is replicated or mirrored through MetroCluster, the RDM metadata entry must be recreated to point to the replicated or mirrored LUN.

    To ensure that the ESX hosts have access to the VMFS volumes on the mirrored LUNs, set the advanced VMkernel option LVM.DisallowSnapshotLUN to 0 and perform a rescan on each ESX host. After the ESX hosts detect the VMFS volumes, power on the virtual machines.
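As a sketch, these steps can be performed from the ESX 3.x service console as follows (the HBA name is illustrative; repeat the rescan for each HBA on each host):

```shell
# Allow VMFS volumes on LUNs that ESX detects as snapshots to be mounted.
esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLUN

# Verify the setting took effect.
esxcfg-advcfg -g /LVM/DisallowSnapshotLUN

# Rescan the HBA so the mirrored LUNs and their VMFS volumes are detected.
esxcfg-rescan vmhba1
```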

Running VMware HA and FT in a MetroCluster environment

The following configurations and operational scenarios are tested and supported by VMware and NetApp. For more information, see the NetApp Technical Report TR-3788.

A MetroCluster configuration supports active workloads at both sites. The "failed site" refers to the site that experiences a failure or complete outage; the "remote site" refers to the site that has not failed.

Note: For details of configuration requirements and maximum distance supported, contact NetApp or see the NetApp Technical Report TR-3788.

#  | Failure Scenario                                                                                  | Data Availability Impact
1  | Complete loss of power to disk shelf                                                              | None
2  | Loss of one link on one disk loop                                                                 | None
3  | Failure and failback of storage controller                                                        | None
4  | Loss of mirrored storage, network isolation                                                       | None
5  | Total network isolation, including all ESX hosts (FT or non-FT enabled), and loss of hard drive   | Applications and data on non-FT virtual machines running on the affected ESX hosts become available after the virtual machines automatically restart on the surviving nodes of the VMware HA cluster. FT-enabled virtual machines run uninterrupted.
6  | Loss of all ESX hosts in one site                                                                 | Applications and data on non-FT virtual machines running on the affected ESX hosts become available after the virtual machines automatically restart on the surviving nodes of the VMware HA cluster. FT-enabled virtual machines run uninterrupted.
7  | Loss of one Brocade Fabric Interconnect switch (applicable to the continuous availability solution with Fabric MetroCluster only) | None
8  | Loss of one ISL between the Brocade Fabric Interconnect switches (applicable to the continuous availability solution with Fabric MetroCluster only) | None
9  | Loss of an entire site                                                                            | Applications and data on the virtual machines (both FT-enabled and non-FT) running in the failed site become available after the force takeover command is executed from the surviving site and the virtual machines are manually powered on.
10 | Loss of all ESX hosts in one site and loss of storage controller in the other site                | None
11 | Loss of disk pool 0 in both sites                                                                 | None
12 | Loss of storage controller in one site and loss of disk pool 0 in the other                       | None
 
Note: If Distributed Resource Scheduler (DRS) is enabled and set to Fully Automated or Partially Automated for the HA cluster in a MetroCluster environment (Stretched or Fabric), virtual machines may access storage that is not local to their ESX hosts; that is, data access goes through the storage controller at the remote site, across the fabric interconnect.

Disclaimer: The partner products referenced in this article are hardware devices developed and supported by the stated partners. Use of these products is also governed by the partners' end user license agreements. You must obtain the application, support, and licensing for these products from the partners. For more information, see Support Information in this article.



Keywords

snapshot, DR, Disaster Recovery, Metro Cluster, NetApp, VMFS3
