Microsoft Clustering on VMware vSphere: Guidelines for supported configurations (1037959)

Details

VMware provides customers additional flexibility and choice in architecting high-availability solutions. Microsoft has clear support statements for its clustering solutions on VMware.

Additionally, VMware provides guidelines in terms of storage protocols and number of nodes supported by VMware on vSphere, particularly for specific clustering solutions that access shared storage. Other clustering solutions that do not access shared storage, such as Exchange Cluster Continuous Replication (CCR) and Database Availability Group (DAG), can be implemented on VMware vSphere just like on physical systems without any additional considerations.

This article provides clear guidelines and vSphere support status for running various Microsoft clustering solutions and configurations.

Solution

VMware vSphere support for Microsoft clustering solutions on VMware products

This table outlines VMware vSphere support for Microsoft clustering solutions:

(Columns FC through NFS show storage protocol support; the final two columns show supported shared disk types. Numbers in parentheses refer to the table notes below.)

Shared disk solutions:

| Clustering Solution | vSphere Support | VMware HA Support | vMotion/DRS Support | Storage vMotion Support | MSCS Node Limits | FC | In-Guest OS iSCSI | Native iSCSI | In-Guest OS SMB | FCoE | NFS | Shared Disk: RDM | Shared Disk: VMFS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSCS with Shared Disk | Yes | Yes (1) | No | No (2) | 5 (5.1 and 5.5) | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| Exchange Single Copy Cluster | Yes | Yes (1) | No | No (2) | 5 (5.1 and 5.5) | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| SQL Clustering | Yes | Yes (1) | No | No (2) | 5 (5.1 and 5.5) | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |
| SQL AlwaysOn Failover Cluster Instance | Yes | Yes (1) | No | No (2) | 5 (5.1 and 5.5) | Yes (7) | Yes | Yes (6) | Yes (5) | Yes (4) | No | Yes (2) | Yes (3) |

Non-shared disk solutions:

| Clustering Solution | vSphere Support | VMware HA Support | vMotion/DRS Support | Storage vMotion Support | MSCS Node Limits | FC | In-Guest OS iSCSI | Native iSCSI | In-Guest OS SMB | FCoE | NFS | Shared Disk: RDM | Shared Disk: VMFS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Network Load Balance | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Exchange CCR | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| Exchange DAG | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |
| SQL AlwaysOn Availability Group | Yes | Yes (1) | Yes | Yes | Same as OS/app | Yes | Yes | Yes | N/A | N/A | No | N/A | N/A |

Table notes:
  1. When DRS affinity/anti-affinity rules are used. For more information, see the HA/DRS specific configuration for clustered virtual machines section in this article.
  2. For details on shared disk configurations, see the Disk Configurations section in this article.
  3. Supported in Cluster in a Box (CIB) configurations only. For more information, see the Considerations for Shared Storage Clustering section in this article.
  4. In vSphere 5.5, native FCoE is supported. In vSphere 5.1 Update 2, a two-node cluster using FCoE with Windows Server 2008 and Windows Server 2012 is supported. In vSphere 5.1 Update 1 and 5.0 Update 3, two-node cluster configurations with Cisco CNA cards (VIC 1240/1280) and driver version 1.5.0.8 are supported on the Windows Server 2008 R2 SP1 64-bit guest OS. For more information, see the VMware Hardware Compatibility Guide.
  5. Windows Server 2012 Failover Clustering only.
  6. vSphere 5.5 only.
  7. In vSphere 5.1 Update 2, up to a five-node cluster with FibreChannel is supported with Windows Server 2012.
Notes:
  • For vSphere 5.5 MSCS support enhancements, see MSCS support enhancements in vSphere 5.5 (2052238).
  • Microsoft Clustering Services (MSCS) virtual machines use a shared Small Computer System Interface (SCSI) bus. A virtual machine using a shared bus cannot accept hot changes to its virtual hardware, because such changes disrupt the heartbeat between the MSCS nodes. These activities are not supported and cause MSCS node failover:
    • vMotion migration
    • Increasing the size of disks
    • Hot adding memory
    • Hot adding CPU
    • Using snapshots
    • Pausing and/or resuming the virtual machine state
    • Memory over-commitment leading to virtual swapping or memory ballooning

      Note: For more information on MSCS limitations, see the vSphere MSCS Setup Limitation section in the Setup for Failover Clustering and Microsoft Cluster Service VMware white paper.

  • For the purpose of this document, SQL Mirroring is not considered by VMware to be a clustering solution. VMware fully supports SQL Mirroring and SQL Server AlwaysOn Availability Group on vSphere with no specific restrictions.
  • MSCS clusters are not supported for VMware Paravirtual SCSI Controllers.
  • Storage vMotion and vMotion migrations for Shared Disks configurations are not supported and will fail when the migration is attempted. For more information, see Troubleshooting migration compatibility error: Device is a SCSI controller engaged in bus-sharing (1003797).
  • ESXi 5.1 and 5.5 support up to five-node clusters for Windows Server 2008 SP2 and above, but earlier ESXi versions support only two-node clusters. For more information, see Setup for Failover Clustering and Microsoft Cluster Service - ESXi 5.1 vCenter Server 5.1.
  • A Microsoft cluster consisting of both physical Windows server nodes and virtual machine nodes is supported. For more information, see the Cluster Physical and Virtual Machines section in the Setup for Failover Clustering and Microsoft Cluster Service guide.
To avoid unnecessary cluster node failovers due to system disk I/O latency, virtual disks must be created using the EagerZeroedThick format on VMFS volumes only, regardless of the underlying protocol.

Note: Although EagerZeroedThick VMDKs can be created on VAAI-capable NAS arrays using a suitable VAAI NAS plug-in, NFS is not a supported storage protocol with Microsoft Clustering.

Commonly used Microsoft clustering solutions

These are common Microsoft clustering solutions used by VMware users in virtual machines:
  • Microsoft Clustering Services: MSCS or Windows Failover Clustering is a clustering function that provides failover and availability at the operating system level. Commonly clustered applications include:

    • Microsoft Exchange Server
    • Microsoft SQL Server
    • File and print services
    • Custom applications

  • Microsoft Network Load Balancing (IP load balancing): Microsoft Network Load Balancing (NLB) is suited for stateless applications or for the front-end tier of multi-tiered applications, such as web servers fronting back-end database and application servers. A physical alternative is a load-balancing appliance, such as those available from F5.
Note: Sharing RDMs between virtual machines without a clustering solution is not supported.

VMware vSphere support for running Microsoft clustered configurations

This table outlines VMware vSphere support for running Microsoft clustered configurations:

(Numbers in parentheses refer to the table notes below.)

| Clustering Solution | Support Status | Clustering Version | vSphere Version | Notes |
|---|---|---|---|---|
| MSCS with shared disk | Supported | Windows Server 2003 (1); Windows Server 2008; Windows Server 2008 R2; Windows Server 2012 (2) | 4.x/5.x | See additional considerations |
| Network Load Balance | Supported | Windows Server 2003 SP2; Windows Server 2008; Windows Server 2008 R2 | 4.x/5.x | |
| SQL clustering | Supported | Windows Server 2003 (1); Windows Server 2008; Windows Server 2008 R2; Windows Server 2012 (2) | 4.x/5.x | See additional considerations |
| SQL AlwaysOn Failover Cluster Instance | Supported | Windows Server 2008 SP2 or higher; Windows Server 2008 R2 SP1 or higher; Windows Server 2012 (2) | 4.x/5.x | See additional considerations |
| SQL AlwaysOn Availability Group | Supported | Windows Server 2008 SP2 or higher; Windows Server 2008 R2 SP1 or higher; Windows Server 2012 (3); Windows Server 2012 R2 (3)(4) | 4.x/5.x | |
| Exchange Single Copy Cluster | Supported | Exchange 2003 (1); Exchange 2007 | 4.x/5.x | See additional considerations |
| Exchange CCR | Supported | Windows Server 2003 (1); Windows Server 2008 SP1 or higher; Exchange 2007 SP1 or higher | 4.x/5.x | |
| Exchange DAG | Supported | Windows Server 2008 SP2 or higher; Windows Server 2008 R2 or higher; Windows Server 2012 (3); Windows Server 2012 R2 (3)(4); Exchange 2010; Exchange 2013 | 4.x/5.x | |

Table notes:
  1. This table lists the support status by VMware on vSphere. Check with your vendor as the status of third-party software vendor support may differ. For example, while VMware supports configurations using MSCS on clustered Windows Server 2003 virtual machines, Microsoft does not support it. The same applies for the support status of the operating system version. Support for software that has reached end-of-life may be limited or non-existent depending on the life cycle policies of the respective software vendor. VMware advises against using end-of-life products in production environments.
  2. Supported only with in-guest SMB and in-guest iSCSI for vSphere 5.1 and earlier. This restriction does not apply to vSphere 5.1 Update 2 and 5.0 Update 3. (See relevant footnotes under table 1 above)
  3. In-guest clustering solutions that do not use a shared-disk configuration, such as SQL Mirroring, SQL Server AlwaysOn Availability Group, and Exchange Database Availability Group (DAG), do not require explicit support statements from VMware.
  4. vSphere 5.0 Update 1 and later, 5.1 Update 1 and later, and 5.5 and later (where Guest OS is supported)
Additional notes:
  • System disk (C: drive) virtual disks can be on local VMFS or SAN-based VMFS datastores only, regardless of the underlying protocol. System disk virtual disks must be created with the EagerZeroedThick format. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x.
  • In Windows 2012, cluster validation completes with this warning: Validate Storage Spaces Persistent Reservation. You can safely ignore this warning.

For support information on Microsoft clustering for MSCS, SQL, and Exchange, go to the Windows Server Catalog and select the appropriate dropdown.

Windows Server 2012 failover clustering is not supported with ESXi-provided shared storage (such as RDMs or virtual disks) in vSphere 5.1 and earlier. For more information, see the Miscellaneous Issues section of the vSphere 5.1 Release Notes. VMware vSphere 5.5 provides complete support for 2012 failover clustering.

For related information, see these Microsoft pages:
Note: The preceding links were correct as of February 11, 2014. If you find a link is broken, provide feedback and a VMware employee will update the link.

Considerations for shared storage clustering

Storage protocols
  • Fibre Channel: In vSphere 5.1 and earlier, configurations using shared storage for quorum and/or data must use Fibre Channel (FC) based RDMs (physical mode for a cluster across boxes, "CAB"; virtual mode for a cluster in a box, "CIB"). RDMs on storage other than FC (such as NFS or iSCSI) are not supported in 5.1 and earlier. In vSphere 5.5, quorum or data disks can also reside on iSCSI or FCoE. Virtual disk based shared storage is supported with CIB configurations only and must be created using the EagerZeroedThick option on VMFS datastores.

  • Native iSCSI (not in the guest OS): Supported in vSphere 5.5. VMware does not support the use of ESXi/ESX host iSCSI initiators, also known as native iSCSI (hardware or software) with MSCS in vSphere 5.1 or earlier.

  • In-guest iSCSI software initiators: VMware fully supports a configuration of MSCS using in-guest iSCSI initiators, provided that all other configuration meets the documented and supported MSCS configuration. Using this configuration in VMware virtual machines is relatively similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.

  • In-guest SMB (Server Message Block) protocol: VMware fully supports a configuration of MSCS using in-guest SMB, provided that all other configuration meets the documented and supported MSCS configuration. Using this configuration in VMware virtual machines is relatively similar to using it in physical environments. vMotion has not been tested by VMware with this configuration.

  • FCoE: FCoE is fully supported in vSphere 5.5. However, in earlier versions, FCoE is only supported in very specific configurations. For more information, see note 4 in the Microsoft clustering solutions table above.
Virtual SCSI adapters

Shared storage must be attached to a dedicated virtual SCSI adapter in the clustered virtual machine. For example, if the system disk (drive C:) is attached to SCSI0:0, the first shared disk would be attached to SCSI1:0, and the data disk attached to SCSI1:1.

The shared storage SCSI adapter for Windows Server 2008 and higher must be the LSI Logic SAS type, while earlier Windows versions must use the LSI Logic Parallel type.
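
As an illustration, the adapter layout described above corresponds to virtual machine configuration (.vmx) settings like the following sketch. The file names are hypothetical, and the comments are added here for explanation only; the bus-sharing mode depends on whether the cluster is across boxes (CAB) or in a box (CIB):

```
# System disk on its own controller (SCSI0, not shared)
scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"        # LSI Logic SAS (Windows Server 2008 and higher)
scsi0:0.fileName = "myVM.vmdk"

# Dedicated controller (SCSI1) for the shared cluster disks
scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1.sharedBus = "physical"           # "physical" for CAB, "virtual" for CIB
scsi1:0.fileName = "quorum.vmdk"       # first shared disk (SCSI1:0)
scsi1:1.fileName = "data.vmdk"         # data disk (SCSI1:1)
```

In practice these settings are normally produced through the vSphere Client when adding the disks and controllers, rather than by editing the .vmx file directly.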

Disk configurations
  • RDM: Configurations using shared storage for Quorum and/or Data must be on Fibre Channel (FC) based RDMs (physical mode for cluster across boxes "CAB", virtual mode for cluster in a box "CIB") in vSphere 5.1 and earlier. RDMs on storage other than FC (iSCSI and FCoE) are only supported in vSphere 5.5. However, in earlier versions, FCoE is supported in very specific configurations. For more information, see note 4 in the Microsoft clustering solutions table above.

  • VMFS: Virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the EagerZeroedThick option. This can be done using the vmkfstools command from the console, the vSphere CLI, or from the user interface. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x.

    To create EagerZeroedThick storage with the vmkfstools command:

    1. Log into the console of the host or launch the VMware vSphere CLI.
    2. For example, to create a 10 GB file in datastore1 named myVMData.vmdk, run the command:

      • Using the console:

        vmkfstools -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

        Note: Replace 10g with the desired size.

      • Using the vSphere CLI:

        vmkfstools.pl --server ESXHost --username username --password passwd -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

    To create EagerZeroedThick storage with the user interface:

    1. Using the vSphere Client, select the virtual machine for which you want to create the new virtual disk.
    2. Right-click the virtual machine and click Edit Settings.
    3. From the virtual machine properties dialog box, click Add to add new hardware.
    4. In the Add Hardware dialog box, select Hard Disk from the device list.
    5. Select Create a new virtual disk and click Next.
    6. Select the disk size you want to create.
    7. Select the datastore with the virtual machine or select a different datastore by clicking Specify a datastore and browsing to find the desired datastore.
    8. To create an EagerZeroedThick disk, select Support clustering features such as Fault Tolerance.

      Note: Step 8 must be the last configuration step. Changes to datastores after selecting Support clustering features such as Fault Tolerance cause it to become deselected.

    9. Complete the wizard to create the virtual disk.
Non-shared storage clustering

Non-shared storage clustering refers to configurations where no shared storage is required to store the application's data or quorum information. Data is replicated to other cluster nodes (for example, CCR) or distributed among the nodes (for example, DAG).

These configurations do not require additional VMware considerations regarding a specific storage protocol or number of nodes, and can be deployed on virtual machines in the same way as on physical servers.


HA/DRS specific configuration for clustered virtual machines

Affinity/Anti-affinity rules

For virtual machines in a cluster, you must create virtual machine to virtual machine affinity or anti-affinity rules. Virtual machine to virtual machine affinity rules specify which virtual machines should be kept together on the same host (for example, a cluster of MSCS virtual machines on one physical host). Virtual machine to virtual machine anti-affinity rules specify which virtual machines should be kept apart on different physical hosts (for example, a cluster of MSCS virtual machines across physical hosts).

For a cluster of virtual machines on one physical host, use affinity rules. For a cluster of virtual machines across physical hosts, use anti-affinity rules. For more information, see the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x.

To configure affinity or anti-affinity rules:
  1. In the vSphere Client, right-click the cluster in the inventory and click Edit Settings.
  2. In the left pane of the Cluster Settings dialog under VMware DRS, click Rules.
  3. Click Add.
  4. In the Rule dialog, enter a name for the rule.
  5. From the Type dropdown, select a rule:
    • For a cluster of virtual machines on one physical host, select Keep Virtual Machines Together.
    • For a cluster of virtual machines across physical hosts, select Separate Virtual Machines.
  6. Click Add.
  7. Select the two virtual machines to which the rule applies and click OK.
  8. Click OK.

Multipathing configuration

Path Selection Policy (PSP)

Round Robin PSP is not supported for LUNs mapped by RDMs used with shared storage clustering in vSphere 5.1 and earlier. If you choose to use Round Robin PSP with your storage arrays, or if the vSphere version in use defaults to Round Robin PSP for the array in use, change the PSP that claims the RDM LUNs to another PSP. For more information, see Changing a LUN to use a different Path Selection Policy (PSP) (1036189).

With native multipathing (NMP) in vSphere versions prior to 5.5, clustering is not supported when the path policy is set to Round Robin. For more information, see vSphere MSCS Setup Limitations in the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x.

In vSphere 5.5, Round Robin PSP (PSP_RR) support is introduced. For more information, see MSCS support enhancements in vSphere 5.5 (2052238).
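
The PSP change described above can be sketched as a shell snippet. The device identifier is hypothetical, and because esxcli runs only against an ESXi host, this sketch prints the commands it would run rather than executing them:

```shell
# Hypothetical NAA ID of the RDM-mapped LUN; list real devices with:
#   esxcli storage nmp device list
DEVICE="naa.600601601234567890"

# Dry run: print each command instead of executing it on the host.
run() { echo "$@"; }

# Switch the LUN from Round Robin to Most Recently Used, a PSP
# supported for MSCS RDMs on vSphere 5.1 and earlier.
run esxcli storage nmp device set --device "$DEVICE" --psp VMW_PSP_MRU

# Confirm the active PSP for the device.
run esxcli storage nmp device list --device "$DEVICE"
```

On an actual host, remove the `run` wrapper and substitute the real device ID; the appropriate non-Round-Robin PSP for your array may differ, so check your array vendor's recommendations.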

Path Selection Policy (PSP) using third-party Multipathing Plug-ins (MPPs)

An N+1 cluster configuration is one in which cluster nodes on physical machines are backed by nodes in virtual machines (that is, one node in each cluster node pair is a virtual machine). In this configuration, the physical node cannot be configured with multipathing software. For more information, see your third-party vendor's best practices and support documentation.

Failover Clustering and Microsoft Cluster Service setup guides

For more information, see the guide for your version:


Update History

  • 06/21/2012 - Updated 5.x for CCR and DAG for supportability
  • 10/31/2012 - Modified table column label for In-Guest iSCSI
  • 01/29/2013 - Updated table with 5.1 MSCS 5-node limitations
  • 05/29/2013 - Added "vMotion is not yet supported with this configuration" to the in-guest iSCSI section
  • 05/31/2013 - Corrected the Windows Server 2012 failover clustering statement to "not supported"
  • 06/19/2013 - Various edits to clarify support
  • 06/21/2013 - Changed table 1 "SMB" column header to "In-Guest OS SMB"
  • 09/23/2013 - Added vMotion support with 5.5 and clarified RDM limited support with FCoE (note 4)
  • 09/24/2013 - Added 5.0 Update 3 to note 4
  • 03/16/2014 - Corrected footnote 2 under table 2 to reflect the original context
  • 03/20/2014 - Removed references to Windows Server 2012 R2 from shared storage configurations; merged FCoE footnotes into note 4 of table 1
