
vMotion over Distance support with EMC VPLEX Metro (1021215)


Purpose

This article provides information about support for vMotion over Distance operations over VPLEX Metro using a Distributed Virtual Volume. It also specifies VMware ESXi/ESX host and VPLEX configuration details and the expected behavior under different operational scenarios.

Resolution

About EMC VPLEX

EMC VPLEX is a storage federation solution that can be stretched across two geographically dispersed data centers separated by synchronous distances. It provides simultaneous access to storage devices at two sites through the creation of a VPLEX Distributed Virtual Volume. Starting with VPLEX GeoSynchrony 5.2 and ESXi 5.5, round-trip time for a non-uniform host access configuration is supported up to 10 milliseconds. For detailed supported configurations, refer to the latest VPLEX EMC Simple Support Matrix (ESSM) on support.emc.com.
 
For more information about Distributed Virtual Volume, see Additional Information in this article.

Provided all vMotion over Distance requirements are met and a Distributed Virtual Volume is configured on VPLEX, a vMotion over Distance operation can be performed between Data center 1 and Data center 2. You can use vMotion over Distance to move a virtual machine from one data center to another to avoid disasters, migrate workloads to save power, or balance workloads.
 
Notes:
  • See Requirements in this article for more information about the requirements for vMotion over Distance.
  • See Best practice documents in this article for more information about Distributed Virtual Volume on VPLEX.
This graphic demonstrates vMotion over Distance over a VPLEX Distributed Virtual Volume:
 
 

Requirements

To perform a vMotion over Distance operation successfully over synchronous distance, these requirements must be met:
  • An IP network with a minimum bandwidth of 622 Mbps is required.
  • The maximum latency between the two VMware vSphere servers cannot exceed 5 milliseconds (ms). Round-trip time for a non-uniform host access configuration is supported up to 10 milliseconds with VPLEX GeoSynchrony 5.2 and ESXi 5.5 using NMP or PowerPath. For detailed supported configurations, refer to the latest VPLEX EMC Simple Support Matrix (ESSM) on support.emc.com.
  • The source and destination ESXi/ESX servers must have a private network on the same IP subnet and broadcast domain.
  • The IP subnet on which the virtual machine resides must be accessible from both the source and destination ESXi/ESX servers. This requirement is very important because a virtual machine retains its IP address when it moves to the destination ESXi/ESX server to help ensure that its communication with the outside world (for example, with TCP clients) continues smoothly after the move.
  • The data storage location including the boot device used by the virtual machine must be active and accessible by both the source and destination ESXi/ESX servers at all times.
  • Access from vCenter Server and vSphere Client to both the ESXi/ESX servers must be available to accomplish the migration.

Note: See Best practice documents in this article for additional requirements for VPLEX Distributed Virtual Volume.
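The numeric requirements above lend themselves to a quick sanity check. The following sketch validates a candidate configuration against the documented thresholds (622 Mbps minimum bandwidth, 5 ms RTT baseline, 10 ms only for the GeoSynchrony 5.2 / ESXi 5.5 configurations, and vMotion interfaces on a shared subnet). The function name and inputs are illustrative assumptions, not VMware tooling; real validation is done with vSphere and VPLEX utilities.

```python
import ipaddress

def check_vmotion_over_distance(bandwidth_mbps, rtt_ms,
                                src_vmotion_ip, dst_vmotion_ip,
                                subnet, extended_rtt_supported=False):
    """Return a list of requirement violations (empty list means OK).

    `extended_rtt_supported` models the GeoSynchrony 5.2 + ESXi 5.5
    (NMP/PowerPath) configurations that allow up to 10 ms RTT.
    """
    problems = []
    if bandwidth_mbps < 622:
        problems.append("link bandwidth below required 622 Mbps")
    rtt_limit = 10 if extended_rtt_supported else 5
    if rtt_ms > rtt_limit:
        problems.append(f"RTT {rtt_ms} ms exceeds {rtt_limit} ms limit")
    net = ipaddress.ip_network(subnet)
    for ip in (src_vmotion_ip, dst_vmotion_ip):
        if ipaddress.ip_address(ip) not in net:
            problems.append(f"vMotion interface {ip} not in subnet {subnet}")
    return problems

# Example: a compliant configuration reports no violations
print(check_vmotion_over_distance(1000, 4, "10.0.0.1", "10.0.0.2",
                                  "10.0.0.0/24"))  # -> []
```

An 8 ms link, for instance, passes only when `extended_rtt_supported=True`, mirroring the GeoSynchrony 5.2 / ESXi 5.5 exception described above.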

Supported use cases for vMotion over Distance with VPLEX Metro

This table shows the supported cases with VPLEX configuration:
  • Simultaneous access to a shared Distributed Virtual Volume from two separate ESXi/ESX clusters: Supported
  • vMotion between a host in ESXi/ESX cluster 1 / data center 1 and a host in ESXi/ESX cluster 2 / data center 2, leveraging the shared Distributed Virtual Volume: Supported

Tested Scenarios

When performing vMotion over Distance over VPLEX Metro using a Distributed Virtual Volume, these failure scenarios are tested and supported.
 
Note: Boot from SAN configuration using Distributed Virtual Volume is not supported.
 
On creation of a VPLEX distributed volume, a winner VPLEX cluster must be assigned. In the event of VPLEX inter-cluster communication failure, the winner VPLEX cluster continues to service I/Os destined to the VPLEX distributed volume. The loser VPLEX cluster does not service I/Os received on the distributed volume.
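The winner-cluster rule above can be captured in a few lines. This is a toy model of the behavior described in this article, not a VPLEX API; the function and site names are illustrative assumptions.

```python
def io_allowed(cluster, winner, inter_cluster_link_up):
    """Return True if `cluster` may service I/O on the distributed volume.

    While the inter-cluster link is healthy, both clusters serve I/O to
    their local mirror leg. During a partition, only the designated
    winner keeps serving I/O; the loser suspends it to avoid split-brain.
    """
    if inter_cluster_link_up:
        return True
    return cluster == winner

# Example: inter-cluster link failure with Data center 1 set as winner
assert io_allowed("dc1", winner="dc1", inter_cluster_link_up=False) is True
assert io_allowed("dc2", winner="dc1", inter_cluster_link_up=False) is False
```

This is why the scenario table below shows manual intervention at the destination during a network partition: the loser cluster deliberately stops serving I/O until an administrator resumes it.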

The VPLEX behavior and the impact on ESXi/ESX servers documented below assume that the VPLEX cluster at the source data center (Data center 1) is set as the winner for the VPLEX Distributed Virtual Volume.
 
  • Scenario: Loss of ESXi/ESX server at source
    VPLEX behavior: No impact.
    Impact on ESXi/ESX hosts: Virtual machines that were running on the local ESXi/ESX server can be registered and restarted on the destination ESXi/ESX server.

  • Scenario: Loss of ESXi/ESX server at destination
    VPLEX behavior: No impact.
    Impact on ESXi/ESX hosts: Virtual machines that were running on the remote ESXi/ESX server can be registered and restarted on the source ESXi/ESX server.

  • Scenario: VPLEX cluster failure at source
    VPLEX behavior: I/O can be resumed on the VPLEX cluster at the destination.
    Impact on ESXi/ESX hosts: Virtual machines fail on ESXi/ESX servers at both source and destination. They can be restarted on the destination ESXi/ESX server.

  • Scenario: VPLEX cluster failure at destination
    VPLEX behavior: VPLEX continues to process I/O at the source. When the VPLEX cluster at the destination data center is back online, any changes written during the outage are replicated to the destination's Distributed Virtual Volume mirror leg.
    Impact on ESXi/ESX hosts: No impact on the source ESXi/ESX server. Virtual machines on the destination ESXi/ESX server fail; they can be restarted on the source ESXi/ESX server.

  • Scenario: Total Data center 1 failure (total site failure)
    VPLEX behavior: I/O can be resumed on the VPLEX cluster at the destination data center.
    Impact on ESXi/ESX hosts: Virtual machines fail at the destination ESXi/ESX server. Virtual machines at the source and destination ESXi/ESX servers can be restarted on the destination ESXi/ESX server.

  • Scenario: Total Data center 2 failure (total site failure)
    VPLEX behavior: VPLEX continues to process I/O at the source data center. When the VPLEX cluster at the destination data center is back online, any changes written during the outage are replicated to the destination's Distributed Virtual Volume mirror leg.
    Impact on ESXi/ESX hosts: vMotion fails with no interruption to the source ESXi/ESX server. Virtual machines at the destination ESXi/ESX server can be restarted on the source ESXi/ESX server with manual intervention.

  • Scenario: Inter-site network failure (network partition)
    VPLEX behavior: The winner VPLEX cluster at Data center 1 continues to function.
    Impact on ESXi/ESX hosts: The destination ESXi/ESX server cannot perform I/O to VPLEX Distributed Virtual Volumes; manual intervention is required to resume I/O at the destination data center. No impact on the source ESXi/ESX server.

  • Scenario: VPLEX inter-cluster communication link failure (network partition)
    VPLEX behavior: The winner VPLEX cluster at Data center 1 continues to function.
    Impact on ESXi/ESX hosts: The destination ESXi/ESX server cannot perform I/O to VPLEX Distributed Virtual Volumes; manual intervention is required to resume I/O at the destination.

  • Scenario: VPLEX director failure
    VPLEX behavior: No impact. I/O continues on the remaining directors.
    Impact on ESXi/ESX hosts: No impact. Under some conditions, a director failure may cause a vMotion that is already in progress to abort; however, the virtual machine continues to run at the source and the vMotion can be re-initiated and completed as expected.

  • Scenario: VPLEX management server failure
    VPLEX behavior: No impact.
    Impact on ESXi/ESX hosts: No impact.

  • Scenario: Redundant front-end path failure
    VPLEX behavior: No impact.
    Impact on ESXi/ESX hosts: No impact.

  • Scenario: Redundant back-end path failure
    VPLEX behavior: No impact.
    Impact on ESXi/ESX hosts: No impact.

  • Scenario: Back-end array failure at Data center 1
    VPLEX behavior: No impact. VPLEX automatically starts a rebuild when the failed back-end array is back online.
    Impact on ESXi/ESX hosts: ESXi/ESX may observe slower I/O responses on the Distributed Virtual Volume during the rebuild. The rebuild parameters can be adjusted to minimize the effect on host I/O response time; contact EMC for more details.

  • Scenario: Back-end array failure at Data center 2
    VPLEX behavior: No impact. VPLEX automatically starts a rebuild once the failed back-end array is back online.
    Impact on ESXi/ESX hosts: ESXi/ESX may observe slower I/O responses on the Distributed Virtual Volume during the rebuild. The rebuild parameters can be adjusted to minimize the effect on host I/O response time; contact EMC for more details.
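Several rows above describe the same recovery pattern: while one mirror leg is unreachable, writes land only on the surviving leg and are tracked, and when the failed leg returns, the changed blocks are copied over. The sketch below models that incremental resynchronization. It is a toy illustration of the behavior described in the table, with invented names, not the actual VPLEX rebuild implementation.

```python
class DistributedVolume:
    """Toy model of a two-leg distributed mirror with incremental rebuild."""

    def __init__(self, size_blocks):
        self.legs = {"dc1": [0] * size_blocks, "dc2": [0] * size_blocks}
        self.online = {"dc1": True, "dc2": True}
        self.dirty = set()  # blocks written while a leg was unreachable

    def write(self, block, value):
        # Writes reach every reachable leg; if a leg is down, remember
        # which blocks changed so only those need to be resynchronized.
        for site, leg in self.legs.items():
            if self.online[site]:
                leg[block] = value
        if not all(self.online.values()):
            self.dirty.add(block)

    def recover(self, site):
        # Incremental rebuild: copy only the blocks changed during the
        # outage from the surviving leg, then bring the leg back online.
        src = "dc1" if site == "dc2" else "dc2"
        for block in self.dirty:
            self.legs[site][block] = self.legs[src][block]
        self.dirty.clear()
        self.online[site] = True

vol = DistributedVolume(8)
vol.online["dc2"] = False   # back-end outage at Data center 2
vol.write(3, 42)            # write continues on the surviving leg
vol.recover("dc2")          # changed blocks replicate to the mirror leg
print(vol.legs["dc2"][3])   # -> 42; the legs have converged
```

The table's note that rebuilds can slow host I/O corresponds to the copy loop in `recover`: in a real array that traffic competes with host I/O, which is why the rebuild parameters are tunable.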
 

Best practice documents

EMC best practice documents for vMotion over Distance using VPLEX:

Note: The preceding links were correct as of February 13, 2015. If you find a link is broken, provide feedback and a VMware employee will update the link.

Additional Information

This table defines terms used in this article:
 
  • Distributed Virtual Volume: A VPLEX virtual volume with complete, synchronized copies of data (mirrors) exposed through two geographically separated VPLEX clusters. Distributed Virtual Volumes can be accessed simultaneously by servers at distant data centers, thus allowing vMotion over Distance.
  • Winner Cluster: On creation of a VPLEX Distributed Virtual Volume, a winner VPLEX cluster must be assigned. In the event of a VPLEX inter-cluster communication failure, the winner VPLEX cluster continues to service I/O destined for the VPLEX Distributed Virtual Volume; the loser VPLEX cluster does not service I/O received on the Distributed Virtual Volume. This behavior prevents split-brain issues and data corruption in case of a VPLEX inter-cluster network partition.

 
Note: The preceding link was correct as of February 13, 2015. If you find the link is broken, provide feedback and a VMware employee will update the link.
 

Update History

02/06/2015 - Added ESXi 5.x and vCenter Server 5.x to Products

