Performing vMotion fails at 14% despite vmkping succeeding from source to target IP address (2042654)
Symptoms
- You are attempting a vMotion migration between two ESX/ESXi hosts. The vMotion task reaches 14%, then times out with this error message:
The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration.
- You have checked connectivity between the source and target hosts using the vmkping command and are able to ping successfully from source to destination and from destination to source.
- The correct vmkernel interface or interfaces have been checked off for vMotion.
- The hostd logs on the source host, located at /var/log, contain entries similar to:
[74312B90 verbose 'vm:/vmfs/volumes/1fdabcdd-7f8922bd/VMwaretest/VMwaretest.vmx'] Handling message _vmx3: Migration [a0a00d4:1357591370037703] failed to connect to remote host x.x.x.x from host y.y.y.y: Timeout
--> vMotion migration [a0a00d4:1357591370037703] failed to create a connection with remote host x.x.x.x: The ESX hosts failed to connect over the VMotion network
--> The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration.
- The vmkernel logs on the source host, located at /var/log, contain entries similar to:
WARNING: MigrateNet: 1282: 1362151976698641 D: failed to connect to remote host x.x.x.x from host y.y.y.y: Timeout
WARNING: Migrate: 269: 1362151976698641 D: Failed: The ESX hosts failed to connect over the VMotion network (0xbad010b) @0x0
WARNING: Migrate: 4998: 1362151976698641 D: Migration considered a failure by the VMX. It is most likely a timeout, but check the VMX log for the true error.
- In the /vmfs/volumes/ VM_Name/vmware.log file for the virtual machine that is being migrated, you see entries similar to:
vmx| Migrate_SetFailure: The vMotion failed because the destination host did not receive data from the source host on the vMotion network. Please check your vMotion network settings and physical network configuration and ensure they are correct.
[vob.migrate.net.connect.failed.status.addrs] Migration [c0a80148:1384914368408909] failed to connect to remote host x.x.x.x from host x.x.x.x: Network unreachable
[vob.vmotion.net.send.connect.failed.status] vMotion migration [c0a80148:1384914368408909] failed to create a connection with remote host x.x.x.x: The ESX hosts failed to connect over the VMotion network
[msg.moduletable.powerOnFailed] Module Migrate power on failed.
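The log entries above can be located quickly from the ESXi shell with grep. This is a minimal sketch: the temporary file and sample line below are stand-ins for illustration; on a real host you would grep /var/log/vmkernel.log (or /var/log/hostd.log) directly.

```shell
# Write one illustrative warning line to a temporary file; on a real host
# you would grep /var/log/vmkernel.log instead of this sample file.
log=/tmp/vmkernel_sample.log
cat > "$log" <<'EOF'
WARNING: MigrateNet: 1282: 1362151976698641 D: failed to connect to remote host x.x.x.x from host y.y.y.y: Timeout
EOF

# Count the connection-failure warnings; a non-zero count confirms the
# symptom described in this article.
grep -c "failed to connect to remote host" "$log"
```

Searching both hosts' logs for "failed to connect" quickly confirms which side is reporting the timeout.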
Cause
- When there are multiple vmkernel ports in the same network, the ESX/ESXi host may not use the vmkernel port checked off for vMotion when you run the vmkping command. The host uses the vmkernel port associated with that IP subnet in its routing/forwarding table. If there is a physical switch configuration problem, vmkping may show connectivity as working correctly even though the vmkernel port actually used for vMotion has no access to that IP network on the physical network.
- An incorrect vmkernel interface may be selected for vMotion. The ESX/ESXi host uses only the selected interface for vMotion. If that vmkernel interface is in the wrong IP subnet, or if the physical network is not configured correctly, the vMotion vmkernel interface may not be able to communicate with the destination host.
- The MTU size of the vMotion vmkernel interface may not match the MTU size configured at an upstream switch.
- If you are using multi-NIC vMotion, the vmkernel interfaces might be configured on different IP subnets. Multi-NIC vMotion in vSphere 5.x does not work correctly unless all vmkernel ports are in the same IP subnet and all are checked off for vMotion in the vSphere Client. For more information, see Multiple-NIC vMotion in vSphere 5 (2007467).
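A wrong-subnet vmkernel interface can be confirmed with simple address arithmetic: two interfaces are in the same subnet only if their IP addresses AND the netmask produce the same network address. The following portable shell sketch uses hypothetical addresses (not values from any particular host) to illustrate the check:

```shell
# Compute the network address of an IP/netmask pair by ANDing each octet.
# The addresses below are hypothetical examples for illustration only.
network_of() {
    ip=$1; mask=$2
    oldIFS=$IFS
    IFS=.
    set -- $ip
    i1=$1; i2=$2; i3=$3; i4=$4
    set -- $mask
    m1=$1; m2=$2; m3=$3; m4=$4
    IFS=$oldIFS
    echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}

src=$(network_of 192.168.2.10 255.255.255.0)
dst=$(network_of 192.168.2.20 255.255.255.0)
if [ "$src" = "$dst" ]; then
    echo "same subnet: $src"
else
    echo "different subnets: $src vs $dst"
fi
```

If the two network addresses differ, the vMotion interfaces cannot reach each other without routing, which multi-NIC vMotion in vSphere 5.x does not support.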
Resolution
- Ensure that all physical switch ports used for vMotion are configured for the correct VLAN, or have access to the correct IP subnet for vMotion on the physical network. In addition, ensure that the MTU size is the same for all ports used for vMotion in the physical network.
- Avoid using multiple vmkernel ports in the same subnet. The only exceptions are iSCSI multipathing and Multiple-NIC vMotion in vSphere 5.x.
- Ensure that vMotion is not enabled on multiple vmkernel port groups. For example, do not enable vMotion on both the Management port group and the vMotion port group.
- Ensure that the subnet mask is consistent across all hosts and ensure that there are no IP address conflicts in the vMotion network.
- If you are using Auto Deploy and answer files to assign MAC addresses, ensure that there are no duplicate MAC addresses assigned to the vmkernel adapters.
- Ensure that there are no firewall rules restricting the local vMotion subnet.
- Adhere to vMotion requirements and best practices. For more information, see the vSphere vMotion Networking Requirements section in vSphere 5.1 vCenter Server and Host Management guide.
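Duplicate MAC addresses, mentioned in the checklist above, can be spotted with standard text tools. A small sketch follows; the MAC values are made-up examples, and on a host you would collect the real list from the vmkernel adapter configuration rather than hard-coding it:

```shell
# Made-up list of vmkernel adapter MAC addresses; on a real host, gather
# these from the vmkernel adapter configuration instead.
macs='00:50:56:aa:bb:01
00:50:56:aa:bb:02
00:50:56:aa:bb:01'

# sort groups identical lines together; uniq -d prints only lines that
# appear more than once, i.e. the duplicated MAC addresses.
dups=$(printf '%s\n' "$macs" | sort | uniq -d)
if [ -n "$dups" ]; then
    echo "duplicate MAC(s) found: $dups"
else
    echo "no duplicate MACs"
fi
```

Any MAC printed by `uniq -d` is assigned to more than one adapter and should be corrected before retesting vMotion.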
- Check the ESX/ESXi host's routing table to determine which vmkernel interface is used for each IP subnet. If multiple vmkernel ports exist in the same IP subnet, the one listed in the routing table is the one vmkping uses:
# esxcfg-route -l
Network Netmask Gateway Interface
192.168.2.0 255.255.255.0 Local Subnet vmk1
192.168.3.0 255.255.255.0 Local Subnet vmk2
192.168.46.0 255.255.255.0 Local Subnet vmk0
default 0.0.0.0 192.168.46.2 vmk0
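The routing table above can also be checked non-interactively. This sketch reproduces the sample `esxcfg-route -l` output as a here-document and uses awk to report which vmkernel interface serves a given destination network; on a host you would pipe the live command output into awk instead:

```shell
# Look up the interface (last column) for the 192.168.2.0 route in output
# shaped like the esxcfg-route -l example above.
route_if=$(awk '$1 == "192.168.2.0" { print $NF }' <<'EOF'
Network       Netmask        Gateway       Interface
192.168.2.0   255.255.255.0  Local Subnet  vmk1
192.168.3.0   255.255.255.0  Local Subnet  vmk2
192.168.46.0  255.255.255.0  Local Subnet  vmk0
default       0.0.0.0        192.168.46.2  vmk0
EOF
)
echo "192.168.2.0 is reached via $route_if"
```

If the interface reported for the vMotion subnet is not the vmkernel port checked off for vMotion, vmkping results do not reflect the path vMotion will actually use.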
- If using ESXi 5.1, run the vmkping -I command to select the desired vmkernel interface to send ICMP traffic:
# vmkping -I vmkX x.x.x.x
Note: Replace vmkX with the correct vmkernel interface, for example vmk1. Replace x.x.x.x with the IP address you want to ping.
- Diagnose MTU-related issues by running the vmkping command:
# vmkping -I vmkX -d -s 8972 target.ip.address
where:
vmkX is the vmkernel interface used for vMotion on the host generating the ping traffic (source)
target.ip.address is the IP address of the vMotion interface on the other host (destination)
-d = do not fragment the frame
-s = use this frame size
Note: An -s size of 8972 leaves room for the IP and ICMP header overhead, so the resulting frame still fits a 9000-byte MTU and goes through. You can also use a smaller size such as 8000; anything over 1500 is a valid test for jumbo frames. Remember that some physical switches need to be configured for an MTU of 9216 to pass 9000-byte frames.
For more information on Jumbo frames, see Enabling and verifying IOAT and Jumbo frames (1003712).
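The relationship between the -s value and the frame's MTU is plain arithmetic: the ICMP payload passed with -s, plus 8 bytes of ICMP header and 20 bytes of IP header, must fit within the MTU. A quick sanity check:

```shell
# ICMP payload size passed to vmkping via -s, plus protocol overhead.
payload=8972
icmp_header=8
ip_header=20
packet=$((payload + icmp_header + ip_header))
echo "resulting IP packet size: $packet bytes"

# Largest -s value that still fits inside a 9000-byte MTU.
mtu=9000
echo "max -s for MTU $mtu: $((mtu - icmp_header - ip_header))"
```

This is why -s 8972 is the standard test value for a 9000-byte jumbo-frame path: any larger payload forces the do-not-fragment ping to be dropped.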
- If using Multiple-NIC vMotion in vSphere 5.x, ensure that the vmkernel ports are all in the same IP subnet. For more information on configuring Multiple-NIC vMotion, see Multiple-NIC vMotion in vSphere 5 (2007467).