Host requirements for link aggregation (etherchannel, port channel, or LACP) in ESXi

Article ID: 324555

Products

VMware vCenter Server, VMware vSphere ESXi

Issue/Introduction

These concepts are used in an ESXi network environment. To achieve network redundancy, load balancing, and failover, you must:

  • Enable link aggregation on the physical switch.
    Note: Link aggregation is also known as EtherChannel, Ethernet trunk, port channel, LACP, vPC, and Multi-Link Trunking.
     
  • Set up the ESXi virtual switch configuration to be compatible with these settings (a verification sketch follows this list).
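
As a rough illustration, the following pyVmomi sketch reports the teaming policy of each standard virtual switch so you can verify compatibility before enabling link aggregation on the physical switch. The host name and credentials are placeholders; this is a minimal sketch assuming the official pyVmomi SDK, not a definitive procedure.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='esxi.example.com', user='root', pwd='changeme', sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.configManager.networkSystem.networkInfo.vswitch:
            # 'loadbalance_ip' corresponds to "Route based on IP hash"
            teaming = vswitch.spec.policy.nicTeaming
            print(host.name, vswitch.name, teaming.policy)
    view.Destroy()
finally:
    Disconnect(si)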


Environment

VMware ESX 4.1.x
VMware vSphere ESXi 6.5
VMware vCenter Server 4.0.x
VMware VirtualCenter 2.0.x
VMware ESX 4.0.x
VMware vCenter Server 6.0.x
VMware vCenter Server 5.0.x
VMware vSphere ESXi 6.7
VMware ESX Server 3.5.x
VMware vSphere ESXi 5.5
VMware ESXi 4.0.x Installable
VMware vSphere ESXi 5.1
VMware vSphere ESXi 6.0
VMware vCenter Server 4.1.x
VMware ESXi 3.5.x Embedded
VMware vSphere ESXi 5.0
VMware ESXi 3.5.x Installable
VMware ESXi 4.1.x Installable
VMware ESXi 4.0.x Embedded
VMware ESX Server 3.0.x
VMware VirtualCenter 2.5.x
VMware vCenter Server 6.7.x
VMware ESXi 4.1.x Embedded
VMware vCenter Server 5.1.x
VMware vCenter Server 5.5.x

Resolution

ESXi requirements and limitations for link aggregation:

  • An ESXi host supports NIC teaming only on a single physical switch or on stacked switches.
    • Link aggregation is never supported across disparate trunked switches.
  • The physical switch must be set to perform static (mode ON) 802.3ad link aggregation, and the virtual switch must have its load balancing method set to Route based on IP hash (a configuration sketch follows this list).
    • Enabling Route based on IP hash without 802.3ad aggregation, or vice versa, disrupts networking, so make the change on the virtual switch first. During the disruption the service console is unavailable, but the physical switch management interface still is, so you can then enable aggregation on the switch ports involved to restore networking.
    • Note: Because of the network disruption, make link aggregation changes during a maintenance window.
  • Do not use link aggregation for iSCSI software multipathing. iSCSI software multipathing requires just one uplink per VMkernel port, and link aggregation presents more than one.
  • Do not use beacon probing with IP hash load balancing.
  • Do not configure standby or unused uplinks with IP hash load balancing.
  • VMware supports only one EtherChannel bond per vSphere Standard Switch (vSS).
  • ESXi supports LACP on vSphere Distributed Switches (vDS) only.
  • In vSphere Distributed Switch 5.5 and later, all LACP load balancing algorithms are supported.
    • The ESXi load balancing algorithm should match the physical switch load balancing algorithm. For questions about which load balancing algorithm the physical switch uses, refer to the physical switch vendor's documentation.
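
To illustrate the virtual switch side of the change, here is a minimal pyVmomi sketch that sets Route based on IP hash and clears beacon probing and standby uplinks, in line with the requirements above. The host name, credentials, and the vSwitch0 target are placeholder assumptions.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='esxi.example.com', user='root', pwd='changeme', sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    view.Destroy()
    netsys = host.configManager.networkSystem
    for vswitch in netsys.networkInfo.vswitch:
        if vswitch.name != 'vSwitch0':  # hypothetical target switch
            continue
        spec = vswitch.spec
        teaming = spec.policy.nicTeaming
        teaming.policy = 'loadbalance_ip'              # Route based on IP hash
        teaming.failureCriteria.checkBeacon = False    # no beacon probing with IP hash
        teaming.nicOrder.standbyNic = []               # no standby uplinks with IP hash
        netsys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
finally:
    Disconnect(si)

Per the ordering requirement above, make this virtual switch change first, then enable static link aggregation on the corresponding physical switch ports.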

Limitations of LACP in vSphere:

  • LACP is only supported on vSphere Distributed Switches.
  • LACP is not supported for software iSCSI multipathing.
  • LACP configuration settings are not present in Host Profiles. 
  • Running LACP inside any guest OS (including nested ESXi hosts) is not supported.
  • LACP cannot be used in conjunction with the ESXi Dump Collector.
    • For this feature to work, the vmkernel port used for management purposes must be on a vSphere Standard Switch.
  • Port Mirroring cannot be used in conjunction with LACP to mirror LACPDU packets used for negotiation and control.
  • The teaming health check does not work for LAG ports because the LACP protocol itself ensures the health of the individual LAG ports. However, the VLAN and MTU health checks can still check LAG ports.
  • Enhanced LACP support is limited to a single LAG handling the traffic per distributed port group (dvPortGroup).
  • You can create up to 64 LAGs on a distributed switch. A host can support up to 64 LAGs.
    • Note: The number of LAGs that you can actually use depends on the capabilities of the underlying physical environment and the topology of the virtual network. For example, if the physical switch supports up to four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG.
  • LACP is currently unsupported with SR-IOV.
  • As shown in the table below, Basic LACP (LACP v1) is supported only on vSphere 6.5 and earlier. Upgrading ESXi to 7.0 may result in the physical switch disabling the LAG ports on ESXi hosts that use Basic LACP.
  • LACP compatibility with vDS:

    vCenter Server version | vDS version             | Host version | Compatibility
    -----------------------|-------------------------|--------------|-----------------------------------------------
    vCenter Server 7.0     | vDS 7.0/6.6/6.5         | ESXi 7.0     | Only supports LACP v2
    vCenter Server 6.7     | vDS 6.6/6.5/6.0         | ESXi 6.7     | Only supports LACP v2
    vCenter Server 6.5     | vDS 6.5/6.0/5.5/5.1/5.0 | ESXi 6.5     | LACP v1 on vDS 5.1; LACP v2 on vDS 5.5/6.0/6.5
    vCenter Server 6.0     | vDS 6.0/5.5/5.1/5.0     | ESXi 6.0     | LACP v1 on vDS 5.1; LACP v2 on vDS 5.5/6.0
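
Before an upgrade, a quick pyVmomi sketch such as the following can report each distributed switch's LACP API version, where singleLag corresponds to Basic LACP (v1) and multipleLag to Enhanced LACP (v2). The vCenter address and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        if isinstance(dvs, vim.dvs.VmwareDistributedVirtualSwitch):
            # 'singleLag' = Basic LACP (v1), 'multipleLag' = Enhanced LACP (v2)
            print(dvs.name, dvs.config.lacpApiVersion)
    view.Destroy()
finally:
    Disconnect(si)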

Note: As with any networking change, there is a chance of network disruption, so a maintenance window is recommended. This is especially true on a vSphere Distributed Switch (vDS), because the Distributed Switch is owned by vCenter Server and hosts alone cannot change the vDS if the connection to vCenter Server is lost. Enabling LACP can complicate vCenter or host management recovery in a production-down scenario, because the LACP connection may need to be broken to move back to a Standard Switch (LACP is not supported on a Standard Switch).
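
For completeness, the following hedged pyVmomi sketch shows what creating a LAG on an Enhanced LACP (v2) distributed switch can look like through the vSphere API. The vDS name (dvSwitch1), LAG name, uplink count, and load balancing algorithm are illustrative assumptions; as the note above says, perform such changes only during a maintenance window.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == 'dvSwitch1')  # hypothetical vDS name
    view.Destroy()
    lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
        name='lag1',    # hypothetical LAG name
        mode='active',  # or 'passive', to match the physical switch configuration
        uplinkNum=2,    # must not exceed what the physical switch port channel supports
        loadbalanceAlgorithm='srcDestIpTcpUdpPortVlan')
    spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
        lacpGroupConfig=lag, operation='add')  # 'remove' deletes a LAG, e.g. during rollback
    dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[spec])  # returns a vCenter task
finally:
    Disconnect(si)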

Additional Information

For translated versions of this article, see: