Example Configuration of LACP on VMware, Cisco, HP, Dell switches


Article ID: 324490


Products

VMware

Issue/Introduction

This article provides information on the concepts, limitations, and sample configurations of link aggregation, NIC teaming, Link Aggregation Control Protocol (LACP), and EtherChannel connectivity between ESXi and physical network switches, particularly Cisco, HP, and Dell.

Note: Several requirements must be considered before implementing any form of link aggregation. For more information on these requirements, see Host requirements for link aggregation (EtherChannel, port channel, or LACP) in ESXi (1001938).

Resolution

Configuring LACP within the vSphere Client

To configure a LAG on a vSphere Distributed Switch for load balancing:
  1. In the vSphere Client connected to vCenter Server, select the Networking tab.
  2. Select the vSphere distributed switch and click LACP.

    Note: LACP is only supported in vSphere on Distributed Switches (vDS).
     
  3. Click +NEW to add a new LAG.
  4. Select the number of uplinks that will be in the LAG per host.
  5. From the Load Balancing dropdown, select the correct load balancing policy. The policy must match the load balancing algorithm configured on the physical switch (see EtherChannel/LACP supported scenarios below).
  6. Click OK.
  7. Verify that there are adapters listed under Assigned Adapters in the LAG for each host.
    1. Click Actions on the distributed switch and select Add and Manage Hosts.
    2. Follow the wizard until you reach the Manage Physical Adapters section.
    3. Choose the appropriate adapters and ensure that the LAG is listed under Uplink for each.
    4. If the LAG is not listed, click Assign Adapter and assign the adapter to the LAG. Click OK and repeat for the remaining adapters, then complete the wizard.
Note: If the adapters being moved to the LAG are currently in use by a VMkernel interface, such as the management VMkernel, they must be moved to the LAG in stages; they cannot all be moved at once. See Configuring LACP on an Uplink Port Group for more information. As with any configuration change, this should be done during a maintenance window.
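
After the LAG is configured, it can be sanity-checked from the ESXi host itself; ESXi 5.5 and later include an esxcli namespace for LACP. This is an optional verification sketch, not part of the required procedure:

esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get

The first command lists the LAG configuration the host received from the distributed switch; the second shows per-uplink LACP negotiation state, including the partner switch's system ID and state flags.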

EtherChannel/LACP supported scenarios

  • One IP to many IP connections (for example, Host A making two connection sessions to Hosts B and C).
  • Many IP to many IP connections (for example, Hosts A and B making multiple connection sessions to Hosts C, D, and so on).

    Note: One IP to one IP connections over multiple NICs are not supported (Host A making a single connection session to Host B uses only one NIC).
     
  • Compatible with all ESXi VLAN configuration modes: VST, EST, and VGT. For more information on these modes, see VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines (1003806).
  • Supported Cisco configuration: EtherChannel Mode ON (enables the EtherChannel only, without LACP negotiation).
  • Supported HP configuration: Trunk Mode.
  • Supported switch aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination). The example after this list shows how this hash pins a given source/destination pair to a single uplink.
  • The virtual switch NIC teaming mode must match the load balancing algorithm used by the physical switch.
    • For the load balancing algorithm your physical switch uses, refer to the switch vendor's documentation.
  • Lower-model Cisco switches may default to MAC-SRC-DST and may require additional configuration. For more information, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
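
Because the IP-SRC-DST hash is deterministic, a given source/destination IP pair always maps to the same member link, which is why a one IP to one IP session cannot exceed the bandwidth of a single NIC. On many Catalyst IOS switches this can be demonstrated directly; availability and output format vary by platform, and the port-channel number and IP addresses below are examples only:

Switch# test etherchannel load-balance interface port-channel 1 ip 10.0.0.10 10.0.0.20
Would select Gi1/15 of Po1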

Cisco EtherChannel sample configuration

interface Port-channel1
switchport
switchport access vlan 100
switchport mode access
no ip address
!
interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode on
!
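
The configuration above builds a static channel (channel-group 1 mode on), which is what the IP hash teaming policy requires. If the ESXi side is instead a vDS LAG running LACP, the per-interface change on the switch is the channel-group mode; a sketch reusing the same example interface and VLAN:

interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode active
!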


Run this command to verify EtherChannel load balancing mode configuration:

Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip
        mpls label-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address
  MPLS: Label or IP
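
If the output reports src-dst-mac rather than src-dst-ip (the default on some lower-end Catalyst models, as noted earlier), the algorithm can be changed globally. This is a sketch for Catalyst IOS; the exact keyword set varies by platform:

Switch# configure terminal
Switch(config)# port-channel load-balance src-dst-ip
Switch(config)# end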


Switch# show etherchannel summary
Flags:  D - down          P - bundled in port-channel
        I - stand-alone   s - suspended
        H - Hot-standby (LACP only)
        R - Layer3        S - Layer2
        U - in use        f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
------+-------------+-----------+--------------------------
1      Po1(SU)       -           Gi1/15(P)  Gi1/16(P)
2      Po2(SU)       -           Gi1/1(P)   Gi1/2(P)

Switch# show etherchannel protocol
Channel-group listing:
-----------------------
Group: 1
----------
Protocol: - (Mode ON)

Group: 2
----------
Protocol: - (Mode ON)
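
The Protocol column shows - (Mode ON) because these channel-groups are static. When a channel-group runs LACP (channel-group mode active or passive), the Protocol column reads LACP instead, and the partner state can be inspected per member port:

Switch# show lacp neighbor

The output lists each bundled port with the partner switch's system ID, port priority, and state flags, which helps when a LAG on the vDS side fails to bundle.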

HP EtherChannel sample configuration

This configuration is specific to HP switches:
  • HP switches support only two modes of LACP:
    • ACTIVE
    • PASSIVE

      Note: LACP is only supported in vSphere with vSphere Distributed Switches.
       
  • Set the HP switch port mode to TRUNK to accomplish static link aggregation with ESXi.
  • TRUNK mode on HP switch ports is the only supported aggregation method compatible with the ESXi NIC teaming policy Route based on IP hash.
To configure a static portchannel on an HP switch using ports 10, 11, 12, and 13, run these commands:

conf
trunk 10-13 Trk1 Trunk


To verify your portchannel, run this command:

show trunk
 Load Balancing

  Port | Name       Type      | Group  Type
  ---- + ---------- --------- + -----  -----
  10   |            100/1000T | Trk1   Trunk
  11   |            100/1000T | Trk1   Trunk
  12   |            100/1000T | Trk1   Trunk
  13   |            100/1000T | Trk1   Trunk
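
The Type value Trunk above indicates a static (non-protocol) aggregation, which pairs with the IP hash teaming policy. To pair the same ports with a vDS LAG running LACP instead, create the trunk with the lacp keyword; a sketch reusing the example ports:

conf
trunk 10-13 trk1 lacp

Verify the result with show lacp, which displays the LACP partner and status for each port in the trunk.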


Dell EtherChannel sample configuration

For Dell switches, see the Dell article How to create Link Aggregation Groups (LAGs) on Dell Networking PowerConnect Switches.

Additional Information

Disclaimer: VMware is not responsible for the reliability of any data, opinions, advice, or statements made on third-party websites. Inclusion of such links does not imply that VMware endorses, recommends, or accepts any responsibility for the content of such sites.