
EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX

Purpose

This article provides information on the concepts, limitations, and some sample configurations of link aggregation, NIC teaming, Link Aggregation Control Protocol (LACP), and EtherChannel connectivity between ESXi/ESX hosts and physical network switches, with a focus on Cisco and HP.

Resolution



Note: There are a number of requirements which need to be considered before implementing any form of link aggregation. For more information on these requirements, see Host requirements for link aggregation for ESXi and ESX (1001938).

Link aggregation concepts:
  • EtherChannel: This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links. For additional information on Cisco EtherChannel, see the EtherChannel Introduction by Cisco. 
  • LACP or IEEE 802.3ad: The Link Aggregation Control Protocol (LACP) is defined in the IEEE 802.3ad specification as a method to control the bundling of several physical ports together to form a single logical channel. LACP allows a network device to negotiate automatic bundling of links by sending LACP packets to the peer (a directly connected device that also implements LACP); a sample LACP switch configuration is shown after this list. For more information on LACP, see the Link Aggregation Control Protocol whitepaper by Cisco.

    Note: LACP is only supported in vSphere 5.1, 5.5 and 6.0 using vSphere Distributed Switches (VDS) or the Cisco Nexus 1000v.
  • EtherChannel vs. 802.3ad: EtherChannel and the IEEE 802.3ad standard are very similar and accomplish the same goal. Apart from the fact that EtherChannel is Cisco proprietary and 802.3ad is an open standard, there are only a few differences between the two.
  • For more information on EtherChannel implementation, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
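
For comparison, this is a minimal, illustrative sketch of what the member-port configuration for an LACP (802.3ad) bundle might look like on a Cisco IOS switch. The interface names, channel-group number, and VLAN are examples only, and on the ESXi side this applies only when a vSphere Distributed Switch is used, as noted above; the key difference from a static EtherChannel is channel-group ... mode active instead of mode on:

! Illustrative LACP sketch; adjust interfaces, channel-group number, and VLAN to your environment
interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
channel-group 1 mode active
!
interface GigabitEthernet1/2
switchport
switchport access vlan 100
switchport mode access
channel-group 1 mode active
!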

EtherChannel supported scenarios:
  • One IP to many IP connections. (Host A making two connection sessions to Host B and C)
  • Many IP to many IP connections. (Hosts A and B making multiple connection sessions to Hosts C, D, and so on)

    Note: One IP to one IP connections over multiple NICs are not supported (a connection session from Host A to Host B uses only one NIC).
  • Compatible with all ESXi/ESX VLAN configuration modes: VST, EST, and VGT. For more information on these modes, see VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines (1003806).
  • Supported Cisco configuration: EtherChannel Mode ON (enable EtherChannel only)
  • Supported HP configuration: Trunk Mode
  • Supported switch Aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination)
  • Supported Virtual Switch NIC Teaming mode: IP HASH. However, see this note:

    Note: The LACP support in vSphere Distributed Switch 5.1 supports only IP hash load balancing. In vSphere Distributed Switch 5.5 and later, all the load balancing algorithms of LACP are supported:

    • Do not use beacon probing with IP HASH load balancing.
    • Do not configure standby or unused uplinks with IP HASH load balancing.
    • vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 and later supports multiple LAGs. 
  • Lower-model Cisco switches may have MAC-SRC-DST set by default and may require additional configuration; a sample command to change this is shown after this list. For more information, see the Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches article from Cisco.
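
If a switch defaults to a MAC-based hash, the load-distribution method can usually be changed globally. This is an illustrative sketch for IOS-based Catalyst switches; the exact command and supported keywords vary by platform and software version, so verify against your switch documentation:

Switch# configure terminal
Switch(config)# port-channel load-balance src-dst-ip
Switch(config)# end
Switch# show etherchannel load-balance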

This is a Cisco EtherChannel sample configuration:

! Logical port-channel (EtherChannel) interface
interface Port-channel1
switchport
switchport access vlan 100
switchport mode access
no ip address
!
! Physical member port; mode on creates a static EtherChannel (no PAgP/LACP negotiation)
interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode on
!


ESX Server and Cisco switch sample topology and configuration:



Run these commands to verify the EtherChannel load-balancing configuration and status:

Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip
        mpls label-ip
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address
  MPLS: Label or IP

Switch# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated

Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
------+-------------+-----------+--------------------------
1      Po1(SU)          -        Gi1/15(P)  Gi1/16(P)
2      Po2(SU)          -        Gi1/1(P)   Gi1/2(P)

Switch# show etherchannel protocol
Channel-group listing:
-----------------------
Group: 1
----------
Protocol: - (Mode ON)
Group: 2
----------
Protocol: - (Mode ON)


HP switch sample configuration

This configuration is specific to HP switches:
  • HP switches support only two modes of LACP: 
    • ACTIVE
    • PASSIVE

      Note: LACP is only supported in vSphere 5.1, 5.5 and 6.0 with vSphere Distributed Switches and on the Cisco Nexus 1000V.
  • Set the HP switch port mode to TRUNK to accomplish static link aggregation with ESXi/ESX.
  • TRUNK mode on HP switch ports is the only supported aggregation method compatible with the ESXi/ESX NIC teaming mode IP hash.

To configure a static port channel on an HP switch using ports 10, 11, 12, and 13, run these commands:

conf
trunk 10-13 Trk1 Trunk


To verify your portchannel, run this command:

ProCurve# show trunk

 Load Balancing

  Port | Name      Type      | Group Type
  ---- + --------- --------- + ----- -----
  10   |           100/1000T | Trk1  Trunk
  11   |           100/1000T | Trk1  Trunk
  12   |           100/1000T | Trk1  Trunk
  13   |           100/1000T | Trk1  Trunk


Configuring load balancing within the vSphere/VMware Infrastructure Client

To configure vSwitch properties for load balancing:
  1. Click the ESXi/ESX host.
  2. Click the Configuration tab.
  3. Click the Networking link.
  4. Click Properties.
  5. Click the virtual switch in the Ports tab and click Edit.
  6. Click the NIC Teaming tab.
  7. From the Load Balancing dropdown, select Route based on ip hash (see the note below).
  8. Verify that there are two or more network adapters listed under Active Adapters.
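
The same teaming policy can also be applied from the ESXi command line. This is a hedged sketch using esxcli; the vSwitch and port group names are examples, the option syntax may differ between ESXi releases, and (per the note below) the policy must be set on both the vSwitch and the port group that carries the management VMkernel port:

# Illustrative only: vSwitch0 and "Management Network" are example names
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash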



Note: The LACP support in vSphere Distributed Switch 5.1 supports only IP hash load balancing. In vSphere Distributed Switch 5.5 and later, all the load balancing algorithms of LACP are supported:
  • You must set NIC teaming to IP HASH on both the vSwitch and the port group that contains the VMkernel management port. For additional information on NIC teaming with EtherChannel, see the Additional Information section.
  • Do not use beacon probing with IP HASH load balancing.
  • Do not configure standby or unused uplinks with IP HASH load balancing.
  • vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 and later supports multiple LAGs.
  • ESX/ESXi hosts running on a blade system do not require IP hash load balancing if an EtherChannel exists between the blade chassis and the upstream switch. IP hash is only required if an EtherChannel exists between the blade and the internal chassis switch, or if the blade is operating in network pass-through mode with an EtherChannel to the upstream switch. For more information on these scenarios, contact your blade hardware vendor.
Note: The preceding links were correct as of December 22, 2015. If you find a link is broken, provide feedback and a VMware employee will update the link.

Additional Information

For more information on NIC teaming with EtherChannel, see NIC teaming using EtherChannel leads to intermittent network connectivity in ESXi (1022751).

LACP is supported on vSphere ESXi 5.1, 5.5, and 6.0 on vSphere Distributed Switches only. For more information, see Enabling or disabling LACP on an Uplink Port Group using the vSphere Web Client (2034277) and the What's New in VMware vSphere 5.1 - Networking white paper.

Removing an EtherChannel configuration from a running ESX/ESXi host

To remove an EtherChannel configuration, there must be only one active network adapter on the vSwitch/dvSwitch. Ensure that the other host NICs in the EtherChannel configuration are disconnected (link down).
With only a single network card online, you can then remove the port-channel configuration from the physical network switch and change the NIC teaming setting on the vSwitch/dvSwitch from IP HASH to Port ID. For more information about teaming, see NIC teaming in ESXi and ESX (1004088).
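
For example, once only a single uplink is still connected, the teaming policy can be reverted from the ESXi command line before the port channel is removed on the physical switch. This is a hedged sketch; vSwitch0 and the port group name are example values and the option syntax may differ between ESXi releases:

# Illustrative only: revert the teaming policy to the default (port ID) setting
esxcli network vswitch standard policy failover set -v vSwitch0 -l portid
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l portid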
