Link Aggregation and LACP basics

From Thomas-Krenn-Wiki

Link Aggregation according to IEEE 802.1AX-2008 (formerly IEEE 802.3ad) is a standard for bundling multiple network connections in parallel. Compared to a conventional connection over a single cable, link aggregation offers higher availability and a higher possible transmission speed (depending on the respective load distribution algorithm). This article provides basic information on link aggregation and LACP; a concrete example can be found in the article Link Aggregation for the Modular Server.


The individual links of a link aggregation group must always be parallel point-to-point connections.

To be able to use link aggregation, the following prerequisites must be fulfilled.

All of the aggregated links must:

  • be in full duplex mode
  • use the same data transmission rates (at least 1 Gbit/s)
  • use parallel point-to-point connections
  • connect to precisely one endpoint on a switch or server. Link aggregation from multiple switches to one link-aggregated endpoint, such as with Nortel’s Split Multi-link Trunking (SMLT), is not possible.[1] The only exceptions are virtual switches consisting of multiple physical switches that behave externally like a single switch, such as the Cisco Virtual Switching System 1440 (VSS1440)[2] or the Juniper Virtual Chassis (3000 or 4000 series)[3].



The link aggregation sublayer is implemented in the network stack within the data link layer, specifically between the MAC client and MAC sublayers.

Link aggregation (LAG) in accordance with IEEE 802.1AX-2008 (previously IEEE 802.3ad) has the following properties:[4]

  • LAG provides automatic recovery when individual physical links fail. As long as at least one physical link exists, the LAG connection will continue to exist.
  • Data transmission will be distributed as frames over the physical links.
  • All frames forming part of a specific data communication packet will, however, be transmitted over the same physical link. This ensures that the individual frames of a data communication packet are received in the correct order and prevents mis-ordering.

Distribution of the Data Transmission

Link aggregation allows for the distribution of Ethernet frames to all physical links available to a LAG connection. Thereby, the potential data throughput will exceed the data rate of a single physical link.

The IEEE standard deliberately does not mandate a specific distribution algorithm (frame distribution). It only sets two requirements:

  • The order of frames belonging to a given conversation must not be changed.
  • Frames must not be duplicated.

The original quotation from Section 5.2.4 Frame Distributor describes this as follows:[5]

This standard does not mandate any particular distribution algorithm(s); however, any distribution algorithm shall ensure that, when frames are received by a Frame Collector as specified in 5.2.3, the algorithm shall not cause:
a) misordering of frames that are part of any given conversation, or
b) duplication of frames.
The above requirement to maintain frame ordering is met by ensuring that all frames that compose a given conversation are transmitted on a single link in the order that they are generated by the MAC Client; hence, this requirement does not involve the addition (or modification) of any information to the MAC frame, nor any buffering or processing on the part of the corresponding Frame Collector in order to reorder frames.

How evenly the frames are distributed across the physical links, and thus how close the achievable throughput comes to the theoretical maximum, depends on the specific implementation of link aggregation in a given switch or driver. FreeBSD, for example, uses a hash of the protocol headers for this. The hash includes the Ethernet source and destination MAC addresses, a VLAN tag (if present), and the IPv4 or IPv6 source and destination addresses.[6]
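The idea of hash-based distribution can be sketched as follows. This is an illustrative model of the approach described for FreeBSD, not its actual implementation: the field names, the use of CRC32, and the string-based key are assumptions for demonstration. The essential property is that all frames of one conversation (same header tuple) always hash to the same link, which preserves frame ordering without any reordering logic at the receiver.

```python
import zlib

def select_link(frame_headers: dict, num_links: int) -> int:
    """Pick a physical link for a frame by hashing its protocol headers.

    All frames with identical header fields map to the same link, so
    frame order within a conversation is preserved; different
    conversations spread across the available links.
    """
    # Field names are illustrative; real drivers hash the raw header
    # bytes, and the exact field set is implementation-specific.
    key = "|".join(
        str(frame_headers.get(field, ""))
        for field in ("src_mac", "dst_mac", "vlan_tag", "src_ip", "dst_ip")
    )
    return zlib.crc32(key.encode()) % num_links

flow = {"src_mac": "00:11:22:33:44:55", "dst_mac": "66:77:88:99:aa:bb",
        "vlan_tag": 100, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}

# The same conversation always lands on the same link.
assert select_link(flow, 4) == select_link(flow, 4)
```

Note that a single conversation can never exceed the data rate of one physical link; the aggregate speedup only materializes when many conversations run in parallel.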

Link Aggregation

Static Link Aggregation

With static link aggregation, all configuration settings are set up once, manually, on both participating LAG endpoints.

Annotation: VMware ESX/ESXi 4.0, 4.1 and ESXi 5.0 only support static link aggregation.[7] Since ESXi 5.1, dynamic link aggregation (LACP) is also supported.[8] However, certain restrictions apply to the use of LACP with ESXi 5.5.[9]

The IEEE standard describes controlling link aggregation in Section 5.3 Link Aggregation Control starting on page 23.[5]

Dynamic Link Aggregation - Link Aggregation Control Protocol (LACP)

Beyond static configuration, the Link Aggregation Control Protocol (LACP) allows the two members of an aggregation to exchange information about the link aggregation. This information is packetized in Link Aggregation Control Protocol Data Units (LACPDUs).

Each individual port can be configured for active or passive LACP.

  • Passive LACP: the port prefers not to transmit LACPDUs. It will only transmit LACPDUs when its counterpart uses active LACP (preference not to speak unless spoken to).
  • Active LACP: the port prefers to transmit LACPDUs and thereby to speak the protocol, regardless of whether its counterpart uses passive LACP or not (preference to speak regardless).

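The active/passive rule above reduces to a simple condition, sketched here for illustration (the function name and mode strings are assumptions, not part of the standard): LACPDUs flow, and a LAG can be negotiated, as soon as at least one side is active; two passive ports stay silent.

```python
def lacpdus_exchanged(port_a_mode: str, port_b_mode: str) -> bool:
    """Return True if two LACP ports will exchange LACPDUs.

    A passive port only speaks when spoken to, so LACPDUs are
    exchanged whenever at least one of the two ports is active.
    """
    return "active" in (port_a_mode, port_b_mode)

assert lacpdus_exchanged("active", "passive")
assert not lacpdus_exchanged("passive", "passive")
```

In practice this means that configuring both sides as passive is a common misconfiguration: each port waits for the other, and the dynamic LAG never comes up.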
In contrast to a static link aggregation, LACP provides the following advantages:[10]

  • A failure of a physical link is detected even when the point-to-point connection runs through a media converter, in which case the link status at the switch port stays up despite the failure. Because LACPDUs then no longer arrive over this connection, the link is removed from the link aggregate. This ensures that packets are not lost over the failed link.
  • Both of the devices can mutually confirm the LAG configuration. With static link aggregation, errors in the configuration or wiring will often not be detected as quickly.

The IEEE standard describes controlling link aggregation in Section 5.4 Link Aggregation Control (LACP) starting on page 30.[5]

Table of References



Author: Werner Fischer

Werner Fischer, working in the Knowledge Transfer team at Thomas-Krenn, completed his studies of Computer and Media Security at FH Hagenberg in Austria. He is a regular speaker at many conferences like LinuxTag, OSMC, OSDC, LinuxCon, and author for various IT magazines. In his spare time he enjoys playing the piano and training for a good result at the annual Linz marathon relay.

Related articles

Determine Server Heat Loss for Air Conditioning
Difference between Volt-amperes and Watts
Enable Intel 10 Gigabit Network Cards PXE Boot