Question

Brocade VDX 6720 vLAG

  • 14 November 2019
  • 8 replies
  • 1214 views

NOS Version: 4.1.3d

A server is connected by two links to two Brocade VDX 6720 switches, and load balancing across the vLAG is uneven. If the server is connected with both links to a single switch instead, there is no problem: traffic is distributed evenly.

 

sh int tengi 1/0/41

Rate info:
    Input 326.962956 Mbits/sec, 49440 packets/sec, 3.27% of line-rate
    Output 666.638788 Mbits/sec, 92225 packets/sec, 6.67% of line-rate

 

sh int tengi 2/0/41

Rate info:
    Input 324.905664 Mbits/sec, 44864 packets/sec, 3.25% of line-rate
    Output 8.917360 Mbits/sec, 3498 packets/sec, 0.09% of line-rate
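Quantifying the counters above (plain arithmetic on the numbers shown): ingress to the vLAG members is balanced almost perfectly, while egress is almost entirely on Te 1/0/41.

```python
# Rates of the two vLAG member ports from the `show interface`
# snippets above (Mbit/s). Plain arithmetic, no assumptions.
out_1 = 666.638788   # Te 1/0/41 Output
out_2 = 8.917360     # Te 2/0/41 Output
in_1 = 326.962956    # Te 1/0/41 Input
in_2 = 324.905664    # Te 2/0/41 Input

egress_share = out_1 / (out_1 + out_2)
ingress_share = in_1 / (in_1 + in_2)
print(f"Te 1/0/41 egress share:  {egress_share:.1%}")   # ~98.7%
print(f"Te 1/0/41 ingress share: {ingress_share:.1%}")  # ~50.2%
```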


8 replies

Userlevel 3

Can you give us more input?

E.g. the running configuration on the VDX side:

show interface port-channel $PORT_ID
show port-channel detail
show running-config interface port-channel
show running-config interface $MEMBER

Also what OS is the server in question running and how is traffic distribution configured?

 


 

swr2500-1# sh int Po 3
Port-channel 3 is up, line protocol is up
Hardware is AGGREGATE, address is 0027.f827.4cef
    Current address is 0027.f827.4cef
Description: LACP_20GE_bras05
Interface index (ifindex) is 671088643
Minimum number of links to bring Port-channel up is 1
MTU 9000 bytes
LineSpeed Actual     : 20000 Mbit
Allowed Member Speed : 10000 Mbit
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 3w2d00h
Queueing strategy: fifo
Receive Statistics:
    149500613136 packets, 138326116470191 bytes
    Unicasts: 149499167734, Multicasts: 1444181, Broadcasts: 1221
    64-byte pkts: 13216543107, Over 64-byte pkts: 39220461980, Over 127-byte pkts: 3844289506
    Over 255-byte pkts: 2080036007, Over 511-byte pkts: 2597560066, Over 1023-byte pkts: 65881018237
    Over 1518-byte pkts(Jumbo): 22660704234
    Runts: 0, Jabbers: 0, CRC: 0, Overruns: 0
    Errors: 0, Discards: 4465935
Transmit Statistics:
    152201901151 packets, 140197468288086 bytes
    Unicasts: 152061840806, Multicasts: 66910847, Broadcasts: 73149495
    Underruns: 0
    Errors: 3, Discards: 867
Rate info:
    Input 630.356608 Mbits/sec, 83801 packets/sec, 3.15% of line-rate
    Output 630.588892 Mbits/sec, 84832 packets/sec, 3.15% of line-rate
Time since last interface status change: 23:45:22

 

swr2500-1# show port-channel detail

LACP Aggregator: Po 3 (vLAG)
 Aggregator type: Standard
 Ignore-split is enabled
  Member rbridges:
    rbridge-id: 1 (1)
    rbridge-id: 2 (1)
  Actor System ID - 0x8000,01-e0-52-00-00-01
  Admin Key: 0003 - Oper Key 0003
  Receive link count: 2 - Transmit link count: 2
  Individual: 0 - Ready: 1
  Partner System ID - 0x8000,90-e2-ba-74-35-64
  Partner Oper Key 0146
 Member ports on rbridge-id 1:
   Link: Te 1/0/41 (0x118148200) sync: 1   *

 Member ports on rbridge-id 2:
   Link: Te 2/0/41 (0x218148200) sync: 1

 

swr2500-1# sh run int Po 3
interface Port-channel 3
 vlag ignore-split
 mtu 9000
 description LACP_20GE_bras05
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 7,9,1030-1033,1045,1053-1054,1057,1064,1225-1270,1278-1279
 no switchport trunk tag native-vlan
 switchport trunk native-vlan 960
 spanning-tree edgeport bpdu-guard
 spanning-tree shutdown
 no shutdown

 

swr2500-1# sh run int tengi 1/0/41
interface TenGigabitEthernet 1/0/41
 description bras05_10GE_ix0
 no fabric isl enable
 no fabric trunk enable
 channel-group 3 mode active type standard
 lacp timeout long
 no shutdown

swr2500-1# sh run int tengi 2/0/41
interface TenGigabitEthernet 2/0/41
 description bras05_10GE_ix1
 no fabric isl enable
 no fabric trunk enable
 channel-group 3 mode active type standard
 lacp timeout long
 no shutdown

 

The server runs FreeBSD 12.0-RELEASE.

/etc/rc.conf

ifconfig_ix0="up descr swr2500-1_10GE_1/0/41"
ifconfig_ix1="up descr swr2500-2_10GE_2/0/41"

ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 lagghash l2,l3,l4 descr lagg0"
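The `lagghash l2,l3,l4` setting tells lagg(4) to hash MAC, IP, and TCP/UDP port fields when choosing an egress port, so any single flow always stays on one link and balance only emerges across many flows. A toy Python sketch of that idea (CRC32 stands in for the real hash, and the field values are made up; this is not FreeBSD's actual implementation):

```python
# Toy model of an L2/L3/L4 flow hash as used by lagg(4)-style
# link aggregation. Illustrative only.
import zlib

def pick_port(src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port, n_links=2):
    """Hash the flow's MAC/IP/port fields and map it onto one member link."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}{src_port}{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Every packet of one flow lands on the same link...
flow_a = pick_port("aa:aa", "bb:bb", "10.0.0.1", "10.0.0.2", 40000, 80)
assert flow_a == pick_port("aa:aa", "bb:bb", "10.0.0.1", "10.0.0.2", 40000, 80)

# ...so the load only evens out across many distinct flows.
ports = [pick_port("aa:aa", "bb:bb", "10.0.0.1", "10.0.0.2", p, 80)
         for p in range(40000, 41000)]
print(ports.count(0), ports.count(1))  # roughly even split
```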

The same thing happens when a regular Cisco switch is connected instead of the server.

Userlevel 3

So if I understand correctly, the outgoing traffic toward the FreeBSD server is not balanced? Can you try adding this statement to the port-channel configuration:

 load-balance src-dst-ip-port

E.g. the 6740 has several options:

Possible completions:
[src-dst-ip-port]
dst-mac-vid Destination MAC address and VID based load balancing
src-dst-ip Source and Destination IP address based load balancing
src-dst-ip-mac-vid Source and Destination IP and MAC address and VID based load balancing
src-dst-ip-mac-vid-port Source and Destination IP, MAC address, VID and TCP/UDP port based load balancing
src-dst-ip-port Source and Destination IP and TCP/UDP port based load balancing
src-dst-mac-vid Source and Destination MAC address and VID based load balancing
src-mac-vid Source MAC address and VID based load balancing
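The practical difference between these policies can be illustrated with a toy hash model (CRC32 as a stand-in for the switch ASIC hash; the traffic mix is hypothetical): a MAC-based key collapses all flows from one server onto one link, while an IP/port-based key spreads them.

```python
# Toy comparison: coarse vs. fine load-balance hash keys on a 2-link LAG.
# The hash and traffic are illustrative, not the VDX implementation.
import zlib

def link(key, n_links=2):
    return zlib.crc32(key.encode()) % n_links

# One server MAC talking to many clients on many TCP ports:
flows = [("aa:bb:cc:dd:ee:ff", f"10.0.{i % 256}.{i // 256}", 1024 + i)
         for i in range(1000)]

# src-mac-vid style: every flow hashes the same key -> one link carries all.
by_mac = [link(mac) for mac, ip, port in flows]

# src-dst-ip-port style: per-flow keys -> flows spread over both links.
by_ip_port = [link(f"{ip}:{port}") for mac, ip, port in flows]

print(len(set(by_mac)), len(set(by_ip_port)))
```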

 


 

The problem is not only with this server: everything connected across the two switches has the same balancing problem. I tested all of the balancing methods, and the result was the same. If, without changing anything else, I connect the server with two links to a single switch (for example, to ports 1/0/41 and 1/0/42), traffic is balanced evenly.

Userlevel 3

Do you know where the outgoing traffic for this port-channel originates?

On the VDX 6720 I have often seen vLAG traffic biased toward one rbridge member. That is, traffic for a port-channel that enters, e.g., rbridge-id 1 will not be spread to member interfaces on a different rbridge as long as the local link is working. A typical sign of this is an ISL port carrying no traffic. I am not sure whether this was a design decision or a protocol limitation.
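That suspected local-preference behavior can be written down as a toy model (the selection logic is my assumption about the observed behavior, not the NOS implementation; port names are from this thread):

```python
# Toy model of vLAG egress selection with local-member preference:
# traffic entering an rbridge stays on that rbridge's member port
# while it is up, and crosses to the other rbridge only on failure.
def egress_port(ingress_rbridge, local_member_up):
    local = {1: "Te 1/0/41", 2: "Te 2/0/41"}
    remote = {1: "Te 2/0/41", 2: "Te 1/0/41"}
    return local[ingress_rbridge] if local_member_up else remote[ingress_rbridge]

print(egress_port(1, True))   # local member up: traffic stays on Te 1/0/41
print(egress_port(1, False))  # local member down: traffic fails over to Te 2/0/41
```

Under this model the hash policy is irrelevant: if nearly all traffic enters via rbridge-id 1, nearly all of it egresses on Te 1/0/41, which matches the rate counters in the original post.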


Yes, I know. About 90% of the outgoing traffic egresses on rbridge-id 1; the rest goes out via rbridge-id 2. If I disable port 1/0/41, all traffic moves to 2/0/41, and when I turn it back on, all traffic returns to port 1/0/41. The ISL between the switches is built from ports 1/0/59-60 and 2/0/59-60, and traffic across the ISL ports is distributed evenly.

The problem is still unresolved.

Hi,

How can I verify the load-balance policy in place for a port-channel?
