LACP Problems

  • 10 October 2017
  • 2 replies
  • 239 views

Hello, I have two older stacked C3G124-24 switches. I configured LACP and connected them to Linux (Fedora) machines using teamd. From time to time I experience connectivity problems lasting a few seconds. In the logs I found messages like this:

node2 10.10.2017 07:26:41 kernel kern warn br1: received packet on team0.10 with own address as source address (addr:00:26:2d:09:39:76, vlan:0)
node2 10.10.2017 07:26:41 kernel kern warn onebr.101: received packet on team0.101 with own address as source address (addr:00:26:2d:09:39:76, vlan:0)
node2 10.10.2017 07:22:19 kernel kern warn br4: received packet on team0.40 with own address as source address (addr:00:26:2d:09:39:76, vlan:0)

These messages repeat continuously every few minutes. So I investigated on the switch and, using the 'show mac' command, found that the switch sees the majority of MAC addresses on interface ge.2.3 instead of lag.0.3:

UPLINKSTACK(su)->show mac port ge.1.3
No entries found.
UPLINKSTACK(su)->show mac port ge.2.3
MAC Address FID Port Type
----------------- ---- ------------- --------
00-26-2D-09-39-76 1 ge.2.3 Learned
00-26-2D-09-39-77 1 ge.2.3 Learned
00-26-2D-09-39-76 10 ge.2.3 Learned
00-26-2D-09-39-76 20 ge.2.3 Learned
02-00-0A-00-0A-05 20 ge.2.3 Learned
00-26-2D-09-39-76 30 ge.2.3 Learned
00-26-2D-09-39-76 40 ge.2.3 Learned
00-26-2D-09-39-76 101 ge.2.3 Learned
02-00-B9-AE-A8-04 101 ge.2.3 Learned
02-00-B9-AE-A8-06 101 ge.2.3 Learned
02-00-B9-AE-A8-07 101 ge.2.3 Learned
02-00-B9-AE-A8-08 101 ge.2.3 Learned
02-00-B9-AE-A8-18 101 ge.2.3 Learned
02-00-B9-AE-A8-20 101 ge.2.3 Learned
02-00-B9-AE-A8-2F 101 ge.2.3 Learned
02-00-B9-AE-AA-05 103 ge.2.3 Learned
02-00-B9-AE-AA-07 103 ge.2.3 Learned
02-00-B9-AE-AA-08 103 ge.2.3 Learned
02-00-B9-AE-AA-0C 103 ge.2.3 Learned
02-00-B9-AE-AA-21 103 ge.2.3 Learned
00-26-2D-09-39-76 104 ge.2.3 Learned
02-00-85-CA-F2-5A 104 ge.2.3 Learned
UPLINKSTACK(su)->show mac port lag.0.3
No entries found.
UPLINKSTACK(su)->show lacp lag.0.3
Global Link Aggregation state: enabled
Single Port LAGs: disabled
Aggregator: lag.0.3
Actor Partner
System Identifier: 00:1F:45:78:0E:00 00:26:2D:09:39:76
System Priority: 1000 2000
Admin Key: 300
Oper Key: 300 300
Attached Ports: ge.1.3
ge.2.3
UPLINKSTACK(su)->show port lacp port ge.1.3 status detail
Global Link Aggregation state : Enabled
Port Instance: ge.1.3 Port enable state: Enabled
ActorPort: 3 PartnerAdminPort: 1
ActorSystemPriority: 1000 PartnerOperPort: 3
ActorPortPriority: 32768 PartnerAdminSystemPriority: 32768
ActorAdminKey: 300 PartnerOperSystemPriority: 2000
ActorOperKey: 300 PartnerAdminPortPriority: 32768
ActorAdminState: -----GSA PartnerOperPortPriority: 300
ActorOperState: --DCSGSA PartnerAdminKey: 1
ActorSystemID: 00:1F:45:78:0E:00 PartnerOperKey: 300
SelectedAggID: lag.0.3 PartnerAdminState: -----GSA
AttachedAggID: lag.0.3 PartnerOperState: --DCSGSA
MuxState: Coll_Dist PartnerAdminSystemID: 00:00:00:00:00:00
DebugRxState: Current PartnerOperSystemID: 00:26:2D:09:39:76
UPLINKSTACK(su)->show port lacp port ge.2.3 status detail
Global Link Aggregation state : Enabled
Port Instance: ge.2.3 Port enable state: Enabled
ActorPort: 55 PartnerAdminPort: 1
ActorSystemPriority: 1000 PartnerOperPort: 4
ActorPortPriority: 32768 PartnerAdminSystemPriority: 32768
ActorAdminKey: 300 PartnerOperSystemPriority: 2000
ActorOperKey: 300 PartnerAdminPortPriority: 32768
ActorAdminState: -----GSA PartnerOperPortPriority: 300
ActorOperState: --DCSGSA PartnerAdminKey: 1
ActorSystemID: 00:1F:45:78:0E:00 PartnerOperKey: 300
SelectedAggID: lag.0.3 PartnerAdminState: -----GSA
AttachedAggID: lag.0.3 PartnerOperState: --DCSGSA
MuxState: Coll_Dist PartnerAdminSystemID: 00:00:00:00:00:00
DebugRxState: Current PartnerOperSystemID: 00:26:2D:09:39:76
I also tried changing the tx_hash policy and enabling/disabling active rebalancing.
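For context, the tx_hash policy only decides which member link of the LAG carries each outgoing frame. A minimal illustrative sketch of the idea (this is not teamd's actual hash function, and the MAC/IP values are hypothetical):

```python
import zlib

def pick_port(fields, n_ports=2):
    """Hash the chosen header fields and pick one LAG member port.

    Illustrative only: teamd uses its own hash internally; this just
    shows why the policy choice changes traffic distribution.
    """
    key = "|".join(str(f) for f in fields).encode()
    return zlib.crc32(key) % n_ports

# "eth" policy: hash on source/destination MAC only, so every frame
# between the same pair of hosts always uses the same member link.
eth_flow = ["00:26:2d:09:39:76", "00:1f:45:78:0e:00"]

# "l3,l4" policy: different TCP connections between the same two hosts
# can land on different member links.
flow_a = ["10.0.10.5", "10.0.10.1", 45000, 80]
flow_b = ["10.0.10.5", "10.0.10.1", 45001, 80]

print(pick_port(eth_flow))                    # always the same port for this MAC pair
print(pick_port(flow_a), pick_port(flow_b))   # may differ per connection
```

With only "eth" in tx_hash, all traffic between two given hosts stays on one link, which is less balanced but avoids per-flow reordering across links.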

Teamd config:

[root@node2 ~]# teamdctl team0 config dump
{
"device": "team0",
"link_watch": {
"name": "ethtool"
},
"ports": {
"ens1f0": {
"lacp_key": 300,
"lacp_prio": 300
},
"ens1f1": {
"lacp_key": 300,
"lacp_prio": 300
}
},
"runner": {
"active": true,
"fast_rate": true,
"name": "lacp",
"sys_prio": 2000,
"tx_hash": [
"l3",
"l4",
"eth"
]
}
}
Teamd status:

[root@node2 ~]# teamdctl team0 state dump
{
"ports": {
"ens1f0": {
"ifinfo": {
"dev_addr": "00:26:2d:09:39:76",
"dev_addr_len": 6,
"ifindex": 3,
"ifname": "ens1f0"
},
"link": {
"duplex": "full",
"speed": 1000,
"up": true
},
"link_watches": {
"list": {
"link_watch_0": {
"delay_down": 0,
"delay_up": 0,
"down_count": 0,
"name": "ethtool",
"up": true
}
},
"up": true
},
"runner": {
"actor_lacpdu_info": {
"key": 300,
"port": 3,
"port_priority": 300,
"state": 63,
"system": "00:26:2d:09:39:76",
"system_priority": 2000
},
"aggregator": {
"id": 3,
"selected": true
},
"key": 300,
"partner_lacpdu_info": {
"key": 300,
"port": 3,
"port_priority": 32768,
"state": 63,
"system": "00:1f:45:78:0e:00",
"system_priority": 1000
},
"prio": 300,
"selected": true,
"state": "current"
}
},
"ens1f1": {
"ifinfo": {
"dev_addr": "00:26:2d:09:39:76",
"dev_addr_len": 6,
"ifindex": 4,
"ifname": "ens1f1"
},
"link": {
"duplex": "full",
"speed": 1000,
"up": true
},
"link_watches": {
"list": {
"link_watch_0": {
"delay_down": 0,
"delay_up": 0,
"down_count": 0,
"name": "ethtool",
"up": true
}
},
"up": true
},
"runner": {
"actor_lacpdu_info": {
"key": 300,
"port": 4,
"port_priority": 300,
"state": 63,
"system": "00:26:2d:09:39:76",
"system_priority": 2000
},
"aggregator": {
"id": 3,
"selected": true
},
"key": 300,
"partner_lacpdu_info": {
"key": 300,
"port": 55,
"port_priority": 32768,
"state": 63,
"system": "00:1f:45:78:0e:00",
"system_priority": 1000
},
"prio": 300,
"selected": true,
"state": "current"
}
}
},
"runner": {
"active": true,
"fast_rate": true,
"select_policy": "lacp_prio",
"sys_prio": 2000
},
"setup": {
"daemonized": false,
"dbus_enabled": true,
"debug_level": 0,
"kernel_team_mode_name": "loadbalance",
"pid": 2050,
"pid_file": "/var/run/teamd/team0.pid",
"runner_name": "lacp",
"zmq_enabled": false
},
"team_device": {
"ifinfo": {
"dev_addr": "00:26:2d:09:39:76",
"dev_addr_len": 6,
"ifindex": 5,
"ifname": "team0"
}
}
}
Any ideas? Thank you for any helpful reply. Best regards, Feldsam

2 replies

Try enabling singleportlag. This will keep the link aggregation up in the event that one link goes down, so it won't fall back to the individual ge ports.
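On SecureStack firmware the relevant commands should look roughly like this (syntax recalled from memory and may vary by firmware release, so verify against the configuration guide for your version):

```
UPLINKSTACK(su)->set lacp singleportlag enable
UPLINKSTACK(su)->show lacp lag.0.3
```

After enabling it, 'show lacp' should report "Single Port LAGs: enabled" instead of "disabled" as in the output above.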
Hello, it looks like single port LAG was causing the problems. I disabled it, and in the Linux teaming config I left only the eth tx_hash. For now, it is working....
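For reference, the working runner section would then reduce to something like this (a sketch of the change described above, not the poster's verbatim config):

```json
"runner": {
    "active": true,
    "fast_rate": true,
    "name": "lacp",
    "sys_prio": 2000,
    "tx_hash": ["eth"]
}
```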
