Link Problems, Flapping, Flood Rate Limit Activated, clear eee stats Feature unavailable?

Anonymous
Not applicable
I have created an ACL that is meant to block mDNS multicast addresses, plus an additional multicast address used by Microsoft.

I have applied the ACL to every port on ingress so that I can see hits per port.

The problem is that I am not seeing the counters increment, and even without a packet trace I am confident this traffic is on the network. I know this because we are trying to resolve an issue with a stack of X440s that keeps rebooting because the CPU appears to be overwhelmed with packets from these addresses, as diagnosed by GTAC.

Policies at Policy Server:
Policy: Block_MDNS_Ingress
entry Block_1_MDNS_Ingress {
    if match all {
        source-address 224.0.0.251/32 ;
    }
    then {
        deny ;
        packet-count Block_251_MDNS_Ingress ;
    }
}
entry Block_2_MDNS_Ingress {
    if match all {
        source-address 224.0.0.252/32 ;
    }
    then {
        deny ;
        packet-count Block_252_MDNS_Ingress ;
    }
}
entry Block_3_MDNS_Ingress {
    if match all {
        source-address 239.255.255.250/32 ;
    }
    then {
        deny ;
        packet-count Block_250_MDNS_Ingress ;
    }
}
Number of clients bound to policy: 1
Client: acl bound once
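[Editor's note, hedged:] One thing worth checking in the entries above: mDNS (224.0.0.251), LLMNR (224.0.0.252, the Microsoft address) and SSDP (239.255.255.250) use these multicast groups as IP *destination* addresses. A multicast group never appears as an IP source, so entries matching on source-address will never hit, which would explain counters staying at zero. A sketch of the first entry rewritten with destination-address (same policy and counter names; the other two entries would change the same way):

entry Block_1_MDNS_Ingress {
    if match all {
        destination-address 224.0.0.251/32 ;
    }
    then {
        deny ;
        packet-count Block_251_MDNS_Ingress ;
    }
}

After editing the policy file, `refresh policy Block_MDNS_Ingress` reloads it on the switch without unbinding it from the ports.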

System Type: X440-48p (Stack)

SysHealth check: Enabled (Normal)
Recovery Mode: All
System Watchdog: Enabled

Current Time: Sat Sep 12 16:28:48 2015
Timezone: [Auto DST Disabled] GMT Offset: 0 minutes, name is UTC.
Boot Time: Fri Aug 28 00:37:38 2015
Boot Count: 135
Next Reboot: None scheduled
System UpTime: 15 days 15 hours 51 minutes 9 seconds

Slot:             Slot-1 *                 Slot-2
                  ------------------------ ------------------------
Current State:    MASTER                   BACKUP (In Sync)

Image Selected:   secondary                secondary
Image Booted:     secondary                secondary
Primary ver:      15.3.1.4                 15.3.1.4
Secondary ver:    15.5.4.2                 15.5.4.2
                  patch1-5                 patch1-5

Config Selected: primary.cfg
Config Booted: Factory Default

primary.cfg Created by ExtremeXOS version 15.5.4.2
2246563 bytes saved on Fri Sep 11 07:54:18 2015

Many thanks in advance.

14 REPLIES

Anonymous
Not applicable
That's exactly what I have been looking for. Thanks, Prashanth!

Prashanth_KG
Extreme Employee
Hi Martin,

You might be interested in the article below on performing a packet capture at the port level from the CLI.

https://gtacknowledge.extremenetworks.com/articles/How_To/How-to-perform-a-local-packet-capture-on-a...

If you want to stop the capture at any time, just hit CTRL+C. And make sure to limit the number of packets captured with the cmd-args "-c" option, since the switch is in production.

Hope this helps!

Anonymous
Not applicable
Got a little bit further with this, so I thought I would share an update.

When looking at the qosmonitor congestion output, I noticed that the drops were all on QP1, whereas the QoS-marked voice traffic uses QP6.

Currently port 1:5 has the voice VLAN configured tagged and the data VLAN untagged. Based on this I removed the data VLAN from the port, and the packets have now stopped dropping.

So I guess my next step is to get a packet trace of the traffic on the data VLAN, unless anyone knows how I can run a tcpdump on the switch port directly; that would be really useful.
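[Editor's note, hedged:] In the absence of an on-switch tcpdump, one common alternative is to mirror the port to a spare port with a laptop running a sniffer attached. A minimal sketch for EXOS 15.x, assuming 1:10 is a free port used as the monitor port:

# Send mirrored traffic out of port 1:10 (assumed free)
enable mirroring to port 1:10
# Mirror both directions of the suspect port
configure mirroring add port 1:5 ingress-and-egress

When done, `disable mirroring` returns the monitor port to normal use. Note that mirroring capabilities (e.g. egress mirroring) vary by platform, so check the X440 documentation first.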

I wonder if I'm simply seeing contention from a 1Gb PC being connected through a 100Mb phone?

Thanks.

Stack 1.1 # show port 1:5 congestion no-refresh
Port Congestion Monitor
Port       Link      Packet
           State     Drop
================================================================================
1:5        A         425053
================================================================================

Stack 1.2 # show port 1:5 qosmonitor no-refresh
Port Qos Monitor
Port       QP1      QP2    QP3    QP4    QP5    QP6    QP7    QP8
           Pkt      Pkt    Pkt    Pkt    Pkt    Pkt    Pkt    Pkt
           Xmts     Xmts   Xmts   Xmts   Xmts   Xmts   Xmts   Xmts
===============================================================================
1:5        1160455  0      0      0      0      7069   0      170

Stack 1.3 # show port 1:5 qosmonitor congestion no-refresh
Port Qos Monitor
Port       QP1      QP2    QP3    QP4    QP5    QP6    QP7    QP8
           Pkt      Pkt    Pkt    Pkt    Pkt    Pkt    Pkt    Pkt
           Cong     Cong   Cong   Cong   Cong   Cong   Cong   Cong
===============================================================================
1:5        425122   0      0      0      0      0      0      0

Anonymous
Not applicable
Thanks for the info; I will certainly look into that. Great advice.

My thought, though, is that since I turned off flow control the number of ports showing packet drops has grown significantly, so perhaps it was these ports that were sending the pauses?

With that in mind, is there any way (aside from attaching a sniffer) that I could see which packets are being dropped on the ports? Some examples could be:

  • debug command that outputs drops
  • turn on a filter that outputs drops
  • tcpdump on a single port
  • write an ACL for all traffic on port and log

I'd also be interested to know whether the same can be done to see what is causing congestion on the switch fabric and CPU.
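[Editor's note, hedged:] For the last bullet in the list above, a sketch of a catch-all policy that counts and logs matched packets. The policy, entry and counter names are hypothetical; an empty match clause matches all traffic. Be careful: the `log` action modifier punts matching packets to the CPU for syslog, so on a busy production port consider `count` alone first.

entry Count_All_Ingress {
    if match all {
    }
    then {
        permit ;
        count All_Ingress_Pkts ;
        log ;
    }
}

Bind it with `configure access-list Count_All_Ingress ports 1:5 ingress` and watch the counter with `show access-list counter`. Note this counts traffic the ACL sees on ingress; it will not directly show packets dropped by egress congestion.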

At the moment both ends are set to auto-negotiate, and they are both negotiating 100Mb full duplex. I want to try fixing the speed and duplex at both ends, but I am currently looking at the issue remotely.

There seems to be no logical reason packets should be dropped: there are no other errors (such as CRC), there is plenty of bandwidth, and there is no other device attached to the phone. I could increase the buffer size for QP6 (it is currently at the default), but I really shouldn't be getting any contention since the traffic is so low?
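[Editor's note, hedged:] If it does come to adjusting the QP6 buffer, a sketch of the relevant commands; the port and percentage value here are illustrative, and exact syntax and ranges should be checked against the EXOS release notes for 15.5:

# Hypothetical example: adjust QP6's maximum buffer share on port 1:5
configure qosprofile QP6 maxbuffer 100 ports 1:5
# Verify per-port QoS profile settings
show qosprofile ports 1:5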

If I could see what's being dropped, then that might give me a clue as to what's happening.

I'd also appreciate knowing whether there is any equivalent in EXOS to configuring link-flap detection?

Many thanks.
