Weird issue connecting EnGenius switch to Extreme EXOS Switch

BigRic
New Contributor III

Good afternoon. I've had 2 or 3 experiences with this issue, so it's more than a one-off, but I still can't find an answer. When I connect an EnGenius switch (cloud managed, 8 port or 24 port in my examples) to an Extreme switch, the Extreme starts dropping traffic. For example, I have an 8-port Extreme on my desk with 6 devices connected. One is an uplink, a trunk carrying 3 VLANs. All works great until I plug a simple 8-port EnGenius switch into a single port with only the default VLAN. The second I do, everything on the switch drops. STP is off, and there's no loop (nothing else is connected to the EnGenius switch). From the EXOS side, I can't ping out anymore. I turned on debug logging and saw the port come up, then a bunch of DNS/NTP errors, but nothing else. ELRP shows no disabled ports. I've seen this with both the SonicWALL-branded version of the switch and the native EnGenius-branded version. Any thoughts? One odd thing: the Extreme side shows 2 MACs in the FDB on that port.

2 REPLIES

Gabriel_G
Extreme Employee

Hey BigRic,

Hard to say what could be going on.
I'd start by checking the usual suspects:

Layer 1:

show port <#> configuration #Make sure the link is coming up both ways at the right speed/duplex
show port <#> rxerrors #Corrupted frames
show port <#> txerrors #Queuing or half-duplex issues

Layer 2:

show fdb port <#> #Make sure MACs learned are appropriate and stable
show port <#> congestion #Link over-saturation errors

#Check for MAC Moves or rapid reprogramming
configure log filter defaultfilter add events fdb.macmove
configure log filter defaultfilter add events fdb.macadd
configure log filter defaultfilter add events fdb.macdel
enable log debug-mode
show log
#Delete the added events and disable log debug-mode afterward to clean up (example below)
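
A minimal cleanup sketch, assuming the same defaultfilter used above (these commands simply reverse the add/enable steps earlier in this post):

configure log filter defaultfilter delete events fdb.macmove
configure log filter defaultfilter delete events fdb.macadd
configure log filter defaultfilter delete events fdb.macdel
disable log debug-mode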


Layer 3:

show iparp #Are ARPs learned and stable?
debug hal show congestion #Shows CPU congestion, too much traffic to the switch CPU


If that's all good, you could use ACL counters to track a ping from one host to another and confirm whether EXOS is dropping client traffic. Example below:
create access-list Request "protocol icmp; source-address 10.1.1.3/32; destination-address 10.1.1.13/32;" "count Request;"
create access-list Reply "protocol icmp; source-address 10.1.1.13/32; destination-address 10.1.1.3/32;" "count Reply;"
configure access-list add <ACL name, i.e. Request/Reply> first port <#> [ingress | egress]
show access-list dynamic counters ingress
show access-list dynamic counters egress
clear access-list dynamic counters
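
To make that concrete, here is the template above filled in with illustrative values only (the 10.1.1.x addresses and port 1:4 are placeholders; substitute your own client/destination and the port under test). Request is counted on the client-facing port's ingress and Reply on that same port's egress, so you can see both whether the echo request arrives and whether the reply makes it back out:

configure access-list add Request first port 1:4 ingress
configure access-list add Reply first port 1:4 egress
show access-list dynamic counters ingress
show access-list dynamic counters egress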


If a client ping is making it to EXOS but EXOS is not egressing that ping towards the appropriate destination, I'd expect one of the common suspects to show something. If that's all clear, GTAC may be able to debug further if the issue is easily replicable.

Hope that helps!

BigRic
New Contributor III

Thanks for the suggestions. They led me to determine that the issue is actually occurring between the switch on my desk and my upstream core switch (I have another 8-port downstream of the one on my desk that I can still reach when the issue occurs). I connected to the core and added some filters (I'm not on site, so I can't console in right this second). I added InBPDU tracing because another engineer on our team mentioned having to filter BPDUs on a Cisco with the same EnGenius connected to it. Once the filter was updated, I caught "Processing STP BPDU of type 2" on the port linking back to my desk when the EnGenius is connected. With some more filters, I caught this (EnGenius switch connected, then disconnected). Any thoughts? Is this a simple STP problem, and if so, how does connecting a single port of a downstream switch have any impact?

01/04/2023 09:45:24.75 <Summ:STP.State.PortState> [s0:1:4] State Change : [E_FDWHILE_EXPIRED] LISTENING --> LEARNING (learn=1,forward=0)
01/04/2023 09:45:24.75 <Summ:STP.State.PortState> [s0:1:4] State Change : [E_FDWHILE_EXPIRED] rrwhile=0 (sync=0,reRoot=0)
01/04/2023 09:45:22.94 <Summ:STP.State.PortState> [s0:1:4] State Change : [E_PORT_BLOCKED] Ack received
01/04/2023 09:45:22.94 <Summ:STP.State.PortState> [s0:1:4] State Change : Processing VPST Ack
01/04/2023 09:45:22.92 <Summ:STP.State.PortState> [s0:1:4] State Change : [Sync] FORWARDING --> BLOCKING (learn=0,forward=0)
01/04/2023 09:45:22.92 <Summ:STP.State.PortState> [s0:1:4] State Change : [Record Dispute for CIST] :
01/04/2023 09:45:22.92 <Summ:STP.State.PortState> [s0:1:4] State Change : [Record Dispute] :
01/04/2023 09:45:22.92 <Verb:STP.InBPDU.Trace> Port=1:4: Processing STP BPDU of type 2
01/04/2023 09:45:22.92 <Verb:STP.InBPDU.Trace> Port=1:4: Received 802.1s frame
01/04/2023 09:45:22.92 <Verb:STP.InBPDU.Trace> Port=1:4: RECEIVED 64 bytes
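For reference, a filter along these lines should surface the same InBPDU tracing (a sketch only; I'm assuming the defaultfilter approach suggested above, the STP.InBPDU component name is taken from the log lines, and the severity keyword may differ by EXOS version):

configure log filter defaultfilter add events stp.inbpdu severity debug-verbose
enable log debug-mode
show log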
