02-10-2020 12:25 PM
I have an X460-G2 stack to which I recently added a 4th switch. Ever since I did that, I’ve been experiencing an SNMP issue.
I use SNMP to monitor the switch and a few ports. At least once a day my monitoring system will report some of my monitored ports are down (these can include APs or uplinks to the core). It’s not always the same ports. Additionally, XMC reports SNMP communication issues.
As near as I can tell, there isn’t an actual problem on the switch. None of my users have complained, nor am I dropping ICMP. This appears to be isolated specifically to SNMP monitoring, and it didn’t begin until I added the 4th switch to the stack (still running the same firmware version).
Any thoughts?
02-20-2020 07:04 PM
I ran out of ideas, so I decided to try a reboot. I rebooted the stack and I have been error free for 7 days.
I don’t know what happened, but my issue is solved.
TP
02-10-2020 01:53 PM
I had a similar experience with an x460-48p stack (not G2) where we replaced a slot member. Only *then* did we start receiving SNMP timeout errors.
The only difference I notice between a stack that doesn’t get the error and this one (which was NOT alerting prior to replacing a stack member) is this:
Problem stack shows a value: Timeout : 15 Seconds, Retries : 3
The non-problem stacks show no value: Timeout : -, Retries : -
I haven’t been able to figure out how to tell the problem stack to not have a timeout value.
02-10-2020 12:39 PM
Frank, I am under the impression that SNMP is essentially giving up. And at a core networking level, I’m good with that. I’d rather it drop SNMP to ensure it can keep processing traffic for my users instead.
However, I have many stacks; in fact, I have two other stacks, one with (6) switches and the other with (7). They’re the identical models. If those two stacks can handle more switches, I have to assume this one can handle (4) in a stack.
The only difference is the other two stacks are connected at 10 Gb while the one in question is only operating at 1 Gb. However, we’re just sipping the 1 Gb link and in no way saturating it.
Thanks.
02-10-2020 12:32 PM
Just a hunch, because I’m dealing with something similar. Now that the SNMP walk/bulkwalk is going over an additional ~50 ports, is it possible that your poller is timing out? In my case, it appears that the port status is actually “NULL” (or no-data), which then gets compared to “UP” (or DOWN, whatever the state used to be), and I get alerted that a port is down (or up), just because the state changed.
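The false-alert pattern described above can be sketched in pure Python. This is a hypothetical illustration, not any real monitoring system’s logic: the idea is that a timed-out SNMP poll yields "no data" (`None`), which should be treated as *unknown* rather than compared against the last known state, and a state change should be confirmed by consecutive polls before alerting. The function name `next_state` and the 2-poll confirmation threshold are assumptions for the example.

```python
# Hypothetical sketch of the false-alert logic: a naive poller compares
# this poll's result to the last known state and alerts on any mismatch,
# so a single SNMP timeout (None) against "up" fires a bogus "down" alert.
# Treating None as unknown and requiring 2 consecutive matching polls
# before alerting avoids that.

def next_state(last_confirmed, polled, pending):
    """Return (confirmed_state, pending_count, alert_or_None).

    last_confirmed: last state we alerted on ("up" or "down")
    polled: result of this poll ("up", "down", or None on SNMP timeout)
    pending: count of consecutive polls disagreeing with last_confirmed
    """
    if polled is None:               # SNMP timed out: no data, not "down"
        return last_confirmed, 0, None
    if polled == last_confirmed:     # state unchanged, reset pending count
        return last_confirmed, 0, None
    if pending + 1 >= 2:             # change confirmed by 2 polls in a row
        return polled, 0, f"port changed to {polled}"
    return last_confirmed, pending + 1, None  # wait for confirmation


state, pending = "up", 0
alerts = []
# One missed poll (None) mid-stream produces no alert; a real two-poll
# "down" does.
for polled in ["up", None, "up", "down", "down"]:
    state, pending, alert = next_state(state, polled, pending)
    if alert:
        alerts.append(alert)
```

With this scheme, the extra ~50 ports making the walk slower would at worst delay alerts by one poll cycle instead of flapping them.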
And if you’re like me, with three different monitoring systems, all SNMP polling more-or-less at the same time, there’s also the chance of the switch/stack’s SNMP replies bailing out. But that, too, is an unverified hunch.
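One way to reduce the chance of several pollers hammering the stack at once is to stagger their schedules. A minimal sketch, assuming three pollers on a shared 300-second interval (both numbers are made up for illustration): give each poller an even offset plus a little random jitter so their SNMP walks never line up.

```python
import random

# Hypothetical scheduling sketch: spread N pollers' start times so their
# SNMP walks don't hit the switch stack simultaneously. Each poller keeps
# the same interval but gets a fixed offset plus small random jitter.
POLL_INTERVAL = 300.0  # seconds; assumed value, adjust to your pollers

def poll_times(n_pollers, n_cycles, max_jitter=5.0, seed=None):
    """Return one list of scheduled poll times (seconds) per poller."""
    rng = random.Random(seed)
    schedule = []
    for i in range(n_pollers):
        # Even stagger across the interval: 0 s, 100 s, 200 s for 3 pollers
        offset = i * POLL_INTERVAL / n_pollers
        schedule.append([
            cycle * POLL_INTERVAL + offset + rng.uniform(0, max_jitter)
            for cycle in range(n_cycles)
        ])
    return schedule
```

With three pollers this keeps roughly 100 seconds between any two walks in a cycle, so the switch’s SNMP agent only ever services one bulkwalk at a time.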