I'm seeing some odd behavior on a handful of my switch stacks. This behavior has only occurred on stacks where a failed node has been replaced. Stacks that have never had a failed node do not display this behavior. That may totally be a red herring, but I wanted to point it out.
The behavior we are seeing is that all ports that did not previously have a display-string configured now carry a generic (and useless) name of "switchSNMPname_portnumber". We only have two network engineers with access to our CLI, and neither of us would have (intentionally) done something quite this dumb.
For example, here is the expected output from a good configuration:
And here is the output from a stack showing the behavior in question:
Slot-1 IDF01.2 # sh vlan eng
Ports: 20. (Number of active ports=7)
Tag: *1:53bg, *6:53g
As you can see, this makes it super annoying to look at things in the CLI, and we have no idea why it happened. I can go through and manually unconfigure the display strings (and have been) but we are just generally curious what caused this in the first place.
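In case it helps anyone else doing the same cleanup, something like this is what I mean by manually unconfiguring the display strings (the port range is just an example; I believe `unconfigure ports ... display-string` is the right syntax, but verify against your EXOS version):

```
# List current display strings for a slot (output filtering needs a
# reasonably recent EXOS; otherwise just page through the full output)
show ports 1:1-48 information detail | include "Display String"

# Clear the bogus display-string on the whole range
unconfigure ports 1:1-48 display-string
```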
This is exactly what is happening. All otherwise unlabeled ports are configured with a port alias in Netsight, so I assume that even though I manually cleaned up the switches by unconfiguring their display strings, the problem will recur until I correct the alias applied in Netsight. Currently trying to figure out if there is a bulk way to do that, though I'm still confused about how we configured an option we didn't even know existed.
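For anyone who wants to audit which ports picked up the bogus alias before fixing things in Netsight, here's a rough sketch that scans an `snmpwalk` of IF-MIB::ifAlias for the "switchSNMPname_portnumber" pattern. The sample walk output and the switch name are made up for illustration, and I'm assuming the Netsight port alias lands in ifAlias:

```python
import re

# Hypothetical snmpwalk output (IF-MIB::ifAlias) -- not from my switches.
SAMPLE_WALK = """\
IF-MIB::ifAlias.1001 = STRING: IDF01_1
IF-MIB::ifAlias.1002 = STRING: uplink-to-core
IF-MIB::ifAlias.1003 = STRING: IDF01_3
IF-MIB::ifAlias.1004 = STRING:
"""

def find_auto_aliases(walk_text, switch_name):
    """Return (ifIndex, alias) pairs whose alias matches <switch_name>_<number>."""
    auto = re.compile(re.escape(switch_name) + r"_\d+$")
    hits = []
    for line in walk_text.splitlines():
        m = re.match(r"IF-MIB::ifAlias\.(\d+) = STRING: ?(.*)", line)
        if m and auto.match(m.group(2).strip()):
            hits.append((int(m.group(1)), m.group(2).strip()))
    return hits

print(find_auto_aliases(SAMPLE_WALK, "IDF01"))
# -> [(1001, 'IDF01_1'), (1003, 'IDF01_3')]
```

Pipe a real walk into it and you get a quick list of which ifIndexes still need their alias corrected.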
Scratch that last bit, I figured out how to select multiple rows. It looks like this has been resolved for now, other than not knowing where the original aliases came from.
We are running 18.104.22.168 patch 1-9 currently, with a planned upgrade to 22.214.171.124 this weekend. All X460-G1s. We do have Netsight configured, but currently (in theory) it is just running configuration backups and some SNMP monitoring. I'm looking at that now, per David's comment, to see if maybe we have something in Netsight that is pushing this out to the switches. That said, a) I have no idea what exactly I'm looking for, and b) I'm curious why I'm not seeing this behavior across all our stacks, because we aren't doing anything to separate them into discrete management groups in Netsight.