where is the port buffer "default" defined?

  • Problem
  • Updated 2 months ago
  • Solved
We are currently gathering data from edge stacks with poor performance and are doing all we can to look at memory optimisation. I have a situation, described below, which is troubling me.

stack 1:
x450e-48P (8 slots) SW 15.3.4.6
config detail has:
configure ports 1:25 shared-packet-buffer default
show ports buffer shows:
Port 1:25  Max Shared Buffer Usage: 87680 bytes (100%)



stack 2:
x450e-48P (8 slots) SW 15.3.4.6
config detail has:
configure ports 1:25 shared-packet-buffer default
show ports buffer shows:
Port 1:25  Max Shared Buffer Usage: 21888 bytes (25%)



My question is this:

How can a parameter specified in the config as "default" differ between two identical stacks?

Is this default defined anywhere I can access, or can I at least see the "defaults" somewhere?

At present we are seeing differing performance as a result of this issue, and it appears as if a "defined default", which somehow varies between otherwise identical configs, is responsible.

Any light shed on it would be much appreciated :-)

Rich



Posted 2 months ago


EtherMAN, Embassador

Rich, I think your key here is that stack 1's buffer is 100% in use at times, while stack 2 never goes over 25%. I would bet you have congested interfaces on stack 1, so you are overrunning the buffer and dropping frames. The trick is to figure out what is causing your peak usage.
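For what it's worth, the two readings in the original post are consistent with this explanation. A quick sketch, assuming the percentage shown by `show ports buffer` is peak usage relative to the port's allocated share:

```python
# Readings from the post: stack 1 peaked at 87680 bytes (100%),
# stack 2 at 21888 bytes (25%).
stack1_bytes, stack1_pct = 87680, 100
stack2_bytes, stack2_pct = 21888, 25

# Implied total allocation per port on each stack (bytes / fraction used)
alloc1 = stack1_bytes / (stack1_pct / 100)  # 87680.0
alloc2 = stack2_bytes / (stack2_pct / 100)  # 87552.0

# The small gap is just display rounding: 21888 / 87680 = 0.2496...,
# which the switch shows as 25%.
print(round(100 * stack2_bytes / stack1_bytes))  # → 25
```

Read this way, both stacks have the same "default" allocation (about 87680 bytes); only the peak usage differs, which points at congestion on stack 1 rather than a different default.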
Rich
Hi EtherMAN. Thanks for your reply. I agree that we are overrunning buffers and seeing poor traffic performance on user ports. If I explicitly define 100% of the shared buffer that each port is able to use, the situation improves. I know this is what it should be.

What I can't get my head around is this: if I do not explicitly define it, it remains at "default", and the config uses the word "default", but the amount allocated to each port from the shared pool appears to differ between two identical stacks, almost as if the "default" defined somewhere deeper in the code is different. That is what I need to ascertain.

Going across my estate and explicitly defining the buffer available to each port would, I know, fix the issue. But the "default" value should at least be the same everywhere, so I can decide which switch types need their "default" overridden and which I can leave alone because "default" already means 100%. I hope I am explaining this OK, and I appreciate your input :-)
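A minimal sketch of the explicit override described above, assuming the command accepts a percentage value in place of the "default" keyword (verify the exact syntax on your software release):

```
# Explicitly allow port 1:25 to use the full shared buffer pool
configure ports 1:25 shared-packet-buffer 100

# Confirm the allocation and watch peak usage afterwards
show ports buffer
```

If the default really were the same everywhere, this override would only matter on stacks where congestion is driving usage toward the allocation ceiling.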