08-24-2018 09:59 PM
A third party is running scheduled jobs to measure bandwidth/latency/jitter against various hops in the network (VLAN interface on the core switch, next hop to the edge firewall, upstream ISP gateway, etc.).
They're reporting an upper limit that's nowhere near the wirespeed of the segment between the x450e access switch and the x450e core switch (2 x 1Gb fibre LACP).
Physical topology:
PC (via IP phone, 100Mb "computer" port, switchport 4) and 3rd-party test machine (1GbE, switchport 17) -------------- Access switch (x450e v12.x) ---- 2 x 1Gb fibre (LACP) --- core switch (x450e v12.x)
L3 info:
PC (192.168.30.42/GW 192.168.30.1)
Core switch VLAN interface: 192.168.30.1/24
Core switch VLAN interface (default route): 10.1.0.1/29
DC: 192.168.20.10/24 (VM on ToR switch, connected to core via copper LACP)
Firewall: 10.1.0.2/29
I've checked the following on all access and trunk ports (roughly the commands sketched below), and there's no sign of packet loss or of bandwidth consumption approaching anything remotely close to saturation:
qosprofile: only QP1 and QP8 seeing packets; QP6 configured for VoIP/DSCP 46, minbw 2%, maxbw 4%
anomaly: 0
utilization bandwidth: ~0.5-1%
txerrors: 0
rxerrors: 0
congestion: clean
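For reference, the checks above map to roughly these EXOS commands (typed from memory, so 12.x syntax may differ slightly; port 17 is just the test machine's port as an example):

    show ports 17 utilization      # spacebar cycles bandwidth/packets/bytes views
    show ports 17 rxerrors
    show ports 17 txerrors
    show ports 17 congestion
    show ports 17 anomaly
    show ports 17 qosmonitor       # which QPs are actually seeing packets
    show qosprofile                # QP6 minbw/maxbw config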
They're testing by sending ICMP packets to various hops, and the first hop (the VLAN interface that's the test machine's default gateway) is where they're seeing ~40Mbps capacity and ~25Mbps utilization.
I've run similar tests to pingable nodes with pingb, an open-source tool that estimates bandwidth from packet size vs. latency via ICMP (packet sizes vary, according to my Wireshark capture, but the results are consistent/repeatable).
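For anyone unfamiliar with the size/latency approach: as I understand it, it boils down to probing with two payload sizes and dividing the extra bits by the extra round-trip time. A rough equivalent with plain ping (sizes/counts purely illustrative, not pingb's actual values):

    ping -c 20 -s 56 192.168.30.1      # small payload -> baseline RTT
    ping -c 20 -s 1400 192.168.30.1    # large payload -> size-dependent RTT
    # estimated bandwidth ~= (1400 - 56) * 8 * 2 / (avg_rtt_large - avg_rtt_small)
    # (x2 because the ICMP echo payload crosses the link in both directions)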
Against core switch IP addresses/interfaces:
192.168.30.1 ~25-50Mbps
10.1.0.1 ~25-50Mbps
Against non-switch IP addresses/interfaces (directly or indirectly connected to the core):
10.1.0.2 (edge firewall, next hop, directly patched into core switch) ~86Mbps*
192.168.20.10 ~86Mbps
*86Mbps seems to be the top end of pingb for 100Mb connections: on two test nodes connected to a single unmanaged 10/100 Fast Ethernet switch, I got the same result.
I also ran iperf tests with UDP and TCP to another node off the core switch and saw speeds around ~95Mbps, which is pretty close to wirespeed.
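The iperf runs were along these lines (iperf2 syntax; the node IP is illustrative, and 100M is just the nominal rate of the PC's 100Mb port):

    iperf -s -u                              # UDP server on the far node
    iperf -c 192.168.20.10 -u -b 100M -t 30  # UDP client from the test machine
    iperf -c 192.168.20.10 -t 30             # TCP client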
I ran show configuration | i icmp, which returned no results (I wasn't sure whether any ICMP/flood protection was enabled, etc.).
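I also believe EXOS has a DoS-protect feature that can rate-limit traffic punted to the CPU; I haven't confirmed it's a factor here, but something like this should show whether it's enabled:

    show dos-protect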
I did find this article, which seems to suggest that ICMP responses to/from the switch are handled at a lower priority and that latency should be used as a measurement instead:
https://extremeportal.force.com/ExtrArticleDetail?an=000091286
I ran top and CPU usage was around 3-4% (.\fdb was nearly always the top process), so I don't really know how to interpret the "low priority" here, and I'm not sure if there's any debug/logging I can enable that would say, "hey, I limited ICMP".
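(If it matters, the CPU numbers came from the shell's top; I understand EXOS also has its own per-process view, something like:)

    show cpu-monitoring    # per-process CPU utilization over recent intervals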
Any insight would be greatly appreciated.