Summit: icmp-based bandwidth testing

gravyface
New Contributor

A 3rd-party is running scheduled jobs to measure bandwidth/latency/jitter against various hops in the network (VLAN interface on core switch, next hop to edge firewall, upstream ISP gateway, etc.).

They're reporting an upper limit that's nowhere near the wirespeed of the segment between the x450e access switch and the x450e core switch (2 x 1Gb fibre LACP).

Physical topology:
PC (via IP phone, 100Mb "computer" port, switchport 4) and 3rd-party test machine (1GbE, switchport 17) -------------- Access switch (x450e v12.x) ---- 2 x 1Gb fibre (LACP) --- core switch (x450e v12.x)

L3 info:
PC (192.168.30.42/GW 192.168.30.1)
Core switch VLAN interface: 192.168.30.1/24
Core switch VLAN interface (default route): 10.1.0.1/29
DC: 192.168.20.10/24 (VM on ToR switch, connected to core via copper LACP)
Firewall: 10.1.0.2/29

I've checked qosprofile (only QP1 and QP8 are seeing packets; QP6 is configured for VoIP/DSCP 46, minbw 2%, maxbw 4%), anomaly (0), bandwidth utilization (~0.5-1%), txerrors (0), rxerrors (0), and congestion on all access and trunk ports, and there's no sign of packet loss or of bandwidth consumption coming anywhere close to saturation.

They're testing by sending ICMP packets to various hops, and the first hop, the VLAN interface that is the test machine's default gateway, is where they're seeing ~40Mbps capacity and ~25Mbps utilization.

I've run similar tests to pingable nodes with pingb, an open-source tool that estimates bandwidth from packet size vs. latency over ICMP (packet size varies, according to my Wireshark capture, but the results seem to be consistent/repeatable); a rough sketch of the method follows the results below.

Against core switch IP addresses/interfaces:
192.168.30.1 ~25-50Mbps
10.1.0.1 ~25-50Mbps

Against non-switch IP addresses/interfaces (directly or indirectly connected to core):
10.1.0.2 (edge firewall, next hop, directly patched into core switch) ~86Mbps*
192.168.20.10 ~86Mbps

*86Mbps seems to be the top end of pingb for 100Mb connections: between two test nodes on a single unmanaged 10/100 Fast Ethernet switch, I got the same result.
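
For anyone who wants to reproduce this without pingb, here's a rough Python sketch of the same size/latency idea (my own approximation, not pingb's actual algorithm): it shells out to the Linux ping, measures average RTT for a small and a large payload, and estimates bandwidth from the extra serialization time of the larger packet. The 192.168.30.1 default target is just the gateway from the L3 info above.

# pingb_sketch.py -- rough approximation of the size/latency method (not pingb itself)
import re
import subprocess
import sys

def avg_rtt_ms(host, payload_bytes, count=10):
    # Average RTT in ms for ICMP echo with a given payload size (Linux iputils ping).
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(payload_bytes), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: "rtt min/avg/max/mdev = 0.321/0.402/0.611/0.081 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

def estimate_mbps(host, small=64, large=1400):
    # The extra round-trip time of the large packet is attributed to serializing
    # the extra bytes (carried both ways), giving a rough bandwidth figure.
    delta_ms = avg_rtt_ms(host, large) - avg_rtt_ms(host, small)
    if delta_ms <= 0:
        raise RuntimeError("RTT difference too small to estimate bandwidth")
    extra_bits = (large - small) * 8 * 2
    return extra_bits / (delta_ms / 1000.0) / 1e6

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "192.168.30.1"
    print("~%.1f Mbps toward %s" % (estimate_mbps(target), target))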

I also ran iperf tests with UDP and TCP to another node off of the core switch and saw ~95Mbps, which is pretty close to wirespeed.
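
If it helps anyone reproduce the iperf side, something like the following works (assuming iperf3 with an "iperf3 -s" listener already running on the far end; the DC VM at 192.168.20.10 from the L3 info above is used here only as an example target):

# iperf3_check.py -- quick data-plane throughput check via iperf3's JSON output
import json
import subprocess

def iperf3_mbps(server, seconds=10):
    # Run a TCP test and return the receiver-side throughput in Mbps.
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    print("~%.0f Mbps end to end" % iperf3_mbps("192.168.20.10"))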

I've run "show configuration | i icmp", which returned no results (I wasn't sure if there was any ICMP/flood protection enabled, etc.).

I did find this article, which seems to suggest that ICMP responses to/from the switch are treated with lower priority and that latency should be used as a measurement instead:
https://extremeportal.force.com/ExtrArticleDetail?an=000091286

I ran top and the CPU was around 3-4% (.\fdb was nearly always the top process), so I don't really know how to interpret the "low priority" here, and I'm not sure if there's any debug/logging I can enable that would say, "hey, I limited ICMP".

Any insight would be greatly appreciated.

3 REPLIES

EtherMAN
Contributor III
I don't know of any way to check processes to see when ICMP would take a back seat to something else. I can tell you the 450e never had much CPU oomph, from having them in the mix through the years. Remember, with the top command you are getting snapshots of CPU usage, not real time. Maybe one of the silicon jockeys from Extreme that worked on the 450e's could give you more insight into what you are looking for. Good hunting.

gravyface
New Contributor
Fair enough, and I agree. However, I've never seen this behavior on other switches and would like to confirm my hunch, hence my asking if there's a way to verify that the switch CPU is indeed the culprit.

EtherMAN
Contributor III
There is only one accepted way to test throughput, latency, and jitter on an Ethernet segment: get a good test set and something to do a soft or hard loopback, and run an RFC 2544 test. We use a JDSU 5808 and can test up to line rate at 10 Gb/s with full frame sizes from 64 to 9000. Anything you try to do from server to server, switch to switch, or router to router using the router, switch, or server itself will always be skewed, and only as good as whatever the CPU processing the ICMP packets is or is not doing at the time each packet arrives and has to be processed.
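
For a rough idea of what the test set is doing under the hood, the throughput part of RFC 2544 is basically a binary search for the highest zero-loss rate at each standard frame size. This is only a conceptual sketch in Python, not something you can run at line rate; send_at_rate() is a made-up stand-in for the test set's traffic generator (simulated here so the script actually runs):

# rfc2544_sketch.py -- conceptual outline of the RFC 2544 throughput search
# (illustration only; real tests need a hardware traffic generator and a loopback)

FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]  # standard RFC 2544 frame sizes (bytes)

def send_at_rate(frame_size, rate_mbps, duration_s=60):
    # Hypothetical stand-in for the test set: transmit frames at rate_mbps for
    # duration_s and return True if zero frames were lost. Simulated here as a
    # link that starts dropping above 940 Mbps so the sketch can execute.
    return rate_mbps <= 940.0

def zero_loss_throughput(frame_size, line_rate_mbps=1000.0, resolution_mbps=1.0):
    # Binary-search the highest rate with no frame loss for one frame size.
    lo, hi = 0.0, line_rate_mbps
    while hi - lo > resolution_mbps:
        mid = (lo + hi) / 2
        if send_at_rate(frame_size, mid):
            lo = mid   # no loss: push higher
        else:
            hi = mid   # loss: back off
    return lo

if __name__ == "__main__":
    for size in FRAME_SIZES:
        print("%4d-byte frames: ~%.0f Mbps zero-loss" % (size, zero_loss_throughput(size)))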

Even using CFM on most Extreme switches for basic network performance testing is dependent on the CPU, as there are very few XOS products that have hardware-based CFM support.

If a test set is not in your budget, then look into leasing one to do your profiling, or see if there is a vendor in your market that would do the testing for you on a per-test fee basis. One-gig testers are not too expensive, and there is a healthy used and refurbished market to choose from. For me, a good test set is as important as the tester for the Cat 5 or 6 drop you just terminated, or a fiber OTDR if you run your own fiber plant. If you are in the business of delivering Ethernet services, you should be able to test end to end...