Why testing throughput with FTP is inaccurate

Article ID: 1324 


Performance testing 

The problem with using FTP (or any device-to-device transfer over TCP) is that you have no control over window sizing or the actual disk and memory I/O on the hosts. You also have no control over common factors such as ARP resolution and route caching on the test devices.
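
Even the buffer sizes an application asks for are only a request. A minimal Python sketch (assuming a Linux host) shows the kernel overriding the requested value, which is one reason window behavior sits outside the tester's control:

    import socket

    # Request a specific receive buffer and see what the kernel actually grants.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    requested = 64 * 1024
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
    granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"requested {requested} bytes, kernel granted {granted} bytes")
    # On Linux the kernel typically doubles the requested value and clamps it
    # to net.core.rmem_max; the effective TCP window is then auto-tuned at
    # runtime, outside the application's control.
    s.close()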

A throughput test also should not include flow setup. To test accurately, sample data is first injected through the DUT (device under test) to ensure that all destinations are learned; the stream is then stopped and restarted after learning is complete, so that the timed run measures only total throughput, as sketched below.
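
A minimal sketch of that two-phase pattern, assuming a hypothetical send_frame(dst) helper that injects one frame toward a destination:

    import time

    def run_test(send_frame, destinations, frames_per_dest=100):
        # Learning pass: inject sample frames so the DUT populates its
        # forwarding tables; this pass is not timed.
        for dst in destinations:
            send_frame(dst)
        time.sleep(1.0)  # allow the DUT to finish learning

        # Measurement pass: restart the stream and time only this phase,
        # so flow setup and address learning do not pollute the result.
        start = time.perf_counter()
        sent = 0
        for _ in range(frames_per_dest):
            for dst in destinations:
                send_frame(dst)
                sent += 1
        elapsed = time.perf_counter() - start
        return sent / elapsed  # frames per second offered to the DUT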

FTP applications report how long a transfer took, but they fail to give you exact information about how that figure is calculated.
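
For example, a typical client-side measurement looks something like the sketch below; nothing in the reported number says whether connection setup, a cold ARP cache, or local disk reads fell inside the timed window (the address and file name here are hypothetical):

    import socket
    import time

    # The kind of timing an FTP-style client reports: the interval silently
    # includes TCP connection setup, ARP resolution on a cold cache, and
    # local disk reads, none of which the final number discloses.
    start = time.perf_counter()
    with socket.create_connection(("192.0.2.10", 9000)) as s:  # hypothetical server
        with open("sample.bin", "rb") as f:                    # hypothetical test file
            sent = 0
            while chunk := f.read(65536):
                s.sendall(chunk)
                sent += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"{sent / elapsed / 1e6:.2f} MB/s (measured from when, to when?)")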

TCP also changes its window size depending on traffic conditions, halving the congestion window if a packet is dropped during the test. Many stations have inefficient buffering, and not all stations handle TCP reassembly in a timely manner, resulting in more variance in test results.
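
The impact of that halving is easy to see in a toy additive-increase/multiplicative-decrease model (a simplification of real TCP congestion control, with a made-up loss point):

    # Toy AIMD model: the window grows by one segment per round trip and is
    # halved whenever a loss occurs. A single drop mid-test depresses the
    # average window, and therefore measured throughput, for many round
    # trips afterward.
    def aimd(rounds, loss_rounds):
        window, history = 1, []
        for r in range(rounds):
            history.append(window)
            if r in loss_rounds:
                window = max(1, window // 2)  # multiplicative decrease on loss
            else:
                window += 1                   # additive increase per RTT
        return history

    clean = aimd(40, loss_rounds=set())
    lossy = aimd(40, loss_rounds={20})
    print("average window, no loss: ", sum(clean) / len(clean))
    print("average window, one loss:", sum(lossy) / len(lossy))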

To have a valid test, you need a consistent source: one capable of sending a stream of identical data in a precise and repeatable manner. You also need a receiver capable of precisely measuring the data received while performing standard error checking to ensure the data was not corrupted. Traffic generators such as IXIA or SmartBits are ideal for these situations.
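
In essence, a generator/receiver pair does the following. This is only a minimal UDP sketch of the idea, not a substitute for dedicated hardware; the address, packet count, and payload are made up:

    import hashlib
    import socket
    import struct
    import time

    ADDR = ("192.0.2.20", 5001)   # hypothetical receiver address
    PAYLOAD = b"\xa5" * 1024      # identical data in every packet
    PACKETS = 10_000

    def send_stream():
        # Consistent source: identical payloads sent back to back, each
        # carrying a sequence number and a digest for error checking.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        digest = hashlib.md5(PAYLOAD).digest()
        for seq in range(PACKETS):
            sock.sendto(struct.pack("!I", seq) + digest + PAYLOAD, ADDR)
        sock.close()

    def receive_stream():
        # Precise receiver: counts bytes, verifies every digest, and times
        # only the interval between the first and last packet seen.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(ADDR)
        sock.settimeout(2.0)
        received = corrupt = 0
        start = last = None
        try:
            while True:
                data, _ = sock.recvfrom(2048)
                last = time.perf_counter()
                if start is None:
                    start = last
                if hashlib.md5(data[20:]).digest() != data[4:20]:
                    corrupt += 1
                received += len(data)
        except socket.timeout:
            pass
        if start is None or last == start:
            print("not enough packets received to measure")
            return
        print(f"{received / (last - start) / 1e6:.2f} MB/s, {corrupt} corrupted packets")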