Inbound Rate Limiting is Overly Restrictive to TCP/IP

Article ID: 11667

Products
Matrix N-Series DFE

Changes
Configured a Role-Based Inbound Rate Limiter (7537, 'set cos...', 'set policy...').
-or-
Configured a Priority-Based Inbound Rate Limiter (7345, 'set port ratelimit...').

Symptoms
A sniffer shows many retransmissions due to dropped TCP packets.
The rate limits actually applied are significantly more restrictive than those configured.
For example, a 100% rate limiter permits line speed (as expected), but a 90% rate limiter permits only 10-20% of traffic, and a 10% rate limiter permits only 1-2%.
The effect is the same regardless of whether percentages or absolute limits are specified.

Outbound Rate Shaping is not subject to this issue and works as expected.

Cause
There is a fundamental difference between the operation of Inbound Rate Limiting ('set cos...irl...') and Outbound Rate Shaping ('set cos...txq...').

Inbound Rate Limiting (IRL) works in the classic manner, by dropping packets in excess of the configured rate. This works fine for UDP/IP (1884) "best effort" delivery traffic, which is not monitored for pacing or completion and is not retransmitted when delivery fails.

TCP/IP (3590) "guaranteed" delivery traffic behaves differently. When TCP frames are dropped by a rate limiter, the protocol restarts and re-probes the link for available bandwidth. This probing takes only a few packets to accomplish, often too few to trip the rate limiter, so TCP sizes the link at full Ethernet bandwidth, only to immediately have packets dropped by the IRL again. As a result, users, and measurement tools that base their throughput figures on the correctly received data rather than on everything that traversed the link (original transmissions plus retransmissions), perceive an effective link bandwidth significantly below the configured rate. All vendors' Rate Limiting / Rate Policing implementations behave this way.
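As an illustration of the mechanism, the following toy Python simulation (a sketch under simplifying assumptions, not a model of the DFE hardware: the limiter is reduced to a per-RTT packet budget, and every loss is treated as a timeout that restarts the sender's window at one packet) reproduces the symptom. With the limiter configured at 90% of line rate, modeled goodput settles in the 10-20% range described above:

    LINK_PPS = 1000     # line rate, in packets per round-trip time (RTT)
    LIMIT_PPS = 900     # limiter configured at 90% of line rate

    def simulate(rtts=2000):
        """Toy model: each RTT the sender bursts one congestion window,
        the limiter passes at most LIMIT_PPS packets and drops the rest,
        and any loss is treated as a timeout that restarts the window."""
        cwnd, delivered = 1, 0
        for _ in range(rtts):
            burst = min(cwnd, LINK_PPS)
            passed = min(burst, LIMIT_PPS)   # drop-based limiting: excess is lost
            delivered += passed
            if burst > passed:
                cwnd = 1                     # timeout-style restart after loss
            else:
                cwnd = min(cwnd * 2, LINK_PPS)  # slow-start doubling
        return delivered / (rtts * LINK_PPS)

    print(f"effective goodput: {simulate():.0%} of line rate "
          f"(limiter configured at {LIMIT_PPS / LINK_PPS:.0%})")

Changing LIMIT_PPS to 100 (a 10% limiter) drops the modeled goodput to a few percent, mirroring the 10% example in the Symptoms section.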

Outbound Rate Shaping, as implemented in the DFE's cos command set, actually performs Transmit Queue shaping, which delays packets as they pass through the switch (a much more difficult hardware implementation, thus rarely seen in the industry). The resulting delivery delay causes TCP to size the link correctly, in conformance with the configured rate limits rather than the full Ethernet bandwidth, and then transmit within those rate constraints. As a result, user perception of effective link bandwidth reasonably approaches the rate that has been configured. UDP traffic is also correctly throttled by Outbound Rate Shaping.
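The same toy model, reworked so that excess packets are queued and delayed rather than dropped, shows why shaping behaves so differently. The one added assumption (beyond those in the sketch above) is that once a standing queue forms, TCP's ACK clock paces new data at the rate the shaper drains it:

    LINK_PPS = 1000     # line rate, packets per RTT (same toy units as above)
    LIMIT_PPS = 900     # shaper configured at 90% of line rate

    def simulate_shaped(rtts=2000):
        """Toy model of delay-based shaping: excess packets are queued,
        not dropped, so the sender sees no losses and, once a queue
        exists, is paced by returning ACKs at the shaped rate."""
        cwnd, delivered, queue = 1, 0, 0
        for _ in range(rtts):
            # ACK clocking: with a standing queue, new data enters no
            # faster than the shaper drains it
            offered = min(cwnd, LINK_PPS) if queue == 0 else LIMIT_PPS
            queue += offered
            drained = min(queue, LIMIT_PPS)  # shaper transmits at the limit
            queue -= drained
            delivered += drained
            cwnd = min(cwnd * 2, LINK_PPS)   # grows, but is pacing-limited
        return delivered / (rtts * LINK_PPS)

    print(f"effective goodput: {simulate_shaped():.0%} of line rate "
          f"(shaper configured at {LIMIT_PPS / LINK_PPS:.0%})")

Here goodput converges on roughly the configured 90% after the brief slow-start ramp, matching the behavior described above.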

Solution
When limiting IP traffic, use the limiting method most compatible with the expected traffic type:
  • Inbound Rate Limiting works most predictably with UDP traffic.
  • Outbound Rate Shaping works fine with UDP or TCP traffic.
Workaround:
What you actually see for TCP throughput under IRL depends on whether any packets within a "TCP Window" are dropped and on what is retransmitted in response. An end station that detects dropped packets within the defined window will wait the full window time before transmitting the full window of data again.

That being the case, if the end-station Network Interface Card (NIC) permits adjustment of the window size, decreasing it should improve performance: when packets are dropped within a smaller window, the wait time is shorter and a smaller chunk of data is retransmitted, making it less likely that any packets in the retransmission will themselves be dropped. Basically, the larger the window, the more rapidly the drop pattern escalates into a larger effect and lower effective throughput.
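If the NIC driver exposes no such setting, a comparable effect can sometimes be obtained per application by shrinking the socket buffers, which caps the window the TCP stack will advertise for that connection. A minimal Python sketch; the host, port, and 16 KiB buffer size are illustrative assumptions, not recommendations:

    import socket

    # Shrinking the socket buffers before connecting caps the TCP window
    # this connection will use, so each loss episode affects less data.
    # 16 KiB is an arbitrary illustrative value; tune it for the link.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)
    sock.connect(("192.0.2.10", 5001))  # placeholder address and port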

For a related discussion of the File Transfer Protocol (FTP), which runs over TCP, see 1324.