Notes on Testing RED Implementations
Sally Floyd
October 12, 1996

These are informal notes on procedures to test that RED implementations are giving the correct behavior.

(1) Check that RED will accommodate a transient queue of some size before dropping a packet.

Test procedure: If incoming links A and B are of the same speed as the outgoing link C with RED, then have N incoming back-to-back packets on link A, and at the same time N incoming back-to-back packets on link B. A transient queue of N packets should be created for outgoing link C. Repeat with increasing values of N until the gateway first drops a packet.

Note: For a RED queue to accommodate a transient burst of N packets, the buffer must be large enough to hold N packets. Implementing a RED queue with a small buffer is not recommended.

(2) Check that RED will only drop an occasional packet, as needed, generally without queue overflow, when a small number of high-bandwidth TCP connections are using all of the link bandwidth. Check also that, over a number of repetitions, the fraction of packet drops received by each connection is roughly proportional (VERY roughly is fine) to the fraction of bandwidth received by that connection.

Test procedure for lower-bandwidth links: Start a Reno TCP connection, with the receiver's advertised window set sufficiently large to use all of the available bandwidth on link C. Then start a second Reno TCP connection, also capable of using all of the link bandwidth. The steady-state result should be a small, infrequent rate of packet drops, but with high link utilization, generally without bursts of packets dropped from a single connection. Add a small telnet connection, and check that the average queueing delay and drop rate for that connection are low.

Note: Of course, the number of TCPs needed to create congestion will depend somewhat on the link bandwidth. On Gigabit links, a single TCP is not likely to be able to use all of the link bandwidth.

(3) Check that RED will frequently drop packets, as needed, but without queue overflow, when the traffic consists of many short TCP connections.

Test procedure: Start a large number of TCP connections, each with a small receiver's advertised window. Start enough TCPs, one at a time, until the demand is significantly higher than the available bandwidth. The result should be a large, frequent rate of packet drops, but with high link utilization and no queue overflow. Add a small telnet connection, and check that the average queueing delay for that connection is low. The drop rate will be higher than it would be in (2) above.

(4) Check that there is not a bias against bursty connections using only a small fraction of the link bandwidth.

Note: If the packet-dropping rate is a function of the instantaneous queue size instead of the average queue size, the result will be a bias against bursty connections. (A sketch of the average-queue-based drop decision appears at the end of these notes.)

(5) Check that global synchronization across multiple TCP connections is avoided.

Test procedure: Similar to test (2) above, with from two to four active TCP connections, each capable of using all of the link bandwidth. If there is global synchronization, with multiple TCP connections all going through slow-start at the same time, the result will be low utilization of the congested link.

Note: Drop-tail gateways can result in packets being dropped from multiple TCP connections at roughly the same time.
The result is that the TCP connections also reduce their congestion windows at the same time, resulting in lowered utilization of the congested link. Global synchronization could also result from a RED implementation where the packet-dropping rate is a function of the instantaneous queue size instead of the average queue size.
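
As an aid to interpreting tests (1), (4), and (5), here is a minimal sketch of the RED drop decision, following the algorithm in the 1993 Floyd/Jacobson RED paper: the drop probability is driven by an exponentially weighted moving average of the queue size, not by the instantaneous queue size. Only the parameter names (w_q, min_th, max_th, max_p) and the overall structure come from the paper; the red_state struct, the red_arrival function, the example parameter values, and the use of rand() are illustrative assumptions, and details such as the idle-period adjustment to the average are omitted.

/* Illustrative sketch of the RED drop decision (per the 1993 RED paper).
 * Not taken from any particular implementation. */

#include <stdlib.h>

struct red_state {
    double avg;      /* average queue size (EWMA) */
    double w_q;      /* queue weight, e.g. 0.002 */
    double min_th;   /* minimum threshold, in packets */
    double max_th;   /* maximum threshold, in packets */
    double max_p;    /* maximum drop probability, e.g. 0.02 */
    int    count;    /* packets since last drop while avg is between thresholds */
};

/* Returns 1 if the arriving packet should be dropped, 0 if it should be
 * queued.  'q' is the instantaneous queue size in packets.  The drop
 * decision depends on the *average* queue size, which is what lets a
 * RED queue absorb a transient burst (test 1) without dropping. */
int red_arrival(struct red_state *r, int q)
{
    /* Exponentially weighted moving average of the queue size.
     * (The idle-period adjustment from the paper is omitted here.) */
    r->avg = (1.0 - r->w_q) * r->avg + r->w_q * q;

    if (r->avg < r->min_th) {
        r->count = -1;
        return 0;                      /* no drops below min_th */
    }
    if (r->avg >= r->max_th) {
        r->count = 0;
        return 1;                      /* drop every packet above max_th */
    }

    /* min_th <= avg < max_th: drop with a probability that grows with
     * the average queue size and with the number of packets since the
     * last drop, so drops are spread out in time rather than bursty. */
    r->count++;
    double p_b = r->max_p * (r->avg - r->min_th) / (r->max_th - r->min_th);
    double p_a = p_b / (1.0 - r->count * p_b);
    if (p_a < 0.0 || p_a > 1.0)
        p_a = 1.0;
    if ((double)rand() / RAND_MAX < p_a) {
        r->count = 0;
        return 1;
    }
    return 0;
}

The point relevant to tests (4) and (5) is that a burst that briefly raises the instantaneous queue has only a small effect on avg, so bursty connections are not singled out for drops, and the count term spreads drops out over time rather than concentrating them in a way that could cause many connections to back off at once.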