Sally Floyd: Abstracts for Networking Papers and Talks
Papers
1997:
-
Floyd, S., and Fall, K.,
Router Mechanisms to Support End-to-End Congestion Control.
Technical report, February 1997.
Abstract:
This paper considers the potential negative impacts from an increasing
deployment of non-congestion-controlled best-effort traffic on the
Internet. These negative impacts range from extreme unfairness against
competing TCP traffic to the potential for congestion collapse. To
promote the inclusion of end-to-end congestion control for best-effort
traffic, we propose lightweight router mechanisms for identifying and
restricting the bandwidth of high-bandwidth best-effort flows that are
using a disproportionate share of the bandwidth in times of
congestion. Our method does not require per-flow state based on packet
arrivals, but instead relies on the history of packet drops from a
queue with RED (Random Early Detection) queue management.
Starting with high-bandwidth flows identified from the RED drop
history, we describe a sequence of tests capable of suggesting flows
for bandwidth regulation. These tests additionally identify a
high-bandwidth flow in times of congestion as unresponsive, "not
TCP-friendly", or simply very-high-bandwidth. An unresponsive flow is
one failing to reduce its offered load at a router in response to an
increased packet drop rate. A flow that is not TCP-friendly is one
whose long-term arrival rate exceeds that of any conformant TCP in the same
circumstances. A very-high-bandwidth flow uses a disproportionate
share of the bandwidth relative to other flows. Simulations show the
results of regulating the bandwidth of these unresponsive,
TCP-unfriendly, or very-high-bandwidth flows in times of congestion.
We end with a comparison between this approach and others using
per-flow scheduling for all best-effort traffic.
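[Illustrative sketch, not taken from the paper: a "not TCP-friendly" test of the general kind described above can be phrased as a comparison of a flow's measured arrival rate against a rough upper bound on the throughput of a conformant TCP, here the common approximation T = 1.22*B/(R*sqrt(p)). The function names, parameters, and example numbers are assumptions added for illustration.]

    # Rough bound (bytes/sec) on the arrival rate of a conformant TCP with
    # packet size B bytes, roundtrip time R seconds, and packet drop rate p.
    from math import sqrt

    def tcp_friendly_bound(packet_size_bytes, rtt_s, drop_rate):
        return 1.22 * packet_size_bytes / (rtt_s * sqrt(drop_rate))

    def is_tcp_friendly(arrival_rate_bytes_per_s, packet_size_bytes, rtt_s, drop_rate):
        """True if the flow's measured arrival rate stays within the TCP bound."""
        return arrival_rate_bytes_per_s <= tcp_friendly_bound(packet_size_bytes, rtt_s, drop_rate)

    # Example: 500-byte packets, a 60 ms roundtrip time, and a 2% drop rate
    # give a bound of roughly 70 KB/s.
    print(tcp_friendly_bound(500, 0.060, 0.02))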
1995:
-
Mathis, M., Mahdavi, J., Floyd, S., and Romanow, A.,
TCP Selective Acknowledgement Options.
Internet draft, work in progress. January 1996.
Abstract:
TCP may experience poor performance when multiple packets are lost from
one window of data. With the limited information available from
cumulative acknowledgements, a TCP sender can only learn about a single
lost packet per round trip time. An aggressive sender could choose to
retransmit packets early, but such retransmitted segments may have
already been successfully received.
A Selective Acknowledgement (SACK) mechanism, combined with a selective
repeat retransmission policy, can help to overcome these limitations.
The receiving TCP sends back SACK packets to the sender informing the
sender of data that has been received. The sender can then retransmit
only the missing data segments.
This draft proposes an implementation of SACK and discusses its
performance and related issues.
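[Illustrative sketch, not from the draft: a toy version of the selective-repeat idea above, with segment numbers standing in for sequence-number ranges. Real SACK information is carried in TCP options and limited to a few blocks per ACK, which this sketch ignores; all names are invented for illustration.]

    def received_ranges(received):
        """Collapse the set of received segments into contiguous (start, end)
        blocks, roughly what a receiver would report back to the sender."""
        blocks, run = [], None
        for seq in sorted(received):
            if run and seq == run[1] + 1:
                run[1] = seq            # extend the current block
            else:
                run = [seq, seq]        # start a new block
                blocks.append(run)
        return [tuple(b) for b in blocks]

    def missing_segments(received, highest_sent):
        """Segments the sender still needs to retransmit."""
        return [s for s in range(1, highest_sent + 1) if s not in received]

    received = {1, 2, 3, 5, 6, 9, 10}           # segments 4, 7, 8 were lost
    print(received_ranges(received))            # [(1, 3), (5, 6), (9, 10)]
    print(missing_segments(received, 10))       # [4, 7, 8]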
-
Floyd, S., Jacobson, V., Liu, C., McCanne, S., and Zhang, L.,
A Reliable Multicast Framework for Light-weight Sessions and
Application Level Framing.
Abstract:
This paper describes SRM (Scalable Reliable Multicast),
a reliable multicast framework for light-weight sessions and
application level framing.
The algorithms of this framework are efficient, robust, and scale well
to both very large networks and very large sessions.
The SRM framework has been prototyped in wb, a distributed whiteboard
application, which has been used on a global scale with sessions
ranging from a few to more than 1000 participants.
The paper describes the principles that have guided
the SRM design, including the IP multicast group delivery model,
an end-to-end, receiver-based model of reliability, and the
application level framing protocol model.
As with unicast communications, the performance of a reliable
multicast delivery algorithm depends on the underlying
topology and operational environment.
We investigate that dependence via analysis and simulation, and
demonstrate an adaptive
algorithm that uses the results of previous loss recovery events to
adapt the control parameters used for future loss recovery. With
the adaptive algorithm, our reliable multicast delivery algorithm
provides good performance over a wide range of underlying topologies.
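[Illustrative sketch, not the paper's tuned algorithm: the request timers at the heart of loss recovery in a scheme of this kind can be drawn uniformly from an interval proportional to the receiver's estimated distance from the source of the missing data, so that nearby receivers tend to request first and the rest suppress their duplicate requests. The constants and names are assumptions for illustration.]

    import random

    def request_delay(dist_to_source_s, c1=1.0, c2=1.0):
        """Uniform draw from [c1*d, (c1+c2)*d], with d the estimated one-way
        delay to the source of the lost data."""
        return random.uniform(c1 * dist_to_source_s, (c1 + c2) * dist_to_source_s)

    # Receivers closer to the source usually time out first; the others hear
    # that request and suppress their own.
    for d in (0.01, 0.05, 0.20):
        print(d, round(request_delay(d), 4))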
-
Floyd, S., and Jacobson, V.,
Link-sharing and Resource Management Models for Packet Networks.
To appear in IEEE/ACM Transactions on Networking, August 1995.
Abstract:
This paper discusses the use of link-sharing mechanisms in packet
networks and presents algorithms for hierarchical link-sharing.
Hierarchical link-sharing allows multiple agencies, protocol families,
or traffic types to share the bandwidth on a link in a controlled
fashion. Link-sharing and real-time services both require resource
management mechanisms at the gateway. Rather than requiring a gateway
to implement separate mechanisms for link-sharing and real-time
services, the approach in this paper is to view link-sharing and
real-time service requirements as simultaneous, and in some respect
complementary, constraints at a gateway that can be implemented with a
unified set of mechanisms.
While it is not possible to completely predict the requirements that
might evolve in the Internet over the next decade, we argue that
controlled link-sharing is an essential component that can provide
gateways with the flexibility to accommodate emerging applications and
network protocols.
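[Illustrative sketch, not the scheduler from the paper: one way to picture hierarchical link-sharing is as a tree of traffic classes, each allocated a fraction of its parent's bandwidth, so that a leaf class's share of the link is the product of the fractions along its path to the root. Class names and numbers are invented for illustration.]

    class LinkShareClass:
        def __init__(self, name, fraction, parent=None):
            self.name, self.fraction, self.parent = name, fraction, parent

        def allocation(self, link_bandwidth_bps):
            """Absolute bandwidth allocated to this class, in bits/sec."""
            share, node = self.fraction, self.parent
            while node is not None:
                share *= node.fraction
                node = node.parent
            return share * link_bandwidth_bps

    root = LinkShareClass("link", 1.0)
    agency_a = LinkShareClass("agency A", 0.7, root)
    realtime_a = LinkShareClass("agency A real-time", 0.4, agency_a)

    print(round(realtime_a.allocation(1_500_000)))   # 0.4 * 0.7 of 1.5 Mbps = 420000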
-
Paxson, V., and Floyd, S.,
Wide-Area Traffic: The Failure of Poisson Modeling.
IEEE/ACM Transactions on Networking,
Vol. 3, No. 3, pp. 226-244, June 1995.
Abstract:
Network arrivals are often modeled as Poisson processes for analytic
simplicity, even though a number of traffic studies have shown that
packet interarrivals are not exponentially distributed. We evaluate 21
wide-area traces, investigating a number of wide-area TCP arrival
processes (session and connection arrivals, FTPDATA connection arrivals
within FTP sessions, and TELNET packet arrivals) to determine the error
introduced by modeling them using Poisson processes. We find that
user-initiated TCP session arrivals, such as remote-login and
file-transfer, are well-modeled as Poisson processes with fixed hourly
rates, but that other connection arrivals deviate considerably from
Poisson; that modeling TELNET packet interarrivals as exponential
grievously underestimates the burstiness of TELNET traffic, but using
the empirical Tcplib [Danzig92] interarrivals preserves burstiness over
many time scales; and that FTPDATA connection arrivals within FTP
sessions come bunched into "connection bursts", the largest of which
are so large that they completely dominate FTPDATA traffic. Finally,
we offer some results regarding how our findings relate to the possible
self-similarity of wide-area traffic.
A shorter version of this paper
appeared in SIGCOMM 94, pp. 257-268, August 1994.
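[Illustrative sketch, not from the paper: the Poisson model that the abstract finds adequate for user-initiated session arrivals amounts to independent, exponentially distributed interarrival times at a fixed hourly rate, as in the generator below. Names and values are assumptions for illustration.]

    import random

    def poisson_session_arrivals(rate_per_hour, duration_hours, seed=None):
        """Arrival times (in hours) of a Poisson process with the given rate."""
        rng = random.Random(seed)
        t, arrivals = 0.0, []
        while True:
            t += rng.expovariate(rate_per_hour)    # exponential interarrival time
            if t > duration_hours:
                return arrivals
            arrivals.append(t)

    # Roughly 30 sessions in an hour, with the bursts and lulls of a Poisson process.
    print(len(poisson_session_arrivals(rate_per_hour=30, duration_hours=1, seed=1)))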
-
Romanow, A., and Floyd, S.,
Dynamics of TCP Traffic over ATM Networks.
IEEE JSAC, V. 13 N. 4, May 1995, p. 633-641.
Abstract:
We investigate the performance of TCP connections over ATM networks
without ATM-level congestion control, and compare it to the performance
of TCP over packet-based networks. For simulations of congested
networks, the effective throughput of TCP over ATM can be quite low
when cells are dropped at the congested ATM switch. The low throughput
is due to wasted bandwidth as the congested link transmits cells from
`corrupted' packets, i.e., packets in which at least one cell is
dropped by the switch. We investigate two packet discard strategies
which alleviate the effects of fragmentation. Partial Packet Discard,
in which remaining cells are discarded after one cell has been dropped
from a packet, improves throughput somewhat. We introduce Early Packet
Discard, a strategy that prevents fragmentation and brings throughput
up to maximal levels by having the switch drop whole packets prior to
buffer overflow.
An earlier version of this paper appeared in SIGCOMM '94, August 1994,
pp. 79-88.
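[Illustrative sketch, not the switch implementation studied in the paper: a toy cell-admission loop showing the two discard policies described above. With Partial Packet Discard, the remaining cells of a packet are dropped once one of its cells has been lost; with Early Packet Discard, a whole new packet is refused once the buffer passes a threshold, so that only complete packets occupy the link. All names and thresholds are invented for illustration.]

    def admit_cells(cells, buffer_limit, epd_threshold):
        """cells: (packet_id, is_last_cell) pairs in arrival order."""
        buffer = []
        discarding = set()    # packets whose remaining cells are dropped (PPD)
        in_progress = set()   # packets with at least one cell already buffered
        for pkt, last in cells:
            if pkt in discarding:
                pass                                      # PPD: drop the rest of this packet
            elif pkt not in in_progress and len(buffer) >= epd_threshold:
                discarding.add(pkt)                       # EPD: refuse the whole new packet
            elif len(buffer) >= buffer_limit:
                discarding.add(pkt)                       # overflow: drop, then PPD the rest
            else:
                buffer.append((pkt, last))
                in_progress.add(pkt)
            if last:                                      # packet finished: clear its state
                discarding.discard(pkt)
                in_progress.discard(pkt)
        return buffer

    # Three interleaved 3-cell packets fit; a fourth arriving at a nearly full
    # buffer is refused in its entirety, so no fragmented packet wastes bandwidth.
    cells = [(1, False), (2, False), (3, False),
             (1, False), (2, False), (3, False),
             (4, False),
             (1, True), (2, True), (3, True),
             (4, True)]
    print(admit_cells(cells, buffer_limit=9, epd_threshold=6))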
1994:
-
Floyd, S.,
TCP and Explicit Congestion Notification.
ACM Computer Communication Review, V. 24 N. 5, October 1994, p. 10-23.
[This issue of CCR incorrectly has "1995" on the cover instead of "1994".]
Abstract:
This paper discusses the use of Explicit Congestion Notification (ECN)
mechanisms in the TCP/IP protocol. The first part proposes new
guidelines for TCP's response to ECN mechanisms (e.g., Source Quench
packets, ECN fields in packet headers). Next, using simulations, we
explore the benefits and drawbacks of ECN in TCP/IP networks. Our
simulations use RED gateways modified to set an ECN bit in the IP
packet header as an indication of congestion, with Reno-style TCP
modified to respond to ECN as well as to packet drops as indications of
congestion. The simulations show that one advantage of ECN mechanisms
is in avoiding unnecessary packet drops, and therefore avoiding
unnecessary delay for packets from low-bandwidth delay-sensitive TCP
connections. A second advantage of ECN mechanisms is in networks
(generally LANs) where the effectiveness of TCP retransmit timers is
limited by the coarse granularity of the TCP clock. The paper also
discusses some implementation issues concerning specific ECN mechanisms
in TCP/IP networks.
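[Illustrative sketch, not from the paper: the sender-side behavior discussed above amounts to treating an ECN indication like a single packet drop, halving the congestion window but without losing a packet or waiting for a retransmit timer. The class, its once-per-window guard, and the numbers are assumptions for illustration.]

    class EcnTcpSender:
        def __init__(self, cwnd=16.0):
            self.cwnd = cwnd
            self.next_reaction_seq = 0    # earliest ack that may trigger another cut

        def on_ack(self, acked_seq, ecn_echo):
            if ecn_echo and acked_seq >= self.next_reaction_seq:
                self.cwnd = max(1.0, self.cwnd / 2)                  # react as for one drop
                self.next_reaction_seq = acked_seq + int(self.cwnd)  # at most once per window
            else:
                self.cwnd += 1.0 / self.cwnd                         # normal congestion avoidance

    s = EcnTcpSender()
    s.on_ack(acked_seq=100, ecn_echo=True)
    print(s.cwnd)   # 8.0: the window is halved without any packet being dropped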
-
Floyd, S., and Jacobson, V.,
The Synchronization of Periodic Routing Messages.
IEEE/ACM Transactions on Networking, V.2 N.2, p. 122-136,
April 1994.
Abstract:
The paper considers a network with many apparently-independent periodic
processes and discusses one method by which these processes can
inadvertently become synchronized. In particular, we study the
synchronization of periodic routing messages, and offer guidelines on
how to avoid inadvertent synchronization. Using simulations and
analysis, we study the process of synchronization and show that the
transition from unsynchronized to synchronized traffic is not one of
gradual degradation but is instead a very abrupt `phase transition':
in general, the addition of a single router will convert a completely
unsynchronized traffic stream into a completely synchronized one. We
show that synchronization can be avoided by the addition of
randomization to the traffic sources and quantify how much
randomization is necessary. In addition, we argue that the inadvertent
synchronization of periodic processes is likely to become an increasing
problem in computer networks.
An earlier version of this paper appeared in
ACM SIGCOMM 93.
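[Illustrative sketch, not the paper's quantified recommendation: the fix described above amounts to replacing a fixed period T between routing messages with an interval drawn at random around T, independently at each node, as below. The jitter fraction is an arbitrary example value.]

    import random

    def next_interval(period_s, jitter_fraction=0.5, rng=random):
        """Next timer value: uniform in [T - jitter, T + jitter]."""
        jitter = jitter_fraction * period_s
        return rng.uniform(period_s - jitter, period_s + jitter)

    # Five successive intervals for a nominal 30-second routing-update period.
    print([round(next_interval(30.0), 1) for _ in range(5)])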
-
Agarwal, D., and Floyd, S.,
A Tool for Debugging Internet Multicast Routing.
22nd ACM Computer Science Conference, Phoenix, AZ, March 1994.
Abstract:
In this paper we describe a debugging tool that is an effective means of
analyzing problems with multicast packet routing in a network.
Multicast packet routing is a source-driven distributed calculation
performed by the routers in a multicast network. The routes taken by
multicast packets are difficult to predict manually due to the large
number of variables that must be considered. The multicast route
debugging tool allows off-line investigation of the route taken by a
multicast packet and the effects of network modifications on that route.
The tool has already proved useful in debugging the problems that have
occurred in the experimental Internet Multicast Backbone. The multicast
route debugging tool currently predicts multicast routes of packets
using the distance-vector truncated-broadcast algorithm implemented for
Internet multicast traffic. We will be upgrading the tool to allow the
user to choose other multicast routing algorithms.
1993:
-
Floyd, S., and Jacobson, V.,
Random Early Detection Gateways for Congestion Avoidance.
IEEE/ACM Transactions on Networking,
V.1 N.4, August 1993, p. 397-413.
Abstract:
This paper presents Random Early Detection (RED) gateways for
congestion avoidance in packet-switched networks. The gateway detects
incipient congestion by computing the average queue size. The gateway
could notify connections of congestion either by dropping packets
arriving at the gateway or by setting a bit in packet headers. When
the average queue size exceeds a preset threshold, the gateway drops or
marks each arriving packet with a certain probability, where the exact
probability is a function of the average queue size.
RED gateways keep the average queue size low while allowing occasional
bursts of packets in the queue. During congestion, the probability
that the gateway notifies a particular connection to reduce its window
is roughly proportional to that connection's share of the bandwidth
through the gateway. RED gateways are designed to accompany a
transport-layer congestion control protocol such as TCP. The RED
gateway has no bias against bursty traffic and avoids the global
synchronization of many connections decreasing their window at the same
time. Simulations of a TCP/IP network are used to illustrate the
performance of RED gateways.
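[Illustrative sketch, not the paper's algorithm or recommended parameters: a minimal version of the marking rule described above keeps an exponentially weighted moving average of the queue size and, between a minimum and a maximum threshold, drops or marks each arriving packet with a probability that grows with that average. The full RED algorithm also spaces marks out with a count-based correction, which this sketch omits; all values here are illustrative.]

    import random

    class RedQueueSketch:
        def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.002):
            self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight
            self.avg = 0.0

        def on_packet_arrival(self, current_queue_len, rng=random):
            """Return True if this arriving packet should be dropped or marked."""
            self.avg = (1 - self.w) * self.avg + self.w * current_queue_len
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return rng.random() < p

    # With a persistently long queue the average climbs past the thresholds
    # and most arrivals end up marked; brief bursts are tolerated.
    red = RedQueueSketch()
    print(sum(red.on_packet_arrival(current_queue_len=20) for _ in range(5000)))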
1992:
-
Floyd, S., and Jacobson, V.,
On Traffic Phase Effects in Packet-Switched Gateways.
Internetworking: Research and Experience,
V.3 N.3, September 1992, p.115-156.
Abstract:
Much of the traffic in existing packet networks is highly periodic,
either because of periodic sources (e.g., real time speech or video,
rate control) or because window flow control protocols have a periodic
cycle equal to the connection roundtrip time (e.g., a network-bandwidth
limited TCP bulk data transfer). Control theory suggests that this
periodicity can resonate (i.e., have a strong, non-linear interaction)
with deterministic control algorithms in network gateways. In this
paper we define the notion of traffic phase in a packet-switched
network and describe how phase differences between competing traffic
streams can be the dominant factor in relative throughput. Drop Tail
gateways in a TCP/IP network with strongly periodic traffic can result
in systematic discrimination against some connections. We demonstrate
this behavior with both simulations and theoretical analysis. This
discrimination can be eliminated with the addition of appropriate
randomization to the network. In particular, analysis suggests that
simply coding a gateway to drop a random packet from its queue on
overflow, rather than dropping the tail, is often sufficient.
We do not claim that Random Drop gateways solve all of the problems of
Drop Tail gateways. Biases against bursty traffic and long roundtrip
time connections are shared by both Drop Tail and Random Drop
gateways. Correcting the bursty traffic bias has led us to investigate
a different kind of randomized gateway algorithm that operates on the
traffic stream, rather than on the queue. Preliminary results show
that the Random Early Detection gateway, a newly developed gateway
congestion avoidance algorithm, corrects this bias against bursty
traffic. The roundtrip time bias in TCP/IP networks results from the
TCP window increase algorithm, not from the gateway dropping policy,
and we briefly discuss changes to the window increase algorithm that
could eliminate this bias.
An earlier version of this paper
appeared in Computer Communication Review, V.21 N.2, April 1991.
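[Illustrative sketch, not from the paper: the randomization suggested above for the overflow case is to evict a uniformly chosen packet from the queue, rather than always the arriving one, as in the toy enqueue routine below. Names are invented for illustration.]

    import random

    def enqueue_random_drop(queue, packet, limit, rng=random):
        """Append the packet; on overflow, evict a randomly chosen packet
        (possibly the new arrival) instead of always dropping the tail."""
        queue.append(packet)
        if len(queue) > limit:
            queue.pop(rng.randrange(len(queue)))
        return queue

    q = []
    for pkt in range(20):
        enqueue_random_drop(q, pkt, limit=8)
    print(q)   # eight survivors; the evicted packets were chosen at random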
1991:
-
Floyd, S.,
Connections with Multiple Congested Gateways in Packet-Switched Networks
Part 1: One-way Traffic.
Computer Communication Review, Vol.21, No.5, October 1991, p. 30-47.
Abstract:
In this paper we explore the bias in TCP/IP networks against
connections with multiple congested gateways. We consider the
interaction between the bias against connections with multiple
congested gateways, the bias of the TCP window modification algorithm
against connections with longer roundtrip times, and the bias of Drop
Tail and Random Drop gateways against bursty traffic. Using
simulations and a heuristic analysis, we show that in a network with
the window modification algorithm in 4.3 tahoe BSD TCP and with Random
Drop or Drop Tail gateways, a longer connection with multiple congested
gateways can receive unacceptably low throughput. We show that in a
network with no bias against connections with longer roundtrip times
and with no bias against bursty traffic, a connection with multiple
congested gateways can receive an acceptable level of throughput.
We discuss the application of several current measures of fairness to
networks with multiple congested gateways, and show that different
measures of fairness have quite different implications. One view is
that each connection should receive the same throughput in
bytes/second, regardless of roundtrip times or numbers of congested
gateways. Another view is that each connection should receive the same
share of the network's scarce congested resources. In general, we
believe that the fairness criteria for connections with multiple
congested gateways require further consideration.
-
Floyd, S.,
Connections with Multiple Congested Gateways in Packet-Switched Networks
Part 2: Two-way Traffic.
Unpublished draft.
Abstract:
In a previous paper [F91] we explored the bias in TCP/IP networks
against connections with multiple congested gateways, for networks with
one-way traffic. Using simulations and a heuristic analysis, we showed
that in a network with the window modification algorithm in 4.3 tahoe
BSD TCP and with Random Drop or Drop Tail gateways, a longer connection
with multiple congested gateways can receive unacceptably low
throughput. However, we showed that in a network with no bias against
connections with longer roundtrip times and with no bias against bursty
traffic, a connection with multiple congested gateways can receive an
acceptable level of throughput.
In this paper we show that the addition of two-way traffic to our
simulations results in compressed ACKs at the gateways, resulting in
much more bursty traffic in the network. We use Priority gateways to
isolate the effects of compressed ACKs in networks with two-way
traffic; these Priority gateways give priority to small packets such as
ACK packets, thereby allowing two-way traffic without compressed ACKs.
In our simulations with two-way traffic and multiple congested gateways
with FIFO service and Drop Tail or Random Drop packet-drop algorithms,
the throughput for the connection with multiple congested gateways
suffers significantly. This loss of throughput can be reduced either
by the use of Priority gateways, which reduces the burstiness of the
traffic, or by the use of Random Early Detection (RED) gateways, which
are designed to avoid a bias against bursty traffic.
Notes
-
Floyd, S.,
Simulator Tests. July 1995.
Abstract:
This note shows some of the tests that we use to verify that our simulator
is performing as we intend.
-
Floyd, S.,
TCP and Successive Fast Retransmits.
Technical report, October 1994.
Abstract:
In this note we point out a long-standing problem for current Tahoe and
Reno TCP implementations that results from invoking Fast Retransmit
more than once in one roundtrip time.
Talks
1995:
- Floyd, S.,
Random Early Detection (RED) gateways (viewgraphs),
March 20, 1995.
- Floyd, S.,
Packet scheduling research (viewgraphs),
DARTnet II Meeting, March 6, 1995.
- Floyd, S.,
Link-sharing and Resource Management Models for Packet Networks (viewgraphs),
February 7, 1995.
Last modified: March 1999. Links checked in October 2008.