Raj Jain, "Congestion Control in Computer Networks," in Proceedings of the 1989 ACM Sigmetrics and Performance '89 International Conference on Measurement and Modeling of Computer Systems, (Berkeley, California), pp. 216, May 1989.
Amarnath Mukherjee and John C. Strikwerda, "Analysis of Dynamic Congestion Control Protocols - A Fokker-Planck Approximation," in SIGCOMM '91 Conference: Communications Architectures and Protocols, (Zürich, Switzerland), pp. 159-169, Sep. 1991.
Abstract: We present an approximate analysis of a queue with dynamically changing input rates that are based on implicit or explicit feedback information. This is motivated by proposals for adaptive congestion control algorithms, where the sender's window size at the transport level is adjusted based on the perceived congestion level of a bottleneck node. We develop an analysis methodology for a simplified system; however, it is powerful enough to answer the important questions regarding stability, convergence (or oscillations), fairness, and the significant effect that delayed feedback has on performance. Specifically, we find that in the absence of feedback delay, the linear increase/exponential decrease algorithm of Jacobson and Ramakrishnan-Jain is provably stable and fair. Delayed feedback, on the other hand, introduces oscillations for every individual user as well as unfairness across those competing for the same resource. While the simulation study of Zhang and the fluid approximation study of Bolot and Shankar have observed the oscillations in cumulative queue length, and measurements by Jacobson have revealed some of the unfairness properties, the reasons for these have not been identified. This study quantitatively identifies the cause of these effects, vis-à-vis the system parameters and properties of the algorithm used. The model presented here is fairly general and can be applied to evaluate the performance of a wide range of feedback control schemes. It is an extension of the classical Fokker-Planck equation. Because the Fokker-Planck equation models diffusion, it addresses traffic variability that fluid approximation techniques do not.
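As background, the linear increase/exponential decrease rule analyzed above can be sketched in a few lines. The sketch below is a minimal illustration under simplified assumptions (binary congestion feedback from a fixed-capacity bottleneck, discrete update intervals); the function and parameter names are ours, not the paper's.

    # Minimal sketch (not the paper's model): linear increase / exponential
    # (multiplicative) decrease window control, with the congestion signal
    # delayed by `delay` update intervals.
    def window_trajectory(capacity=50.0, alpha=1.0, beta=0.875, delay=0, steps=200):
        window = 1.0
        signals = []                            # past congestion signals
        trajectory = []
        for _ in range(steps):
            signals.append(window > capacity)   # feedback generated now...
            # ...but the sender only sees the signal from `delay` steps ago.
            seen = signals[-1 - delay] if len(signals) > delay else False
            window = window * beta if seen else window + alpha
            trajectory.append(window)
        return trajectory

With delay=0 the trajectory settles into a tight band around the capacity; raising delay widens the oscillations, which is the qualitative behavior the paper's Fokker-Planck analysis quantifies.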
Brian A. Coan and Daniel Heyman, "Reliable Software and Communication - Part III: Congestion Control and Network Reliability," IEEE Journal on Selected Areas in Communications, vol. 12, no. 1, Jan. 1994.
Abstract: Existing congestion control and network management techniques have been designed to cope with problems such as unexpectedly high offered load and hardware failures. On rare occasions, networks have failed despite the deployment of such techniques. In this paper, we survey the current research in congestion control and network management techniques and their role in ensuring network stability in the presence of software errors. What situations are these techniques intended to address? What are the characteristics of the problems that have led to network collapse? What areas of research offer the best prospect for further improving network reliability?
Go Hasegawa, Masayuki Murata and Hideo Miyahara, "Fairness and Stability of Congestion Control Mechanism of TCP," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (New York), Mar. 1999.
Abstract: In this paper, we focus on the fairness and stability of the congestion control mechanisms adopted in several versions of TCP by investigating their transient behavior through an analytic approach. In addition to TCP Tahoe and TCP Reno, widely used in the current Internet, we also consider TCP Vegas, which adjusts the sending window size by observing the round-trip times of the connection, and an enhanced TCP Vegas, which is proposed in this paper for fairness enhancements. We consider a homogeneous case, where two connections have equivalent propagation delays, and a heterogeneous case, where each connection has a different propagation delay. We show that TCP Tahoe and TCP Reno can achieve fairness among connections in the homogeneous case, but cannot in the heterogeneous case. We also show that TCP Vegas can provide almost fair service among connections, though some unfairness remains, caused by the essential nature of TCP Vegas. Finally, we explain the effectiveness of our enhanced TCP Vegas in terms of fairness and throughput.
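For readers unfamiliar with the mechanism under study: TCP Vegas compares expected and actual throughput to estimate how many of a connection's packets are queued in the network, and holds that backlog inside a target band. The sketch below is a simplified illustration of that rule; the packet-granularity updates and the names vegas_update, alpha, and beta are our simplifications, not the paper's notation.

    # Simplified sketch of the TCP Vegas congestion-avoidance rule: keep
    # the estimated per-connection backlog between alpha and beta packets.
    def vegas_update(cwnd, base_rtt, rtt, alpha=1, beta=3):
        expected = cwnd / base_rtt                 # rate if the path were empty
        actual = cwnd / rtt                        # rate actually observed
        backlog = (expected - actual) * base_rtt   # packets queued in the network
        if backlog < alpha:
            return cwnd + 1                        # path underused: grow window
        if backlog > beta:
            return cwnd - 1                        # queue building: shrink window
        return cwnd                                # within target band: hold

One consequence worth noting: any backlog inside the [alpha, beta] band is acceptable, so competing connections can settle at unequal rates, which is plausibly the kind of residual unfairness the paper attributes to the essential nature of TCP Vegas.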
Gustavo De Veciana, Tae-Jin Lee and Takis Konstantopoulos, "Stability and Performance Analysis of Networks Supporting Services with Rate Control: Could the Internet Be Unstable?," in Proceedings of the Conference on Computer Communications (IEEE Infocom), (New York), Mar. 1999.
Abstract: We consider the stability and performance of a model for networks supporting services that adapt their transmission rate to the available bandwidth. Not unlike real networks, in our model connection arrivals are stochastic and each connection has a random amount of data to send, so the number of connections in the system changes over time. In turn, the bandwidth allocated to, or throughput achieved by, a given connection may change during its lifetime due to feedback control mechanisms that react to congestion and thus implicitly to the number of ongoing connections. Ideally, for a fixed number of connections, such mechanisms reach an equilibrium typically characterized in terms of its 'fairness' in allocating bandwidth to users, e.g. max-min or proportionally fair. In this paper we prove the stability of such networks when the offered load on each link does not exceed its capacity. We use simulation to investigate the performance, in terms of average connection delays, for various network topologies and fairness criteria. Finally, we pose an architectural problem in TCP/IP's decoupling of the transport and network layers from the point of view of guaranteeing connection-level stability, which we claim may explain congestion phenomena on the Internet.
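For reference, the two fairness criteria named in the abstract have standard formulations over per-connection rates x_i and link capacities c_l; the notation below is the conventional one, not taken from the paper.

    % Proportional fairness: the allocation maximizing aggregate log-utility
    % subject to the link capacity constraints (standard formulation),
    % where r(i) denotes the set of links on connection i's route:
    \max_{x \geq 0} \; \sum_{i} \log x_i
    \qquad \text{subject to} \qquad
    \sum_{i \,:\, \ell \in r(i)} x_i \leq c_\ell \quad \forall \ell.

    % Max-min fairness: a feasible x is max-min fair if no rate x_i can be
    % increased, while remaining feasible, without decreasing some x_j \leq x_i.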