Monday, September 7, 2009

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

This paper sets out a formal analysis of various strategies for avoiding congestion in a network. In particular, the authors define the point where the network operates most efficiently (the 'knee' of the throughput vs. load curve) and look for algorithms that keep the network operating at that point. They also impose additional criteria: all users should share any bottleneck in the network equally (the system should be fair), no global state should be needed (the system is distributed), the control policy should converge to the optimal state as quickly as possible (responsiveness), and once there it should oscillate around the optimal point as little as possible (smoothness).
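
The fairness criterion is commonly formalized as Jain's fairness index (due to the same authors), which is 1.0 when all users get an equal share and falls toward 1/n as one user monopolizes the bottleneck. A minimal sketch, not code from the paper:

```python
def jain_fairness(allocations):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    Equals 1.0 for perfectly equal shares; approaches 1/n when
    a single user takes nearly everything."""
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    return (total * total) / (n * sum_sq)

equal = jain_fairness([1.0, 1.0, 1.0, 1.0])      # perfectly fair
hog = jain_fairness([4.0, 0.01, 0.01, 0.01])     # one user dominates
```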

The model the authors consider is one of binary feedback: in each unit of time, a client is informed whether the network is underloaded (a congestion bit set to zero) or overloaded (the bit set to one). It is also assumed that this feedback is timely and never delayed, an assumption whose implications are not considered in the paper. Given this feedback, clients can either increase or decrease their utilization. The paper focuses on linear increase and decrease policies (i.e. adding a constant or multiplying by a constant) but does address the usefulness of non-linear approaches. Binary feedback is defended as a simple approach that is actually possible to implement and that gives quite good results.
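
The general linear control the paper analyzes can be written as x(t+1) = a_I + b_I*x(t) on "underloaded" feedback and x(t+1) = a_D + b_D*x(t) on "overloaded" feedback. A one-step sketch, with illustrative parameter values of my own choosing (the defaults correspond to additive increase, multiplicative decrease):

```python
def linear_control(x, congested, a_i=1.0, b_i=1.0, a_d=0.0, b_d=0.875):
    """One step of the paper's general linear control law.
    congestion bit = 1 (congested): x <- a_D + b_D * x
    congestion bit = 0 (underloaded): x <- a_I + b_I * x
    Defaults give pure additive increase (a_i > 0, b_i = 1) and
    pure multiplicative decrease (a_d = 0, b_d < 1); the specific
    constants here are illustrative, not from the paper."""
    if congested:
        return a_d + b_d * x
    return a_i + b_i * x

up = linear_control(10.0, congested=False)   # 10 + 1 = 11
down = linear_control(10.0, congested=True)  # 10 * 0.875 = 8.75
```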

Through careful algebraic and graphical analysis, the authors prove that the best policy uses additive increase and multiplicative decrease, that responsiveness and smoothness are inversely related, and that non-linear approaches are not viable because they are too sensitive to system parameters.
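
The intuition behind the additive-increase/multiplicative-decrease result can be seen in a tiny simulation: two users sharing one bottleneck receive the same congestion bit, and each decrease shrinks the gap between them while each increase leaves it unchanged, so the allocations drift toward the fair, equal-share line. A sketch under my own assumed parameters (capacity, step counts, and constants are illustrative, not from the paper):

```python
def simulate_aimd(x1, x2, capacity, steps, add=1.0, mult=0.5):
    """Two users on one bottleneck with shared binary feedback.
    Under capacity: both add a constant (gap x2 - x1 is unchanged).
    Over capacity: both multiply by mult < 1 (gap shrinks by mult),
    which is why the allocations converge toward equality."""
    for _ in range(steps):
        if x1 + x2 > capacity:   # congestion bit set to one
            x1, x2 = x1 * mult, x2 * mult
        else:                    # congestion bit set to zero
            x1, x2 = x1 + add, x2 + add
    return x1, x2

# Start very unfairly; after many cycles the shares are nearly equal.
a, b = simulate_aimd(0.0, 80.0, capacity=100.0, steps=200)
```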

The authors cite a number of related works on information flows and algorithms for dealing with them. Also relevant would be max-flow/min-cut algorithms from graph theory for identifying the bottleneck, a problem the paper doesn't tackle.

I enjoyed this paper quite a lot. It gives a very intuitive idea of why the various parameters should be set as they are, as well as a rigorous mathematical proof of the results. The vector diagrams help to visualize the progress of the algorithms as well.

My primary concern was that the paper assumes all clients in the system behave correctly. It seems that a client that simply floods the network would completely break this scheme, and in a wide-area network like the Internet, malicious clients must be assumed to exist. There is no analysis of what effect such a client would have on the overall fairness and stability of the approaches in the paper.

It is also worrisome that prompt delivery of congestion status is assumed. In a congested network (exactly when these algorithms are most needed), it seems very possible that notifications would be delayed and that these algorithms would fail. TCP's use of dropped packets as an indicator of congestion seems like a good way around this problem.

Overall, however, this is a strong paper, and one I would keep on the syllabus.

1 comment:

  1. I agree with your concern about misbehaving clients. I am also curious whether the authors considered such misbehaving clients and had plans to build in mechanisms to control them (i.e. if a client doesn't respond correctly to congestion-control feedback, then possibly drop more of that client's packets).
