Wednesday, September 9, 2009

Analysis and Simulation of a Fair Queueing Algorithm

Summary
This paper tackles the problem of building gateways that provide fair service in the face of congestion and possibly malicious end-points. Unlike the previous papers we read, this one addresses the problem in the middle of the network rather than at the end-points. That is not to say that end-point flow control shouldn't also be used; the two can work in conjunction.

The authors argue that first-come-first-served (FCFS) is not an adequate algorithm for routers, since a misbehaving source that sends packets fast enough can grab an arbitrarily large share of the available bandwidth. They propose fair queueing (FQ) as an alternative. Zhang's work on the Virtual Clock gateway queueing algorithm is cited as a similar approach.

Fair Queueing is basically the idea that we track the amount of data sent by each "flow" (in this paper the authors choose source-destination pairs as the unit of a flow) and split the available bandwidth evenly across flows. This requires a per-flow queue, so that a flow which sends data faster than its share only backs up (and eventually overflows) its own queue during congestion, without pushing other flows' packets out.
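
To make the bookkeeping concrete, here is a minimal sketch (my own, not from the paper) of per-flow buffering keyed by source-destination pairs; the field names and the queue limit are purely illustrative:

    from collections import deque

    # Hypothetical per-flow buffering: flows are keyed by (source, destination)
    # and each flow gets its own bounded queue, so a fast sender only overflows
    # its own buffer during congestion.
    MAX_QUEUE_PACKETS = 32

    flow_queues = {}  # (src, dst) -> deque of queued packets

    def enqueue(packet):
        key = (packet["src"], packet["dst"])
        q = flow_queues.setdefault(key, deque())
        if len(q) >= MAX_QUEUE_PACKETS:
            return False  # drop: only this flow's traffic is affected
        q.append(packet)
        return True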

The "most fair" FQ algorithm is a bit-by-bit round-robin (BR) approach in which one bit is sent from each flow each round such that each flow gets exactly the same amount of bandwidth. Since this is clearly infeasible the authors look at doing packet level queuing. The original propsal from Nagle did not take packet size into account, allowing users to game the system by sending large packets. The authors instead basically order packets by their finishing times (always picking the packet with the soonest finishing time to be sent next) which closely approximates BR.

A simple optimization is done for 'telnet'-style connections. If a connection is inactive (as an interactive connection often will be during wait times) it is given a priority boost, providing better performance for interactive sessions.
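
If I'm reading the paper right, this works by subtracting a small offset delta from the round number when computing the "bid" that determines send order, so a flow whose last packet finished well before the current round briefly moves ahead of continuously backlogged flows. A standalone sketch of that bid computation (the delta value here is just illustrative):

    def bid(last_finish, round_number, length_bits, delta=100.0):
        # Send-order bid with the promptness tweak: an idle flow (last_finish
        # well below the current round) gets a bid reduced by up to delta, so
        # its next packet is sent sooner; a continuously backlogged flow is
        # unaffected because its own last_finish dominates the max.
        return length_bits + max(last_finish, round_number - delta)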

The authors evaluate FQ vs. FCFS with a variety of flow-control algorithms and show that FQ provides better service and avoids congestion.

Thoughts
FQ deals nicely with the problem of misbehaving senders: they simply overflow their own queues without affecting anyone else. It also clearly provides fairer allocations even when all senders are behaving, since several different flow-control algorithms may be operating simultaneously and they all might send at different rates.

I had two main concerns while reading this paper. First is the complexity it adds to routers, which would have to classify each packet as belonging to a particular flow. This is non-trivial and would require considerably more powerful routers. Second is the (somewhat) arbitrary choice of source-destination pairs as the unit of flow. This can be gamed by opening connections to lots of receivers, or could unintentionally punish powerful machines that just happen to be hosting many users connecting to the same service. Still, I don't have a better solution.

1 comment:

  1. I also liked the policy they used to punish ill-behaved senders. It was elegant and simple, requiring little change to the algorithm.

    However, it is mentioned in the paper that a subsequent paper by Heybey and Davin (1989) questions whether this is a desirable quality. I couldn't find the paper when I looked online, but am curious to know what the drawbacks to punishing ill-behaved senders might be.
