This web site contains the data files and test scripts for the paper “T. Høiland-Jørgensen et al. The Good, the Bad and the WiFi: Modern AQMs in a Residential Setting, Computer Networks (2015), http://dx.doi.org/10.1016/j.comnet.2015.07.014”.

Test setup

Figure 1

Physical test setup. All computers run Debian Wheezy; the access point runs OpenWrt Barrier Breaker. The latency inducer runs the stock kernel (version 3.2) with the `dummynet` module added, while the others have had the kernel replaced with a vanilla kernel version 3.14.4 from kernel.org.

The tests are run in a controlled environment consisting of five regular desktop computers, equipped with Intel 82571EB ethernet controllers and networked together in a daisy-chain configuration, corresponding to a common dumbbell scenario (with the multiple senders of the dumbbell represented by the individual flows established between the endpoint nodes). The middle machine adds latency by employing the dummynet emulation framework. The bottleneck routers employ software rate limiting (through the tbf rate limiter) to achieve the desired bottleneck speeds. The test computers are set up to avoid the most common testing pitfalls, as documented by the bufferbloat community best practices document.
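The exact shaper parameters used for each bottleneck speed are defined in the test scripts; a tbf configuration of roughly this shape would produce the software rate limiting described above (the interface name eth0 and the 10 Mbit/s rate are illustrative assumptions, not the paper's actual settings):

```shell
# Illustrative sketch only: attach a tbf rate limiter to the bottleneck
# interface. Interface name, rate, burst and latency are placeholders;
# the AQM under test would be configured beneath (or in place of) this shaper.
tc qdisc add dev eth0 root handle 1: tbf rate 10mbit burst 10kb latency 70ms
```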

This means that all hardware offload features are turned off, to allow all packet handling to happen in the kernel. Furthermore, the kernel Byte Queue Limits have been set to a maximum of one packet, and the kernel is compiled with the highest possible clock tick frequency (1000 Hz). The purpose of both measures is to eliminate sources of latency and queueing other than those induced by the queueing disciplines themselves, and to prevent the network driver and hardware from skewing the results by queueing packets outside the control of the queue management algorithms. The testbed computers' clocks are kept in sync by running the Precision Time Protocol over the control network.
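The offload and BQL tuning described above can be applied with commands along these lines (the interface name eth0 and queue index tx-0 are assumptions; BQL's limit_max is expressed in bytes, so "one packet" corresponds to one MTU-sized frame):

```shell
# Illustrative sketch; interface and queue names are placeholders.
# Disable hardware offloads so all segmentation, checksumming and
# aggregation happen in the kernel rather than on the NIC:
ethtool -K eth0 gso off tso off gro off lro off
# Cap Byte Queue Limits at roughly one full-size Ethernet frame
# (value is in bytes), per transmit queue:
echo 1514 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
```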

For all tests, the default Linux CUBIC TCP implementation is used. For the WiFi test, the laptop marked 'WiFi client' serves as the test client. The laptop is equipped with an Intel WiFi Link 5100 using the iwlwifi driver, while the access point is a Ubiquiti Nanostation M5 equipped with an AR7241 WiFi chipset using the ath9k driver.

Test utilities

The tests are run through the Flent testing tool and its batch mode. The batch file and test scripts used are available in this git repository.

A recent version (2.6 or newer) of Netperf is required for most tests. For the VoIP tests, D-ITG is used, run through the ditg-control-server.py script included with the Flent sources. A small patch to D-ITG is required for this to work; that is also included with the Flent sources. For running the HTTP tests, the http-getter client is used.
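As a rough illustration of how a single test might be invoked outside of batch mode (the server hostname, test length and title here are placeholders, not the parameters actually used for the paper; the authoritative invocations are in the batch file in the repository):

```shell
# Hypothetical example run of the RRUL test against a netperf server.
# -H: test server hostname (placeholder), -l: test length in seconds,
# -t: a title recorded in the resulting Flent data file.
flent rrul -H netperf-server.example.com -l 60 -t "example-run"
```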

Data files

The following files contain the Flent data files for all tests:

Packet dumps

Packet dumps are available for the wired tests, captured at different vantage points in the test setup.

Each file contains dumps (first 128 bytes for each packet) for one test. The file names correspond to the test file names in the file above, with a host name appended, and a .cap.gz extension. The host names are as follows:

Hostname            Function
tohojo-testbed-01   Test client
tohojo-testbed-02   Upstream bottleneck router
tohojo-testbed-04   Downstream bottleneck router
tohojo-testbed-05   Test server
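Since the dumps are gzip-compressed pcap files, they can be inspected without unpacking to disk by decompressing on the fly (the file name below is a placeholder; any of the .cap.gz files works the same way):

```shell
# Placeholder file name; substitute an actual dump file.
# tcpdump reads the pcap stream from stdin via '-r -'.
zcat some-test-tohojo-testbed-01.cap.gz | tcpdump -n -r - | head
```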

Dump files:

Contact

This page is written and maintained by Toke Høiland-Jørgensen. Questions, comments, etc. are very welcome at toke DOT hoiland-jorgensen AT kau DOT se.