Internet Technologies and Applications (ITS 413)

Assignment 2

Group project, 3 students per group
15% of course score
Due: 14 February 2011, 5pm
Late assignments: -10 marks per hour


Study the throughput performance of TCP by running experiments under varying conditions.

Learning Objectives

This assignment involves measuring the performance of TCP under different conditions. The objective is for students to learn about factors that impact on TCP performance in the Internet. In addition, students will learn about software tools for network measurement and experimentation. Finally, students will understand how to design and interpret results from experiments.


Each group requires two computers with a direct connection between them. An ideal setup would be two laptops with an Ethernet cable connecting them (wireless LAN is not recommended). Lab computers can be used if necessary.


You should use iperf (or similar software) to measure the performance of TCP. You may also use the Java-based GUI to iperf, called jperf, if you desire (note, jperf is available from the iperf website and also includes a Windows binary executable). iperf/jperf is open-source software that runs on Linux, Windows and Mac.
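A minimal iperf TCP test involves a server (receiver) on one computer and a client (sender) on the other. The address 192.168.1.10 below is just an example; substitute the address of your server host:

```shell
# On the server host: listen for incoming TCP test connections
iperf -s

# On the client host: run a 30-second TCP test to the server
# (192.168.1.10 is an example address - use your server's address)
iperf -c 192.168.1.10 -t 30
```

At the end of the test, both sides report the amount of data transferred and the measured bandwidth (throughput).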

Some tasks require you to force packets to be dropped and delayed. tc, which is part of the Linux Advanced Routing and Traffic Control (LARTC) tools, is excellent for this purpose. I don't know of any software that provides equivalent functionality that runs on Windows or BSD (MacOS) operating systems. If you do, then tell me! (You may be able to use firewall software ...)
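As a sketch of how tc can be used, the netem queueing discipline adds delay or drops to all packets leaving an interface. The interface name eth0 below is an example; check yours with ifconfig. These commands require root privileges:

```shell
# Add 100 ms of delay to every packet leaving eth0 (example interface name)
sudo tc qdisc add dev eth0 root netem delay 100ms

# Change the rule: drop 1% of outgoing packets instead
sudo tc qdisc change dev eth0 root netem loss 1%

# Remove the netem qdisc when you are finished
sudo tc qdisc del dev eth0 root
```

Remember to delete the qdisc after each experiment, otherwise the delay/loss will still be in effect for your next test.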

Some hints on using the software are provided below.

Operating Systems

The tasks involve using network testing software. It is recommended that you use a Linux-based operating system (e.g. Ubuntu). Instructions are given using Ubuntu as an example. You can run some of the software on Windows (e.g. iperf), however I am not aware of software for Windows that allows simple control of packet drops and delays (similar to tc in Linux).

Scenarios and Parameters

You need to run a set of experiments, measuring the performance for each. The scenarios you must consider are:

  1. Single TCP session; varying application/protocol parameters.
  2. Single TCP session; varying network/link conditions.
  3. Multiple TCP sessions.
  4. Single/Multiple TCP sessions in the presence of UDP sessions.
The application/protocol parameters are those that can be changed at the end-hosts and may impact on TCP performance, for example the TCP (receive) window size and the packet size used by the application. The network/link conditions can be controlled using tc; they include packet delay and packet drops.

The above parameters/conditions can be analysed for a single TCP session (no other traffic on the link). In addition, you should investigate TCP performance when multiple TCP and/or UDP sessions are run in parallel (e.g. 2xTCP; 3xTCP; 1xTCP+1xUDP). For these multi-session experiments you do not need to repeat for all the above parameters/conditions (simply choose a default set of parameters and conditions).
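One way to create the multi-session scenarios with iperf is sketched below (192.168.1.10 is an example server address; for the UDP session, a separate UDP server must be started with iperf -s -u):

```shell
# Two parallel TCP sessions from the same client (iperf's -P option)
iperf -c 192.168.1.10 -t 60 -P 2

# One TCP session plus one UDP session running at the same time:
# background the TCP test, then start a UDP test at 5 Mbit/s
iperf -c 192.168.1.10 -t 60 &
iperf -c 192.168.1.10 -t 60 -u -b 5M
```

Alternatively, you can start each session manually from separate terminals (or separate computers); if you do, make sure the sessions overlap for most of the test duration.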

You should start with the default values used by iperf, and then consider a range of values that are likely to demonstrate the impact of that parameter on performance. For example, if the default receive window size were 10,000 Bytes, then you could try tests with 5,000 Bytes and 15,000 Bytes. The results from these three tests should indicate what other values to try: if the throughput doesn't change across these values, then try a much higher (100,000 B) or lower (100 B) value. You may need to apply some trial and error. Finally, you should arrive at a set of measurements that, when plotted in a graph, shows the trend in performance as the parameter value changes. In most cases, you will need between 3 and 10 values.
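A parameter sweep like this can be scripted. The sketch below varies the TCP socket buffer (window) size using iperf's -w option; the specific values and the address 192.168.1.10 are examples only:

```shell
# Run a 30-second TCP test for each of several window sizes
# (-w sets the client's socket buffer / TCP window size)
for win in 4K 8K 16K 32K 64K; do
    echo "=== window size $win ==="
    iperf -c 192.168.1.10 -t 30 -w $win
done
```

Note that the operating system may round the requested buffer size up or down; iperf prints the size actually used, which is the value you should record.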


The key performance metric/statistic is TCP throughput (reported as bandwidth by iperf). Other metrics such as delay, jitter and packet loss do not need to be reported (however you may use them to help in understanding why the TCP throughput changes).


Two things that you must consider when running performance experiments are: How long should each test run for? How many tests should be run?

For the single-session tests, I recommend a duration of at least 30 seconds. For multi-session tests, if you manually start the individual sessions then you may need longer (e.g. 60 seconds). In addition, consider how the conditions may depend on test duration (e.g. the throughput may differ if you drop 1 packet in a 30 second test compared to 1 packet in a 300 second test) - why?

You should consider repeating each test multiple times. For example, run a 30 second TCP test and then repeat the same test (possibly several times). Compare the results. If the performance results are about the same, then you may record just one value or the average of all values. However, if the results change significantly you may repeat more times and/or record the average results and discuss the difference. For example, if 3 tests gave you throughput results of 10.1, 10.7 and 9.6 MB/s, then they are quite similar. But if the results were 2.3, 10.7 and 25.6 MB/s, then you may need to repeat more times until you see a trend. If you do not notice any trend, then discuss this in your report.
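Repeated runs are also easy to script. A simple sketch (again with 192.168.1.10 as an example server address):

```shell
# Repeat the same 30-second TCP test five times and record each result
for i in 1 2 3 4 5; do
    echo "--- run $i ---"
    iperf -c 192.168.1.10 -t 30
    sleep 2   # short pause between runs
done
```

Copy the reported bandwidth from each run into your results spreadsheet, then compute the average (and note the spread) there.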

Report and Deliverables

You should record all your test results in a file (or files). A spreadsheet is a good choice for most of you (e.g. .xls or .ods), as you can then easily create plots of your results. You need to submit this results file, however I will NOT look at it in detail. Therefore it doesn't have to be presented professionally - it is just your own record of the results. I am not interested in the exact data values - I am interested in how you present and interpret the data values you record.

You will be assessed on a report from your group. Read on for what is expected in the report...

How do I describe the tests?
For each test (or set of tests) you should try to give enough information so that someone else could repeat your test (and get the same results). You do not need detailed descriptions, instead you should use tables, lists and diagrams. Some things to consider include: host computer specification (hardware and software); network technologies; background traffic; parameter values; test software; test duration; test methodology (the steps you took to complete the test); number of tests. If you give all the details for the first test, then when describing the later tests you can simply list the differences.
How do I report results?
Most performance results should be presented as plots (e.g. throughput versus packet size). Some results may also be presented in tables. You do not have to report every result - concentrate on the results that show interesting trends. For example, if the throughput remained the same for all packet sizes, then you could write that rather than giving a plot.
What do I need to discuss?
For each set of tests, state any conclusions. Explain why the results are as you present. For example, if your results show the throughput increasing as the packet size increases, then say that and then explain why!
How many pages should the report be?
The report should include descriptions of the tests, selected results and discussion of the results. There is no minimum or maximum page limit. It should be long enough to clearly demonstrate that you have completed the tests and understood what is happening; and it should be short enough so that I don't get bored reading it. My guess is 10-15 pages is reasonable, but a report shorter than 10 pages or longer than 15 pages may still get full marks.
How do I submit?
Submit the results file(s) and a copy of the report by email. The report should be a PDF. A hardcopy of the report is not needed.
What is the marking scheme?
15% for description of experiments (e.g. you have provided sufficient information about the setup of your experiments)
40% for correctness and technical quality (e.g. discussion and results are correct)
25% for coverage of tests (e.g. you have selected appropriate parameter values and performed sufficient tests)
20% for presentation (e.g. nice looking plots, professionally presented document)

Working in Groups

The assignment is group work. Each member of the group should contribute equally to the work. This includes providing intellectual input (e.g. thinking about the tests and measurements, understanding the results), editing input (e.g. writing the text, formatting the figures) and performing data collection.

Sharing information (with other groups) on how to use the software is allowed and encouraged. Using other people's data and conclusions is NOT allowed. Copying text, results, figures etc. from other groups or external sources (e.g. web sites) is NOT allowed unless properly acknowledged.

Examples of using iperf and tc

I have written short guides to using iperf, tc and iptables, all of which are relevant for this assignment. Further information about these tools can be found online or by reading the man pages. Some parts of this assignment are similar to previous years, so comments I provided on the ITS413 Mailing List may provide useful information. But beware: some parts of the assignment have changed, so not everything is relevant.
