Study the throughput performance of TCP by running experiments under varying conditions.
This assignment involves measuring the performance of TCP under different conditions. The objective is for students to learn about the factors that affect TCP performance in the Internet. In addition, students will learn about software tools for network measurement and experimentation. Finally, students will learn how to design experiments and interpret their results.
Each group requires two computers with a direct connection between them. An ideal setup is two laptops connected by an Ethernet cable (wireless LAN is not recommended). Lab computers can be used if necessary.
You should use iperf (or similar software) to measure the performance of TCP. You may also use jperf, the Java-based GUI for iperf, if you prefer (jperf is available from the iperf website and includes a Windows binary executable). iperf/jperf is open-source software that runs on Linux, Windows and Mac.
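As a minimal sketch of how iperf is typically run (assuming iperf 2 is installed on both machines; the address 192.168.1.1 is a placeholder for your server's actual IP address):

    # On one computer, start the iperf server (receiver)
    iperf -s

    # On the other computer, run the iperf client (sender) for a TCP test
    iperf -c 192.168.1.1

The client reports the measured bandwidth when the test finishes; by default a TCP test runs for 10 seconds.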
Some tasks require you to force packets to be dropped or delayed. tc, which is part of the Linux Advanced Routing and Traffic Control (LARTC) tools, is excellent for this purpose. I don't know of any software that provides equivalent functionality on Windows or BSD-based (macOS) operating systems. If you do, then tell me! (You may be able to use firewall software ...)
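For example, the netem queueing discipline can be used with tc to emulate loss and delay (a sketch only; eth0 is an assumed interface name, so substitute the name of your Ethernet interface):

    # Add 100 ms of delay and 1% packet loss to packets sent out of eth0
    sudo tc qdisc add dev eth0 root netem delay 100ms loss 1%

    # Change the parameters of the existing netem qdisc
    sudo tc qdisc change dev eth0 root netem delay 50ms loss 0.5%

    # Remove the qdisc (restore normal behaviour) when finished
    sudo tc qdisc del dev eth0 root

Note that netem applies to packets leaving the chosen interface, so decide whether to apply it at the sender, the receiver, or both.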
Some hints on using the software are provided below.
The tasks involve using network testing software. It is recommended that you use a Linux-based operating system (e.g. Ubuntu); the instructions are given using Ubuntu as an example. You can run some of the software on Windows (e.g. iperf), however I am not aware of any Windows software that allows simple control of packet drops and delays (similar to tc in Linux).
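On Ubuntu, the software can usually be installed from the standard package repositories (a sketch; package names may vary between releases):

    # Install iperf; tc is part of iproute2, which is normally installed by default
    sudo apt-get install iperf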
You need to run a set of experiments, measuring the performance for each. The scenarios you must consider are:
The above parameters/conditions can be analysed for a single TCP session (no other traffic on the link). In addition, you should investigate TCP performance when multiple TCP and/or UDP sessions are run in parallel (e.g. 2xTCP; 3xTCP; 1xTCP+1xUDP). For these multi-session experiments you do not need to repeat for all the above parameters/conditions (simply choose a default set of parameters and conditions).
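As a sketch of how multiple sessions can be created (assuming a server at the placeholder address 192.168.1.1):

    # 2xTCP or 3xTCP: use iperf's -P option to run parallel TCP streams
    iperf -c 192.168.1.1 -P 3 -t 60

    # 1xTCP + 1xUDP: start a second server with 'iperf -s -u', then run the
    # two clients at the same time (e.g. in separate terminals)
    iperf -c 192.168.1.1 -t 60
    iperf -c 192.168.1.1 -u -b 10M -t 60

Alternatively, you can start several independent iperf client/server pairs by hand; just make sure they overlap in time.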
You should start with the default values used by iperf, and then consider a range of values that are likely to demonstrate the impact of that parameter on performance. For example, if the default receive window size was 10,000 Bytes, then you could try tests with 5,000 Bytes and 15,000 Bytes. The results from these three tests should indicate what other values to try: if the throughput doesn't change across those values, then try a much higher (100,000 B) or lower (100 B) value. You may need to apply some trial-and-error. Finally, you should arrive at a set of measurements that, if plotted in a graph, would show the trend in performance as the parameter value changes. In most cases, you will need between 3 and 10 values.
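For example, the TCP window (socket buffer) size can be requested with iperf's -w option (the values below are placeholders; iperf prints the window size it actually obtained, which the operating system may adjust):

    # Run a test with a requested window of 5000 bytes, then repeat with other sizes
    iperf -c 192.168.1.1 -w 5000 -t 30
    iperf -c 192.168.1.1 -w 15000 -t 30

Setting -w on the server as well (before starting it) keeps both ends consistent.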
The key performance metric/statistic is TCP throughput (reported as bandwidth by iperf). Other metrics such as delay, jitter and packet loss do not need to be reported (however you may use them to help in understanding why the TCP throughput changes).
Two things that you must consider when running performance experiments are: How long should each test run for? How many tests should be run?
For the single-session tests, I recommend running each test for at least 30 seconds. For multi-session tests, if you manually start the individual sessions then you may need longer (e.g. 60 seconds). In addition, consider the conditions and how they may depend on test duration (e.g. the throughput may be different if you drop 1 packet in a 30 second test compared to dropping 1 packet in a 300 second test) - why?
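The test duration is controlled by iperf's -t option on the client, e.g.:

    # 30 second test and 300 second test under the same conditions
    iperf -c 192.168.1.1 -t 30
    iperf -c 192.168.1.1 -t 300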
You should consider repeating each test multiple times. For example, run a 30 second TCP test and then repeat the same test (possibly several times). Compare the results. If the performance results are about the same, then you may record just one value or the average of all values. However, if the results change significantly you may repeat more times and/or record the average results and discuss the differences. For example, if 3 tests gave you throughput results of 10.1, 10.7 and 9.6 MB/s, then they are quite similar. But if the results were 2.3, 10.7 and 25.6 MB/s, then you may need to repeat more times until you see a trend. If you do not notice any trend, then discuss this in your report.

You should record all your test results in a file (or files). A spreadsheet is a good choice for most of you (e.g. .xls or .ods), as you can then easily create plots of your results. You need to submit this results file, however I will NOT look at it in detail. Therefore it doesn't have to be presented professionally - it is just your own record of the results. I am not interested in the exact data values - I am interested in how you present and interpret the data values you record.
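One possible way to automate repeated tests and record the results (a sketch; iperf 2's -y C option produces comma-separated output that can be imported into a spreadsheet):

    # Run the same 30 second test 5 times, appending CSV output to a results file
    for i in 1 2 3 4 5; do
        iperf -c 192.168.1.1 -t 30 -y C >> results.csv
    done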
You will be assessed on a report from your group. Read on for what is expected in the report...
The assignment is group work. Each member of the group should contribute equally to the work. This includes providing intellectual input (e.g. thinking about the tests and measurements, understanding the results), editing input (e.g. writing the text, formatting the figures) and performing data collection.
Sharing information (with other groups) on how to use the software is allowed and encouraged. Using other people's data and conclusions is NOT allowed. Copying text, results, figures etc. from other groups or external sources (e.g. web sites) is NOT allowed unless properly acknowledged.