Internet Technologies and Applications (ITS 413)

Assignment 2 Comments

General Comments to All Groups

All groups ran a good number of experiments, and most groups obtained reasonable results. However, some groups did not understand why the results were as they were. Firstly, here is some explanation of the different experiments:

Window Size (Receive Buffer) and Bandwidth Delay Product
The performance of TCP depends on many factors including the buffer size and BDP. In brief, in ideal conditions TCP throughput will be at maximum (i.e. the link data rate minus overheads) when the receive buffer is greater than or equal to the BDP. If the receive buffer is less than the BDP then the TCP throughput will be limited by the receive buffer. For a detailed explanation read my 3 page description here.
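To make this concrete, here is a rough sketch of how such a test might be run with iperf (the server address is a placeholder, the 100Mb/s / 80ms figures are just example values, and iperf's -w option is used to request the TCP window/buffer size):

    # BDP = link data rate x RTT
    # e.g. 100Mb/s x 80ms = 100e6 * 0.080 = 8e6 bits = 1MB

    # Window well below the BDP: throughput is limited by the window
    iperf -s -w 128K                      # server (receiver)
    iperf -c <server-ip> -w 128K -t 20    # client (sender)

    # Window at or above the BDP: throughput approaches the link rate
    iperf -s -w 1M
    iperf -c <server-ip> -w 1M -t 20
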
Link Data Rate and Link Delay
Changing the link data rate and delay using tc impacts TCP throughput, as everyone saw. Of course the throughput can never be above the data rate (if it is, as one group showed, then there is a mistake in your experiment). Also, since changing the data rate and/or delay changes the BDP, the size of the receive buffer may impact performance as well.
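As a minimal sketch of this kind of setup with tc (the interface name eth0 and the particular rate/delay values are only examples; your experiments may have used different qdisc options):

    # Emulate an 80ms one-way delay on the outgoing interface (netem)
    sudo tc qdisc add dev eth0 root netem delay 80ms
    # Or limit the link to 10Mb/s with a token bucket filter (tbf)
    sudo tc qdisc replace dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms
    # Check and remove the settings when finished
    tc qdisc show dev eth0
    sudo tc qdisc del dev eth0 root
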
Packet Drops
Everyone saw that packet loss significantly reduces the TCP throughput. This is because TCP responds as if the packet loss was caused by congestion, reducing its sending rate.
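For reference, random loss can be introduced with tc/netem along the following lines (again, eth0 and the 1% figure are only examples):

    # Drop approximately 1% of packets sent out of eth0
    sudo tc qdisc add dev eth0 root netem loss 1%
    # Remove the loss when the test is finished
    sudo tc qdisc del dev eth0 root
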
Multiple TCP Sessions
TCP is fair amongst connections (when all conditions are the same for each connection). As everyone saw, TCP will share the link data rate among the connections. However, in practice the conditions for each connection may differ (e.g. one starts slightly before another, or the RTT of one is different from the other's) and that's why you may see different throughputs for different connections.
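As a sketch, competing TCP connections can be generated with iperf's -P option (the server address and duration are placeholders):

    iperf -s                              # server
    iperf -c <server-ip> -P 2 -t 30 -i 5  # client: 2 parallel TCP connections
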
TCP and UDP
UDP has no flow or congestion control. A UDP client just sends as fast as you tell it to (by default, iperf sends at 1Mb/s). When UDP and TCP share a link then you should see that, unless packets are dropped, UDP obtains a throughput equal to its sending rate, while TCP obtains the rest of the available capacity. E.g. on a 100Mb/s link, if UDP sends at 10Mb/s, then the UDP throughput will be 10Mb/s, while the TCP throughput will be about 85Mb/s (not 90Mb/s because of overheads). Some of you may have seen this in your experiments. However, everyone used only the default UDP sending rate of 1Mb/s. This is too small to see the impact UDP has on TCP throughput. That is, normally TCP obtains about 95Mb/s; introducing a 1Mb/s UDP client means TCP obtains about 94Mb/s. But that is only a small (1%) change in TCP throughput: you cannot really know whether it was UDP that reduced the throughput or some other factor (a different RTT, some lost packets). You needed to use larger UDP sending rates (10Mb/s, 20Mb/s, ...) to obtain more accurate results in your experiments.
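As a sketch of what such a test might look like (the server address and rates are placeholders; iperf's -b option sets the UDP sending rate, and the TCP and UDP listeners can share the default port since the protocols do not clash):

    # Server: one TCP listener and one UDP listener
    iperf -s &
    iperf -s -u &
    # Client: a TCP transfer and, at the same time, a UDP flow at 10Mb/s
    # instead of the 1Mb/s default (repeat with -b 20M, -b 50M, ...)
    iperf -c <server-ip> -t 30 &
    iperf -c <server-ip> -u -b 10M -t 30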

Finally, the reports were ok; some recommendations for future reports (to make them better and to make my life easier) are given in the per-group comments below.

Group 1

Expt1: Changing buffer size at server: results show that for buffers smaller than 85KB the throughput is suboptimal (above 85KB the throughput is around 90 to 95% of the 1Gb/s capacity - that's expected). The explanation does not provide any insight into this.

Expt2: Read length (at server): results show that for large values of read length the throughput doesn't vary, but for small values the throughput decreases significantly. Why? No explanation of write length (at client).

Expt3: The results are as expected - an approximately linear relationship between link data rate and throughput. For data rates of 100Kb/s to 10Mb/s the throughput is about 95% of the data rate (the rest is protocol overheads). However for a data rate of 100Mb/s it drops to about 60% - why?

Expt4: Link delay and throughput: Results as expected. Decrease in throughput as delay increases. Explanation is ok.

Expt5: Packet Drop and Throughput. Results as expected, however more tests between 1 and 10% would be useful (after 10%, the throughput is almost 0).

Expt6: Multiple TCP sessions. The results look ok. As expected, TCP is fair amongst connections. The link capacity is divided equally amongst each TCP connection.

Expt8: Multiple TCP and UDP sessions. Although a number of tests are run, unfortunately the UDP sessions do not take much bandwidth. That is, by default UDP sends at 1Mb/s. This is just 0.1% of the 1Gb/s. So without UDP (Experiment 6), the 2 TCP connections each get about 470Mb/s. With UDP, they again get about 470Mb/s. But since the UDP traffic is so small compared to TCP, we are unsure what impact it has on TCP. But what if UDP was sending at 10Mb/s? 50Mb/s? 100Mb/s? It would be easier to see the impact on TCP.

Expt9: BDP vs Buffer Size. Good, this shows that when the BDP is 1250KB and the receive buffer is less than the BDP, the throughput is limited by the receive buffer.

Report: good presentation, with brief but clear explanation of method and results. Would have been nice (easier to follow) if plots were shown in the same section as the explanation and table. A detailed list of the equipment used would also be nice (e.g. see Group 7 report). Sometimes if problems arise, they can be traced to the operating system version or specific types/manufacturers of LAN cards.

Group 2

Expt1: Receive buffer. Results as expected. Shows that for small windows the throughput is less. BDP is probably around 6KB to 10KB (e.g. RTT of 0.5 to 1ms).

Expt2: Read length. Ok

Expt3: Data rate. Results as expected, however why didn't you try larger than 10Mb/s? Up to 100Mb/s? There is no explanation as to why the throughput is less than the data rate.

Expt4: Link delay. Results as expected. However the reason for reduced throughput is not necessarily due to packet drops.

Expt5: Packet drop. Ok

Expt6: Multiple TCP.

Expt7: TCP/UDP. Should have used larger bandwidth for UDP.

Expt8: BDP. The RTT is about 80ms in your case (you set the delay of 1 link to be 80ms, the other return link is NOT 80ms). You should have tried larger window sizes relative to the BDP, e.g. up to 2000KB.

Report: good presentation. A detailed explanation of the equipment and setup would be nice. Presenting all the tables and then the plots/discussion 'overloads' the reader with too much information. Preferably use only one, and put the other (e.g. the tables) in an appendix.

Group 3

Expt1: Receive buffer. Results as expected. The explanation, buffered transfer vs flow control, doesn't make sense (although there is something related to flow control).

Expt2: BDP. Figure 1.2: I don't believe the default RTT is 100ms - that seems far too high (probably closer to 1ms). Hence I think your calculation of the BDP is wrong. In Figure 1.3 you set the BDP to 1000KB (by setting the RTT). When the window reaches 400KB the throughput is around maximum. This looks ok, however your explanation doesn't point this out.

Expt3: Read. Ok

Expt4: Delay. Ok

Expt5: Drop. Ok

Expt6: Data rate. Ok and good explanation, but why did you limit to 10Mb/s?

Expt7: Multiple TCP. Good.

Expt8: UDP/TCP. Should have used larger UDP bandwidth.

Report: would be nice if the report had a title page, or at least a list of authors. Good description of tests and default parameters. Some plots have inconsistent scales.

Group 4

Expt1: Receive buffer. The results look strange. The throughput is no larger than 14Mb/s (yet later experiments give throughput up to 95Mb/s?). Either the BDP is very high (e.g. a delay of 100ms across the link - but very unlikely) or the experiment was set up incorrectly. Your explanation does not identify this problem.

Expt2: Read length. ok.

Expt3: Link data rate. This is wrong. If the link data rate is 100kb/s then how can the throughput be 95Mb/s? The link data rate is the capacity of your link, the absolute maximum. The throughput is the real transfer rate (considering protocol overheads) - it will always be less than the link data rate.

Expt4: Delay. Ok, but no explanation.

Expt5: Drop. Ok.

Expt6: Multiple TCP. Ok.

Expt7: UDP/TCP. Should have used larger UDP bandwidth. No separate analysis of BDP.

Report: ok, although scale of some plots is inconsistent.

Group 5

Expt1: Receive buffer. I cannot tell what Figure 1 is meant to show. The results in Figure 2 and the explanation are ok.

Expt2: BDP. It is not clear what the plots are showing. I'd expect to see Throughput Vs ... . The axes are not labelled.

Expt3: Window size vs delay. Again, need to present results of throughput, not window size.

Expt4: Read length. Ok.

Expt5: Link data rate. Ok, but why not higher data rates (>10Mb/s)?

Expt6: Drop packets. Ok.

Expt7: Link delay. Ok

Expt8: Multiple TCP sessions. Ok.

Expt9: UDP. Should have used larger bandwidth.

Report: more details of the network and experiment setup are needed so that someone could repeat your experiments. Many of your plots do not have axes labelled.

Group 6

Expt1: Receive buffer. Results are ok, but the explanation does not explain why the cutoff for window size is around 5000B (i.e. the BDP).

Expt2: Length. Ok

Expt3: Data rate. Ok

Expt4: Delay. Ok

Expt5: Drop. Ok, but should have had more values less than 10%.

Expt6: BDP. I think the calculation of the BDP is wrong (or maybe the estimate of the RTT is incorrect). A data rate of 100Mb/s and RTT of 80ms gives a BDP of 1MB. You have only used much smaller window values. If you had a window of 1MB you should see the throughput approach 100Mb/s.

Expt7: TCP. Good, many different cases.

Expt8: TCP/UDP. Should have set UDP bandwidth to larger values.

Report: good presentation. The scales on some of your plots are not consistent.

Group 7

Expt1: Receive buffer. Tested both client and server window sizes. Made a good observation about the relationship between client and server window size (with a small client buffer the transmission rate is limited). Results are as expected: growth in throughput for small windows, fixed throughput for large windows. Cutoff is about 10KB to 15KB.

Expt2: Length of data. Ok.

Expt3: Link data rate. Results as expected. Good explanation of why protocol overheads mean you cannot get the full 100%.

Expt4: Link Delay. Results as expected. The increase in delay does not necessarily lead to packet drops due to timeouts. TCP in fact dynamically adjusts its timeout interval based on the measured delay.

Expt5: packet drops. Results as expected. Good explanation.

Expt6: BDP. Strange setup of experiment. Why is the data rate set to 200Mb/s, when your link only supports 100Mb/s? The 100Mb/s is a physical limit - you cannot go above it. With a delay of 80ms and link rate of 100Mb/s, the BDP is 1MB. Hence you should vary the window to be both less than and greater than the BDP. You only used values (much) less than the BDP, hence the throughput is very low.

Expt7: Multiple TCP. Ok.

Expt8: UDP/TCP. As with the others, should have increased the UDP sending rate.

Report: good presentation. However the scales on the plots are not consistent. For example, in Task 1 the scale is neither linear nor logarithmic. Then in Task 2 it is logarithmic, but maybe a linear scale would be better.
