All groups ran a good number of experiments, and most groups obtained reasonable results. However, some groups did not understand why the results were as they were. Firstly, here is some explanation of the different experiments:
Finally, the reports were ok, but here are some recommendations for the future (to make your reports better and to make my life easier):
Expt1: Changing buffer size at server: results show that for buffers less than 85KB the throughput is suboptimal (above 85KB the throughput is around 90 to 95% of the 1Gb/s capacity - that's expected). The explanation does not provide any insights into this.
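As a rough check of where that cutoff comes from (a sketch, not from the report; the 0.7ms RTT is my assumption for a directly connected 1Gb/s LAN), throughput on an otherwise idle link is bounded by min(capacity, window/RTT):

```python
# Rough model: throughput is limited by the link capacity and by window/RTT.
# The 0.7 ms RTT is an assumed value for a directly connected 1 Gb/s LAN.
CAPACITY_BPS = 1e9
RTT_S = 0.0007

def window_limited_throughput(window_bytes):
    """Upper bound on throughput in bits/s: min(capacity, window/RTT)."""
    return min(CAPACITY_BPS, window_bytes * 8 / RTT_S)

for w_kb in (16, 32, 64, 85, 128, 256):
    tput = window_limited_throughput(w_kb * 1000)
    print(f"window {w_kb:3d} KB -> at most {tput / 1e6:6.1f} Mb/s")
```

With an RTT around 0.7ms the bound crosses the 1Gb/s capacity at roughly 85 to 90KB, which matches the observed cutoff.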
Expt2: Read length at server: results show that for large values of read length the throughput doesn't vary, but for small values the throughput decreases significantly. Why? No explanation of write length (at client).
Expt3: The results are as expected - an approximately linear relationship between link data rate and throughput. For data rates of 100Kb/s to 10Mb/s the throughput is about 95% of the data rate (the rest is protocol overheads). However, for a data rate of 100Mb/s it drops to about 60% - why?
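For reference, a rough estimate of the per-segment overhead (a sketch assuming a standard 1500-byte Ethernet MTU and no TCP options) shows where the ~5% goes:

```python
# Per-segment overhead on Ethernet, assuming a 1500-byte MTU and no TCP options.
MSS      = 1460          # TCP payload per segment (1500 - 20 IP - 20 TCP)
TCP_IP   = 20 + 20       # TCP + IPv4 headers
ETHERNET = 14 + 4        # Ethernet header + frame check sequence
PHY_GAP  = 8 + 12        # preamble + inter-frame gap

wire_bytes = MSS + TCP_IP + ETHERNET + PHY_GAP
print(f"payload efficiency ~ {MSS / wire_bytes:.1%}")   # roughly 95%
```

Per-segment overhead alone cannot explain the drop to about 60% at 100Mb/s, so the report should look for another cause.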
Expt4: Link delay and throughput: Results as expected. Decrease in throughput as delay increases. Explanation is ok.
Expt5: Packet Drop and Throughput. Results as expected, however more tests between 1% and 10% would be useful (above 10%, the throughput is almost 0).
Expt6: Multiple TCP sessions. The results look ok. As expected, TCP is fair amongst connections: the link capacity is divided equally amongst the TCP connections.
Expt8: Multiple TCP and UDP sessions. Although a number of tests are run, unfortunately the UDP sessions do not take much bandwidth. By default UDP sends at 1Mb/s, which is just 0.1% of the 1Gb/s capacity. So without UDP (Experiment 6), with 2 TCP sessions each gets about 470Mb/s. With UDP, they again get about 470Mb/s. But since the UDP rate is so small compared to TCP, we are unsure what impact it has on TCP. What if UDP was sending at 10Mb/s? 50Mb/s? 100Mb/s? It would be easier to see the impact on TCP.
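To make that concrete, a naive first-order estimate (a sketch that ignores how TCP actually reacts to competing UDP traffic) of the share left to each TCP session for different UDP sending rates:

```python
# Naive estimate: UDP takes its sending rate, the remainder is split between TCP sessions.
CAPACITY_MBPS = 1000     # 1 Gb/s link
N_TCP = 2

for udp_mbps in (1, 10, 50, 100):
    per_tcp = (CAPACITY_MBPS - udp_mbps) / N_TCP
    print(f"UDP at {udp_mbps:3d} Mb/s -> each TCP ~ {per_tcp:.0f} Mb/s")
```

At 1Mb/s of UDP the expected change is lost in measurement noise; at 50 to 100Mb/s the difference would be clearly visible.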
Expt9: BDP vs Buffer Size. Good; this shows that when the BDP is 1250KB and the receive buffer is less than the BDP, the throughput is limited by the receive buffer.
Report: good presentation, with brief but clear explanation of method and results. It would have been nice (easier to follow) if the plots were shown in the same section as the explanation and table. A detailed list of the equipment used would also be nice (e.g. see the Group 7 report). Sometimes, if problems arise, they can be traced to the operating system version or to specific types/manufacturers of LAN cards.
Expt1: Receive buffer. Results as expected: for a small window the throughput is lower. The BDP is probably around 6KB to 10KB (e.g. an RTT of 0.5 to 1ms).
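A quick check of that estimate (a sketch assuming a 100Mb/s link, which is my reading of this setup):

```python
# BDP = link rate x RTT; the 100 Mb/s link rate is an assumption about this setup.
RATE_BPS = 100e6
for rtt_ms in (0.5, 1.0):
    bdp_kb = RATE_BPS * (rtt_ms / 1000) / 8 / 1000
    print(f"RTT {rtt_ms} ms -> BDP ~ {bdp_kb:.1f} KB")
```

This gives roughly 6KB to 12KB, in line with the estimate above.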
Expt2: Read length. Ok
Expt3: Data rate. Results as expected, however why didn't you try rates larger than 10Mb/s, up to 100Mb/s? There is no explanation as to why the throughput is less than the data rate.
Expt4: Link delay. Results as expected. However, the reduced throughput is not necessarily due to packet drops.
Expt5: Packet drop. Ok
Expt6: Multiple TCP.
Expt7: TCP/UDP. Should have used larger bandwidth for UDP.
Expt8: BDP. The RTT is about 80ms in your case (you set the delay of one link to be 80ms; the return link is NOT 80ms). You should have tried larger window sizes relative to the BDP, e.g. up to 2000KB.
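To see why (a sketch assuming a 100Mb/s link; the 80ms delay dominates the RTT since the return path adds little):

```python
# BDP = link rate x RTT; the 100 Mb/s link rate is an assumption about this setup.
RATE_BPS = 100e6
RTT_S = 0.08                                  # the 80 ms one-way delay dominates the RTT

bdp_kb = RATE_BPS * RTT_S / 8 / 1000
print(f"BDP ~ {bdp_kb:.0f} KB")                          # about 1000 KB (1 MB)
print(f"sweep the window up to ~{2 * bdp_kb:.0f} KB")    # roughly 2000 KB
```

Windows well below 1000KB only show the window-limited regime; going up to about 2000KB would also show the capacity-limited regime.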
Report: good presentation. A detailed explanation of the equipment and setup would be nice. Presenting all the tables and then the plots/discussion 'overloads' the reader with too much information. Preferably use only one, and put the other (e.g. the tables) in an appendix.
Expt1: Receive buffer. Results as expected. The explanation, buffered transfer vs flow control, doesn't make sense (although it is somewhat related to flow control).
Expt2: BDP. Figure 1.2: I don't believe the default RTT is 100ms - that seems far too high (probably closer to 1ms). Hence I think your calculation of the BDP is wrong. In Figure 1.3 you set the BDP to 1000KB (by setting the RTT). When the window reaches 400KB the throughput is around the maximum. This looks ok, however your explanation doesn't point this out.
Expt3: Read. Ok
Expt4: Delay. Ok
Expt5: Drop. Ok
Expt6: Data rate. Ok and good explanation, but why did you limit to 10Mb/s?
Expt7: Multiple TCP. Good.
Expt8: UDP/TCP. Should have used larger UDP bandwidth.
Report: would be nice if the report had a title page, or at least a list of authors. Good description of tests and default parameters. Some plots have inconsistent scales.
Expt1: Receive buffer. The results look strange. The throughput is no larger than 14Mb/s (yet later experiments give throughput up to 95Mb/s?). Either the BDP is very high (e.g. a delay of 100ms across the link - but very unlikely) or the experiment was set up incorrectly. Your explanation does not identify this problem.
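A back-of-the-envelope check (the 64KB window is an assumed typical default, not something from the report): if the connection is window-limited, throughput is roughly window/RTT, so a 14Mb/s ceiling implies an RTT far too large for a directly connected LAN:

```python
# If throughput is window-limited (throughput ~ window / RTT), the observed
# 14 Mb/s ceiling implies a very large RTT.  The 64 KB window is an assumption.
WINDOW_BYTES = 64 * 1024
OBSERVED_BPS = 14e6

implied_rtt_ms = WINDOW_BYTES * 8 / OBSERVED_BPS * 1000
print(f"implied RTT ~ {implied_rtt_ms:.0f} ms")   # ~37 ms: far too high for a LAN
```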
Expt2: Read length. ok.
Expt3: Link data rate. This is wrong. If the link data rate is 100kb/s then how can the throughput be 95Mb/s? The link data rate is the capacity of your link, the absolute maximum. The throughput is the real transfer rate (considering protocol overheads) - it will always be less than the link data rate.
Expt4: Delay. Ok, but no explanation.
Expt5: Drop. Ok.
Expt6: Multiple TCP. Ok.
Expt7: UDP/TCP. Should have used larger UDP bandwidth. No separate analysis of BDP.
Report: ok, although the scale of some plots is inconsistent.
Expt1: Receive buffer. I don't know what Figure 1 shows. The results in Figure 2 and the explanation are ok.
Expt2: BDP. It is not clear what the plots are showing. I'd expect to see Throughput Vs ... . The axes are not labelled.
Expt3: Window size vs delay. Again, need to present results of throughput, not window size.
Expt4: Read length. Ok.
Expt5: Link data rate. Ok, but why not higher data rates (>10Mb/s)?
Expt6: Drop packets. Ok.
Expt7: Link delay. Ok
Expt8: Multiple TCP sessions. Ok.
Expt9: UDP. Should have used larger bandwidth.
Report: more details of the network and experiment setup are needed so that someone could repeat your experiments. Many of your plots do not have labelled axes.
Expt1: Receive buffer. Results are ok, but the explanation does not say why the cutoff for the window size is around 5000B (i.e. the BDP).
Expt2: Length. Ok
Expt3: Data rate. Ok
Expt4: Delay. Ok
Expt5: Drop. Ok, but should have had more values less than 10%.
Expt6: BDP. I think the calculation of the BDP is wrong (or maybe the estimate of the RTT is incorrect). A data rate of 100Mb/s and an RTT of 80ms gives a BDP of 1MB. You have only used much smaller window values. If you had a window of 1MB you should see the throughput approach 100Mb/s.
Expt7: TCP. Good, many different cases.
Expt8: TCP/UDP. Should have set UDP bandwidth to larger values.
Report: good presentation. Some of your plots are not to scale.
Expt1: Receive buffer. Tested both client and server window sizes. Made a good observation about the relationship between the client and server window sizes (with a small client buffer the transmission rate is limited). Results are as expected: growth in throughput for small windows, fixed throughput for large windows. The cutoff is about 10KB to 15KB.
Expt2: Length of data. Ok.
Expt3: Link data rate. Results as expected. Good explanation of why protocol overheads mean the throughput cannot reach the full 100%.
Expt4: Link Delay. Results as expected. An increase in delay does not necessarily lead to packet drops due to timeouts; TCP in fact dynamically adjusts its timeout interval based on the measured delay.
Expt5: packet drops. Results as expected. Good explanation.
Expt6: BDP. Strange setup of experiment. Why is the data rate set to 200Mb/s when your link only supports 100Mb/s? The 100Mb/s is a physical limit - you cannot go above it. With a delay of 80ms and a link rate of 100Mb/s, the BDP is 1MB. Hence you should vary the window to be both less than and greater than the BDP. You only used windows (much) less than the BDP, hence the throughput is very low.
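For illustration, the throughput you would expect as the window crosses the 1MB BDP (a sketch using the same window/RTT bound as above, assuming a 100Mb/s link and the 80ms delay):

```python
# Expected throughput ~ min(link rate, window / RTT) around the 1 MB BDP.
RATE_BPS = 100e6
RTT_S = 0.08                         # 80 ms delay

for window_kb in (125, 250, 500, 1000, 2000):
    tput = min(RATE_BPS, window_kb * 1000 * 8 / RTT_S)
    print(f"window {window_kb:5d} KB -> ~{tput / 1e6:5.1f} Mb/s")
```

Below the BDP the throughput scales with the window; at and above the BDP it saturates at the link rate.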
Expt7: Multiple TCP. Ok.
Expt8: UDP/TCP. As with the others, the UDP bandwidth should have been increased.
Report: good presentation. However the scales on the plots are not consistent. For example, in Task 1 the scale is neither linear nor logarithmic. In Task 2 it is logarithmic, but maybe a linear scale would be better.