
Transport Control Protocol (TCP) over Optical Burst Switched Networks

Transmission Control Protocol (TCP) is the dominant transport protocol in modern communication networks, in which reliability, flow control, and congestion control must be handled efficiently. This thesis studies the impact of next-generation bufferless optical burst-switched (OBS) networks on the performance of TCP congestion-control implementations (i.e., dropping-based, explicit-notification-based, and delay-based).
The burst contention phenomenon caused by the bufferless nature of OBS occurs randomly and has a negative impact on dropping-based TCP, since it gives a false indication of network congestion and leads to an improper reaction to a burst-drop event. In this thesis we study the impact of these random burst losses on dropping-based TCP throughput. We introduce a novel congestion-control scheme for TCP over OBS networks, called Statistical Additive Increase Multiplicative Decrease (SAIMD). SAIMD maintains and analyzes a number of previous round-trip times (RTTs) at the TCP sender in order to identify the confidence with which a packet-loss event is due to network congestion. The confidence is derived by positioning the short-term RTT within the spectrum of long-term historical RTTs. The confidence associated with the packet loss is then taken into account by the policy developed for TCP congestion-window adjustment.
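The following Python sketch illustrates the SAIMD idea described above. It is not the thesis' implementation: the confidence here is taken as a simple percentile of the short-term RTT within the long-term history, and the history size, short-term window, and back-off scaling are illustrative assumptions.

```python
from collections import deque
import statistics

class SAIMDSender:
    """Sketch of an SAIMD-style sender-side decision.

    The exact statistics and window-adjustment policy are defined in the
    thesis; this example uses a percentile-based confidence as a stand-in.
    """

    def __init__(self, history_size=1000, short_term=10):
        self.rtts = deque(maxlen=history_size)   # long-term RTT history
        self.short_term = short_term             # samples forming the short-term RTT
        self.cwnd = 10.0                          # congestion window (segments)

    def on_ack(self, rtt):
        self.rtts.append(rtt)
        self.cwnd += 1.0 / self.cwnd              # additive increase per ACK

    def congestion_confidence(self):
        """Position the short-term RTT within the long-term history.

        Returns a value in [0, 1]: ~1 means recent RTTs sit at the top of the
        historical distribution (likely congestion), ~0 means they look
        ordinary (the loss was probably random burst contention).
        """
        if len(self.rtts) < self.short_term:
            return 1.0                             # too little history: react conservatively
        recent = statistics.mean(list(self.rtts)[-self.short_term:])
        below = sum(1 for r in self.rtts if r <= recent)
        return below / len(self.rtts)

    def on_loss(self):
        """Multiplicative decrease scaled by the derived confidence."""
        conf = self.congestion_confidence()
        # Full back-off (halving) only when congestion confidence is high;
        # a random contention loss triggers a much smaller reduction.
        self.cwnd = max(1.0, self.cwnd * (1.0 - 0.5 * conf))
```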
For explicit-notification TCP, we propose a new TCP implementation over OBS networks, called TCP with Explicit Burst Loss Contention Notification (TCP-BCL). We examine the throughput performance of a number of representative TCP implementations over OBS networks, and analyze the TCP performance degradation due to the misinterpretation of timeout and packet-loss events. We also demonstrate that the proposed TCP-BCL scheme can counter the negative effect of OBS burst losses and is superior to conventional TCP architectures in OBS networks.
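A minimal sketch of how a TCP-BCL-style sender might react to a loss indication, assuming a boolean `contention_notified` flag stands in for the explicit burst-contention-loss notification from the OBS layer; the actual signalling mechanism and sender policy are specified in the thesis.

```python
def react_to_loss(cwnd, ssthresh, contention_notified):
    """Sketch of a TCP-BCL-style reaction to a loss indication.

    `contention_notified` is a hypothetical flag representing an explicit
    burst-contention-loss signal from the OBS layer.
    Returns the updated (cwnd, ssthresh) pair.
    """
    if contention_notified:
        # Loss was random burst contention, not congestion:
        # retransmit the lost segments but keep the sending rate.
        return cwnd, ssthresh
    # No notification: assume genuine congestion and apply the
    # conventional multiplicative decrease.
    ssthresh = max(cwnd / 2.0, 2.0)
    return ssthresh, ssthresh
```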
For delay-based TCP, we observe that this type of TCP implementation cannot detect network congestion when deployed over typical OBS networks, since the RTT fluctuations are minor. Delay-based TCP can also falsely detect network congestion when the underlying OBS network provides burst retransmission and/or deflection. Because burst retransmission and deflection schemes introduce additional delays for the affected bursts, TCP cannot determine whether a sudden delay increase is due to network congestion or simply to burst recovery at the OBS layer. In this thesis we study the behaviour of delay-based TCP Vegas over OBS networks, and propose a threshold-based TCP Vegas that is suited to the characteristics of OBS networks. Threshold-based TCP Vegas is able to distinguish increases in packet delay caused by network congestion from those caused by burst contention at low traffic loads.
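A small sketch of a threshold-based Vegas window update along these lines. The `recovery_delay` threshold, which approximates the extra delay added by a single burst retransmission or deflection, and the alpha/beta values are illustrative assumptions rather than the thesis' actual parameters.

```python
def vegas_threshold_update(cwnd, base_rtt, rtt, recovery_delay,
                           alpha=2.0, beta=4.0):
    """Sketch of a threshold-based TCP Vegas window update over OBS.

    `recovery_delay` is an assumed estimate of the extra delay that one
    burst retransmission or deflection adds at the OBS layer.
    Returns the updated congestion window (segments).
    """
    # Classic Vegas congestion estimate: backlog (in segments) kept in the network.
    diff = (cwnd / base_rtt - cwnd / rtt) * base_rtt

    # If the RTT increase is no larger than a single burst-recovery delay,
    # treat it as contention at the OBS layer rather than congestion and keep probing.
    if rtt - base_rtt <= recovery_delay:
        return cwnd + 1 if diff < alpha else cwnd

    # Otherwise fall back on Vegas' usual alpha/beta rules.
    if diff < alpha:
        return cwnd + 1
    if diff > beta:
        return cwnd - 1
    return cwnd
```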
The evolution of OBS technology is highly coupled with its ability to support upper-layer applications. Without fully understanding the burst transmission behaviour and the associated impact on the TCP congestion-control mechanism, it will be difficult to exploit the advantages of OBS networks fully.

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OWTU.10012/3129
Date: 10 July 2007
Creators: Shihada, Basem
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: 1067092 bytes, application/pdf
