11

Internet Congestion Control: Modeling and Stability Analysis

Wang, Lijun 08 August 2008 (has links)
The proliferation and universal adoption of the Internet has made it the key information transport platform of our time. Congestion occurs when resource demands exceed capacity, resulting in poor performance in the form of low network utilization and high packet loss rates. The goal of congestion control mechanisms is to use the network resources as efficiently as possible. The research work in this thesis is centered on finding ways to address these problems and to provide guidelines for predicting and controlling network performance, through the use of suitable mathematical tools and control analysis. The first congestion collapse in the Internet was observed in the 1980s. To solve the problem, Van Jacobson proposed the Transmission Control Protocol (TCP) congestion control algorithm based on the Additive Increase and Multiplicative Decrease (AIMD) mechanism in 1988. To be effective, a congestion control mechanism must be paired with a congestion detection scheme. To detect and distribute network congestion indicators fairly to all ongoing flows, Active Queue Management (AQM) schemes, e.g., the Random Early Detection (RED) queue management scheme, have been developed for deployment in the intermediate nodes. The currently dominant AIMD congestion control, coupled with RED queues in the core network, has been acknowledged as one of the key factors in the overwhelming success of the Internet. In this thesis, the AIMD/RED system, based on the fluid-flow model, is systematically studied. In particular, we concentrate on system modeling, stability analysis and bounds estimation. We first focus on the stability and fairness analysis of the AIMD/RED system with a single bottleneck. Then, we derive theoretical estimates for the upper and lower bounds of homogeneous and heterogeneous AIMD/RED systems with feedback delays and further discuss the system performance when it is not asymptotically stable. Last, we develop a general model for a class of multiple-bottleneck networks and discuss the stability properties of such a system. Theoretical and simulation results presented in this thesis provide insights for an in-depth understanding of the AIMD/RED system and help predict and control system performance for an Internet with higher-data-rate links multiplexed with heterogeneous flows.
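As a rough illustration of the two coupled mechanisms this abstract pairs, the sketch below combines a textbook AIMD window update with a RED-style drop probability; every parameter value (alpha, beta, the RED thresholds and queue weight) is an assumption for illustration, not a value from the thesis's fluid-flow model.

```python
# Sketch of an AIMD sender update and a RED-style drop probability.
# All parameter values are illustrative assumptions.

def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """Additive increase of alpha per RTT; multiplicative decrease on loss."""
    return cwnd * (1.0 - beta) if loss_detected else cwnd + alpha

class RedQueue:
    """RED: an EWMA of the queue length mapped to a linear drop probability."""
    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def drop_probability(self, queue_len):
        # Update the average, then map it onto [0, max_p] between thresholds.
        self.avg = (1.0 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return 0.0
        if self.avg >= self.max_th:
            return 1.0
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
```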
12

Congestion control algorithms of TCP in emerging networks

Bhandarkar, Sumitha 02 June 2009 (has links)
In this dissertation we examine some of the challenges faced by the congestion control algorithms of TCP in emerging networks. We focus on three main issues. First, we propose TCP with delayed congestion response (TCP-DCR), for improving performance in the presence of non-congestion events. TCP-DCR delays the congestion response for a short interval of time, allowing local recovery mechanisms to handle the event, if possible. If the event persists at the end of the delay, it is treated as congestion loss. We evaluate TCP-DCR through analysis and simulations. Results show significant performance improvements in the presence of non-congestion events with marginal impact in their absence. TCP-DCR maintains fairness with standard TCP variants that respond immediately. Second, we propose Layered TCP (LTCP), which modifies a TCP flow to behave as a collection of virtual flows (or layers), to improve efficiency in high-speed networks. The number of layers is determined by dynamic network conditions. Convergence properties and RTT-unfairness are maintained similar to those of TCP. We provide the intuition and the design for the LTCP protocol and evaluation results based on both simulations and a Linux implementation. Results show that LTCP is about an order of magnitude faster than TCP in utilizing high-bandwidth links while maintaining promising convergence properties. Third, we study the feasibility of employing congestion avoidance algorithms in TCP. We show that end-host based congestion prediction is more accurate than previously characterized. However, uncertainties in congestion prediction may be unavoidable. To address these uncertainties, we propose an end-host based mechanism called Probabilistic Early Response TCP (PERT). PERT emulates the probabilistic response function of the router-based scheme RED/ECN in the congestion response function of the end host. We show through extensive simulations that, similar to router-based RED/ECN, PERT provides fair bandwidth sharing with low queuing delays and negligible packet losses, without requiring router support. It exhibits better characteristics than TCP-Vegas, an illustrative end-host scheme. PERT can also be used for emulating other router schemes. We illustrate this through preliminary results for emulating the router-based mechanism REM/ECN. Finally, we show the interactions and benefits of combining the different proposed mechanisms.
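A minimal sketch of the delayed-response idea at the heart of TCP-DCR: a congestion indication arms a timer instead of triggering an immediate window cut. The deferral interval tau (one RTT here) and the class and field names are assumptions for illustration; the dissertation evaluates concrete settings.

```python
class DcrSender:
    """Sketch of TCP-DCR's delayed congestion response. A congestion
    indication arms a timer of length tau (assumed: one RTT) instead of
    cutting cwnd immediately; if local recovery has cleared the event by
    the time the timer fires, the window is left untouched."""

    def __init__(self, cwnd=10.0, tau=0.1):
        self.cwnd = cwnd
        self.tau = tau           # deferral interval (seconds), an assumption
        self.armed_at = None

    def on_congestion_indication(self, now):
        if self.armed_at is None:
            self.armed_at = now  # defer the response instead of reacting

    def on_timer(self, now, event_persists):
        if self.armed_at is not None and now - self.armed_at >= self.tau:
            if event_persists:   # survived tau: treat as congestion loss
                self.cwnd *= 0.5
            self.armed_at = None # else: local recovery handled the event
```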
13

A Modified SCTP with Load Balancing

Tseng, Cheng-Liang 26 August 2003 (has links)
To support the transmission of real-time multimedia streams, the Stream Control Transmission Protocol (SCTP), developed by the IETF, is considered more efficient because of its high degree of extensibility and compatibility; SCTP may well become the transmission protocol of the next-generation IP network in place of TCP and UDP. In this thesis, we propose a mechanism that exploits the multi-homing feature of SCTP to ensure that multiple paths can exist between two SCTP endpoints: not only does the primary path continue to function, but the secondary paths convey part of the data packets once network congestion occurs. Considering the dynamic nature of the Internet, the proposed mechanism can enhance the effectiveness of SCTP data transmission and increase overall network utilization. Because SCTP divides user data into chunks, we can analyze the transmission performance of each path by measuring the transmission delay from the sender to the receiving end. By modifying the NS-2 simulator, we set up different topologies in the experiments to analyze the performance of our mechanism. We compare the modified SCTP with the original SCTP, adjusting the background traffic on the paths, to show how our mechanism increases throughput and network utilization.
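A sketch of the delay-driven load-balancing idea: chunks are spread across paths in proportion to measured path quality. The inverse-delay weighting below is an illustrative choice, not the thesis's exact distribution rule.

```python
import random

def pick_path(paths):
    """Choose the path for the next data chunk, favoring paths with lower
    measured sender-to-receiver delay (inverse-delay weighting).
    Each entry is e.g. {"name": "primary", "delay": 0.02}, delay > 0."""
    weights = [1.0 / p["delay"] for p in paths]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for path, w in zip(paths, weights):
        acc += w
        if r <= acc:
            return path
    return paths[-1]

# Example: most chunks stay on the fast primary, some spill to the backup.
paths = [{"name": "primary", "delay": 0.02}, {"name": "secondary", "delay": 0.05}]
print(pick_path(paths)["name"])
```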
14

Buffer Management with Consideration of States in TCP Connections

Lin, Chiun-Chau 03 August 2001 (has links)
TCP is the most popular transport layer protocol. When there is congestion in the network, both the sender's TCP and the router's buffer management have their own ways to resist the penalties of congestion, but each achieves this goal independently. On the TCP side, Tahoe, Reno, New Reno, SACK, Vegas, FACK, and other modifications to TCP have been proposed to improve performance. Although they perform better than earlier TCP versions, different types of TCP do not cooperate well, and TCP-unfriendly connections are adverse to TCP connections. Buffer management can maintain fairness between different connections, but because it is TCP-unaware, some of its behaviors are adverse to TCP connections. In this paper, we show that a buffer management scheme may be unfriendly to a new connection that is joining an already congested network. This problem incurs (1) TCP-unfriendly behavior, (2) inefficient alleviation of congestion, and (3) unfairness between two connections. We propose a scheme to alleviate this problem, and the scheme is easy to implement alongside existing buffer management schemes.
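The abstract does not spell out the proposed scheme, so the sketch below only illustrates the general direction: a drop decision that consults the connection's TCP state so a new flow joining a congested network is not starved. The slow-start age test and the 0.8 occupancy level are assumptions, not the paper's rule.

```python
def accept_packet(queue_len, capacity, flow_age_rtts, protect_rtts=4):
    """TCP-aware drop decision (sketch): a flow younger than protect_rtts
    RTTs (i.e., presumably still in slow start) is spared while the queue
    is congested but not yet full, so a new connection joining a congested
    network is not starved. Thresholds are illustrative assumptions."""
    if queue_len >= capacity:
        return False                      # queue full: must drop
    congested = queue_len > 0.8 * capacity
    if congested and flow_age_rtts >= protect_rtts:
        return False                      # established flows absorb the drop
    return True
```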
15

Stable and scalable congestion control for high-speed heterogeneous networks

Zhang, Yueping 10 October 2008 (has links)
For any congestion control mechanism, the most fundamental design objectives are stability and scalability. However, achieving both properties is very challenging in an environment as heterogeneous as the Internet. From the end-users' perspective, heterogeneity is due to the fact that different flows have different routing paths and therefore different communication delays, which can significantly affect the stability of the entire system. In this work, we successfully address this problem by first proving a necessary and sufficient condition for a system to be stable under arbitrary delay. Utilizing this result, we design a series of practical congestion control protocols (MKC and JetMax) that achieve stability regardless of delay, as well as many additional appealing properties. From the routers' perspective, the system is heterogeneous because the incoming traffic is a mixture of short- and long-lived, TCP and non-TCP flows. This imposes a severe challenge on traditional buffer sizing mechanisms, which are derived using the simplistic model of a single or multiple synchronized long-lived TCP flows. To overcome this problem, we take a control-theoretic approach and design a new intelligent buffer sizing scheme called Adaptive Buffer Sizing (ABS), which, based on the current incoming traffic, dynamically sets the optimal buffer size under the target performance constraints. Our extensive simulation results demonstrate that ABS exhibits quick response to changes in traffic load, scalability to a large number of incoming flows, and robustness to generic Internet traffic.
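A sketch of one iteration of an adaptive buffer-sizing feedback loop in the spirit of ABS: grow the buffer while loss exceeds its target, shrink it while the link stays fully utilized. The gains, targets and bounds are illustrative assumptions, not the ABS controller's actual design.

```python
def abs_update(buf_size, loss_rate, utilization,
               target_loss=0.01, target_util=0.98,
               step=0.1, min_buf=64, max_buf=65536):
    """One step of a control loop that resizes a router buffer from
    measured traffic, within hard bounds. All constants are assumptions."""
    if loss_rate > target_loss:
        buf_size *= (1.0 + step)          # too many drops: add buffering
    elif utilization >= target_util:
        buf_size *= (1.0 - step)          # link still full: trim queuing delay
    return int(min(max_buf, max(min_buf, buf_size)))
```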
16

Adapting a delay-based protocol to heterogeneous environments

Kotla, Kiran 10 October 2008 (has links)
We investigate the issues in making a delay-based protocol adaptive to heterogeneous environments. We assess and address the problems a delay-based protocol faces when competing with a loss-based protocol such as TCP. We investigate whether noise and variability in delay measurements in environments such as cable and ADSL access networks significantly impact the delay-based protocol's behavior. We investigate these issues in the context of incremental deployment of a new delay-based protocol, PERT. We propose design modifications to PERT to compete with the TCP flavor SACK. We show through simulations and real network experiments that, with the proposed changes, PERT experiences lower drop rates than SACK and leads to lower overall drop rates with different mixes of PERT and SACK protocols. Delay-based protocols, being less aggressive, have problems in fully utilizing a high-speed link while operating alone. We show that a single PERT flow can fully utilize a high-speed, high-delay link. We performed several experiments with diverse parameters and simulated numerous scenarios using ns-2. The results from simulations indicate that PERT can adapt to heterogeneous networks and can operate well in an environment of heterogeneous protocols and in other scenarios such as wireless networks (in the presence of channel errors). We also show that the proposed changes retain the desirable properties of PERT, such as low loss rates and fairness, when operating alone. To see how the protocol performs with real-world traffic, it has also been implemented in the Linux kernel and tested through experiments on live networks, measuring the throughput and losses between nodes in our lab at TAMU and machines at diverse locations across the globe on PlanetLab. The results from simulations indicate that PERT can compete with TCP in diverse environments and provides benefits as it is incrementally deployed. Results from real-network experiments strengthen this claim, as PERT shows similar behavior with real-world traffic.
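For context, a sketch of the probabilistic early response that PERT transplants from RED/ECN to the end host, using the estimated queuing delay srtt - rtt_min as the congestion signal. The delay thresholds and maximum probability below are illustrative assumptions, not PERT's published constants.

```python
def pert_response_probability(srtt, rtt_min,
                              th_low=0.005, th_high=0.010, p_max=0.05):
    """Probability of an early (pre-loss) congestion response, driven by
    estimated queuing delay -- the end-host analogue of RED's average
    queue length. Thresholds and p_max are illustrative assumptions."""
    qdelay = max(0.0, srtt - rtt_min)
    if qdelay <= th_low:
        return 0.0
    if qdelay >= th_high:
        return p_max
    return p_max * (qdelay - th_low) / (th_high - th_low)

# E.g., 8 ms of smoothed RTT above a 50 ms floor yields a small but
# non-zero probability of backing off before any packet is lost.
print(pert_response_probability(srtt=0.058, rtt_min=0.050))
```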
17

User Datagram Protocol with Congestion Control

Cox, Spencer L. 22 March 2006 (has links) (PDF)
Communication through the Internet is one of the dominant methods of exchanging information. Whether at an individual or large corporate level, the Internet has become essential to gathering and disseminating information. TCP and UDP are the transport layer protocols responsible for the transit of nearly all Internet communications. Due to the growth of real-time audio and video applications, UDP is being used more frequently as a transport protocol. As UDP traffic increases, potential problems arise. Unlike TCP, UDP has no mechanism for congestion control, leading to wasted bandwidth and poor performance for other competing protocols. This thesis defines a congestion control protocol called UDPCC to replace UDP. Several other protocols or applications have been proposed to provide UDP-like transport with congestion control; DCCP and UDP-MM are two such schemes, examined here and used as comparators for UDPCC. This thesis will show that the proposed UDPCC can perform at the level of DCCP and UDP-MM or higher while maintaining a simple implementation.
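The abstract does not detail UDPCC's algorithm; as a stand-in, here is a generic AIMD-style rate controller of the kind any congestion-controlled UDP transport (UDPCC, DCCP, UDP-MM) must provide. It is explicitly not UDPCC itself, and all constants are assumptions.

```python
def update_send_rate(rate_pps, loss_reported,
                     incr=50, decr=0.5, floor=50, ceil=100000):
    """AIMD-style rate update for a UDP stream: receiver feedback that
    reports loss halves the rate, otherwise the rate grows additively,
    clamped to [floor, ceil] packets per second. All constants are
    illustrative assumptions; this is not UDPCC's actual algorithm."""
    rate = rate_pps * decr if loss_reported else rate_pps + incr
    return max(floor, min(ceil, rate))
```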
18

Improving TCP performance over heterogeneous networks : the investigation and design of End to End techniques for improving TCP performance for transmission errors over heterogeneous data networks

Alnuem, M. A. January 2009 (has links)
Transmission Control Protocol (TCP) is considered one of the most important protocols in the Internet. An important mechanism in TCP is the congestion control mechanism, which controls the TCP sending rate and makes TCP react to congestion signals. Nowadays in heterogeneous networks, TCP may work in networks where some links have a lossy nature (wireless networks, for example). TCP treats all packet losses as if they were due to congestion. Consequently, when used in networks that have lossy links, TCP reduces its sending rate aggressively when there are transmission (non-congestion) errors in an uncongested network. One solution to the problem is to discriminate between errors: deal with congestion errors by reducing the TCP sending rate, and use other actions for transmission errors. In this work we investigate the problem and propose a solution using an end-to-end error discriminator. The error discriminator improves the current congestion window mechanism in TCP and decides when, and by how much, to cut the congestion window. We have identified three areas where TCP interacts with drops: the congestion window update mechanism, the retransmission mechanism and the timeout mechanism. All of these are part of the TCP congestion control mechanism. We propose changes to each of them in order to allow TCP to cope with transmission errors. We propose a new TCP congestion window action (CWA) for transmission errors that delays the window cut decision until TCP receives all duplicate acknowledgments for a given window of data (packets in flight). This gives TCP a clear picture of the number of drops from that window. The congestion window size is then reduced only by the number of dropped packets. Also, we propose a safety mechanism to prevent this algorithm from causing congestion in the network, using an extra congestion window threshold (tthresh) to preserve the safe area where there are no drops of any kind. The second algorithm is a new retransmission action to deal with multiple drops from the same window. This multiple drops action (MDA) prevents TCP from falling into consecutive timeout events by resending all dropped packets from the same window. A third algorithm calculates a new back-off policy for the TCP retransmission timeout based on the network's available bandwidth. This new retransmission timeout action (RTA) helps relate the length of the timeout event to current network conditions, especially under heavy transmission error rates. The three algorithms have been combined and incorporated into a delay-based error discriminator. The improvement of the new algorithm is measured along with its impact on the network in terms of congestion drop rate, end-to-end delay, average queue size and fairness in sharing the bottleneck bandwidth. The results show that the proposed error discriminator, along with the new actions toward transmission errors, increases the performance of TCP while reducing the load on the network compared to existing error discriminators. The proposed error discriminator also delivers excellent fairness values for sharing the bottleneck bandwidth. Finally, improvements to the basic error discriminator are proposed by using the multiple drops action (MDA) for both transmission and congestion errors; the results show improvements in performance as well as decreases in congestion loss rates when compared to a similar error discriminator.
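A sketch of the congestion window action (CWA) as the abstract describes it: the cut is deferred until all duplicate ACKs for the outstanding window have arrived, so the drop count is known exactly, and the window shrinks by that count rather than halving. Treating tthresh as a floor on the cut is one plausible reading of the abstract's safety mechanism, not a confirmed detail.

```python
def cwa_window_cut(cwnd, dropped_in_window, tthresh):
    """CWA (sketch): invoked only after all dupACKs for the window are in,
    so dropped_in_window is exact; reduce the window by the drop count.
    tthresh, the largest window that saw no drops of any kind, acts as a
    safety floor here (an interpretation of the abstract, not confirmed)."""
    new_cwnd = cwnd - dropped_in_window
    return max(new_cwnd, min(tthresh, cwnd))
```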
19

Traffic Sensitive Active Queue Management for Improved Quality of Service

Phirke, Vishal Vasudeo 07 May 2002 (has links)
The Internet, which traditionally carried FTP, e-mail and Web traffic, increasingly supports emerging applications such as IP telephony, video conferencing and online games. These new genres of applications have different throughput and delay requirements than traditional applications. For example, interactive multimedia applications, unlike traditional applications, have more stringent delay constraints and less stringent loss constraints. Unfortunately, the current Internet offers a monolithic best-effort service to all applications without considering their specific requirements. Adaptive RED (ARED) is an Active Queue Management (AQM) technique that optimizes the router for throughput. Throughput optimization provides acceptable QoS for traditional throughput-sensitive applications, but is unfair to these new delay-sensitive applications. While previous work has used different classes of QoS at the router to accommodate applications with varying requirements, thus far all have provided just two or three classes of service for applications to choose from. We propose two AQM mechanisms to optimize the router for better overall QoS. Our first mechanism, RED-Worcester, is a simple extension to ARED that tunes ARED for better average QoS support. Our second mechanism, RED-Boston, further extends RED-Worcester to improve the QoS for all flows. Unlike earlier approaches, we do not predefine classes of service, but instead provide a continuum from which applications can choose. We evaluate our approach using NS-2 and present results showing the amount of improvement in QoS achieved by our mechanisms over ARED.
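A sketch of the "continuum instead of classes" idea: each flow supplies a delay-sensitivity value in [0, 1] rather than picking one of two or three classes, and the AQM biases its drop decision by it. The linear weighting is an illustrative assumption, not RED-Worcester's or RED-Boston's actual rule.

```python
def biased_drop_probability(base_p, delay_sensitivity):
    """Continuum-of-service sketch: delay-sensitive packets accept a
    higher drop chance in exchange for shorter queues. The linear
    weighting is an assumption for illustration only."""
    return min(1.0, base_p * (1.0 + delay_sensitivity))
```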
20

Adaptive Layered Multicast TCP-Friendly: experimental analysis and validation

Krob, Andrea Collin January 2009 (has links)
One of the obstacles to the widespread use of multicast in the global Internet is the development of adequate congestion control protocols. A factor that contributes to this problem is the heterogeneity of receivers' equipment, links and access conditions, which increases the complexity of implementing and validating these protocols. Because multicast can involve thousands of receivers simultaneously, the challenge for this type of protocol is even greater: besides the issues related to network congestion, it is necessary to consider factors such as synchronism, feedback control and traffic fairness, among others. For these reasons, multicast congestion control protocols have been a topic of intense research in recent years. One of the alternatives for multicast congestion control in the Internet is the ALMTF protocol (Adaptive Layered Multicast TCP-Friendly), which is part of the SAM (Sistema Adaptativo Multimídia) project. An advantage of this algorithm is that it infers the network congestion level and determines the most appropriate receiving rate for each receiver. In addition, it manages the received bandwidth, aiming at fairness and impartiality toward competing traffic. ALMTF was originally developed in a Ph.D. thesis and was validated in the NS-2 (Network Simulator) simulator. The goal of this work is to extend the protocol to a real network, implementing and validating its mechanisms and proposing new alternatives to adapt it to this environment, and to compare the real results with the simulation, identifying the differences and promoting experimental research in the area.
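A sketch of receiver-driven layer adaptation in the spirit of ALMTF: the receiver estimates a TCP-friendly rate and joins or leaves cumulative multicast layers so its subscription stays at or below that rate. The cumulative-rate encoding is an assumed representation, not ALMTF's actual data structure.

```python
def adjust_layers(current_layers, tcp_friendly_rate, cum_layer_rates):
    """Join or leave cumulative layers so the subscribed rate stays at or
    below the receiver's estimated TCP-friendly rate. cum_layer_rates[i]
    is the total rate when subscribed through layer i (assumed encoding)."""
    n = current_layers
    while n < len(cum_layer_rates) and cum_layer_rates[n] <= tcp_friendly_rate:
        n += 1                            # headroom: join the next layer
    while n > 1 and cum_layer_rates[n - 1] > tcp_friendly_rate:
        n -= 1                            # over fair share: leave a layer
    return n

# Example: layers of 100/250/500/1000 kbps cumulative, fair rate 400 kbps.
print(adjust_layers(1, 400, [100, 250, 500, 1000]))   # -> 2 layers
```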
