71 |
Multipath TCP and Measuring End-to-End TCP Throughput: Measuring TCP Metrics and Ways to Improve TCP Throughput Performance. Sana, Vineesha, January 2018 (has links)
Internet applications make use of the services provided by a transport protocol, such as TCP (a reliable, in-order stream protocol). We use the term Transport Service to mean the end-to-end service provided to an application by the transport layer. That service can only be provided correctly if information about the intended usage is supplied by the application. The application may determine this information at design time, compile time, or run time, and it may include guidance on whether a feature is required, a preference by the application, or something in between. Multipath TCP (MPTCP) adds the capability of using multiple paths to a regular TCP session. Even though it is designed to be fully backward compatible for applications, its data transport differs from regular TCP, and there are several additional degrees of freedom that a particular application may want to exploit. Multipath TCP is particularly useful in the context of wireless networks; using both Wi-Fi and a mobile network simultaneously is a typical use case. In addition to the throughput gains from inverse multiplexing, links may be added or dropped as the user moves in or out of coverage without disrupting the end-to-end TCP connection. The problem of link handover is thus solved by abstraction in the transport layer, without any special mechanisms at the network or link level. In accordance with the Internet's end-to-end principle, handover functionality can then be implemented at the endpoints without requiring special functionality in the sub-networks. Multipath TCP can balance a single TCP connection across multiple interfaces and reach very high throughput.
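As a concrete illustration (not part of the thesis), Linux exposes MPTCP to applications through the ordinary socket API; the sketch below assumes a Linux kernel with MPTCP support (5.6+) and falls back to plain TCP where the protocol is unavailable, mirroring MPTCP's backward-compatibility goal:

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; socket.IPPROTO_MPTCP may not be defined
# in older Python versions, so fall back to the raw protocol number.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Create an MPTCP socket, falling back to plain TCP if unsupported."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: regular TCP keeps the application
        # working, which is exactly the backward compatibility MPTCP promises.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
s.close()
```

Because the fallback path returns a regular TCP socket, an application using this helper behaves identically on kernels with and without MPTCP, only gaining multipath capability where the kernel provides it.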
|
72 |
Fast retransmit inhibitions for TCP. Hurtig, Per, January 2006 (has links)
The Transmission Control Protocol (TCP) has been the dominant transport protocol in the Internet for many years. One of the reasons for this is that TCP employs congestion control mechanisms which prevent the Internet from being overloaded. Although TCP's congestion control has evolved over almost twenty years, it remains an active research area, since the environments in which TCP is employed keep changing. One of the congestion control mechanisms that TCP uses is fast retransmit, which allows data that has been lost in the network to be retransmitted quickly. Although this mechanism provides the most effective way of retransmitting lost data, it cannot always be employed by TCP due to restrictions in the TCP specification. The primary goal of this work was to investigate when fast retransmit inhibitions occur and how much they affect the performance of a TCP flow. To achieve this goal, a large series of practical experiments was conducted on a real TCP implementation. The results showed that fast retransmit inhibitions exist at the end of TCP flows, and that the increase in total transmission time could be as much as 301% when a loss was introduced at a fast-retransmit-inhibited position in the flow. Even though this increase was large in all of the experiments, ranging from 16-301%, the average performance loss due to an arbitrarily placed loss was not as severe. Because fast retransmit was inhibited in fewer positions of a TCP flow than it was employed, the average increase in transmission time due to these inhibitions was relatively small, ranging from 0.3-20.4%.
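The fast retransmit rule under study fits in a few lines; the sketch below is a simplified sender-side model (not the thesis's instrumented TCP stack) that also shows why a loss near the end of a flow inhibits fast retransmit: when fewer than three segments follow the lost one, fewer than three duplicate ACKs can ever arrive, and the sender must fall back to a retransmission timeout.

```python
DUPACK_THRESHOLD = 3  # RFC 5681: three duplicate ACKs trigger fast retransmit

def fast_retransmits(acks):
    """Return the ACK numbers that trigger a fast retransmit.

    Simplified sender-side model: an ACK carrying the same number as the
    previous one is a duplicate, and the third duplicate triggers an
    immediate retransmission instead of waiting for the RTO timer.
    """
    last_ack = None
    dup_count = 0
    retransmitted = []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUPACK_THRESHOLD:
                retransmitted.append(ack)  # resend the segment starting here
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# Four identical ACKs = three duplicates: fast retransmit fires.
print(fast_retransmits([1000, 1000, 1000, 1000]))
# Only two duplicates (loss near the end of the flow): inhibited, RTO needed.
print(fast_retransmits([1000, 1000, 1000]))
```

In the second case the model produces no fast retransmit at all, which corresponds to the inhibited positions whose cost the thesis measures.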
|
73 |
An End-to-End Solution for High Definition Video Conferencing over Best-Effort Networks. Javadtalab, Abbas, January 2015 (has links)
Video streaming applications over best-effort networks, such as the Internet, have become very popular among Internet users. Watching live sports and news, renting movies, watching clips online, making video calls, and participating in videoconferences are typical video applications that millions of people use daily. One of the most challenging aspects of video communication is the proper transmission of video in various network bandwidth conditions. Currently, various devices with different processing powers and various connection speeds (2G, 3G, Wi-Fi, and LTE) are used to access video over the Internet, which offers best-effort services only. Skype, ooVoo, Yahoo Messenger, and Zoom are some well-known applications employed on a daily basis by people throughout the world; however, best-effort networks are characterized by dynamic and unpredictable changes in the available bandwidth, which adversely affect the quality of the video. For the average consumer, there is no guarantee of receiving an exact amount of bandwidth for sending or receiving video data. Therefore, the video delivery system must use a bandwidth adaptation mechanism to deliver video content properly. Otherwise, bandwidth variations will lead to degradation in video quality or, in the worst case, disrupt the entire service. This is especially problematic for videoconferencing (VC) because of the bulkiness of the video, the stringent bandwidth demands, and the delay constraints. Furthermore, for business grade VC, which uses high definition videoconferencing (HDVC), user expectations regarding video quality are much higher than they are for ordinary VC. To manage network fluctuations and handle the video traffic, two major components in the system should be improved: the video encoder and the congestion control.
The video encoder is responsible for compressing raw video captured by a camera and generating a bitstream. In addition to the efficiency of the encoder and compression speed, its output flow is also important. Though the nature of video content may make it impossible to generate a constant bitstream for a long period of time, the encoder must generate a flow around the given bitrate.
While the encoder generates the video traffic around the given bitrate, congestion management plays a key role in determining the current available bandwidth. This can be done by analyzing the statistics of the sent/received packets, applying mathematical models, updating parameters, and informing the encoder. The performance of the whole system is related to the in-line collaboration of the encoder and the congestion management, in which the congestion control system detects and calculates the available bandwidth for a specific period of time, preferably per incoming packet, and informs rate control (RC) to adapt its bitrate in a reasonable time frame, so that the network oscillations do not affect the perceived quality on the decoder side and do not impose adverse effects on the video session.
To address these problems, this thesis proposes a collaborative management architecture that monitors the network situation and manages the encoded video rate. The goal of this architecture is twofold: First, it aims to monitor the available network bandwidth, to predict network behavior, and to pass that information to the encoder, so that the encoder can encode at a suitable video bitrate. Second, by using a smart rate controller, it aims for an optimal adaptation of the encoder output bitrate to the bitrate determined by congestion control.
Merging RC operations and network congestion management to provide a reliable infrastructure for HDVC over the Internet represents a unique approach. The primary motivation behind this project is that, by applying the videoconference features explained in the rate controller and congestion management chapter, the HDVC application becomes feasible and reliable for business-grade use even on best-effort networks such as the Internet.
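The encoder/congestion-control collaboration described above can be sketched as follows. This is a hypothetical rate controller (the class name, parameters, and smoothing rule are illustrative assumptions, not the thesis's algorithm): congestion control reports a bandwidth estimate, and the controller smooths it into an encoder target bitrate so that short network oscillations do not become visible quality swings.

```python
class RateController:
    """Hypothetical rate controller: smooths bandwidth estimates reported
    by congestion control into an encoder target bitrate, clamped to the
    encoder's usable range."""

    def __init__(self, initial_kbps, safety=0.9, smoothing=0.5,
                 min_kbps=300, max_kbps=8000):
        self.target = initial_kbps
        self.safety = safety        # stay below the raw estimate
        self.smoothing = smoothing  # EWMA weight given to new estimates
        self.min_kbps, self.max_kbps = min_kbps, max_kbps

    def on_bandwidth_estimate(self, estimate_kbps):
        """Called by congestion control, ideally per batch of received packets."""
        desired = self.safety * estimate_kbps
        self.target = (1 - self.smoothing) * self.target + self.smoothing * desired
        self.target = max(self.min_kbps, min(self.max_kbps, self.target))
        return self.target  # new target bitrate handed to the encoder

rc = RateController(4000)
print(rc.on_bandwidth_estimate(2000))  # moves partway toward 0.9 * 2000
```

The smoothing weight trades responsiveness against stability: a larger weight tracks bandwidth drops faster but lets short dips reach the encoder, which is exactly the tension between congestion control and perceived video quality discussed above.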
|
74 |
Transport-Layer Performance for Applications and Technologies of the Future Internet. Hurtig, Per, January 2012 (has links)
To provide Internet applications with good performance, the transport protocol TCP is designed to optimize the throughput of data transfers. Today, however, more and more applications rely on low latency rather than throughput. Such applications can be referred to as data-limited and are not appropriately supported by TCP. Another emerging problem is associated with the use of novel networking techniques that provide infrastructure-less networking. To improve connectivity and performance in such environments, multi-path routing is often used. This form of routing can cause packets to be reordered, which in turn hurts TCP performance. To address timeliness issues for data-limited traffic, we propose and experimentally evaluate several transport protocol adaptations. For instance, we adapt the loss recovery mechanisms of both TCP and SCTP to perform faster loss detection for data-limited traffic, while preserving the standard behavior for regular traffic. Evaluations show that the proposed mechanisms are able to reduce loss recovery latency by 30-50%. We also suggest modifications to the TCP state caching mechanisms. The caching mechanisms are used to optimize new TCP connections based on the state of old ones, but do not work properly for data-limited flows. Additionally, we design an SCTP mechanism that reduces overhead by bundling several packets into one packet in a more timely fashion than the bundling normally used in SCTP. To address the problem of packet reordering, we perform several experimental evaluations using TCP and state-of-the-art reordering mitigation techniques. Although the studied mitigation techniques are quite good at helping TCP sustain its performance during pure packet reordering events, they do not help when other impairments such as packet loss are present. Paper V was in manuscript form at the time of the defense.
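One way such faster loss detection for data-limited traffic can work, in the spirit of early retransmit (RFC 5827) rather than as this thesis's exact mechanism, is to scale the duplicate-ACK threshold down when few segments are outstanding, since a data-limited flow may never generate three duplicate ACKs:

```python
def dupack_threshold(outstanding_segments, default=3):
    """Sketch of an early-retransmit-style adaptation (illustrative, not the
    thesis's mechanism): with N segments in flight, at most N-1 duplicate
    ACKs can arrive after a loss, so lower the fast-retransmit threshold
    for small flights instead of waiting for a retransmission timeout."""
    return min(default, max(1, outstanding_segments - 1))

# A bulk flow keeps the standard threshold; a data-limited flow with only
# two segments in flight can retransmit after a single duplicate ACK.
print(dupack_threshold(10), dupack_threshold(2))
```

The `max(1, ...)` floor preserves the standard behavior for regular traffic: only flights too small to ever produce three duplicates get a reduced threshold, which matches the design goal stated in the abstract.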
|
75 |
Zefektivnění alokace toků v RINA / Towards More Effective Flow Allocation in RINA. Koutenský, Michal, January 2019 (has links)
This master's thesis focuses on the design and implementation of a flow allocator policy that supports bandwidth reservation for the Recursive InterNetwork Architecture (RINA). Each flow has some dedicated bandwidth, which is guaranteed to be available during the whole lifetime of the flow. The allocator, which operates as a distributed system, attempts to find a suitable path in the network graph. To achieve this goal, it must keep the information about link utilization up to date. The proposed allocator has been implemented in the open-source project rlite. The first half of the thesis is concerned with congestion control theory and studies a number of algorithms used in TCP. Additionally, it contains an overview of the structure of RINA and the Raft consensus algorithm.
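The core allocation step can be sketched as a bandwidth-constrained path search over the link-utilization state. The function below is an illustrative assumption about how such an allocator might work (names and data layout are hypothetical, not rlite's API): it only admits a flow if every hop still has enough free capacity, then reserves that capacity.

```python
from collections import deque

def allocate_flow(links, src, dst, demand):
    """Hypothetical sketch of bandwidth-reserving flow allocation: find a
    path whose every link has at least `demand` spare capacity, then
    reserve that capacity hop by hop.  `links` maps (a, b) -> free
    bandwidth and contains both directions of each edge."""
    # Breadth-first search restricted to links with enough residual capacity.
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for (a, b), free in links.items():
            if a == node and b not in parent and free >= demand:
                parent[b] = a
                queue.append(b)
    if dst not in parent:
        return None  # no feasible path: the allocation is refused
    # Recover the path, then subtract the reservation on each hop so the
    # link-utilization information stays up to date for later requests.
    path = [dst]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    path.reverse()
    for a, b in zip(path, path[1:]):
        links[(a, b)] -= demand
    return path
```

Refusing infeasible requests up front is what makes the guarantee possible: once admitted, a flow's bandwidth is already subtracted from every link it crosses, so later allocations cannot oversubscribe it.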
|
76 |
Vývojové trendy protokolu TCP pro vysokorychlostní sítě / Development Trends of TCP for High-Speed Networks. Modlitba, Jan, January 2008 (has links)
This master's thesis addresses the problem of deploying new TCP variants in high-speed IP networks. The first goal was to describe the behaviour of TCP in detail and then to analyse the problem of utilizing the available bandwidth with standard TCP in a high-speed network. The work then deals with the selection and description of the most promising new variants. The reader is also introduced to suitable simulation tools for the problems at hand, with a brief description of each. The main part of this thesis presents an examination of the performance of new TCP variants for high-speed networks, considering both the efficiency and the fairness of competing flows on a shared bottleneck. The results are presented in tables and compared with each other.
|
77 |
Gateway Adaptive Pacing for TCP across Multihop Wireless Networks and the Internet. ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph, 17 December 2018 (has links)
In this paper, we introduce an effective congestion control scheme for TCP over hybrid wireless/wired networks comprising a multihop wireless IEEE 802.11 network and the wired Internet. We propose an adaptive pacing scheme at the Internet gateway for wired-to-wireless TCP flows. Furthermore, we analyze the causes of the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. Thus, we denote the introduced congestion control scheme TCP with Gateway Adaptive Pacing (TCP-GAP). For wireless-to-wired flows, we propose an adaptive pacing scheme at the TCP sender. In contrast to previous work, TCP-GAP does not impose any control traffic overhead for achieving fairness among active TCP flows. Moreover, TCP-GAP can be incrementally deployed because it does not require any modifications of TCP in the wired part of the network and is fully TCP-compatible. Extensive simulations using ns-2 show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios, and achieves up to 42% more goodput than TCP NewReno.
|
78 |
TCP with Adaptive Pacing for Multihop Wireless Networks. ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph, 17 December 2018 (has links)
In this paper, we introduce a novel congestion control algorithm for TCP over multihop IEEE 802.11 wireless networks, implementing rate-based scheduling of transmissions within the TCP congestion window. We show how a TCP sender can adapt its transmission rate close to the optimum using an estimate of the current 4-hop propagation delay and the coefficient of variation of recently measured round-trip times. The novel TCP variant is denoted TCP with Adaptive Pacing (TCP-AP). In contrast to previous proposals for improving TCP over multihop IEEE 802.11 networks, TCP-AP retains the end-to-end semantics of TCP and neither relies on modifications to the routing or link layer nor requires cross-layer information from intermediate nodes along the path. A comprehensive simulation study using ns-2 shows that TCP-AP achieves up to 84% more goodput than TCP NewReno, provides excellent fairness in almost all scenarios, and is highly responsive to changing traffic conditions.
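The two quantities named in the abstract suggest a pacing computation along the following lines. This is a hedged sketch of one plausible reading (the exact formula is in the authors' paper, not reproduced here): packets are spaced out by roughly the 4-hop propagation delay, inflated when recent round-trip times vary a lot, since high RTT variation signals contention on the path.

```python
import statistics

def pacing_rate(hop_delay_s, rtt_samples, hops=4):
    """Illustrative TCP-AP-style pacing (not the authors' exact formula):
    space packets by `hops` hop propagation delays, scaled up by the
    coefficient of variation of recently measured RTTs."""
    mean_rtt = statistics.fmean(rtt_samples)
    cov = statistics.pstdev(rtt_samples) / mean_rtt  # coefficient of variation
    interval = hops * hop_delay_s * (1 + 2 * cov)    # seconds between packets
    return 1.0 / interval                            # packets per second

# Stable RTTs: pace at the spatial-reuse rate of one packet per 4 hop delays.
print(pacing_rate(0.005, [0.1, 0.1, 0.1]))
# Varying RTTs indicate contention, so the sender paces more conservatively.
print(pacing_rate(0.005, [0.08, 0.12]))
```

The design intuition this captures is spatial reuse: in a chain of 802.11 hops, a packet only leaves the sender's interference range after about four hops, so injecting faster than that mainly creates self-contention.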
|
79 |
TCP with Gateway Adaptive Pacing for Multihop Wireless Networks with Internet Connectivity. ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph, 17 December 2018 (has links)
This paper introduces an effective congestion control pacing scheme for TCP over multihop wireless networks with Internet connectivity. The pacing scheme is implemented at the wireless TCP sender as well as at the Internet gateway, and reacts according to the direction of TCP flows running across the wireless network and the Internet. Moreover, we analyze the causes for the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. The proposed scheme, which we denote as TCP with Gateway Adaptive Pacing (TCP-GAP), does not impose any control traffic overhead for achieving fairness among active TCP flows and can be incrementally deployed since it does not require any modifications of TCP in the wired part of the network. In an extensive set of experiments using ns-2 we show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios and achieves up to 42% more goodput for FTP-like traffic as well as up to 70% more goodput for HTTP-like traffic than TCP NewReno. We also investigate the sensitivity of the considered TCP variants to different bandwidths of the wired and wireless links with respect to both aggregate goodput and fairness.
|
80 |
A Hop-by-Hop Architecture for Multicast Transport in Ad Hoc Wireless Networks. Pandey, Manoj Kumar, 29 July 2009 (has links) (PDF)
Ad hoc wireless networks are increasingly being used to provide connectivity where a wired networking infrastructure is either unavailable or inaccessible. Many deployments utilize group communication, where several senders communicate with several receivers; multicasting has long been seen as an efficient way to provide this service. While there has been a great deal of research on multicast routing in ad hoc networks, relatively little attention has been paid to the design of multicast transport protocols, which provide reliability and congestion control. In this dissertation we design and implement a complete multicast transport architecture that includes both routing and transport protocols. Our multicast transport architecture has three modules: (a) a multicast routing and state setup protocol, (b) a mobility detection algorithm, and (c) a hop-by-hop transport protocol. The multicast routing and state setup protocol, called ASSM, is lightweight and receiver-oriented, making it both efficient and scalable. A key part of ASSM is its use of Source Specific Multicast semantics to avoid broadcasting when searching for sources. ASSM also uses routes provided by the unicast protocol to greatly reduce routing overhead. The second module, MDA, solves the problem of determining the cause of frame loss and reacting properly. Frame loss can occur due to contention, a collision, or mobility. Many routing protocols make the mistake of interpreting all loss as due to mobility, resulting in significant overhead when they initiate a repair that is not required. MDA enables routing protocols to react to frame loss only when necessary. The third module is a hop-by-hop multicast transport protocol, HCP. A hop-by-hop algorithm has a faster response time than that of an end-to-end algorithm, because it invokes congestion control at each hop instead of waiting for an end-to-end response. 
An important feature of HCP is that it can send data at different rates to receivers with different available bandwidth. We evaluate all three components of this architecture using simulations, demonstrating the improved performance, efficiency and scalability of our architecture as compared to other solutions.
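The per-receiver rate feature can be illustrated with a small model. The sketch below is an illustrative assumption about how hop-by-hop rates might propagate down a multicast tree (it is not HCP's algorithm): each hop forwards at the minimum of its parent's rate and its own inbound link bandwidth, so receivers behind slow links get a lower rate without throttling the rest of the tree.

```python
def per_receiver_rates(tree, bandwidth, root, source_rate):
    """Illustrative hop-by-hop rate model for a multicast tree: each node's
    rate is the minimum of its parent's rate and its inbound link bandwidth.
    `tree` maps node -> list of children; `bandwidth` maps node -> inbound
    link bandwidth (the root has no inbound link)."""
    rates = {root: source_rate}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            # The bottleneck so far caps the rate delivered to this subtree.
            rates[child] = min(rates[node], bandwidth[child])
            stack.append(child)
    return rates

tree = {"S": ["A", "B"], "A": ["R1"], "B": ["R2"]}
bandwidth = {"A": 10, "B": 2, "R1": 8, "R2": 5}
print(per_receiver_rates(tree, bandwidth, "S", 9))
```

In an end-to-end scheme the slowest receiver (here, the one behind the 2-unit link) would drag every receiver down to its rate; the hop-by-hop model confines that bottleneck to its own subtree, which is the advantage the abstract claims for HCP.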
|