61 |
LTCP-RC: RTT compensation technique to scale high-speed protocol in high RTT links. Jain, Saurabh. 01 November 2005 (has links)
In this thesis, we propose a new protocol named Layered TCP with RTT Compensation (LTCP-RC for short). LTCP-RC is a simple modification to the congestion window response of the high-speed protocol Layered TCP (LTCP), and it makes LTCP more scalable in networks characterized by large link delays and high RTTs. Ack-clocked schemes such as TCP suffer from performance problems, including long convergence times and throughput degradation, as the RTT experienced by a flow increases; and when flows with different RTTs compete, the resulting unfairness is even worse for high-speed protocols. LTCP-RC uses an RTT compensation technique to address these problems. This thesis presents a general framework for choosing the RTT compensation factor, and two particular design choices are analyzed in detail. The first algorithm uses a fixed function based on the minimum RTT observed by the flow. The second algorithm uses an adaptive scheme that regulates itself according to dynamic network conditions. The performance of these schemes is evaluated using analysis and ns-2 simulations. LTCP-RC exhibits significant improvement in terms of reduced convergence time, low drop rates, increased utilization on links with channel errors, and good fairness properties between flows. The scheme is simple to understand, easy to implement in the TCP/IP stack, and does not require any additional support from the network. The parameters can be tuned to control the RTT unfairness of the scheme, which is not possible with TCP or other high-speed protocols. The flexible analysis framework lays the groundwork for the development of new schemes that improve the performance of window-based protocols in high-delay and heterogeneous networks.
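The abstract does not give LTCP-RC's exact compensation function, so the sketch below is only an illustration of the general idea: scale the protocol's normal per-RTT window increase by a factor derived from the minimum RTT observed by the flow. The class name, the reference RTT, and the cap on the factor are assumptions.

```python
# Illustrative sketch only: the compensation function below (min RTT relative
# to an assumed "base" RTT, capped at k_max) is not LTCP-RC's actual formula.

class RttCompensatedWindow:
    def __init__(self, base_rtt=0.01, k_max=16.0):
        self.cwnd = 1.0           # congestion window (packets)
        self.min_rtt = None       # minimum RTT observed by the flow (seconds)
        self.base_rtt = base_rtt  # reference RTT the scheme is tuned for
        self.k_max = k_max        # cap on the compensation factor

    def on_rtt_sample(self, rtt):
        # Track the minimum RTT seen so far.
        self.min_rtt = rtt if self.min_rtt is None else min(self.min_rtt, rtt)

    def compensation_factor(self):
        # Fixed-function variant: a larger factor for longer-RTT flows, so
        # per-RTT growth is no longer penalized by the slower ack-clock.
        if self.min_rtt is None:
            return 1.0
        return min(self.min_rtt / self.base_rtt, self.k_max)

    def on_ack(self, per_rtt_increase):
        # Scale the protocol's normal per-RTT increase by the factor.
        self.cwnd += self.compensation_factor() * per_rtt_increase / self.cwnd

    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2.0)   # multiplicative decrease

w = RttCompensatedWindow()
w.on_rtt_sample(0.2)              # a 200 ms path grows faster than a 10 ms one
w.on_ack(per_rtt_increase=1.0)
```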
|
62 |
Improving Fairness among TCP Flows by Cross-layer Stateless Approach. Tsai, Hsu-Sheng. 26 July 2008 (has links)
Transmission Control Protocol (TCP) has been recognized as the most important transport-layer protocol for the Internet. It is distinguished by its reliable transmission, flow control, and congestion control. However, the issue of fair bandwidth-sharing among competing flows was not properly addressed in TCP. As web-based applications and interactive applications grow more popular, the number of short-lived flows conveyed on the Internet continues to rise. With conventional TCP, short-lived flows will be unable to obtain a fair share of available bandwidth. As a result, short-lived flows will suffer from longer delays and a lower service rate. It is essential for the Internet to come up with an effective solution to this problem in order to accommodate the new traffic patterns.
With a more equitable sharing of bottleneck bandwidth as the goal, two cross-layer stateless queue management schemes, Drop Maximum (DM) and Early Drop Maximum (EDM), are developed and presented in this dissertation. The fundamental idea is to drop packets from flows that hold more than an equal share of the bandwidth while retaining a low level of queue occupancy. The congestion window size of a TCP sender is carried in the options field of each packet. The proposed schemes run on routers and make packet-dropping decisions according to these congestion windows. When the link is congested, the queued packet with the largest congestion window is dropped. This lowers the sending rate of its sender and releases part of the occupied bandwidth for other competing flows. In this way, the entire system approaches an equilibrium point with a rapid and fair distribution of bandwidth. As a stateless approach, the proposed schemes offer advantages in both implementation and scalability.
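A minimal sketch of the Drop Maximum idea just described is given below: when the queue overflows, the router drops the queued packet whose sender advertises the largest congestion window (read from a TCP option). The class and field names are assumptions for illustration.

```python
# Minimal sketch of the Drop Maximum mechanism; field and class names are
# illustrative. The sender's cwnd is assumed to be carried in a TCP option.

from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: int
    cwnd: int      # sender's congestion window, read from the options field

class DropMaxQueue:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.queue = []

    def enqueue(self, pkt: Packet):
        self.queue.append(pkt)
        if len(self.queue) > self.capacity:
            # Congested: drop the queued packet whose sender has the largest
            # congestion window, i.e. the flow exceeding its fair share,
            # instead of simply dropping the arriving (tail) packet.
            victim = max(self.queue, key=lambda p: p.cwnd)
            self.queue.remove(victim)
            return victim       # the dropped packet; its sender will slow down
        return None             # accepted without a drop

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```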
Extensive simulations were conducted to verify the feasibility and effectiveness of the proposed schemes. The simple packet-discard scheme, Drop Maximum, outperforms the other two stateless buffer management schemes, Drop Tail and Random Early Drop, in the scenario of homogeneous flows. With heterogeneous flows, however, Random Early Drop is superior to simple packet-discard schemes because of its additional buffer-occupancy control. Early Drop Maximum is therefore proposed to supply this missing buffer-occupancy control. As the simulation results show, the proposed scheme outperforms existing stateless techniques, including Drop Tail, Drop Maximum, and Random Early Drop, in many respects, such as a fair sharing of available bandwidth and a short response time for short-lived flows.
|
63 |
Evaluation of Available Bandwidth Estimation Tools (ABETs) and their application in improving TCP performance. Easwaran, Yegyalakshmi. 01 June 2005 (has links)
Available bandwidth is a time-dependent variable that defines the spare bandwidth on an end-to-end network path. There is currently significant focus in the research community on the design and development of Available Bandwidth Estimation Tools (ABETs), and a few tools have resulted from this research. However, there is no comprehensive evaluation of these tools, and this thesis attempts to fill that gap. A performance evaluation of important ABETs such as Pathload, IGI, and pathChirp is conducted in several scenarios in terms of their accuracy, convergence time, and intrusiveness. A 2^k factorial design is carried out to analyze the influence of probe packet size, number of probe packets per train, number of trains, and frequency of runs on these performance metrics. ABETs are important because of their potential for solving many network research problems.
For example, ABETs can be used for congestion control in transport-layer protocols, network management tools, route selection and configuration in overlay networks, SLA verification, topology building in peer-to-peer networks, call admission control, dynamic encoding-rate adaptation in streaming applications, traffic engineering, capacity planning, intelligent routing systems, and more. This thesis looks at applying ABETs to the congestion control of the Transmission Control Protocol (TCP). Current implementations of TCP in the Internet perform reasonably well at containing congestion, but their sending-rate adjustment algorithm is unaware of actual network conditions and available resources. TCP's Additive Increase Multiplicative Decrease (AIMD) congestion control algorithm cannot utilize the available bandwidth to its full potential, especially in high-bandwidth networks.
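The abstract does not specify how the ABET output is fed into TCP, so the following is only one plausible form of the idea: use the available-bandwidth estimate and the path RTT to seed the slow-start threshold near the path's spare bandwidth-delay product. Function names and constants are hypothetical.

```python
# Hypothetical sketch: seeding TCP's ssthresh from an available-bandwidth
# estimate (e.g. reported by a tool such as Pathload or pathChirp). This is
# an assumed integration, not necessarily the thesis's actual mechanism.

def ssthresh_from_abet(avail_bw_bps, rtt_s, mss_bytes=1460):
    """Convert an available-bandwidth estimate into a slow-start threshold
    expressed in segments (bandwidth-delay product divided by the MSS)."""
    bdp_bytes = (avail_bw_bps / 8.0) * rtt_s
    return max(2, int(bdp_bytes / mss_bytes))

# Example: 80 Mb/s of spare bandwidth on a 60 ms path.
ssthresh = ssthresh_from_abet(avail_bw_bps=80e6, rtt_s=0.060)
print(ssthresh)   # ~410 segments; slow start can exit near the spare capacity
```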
|
64 |
A VoIP system using satellite communications. Τσουκαλής, Αχιλλέας. 03 March 2008 (has links)
For areas with limited or no terrestrial telecommunication infrastructure, communication via satellite is an effective and reasonably priced solution. The DVB-RCS (Digital Video Broadcast - Return Channel System) standard supports the bidirectional transmission of IP traffic over a satellite channel, making VoIP communication via satellite possible. However, transmitting VoIP over a satellite link raises issues both for the quality of the VoIP conversation itself and for other services that may share the available bandwidth of the satellite channel with the VoIP flows. This thesis focuses on the study and development of end-to-end mechanisms that control the encoding and transmission rate at the VoIP terminal phones according to the level of congestion on the channel, so as to utilize the available bandwidth of the transmission channel (satellite or not) more effectively, counter potential congestion, improve conversational quality, increase the maximum number of simultaneous conversations for a given bandwidth, and ensure a fair sharing of the available bandwidth between TCP and VoIP flows. The factors that affect VoIP quality of service, ways in which this quality can be evaluated, and the use of the TFRC (TCP-Friendly Rate Control) mechanism in VoIP applications are first examined. A new rate control mechanism for VoIP flows, very simple to implement, is then proposed; in contrast to existing mechanisms, it targets both conversational quality improvement and friendliness toward TCP flows. Finally, the structure, operation, and some implementation issues of the VoIP terminal system developed (as software) as part of this thesis, which implements the proposed rate control mechanism, are discussed.
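For context on the TFRC mechanism examined above, the sketch below applies the standard TFRC throughput equation (RFC 3448/5348) to pick a codec bitrate no higher than a competing TCP flow would obtain on the same path. The codec table and example parameters are illustrative; the thesis proposes its own, different mechanism.

```python
# Hedged sketch: the TFRC throughput equation, used here to cap a VoIP codec
# rate at a TCP-friendly level. Codec rates and path parameters are assumed.

import math

def tfrc_rate_bps(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly rate in bits/s for packet size s bytes, round-trip time
    rtt in seconds, and loss event rate p."""
    if p <= 0:
        return float("inf")
    t_rto = t_rto if t_rto is not None else 4 * rtt
    denom = rtt * math.sqrt(2 * b * p / 3) + \
            t_rto * (3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return 8 * s / denom

def pick_codec(allowed_bps, codecs=(64000, 32000, 16000, 8000)):
    """Choose the highest codec bitrate not exceeding the TCP-friendly rate."""
    for rate in codecs:
        if rate <= allowed_bps:
            return rate
    return codecs[-1]

allowed = tfrc_rate_bps(s=200, rtt=0.6, p=0.01)   # satellite-like RTT
print(int(allowed), pick_codec(allowed))          # ~30 kb/s -> 16 kb/s codec
```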
|
65 |
Adaptive Forwarding in Named Data Networking. Yi, Cheng. January 2014 (has links)
Named Data Networking (NDN) is a recently proposed new Internet architecture. By naming data instead of locations, it changes the basic network service abstraction from "delivering packets to given destinations" to "retrieving data of given names." This fundamental change creates an abundance of new opportunities as well as many intellectual challenges in application development, network routing and forwarding, and communication security and privacy. The focus of this dissertation is a unique feature introduced by NDN: its adaptive forwarding plane. Communication in NDN is done by exchanges of Interest and Data packets. Consumers send Interest packets to request desired Data, routers forward them based on data names, and producers answer with Data packets, which follow the same path as the Interests but in the reverse direction. During this process, routers maintain state information about pending Interests. This state information, coupled with the symmetric exchange of Interest and Data, enables NDN routers to detect loops, observe data retrieval performance, and explore multiple forwarding paths, all at the forwarding plane. Since NDN is still in its early stage, however, none of these powerful features has been systematically designed, evaluated, or explored. In this dissertation, we present a concrete design of NDN's forwarding plane that makes the network resilient and efficient. First, we design the basic adaptation mechanism and evaluate its effectiveness in circumventing prefix hijack attacks. Second, we propose a novel NACK mechanism for fast failure detection and evaluate its benefits in handling network failures. We also show that a resilient forwarding plane makes routing more stable and more scalable. Third, we design a congestion control mechanism, Dynamic Interest Limiting, to adapt the traffic rate in a hop-by-hop and multipath fashion, which is effective even with a large number of flows in a large network topology.
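As a minimal sketch of the stateful forwarding plane described above, the code below keeps a Pending Interest Table (PIT) so that duplicate nonces can be flagged as loops and Data can retrace the recorded incoming faces. The class, field names, and next-hop callback are illustrative assumptions, not the dissertation's design.

```python
# Minimal sketch of NDN's stateful forwarding: a PIT records where Interests
# arrived from, detects duplicate nonces (loops), and lets Data retrace the
# path. Names are illustrative only.

class NdnForwarder:
    def __init__(self):
        self.pit = {}   # name -> {"nonces": set(), "in_faces": set()}

    def on_interest(self, name, nonce, in_face, choose_next_hop):
        entry = self.pit.setdefault(name, {"nonces": set(), "in_faces": set()})
        if nonce in entry["nonces"]:
            return ("drop", None)                 # same Interest seen again: loop
        entry["nonces"].add(nonce)
        entry["in_faces"].add(in_face)
        # Adaptive choice of outgoing face, e.g. ranked by measured performance.
        return ("forward", choose_next_hop(name))

    def on_data(self, name):
        entry = self.pit.pop(name, None)          # consume the pending state
        if entry is None:
            return ("drop_unsolicited", set())
        return ("return", entry["in_faces"])      # send Data back downstream

fw = NdnForwarder()
print(fw.on_interest("/video/seg1", nonce=42, in_face=1,
                     choose_next_hop=lambda n: 3))
print(fw.on_data("/video/seg1"))
```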
|
66 |
Cooperative End-to-end Congestion Control in Heterogeneous Wireless Networks. Mohammadizadeh, Neda. 20 August 2013 (has links)
Sharing the resources of multiple wireless networks with overlapping coverage areas has the potential to improve transmission throughput. In existing frameworks, however, this improvement cannot be achieved in congestion scenarios because the congestion control procedures of the end-to-end paths are independent. Although the variety of network characteristics makes congestion control complex, this variety can be useful for congestion avoidance if the networks cooperate with each other. When congestion occurs on an end-to-end path, congestion window adjustments inevitably drive the packet transmission rate below the minimum requested rate.
Cooperation among networks can help avoid this problem and improve service quality. When congestion is predicted on one path, some of the ongoing packets can be sent over other paths instead of the congested one. In this way, traffic can be shifted from a congested network to others, and the overall transmission throughput does not degrade in a congestion scenario. However, cooperation is not always advantageous, since the throughput of cooperative transmission in an uncongested scenario can be less than that of non-cooperative transmission because of cooperation costs such as the cooperation setup time, additional signalling for cooperation, and out-of-order packet reception. In other words, there is a trade-off between congestion avoidance and cooperation cost. Thus, cooperation should be triggered only when it is beneficial according to congestion-level measurements.
In this research, our aim is to develop an efficient cooperative congestion control scheme for a heterogeneous wireless environment. To this end, a cooperative congestion control algorithm is proposed, in which the destination terminal assesses the state of an end-to-end path by measuring the queuing delay and estimating the congestion level. The decision on when to start or stop cooperation is made based on the network characteristics, instantaneous traffic conditions, and the requested quality of service (QoS). Simulation results demonstrate the throughput improvement of the proposed scheme over non-cooperative congestion control.
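A hedged sketch of the start/stop decision described above is given below. The congestion estimator (queuing delay normalized to its observed range) and the hysteresis thresholds are assumptions, not the thesis's exact algorithm; they illustrate how cooperation can be triggered only when the measured congestion level makes it worthwhile.

```python
# Illustrative sketch of a cooperation trigger driven by a queuing-delay-based
# congestion level, with hysteresis so the cooperation setup cost is not paid
# repeatedly. Thresholds and the estimator are assumed values.

class CooperationController:
    def __init__(self, start_level=0.7, stop_level=0.4):
        self.min_delay = None
        self.max_delay = None
        self.start_level = start_level   # congestion level that triggers cooperation
        self.stop_level = stop_level     # lower level at which cooperation stops
        self.cooperating = False

    def congestion_level(self, queuing_delay):
        # Normalize the measured queuing delay into [0, 1] using its range so far.
        self.min_delay = queuing_delay if self.min_delay is None else min(self.min_delay, queuing_delay)
        self.max_delay = queuing_delay if self.max_delay is None else max(self.max_delay, queuing_delay)
        span = self.max_delay - self.min_delay
        return 0.0 if span == 0 else (queuing_delay - self.min_delay) / span

    def update(self, queuing_delay):
        level = self.congestion_level(queuing_delay)
        if not self.cooperating and level >= self.start_level:
            self.cooperating = True       # clearly congested: shift traffic
        elif self.cooperating and level <= self.stop_level:
            self.cooperating = False      # clearly uncongested: avoid cooperation cost
        return self.cooperating

ctrl = CooperationController()
for delay in (0.01, 0.02, 0.09, 0.10, 0.03):
    print(delay, ctrl.update(delay))
```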
|
67 |
Internet Multicast Congestion Control. Onal, Kerem. 01 February 2004 (has links) (PDF)
Congestion control is among the fundamental problems of Internet multicast. It is an active research area with many challenges. In this study, an introduction to Internet congestion control and a brief literature survey of current multicast congestion control protocols are presented. Then two recently proposed protocols of the "single-rate, end-to-end, rate-based" class, namely LESBCC and TFMCC, are evaluated with respect to their inter-session fairness (TCP-friendliness), smoothness, and responsiveness. The experiments are conducted using the widely accepted network simulator 'ns' and employ different topologies.
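For reference, the sketch below computes two metrics commonly used in evaluations of this kind: TCP-friendliness as the ratio of mean multicast throughput to mean competing-TCP throughput, and smoothness as the coefficient of variation of a flow's throughput samples. These are standard formulations, not necessarily the exact definitions used in the thesis.

```python
# Common evaluation metrics for TCP-friendly multicast protocols (assumed
# formulations, for illustration only).

from statistics import mean, pstdev

def tcp_friendliness(multicast_tputs, tcp_tputs):
    """A value near 1.0 means the multicast session takes roughly a TCP-fair share."""
    return mean(multicast_tputs) / mean(tcp_tputs)

def smoothness(tput_samples):
    """Coefficient of variation; lower values mean a smoother sending rate."""
    return pstdev(tput_samples) / mean(tput_samples)

print(tcp_friendliness([2.1, 1.9, 2.0], [2.2, 2.0, 2.1]))   # ~0.95
print(smoothness([2.1, 1.9, 2.0]))                          # ~0.04
```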
|
68 |
Design of a Router-Assisted TCP Congestion Control Technique. 鍾永彬 (Chung, Yung-Pin). Unknown Date (has links)
With the tremendous growth of Internet traffic, utilizing network resources efficiently is essential to a successful congestion control mechanism. The Transmission Control Protocol (TCP), which runs on end hosts, is the most widely used transport-layer protocol on the Internet, and several variants (e.g., TCP Reno and TCP Vegas) have been designed to improve its performance. Because the end hosts on which TCP runs have no information about the state inside the network, most TCP congestion control mechanisms can only react to congestion when packet loss occurs. This work proposes TCP Muzha, in which routers assist by feeding information about the network's internal state back to the sender, so that the sender can moderate its transmission rate before congestion occurs and without relying on packet loss. This reduces the drastic rate reductions caused by packet loss and allows the optimal transmission rate to be reached more quickly. The design philosophy is to locate the bottleneck along the transmission path, compute the available bandwidth it offers, and use this information to control the flow dynamically, so that bandwidth is fully utilized, congestion is avoided, and overall performance improves. The key questions are what information the routers should provide and how the sender should use it to adjust its rate dynamically. A fuzzy, multi-level rate adjustment method is proposed that uses the fine-grained information obtained dynamically to perform congestion avoidance. Finally, the approach is evaluated through simulations on the NS2 platform. The results show that the method effectively avoids congestion, reduces packet loss, and improves overall performance; when coexisting with TCP Reno, it maintains a lower packet loss rate and does not lose too much performance to Reno's aggressive transmission behaviour.
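The sketch below illustrates the multi-level, router-assisted idea described above: the sender maps a router-reported congestion hint for the bottleneck (here a utilization ratio in [0, 1]) to one of several congestion-window adjustments. The levels and thresholds are assumptions, not TCP Muzha's actual rules.

```python
# Hedged sketch of multi-level rate adjustment driven by router feedback.
# The thresholds and step sizes below are illustrative assumptions.

def adjust_cwnd(cwnd, bottleneck_utilization):
    """Return the new congestion window given the router-reported
    utilization of the bottleneck link."""
    if bottleneck_utilization < 0.5:
        return cwnd + 4      # plenty of headroom: increase aggressively
    if bottleneck_utilization < 0.8:
        return cwnd + 1      # moderate load: standard additive increase
    if bottleneck_utilization < 0.95:
        return cwnd          # near saturation: hold the window
    return max(1, cwnd - 2)  # imminent congestion: back off before any loss

cwnd = 10
for util in (0.3, 0.6, 0.9, 0.97):
    cwnd = adjust_cwnd(cwnd, util)
    print(util, cwnd)
```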
|
69 |
A System, Tools and Algorithms for Adaptive HTTP-live Streaming on Peer-to-peer Overlays. Roverso, Roberto. January 2013 (has links)
In recent years, adaptive HTTP streaming protocols have become the de facto industry standard for the distribution of live and video-on-demand content over the Internet. In this thesis, we solve the problem of distributing adaptive HTTP live video streams to a large number of viewers using peer-to-peer (P2P) overlays. We do so under the assumption that our solution must deliver the same quality of user experience as a CDN while minimizing the load on the content provider's infrastructure. In the design of our solution, we also take into consideration the realities of HTTP streaming protocols, such as the pull-based approach and adaptive bitrate switching. The result of this work is a system, which we call SmoothCache, that provides CDN-quality adaptive HTTP live streaming using P2P algorithms. Our experiments on a real network of thousands of consumer machines show that, besides meeting the CDN-quality constraints, SmoothCache consistently delivers up to 96% savings towards the source of the stream in a single-bitrate scenario and 94% in a multi-bitrate scenario. In addition, we have conducted a number of pilot deployments of the same system in large enterprises, albeit tailored to private networks. Results with thousands of real viewers show that our platform offloads bottlenecks in the private network by 91.5% on average. These achievements were made possible by advancements in multiple research areas that are also presented in this thesis. Each of these contributions is novel with respect to the state of the art and can be applied outside the context of our application; in our system they serve the purposes described below. We built a component-based, event-driven framework to facilitate the development of our live streaming application. The framework allows the same code to run both in simulation and in real deployments. In order to obtain both scalability and accuracy in simulation, we designed a novel flow-based bandwidth emulation model. In order to deploy our application on real networks, we developed a network library whose novel feature is on-the-fly prioritization of transfers. The library is layered over the UDP protocol and supports NAT traversal techniques. As part of this thesis, we have also improved on the state of the art in NAT traversal, resulting in a higher probability of direct connectivity between peers on the Internet. Because of the presence of NATs on the Internet, discovering new peers and collecting overlay statistics through peer sampling is problematic. We therefore created a NAT-aware peer sampling service that provides samples an order of magnitude fresher than those of existing peer sampling protocols. Finally, we designed SmoothCache as a peer-assisted live streaming system based on a distributed caching abstraction. In SmoothCache, peers retrieve video fragments from the P2P overlay as quickly as possible, or fall back to the source of the stream to preserve the timeliness of delivery. In order to produce savings, the caching system strives to fill up each peer's local cache ahead of playback by prefetching content. Fragments are efficiently distributed by a self-organizing overlay network that takes many factors into account, such as upload bandwidth capacity, connectivity constraints, performance history, and the bitrate currently being watched.
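A small sketch of the fallback policy behind the caching abstraction described above: fetch a fragment from the P2P overlay while there is time before its playback deadline, otherwise go straight to the stream source. The safety margin, interfaces, and stand-in stores are assumptions, not SmoothCache's actual implementation.

```python
# Illustrative sketch of overlay-first fetching with a source fallback to
# keep playback timely. Timing thresholds and interfaces are assumed.

import time

def fetch_fragment(fragment_id, deadline, overlay, source, safety_margin=1.5):
    """Return (fragment_bytes, origin). `overlay.get` may return None if no
    peer holds the fragment yet; `source.get` is assumed to always succeed."""
    time_left = deadline - time.time()
    if time_left > safety_margin:
        data = overlay.get(fragment_id, timeout=time_left - safety_margin)
        if data is not None:
            return data, "p2p"                  # savings: source not contacted
    return source.get(fragment_id), "source"    # fall back to preserve timeliness

class DictStore:
    """Tiny stand-in for an overlay or origin store (illustrative only)."""
    def __init__(self, data):
        self.data = data
    def get(self, key, timeout=None):
        return self.data.get(key)

overlay = DictStore({})                         # no peer has the fragment yet
source = DictStore({"seg-42": b"..."})
print(fetch_fragment("seg-42", time.time() + 4.0, overlay, source))
```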
|
70 |
Multipath TCP and Measuring End-to-end TCP Throughput: Multipath TCP Descriptions and Ways to Improve TCP Performance. Bonam, Veera Venkata Sivaramakrishna. January 2018 (has links)
Internet applications make use of the services provided by a transport protocol, such as TCP (a reliable, in-order stream protocol). We use the term Transport Service to mean the end-to-end service provided to the application by the transport layer. That service can only be provided correctly if information about the intended usage is supplied by the application. The application may determine this information at design time, compile time, or run time, and it may include guidance on whether a feature is required, a preference by the application, or something in between. Multipath TCP (MPTCP) adds the capability of using multiple paths to a regular TCP session. Even though it is designed to be fully backward compatible with applications, its data transport differs from regular TCP, and there are several additional degrees of freedom that a particular application may want to exploit. Multipath TCP is particularly useful in the context of wireless networks; using both Wi-Fi and a mobile network simultaneously is a typical use case. In addition to the throughput gains from inverse multiplexing, links may be added or dropped as the user moves in or out of coverage without disrupting the end-to-end TCP connection. The problem of link handover is thus solved by abstraction in the transport layer, without any special mechanisms at the network or link level. Handover functionality can then be implemented at the endpoints, without requiring special functionality in the sub-networks, in accordance with the Internet's end-to-end principle. Multipath TCP can balance a single TCP connection across multiple interfaces and reach very high throughput.
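To illustrate how MPTCP couples its subflows, the sketch below implements a simplified, packet-based form of the Linked Increases Algorithm from RFC 6356: each subflow's window increase is coupled through a shared alpha so the aggregate is no more aggressive than a single TCP flow on the best path. The example subflow values are assumed.

```python
# Simplified sketch of MPTCP coupled congestion control (RFC 6356 "LIA"),
# expressed in packets rather than bytes. Example values are illustrative.

def lia_alpha(subflows):
    """subflows: list of dicts with 'cwnd' (packets) and 'rtt' (seconds)."""
    total_cwnd = sum(f["cwnd"] for f in subflows)
    best = max(f["cwnd"] / f["rtt"] ** 2 for f in subflows)
    denom = sum(f["cwnd"] / f["rtt"] for f in subflows) ** 2
    return total_cwnd * best / denom

def on_ack(subflows, i):
    """Congestion-avoidance increase for subflow i after one ACK."""
    total_cwnd = sum(f["cwnd"] for f in subflows)
    alpha = lia_alpha(subflows)
    # Coupled increase, capped by what regular TCP would do on this subflow.
    subflows[i]["cwnd"] += min(alpha / total_cwnd, 1.0 / subflows[i]["cwnd"])

flows = [{"cwnd": 10.0, "rtt": 0.04},    # Wi-Fi-like subflow
         {"cwnd": 6.0, "rtt": 0.12}]     # cellular-like subflow
on_ack(flows, 0)
print(flows)
```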
|