1

Improving TCP performance over heterogeneous networks : the investigation and design of End to End techniques for improving TCP performance for transmission errors over heterogeneous data networks

Alnuem, M. A. January 2009 (has links)
Transmission Control Protocol (TCP) is considered one of the most important protocols in the Internet. An important mechanism in TCP is the congestion control mechanism, which controls the TCP sending rate and makes TCP react to congestion signals. In today's heterogeneous networks, TCP may operate over links with a lossy nature (wireless links, for example). TCP treats all packet losses as if they were due to congestion. Consequently, when used in networks that have lossy links, TCP aggressively reduces its sending rate in response to transmission (non-congestion) errors, even when the network is not congested. One solution to the problem is to discriminate between error types: dealing with congestion errors by reducing the TCP sending rate and using other actions for transmission errors. In this work we investigate the problem and propose a solution using an end-to-end error discriminator. The error discriminator improves the current congestion window mechanism in TCP and decides when, and by how much, to cut the congestion window. We have identified three areas where TCP interacts with drops: the congestion window update mechanism, the retransmission mechanism and the timeout mechanism. All of these are part of the TCP congestion control mechanism, and we propose changes to each of them in order to allow TCP to cope with transmission errors. We propose a new TCP congestion window action (CWA) for transmission errors that delays the window cut decision until TCP has received all duplicate acknowledgments for a given window of data (packets in flight), giving TCP a clear picture of the number of drops from that window. The congestion window is then reduced only by the number of dropped packets. We also propose a safety mechanism that prevents this algorithm from congesting the network, using an extra congestion window threshold (tthresh) to record the safe region where there are no drops of any kind. The second algorithm is a new retransmission action to deal with multiple drops from the same window. This multiple drops action (MDA) prevents TCP from falling into consecutive timeout events by resending all dropped packets from the same window. A third algorithm calculates a new back-off policy for the TCP retransmission timeout based on the network's available bandwidth. This new retransmission timeout action (RTA) relates the length of the timeout to current network conditions, especially under heavy transmission error rates. The three algorithms have been combined and incorporated into a delay-based error discriminator. The improvement achieved by the new algorithms is measured along with their impact on the network in terms of congestion drop rate, end-to-end delay, average queue size and fairness of sharing the bottleneck bandwidth. The results show that the proposed error discriminator, with the new actions for transmission errors, increases the performance of TCP while reducing the load on the network compared to existing error discriminators. The proposed error discriminator also delivers excellent fairness values for sharing the bottleneck bandwidth. Finally, improvements to the basic error discriminator are proposed by applying the multiple drops action (MDA) to both transmission and congestion errors; the results show improved performance and lower congestion loss rates compared to a similar error discriminator.
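To make the congestion window action (CWA) concrete, the sketch below shows one way a sender could shrink its window by only the number of packets lost to transmission errors, once all duplicate acknowledgments for the window in flight have arrived, while keeping the standard multiplicative decrease for congestion losses. The class name, the tthresh bookkeeping and the lower bound of two segments are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch of the CWA idea; names and details are assumptions for illustration.

class CwaState:
    def __init__(self, cwnd: int, ssthresh: int):
        self.cwnd = cwnd          # congestion window, in segments
        self.ssthresh = ssthresh  # standard slow-start threshold
        self.tthresh = 0          # extra threshold remembering the largest drop-free window

    def on_window_acked(self, packets_in_flight: int, dropped: int, congestion_loss: bool):
        """Called once all duplicate ACKs for the current window have been received."""
        if congestion_loss:
            # Congestion drops keep the usual multiplicative decrease.
            self.ssthresh = max(packets_in_flight // 2, 2)
            self.cwnd = self.ssthresh
        else:
            # Transmission errors: remember the safe (drop-free) region, then reduce
            # the window only by the number of packets actually lost.
            self.tthresh = max(self.tthresh, packets_in_flight - dropped)
            self.cwnd = max(packets_in_flight - dropped, 2)
```

How tthresh is later used to guard against congestion, and the multiple drops and timeout actions (MDA, RTA), are left out of this sketch.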
2

Improving Fairness among TCP Flows by Cross-layer Stateless Approach

Tsai, Hsu-Sheng 26 July 2008 (has links)
Transmission Control Protocol (TCP) has been recognized as the most important transport-layer protocol for the Internet. It is distinguished by its reliable transmission, flow control, and congestion control. However, the issue of fair bandwidth sharing among competing flows was not properly addressed in TCP. As web-based applications and interactive applications grow more popular, the number of short-lived flows conveyed on the Internet continues to rise. With conventional TCP, short-lived flows are unable to obtain a fair share of the available bandwidth and consequently suffer longer delays and a lower service rate. It is essential for the Internet to come up with an effective solution to this problem in order to accommodate the new traffic patterns. With more equitable sharing of bottleneck bandwidth as the goal, two cross-layer stateless queue management schemes, Drop Maximum (DM) and Early Drop Maximum (EDM), are developed and presented in this dissertation. The fundamental idea is to drop packets from those flows that have more than an equal share of the bandwidth while keeping queue occupancy low. The congestion window size of a TCP sender is carried in the options field of each packet. The proposed schemes run on routers and make their packet-dropping decisions according to these congestion windows. In case of link congestion, the queued packet with the largest congestion window is dropped from the queue. This lowers the sending rate of its sender and releases part of the occupied bandwidth for use by other competing flows, so the entire system approaches an equilibrium point with a rapid and fair distribution of bandwidth. As stateless approaches, the proposed schemes have inherent advantages in implementation and scalability. Extensive simulations were conducted to verify the feasibility and the effectiveness of the proposed schemes. The simpler packet discard scheme, Drop Maximum, outperforms the other two stateless buffer management schemes, Drop Tail and Random Early Drop, in the scenario of homogeneous flows. With heterogeneous flows, however, Random Early Drop is superior to the packet discard schemes owing to its additional buffer occupancy control mechanism. To overcome this lack of proper buffer occupancy control, Early Drop Maximum is proposed. As shown in the simulation results, this scheme outperforms the existing stateless techniques, including Drop Tail, Drop Maximum and Random Early Drop, in many respects, such as fair sharing of the available bandwidth and short response times for short-lived flows.
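The Drop Maximum idea lends itself to a compact sketch: when the link is congested, the router drops the queued packet whose sender advertises the largest congestion window in its TCP options, throttling the most aggressive flow. The packet fields and queue interface below are assumptions for illustration, not the dissertation's router implementation.

```python
# Hedged sketch of Drop Maximum (DM); data structures are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Packet:
    flow_id: int
    cwnd: int  # sender's congestion window, carried cross-layer in the TCP options field

@dataclass
class DropMaxQueue:
    capacity: int
    buffer: List[Packet] = field(default_factory=list)

    def enqueue(self, pkt: Packet) -> None:
        self.buffer.append(pkt)
        if len(self.buffer) > self.capacity:
            # Link congestion: evict the packet with the largest congestion window,
            # lowering the rate of its sender and freeing bandwidth for other flows.
            victim = max(self.buffer, key=lambda p: p.cwnd)
            self.buffer.remove(victim)
```

Early Drop Maximum (EDM) would additionally start dropping before the queue is full so as to keep buffer occupancy low; that refinement is not shown here.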
3

Gateway Adaptive Pacing for TCP across Multihop Wireless Networks and the Internet

ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph 17 December 2018 (has links)
In this paper, we introduce an effective congestion control scheme for TCP over hybrid wireless/wired networks comprising a multihop wireless IEEE 802.11 network and the wired Internet. We propose an adaptive pacing scheme at the Internet gateway for wired-to-wireless TCP flows. Furthermore, we analyze the causes of the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. We denote the introduced congestion control scheme TCP with Gateway Adaptive Pacing (TCP-GAP). For wireless-to-wired flows, we propose an adaptive pacing scheme at the TCP sender. In contrast to previous work, TCP-GAP does not impose any control traffic overhead for achieving fairness among active TCP flows. Moreover, TCP-GAP can be incrementally deployed because it does not require any modifications of TCP in the wired part of the network and is fully TCP-compatible. Extensive simulations using ns-2 show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios and achieves up to 42% more goodput than TCP NewReno.
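As a schematic illustration of gateway adaptive pacing, the sketch below spaces out wired-to-wireless segments at the gateway according to an adaptively estimated rate instead of forwarding them back-to-back. The EWMA rate update is a placeholder assumption; the abstract does not specify how TCP-GAP actually adapts its pacing rate.

```python
# Hedged sketch of gateway-side pacing; the rate-estimation rule is an assumption.
import time

class GatewayPacer:
    def __init__(self, initial_rate_bps: float, alpha: float = 0.125):
        self.rate_bps = initial_rate_bps  # current pacing rate estimate
        self.alpha = alpha                # smoothing factor for rate updates
        self.next_send = 0.0              # earliest time the next segment may leave

    def update_rate(self, measured_rate_bps: float) -> None:
        # Smooth the estimate as conditions on the wireless path change.
        self.rate_bps = (1 - self.alpha) * self.rate_bps + self.alpha * measured_rate_bps

    def send(self, segment: bytes, tx) -> None:
        # Delay each segment so the flow never bursts faster than the paced rate.
        now = time.monotonic()
        if now < self.next_send:
            time.sleep(self.next_send - now)
            now = self.next_send
        tx(segment)
        self.next_send = now + len(segment) * 8 / self.rate_bps
```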
4

TCP with gateway adaptive pacing for multihop wireless networks with Internet connectivity

ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph 17 December 2018 (has links)
This paper introduces an effective congestion control pacing scheme for TCP over multihop wireless networks with Internet connectivity. The pacing scheme is implemented at the wireless TCP sender as well as at the Internet gateway, and reacts according to the direction of the TCP flows running across the wireless network and the Internet. Moreover, we analyze the causes of the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. The proposed scheme, which we denote TCP with Gateway Adaptive Pacing (TCP-GAP), does not impose any control traffic overhead for achieving fairness among active TCP flows and can be incrementally deployed since it does not require any modifications of TCP in the wired part of the network. In an extensive set of experiments using ns-2, we show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios and achieves up to 42% more goodput for FTP-like traffic as well as up to 70% more goodput for HTTP-like traffic than TCP NewReno. We also investigate the sensitivity of the considered TCP variants to different bandwidths of the wired and wireless links with respect to both aggregate goodput and fairness.
5

Performance Evaluation of Next Generation Wi-Fi : Link Asymmetry in Multi-Link Operation

Lai, Kexin January 2023 (has links)
With the growing demand for high-speed data transmission, the Institute of Electrical and Electronics Engineers (IEEE) 802.11 group, which focuses on Wireless Fidelity (Wi-Fi) technologies, has been actively pursuing advancements to meet these escalating requirements. One such endeavor is the exploration of Millimeter Wave (mmWave) communications. However, mmWave communications differ significantly from traditional communication systems, being characterized by high propagation loss, directivity, and susceptibility to blockage. These distinctive attributes present numerous challenges that must be addressed to fully exploit the potential of mmWave communications. The 802.11be amendment, which will be advertised as Wi-Fi 7, introduces several features that aim to enhance the capabilities of Wi-Fi. One of the main features introduced in this amendment is Multi-Link Operation (MLO), which allows nodes to transmit and receive over multiple links concurrently. The objective of this project is to assess the performance of integrating MLO with an additional mmWave link in comparison to using a single mmWave link, and to determine whether this combination can effectively address challenges within mmWave communications and consequently enhance throughput performance. Experimental simulations were conducted using an event-based Radio Access Technology (RAT) simulator, considering various scenarios and setups. These investigations examined the impact of factors such as link bandwidth and the number of links in MLO on Wi-Fi performance. Our findings demonstrate the potential of combining MLO with an additional mmWave link, highlighting significant improvements in overall throughput. However, our results also reveal a link asymmetry problem that arises when integrating links with substantial differences in link capacity. This problem manifests in a specific region where the performance of MLO is not as good as that of a single mmWave link. To address this issue, we propose a potential solution, which we thoroughly investigate through multiple simulations to assess its feasibility and effectiveness.
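A toy calculation illustrates the link-asymmetry effect described above: if an MLO scheduler naively splits the traffic evenly across two links, the transfer is gated by the slower link, so a strongly asymmetric pair can deliver less than the fast mmWave link alone. The even split and the numbers below are assumptions for illustration, not the thesis's RAT-simulator model.

```python
# Toy model of MLO aggregate throughput under a naive 50/50 split across two links.
def mlo_even_split_throughput(rate_fast_gbps: float, rate_slow_gbps: float) -> float:
    # Half of a unit of data goes on each link; the transfer finishes with the slower half.
    time_fast = 0.5 / rate_fast_gbps
    time_slow = 0.5 / rate_slow_gbps
    return 1.0 / max(time_fast, time_slow)

print(mlo_even_split_throughput(10.0, 8.0))  # ~16 Gbps: MLO beats a single 10 Gbps link
print(mlo_even_split_throughput(10.0, 0.5))  # ~1 Gbps: asymmetry makes MLO worse than 10 Gbps alone
```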
6

TCP Protocol Optimization for HTTP Adaptive Streaming

Ben Ameur, Chiheb 17 December 2015 (has links)
HTTP adaptive streaming (HAS) is a streaming video technique widely used over the Internet. It employs the Transmission Control Protocol (TCP) as its transport protocol and splits the original video on the server into segments of the same duration, called "chunks", that are transcoded into multiple quality levels. The HAS player, on the client side, requests one chunk per chunk duration and commonly selects the quality level based on the estimated bandwidth of the previous chunk(s). Given that HAS clients are located inside access networks, our investigation involves several HAS clients sharing the same bottleneck link and competing for bandwidth. In this setting, a degradation of both the Quality of Experience (QoE) of HAS users and the Quality of Service (QoS) of the access network is often recorded. The objective of this thesis is to optimize the TCP protocol in order to resolve both the QoE and QoS degradations. Our first contribution is a gateway-based shaping method that we call the Receive Window Tuning Method (RWTM); it employs TCP flow control and passive round-trip time estimation on the gateway side. We compared the performance of RWTM with another gateway-based shaping method, based on a queuing discipline, called the Hierarchical Token Bucket shaping Method (HTBM). The evaluation results indicate that RWTM outperforms HTBM not only in terms of the QoE of HAS but also in terms of the QoS of the access network, reducing the queuing delay and significantly reducing the packet drop rate at the bottleneck. Our second contribution is a comparative evaluation combining the two shaping methods, RWTM and HTBM, with four widely deployed TCP variants: NewReno, Vegas, Illinois and Cubic. The results show a significant discordance in performance between the combinations; the combination that improves performance in the majority of the studied scenarios is the Illinois variant with RWTM. In addition, the results reveal the importance of efficiently updating the slow start threshold value, ssthresh, to accelerate convergence toward the best feasible quality level. Our third contribution is a novel HAS-aware TCP variant that we call TcpHas; it is a TCP congestion control algorithm that takes the specifics of HAS flows into consideration. TcpHas estimates the optimal quality level of its HAS flow based on an end-to-end bandwidth estimate and then continuously shapes the HAS traffic according to the encoding rate of that level. It also updates ssthresh to accelerate convergence. A comparative performance evaluation against Westwood+, a well-known TCP variant that employs an adaptive decrease mechanism, shows that TcpHas largely outperforms Westwood+: it offers better stability at the optimal quality level, dramatically reduces the packet drop rate and generates lower queuing delay.
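A minimal sketch of the receive-window tuning idea behind RWTM follows, assuming the classic flow-control relationship rwnd ≈ target_rate × RTT; the abstract does not give the exact rule RWTM applies, so the formula and the one-segment floor below are assumptions.

```python
# Hedged sketch: clamp the advertised receive window so a HAS flow cannot exceed a target rate.
def shaped_receive_window(target_rate_bps: float, rtt_estimate_s: float, mss: int = 1460) -> int:
    """Receive window in bytes that caps throughput at roughly target_rate_bps."""
    rwnd_bytes = target_rate_bps / 8.0 * rtt_estimate_s
    # Never advertise less than one segment, so the connection keeps progressing.
    return max(int(rwnd_bytes), mss)

# Example: shape a HAS flow to a 4 Mbps quality level over a 40 ms path.
print(shaped_receive_window(4_000_000, 0.040))  # -> 20000 bytes
```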
