41

Improving the BBR congestion control algorithm for QUIC / Förbättringar av nätverksträngselalgoritmen BBR för QUIC

Chouchliapin, Alexander January 2023 (has links)
Congestion control is an important aspect of network technology, balancing traffic load so that the system is not overwhelmed. Google has proposed its own transport protocol, QUIC, which is described as being set to supersede TCP. QUIC has several advantages, namely high efficiency and low latency, but also more flexible congestion control because it resides in user space. To be used in tandem with QUIC, Google developed a new congestion control algorithm called BBR, meant to fully exploit these advantages by reducing latency and increasing throughput. However, as BBR is still a rather new algorithm, many improvements are possible to make it more efficient. In this paper, a modified BBR algorithm (mBBR) is proposed, comprising three sub-algorithms that improve BBR by adjusting the otherwise static congestion window and pacing rate gain values based on the measured round-trip time of the flow; it is compared to the CUBIC, NewReno, and QUIC/TCP BBR algorithms. mBBR achieves a considerably lower RTT than CUBIC and NewReno, and reduces it by as much as 20% compared to the default QUIC BBR algorithm, while maintaining the same level of throughput. This improvement makes mBBR more suitable for use in RAN applications and other areas where low delay is crucial, without sacrificing network speed.
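The abstract does not give mBBR's exact gain schedule; a minimal sketch of the general idea it describes — shrinking BBR-style pacing and congestion-window gains as the sampled RTT drifts above the minimum RTT — might look as follows. All gain values, thresholds, and scaling factors are illustrative assumptions, not the thesis's actual parameters.

```python
# Hedged sketch: RTT-aware scaling of BBR-style gain values. The gains,
# thresholds, and scaling rule are illustrative assumptions, not the
# actual mBBR parameters from the thesis.

def adjust_gains(min_rtt_ms, sample_rtt_ms,
                 base_pacing_gain=1.25, base_cwnd_gain=2.0):
    """Return (pacing_gain, cwnd_gain), backed off as the sampled RTT
    drifts above the minimum RTT (a sign of queue build-up)."""
    if min_rtt_ms <= 0:
        return base_pacing_gain, base_cwnd_gain
    drift = sample_rtt_ms / min_rtt_ms      # 1.0 means no extra queuing delay
    if drift <= 1.1:                        # little queuing: keep the static gains
        return base_pacing_gain, base_cwnd_gain
    scale = max(0.5, 1.1 / drift)           # shrink the gains as the queue grows
    return base_pacing_gain * scale, base_cwnd_gain * scale

# Example: the sampled RTT is 50% above the minimum observed RTT.
print(adjust_gains(min_rtt_ms=20.0, sample_rtt_ms=30.0))
```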
42

Link Adaptation Algorithm and Metric for IEEE Standard 802.16

Ramachandran, Shyamal 26 March 2004 (has links)
Broadband wireless access (BWA) is a promising emerging technology. In the past, most BWA systems were based on proprietary implementations. The Institute of Electrical and Electronics Engineers (IEEE) 802.16 task group recently standardized the physical (PHY) and medium-access control (MAC) layers for BWA systems. To operate in a wide range of physical channel conditions, the standard defines a robust and flexible PHY with a wide range of modulation and coding schemes. While the standard provides a framework for implementing link adaptation, it does not define exactly how adaptation algorithms should be developed. This thesis develops a link adaptation algorithm for the IEEE 802.16 standard's WirelessMAN air interface. This algorithm attempts to minimize the end-to-end delay in the system by selecting the optimal PHY burst profile on the air interface. The IEEE 802.16 standard recommends measuring C/(N+I) at the receiver to initiate a change in the burst profile, based on a comparison of the instantaneous C/(N+I) with preset C/(N+I) thresholds. This research determines the C/(N+I) thresholds for the standard-specified channel Type 1. To determine the precise C/(N+I) thresholds, the end-to-end (ETE) delay performance of IEEE 802.16 is studied for different PHY burst profiles at varying signal-to-noise ratio values. Based on these performance results, we demonstrate that link-layer ETE delay does not reflect the physical channel condition and is therefore not suitable as the criterion for determining the C/(N+I) thresholds. The IEEE 802.16 standard specifies that ARQ should not be implemented at the MAC layer. Our results demonstrate that this design decision renders the link-layer metrics unusable in the link adaptation algorithm. Transmission Control Protocol (TCP) delay is identified as a suitable metric to serve as the link quality indicator. Our results show that buffering and retransmissions at the transport layer cause ETE TCP delay to rise exponentially below certain SNR values. We use TCP delay as the criterion to determine the SNR entry and exit thresholds for each of the PHY burst profiles. We present a simple link adaptation algorithm that attempts to minimize the end-to-end TCP delay based on the measured signal-to-noise ratio (SNR). The effects of Internet latency, TCP's performance enhancement features, and network traffic on the adaptation algorithm are also studied. Our results show that delay in the Internet can considerably affect the C/(N+I) thresholds used in the LA algorithm, and that the load on the network also impacts the thresholds significantly. We demonstrate that it is essential to characterize Internet delays and network load correctly while developing the LA algorithm. We also demonstrate that TCP's performance enhancement features do not have a significant impact on TCP delays over lossy wireless links. / Master of Science
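A minimal sketch of the kind of threshold-based burst-profile selection with entry/exit hysteresis the abstract describes is shown below; the profile names and C/(N+I) thresholds are placeholders, since the thesis derives its own thresholds from measured TCP delay.

```python
# Hedged sketch of threshold-based link adaptation with hysteresis.
# The burst profiles and entry/exit thresholds below are placeholders,
# not the thresholds derived in the thesis.

# (profile, entry_threshold_dB, exit_threshold_dB), most robust first.
PROFILES = [
    ("BPSK-1/2",   3.0,  1.0),
    ("QPSK-1/2",   6.0,  4.0),
    ("QPSK-3/4",   9.0,  7.0),
    ("16QAM-1/2", 12.0, 10.0),
    ("64QAM-3/4", 20.0, 18.0),
]

def select_profile(current_index, measured_cnr_db):
    """Step up to a more efficient profile only above its entry threshold,
    and fall back only below the current profile's exit threshold."""
    # Step up while the signal clears the next profile's entry threshold.
    while (current_index + 1 < len(PROFILES)
           and measured_cnr_db >= PROFILES[current_index + 1][1]):
        current_index += 1
    # Step down while the signal is below the current profile's exit threshold.
    while current_index > 0 and measured_cnr_db < PROFILES[current_index][2]:
        current_index -= 1
    return current_index

idx = select_profile(current_index=1, measured_cnr_db=13.5)
print(PROFILES[idx][0])  # selects 16QAM-1/2 under these assumed thresholds
```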
43

Congestion control based on cross-layer game optimization in wireless mesh networks

Ma, X., Xu, L., Min, Geyong January 2013 (has links)
Due to their attractive characteristics of high capacity, high speed, wide coverage, and low transmission power, Wireless Mesh Networks (WMNs) have become an ideal choice for next-generation wireless communication systems. However, network congestion in WMNs deteriorates the quality of service provided to end users. Game-theoretic optimization is a novel modeling tool for studying multiple entities and the interactions between them, while cross-layer design has been shown to be practical for optimizing the performance of network communications. Therefore, a combination of game theory and cross-layer optimization, named cross-layer game optimization, is proposed to reduce network congestion in WMNs. In this paper, network congestion control in the transport layer and multi-path flow assignment in the network layer of WMNs are investigated. The proposed cross-layer game optimization algorithm is then employed to enable source nodes to change their set of paths and adjust their congestion windows according to the round-trip time so as to reach a Nash equilibrium. Finally, evaluation results show that the proposed cross-layer game optimization scheme achieves high throughput with low transmission delay.
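The abstract does not spell out the game model; a minimal sketch of one simple interpretation — a best-response iteration in which each source repeatedly moves its flow to the path whose RTT (which grows with load) is currently lowest until no source wants to switch — is shown below. The RTT model, constants, and window rule are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of a best-response iteration in the spirit of a congestion game;
# the RTT model and constants are illustrative assumptions only.

BASE_RTT_MS = {"path_a": 10.0, "path_b": 12.0, "path_c": 15.0}
PER_FLOW_PENALTY_MS = 4.0          # each extra flow on a path adds queuing delay

def path_rtt(path, load):
    return BASE_RTT_MS[path] + PER_FLOW_PENALTY_MS * load

def best_response(assignment):
    """One round: every flow switches to the path that minimises its own RTT."""
    changed = False
    for flow, current in list(assignment.items()):
        load = {p: sum(1 for f, q in assignment.items() if q == p) for p in BASE_RTT_MS}
        best = min(BASE_RTT_MS,
                   key=lambda p: path_rtt(p, load[p] + (0 if p == current else 1)))
        if best != current:
            assignment[flow] = best
            changed = True
    return changed

assignment = {f"flow{i}": "path_a" for i in range(6)}   # everyone starts on one path
while best_response(assignment):
    pass
print(assignment)   # flows spread across the paths at equilibrium
# A window rule tied to RTT (e.g. cwnd proportional to rate * RTT) would then
# set each flow's congestion window on its chosen path.
```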
44

Modeling and Analysis of Active Queue Management Schemes under Bursty Traffic.

Wang, Lan, Min, Geyong, Awan, Irfan U. January 2006 (has links)
Traffic congestion arising from the shared nature of uplink channels in wireless networks can cause serious problems for the provision of QoS to various services. One approach to overcoming these problems is to implement effective congestion control mechanisms at the downlink buffer at the mobile network link layer, or at gateways on behalf of wireless network access points. Active queue management (AQM) is an effective mechanism to support end-to-end traffic congestion control in modern high-speed networks. Initially developed for Internet routers, AQM is now also being considered as an effective congestion control mechanism to enhance TCP performance over 3G links. This paper proposes an analytical performance model for AQM using various dropping functions. The selection of the dropping functions and threshold values required for this scheme plays a critical role in its effectiveness. The model uses a well-known Markov-modulated Poisson process (MMPP) to capture traffic burstiness and correlations. The validity of the model has been demonstrated through simulation experiments. Extensive analytical results indicate that an exponential dropping function is a good choice for AQM to support efficient congestion control.
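The abstract does not give the exact form of the dropping functions studied; a minimal sketch of one exponentially shaped dropping function of the kind the conclusion favours is shown below, with thresholds and the maximum drop probability chosen purely for illustration.

```python
import math

# Hedged sketch of an exponentially shaped AQM dropping function; the exact
# functional form and thresholds used in the paper are assumptions here.

def drop_probability(queue_len, min_th=20, max_th=80, max_p=0.1):
    """Probability of dropping an arriving packet as a function of the
    (average) queue length, rising exponentially between the thresholds."""
    if queue_len <= min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    # Exponential ramp from ~0 at min_th up to max_p at max_th.
    frac = (queue_len - min_th) / (max_th - min_th)
    return max_p * (math.exp(frac) - 1.0) / (math.e - 1.0)

for q in (10, 30, 50, 70, 90):
    print(q, round(drop_probability(q), 4))
```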
45

Transport Services for Soft Real-Time Applications in IP Networks

Grinnemo, Karl-Johan January 2006 (has links)
In recent years, Internet and IP technologies have made inroads into almost every communication market, ranging from best-effort services such as email and the Web to soft real-time applications such as VoIP, IPTV, and video. However, providing a transport service over IP that meets the timeliness and availability requirements of soft real-time applications has turned out to be a complex task. Although network solutions such as IntServ, DiffServ, MPLS, and VRRP have been suggested, these solutions often fail to provide a transport service for soft real-time applications end to end, and they have so far been only modestly deployed. In light of this, this thesis considers transport protocols for soft real-time applications. Part I of the thesis focuses on the design and analysis of transport protocols for soft real-time multimedia applications with lax deadlines, such as image-intensive Web applications. Many of these applications do not need a completely reliable transport service, and to this end Part I studies so-called partially reliable transport protocols, i.e., transport protocols that enable applications to explicitly trade reliability for improved timeliness. Specifically, Part I investigates the feasibility of designing retransmission-based, partially reliable transport protocols that are congestion aware and fair to competing traffic. Two transport protocols are presented in Part I, PRTP and PRTP-ECN, which are both extensions to TCP for partial reliability. Simulations and theoretical analysis suggest that these transport protocols could give a substantial improvement in throughput and jitter compared to TCP. Additionally, the simulations indicate that PRTP-ECN is TCP friendly and fair against competing congestion-aware traffic such as TCP flows. Part I also presents a taxonomy for retransmission-based, partially reliable transport protocols. Part II of the thesis considers the Stream Control Transmission Protocol (SCTP), which was developed by the IETF to transfer telephony signaling traffic over IP. The main focus of Part II is on evaluating the SCTP failover mechanism. Through extensive experiments, it is suggested that in order to meet the availability requirements of telephony signaling, SCTP has to be configured much more aggressively than is currently recommended by the IETF. Furthermore, ways to improve the transport service provided by SCTP, especially with regard to the failover mechanism, are suggested. Part II also studies the effects of Head-of-Line Blocking (HoLB) on SCTP transmission delays. HoLB occurs when packets in one flow block packets in another, independent, flow. The study suggests that the short-term effects of HoLB can be substantial, but that the long-term effects are marginal.
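A minimal sketch of the core decision in a retransmission-based, partially reliable transport of the kind described above — retransmit lost data only when skipping it would violate the application's requested reliability level — is shown below. The delivery-ratio bookkeeping is an illustrative assumption, not PRTP's exact specification.

```python
# Hedged sketch of partial reliability: skip a retransmission (gaining
# timeliness) whenever the delivery ratio still meets the application's
# required reliability level. Not PRTP's actual mechanism.

class PartialReliabilitySender:
    def __init__(self, required_reliability=0.9):
        self.required_reliability = required_reliability  # fraction of bytes that must arrive
        self.bytes_sent = 0
        self.bytes_delivered = 0

    def on_send(self, nbytes):
        self.bytes_sent += nbytes

    def on_ack(self, nbytes):
        self.bytes_delivered += nbytes

    def should_retransmit(self):
        """Retransmit only if skipping the lost data would push the delivery
        ratio below the required reliability level."""
        if self.bytes_sent == 0:
            return True
        return (self.bytes_delivered / self.bytes_sent) < self.required_reliability

s = PartialReliabilitySender(required_reliability=0.9)
s.on_send(1000)
s.on_ack(950)                  # 950 of 1000 bytes acknowledged; 50 bytes lost
print(s.should_retransmit())   # False: 95% delivered already meets the 90% target
```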
46

Parameter self-tuning in internet congestion control

Chen, Wu January 2010 (has links)
Active Queue Management (AQM) aims to achieve high link utilization, low queuing delay and low loss rate in routers. However, it is difficult to adapt AQM parameters to consistently provide desirable transient and steady-state performance under highly dynamic network scenarios, and a trade-off must be made between queuing delay and utilization. The queue size can become unstable when the round-trip time or link capacity increases, or unnecessarily large when the round-trip time or link capacity decreases. Effective ways of adapting AQM parameters to obtain good performance have remained a critical unsolved problem for the last fifteen years. This thesis first investigates existing AQM algorithms and their performance. Based on a previously developed dynamic model of TCP behaviour and a linear feedback model of TCP/RED, Auto-Parameterization RED (AP-RED) is proposed, which unveils the mechanism of adapting RED parameters according to measurable network conditions. Another algorithm, Statistical Tuning RED (ST-RED), is developed for systematically tuning four key RED parameters to control local stability in response to detected changes in the variance of the queue size. Under varying network scenarios, such as changes in round-trip time, link capacity and traffic load, no manual parameter configuration is needed. The proposed ST-RED can adjust the corresponding parameters rapidly to maintain stable performance and keep queuing delay as low as possible, removing the sensitivity of RED's performance to different network scenarios. This statistical tuning approach can also be applied to a PI controller for AQM, and a Statistical Tuning PI (ST-PI) controller is developed accordingly. The implementation of ST-RED and ST-PI is relatively straightforward. Simulation results demonstrate the feasibility of ST-RED and ST-PI and their ability to provide desirable transient and steady-state performance under widely varying network conditions.
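A minimal sketch of the statistical-tuning idea — standard RED drop probability, with one parameter nudged according to the measured variance of the queue size — is shown below. ST-RED tunes four parameters with its own control laws; here only max_p is adjusted, and the update rule and constants are illustrative assumptions.

```python
import statistics

# Hedged sketch of variance-driven RED tuning; the update rule and constants
# are illustrative assumptions, not the thesis's actual tuning laws.

class StatTunedRED:
    def __init__(self, min_th=20, max_th=80, max_p=0.02, target_var=100.0):
        self.min_th, self.max_th = min_th, max_th
        self.max_p = max_p
        self.target_var = target_var
        self.samples = []

    def observe(self, queue_len):
        self.samples.append(queue_len)
        if len(self.samples) >= 50:                 # retune on each window of samples
            var = statistics.pvariance(self.samples)
            if var > self.target_var:               # queue too unstable: drop more aggressively
                self.max_p = min(0.5, self.max_p * 1.5)
            else:                                   # queue stable: ease off to cut delay and loss
                self.max_p = max(0.001, self.max_p / 1.5)
            self.samples.clear()

    def drop_probability(self, avg_queue):
        """Classic RED linear ramp between min_th and max_th."""
        if avg_queue <= self.min_th:
            return 0.0
        if avg_queue >= self.max_th:
            return 1.0
        return self.max_p * (avg_queue - self.min_th) / (self.max_th - self.min_th)
```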
47

Congestion and medium access control in 6LoWPAN WSN

Michopoulos, Vasilis January 2012 (has links)
In computer networks, congestion is a condition in which one or more egress interfaces are offered more packets than are forwarded at any given instant [1]. In wireless sensor networks, congestion can cause a number of problems including packet loss, lower throughput and poor energy efficiency. These problems can potentially result in a reduced deployment lifetime and underperforming applications. Moreover, idle radio listening is a major source of energy consumption; therefore, low-power wireless devices must keep their radio transceivers off to maximise their battery lifetime. In order to minimise energy consumption and thus maximise the lifetime of wireless sensor networks, the research community has made significant efforts towards power-saving medium access control protocols with Radio Duty Cycling. However, careful study of previous work reveals that radio duty cycle schemes are often neglected during the design and evaluation of congestion control algorithms. This thesis argues that the presence (or lack) of radio duty cycling can drastically influence the performance of congestion control mechanisms. To investigate whether previous findings regarding congestion control are still applicable in IPv6 over low-power wireless personal area (6LoWPAN) and duty-cycling networks, some of the most commonly used congestion detection algorithms are evaluated through simulations. The research aims to develop duty-cycle-aware congestion control schemes for IPv6 over low-power wireless personal area networks. The proposed schemes must be able to maximise the network's goodput while minimising packet loss, energy consumption and packet delay. Two congestion control schemes, namely DCCC6 (Duty Cycle-Aware Congestion Control for 6LoWPAN Networks) and CADC (Congestion Aware Duty Cycle MAC), are proposed to realise this claim. DCCC6 performs congestion detection based on a dynamic buffer. When congestion occurs, parent nodes inform the nodes contributing to congestion, and rates are readjusted based on a new rate adaptation scheme aiming for local fairness. The child notification procedure is decided by DCCC6 and differs depending on whether the network is duty cycling: when it is, child notification is made through unicast frames; otherwise, broadcast frames are used for congestion notification, as sketched below. Simulation and test-bed experiments have shown that DCCC6 achieved higher goodput and lower packet loss than previous works. Moreover, simulations show that DCCC6 maintained low energy consumption and average delay times while achieving a high degree of fairness. CADC uses a new mechanism for duty cycle adaptation that reacts quickly to changing traffic loads and patterns. CADC is the first dynamic duty cycle protocol implemented in the Contiki operating system (OS), as well as one of the first schemes designed for the arbitrary traffic characteristics of IPv6 wireless sensor networks. Furthermore, CADC is designed as a stand-alone medium access control scheme and can thus easily be transferred to any wireless sensor network architecture. Additionally, CADC does not require any time synchronisation algorithms at the nodes and does not use any additional packets for the exchange of information between the nodes (i.e., it adds no overhead). In this research, 10,000 simulation experiments and 700 test-bed experiments were conducted for the evaluation of CADC. These experiments demonstrate that CADC can successfully adapt its cycle to the traffic pattern in every traffic scenario. Moreover, CADC consistently achieved the lowest energy consumption, very low packet delay and packet loss, while its goodput was better than that of other dynamic duty cycle protocols and similar to the highest goodput observed among static duty cycle configurations.
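A minimal sketch of the duty-cycle-aware congestion notification described in the abstract — unicast frames to contributing children when the network is duty cycling, a single broadcast otherwise — is shown below. The class, function, and field names are placeholders, not the actual DCCC6/Contiki API, and the equal-split rate rule is an illustrative stand-in for DCCC6's rate adaptation scheme.

```python
# Hedged sketch of DCCC6-style congestion notification; names and the
# fair-share rule are placeholders, not the actual DCCC6 implementation.

class Node:
    """Minimal stand-in for a 6LoWPAN node; real nodes use IEEE 802.15.4 frames."""
    def __init__(self, name):
        self.name = name

class Parent(Node):
    def __init__(self, name, capacity_pps):
        super().__init__(name)
        self.capacity_pps = capacity_pps

    def send_unicast(self, child, msg):
        print(f"unicast  {self.name} -> {child.name}: {msg}")

    def send_broadcast(self, msg):
        print(f"broadcast {self.name} -> all children: {msg}")

def fair_share_rates(capacity_pps, children):
    """Split the parent's forwarding capacity equally among contributing children."""
    share = capacity_pps / max(1, len(children))
    return {child: share for child in children}

def on_congestion(parent, contributing_children, duty_cycling):
    """Notify contributing children of their new rates, choosing the frame type
    according to whether radio duty cycling is active."""
    rates = fair_share_rates(parent.capacity_pps, contributing_children)
    if duty_cycling:
        for child in contributing_children:          # unicast reaches sleeping nodes reliably
            parent.send_unicast(child, {"type": "congestion", "rate_pps": rates[child]})
    else:
        parent.send_broadcast({"type": "congestion",  # radio always on: one frame is enough
                               "rates_pps": {c.name: r for c, r in rates.items()}})

p = Parent("parent", capacity_pps=40)
kids = [Node("a"), Node("b")]
on_congestion(p, kids, duty_cycling=True)
```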
48

A New Framework for Classification and Comparative Study of Congestion Control Schemes of ATM Networks

Chandra, Umesh, 1971- 05 1900 (has links)
In our work, we have proposed a new framework for the classification and comparative study of ATM congestion control schemes. The aspects on which we classify the algorithms are the control-theoretic approach, the action taken, and the congestion notification used. Together, these three aspects provide a coherent framework on which congestion control algorithms can be classified. Such a classification will also help in developing new algorithms.
49

Avaliação de algoritmos de controle de congestionamento como controle de admissão em um modelo de servidores web com diferenciação de serviços / Evaluation of congestion control algorithms used as control admission in a model of web servers with service differentiation

Figueiredo, Ricardo Nogueira de 11 March 2011 (has links)
This MSc dissertation presents the implementation of a prototype of a distributed web server based on SWDS, a model for a web server with service differentiation, together with the implementation and evaluation of selection algorithms that adopt the concept of congestion control for HTTP requests. Thus, besides implementing a test platform, this work also evaluates the behavior of two congestion control algorithms. The two algorithms studied are Drop Tail and RED (Random Early Detection), which are frequently discussed in the scientific literature and widely applied in computer networks. The results obtained show that, despite the particularities of each algorithm, there is a strong relation between the response times and the number of requests accepted by the server.
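A minimal sketch of the two admission policies named in the abstract, applied to an HTTP request queue, is shown below: Drop Tail rejects only when the queue is full, while RED rejects probabilistically as the queue grows. The queue sizes and thresholds are illustrative assumptions, not the values used in the dissertation's experiments.

```python
import random

# Hedged sketch of Drop Tail vs RED used as admission control for HTTP
# requests; capacity and thresholds are illustrative assumptions.

def drop_tail_admit(queue_len, capacity=100):
    """Admit the request unless the queue is already full."""
    return queue_len < capacity

def red_admit(queue_len, min_th=40, max_th=90, max_p=0.2, capacity=100):
    """Admit with decreasing probability as the queue grows between thresholds."""
    if queue_len >= capacity or queue_len >= max_th:
        return False
    if queue_len <= min_th:
        return True
    p_reject = max_p * (queue_len - min_th) / (max_th - min_th)
    return random.random() > p_reject

random.seed(1)
print(drop_tail_admit(95), red_admit(95))  # near capacity: Drop Tail admits, RED rejects
```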
50

Avaliação do protocolo multicast e PePcc para transmissões confiáveis na Internet / Evaluation of the Pepcc protocol for reliable multicast transmission in the Internet

Peradotto, Roberto 03 April 2003 (has links)
Applications such as the World-Wide Web, electronic mail, and file transfer are the main source of traffic in computer communication networks today. IP technology, the basis of the Internet, does not provide resource reservation, and the fair division of network capacity among competing flows is achieved through mechanisms that must be present in the protocols executed at the end hosts. This is called end-to-end congestion control. Internet traffic is predominantly composed of TCP flows, and TCP includes congestion control. For the Internet to function well, new transport or application protocols must include congestion control mechanisms that are "friendly" to TCP, that is, that do not consume more resources than they should. The emergence of IP multicast enabled, at the end of the 1990s, new applications on the Internet, such as groupware, distributed databases, video conferencing, etc. Many of these applications
