51

Non-cooperative beaconing control in vehicular ad hoc networks

Goudarzi, Forough January 2017 (has links)
The performance of many protocols and applications of Vehicular Ad hoc Networks (VANETs) depends on vehicles obtaining enough fresh information on the status of their neighbouring vehicles. This is achieved by exchanging Basic Safety Messages (BSMs), also called beacons, over a shared channel. In dense vehicular conditions, many beacons are lost due to channel congestion. Therefore, in such conditions, it is necessary to control channel load at a level that maximizes BSM dissemination. To address this problem, this thesis proposes algorithms that adapt beaconing in order to control channel load. First, a position-based routing protocol for VANETs is proposed, and the need for adaptive beaconing to improve its performance is demonstrated. The routing protocol is traffic-aware, suited to city environments, and obtains real-time traffic information in a completely ad hoc manner, without any central or dedicated infrastructure such as traffic sensors, roadside units, or information obtained from outside the network. The protocol uses an ant-based algorithm to find a route with optimum network connectivity. Using information carried in small control packets called ants, vehicles calculate a weight for every street segment that is proportional to the network connectivity of that segment. Ant packets are launched by vehicles in junction areas. To find the optimal route between a source and a destination, the source vehicle determines the path on the street map with the minimum total weight for the complete route. The correct functionality of the protocol design has been verified and its performance evaluated in a simulation environment. Moreover, a study of the protocol across different vehicle densities showed that in dense conditions its performance degrades due to the channel load created by uncontrolled periodic beaconing. The problem of beaconing congestion control is then formulated as a set of non-cooperative games, and algorithms for finding the equilibrium points of these games are presented. Vehicles, as players of the games, use the proposed algorithms to adjust their beacon rate, power, or both, so that channel load is kept at a desired level. The algorithms are overhead-free: fairness in rate and/or power allocation is achieved without exchanging extra information in beacons. Each vehicle needs only local information on channel load, yet good fairness is achieved globally. In addition, the protocols have per-vehicle parameters, which makes them capable of meeting application requirements. Each vehicle can control its share of bandwidth individually based on its dynamics or requirements, while overall bandwidth usage is kept at an acceptable level. The algorithms are stable, computationally inexpensive, and converge quickly, which makes them suitable for the dynamic environment of VANETs. Their correct functionality has been validated in several high-density scenarios using simulations.
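
As a rough illustration of the kind of distributed, locally driven beacon-rate adaptation described above, the sketch below adjusts a vehicle's beacon rate toward a target channel busy ratio using only locally measured channel load. The variable names, gains, target value and rate bounds are illustrative assumptions, not the algorithms proposed in the thesis.

CBR_TARGET = 0.6                   # desired channel busy ratio (assumed)
RATE_MIN, RATE_MAX = 1.0, 10.0     # beacon rate bounds in Hz (assumed)

def update_beacon_rate(rate, measured_cbr, weight=1.0, gain=0.5):
    # One control step: scale this vehicle's beacon rate so that the locally
    # measured channel load moves toward the target.  `weight` is a
    # per-vehicle parameter that lets a vehicle claim a larger or smaller
    # share of the channel.
    error = CBR_TARGET - measured_cbr              # > 0 means the channel is underloaded
    new_rate = rate * (1.0 + gain * weight * error)
    return max(RATE_MIN, min(RATE_MAX, new_rate))

# Example: a vehicle observing a congested channel (CBR = 0.8) backs off.
rate = 10.0
for _ in range(5):
    rate = update_beacon_rate(rate, measured_cbr=0.8)
print(f"beacon rate after backing off: {rate:.2f} Hz")
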
52

Evaluation of congestion control algorithms used as admission control in a model of web servers with service differentiation

Ricardo Nogueira de Figueiredo 11 March 2011 (has links)
This MSc dissertation presents the construction of a distributed web server prototype based on SWDS, a model for a web server with service differentiation, together with the implementation and evaluation of selection algorithms that apply the concept of congestion control to HTTP requests. Thus, besides implementing a test platform, this work also evaluates the behavior of two congestion control algorithms, Drop Tail and RED (Random Early Detection), which are widely discussed in the scientific literature and widely applied in computer networks. The results obtained show that, despite the particularities of each algorithm, there is a strong relation between response time and the number of requests accepted by the server.
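
For orientation, the sketch below shows RED-style probabilistic admission of incoming HTTP requests, the second of the two policies compared above (Drop Tail simply rejects requests once the queue is full). The thresholds and maximum rejection probability are illustrative assumptions, not parameters from the dissertation.

import random

MIN_TH, MAX_TH, MAX_P = 50, 150, 0.1   # queue thresholds (requests) and max rejection probability (assumed)

def admit(avg_queue_len):
    # Return True if an incoming HTTP request is admitted, False if rejected.
    if avg_queue_len < MIN_TH:
        return True                    # lightly loaded: always admit
    if avg_queue_len >= MAX_TH:
        return False                   # overloaded: always reject
    # Rejection probability grows linearly between the two thresholds.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() >= p

print(admit(40), admit(100), admit(200))   # True, probabilistic, False
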
53

SF-SACK: A Smooth Friendly TCP Protocol for Streaming Multimedia Applications

Bakthavachalu, Sivakumar 16 April 2004 (has links)
Voice over IP and video applications continue to increase the amount of traffic over the Internet. These applications utilize the UDP protocol because TCP is not suitable for streaming applications. The flow and congestion control mechanisms of TCP can change the connection transmission rate too drastically, affecting the user-perceived quality of the transmission. Also, the TCP protocol provides a level of reliability that may waste network resources, retransmitting packets that have no value. On the other hand, the use of end-to-end flow and congestion control mechanisms for streaming applications has been acknowledged as an important measure to ease or eliminate the unfairness problem that exists when TCP and UDP share the same congested bottleneck link. Both router-based and end-to-end solutions have been proposed to solve this problem. This thesis introduces a new end-to-end protocol based on TCP SACK, called SF-SACK, that promises to be smooth enough for streaming applications while implementing the known flow and congestion control mechanisms available in TCP. Through simulations, it is shown that in terms of smoothness the SF-SACK protocol is considerably better than TCP SACK and only slightly worse than TFRC. Regarding friendliness, SF-SACK is not completely fair to TCP but considerably fairer than UDP. Furthermore, if SF-SACK is used by both streaming and data-oriented applications, complete fairness is achieved. In addition, SF-SACK only needs sender-side modifications and is simpler than TFRC.
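
As a generic illustration of what "smoothness" means here (this is not the SF-SACK decrease rule itself, whose details are given in the thesis), the sketch below reacts to loss by cutting the congestion window toward a running average of its recent values instead of halving it, which yields a far less abrupt rate change for streaming receivers.

ALPHA = 0.875        # EWMA weight for the smoothed window (assumed)

def on_ack(cwnd, avg):
    cwnd += 1.0 / cwnd                        # standard additive increase (about one segment per RTT)
    avg = ALPHA * avg + (1 - ALPHA) * cwnd    # track a smoothed window
    return cwnd, avg

def on_loss(cwnd, avg):
    # TCP SACK would return cwnd / 2; reducing only to the smoothed average
    # gives a much gentler rate change.
    return min(cwnd, avg), avg

cwnd = avg = 10.0
for i in range(1, 101):
    cwnd, avg = on_ack(cwnd, avg)
    if i % 40 == 0:                           # an occasional loss event
        cwnd, avg = on_loss(cwnd, avg)
print(f"cwnd {cwnd:.1f} segments, smoothed window {avg:.1f}")
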
54

Performance Evaluation of TCP over Optical Channels and Heterogeneous Networks

Xu, Jianxuan 30 March 2004 (has links)
Next generation optical networks will soon provide users the capability to request and obtain end-to-end all-optical 10 Gbps channels on demand. Individual users will use these channels to exchange large amounts of data and support applications for scientific collaborative work. These new applications, which expect steady transfer rates in the order of Gbps, will very likely use either TCP or a new transport layer protocol as the end-to-end communication protocol. This thesis investigates the performance of TCP and newer TCP versions over High Bandwidth Delay Product Channels (HBDPC), such as the on-demand optical channels described above. In addition, it investigates the performance of these new TCP versions over wireless networks and with respect to long-standing issues such as fairness. This is particularly important for making adoption decisions. Using simulations, it is shown that 1) the window-based mechanism of current TCP implementations is not suitable to achieve high link utilization and 2) congestion control mechanisms such as those used by TCP Vegas and Westwood are more appropriate and provide better performance. Modifications to TCP Vegas and Scalable TCP are introduced to improve the performance of these versions over HBDPC. In addition, simulation results show that new TCP proposals for HBDPC, although they perform better than current TCP versions, still perform worse than TCP Vegas. It was also found that even though these newer versions improve TCP's performance over their original counterparts in HBDPC, they still have performance problems in wireless networks and present worse fairness problems than their older counterparts. The main conclusion of this thesis is that all these versions are still based on TCP's AIMD strategy or similar, and therefore remain fairly blind in the way they increase and decrease their transmission rates. TCP will not be able to utilize the foreseen optical infrastructure adequately and support future applications if it is not redesigned to scale.
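
A back-of-the-envelope calculation makes the first point concrete: on a high bandwidth-delay product channel, a single loss forces standard AIMD into a very long recovery. The link speed, RTT and packet size below are assumed example values, not figures from the thesis.

link_bps  = 10e9          # 10 Gbps optical channel (assumed example)
rtt       = 0.1           # 100 ms round-trip time (assumed example)
pkt_bits  = 1500 * 8      # 1500-byte packets

bdp_packets = link_bps * rtt / pkt_bits        # window needed to keep the pipe full
print(f"window to fill the pipe: {bdp_packets:,.0f} packets")                        # ~83,333

# After one loss, standard AIMD halves the window and then regains only one
# packet per RTT, so recovering the lost half of the window takes:
recovery_seconds = (bdp_packets / 2) * rtt
print(f"recovery time after a single loss: {recovery_seconds / 60:.0f} minutes")     # ~69 minutes
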
55

Fast retransmit inhibitions for TCP

Hurtig, Per January 2006 (has links)
The Transmission Control Protocol (TCP) has been the dominant transport protocol in the Internet for many years. One of the reasons for this is that TCP employs congestion control mechanisms which prevent the Internet from being overloaded. Although TCP's congestion control has evolved over almost twenty years, it remains an active research area because the environments where TCP is employed keep changing. One of the congestion control mechanisms that TCP uses is fast retransmit, which allows for fast retransmission of data that has been lost in the network. Although this mechanism provides the most effective way of retransmitting lost data, it cannot always be employed by TCP due to restrictions in the TCP specification. The primary goal of this work was to investigate when fast retransmit inhibitions occur and how much they affect the performance of a TCP flow. To achieve this goal, a large series of practical experiments was conducted on a real TCP implementation. The results showed that fast retransmit inhibitions existed at the end of TCP flows, and that the increase in total transmission time could be as much as 301% when a loss was introduced at a fast-retransmit-inhibited position in the flow. Even though this increase was large for all of the experiments, ranging from 16% to 301%, the average performance loss due to an arbitrarily placed loss was not that severe. Because fast retransmit was inhibited in fewer positions of a TCP flow than it was employed, the average increase in transmission time due to these inhibitions was relatively small, ranging from 0.3% to 20.4%.
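
The restriction referred to above is the duplicate-ACK threshold: fast retransmit requires three duplicate ACKs, and each duplicate ACK requires a segment sent after the lost one. A minimal sketch of this condition follows (the function and its parameters are illustrative, not the thesis's experimental setup).

DUPACK_THRESHOLD = 3

def can_fast_retransmit(lost_segment_index, total_segments):
    # Fast retransmit is possible only if at least three segments are sent
    # after the lost one, so that the receiver can generate three duplicate ACKs.
    segments_after_loss = total_segments - lost_segment_index - 1
    return segments_after_loss >= DUPACK_THRESHOLD

# A loss in the middle of a flow can be repaired by fast retransmit ...
print(can_fast_retransmit(lost_segment_index=10, total_segments=100))   # True
# ... but a loss among the last three segments must wait for a retransmission timeout.
print(can_fast_retransmit(lost_segment_index=98, total_segments=100))   # False
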
56

Dynamic modeling of Internet congestion control

Jacobsson, Krister January 2008 (has links)
The Transmission Control Protocol (TCP) has successfully governed Internet congestion control for two decades. It is by now, however, widely recognized that TCP has started to reach its limits and that new congestion control protocols are needed in the near future. This has spurred an intensive research effort searching for new congestion control designs that meet the demands of a future Internet scaled up in size, capacity and heterogeneity. In this thesis we derive network fluid flow models suitable for analysis and synthesis of window based congestion control protocols such as TCP. In window based congestion control the transmission rate of a sender is regulated by: (1) the adjustment of the so-called window, which is an upper bound on the number of packets that are allowed to be sent before receiving an acknowledgment packet (ACK) from the receiver side, and (2) the rate of the returning ACKs. From a dynamical perspective, this constitutes a cascaded control structure with an outer and an inner loop. The first contribution of this thesis is a novel dynamical characterization and an analysis of the inner loop, generic to all window based schemes and formed by the interaction between the so-called ACK-clocking mechanism and the network. The model is based on a fundamental integral equation relating the instantaneous flow rate and the window dynamics. It is verified in simulations and testbed experiments that the model accurately predicts dynamical behavior in terms of system stability, previously unknown oscillatory behavior and even fast phenomena such as traffic burstiness patterns present in the system. It is demonstrated that this model is more accurate than many of the existing models in the literature. In the second contribution we consider the outer loop and present a detailed fluid model of a generic window based congestion control protocol using queuing delay as congestion notification. The model accounts for the relations between the actual packets in flight and the window size, the window control, the estimator dynamics as well as sampling effects that may be present in an end-to-end congestion control algorithm. The framework facilitates modeling of a quite large class of protocols. The third contribution is a closed loop analysis of the recently proposed congestion control protocol FAST TCP. This contribution also serves as a demonstration of the developed modeling framework. It is shown and verified in experiments that the delay configuration is critical to the stability of the system. A conclusion from the analysis is that the gain of the ACK-clocking mechanism dramatically increases with the delay heterogeneity for the case of an equal resource allocation policy. Since this strongly affects the stability properties of the system, this is alarming for all window based congestion control protocols striving towards proportional fairness. While these results are interesting as such, perhaps the most important contribution is the developed stability analysis technique.
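
For reference, the ACK-clocking relation referred to above is of the following general type, in notation chosen here rather than taken from the thesis: the window of flow n equals the data in flight over one round-trip time, and the bottleneck queue integrates the aggregate excess rate.

\begin{equation}
  w_n(t) \;=\; \int_{t-\tau_n(t)}^{t} x_n(s)\,\mathrm{d}s ,
  \qquad
  \tau_n(t) \;=\; d_n + \frac{b(t)}{c} ,
\end{equation}
\begin{equation}
  \dot{b}(t) \;=\; \sum_n x_n\!\left(t - d_n^{\mathrm{f}}\right) \;-\; c
  \qquad \text{while } b(t) > 0 ,
\end{equation}

where $x_n$ is the instantaneous sending rate of flow $n$, $d_n$ its round-trip propagation delay, $d_n^{\mathrm{f}}$ its forward delay to the bottleneck, $b(t)$ the bottleneck queue size and $c$ the bottleneck capacity.
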
57

Reliable Transport Performance in Mobile Environments

McSweeney, Martin January 2001 (has links)
Expanding the global Internet to include mobile devices is an exciting area of current research. Because of the vast size of the Internet, and because the protocols in it are already widely deployed, mobile devices must inter-operate with those protocols. Although most of the incompatibilities with mobiles have been solved, the protocols that deliver data reliably, and that account for the majority of Internet traffic, perform very poorly. A change in location causes a disruption in traffic, and disruption is dealt with by algorithms tailored only for stationary hosts. The Transmission Control Protocol (TCP) is the predominant transport-layer protocol in the Internet. In this thesis, we look at the performance of TCP in mobile environments. We provide a complete explanation for poor performance; we conduct a large number of experiments, simulations, and analyses that prove and quantify poor performance; and we propose simple and scalable solutions that address the limitations.
59

Evaluation of explicit congestion control for high-speed networks

Jain, Saurabh 15 May 2009 (has links)
Recently, there has been a significant surge of interest towards the design and development of a new global-scale communication network that can overcome the limitations of the current Internet. Among the numerous directions of improvement in networking technology, the recent pursuit of better flow control of network traffic has led to the emergence of several explicit-feedback congestion control methods. As a first step towards understanding these methods, we analyze the stability and transient performance of the Rate Control Protocol (RCP). We find that RCP can become unstable in certain topologies and may exhibit very high buffering requirements at routers. To address these limitations, we propose a new controller called Proportional Integral Queue Independent RCP (PIQI-RCP), prove its stability under heterogeneous delay, and use simulations to show that the new method has significantly lower transient queue lengths, better transient dynamics, and tractable stability properties. As a second step in understanding explicit congestion control, we experimentally evaluate proposed methods such as XCP, JetMax, RCP, and PIQI-RCP using Linux implementations that we developed. Our experiments show that these protocols are scalable with increasing link capacity and round-trip propagation delay. In steady state, they have low queuing delay and almost zero packet-loss rate. We confirm that XCP cannot achieve max-min fairness in certain topologies. We find that JetMax's link utilization drops significantly when short flows coexist with long flows, and that RCP requires large buffers at bottleneck routers to prevent transient packet losses and converges to steady state more slowly than the other methods. We observe that PIQI-RCP performs better than RCP in most of the experiments.
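
For context, RCP routers periodically compute a single fair rate that they stamp into passing packets. The sketch below follows the commonly cited form of the RCP update; the gains and constants are illustrative assumptions, and the exact controller (as well as the PIQI-RCP modification) is specified in the thesis.

def rcp_update(R_prev, capacity, input_rate, queue, avg_rtt,
               interval, alpha=0.5, beta=0.25):
    # One control interval: increase the stamped rate when the link has spare
    # capacity, decrease it when input exceeds capacity or a queue has built up.
    spare = capacity - input_rate          # unused bandwidth (bits/s)
    drain = queue / avg_rtt                # extra rate needed to drain the queue
    R = R_prev * (1.0 + (interval / avg_rtt) * (alpha * spare - beta * drain) / capacity)
    return max(R, 1e3)                     # keep the advertised rate positive

R = 1e6
for _ in range(3):
    R = rcp_update(R, capacity=1e9, input_rate=4e8, queue=0.0,
                   avg_rtt=0.1, interval=0.01)
print(f"advertised fair rate: {R / 1e6:.2f} Mbps")
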
60

Modeling and control of network traffic for performance and secure communications

Xiong, Yong 17 February 2005 (has links)
The objective of this research is to develop innovative techniques for modeling and control of network congestion. Most existing network controls have discontinuous actions, but such discontinuity in control actions is commonly omitted in analytical models, and continuous models have instead been widely adopted in the literature. This approximation works well under certain conditions, but it causes significant discrepancies when creating robust, responsive control solutions for congestion management. In this dissertation, I investigated three major topics. I proposed a generic discontinuous congestion control model and its design methodology to guarantee asymptotic stability and eliminate traffic oscillation, based on sliding mode control (SMC) theory. My scheme shows that discontinuity plays a crucial role in the optimization of I-D based congestion control algorithms. When properly modeled, the simple I-D control laws can be made highly robust to parameter and model uncertainties. I discussed the applicability of this model to some existing flow or congestion control schemes, e.g., XON/XOFF, rate- and window-based AIMD, RED, etc. It can also be effectively applied to the design of detection and defense mechanisms for distributed denial-of-service (DDoS) attacks. DDoS management can be considered a special case of the flow control problem. Based on my generic discontinuous congestion control model, I developed a backward-propagation feedback control strategy for DDoS detection and defense. It not only prevents DDoS attacks but also provides smooth traffic and bounded queue size. Another application of the congestion control algorithms is the design of private group communication networks. I proposed a new technique for protecting group communications by concealing sender-recipient pairs. The basic approach is to fragment and disperse encrypted messages into packets transported along different paths, so that the adversary cannot efficiently determine the source or recipient of a message without correctly ordering all packets. Packet flows among nodes are balanced to eliminate traffic patterns related to group activity. I proposed a sliding window-based flow control scheme to control the transmission of payload and dummy packets. My algorithms allow a flexible tradeoff between traffic concealment and performance requirements.
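
As a minimal sketch of the discontinuous, sliding-mode style of control discussed above (the surface, gains and plant model are illustrative assumptions, not the dissertation's design), the controller below switches its action based on the sign of a sliding surface combining the queue error and the rate error; once on the surface, the queue decays toward its reference.

Q_REF    = 50.0      # target queue length in packets (assumed)
CAPACITY = 100.0     # service rate in packets per time unit (assumed)
LAM, K   = 1.0, 50.0 # surface slope and switching gain (assumed)

def sliding_control(queue, rate):
    # Discontinuous control law u = -K * sign(s); on the surface s = 0 the
    # queue error decays exponentially toward zero.
    s = LAM * (queue - Q_REF) + (rate - CAPACITY)
    return -K if s > 0 else K

rate, queue, dt = 120.0, 200.0, 0.01
for _ in range(2000):
    u = sliding_control(queue, rate)
    rate = max(0.0, rate + u * dt)                       # sender integrates the control signal
    queue = max(0.0, queue + (rate - CAPACITY) * dt)     # queue integrates the excess rate
print(f"after 20 time units: queue = {queue:.1f} packets, rate = {rate:.1f}")
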
