51

Performance Evaluation of QUIC protocol under Network Congestion

Srivastava, Amit 18 April 2017 (has links)
TCP is a widely used protocol for web traffic. However, TCP's connection setup and congestion response can increase web page load times for users. To address this issue, Google developed QUIC (Quick UDP Internet Connections), a UDP-based protocol that runs in the application layer. While already deployed, QUIC is not well studied, particularly QUIC's congestion response as compared to TCP's, which is critical for the stability of the Internet and for flow fairness. To study QUIC's congestion response we conducted three sets of experiments on a wired testbed. One set of experiments focused on QUIC and TCP throughput under added delay, another compared QUIC and TCP throughput under added packet loss, and the third had QUIC and TCP flows share a bottleneck link to study the fairness between TCP and QUIC flows. Our results show that with random packet loss QUIC delivers higher throughput than TCP. However, when sharing the same link, QUIC can be unfair to TCP: as the number of competing TCP flows increases, a QUIC flow takes a greater share of the available link capacity than the TCP flows.
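As a rough illustration of how fairness between competing QUIC and TCP flows can be quantified in experiments like these, the sketch below computes Jain's fairness index and per-protocol link share from measured per-flow throughputs; the function names and sample values are illustrative assumptions, not taken from the thesis.

```python
def jains_index(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = one flow takes everything."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

def protocol_share(flows):
    """Fraction of total measured throughput taken by each flow."""
    total = sum(flows.values())
    return {name: tput / total for name, tput in flows.items()}

# Hypothetical throughputs (Mbps) for one QUIC flow competing with four TCP flows.
measured = {"quic": 6.0, "tcp1": 1.0, "tcp2": 1.0, "tcp3": 1.0, "tcp4": 1.0}
print(jains_index(list(measured.values())))   # 0.5, well below 1.0 -> unfair sharing
print(protocol_share(measured))               # QUIC's share of the bottleneck capacity
```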
52

Spoplatnenie cestnej dopravy v Londýne / Charging for road transport in London

Koscelník, Štefan January 2011 (has links)
The aim of this work is to describe and evaluate the road charging project in London. The theoretical part deals with the issue of transport in urban conditions, its basic types, its problems, and possible approaches to tackling them. It also includes a description of electronic toll systems as tools used to manage traffic in the city. The practical part deals with the use of video detection technology for road charging in London. It covers the preparation and implementation of the project, its basic principles of operation, and the adjustments made during its operation. This is followed by an evaluation of the project in terms of its impact on traffic, business, and air quality in London. The work ends with recommendations for a possible implementation of a similar charging system in the city of Prague.
53

Propuesta de solución al congestionamiento vehicular en la rotonda Las Americas ubicada frente al Aeropuerto Internacional Jorge Chávez aplicando microsimulación en el software Vissim v.9 / Solution proposal to the vehicular congestion in the americas roundabout located in front of the International Airport Jorge Chavez applying microsimulation in the Vissim software v.9

Pérez Rodríguez, Carlos Martín, Porras Salazar, Carlos Martín 23 October 2019 (has links)
This thesis proposes a solution to vehicular congestion at the Las Américas roundabout, located in front of the Jorge Chávez International Airport, through a microsimulation built in the Vissim v9 software. The microsimulation was based on the Wiedemann parameters, which were used to calibrate and validate the model so that it reflects reality as closely as possible, considering the geometry of the study area and the psychology of the drivers. The first chapter presents the problem of vehicular congestion in Peru and Lima, some background from similar studies, the hypothesis, and the objectives of the investigation. The second chapter develops the theoretical framework that supports the criteria behind both the microsimulation model and the proposed improvement: the definitions of the different types of models, the fundamentals of microsimulation (the Wiedemann model), and an explanation of the analysis performed by Vissim v9. The third chapter describes the methodology used, from the collection of field data to the properly calibrated and validated microsimulation model. The fourth chapter presents the results of the microsimulation, which indicate that the current level of service of the roundabout is "F" and that the proposal improves it to "D", according to the HCM (2010). The last chapter presents the conclusions and recommendations, answering the general objective of this thesis. / Tesis
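For readers unfamiliar with the calibration parameters mentioned above, the Wiedemann 74 car-following model commonly used by Vissim for urban traffic expresses the desired safety distance roughly as follows. This is a sketch of the generally documented form; the symbol names follow common Vissim documentation and are not taken from this thesis.

```latex
% Wiedemann 74 desired following distance (as commonly documented for Vissim)
d = ax + bx, \qquad bx = \left(bx_{\mathrm{add}} + bx_{\mathrm{mult}} \cdot z\right)\sqrt{v}
% ax: average standstill distance; bx_add, bx_mult: additive and multiplicative parts
% of the safety distance (the main calibration parameters); z: random term, roughly
% N(0.5, 0.15); v: vehicle speed in m/s.
```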
54

A methodology for resolving multiple vehicle occlusion in visual traffic surveillance

Pang, Chun-cheong. January 2005 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
55

Relevance and Reliability of Area-Wide Congestion Performance Measures in Road Networks

Moran, Carlos January 2011 (has links)
For operational and planning purposes it is important to observe, predict and monitor the traffic performance of congested urban road links and networks. This monitoring effort describes the traffic conditions in road networks using congestion performance measures. The objective of this research is to analyse and evaluate methods for measuring congestion, and congestion performance measures, for monitoring purposes. To this end, a selection of the required aspects of the performance measures reported in the literature is considered. These aspects fall into two categories: the first relates to the statistical properties of the indicators, i.e. their reliability; the second relates to their ability to capture the impacts of congestion, i.e. their relevance. The reliability and relevance of the congestion performance measures are evaluated, and a recommendation of the most suitable indicator is provided at the end of the study. / QC 20110912
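As a concrete example of the kind of indicator being evaluated (an illustration chosen by us; the abstract does not single out this particular measure), a widely used congestion performance measure is the travel time index, the ratio of observed to free-flow travel time:

```latex
% Travel Time Index (TTI) for a link or network, and the corresponding delay
\mathrm{TTI} = \frac{t_{\mathrm{observed}}}{t_{\mathrm{free\text{-}flow}}}, \qquad
\text{delay} = t_{\mathrm{observed}} - t_{\mathrm{free\text{-}flow}}
% TTI = 1 corresponds to free-flow conditions; TTI = 1.5 means trips take 50% longer
% than under free flow. The reliability of the measure depends on the variability
% of the observed travel times it is computed from.
```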
56

Window-based congestion control : Modeling, analysis and design

Möller, Niels January 2008 (has links)
This thesis presents a model for the ACK-clock inner loop, common to virtually all Internet congestion control protocols, and analyzes the stability properties of this inner loop, as well as the stability and fairness properties of several window update mechanisms built on top of the ACK-clock. Aided by the model for the inner-loop, two new congestion control mechanisms are constructed, for wired and wireless networks. Internet traffic can be divided into two main types: TCP traffic and real-time traffic. Sending rates for TCP traffic, e.g., file-sharing, uses window-based congestion control, and adjust continuously to the network load. The sending rates for real-time traffic, e.g., voice over IP, are mostly independent of the network load. The current version of the Transmission Control Protocol (TCP) results in large queueing delays at bottlenecks, and poor quality for real-time applications that share a bottleneck link with TCP. The first contribution is a new model for the dynamic relationship between window sizes, sending rates, and queue sizes. This system, with window sizes as inputs, and queue sizes as outputs, is the inner loop at the core of window-based congestion control. The new model unifies two models that have been widely used in the literature. The dynamics of this system, including the static gain and the time constant, depend on the amount of cross traffic which is not subject to congestion control. The model is validated using ns-2 simulations, and it is shown that the system is stable. For moderate cross traffic, the system convergence time is a couple of roundtrip times. When introducing a new congestion control protocol, one important question is how flows using different protocols share resources. The second contribution is an analysis of the fairness when a flow using TCP Westwood+ is introduced in a network that is also used by a TCP New Reno flow. It is shown that the sharing of capacity depends on the buffer size at the bottleneck link. With a buffer size matching the bandwidth-delay product, both flows get equal shares. If the buffer size is smaller, Westwood+ gets a larger share. In the limit of zero buffering, it gets all the capacity. If the buffer size is larger, New Reno gets a larger share. In the limit of very large buffers, it gets 3/4 of the capacity. The third contribution is a new congestion control mechanism, maintaining small queues. The overall control structure is similar to the combination of TCP with Active Queue Management (AQM) and explicit congestion notification, where routers mark some packets according to a probability which depends on the queue size. The key ideas are to take advantage of the stability of the inner loop, and to use control laws for setting and reacting to packet marks that result in more frequent feedback than with AQM. Stability analysis for the single flow, single bottleneck topology gives a simple stability condition, which can be used to guide tuning. Simulations, both of the fluid-flow differential equations, and in the ns-2 packet simulator, show that the protocol maintains small queues. The simulations also indicate that tuning, using a single control parameter per link, is fairly easy. The final contribution is a split-connection scheme for downloads to a mobile terminal. A wireless mobile terminal requests a file from a web server, via a proxy. During the file transfer, the Radio Network Controller (RNC) informs the proxy about bandwidth changes over the radio channel, and the current RNC queue length. 
A novel control mechanism in the proxy uses this information to adjust the window size. In simulation studies, including one based on detailed radio-layer simulations, both the user response time and the link utilization are improved compared to TCP New Reno, Eifel and Snoop, both for a dedicated channel and for the shared channel in High-Speed Downlink Packet Access. / QC 20100830
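A minimal fluid-flow sketch of the ACK-clock inner loop described above, under the usual assumptions of a single bottleneck of capacity c, fixed propagation delays, and cross traffic not subject to congestion control. The notation is ours and need not match the thesis's model exactly.

```latex
% Sending rate of flow i driven by its window w_i and round-trip time \tau_i
r_i(t) = \frac{w_i(t)}{\tau_i(t)}, \qquad \tau_i(t) = d_i + \frac{q(t)}{c}
% The bottleneck queue integrates the excess of arrivals over capacity
\dot{q}(t) = \sum_i r_i(t) + x_c(t) - c, \qquad q(t) \ge 0
% With the windows w_i as inputs and the queue q as output, this is the inner loop
% on which the outer window-update laws (e.g., New Reno, Westwood+) act.
```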
57

Internet Congestion Control: Modeling and Stability Analysis

Wang, Lijun 08 August 2008 (has links)
The proliferation and universal adoption of the Internet has made it the key information transport platform of our time. Congestion occurs when resource demands exceed capacity, which results in poor performance in the form of low network utilization and high packet loss rates. The goal of congestion control mechanisms is to use the network resources as efficiently as possible. The research work in this thesis is centered on finding ways to address these problems and provide guidelines for predicting and controlling network performance, through the use of suitable mathematical tools and control analysis. The first congestion collapse in the Internet was observed in the 1980s. To solve the problem, Van Jacobson proposed the Transmission Control Protocol (TCP) congestion control algorithm based on the Additive Increase and Multiplicative Decrease (AIMD) mechanism in 1988. To be effective, a congestion control mechanism must be paired with a congestion detection scheme. To detect and distribute network congestion indicators fairly to all on-going flows, Active Queue Management (AQM), e.g., the Random Early Detection (RED) queue management scheme, has been developed for deployment in intermediate nodes. The currently dominant AIMD congestion control, coupled with RED queues in the core network, has been acknowledged as one of the key factors in the overwhelming success of the Internet. In this thesis, the AIMD/RED system, based on the fluid-flow model, is systematically studied. In particular, we concentrate on system modeling, stability analysis and bounds estimates. We first focus on the stability and fairness analysis of the AIMD/RED system with a single bottleneck. Then, we derive theoretical estimates for the upper and lower bounds of homogeneous and heterogeneous AIMD/RED systems with feedback delays and further discuss the system performance when it is not asymptotically stable. Last, we develop a general model for a class of multiple-bottleneck networks and discuss the stability properties of such a system. Theoretical and simulation results presented in this thesis provide insights for an in-depth understanding of the AIMD/RED system and help predict and control the system performance for an Internet with higher-data-rate links multiplexed with heterogeneous flows.
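For reference, a commonly used form of the fluid-flow AIMD/RED model studied in work of this kind is sketched below (N homogeneous flows, round-trip time R, link capacity C, standard TCP increase/decrease factors); the thesis may use a different parameterization.

```latex
% Fluid-flow AIMD window dynamics with delayed marking feedback, and the bottleneck queue
\dot{W}(t) = \frac{1}{R(t)} - \frac{W(t)}{2}\,\frac{W(t-R)}{R(t-R)}\,p(t-R),
\qquad
\dot{q}(t) = N\,\frac{W(t)}{R(t)} - C
% RED marking probability as a piecewise-linear function of the (averaged) queue length
p(q) =
\begin{cases}
0, & q < q_{\min},\\[2pt]
\dfrac{q - q_{\min}}{q_{\max} - q_{\min}}\,p_{\max}, & q_{\min} \le q \le q_{\max},\\[2pt]
1, & q > q_{\max}.
\end{cases}
```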
58

Congestion Control in Networks with Dynamic Flows

Ma, Kexin January 2007 (has links)
Congestion control in wireline networks has been studied extensively since the seminal work by Mazumdar et al. in 1998. It is well known that this global optimization problem can be implemented in a distributed manner. Stability and fairness are two main design objectives of congestion control mechanisms. Most of the literature assumes, in developing congestion control schemes, that the number of flows in the network is fixed and that each flow has an infinite backlog to transfer. However, this assumption may not hold in reality, so there is a need to study congestion control algorithms in the presence of dynamic flows. Only recently have short-lived flows been taken into account. In this thesis, we study utility maximization problems for networks with dynamic flows. In particular, we consider the case where each class of flows arrives according to a Poisson process and has a length given by a certain distribution. The goal is to maximize the long-term expected system utility, which is a function of the number of flows and the rate (identical within a given class) allocated to each flow. Our investigation shows that, as long as the average work brought by the arrival processes is strictly within the network stability region, the fairness and stability issues are independent. While stability can be guaranteed by, for example, a FIFO policy, utility maximization becomes an unconstrained optimization. We also provide a queueing interpretation of this seemingly surprising result and show that not all utility functions make sense under dynamic flows. Finally, we use simulation results to show that our algorithm indeed maximizes the expected system utility.
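To make the setting concrete, a standard network utility maximization formulation of the kind referred to above is sketched below, together with the flow-level load condition corresponding to "average work strictly within the stability region"; the notation is ours, not necessarily the thesis's.

```latex
% Static NUM: flows s with rates x_s and utilities U_s, links l with capacities c_l
\max_{x \ge 0} \; \sum_{s} U_s(x_s)
\quad \text{subject to} \quad \sum_{s:\, l \in \mathrm{route}(s)} x_s \le c_l \quad \forall l
% Flow-level load condition: class-k flows arrive as a Poisson process of rate
% \lambda_k with mean size E[F_k]; the offered work stays inside capacity on every link
\sum_{k:\, l \in \mathrm{route}(k)} \lambda_k\, \mathbb{E}[F_k] < c_l \quad \forall l .
```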
60

Congestion control algorithms of TCP in emerging networks

Bhandarkar, Sumitha 02 June 2009 (has links)
In this dissertation we examine some of the challenges faced by the congestion control algorithms of TCP in emerging networks. We focus on three main issues. First, we propose TCP with delayed congestion response (TCP-DCR), for improving performance in the presence of non-congestion events. TCP-DCR delays the congestion response for a short interval of time, allowing local recovery mechanisms to handle the event, if possible. If the event persists at the end of the delay, it is treated as congestion loss. We evaluate TCP-DCR through analysis and simulations. Results show significant performance improvements in the presence of non-congestion events with marginal impact in their absence. TCP-DCR maintains fairness with standard TCP variants that respond immediately. Second, we propose Layered TCP (LTCP), which modifies a TCP flow to behave as a collection of virtual flows (or layers) to improve efficiency in high-speed networks. The number of layers is determined by dynamic network conditions. Convergence properties and RTT-unfairness are maintained similar to those of TCP. We provide the intuition and the design of the LTCP protocol and evaluation results based on both simulations and a Linux implementation. Results show that LTCP is about an order of magnitude faster than TCP in utilizing high-bandwidth links while maintaining promising convergence properties. Third, we study the feasibility of employing congestion avoidance algorithms in TCP. We show that end-host based congestion prediction is more accurate than previously characterized. However, uncertainties in congestion prediction may be unavoidable. To address these uncertainties, we propose an end-host based mechanism called Probabilistic Early Response TCP (PERT). PERT emulates the probabilistic response function of the router-based scheme RED/ECN in the congestion response function of the end host. We show through extensive simulations that, similar to router-based RED/ECN, PERT provides fair bandwidth sharing with low queuing delays and negligible packet losses, without requiring router support. It exhibits better characteristics than TCP-Vegas, the illustrative end-host scheme. PERT can also be used for emulating other router schemes; we illustrate this through preliminary results for emulating the router-based mechanism REM/ECN. Finally, we show the interactions and benefits of combining the different proposed mechanisms.
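As a rough sketch of the kind of probabilistic response function PERT is described as emulating at the end host, the snippet below applies a RED/ECN-style probability profile driven by an estimated queuing delay. The thresholds, names and back-off rule are illustrative assumptions, not the dissertation's actual parameters.

```python
import random

def response_probability(qdelay_ms, min_th_ms=5.0, max_th_ms=10.0, p_max=0.05):
    """RED-like marking profile, computed at the sender from estimated queuing delay."""
    if qdelay_ms < min_th_ms:
        return 0.0
    if qdelay_ms > max_th_ms:
        return 1.0
    return p_max * (qdelay_ms - min_th_ms) / (max_th_ms - min_th_ms)

def maybe_back_off(cwnd, qdelay_ms):
    """Probabilistically reduce the congestion window early, before any loss occurs."""
    if random.random() < response_probability(qdelay_ms):
        return cwnd / 2.0   # illustrative multiplicative decrease
    return cwnd

# Example: the deeper the estimated queue, the more likely an early back-off.
print(response_probability(7.5))   # mid-range delay -> small but nonzero probability
```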
