31 |
Burst TCP: an approach for benefiting mice flows. Gonçalves, Glauco Estácio. January 2007.
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The Transmission Control Protocol (TCP) is responsible for providing the reliable data transport service of the TCP/IP stack and for carrying more than 90% of all Internet traffic. The stability and efficiency of the current TCP congestion control mechanisms have been extensively studied and are well known to the networking community. However, new Internet applications and functionalities continuously modify Internet traffic characteristics, demanding new research to adapt TCP to this new reality.
In particular, a traffic phenomenon known as "mice and elephants" has been motivating important research on TCP. The main point is that the standard TCP congestion control mechanisms were designed for elephants, leading small flows to experience poor performance. This is caused by the exponential behavior of Slow Start, which often causes multiple packet losses due to its aggressive increase.
This work examines in detail the problems caused by standard TCP congestion control for mice flows and studies the most important proposals to solve them. Based on this study, a modified TCP startup mechanism is proposed. Burst TCP (B-TCP) is an intuitive TCP modification that employs a responsive congestion window growth scheme based on the current window size to improve the performance of small flows. Moreover, B-TCP is easy to implement and requires TCP adjustments at the sender side only.
Simulation experiments show that B-TCP can significantly reduce both transfer times and packet losses for small flows without harming large flows.
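The abstract does not give the exact growth rule, so the following is only a hedged sketch of a window-size-dependent startup: the function name and the aggressiveness factor f are hypothetical, chosen to illustrate a per-ACK increment that shrinks as the window grows, in contrast with Slow Start's fixed +1 per ACK. It is not the thesis's actual B-TCP rule.

```python
def btcp_ack_increment(cwnd: float, f: float = 4.0) -> float:
    """Hypothetical per-ACK increment: aggressive for small windows,
    tapering off as the window grows (standard Slow Start adds 1 per ACK,
    doubling the window every RTT regardless of its size)."""
    return f / cwnd  # assumption: growth inversely tied to the current window

# Example: startup of a small ("mice") flow
cwnd = 2.0
for _ in range(20):  # twenty ACKs arrive
    cwnd += btcp_ack_increment(cwnd)
print(f"cwnd after 20 ACKs: {cwnd:.1f} packets")
```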
|
32 |
Evaluation and Optimization of Quality of Service (QoS) In IP Based Networks. Ghimire, Rajiv; Noor, Mustafa. January 2010.
The purpose of this thesis is to evaluate and analyze the performance of the RED (Random Early Detection) algorithm and of our proposed RED algorithm. As an active queue management scheme, RED has been considered an emerging issue in the last few years. Quality of service (QoS) is a central issue in today's Internet: the name itself signifies that special treatment is given to special traffic. With the passage of time, network traffic has grown exponentially, and end users have failed to get the service they paid for and expected. To overcome this problem, QoS for packet transmission came into discussion in the Internet community. RED is an active queue management scheme, designed for achieving QoS, that randomly drops packets whenever congestion occurs. In order to address this problem and increase the performance of the existing algorithm, we modified the RED algorithm. Our proposed solution is able to minimize packet drops over a given period of time while achieving the desired QoS. An experimental approach is used for the validation of the research hypothesis. Results show that the probability of packet dropping in our proposed RED algorithm during the simulation scenarios is significantly reduced by calculating the drop probability early and then invoking the pushback mechanism according to that calculated probability value.
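For reference, the classic RED gateway that the thesis modifies computes a per-packet drop probability from an exponentially weighted average of the queue length. The sketch below shows that baseline calculation only; the parameter values are illustrative defaults, not the thesis's tuned values, and the proposed early-calculation/pushback extension is not shown.

```python
import random

def red_drop(q_len, state, wq=0.002, min_th=5, max_th=15, max_p=0.02):
    """One arrival decision for classic RED (Floyd & Jacobson).

    state holds 'avg' (EWMA queue length) and 'count' (packets since last drop).
    Returns True if the arriving packet should be dropped (or marked).
    """
    state["avg"] = (1 - wq) * state["avg"] + wq * q_len    # smooth the queue length
    avg = state["avg"]
    if avg < min_th:
        state["count"] = -1
        return False                                       # no congestion expected
    if avg >= max_th:
        state["count"] = 0
        return True                                        # forced drop
    state["count"] += 1
    pb = max_p * (avg - min_th) / (max_th - min_th)        # linear ramp between thresholds
    pa = pb / max(1e-9, 1 - state["count"] * pb)           # spread drops out over time
    if random.random() < pa:
        state["count"] = 0
        return True
    return False

state = {"avg": 0.0, "count": 0}
trace = [18] * 50                                          # toy trace: queue stuck at 18 packets
drops = sum(red_drop(q, state, wq=0.1) for q in trace)     # larger wq so the toy converges quickly
print("drops:", drops)
```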
|
33 |
Network Traffic Control Based on Modern Control Techniques: Fuzzy Logic and Network Utility Maximization. Liu, Jungang. January 2014.
This thesis presents two modern control methods to address Internet traffic congestion control issues. Both are based on a distributed traffic management framework for the fast-growing Internet traffic, in which routers are deployed with intelligent or optimal data rate controllers to handle the traffic load.
The first one is called the IntelRate (Intelligent Rate) controller and uses fuzzy logic theory. Unlike other explicit traffic control protocols that have to estimate network parameters (e.g., link latency, bottleneck bandwidth, packet loss rate, or the number of flows), our fuzzy-logic-based explicit controller measures the router queue size directly. Hence it avoids the potential performance problems arising from parameter estimation while reducing computation and memory consumption in the routers. Communication QoS (Quality of Service) is assured by the good performance of the scheme, such as max-min fairness, low queueing delay and good robustness to network dynamics. Using Lyapunov's direct method, this controller is proved to be globally asymptotically stable.
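The core idea of feeding back an explicit rate computed from the directly measured queue can be sketched as follows; this is a minimal illustration that assumes a simple proportional correction in place of the fuzzy inference described in the thesis, and the function, gain k, and control interval are hypothetical.

```python
def explicit_rate(capacity_bps, queue_bytes, target_queue_bytes,
                  n_flows, interval_s, k=0.5):
    """Compute a per-flow rate from the directly measured queue size.

    Each flow gets an equal share of capacity, minus a correction that
    drains (or refills) the queue toward its target over the next control
    interval. No per-flow parameter estimation (RTT, loss rate, number of
    flows on other links) is required.
    """
    correction_bps = k * 8 * (queue_bytes - target_queue_bytes) / interval_s
    rate = (capacity_bps - correction_bps) / max(1, n_flows)
    return max(rate, 0.0)

# Example: 100 Mb/s link, 10 flows, queue 50 KB above its target
print(explicit_rate(100e6, 150_000, 100_000, 10, 0.1))
```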
The other one is called the OFEX (Optimal and Fully EXplicit) controller and uses convex optimization. This new scheme provides not only optimal bandwidth allocation but also a fully explicit congestion signal to sources. It uses the congestion signal from the most congested link, instead of the cumulative signal along a flow path. In this way, it overcomes the drawback of relatively explicit controllers that bias against multi-bottlenecked users, and it significantly improves their convergence speed and throughput performance. Furthermore, the OFEX controller design considers a dynamic model and proposes a remedial measure against the unpredictable bandwidth changes in contention-based multi-access networks (such as shared Ethernet or IEEE 802.11). Compared with earlier controllers, this remedy also effectively reduces the instantaneous queue size in a router, and thus significantly improves queueing delay and packet loss performance.
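The contrast with cumulative (price-summing) feedback can be illustrated as follows: each source obeys only the explicit rate of the most congested link on its path rather than a signal accumulated over all traversed links. This is a simplified illustration of the signalling structure, not the OFEX optimization itself, and the link rates are placeholders.

```python
def source_rate(path, link_rate_bps):
    """Fully explicit signal: adopt the per-flow rate advertised by the
    most congested (lowest-rate) link on the path, instead of combining
    feedback from every link traversed."""
    return min(link_rate_bps[link] for link in path)

# Example: a flow crossing two bottlenecks gets the tighter of the two rates
link_rate_bps = {"A-B": 20e6, "B-C": 5e6, "C-D": 50e6}
print(source_rate(["A-B", "B-C", "C-D"], link_rate_bps))  # 5 Mb/s
```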
Finally, the applications of these two controllers to wireless local area networks have been investigated. Their design guidelines and limits are also provided, based on our experience.
|
34 |
A Clean-Slate Architecture for Reliable Data Delivery in Wireless Mesh Networks. ElRakabawy, Sherif M.; Lindemann, Christoph. 17 December 2018.
In this paper, we introduce a clean-slate architecture for improving the delivery of data packets in IEEE 802.11 wireless mesh networks. In contrast to the rigid TCP/IP layer architecture, which exhibits serious deficiencies in such networks, we propose a unitary layer approach that combines routing and transport functionalities in a single layer. The new Mesh Transmission Layer (MTL) incorporates cross-interacting routing and transport modules for reliable data delivery based on the loss probabilities of wireless links. Due to the significant drawbacks of standard TCP over IEEE 802.11, we particularly focus on the transport module, proposing a pure rate-based approach that transmits data packets according to the current contention in the network. By considering the IEEE 802.11 spatial reuse constraint and employing a novel acknowledgment scheme, the new transport module improves both goodput and fairness in wireless mesh networks. In a comparative performance study, we show that MTL achieves up to 48% more goodput and up to 100% fewer packet drops than TCP/IP, while maintaining excellent fairness results.
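The rate-based transmission idea can be illustrated with a simple pacer: instead of ACK-clocked, window-sized bursts, the sender spaces packets by an inter-packet gap derived from its current rate estimate. This is only a sketch under the assumption of a given rate estimate; the contention-aware estimator itself, which in MTL accounts for the IEEE 802.11 spatial reuse constraint and link loss probabilities, is not shown.

```python
import time

def paced_send(packets, rate_bps, send):
    """Transmit packets at a fixed rate by enforcing an inter-packet gap,
    rather than in window-sized bursts."""
    for pkt in packets:
        gap_s = len(pkt) * 8 / rate_bps      # time this packet occupies at rate_bps
        send(pkt)
        time.sleep(gap_s)                    # pace: wait before releasing the next packet

# Example: pace three 1500-byte packets at 2 Mb/s
paced_send([b"x" * 1500] * 3, 2e6, send=lambda p: print("sent", len(p), "bytes"))
```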
|
35 |
A Hybrid (Active-Passive) VANET Clustering Technique. Moore, Garrett Lee. 01 January 2019.
Clustering serves a vital role in the operation of Vehicular Ad hoc Networks (VANETs) by continually grouping highly mobile vehicles into logical hierarchical structures. These moving clusters support Intelligent Transport Systems (ITS) applications and message routing by establishing a more stable global topology. Clustering increases the scalability of the VANET by eliminating broadcast storms caused by packet flooding, and it facilitates multi-channel operation. Clustering techniques in the literature fall into two categories: active and passive. Active techniques rely on periodic beacon messages from all vehicles containing location, velocity, and direction information; however, in areas of high vehicle density, congestion may occur on the long-range channel used for beacon messages, limiting the scale of the VANET. Passive techniques use information embedded in the packet headers of existing traffic to perform clustering; with this method, vehicles that are not transmitting traffic may leave cluster heads with stale and malformed clusters. This dissertation presents a hybrid active/passive clustering technique in which the passive technique is used as a congestion control strategy for areas where congestion is detected in the network. In this case, cluster members halt their periodic beacon messages and use position information embedded in the header to update the cluster head with their position. This work demonstrates through simulation that the hybrid technique reduces or eliminates the delays caused by congestion in the modified Distributed Coordination Function (DCF) process, thus increasing the scalability of VANETs in urban environments. Packet loss and delays caused by the hidden terminal problem were limited to distant, non-clustered vehicles. This dissertation report presents a literature review, methodology, results, analysis, and conclusion.
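The hybrid switch between active beaconing and passive, header-embedded updates can be sketched as simple mode logic in each cluster member. The congestion threshold, field name, and helper functions below are hypothetical, intended only to illustrate the switching rule.

```python
def broadcast_beacon(position, velocity):
    print("beacon:", position, velocity)        # stand-in for the periodic DSRC beacon

def piggyback(packet_headers, position):
    packet_headers["pos"] = position            # embed position in existing data traffic
    return packet_headers

def position_update(position, velocity, channel_busy_ratio,
                    pending_headers=None, congestion_threshold=0.6):
    """Hybrid rule: beacon actively while the channel is lightly loaded;
    switch to passive header-embedded updates once congestion is detected,
    so the cluster head stays current without adding beacon load."""
    if channel_busy_ratio <= congestion_threshold:
        broadcast_beacon(position, velocity)
    elif pending_headers is not None:
        piggyback(pending_headers, position)

# Example: congested channel, vehicle has outgoing data traffic to piggyback on
position_update((12.3, 45.6), 13.9, channel_busy_ratio=0.8, pending_headers={})
```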
|
36 |
A Survey on Congestion Detection and Control in Connected Vehicles. Paranjothi, Anirudh; Khan, Mohammad S.; Zeadally, Sherali. 01 November 2020.
The dynamic nature of the vehicular ad hoc network (VANET), induced by frequent topology changes and node mobility, imposes critical challenges for vehicular communications. Aggravated by the high volume of information dissemination among vehicles over limited bandwidth, the topological dynamics of VANETs cause congestion in the communication channel, which is the primary cause of problems such as message drops, delay, and degraded quality of service. To mitigate these problems, congestion detection and control techniques need to be incorporated into the vehicular network. Congestion control approaches can be either open-loop or closed-loop, based on pre-congestion or post-congestion strategies. We present a general architecture of vehicular communication in urban and highway environments as well as a state-of-the-art survey of recent congestion detection and control techniques. We also identify the drawbacks of existing approaches and classify them according to different hierarchical schemes. Through an extensive literature review, we recommend solution approaches and future directions for handling congestion in vehicular communications.
|
37 |
VANETomo: A Congestion Identification and Control Scheme in Connected Vehicles Using Network Tomography. Paranjothi, Anirudh; Khan, Mohammad S.; Patan, Rizwan; Parizi, Reza M.; Atiquzzaman, Mohammed. 01 February 2020.
The Internet of Things (IoT) is a vision for an internetwork of intelligent, communicating objects, which is on the cusp of transforming human lives. Smart transportation is one of the critical application domains of IoT and has benefitted from state-of-the-art technology to combat urban issues such as traffic congestion while promoting communication between vehicles, increasing driver safety and traffic efficiency, and ultimately paving the way for autonomous vehicles. Connected Vehicle (CV) technology, enabled by Dedicated Short Range Communication (DSRC), has attracted significant attention from industry, academia, and government due to its potential for improving driver comfort and safety. These vehicular communications have stringent transmission requirements. To assure the effectiveness and reliability of DSRC, efficient algorithms are needed to ensure adequate quality of service in the event of network congestion. Previously proposed congestion control methods require high levels of cooperation among Vehicular Ad-Hoc Network (VANET) nodes. This paper proposes a new approach, VANETomo, which uses statistical Network Tomography (NT) to infer transmission delays on links between vehicles with no cooperation from connected nodes. Our proposed method combines open-loop and closed-loop congestion control in a VANET environment. Simulation results show VANETomo outperforming other congestion control strategies.
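Statistical network tomography infers per-link quantities from end-to-end measurements alone. A minimal sketch of the classic linear model (each path delay is the sum of the delays of the links it traverses), solved here by least squares, is shown below; the paper's actual estimator for vehicular links may differ, and the numbers are purely illustrative.

```python
import numpy as np

# Routing matrix A: rows = measured paths, columns = links (1 if the path uses the link).
# Observed end-to-end path delays y (ms); per-link delays x are never measured directly.
A = np.array([[1, 1, 0],      # path 1 uses links 0 and 1
              [0, 1, 1],      # path 2 uses links 1 and 2
              [1, 0, 1]])     # path 3 uses links 0 and 2
y = np.array([30.0, 45.0, 35.0])

# Infer per-link delays without any cooperation from intermediate nodes.
x, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated link delays (ms):", np.round(x, 1))   # ~[10, 20, 25]
```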
|
38 |
Link Adaptation Algorithm and Metric for IEEE Standard 802.16. Ramachandran, Shyamal. 26 March 2004.
Broadband wireless access (BWA) is a promising emerging technology. In the past, most BWA systems were based on proprietary implementations. The Institute of Electrical and Electronics Engineers (IEEE) 802.16 task group recently standardized the physical (PHY) and medium-access control (MAC) layers for BWA systems. To operate in a wide range of physical channel conditions, the standard defines a robust and flexible PHY. A wide range of modulation and coding schemes are defined. While the standard provides a framework for implementing link adaptation, it does not define how exactly adaptation algorithms should be developed.
This thesis develops a link adaptation algorithm for the IEEE 802.16 standard's WirelessMAN air interface. This algorithm attempts to minimize the end-to-end delay in the system by selecting the optimal PHY burst profile on the air interface. The IEEE 802.16 standard recommends measuring C/(N+I) at the receiver to initiate a change in the burst profile, based on a comparison of the instantaneous C/(N+I) with preset C/(N+I) thresholds. This research determines the C/(N+I) thresholds for the standard-specified channel Type 1. To determine the precise C/(N+I) thresholds, the end-to-end (ETE) delay performance of IEEE 802.16 is studied for different PHY burst profiles at varying signal-to-noise ratio values. Based on these performance results, we demonstrate that link-layer ETE delay does not reflect the physical channel condition and is therefore not suitable as the criterion for determining the C/(N+I) thresholds. The IEEE 802.16 standard specifies that ARQ should not be implemented at the MAC layer. Our results demonstrate that this design decision renders the link-layer metrics unusable in the link adaptation algorithm.
Transmission Control Protocol (TCP) delay is identified as a suitable metric to serve as the link quality indicator. Our results show that buffering and retransmissions at the transport layer cause ETE TCP delay to rise exponentially below certain SNR values. We use TCP delay as the criterion to determine the SNR entry and exit thresholds for each of the PHY burst profiles. We present a simple link adaptation algorithm that attempts to minimize the end-to-end TCP delay based on the measured signal-to-noise ratio (SNR).
The effects of Internet latency, TCP's performance enhancement features and network traffic on the adaptation algorithm are also studied. Our results show that delay in the Internet can considerably affect the C/(N+I) thresholds used in the LA algorithm. We show that the load on the network also impacts the C/(N+I) thresholds significantly. We demonstrate that it is essential to characterize Internet delays and network load correctly while developing the LA algorithm. We also demonstrate that TCP's performance enhancement features do not have a significant impact on TCP delays over lossy wireless links. / Master of Science
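The resulting algorithm amounts to threshold-based burst profile selection with hysteresis: a profile is abandoned when the measured SNR (or C/(N+I)) falls below its exit threshold, and a more efficient profile is entered only when the measurement exceeds that profile's entry threshold. The sketch below uses placeholder threshold values and profile names, not the ones derived in the thesis.

```python
# Burst profiles ordered from most robust to most efficient,
# each with hypothetical (exit_snr_db, entry_snr_db) hysteresis thresholds.
PROFILES = [
    ("BPSK-1/2",   (None, None)),   # fallback profile, never exited downward
    ("QPSK-1/2",   (6.0,  9.0)),
    ("QPSK-3/4",   (9.5, 12.5)),
    ("16QAM-1/2", (12.0, 15.0)),
    ("16QAM-3/4", (16.0, 19.0)),
]

def adapt(current_idx: int, snr_db: float) -> int:
    """Pick the next burst profile index from the measured SNR."""
    exit_snr, _ = PROFILES[current_idx][1]
    if exit_snr is not None and snr_db < exit_snr:
        return current_idx - 1                     # channel degraded: step down
    if current_idx + 1 < len(PROFILES):
        next_entry = PROFILES[current_idx + 1][1][1]
        if snr_db >= next_entry:
            return current_idx + 1                 # channel improved: step up
    return current_idx

idx = 2                                            # start at QPSK-3/4
for snr in (13.0, 16.2, 11.0, 8.0):
    idx = adapt(idx, snr)
    print(f"SNR {snr:5.1f} dB -> {PROFILES[idx][0]}")
```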
|
39 |
Congestion control based on cross-layer game optimization in wireless mesh networks. Ma, X.; Xu, L.; Min, Geyong. January 2013.
Due to the attractive characteristics of high capacity, high speed, wide coverage and low transmission power, Wireless Mesh Networks (WMNs) have become an ideal choice for next-generation wireless communication systems. However, network congestion in WMNs deteriorates the quality of service provided to end users. Game-theoretic optimization is a modeling tool for studying multiple entities and the interactions between them, while cross-layer design has been shown to be practical for optimizing the performance of network communications. Therefore, a combination of game theory and cross-layer optimization, named cross-layer game optimization, is proposed to reduce network congestion in WMNs. In this paper, network congestion control in the transport layer and multi-path flow assignment in the network layer of WMNs are investigated. The proposed cross-layer game optimization algorithm is then employed to enable source nodes to change their set of paths and adjust their congestion windows according to the round-trip time so as to reach a Nash equilibrium. Finally, evaluation results show that the proposed cross-layer game optimization scheme achieves high throughput with low transmission delay.
|
40 |
Improving TCP performance over heterogeneous networks: The investigation and design of end-to-end techniques for improving TCP performance for transmission errors over heterogeneous data networks. Alnuem, M.A. January 2009.
The Transmission Control Protocol (TCP) is considered one of the most important protocols in the Internet. An important mechanism in TCP is the congestion control mechanism, which controls the TCP sending rate and makes TCP react to congestion signals. In today's heterogeneous networks, TCP may operate over links that have a lossy nature (wireless links, for example). TCP treats all packet loss as if it were due to congestion. Consequently, when used in networks that have lossy links, TCP aggressively reduces its sending rate even when losses are caused by transmission (non-congestion) errors in an uncongested network.
One solution to the problem is to discriminate between errors: to deal with congestion errors by reducing the TCP sending rate, and to use other actions for transmission errors. In this work we investigate the problem and propose a solution using an end-to-end error discriminator. The error discriminator improves the current congestion window mechanism in TCP and decides when to cut the congestion window and by how much.
We have identified three areas where TCP interacts with drops: the congestion window update mechanism, the retransmission mechanism and the timeout mechanism. All of these are part of the TCP congestion control mechanism, and we propose changes to each of them in order to allow TCP to cope with transmission errors. First, we propose a new TCP congestion window action (CWA) for transmission errors that delays the window-cut decision until TCP has received all duplicate acknowledgments for a given window of data (packets in flight). This gives TCP a clear picture of the number of drops from that window, and the congestion window is then reduced only by the number of dropped packets. We also propose a safety mechanism to prevent this algorithm from causing congestion in the network, using an extra congestion window threshold (tthresh) to preserve the safe region where there are no drops of any kind. The second algorithm is a new retransmission action to deal with multiple drops from the same window. This multiple drops action (MDA) prevents TCP from falling into consecutive timeout events by resending all dropped packets from the same window. A third algorithm calculates a new back-off policy for the TCP retransmission timeout based on the network's available bandwidth. This new retransmission timeout action (RTA) helps relate the length of the timeout event to current network conditions, especially under heavy transmission error rates.
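A minimal sketch of the congestion window action described above is given below, assuming the drop count for the in-flight window is already known from the collected duplicate acknowledgments; the tthresh safety mechanism is deliberately left out of the sketch.

```python
def cwa_new_window(cwnd_pkts: int, dropped_pkts: int) -> int:
    """Congestion window action (CWA) for losses judged to be transmission errors:
    once all duplicate ACKs for the window in flight have been collected,
    shrink the window only by the number of packets actually lost,
    rather than halving it as standard TCP would.
    (The thesis's tthresh safety mechanism, which guards against causing
    congestion, is not modelled in this sketch.)"""
    return max(cwnd_pkts - dropped_pkts, 1)

# Example: 2 of 40 in-flight packets were lost on a lossy wireless hop;
# standard TCP would cut to roughly 20 packets, the CWA keeps 38.
print(cwa_new_window(cwnd_pkts=40, dropped_pkts=2))
```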
The three algorithms have been combined and incorporated into a delay-based error discriminator. The improvement of the new algorithm is measured along with its impact on the network in terms of congestion drop rate, end-to-end delay, average queue size and fairness in sharing the bottleneck bandwidth. The results show that the proposed error discriminator, together with the new actions for transmission errors, increases the performance of TCP while reducing the load on the network compared to existing error discriminators. The proposed error discriminator also delivers excellent fairness values for sharing the bottleneck bandwidth.
Finally, improvements to the basic error discriminator are proposed by using the multiple drops action (MDA) for both transmission and congestion errors. The results show improvements in performance as well as decreases in congestion loss rates when compared to a similar error discriminator. / Ministry of Higher Education and King Saud University in Saudi Arabia.
|