1

Window-based congestion control : Modeling, analysis and design

Möller, Niels January 2008
This thesis presents a model for the ACK-clock inner loop, common to virtually all Internet congestion control protocols, and analyzes the stability properties of this inner loop, as well as the stability and fairness properties of several window update mechanisms built on top of the ACK-clock. Aided by the model for the inner loop, two new congestion control mechanisms are constructed, for wired and wireless networks.

Internet traffic can be divided into two main types: TCP traffic and real-time traffic. TCP traffic, e.g., file sharing, uses window-based congestion control, and its sending rates adjust continuously to the network load. The sending rates for real-time traffic, e.g., voice over IP, are mostly independent of the network load. The current version of the Transmission Control Protocol (TCP) results in large queueing delays at bottlenecks, and poor quality for real-time applications that share a bottleneck link with TCP.

The first contribution is a new model for the dynamic relationship between window sizes, sending rates, and queue sizes. This system, with window sizes as inputs and queue sizes as outputs, is the inner loop at the core of window-based congestion control. The new model unifies two models that have been widely used in the literature. The dynamics of this system, including the static gain and the time constant, depend on the amount of cross traffic that is not subject to congestion control. The model is validated using ns-2 simulations, and it is shown that the system is stable. For moderate cross traffic, the system convergence time is a couple of roundtrip times.

When introducing a new congestion control protocol, one important question is how flows using different protocols share resources. The second contribution is an analysis of fairness when a flow using TCP Westwood+ is introduced in a network that is also used by a TCP New Reno flow. It is shown that the sharing of capacity depends on the buffer size at the bottleneck link. With a buffer size matching the bandwidth-delay product, both flows get equal shares. If the buffer size is smaller, Westwood+ gets a larger share; in the limit of zero buffering, it gets all the capacity. If the buffer size is larger, New Reno gets a larger share; in the limit of very large buffers, it gets 3/4 of the capacity.

The third contribution is a new congestion control mechanism that maintains small queues. The overall control structure is similar to the combination of TCP with Active Queue Management (AQM) and explicit congestion notification, where routers mark some packets according to a probability that depends on the queue size. The key ideas are to take advantage of the stability of the inner loop, and to use control laws for setting and reacting to packet marks that result in more frequent feedback than with AQM. Stability analysis for the single-flow, single-bottleneck topology gives a simple stability condition, which can be used to guide tuning. Simulations, both of the fluid-flow differential equations and in the ns-2 packet simulator, show that the protocol maintains small queues. The simulations also indicate that tuning, using a single control parameter per link, is fairly easy.

The final contribution is a split-connection scheme for downloads to a mobile terminal. A wireless mobile terminal requests a file from a web server via a proxy. During the file transfer, the Radio Network Controller (RNC) informs the proxy about bandwidth changes over the radio channel and the current RNC queue length. A novel control mechanism in the proxy uses this information to adjust the window size. In simulation studies, including one based on detailed radio-layer simulations, both the user response time and the link utilization are improved compared to TCP New Reno, Eifel, and Snoop, both for a dedicated channel and for the shared channel in High-Speed Downlink Packet Access.
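As a sketch of the kind of inner-loop model the first contribution describes: for a single bottleneck of capacity c shared by flows with windows w_n and round-trip propagation delays d_n, a standard joint model of ACK-clocked sending rates and queue dynamics reads as below. The notation is ours, assumed for illustration; the thesis' exact formulation may differ.

```latex
% ACK-clocked sending rate and bottleneck queue dynamics (assumed notation):
% r_n: rate of flow n, q: queue length, c: link capacity,
% x_c: cross traffic not subject to congestion control
r_n(t) = \frac{w_n(t)}{d_n + q(t)/c},
\qquad
\dot{q}(t) = \sum_n r_n(t) + x_c - c \quad \text{for } q(t) > 0.
```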
2

Performance Prediction Models for Rate-based and Window-based Flow Control Mechanisms

Wu, Lien-Wen 18 January 2006
In this dissertation, we present performance prediction models for rate-based and window-based flow control mechanisms. For rate-based flow control, such as in ATM networks, we derive two analytical models to predict the ACR rates for congestion-free and congested networks, respectively. To address the coordination problems of TCP over ATM networks, we propose a new algorithm that monitors the states of ATM switches and adjusts the TCP congestion window size based on RM cells. For window-based flow control mechanisms, such as in TCP-Reno and TCP-SACK, we present analytical models that systematically capture the characteristics of multiple consecutive packet losses in TCP windows. Through fast retransmission, the lost packets may or may not be recovered. We therefore present upper-bound analyses for the slow start and congestion avoidance phases to study the effects of multiple packet losses on TCP performance; above the proposed upper bounds, the lost packets may not be successfully recovered through fast retransmission. Finally, we develop a model to study TCP performance in terms of the throughput degradation resulting from multiple consecutive packet losses. The analytical results from the throughput degradation model are validated through OPNET simulation.
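For background, a minimal sketch of the standard Reno-style window rules that such loss analyses build on: slow start roughly doubles the congestion window each RTT, congestion avoidance adds about one segment per RTT, and fast retransmit halves the window. This is illustrative only, not the dissertation's model; the names and simplifications are ours.

```python
def next_cwnd(cwnd: float, ssthresh: float) -> float:
    """Advance the congestion window by one RTT worth of ACKs (sketch)."""
    if cwnd < ssthresh:
        return min(2 * cwnd, ssthresh)  # slow start: doubles each RTT
    return cwnd + 1                     # congestion avoidance: +1 segment/RTT

def on_triple_duplicate_ack(cwnd: float) -> tuple[float, float]:
    """Fast retransmit/recovery: halve the window, Reno-style."""
    ssthresh = max(cwnd / 2, 2)
    return ssthresh, ssthresh           # (new cwnd, new ssthresh)
```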
3

A Study of Rate-based TCP Mechanisms

Lai, Hsiu-Hung 24 August 2006
Many applications in modern science need to transmit extremely large amounts of data over wide area networks. These data usually have no stringent real-time requirements, but require large bandwidth to finish transmission within reasonable time. High-energy physics experiments and climate modeling and analysis are typical examples of such applications. As TCP is known to perform inefficiently over networks with a large delay-bandwidth product, efficient transmission of this kind of massive, non-real-time data has been heavily studied in the past. Previous solutions work well in dedicated networks, but compete with normal TCP connections for bandwidth if they operate in public networks. The objective of this thesis is to design a new transmission protocol for the above applications that can operate in public networks without affecting normal TCP connections. The new protocol is called the Rate Control Transmission Protocol (RCTP). The idea is to apply the packet-pair measurement technique to measure the transmission's bandwidth share in the network. The sending rate is based on that measurement and is precisely compensated by RTT variance measurements. Due to the RTT compensation, RCTP can efficiently utilize unused bandwidth in the network without affecting normal TCP transmissions, making it well suited for transmitting massive, non-real-time data in public networks.
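A minimal sketch of the packet-pair idea RCTP builds on: two packets sent back to back are spread apart by the bottleneck, so the gap between their arrivals estimates the available bandwidth share. The function name and simplifications are ours, not RCTP's.

```python
def packet_pair_bandwidth(arrival_gap_s: float, packet_size_bytes: int) -> float:
    """Estimate the bandwidth share from one back-to-back packet pair.

    The bottleneck serves one packet per inter-arrival gap,
    so bandwidth ~ packet size / gap.
    """
    return packet_size_bytes * 8 / arrival_gap_s  # bits per second

# Example: 1500-byte packets arriving 1.2 ms apart -> ~10 Mbit/s
print(packet_pair_bandwidth(0.0012, 1500))
```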
4

Modeling the Throughput Performance of the SF-SACK Protocol

Voicu, Laura M. 30 March 2006
Besides the two classical techniques used to evaluate the performance of a protocol, computer simulation and experimental measurement, mathematical modeling has been used to study the performance of the TCP protocol. This technique gives an elegant way to gain insight into the behavior of a protocol, while providing useful information about its performance. This thesis presents an analytical model for the SF-SACK protocol, a TCP SACK-based protocol conceived to be appropriate for data and streaming applications. SF-SACK modifies the multiplicative part of the Additive Increase Multiplicative Decrease scheme of TCP to provide good performance for data and streaming applications, while avoiding the TCP-friendliness problem of the Internet. Modeling the SF-SACK protocol raises new challenges compared to classical TCP modeling in two ways: first, the model needs to be adapted to more complex congestion window dynamics, and second, it needs to incorporate the scheduler that SF-SACK uses to maintain a periodically updated value of the congestion window. The model presented here is built progressively to address these challenges. The first step considers only losses detected by triple-duplicate acknowledgments, with the restriction that one such loss happens in each scheduler interval. The second step considers losses detected via triple-duplicate acknowledgments while eliminating the above restriction. Finally, the third step includes losses detected via time-outs. The result is an analytical characterization of the steady-state send rate and throughput of an SF-SACK flow as a function of the loss probability, the round-trip time (RTT), the time-out interval, and the scheduler interval. The send rate and throughput of SF-SACK were compared against available results for TCP Reno; the resulting graphs show that SF-SACK achieves better performance than TCP. The analytical model of SF-SACK follows the trends of the results presently available from both the ns-2 simulator and experimental measurements.
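For reference, the widely cited steady-state send-rate model for TCP Reno (Padhye et al.) is the kind of characterization the SF-SACK model extends with the scheduler interval. Here p is the loss probability, b the number of packets acknowledged per ACK, and T_0 the time-out interval; this is the classical Reno formula, not the SF-SACK result itself.

```latex
B(p) \approx
\frac{1}{RTT\sqrt{\frac{2bp}{3}}
  + T_0 \min\!\left(1,\, 3\sqrt{\frac{3bp}{8}}\right) p \left(1 + 32p^2\right)}
```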
5

TCP/AQM Congestion Control Based on the H2/H∞ Theory

Haghighizadeh, Navin January 2016
This thesis uses a modern control approach to address Internet traffic control issues in the Transport Layer. Motivated by the literature, we use the mixed H2/H∞ formulation to obtain the good transient performance of an H2 controller and the robustness of an H∞ controller, while avoiding their individual deficiencies. The H2/H∞ controller is designed by formulating an optimization problem using the H2-norm and the H∞-norm of the system, which can be solved by an LMI approach using MATLAB. Our design starts with the modeling of a router and the control system, augmenting the network plant function with the Sensitivity function S, the Complementary Sensitivity function T, and the Input Sensitivity function U. These sensitivity functions, along with their weight functions, are used to shape the closed-loop dynamics of the traffic control. By choosing different combinations of the sensitivity functions, we obtain the SU, ST, and STU controllers. Both window-based and rate-based versions of these H2/H∞ controllers have been designed and investigated, and we have proved that these controllers are stable using Lyapunov's first method. Next, we verify the performance of the controllers by OPNET simulation using several performance measures: queue length, throughput, queueing delay, packet loss rate, and goodput. The performance evaluation via simulation demonstrates their robustness and improved transient response, such as rise/fall time and peak queue value. We have also investigated the controllers' performance subject to network dynamics, as well as in comparison with other controllers. Finally, we have extended these controllers for real-time application: the controller can be updated in a short time whenever new network parameter values are detected, so that optimum performance is maintained.
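As a sketch of the weighted mixed-sensitivity objective behind such SU/ST/STU designs, with plant P, controller K, and weighting functions W_S, W_U, W_T: one common mixed H2/H∞ formulation minimizes the H2-norm of the weighted stack subject to an H∞-norm bound. The weights and the exact H2/H∞ split shown here are assumptions; the thesis' formulation may differ.

```latex
% S = (I + PK)^{-1}, \quad U = K(I + PK)^{-1}, \quad T = PK(I + PK)^{-1}
\min_{K}\;
\left\| \begin{pmatrix} W_S S \\ W_U U \\ W_T T \end{pmatrix} \right\|_{2}
\quad \text{subject to} \quad
\left\| \begin{pmatrix} W_S S \\ W_U U \\ W_T T \end{pmatrix} \right\|_{\infty} < \gamma
```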
6

Exploratory Analysis of Human Sleep Data

Laxminarayan, Parameshvyas 19 January 2004
In this thesis we develop data mining techniques to analyze sleep irregularities in humans. We investigate the effects of several demographic, behavioral, and emotional factors on sleep progression and on patients' susceptibility to sleep-related and other disorders. Mining is performed over subjective and objective data collected from patients visiting the UMass Medical Center and the Day Kimball Hospital for treatment. Subjective data are obtained from patient responses to questions posed in a sleep questionnaire. Objective data comprise observations and clinical measurements recorded by sleep technicians using a suite of instruments collectively called a polysomnogram. We create suitable filters to capture significant events within sleep epochs. We propose and employ a Window-based Association Rule Mining Algorithm to discover associations among sleep progression, pathology, demographics, and other factors. This algorithm is a modified and extended version of the Set-and-Sequences Association Rule Mining Algorithm developed at WPI to support the mining of association rules from complex data types. We analyze both the medical and the statistical significance of the associations discovered by our algorithm. We also develop predictive classification models using logistic regression and compare the results with those obtained through association rule mining.
7

A Reservoir of Adaptive Algorithms for Online Learning from Evolving Data Streams

Pesaranghader, Ali 26 September 2018
Continuous change and development are essential aspects of evolving environments and applications, including, but not limited to, smart cities, military, medicine, nuclear reactors, self-driving cars, aviation, and aerospace. The fundamental characteristics of such environments may evolve, causing dangerous consequences, e.g., putting people's lives at stake, if no reaction is taken. Therefore, learning systems need to apply intelligent algorithms to monitor evolvement in their environments and update themselves effectively. Further, we may experience fluctuations in the performance of learning algorithms due to the continuously evolving nature of incoming data: the currently efficient learning approach may become deprecated after a change in the data or the environment. Hence, the question 'how to have an efficient learning algorithm over time against evolving data?' has to be addressed. In this thesis, we make two contributions to address these challenges.

In the machine learning literature, the phenomenon of (distributional) change in data is known as concept drift. Concept drift may shift decision boundaries and cause a decline in accuracy. Learning algorithms therefore have to detect concept drift in evolving data streams and replace their predictive models accordingly. To address this challenge, adaptive learners have been devised, which may utilize drift detection methods to locate the drift points in dynamic and changing data streams. A drift detection method that discovers drift points quickly, with the lowest false positive and false negative rates, is preferred. A false positive refers to incorrectly alarming for concept drift, and a false negative refers to failing to alarm for actual concept drift.

In this thesis, we introduce three algorithms, called the Fast Hoeffding Drift Detection Method (FHDDM), the Stacking Fast Hoeffding Drift Detection Method (FHDDMS), and the McDiarmid Drift Detection Methods (MDDMs), for detecting drift points with minimum delay, false positive, and false negative rates. FHDDM is a sliding-window-based algorithm that applies Hoeffding's inequality (Hoeffding, 1963) to detect concept drift. FHDDM slides its window over the prediction results, which are either 1 (for a correct prediction) or 0 (for a wrong prediction). Meanwhile, it compares the mean of the elements inside the window with the maximum mean observed so far; a significant difference between the two means, upper-bounded by the Hoeffding inequality, indicates the occurrence of concept drift. FHDDMS extends the FHDDM algorithm by sliding multiple windows over its entries for better drift detection in terms of detection delay and false negative rate. In contrast to FHDDM/S, the MDDM variants assign weights to their entries, i.e., higher weights are associated with the most recent entries in the sliding window, for faster detection of concept drift. The rationale is that recent examples adequately reflect the ongoing situation, so putting higher weights on the latest entries lets us detect concept drift more quickly. An MDDM algorithm bounds the difference between the weighted mean of the elements in the sliding window and the maximum weighted mean seen so far, using McDiarmid's inequality (McDiarmid, 1989). It alarms for concept drift once a significant difference is observed.

We experimentally show that FHDDM/S and the MDDMs outperform the state of the art, presenting promising results in terms of the adaptation and classification measures. Due to the evolving nature of data streams, the performance of an adaptive learner, defined by the classification, adaptation, and resource consumption measures, may fluctuate over time. In fact, a learning algorithm, in the form of a (classifier, detector) pair, may perform well before a concept drift point, but not after. We frame this problem with the question 'how can we ensure that an efficient classifier-detector pair is present at any time in an evolving environment?' To answer it, we have developed the Tornado framework, which runs various kinds of learning algorithms simultaneously against evolving data streams. Each algorithm incrementally and independently trains a predictive model and updates the statistics of its drift detector. Meanwhile, the framework monitors the (classifier, detector) pairs and recommends the most efficient one, with respect to classification, adaptation, and resource consumption performance, to the user. We further define the holistic CAR measure, which integrates the classification, adaptation, and resource consumption measures for evaluating the performance of adaptive learning algorithms. Our experiments confirm that the most efficient algorithm may differ over time because of the developing and evolving nature of data streams.
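A minimal sketch of the FHDDM test as described above; the window size n and confidence delta are assumed parameters, and the published method's defaults and reset behavior may differ.

```python
import math
from collections import deque

class FHDDM:
    """Fast Hoeffding Drift Detection Method (sketch).

    Slides a window over binary prediction results (1 = correct, 0 = wrong)
    and signals drift when the window mean falls below the maximum mean
    observed so far by more than the Hoeffding bound.
    """

    def __init__(self, window_size: int = 100, delta: float = 1e-6):
        self.n = window_size
        # Hoeffding bound for the mean of n variables bounded in [0, 1]
        self.epsilon = math.sqrt(math.log(1.0 / delta) / (2.0 * window_size))
        self.window = deque(maxlen=window_size)
        self.mu_max = 0.0

    def add(self, correct: bool) -> bool:
        """Record one prediction result; return True if drift is detected."""
        self.window.append(1 if correct else 0)
        if len(self.window) < self.n:
            return False
        mu = sum(self.window) / self.n
        self.mu_max = max(self.mu_max, mu)
        # Callers typically reset the window and mu_max after a detection.
        return self.mu_max - mu >= self.epsilon
```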
