1

Load shedding in network monitoring applications

Barlet Ros, Pere, 15 December 2008
Monitoring and mining real-time network data streams are crucial operations for managing and operating data networks. The information that network operators wish to extract from the network traffic differs in size, granularity and accuracy depending on the measurement task (e.g., the relevant data for capacity planning and intrusion detection are very different). To satisfy these different demands, a new class of monitoring systems is emerging to handle multiple, arbitrary monitoring applications. Such systems must inevitably cope with continuous overload situations caused by the large volumes, high data rates and bursty nature of network traffic. These overload situations can severely compromise the accuracy and effectiveness of monitoring systems precisely when their results are most valuable to network operators. In this thesis, we propose a technique called load shedding as an effective and low-cost alternative to over-provisioning in network monitoring systems. It allows these systems to handle overload situations efficiently in the presence of multiple, arbitrary and competing monitoring applications. We present the design and evaluation of a predictive load shedding scheme that can shed excess load under extreme traffic conditions and keep the accuracy of the monitoring applications within bounds defined by end users, while assuring a fair allocation of computing resources to non-cooperative applications. The main novelty of our scheme is that it treats monitoring applications as black boxes, with arbitrary (and highly variable) input traffic and processing cost. Without any explicit knowledge of the application internals, the proposed scheme extracts a set of features from the traffic streams to build an on-line prediction model of the resource requirements of each monitoring application, which is used to anticipate overload situations and control the overall resource usage by sampling the input packet streams.
This way, the monitoring system preserves a high degree of flexibility, increasing the range of applications and network scenarios where it can be used. Since not all monitoring applications are robust against sampling, we then extend our load shedding scheme to support custom load shedding methods defined by end users, in order to provide a generic solution for arbitrary monitoring applications. Our scheme allows the monitoring system to safely delegate the task of shedding excess load to the applications and still guarantee fairness of service with non-cooperative users. We implemented our load shedding scheme in an existing network monitoring system and deployed it in a research ISP network. We present experimental evidence of the performance and robustness of our system with several concurrent monitoring applications during long-lived executions and using real-world traffic traces.
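As an illustration of the idea, here is a minimal single-feature sketch (all names are hypothetical, and the actual scheme regresses over many traffic features per black-box application): predict the next batch's processing cost from the recently measured cost per packet, and shed just enough packets to keep the predicted cost within the CPU budget.

```python
class LoadShedder:
    """Toy predictive load shedder: a one-feature stand-in for the
    multi-feature on-line prediction model described in the abstract."""

    def __init__(self, cycles_budget, history=50):
        self.budget = cycles_budget     # CPU cycles available per batch
        self.history = history          # sliding window of past batches
        self.batches = []               # (packets_processed, measured_cycles)

    def record(self, packets, cycles):
        """Feed back the measured cost of a processed batch."""
        self.batches.append((packets, cycles))
        del self.batches[:-self.history]

    def predicted_cost(self, packets):
        """Predict the cost of the next batch from recent cost per packet."""
        if not self.batches:
            return 0.0
        per_pkt = (sum(c for _, c in self.batches)
                   / sum(p for p, _ in self.batches))
        return per_pkt * packets

    def sampling_rate(self, packets):
        """Sample just enough of the input stream to stay within budget."""
        cost = self.predicted_cost(packets)
        if cost <= self.budget:
            return 1.0                  # no overload predicted: keep everything
        return self.budget / cost       # shed the excess load
```

During an execution, the monitor would call `record` after each batch and apply `sampling_rate` to the next one, so the model tracks the (highly variable) per-application cost without inspecting application internals.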
2

Data Mining Algorithms for Traffic Sampling, Estimation and Forecasting

Coric, Vladimir, January 2014
Despite significant investments over the last few decades to enhance and improve road infrastructure worldwide, the capacity of road networks has not kept pace with the ever-increasing growth in demand. As a result, congestion has become endemic on many highways and city streets. As an alternative to the costly, and sometimes infeasible, construction of new roads, transportation departments are increasingly looking at ways to improve traffic flow over the existing infrastructure. The biggest challenge in accomplishing this goal is the ability to sample traffic data, estimate the current traffic state, and forecast its future behavior. In this thesis, we first address the problem of traffic sampling, proposing strategies for frugal sensing in which only a fraction of the observed traffic information is collected to reduce costs while achieving high accuracy. Next, we demonstrate how traffic estimation using deterministic traffic models can be improved using the proposed data reconstruction techniques. Finally, we show how a mixture-of-experts algorithm, consisting of two regime-specific linear predictors and a decision-tree gating function, can improve short-term and long-term traffic forecasting. As mobile devices become more pervasive, participatory sensing is becoming an attractive way of collecting large quantities of valuable location-based data. An important participatory sensing application is traffic monitoring, where GPS-enabled smartphones can provide invaluable information about traffic conditions. We propose a strategy for frugal sensing in which the participants send only a fraction of the observed traffic information to reduce costs while achieving high accuracy. The strategy is based on autonomous sensing, in which participants decide to send traffic information without guidance from the central server, thus reducing communication overhead and improving privacy.
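A minimal sketch of such an autonomous sensing rule (the threshold form and its value are assumptions for illustration; the thesis develops a more refined strategy): a participant transmits a speed reading only when it differs enough from the value the server last received, so no central guidance is needed.

```python
class FrugalSensor:
    """Toy autonomous sensing rule for a participatory traffic monitor:
    report a reading only when it deviates from the last transmitted
    value by more than `threshold`, suppressing redundant messages."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold  # minimum change (e.g. km/h) worth reporting
        self.last_sent = None       # what the server currently knows

    def observe(self, speed):
        """Return the reading to transmit, or None to stay silent."""
        if self.last_sent is None or abs(speed - self.last_sent) > self.threshold:
            self.last_sent = speed
            return speed
        return None                 # server's picture is still accurate enough
```

Because the decision depends only on locally held state, the participant's full trajectory never leaves the device, which is the privacy benefit noted above.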
To provide accurate and computationally efficient estimation of the current traffic, we propose to use a budgeted version of the Gaussian Process model on the server side. Experiments on real-life traffic data sets indicate that the proposed approach can use up to two orders of magnitude fewer samples than a baseline approach with only a negligible loss in accuracy. Estimation of the state of traffic provides a detailed picture of the conditions of a traffic network based on limited traffic measurements and, as such, plays a key role in intelligent transportation systems. Most often, traffic measurements are aggregated over multiple time steps, which raises the question of how best to use this information for state estimation. We propose reconstructing the high-resolution measurements from the aggregated ones and using them to correct the state estimates at every time step. Several reconstruction techniques from signal processing were considered, including kernel regression and a reconstruction approach based on convex optimization. Experimental results show that signal reconstruction leads to more accurate traffic state estimation than the standard approach for dealing with aggregated measurements. Accurate traffic speed forecasting can help in trip planning by allowing travelers to avoid congested routes, either by choosing alternative routes or by changing the departure time. An important feature of traffic is that it consists of free-flow and congested regimes, which have significantly different properties. Training a single traffic speed predictor for both regimes typically results in suboptimal accuracy. To address this problem, a mixture-of-experts algorithm consisting of two regime-specific linear predictors and a decision-tree gating function was developed. Experimental results showed that the mixture-of-experts approach outperforms several popular benchmark approaches. / Computer and Information Science
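The regime-switching forecaster can be illustrated with a one-split gate, i.e. a decision stump standing in for the decision-tree gating function; the gate threshold and expert coefficients below are invented purely for illustration.

```python
def moe_predict(speed_now, free_model, congested_model, gate_threshold=45.0):
    """Toy mixture of experts: a decision-stump gate routes the current
    observation to one of two regime-specific linear predictors.

    free_model / congested_model are (slope, intercept) pairs; the gate
    labels the current regime by comparing speed to a single threshold.
    """
    slope, intercept = (free_model if speed_now >= gate_threshold
                        else congested_model)
    return slope * speed_now + intercept
```

A single global linear model would have to average the two regimes' very different dynamics; the gate lets each expert specialize, which is the accuracy gain the abstract reports.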
3

Measuring, understanding and modelling internet traffic

Hohn, Nicolas, Unknown Date
This thesis concerns measuring, understanding and modelling Internet traffic. We first study the origins of the statistical properties of Internet traffic, in particular its scaling behaviour, and propose a constructive model of packet traffic with physically motivated parameters. We base our analysis on a large amount of empirical data measured on different networks, and use a so-called semi-experimental approach to isolate the features of traffic we seek to model. These results lead to the choice of a particular Poisson cluster process, known as the Bartlett-Lewis point process, for a new packet traffic model. This model has a small number of parameters with simple networking meaning, and is mathematically tractable. It allows us to gain valuable insight into the underlying mechanisms creating the observed statistics.

In practice, Internet traffic measurements are limited by the very large amount of data generated by high-bandwidth links. This leads us to also investigate traffic sampling strategies and their respective inversion methods. We argue that the packet sampling mechanism currently implemented in Internet routers is not practical when one wants to infer the statistics of the full traffic from partial measurements, and we advocate the use of flow sampling for many purposes. We show that such a sampling strategy is much easier to invert and can give reasonable estimates of higher-order traffic statistics, such as the distribution of the number of packets per flow and the spectral density of the packet arrival process. This inversion technique can also be used to fit the Bartlett-Lewis point process model from sampled traffic.

We complete our understanding of Internet traffic by focusing on the small-scale behaviour of packet traffic. To do so, we use data from a fully instrumented Tier-1 router and measure the delays experienced by all the packets crossing it.
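Why flow sampling is easy to invert can be shown in a few lines (a toy sketch in which each flow is abstracted to its packet count): keeping whole flows with probability p leaves an unbiased sample of flows, so per-flow statistics such as the packets-per-flow distribution carry over directly, and totals are recovered by rescaling counts by 1/p.

```python
import random

def flow_sample(flow_sizes, p, seed=0):
    """Flow sampling: keep each flow, with all of its packets,
    independently with probability p. The surviving flows are an
    unbiased sample of whole flows, unlike per-packet sampling,
    which truncates flows and requires a much harder inversion."""
    rng = random.Random(seed)
    return [size for size in flow_sizes if rng.random() < p]

def estimate_total_flows(sampled, p):
    """Invert the thinning: scale the sampled flow count by 1/p."""
    return len(sampled) / p
```

In this sketch the empirical mean flow size of the sampled flows is already an estimate of the true mean, with no correction needed; that is the ease of inversion the thesis argues for.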
We present a simple router model capable of reproducing the measured packet delays, and propose a scheme for exporting router performance information based on busy-period statistics. We conclude the thesis by showing how the Bartlett-Lewis point process can model the splitting and merging of packet streams in a router.
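A toy simulation of a Bartlett-Lewis (Poisson cluster) packet arrival process, under illustrative distributional choices (geometric packet counts and exponential within-flow gaps; the thesis fits physically motivated parameters to real traces): flow arrivals form a Poisson process, and each flow emits a finite train of packets.

```python
import random

def bartlett_lewis(flow_rate, mean_pkts, mean_gap, horizon, seed=0):
    """Simulate packet arrival times from a toy Bartlett-Lewis process.

    Cluster centres (flow arrivals) are Poisson with rate `flow_rate`;
    each flow emits a geometrically distributed number of packets
    (mean `mean_pkts`) separated by exponential gaps (mean `mean_gap`).
    """
    rng = random.Random(seed)
    times = []
    t = rng.expovariate(flow_rate)          # first flow arrival
    while t < horizon:
        s = t
        times.append(s)                     # first packet of the flow
        # continue the packet train with probability 1 - 1/mean_pkts,
        # giving a geometric flow size with the desired mean
        while rng.random() < 1.0 - 1.0 / mean_pkts:
            s += rng.expovariate(1.0 / mean_gap)
            times.append(s)
        t += rng.expovariate(flow_rate)     # next flow arrival
    return sorted(times)
```

With `flow_rate * mean_pkts` packets expected per unit time, this small generator already exhibits the clustered, bursty arrivals that motivate the model, while keeping each parameter's networking meaning (flow rate, flow size, in-flow spacing) explicit.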
