1 |
Reordering Packet-Based Data in Real-Time Data Acquisition Systems / Kilpatrick, Stephen; Rasche, Galen; Cunningham, Chris; Moodie, Myron; Abbott, Ben, October 2007 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Ubiquitous internet protocol (IP) hardware has reached performance and capability levels that
allow its use in data collection and real-time processing applications. Recent development
experience with IP-based airborne data acquisition systems has shown that the open, pre-existing
IP tools, standards, and capabilities support this form of distribution and sharing of data quite
nicely, especially when combined with IP multicast. Unfortunately, the packet-based nature of
our approach also posed some problems that required special handling to meet performance
requirements. We have developed methods and algorithms for the filtering, selecting, and
retiming problems associated with packet-based systems and present our approach in this paper.
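The paper's algorithms are not reproduced in the abstract, but the retiming problem it names can be pictured with a small sketch: buffer arriving packets in a min-heap keyed by their source timestamps, then release them in timestamp order once a fixed reordering window has elapsed, so late arrivals can still be slotted into place. The window length, field layout and release policy below are illustrative assumptions, not the authors' method.

```python
import heapq
import itertools

class ReorderBuffer:
    """Hold packets for a fixed window, then release them in
    timestamp order. A sketch only; not the paper's algorithm."""

    def __init__(self, window_s=0.050):
        self.window_s = window_s          # how long to hold packets
        self.heap = []                    # min-heap keyed by timestamp
        self._seq = itertools.count()     # tie-breaker for equal stamps

    def push(self, timestamp, payload):
        heapq.heappush(self.heap, (timestamp, next(self._seq), payload))

    def pop_ready(self, now):
        """Return packets older than the reordering window, in order."""
        ready = []
        while self.heap and self.heap[0][0] <= now - self.window_s:
            ts, _, payload = heapq.heappop(self.heap)
            ready.append((ts, payload))
        return ready
```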
|
2 |
On Admission Control for IP Networks Based on Probing / Más, Ignacio, January 2008 (has links)
The current Internet design is based on a best-effort service, which combines high utilization of network resources with architectural simplicity. As a consequence of this design, the Internet is unable to provide guaranteed or predictable quality of service (QoS) to real-time services that have constraints on end-to-end delay, delay jitter and packet loss. To add QoS capabilities to the present Internet, the new functions need to be simple to implement, while allowing high network utilization. In recent years, different methods have been investigated to provide the required QoS. Most of these methods include some form of admission control, so that new flows are only admitted to the network if the admission does not decrease the quality of connections already in progress below some defined level. To achieve the required simplicity, a new family of admission control methods, called end-to-end measurement-based admission control, moves the admission decision to the edges of the network. This thesis presents a set of methods for admission control based on measurements of packet loss. The thesis studies how to deploy admission control in an incremental way: First, admission control is included in the audiovisual real-time applications, without any support from the network. Second, admission control is enabled at the transport layer to differentiate between elastic and inelastic flows, by embedding the probing mechanism in UDP and using the inherent congestion control of TCP. Finally, admission control is deployed at the network layer by providing differentiated scheduling in the network for probe and data packets, which then allows the operator to control the blocking probability for the inelastic flows and the average throughput for the elastic flows. The thesis offers a description of the incremental steps to provide QoS on a DiffServ-based Internet. It analyzes the proposed schemes and provides extensive performance figures based on simulations and on real implementations. It also shows how the admission control can be used in multicast sessions by making the admission decision at the receiver. The thesis also provides two different mathematical analyses of the network-layer admission control, which enable operators to obtain initial configuration parameters for the admission decision, such as queue sizes, based on forecasted or measured traffic volume. The thesis ends by considering a new method for overload control in WLAN cells, closely based on the ideas for admission control presented in the rest of the articles.
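As a rough illustration of the probe-based admission decision described above (a sketch, not the thesis's actual mechanism), the following sends a short UDP probe stream, measures its loss, and admits the flow only if the measured loss stays below a threshold. The packet count, threshold, timeout and echo-based measurement are illustrative assumptions.

```python
import socket

def probe_admission(dst_addr, dst_port, n_probes=100,
                    loss_threshold=0.02, timeout_s=0.2):
    """Send n_probes UDP probes and count echoed replies; admit the
    flow only if probe loss is below the threshold. Assumes the far
    end echoes each probe back; all parameters are illustrative."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    received = 0
    for seq in range(n_probes):
        sock.sendto(seq.to_bytes(4, "big"), (dst_addr, dst_port))
        try:
            sock.recvfrom(1500)
            received += 1
        except socket.timeout:
            pass                      # probe or its echo was lost
    sock.close()
    loss = 1.0 - received / n_probes
    return loss <= loss_threshold     # True => admit the flow
```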
|
3 |
Improving the convergence of IP routing protocols / Francois, Pierre, 30 October 2007 (has links)
The IP protocol suite was initially designed to provide best-effort reachability among the nodes of a network or an inter-network. The goal was to design a set of routing solutions that would allow routers to automatically provide end-to-end connectivity among hosts. The solution was also meant to recover the connectivity
upon the failure of one or multiple devices supporting the service, without the need
of manual, slow, and error-prone reconfigurations. In other words, the requirement was to have an Internet that "converges" on its own.
Along with the "Internet Boom", network availability expectations increased,
as e-business emerged and companies started to associate loss of Internet connectivity
with loss of customers... and money. Internet Service Providers (ISPs) therefore relied on best-practice rules for the design and configuration of their networks, in order to improve their Quality of Service.
The goal of this thesis is to complement the IP routing suite so as to improve its resiliency. It provides enhancements to routing protocols that reduce IP packet losses when an IP network reacts to a change in its topology. It also provides techniques that allow ISPs to perform reconfigurations of their networks that do not lead to packet losses.
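The convergence events the thesis targets can be pictured with a toy link-state recomputation: between the moment a link fails and the moment every router has installed its new shortest paths, forwarding can loop or blackhole, and that transient is where the packet losses occur. A sketch with a made-up three-node topology:

```python
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbour: weight}}; returns shortest distances."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative topology; weights are made up.
g = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(g, "A"))            # A reaches C via B at cost 2
del g["A"]["B"]; del g["B"]["A"]   # the A-B link fails
print(dijkstra(g, "A"))            # after convergence: C directly, cost 5
```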
|
4 |
Quality of Service for IP Networks : In Theory and Practice / Quality of service för IP-nätverk : i teori och praktik / Rosen, Magnus von, January 2002 (has links)
Quality of Service (QoS) for IP networks is a set of methods for establishing better and more reliable performance for today's and tomorrow's networks. When transmitting real-time data from applications such as IP telephony, video conferencing and IP broadcasting, it is imperative that the data is transmitted quickly and with even delays. Longer delays mean problems when communicating; varying transfer times mean that data packets are delivered too late to be used, or even dropped. As network applications grow more demanding, the networks cannot always keep up. Even though a network may offer more bandwidth than needed, disturbances to sound and picture are to be expected because of the competition with other data traffic. QoS can solve many such problems by reserving private channels through a network, or by differentiating classes of traffic to prioritise the sensitive data. QoS also contains methods to speed up backbone data transfers by planning complete routes over a network in advance and avoiding congested or broken connections. This report explains QoS as it stands today, together with suggestions on how it could work for Axis Communications AB. It also presents an experiment to test some QoS methods in a real-time-sensitive situation, demonstrating the effectiveness and economic benefits of QoS. / Quality of Service (QoS) for IP networks is a set of methods for offering guarantees on the speed and reliability of transfers in IP networks. As networks become faster and more forms of communication move to IP networks, demands arise for more stable and reliable transfers. QoS makes it possible to use ordinary computer networks for, for example, telephony and video conferencing, applications that usually suffer from poor quality because network traffic is dropped or delayed. This report presents the current problems and their solutions in the form of QoS. To test certain QoS functions and the maturity of open-source QoS implementations, experiments are performed that also demonstrate the effectiveness and economic benefits of QoS.
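One QoS building block the report discusses, differentiating traffic classes so sensitive data is prioritised, is commonly realised by marking packets with a DiffServ code point. A minimal Linux-oriented sketch; the address, port and choice of the Expedited Forwarding code point are illustrative:

```python
import socket

# DSCP Expedited Forwarding (EF, value 46) is commonly used for
# voice-like traffic; the TOS byte carries DSCP in its upper 6 bits.
TOS_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Packets sent on this socket now carry the EF code point, so
# DiffServ-enabled routers can queue them preferentially, provided
# the network is configured to honour the marking.
sock.sendto(b"voice frame", ("192.0.2.10", 5004))
```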
|
5 |
Inteligentní systémy hromadného sběru dat v energetických sítích / Intelligent systems of mass data acquisition in power grids / Krejčír, Ľuboš, January 2011 (has links)
This thesis describes the issues of data collection in power distribution networks. It discusses the possibilities of data communication over wide area networks using the IEC 60870-5-104 communication protocol, which is used in power distribution systems for transmitting information over IP networks. The thesis presents four technologies suitable for data collection, with respect to the use of the utility's existing infrastructure. It focuses on the design of appropriate data types corresponding to the IEC 60870-5-104 protocol, and estimates the minimum data-transmission requirements of the proposed hierarchical network of collecting data concentrators. To verify the design, simulations are carried out based on the proposed data loads, followed by an analysis of network load and transmission delays. Finally, the results are analyzed and selected parts of the network are optimized to improve the results; the causes of the observed behaviour are discussed.
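The minimum-bandwidth estimate described above reduces to message size times reporting rate. A back-of-the-envelope sketch; IEC 60870-5-104 APDU sizes depend on the chosen ASDU types, so every figure here is an illustrative assumption rather than a value from the thesis:

```python
# Hypothetical sizing for one data concentrator uplink.
meters_per_concentrator = 200
points_per_meter = 10        # measured values reported per meter
report_interval_s = 60       # periodic reporting cycle
apdu_bytes = 25              # one measurand ASDU plus APCI header
tcp_ip_overhead_bytes = 40   # IPv4 + TCP headers, no options

msgs_per_s = meters_per_concentrator * points_per_meter / report_interval_s
bits_per_s = msgs_per_s * (apdu_bytes + tcp_ip_overhead_bytes) * 8
print(f"~{bits_per_s / 1000:.1f} kbit/s per concentrator")  # ~17.3 kbit/s
```

Even such modest rates illustrate why, at these message sizes, protocol overhead rather than payload tends to dominate the dimensioning.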
|
6 |
Impact of wireless losses on the predictability of end-to-end flow characteristics in Mobile IP Networks / Bhoite, Sameer Prabhakarrao, 17 February 2005
Technological advancements have led to an increase in the number of wireless and
mobile devices such as PDAs, laptops and smart phones. This has resulted in an ever-
increasing demand for wireless access to the Internet. Hence, wireless mobile traffic
is expected to form a significant fraction of Internet traffic in the near future, over
the so-called Mobile Internet Protocol (MIP) networks. For real-time applications,
such as voice, video and process monitoring and control, deployed over standard IP
networks, network resources must be properly allocated so that the mobile end-user
is guaranteed a certain Quality of Service (QoS). As with the wired and fixed IP
networks, MIP networks do not offer any QoS guarantees. Such networks have been
designed for non-real-time applications. In attempts to deploy real-time applications
in such networks without requiring major network infrastructure modifications, the
end-points must provide some level of QoS guarantees. Such QoS guarantees, or QoS
control, require the ability to predict the end-to-end flow characteristics.
In this research, network flow accumulation is used as a measure of end-to-end
network congestion. Careful analysis and study of the flow accumulation signal shows
that it has long-term dependencies and it is very noisy, thus making it very difficult
to predict. Hence, this work predicts the moving average of the flow accumulation
signal. Both single-step and multi-step predictors are developed using linear system
identification techniques. A multi-step prediction error of up to 17% is achieved for a
prediction horizon of up to 0.5 seconds.
The main thrust of this research is on the impact of wireless losses on the ability to
predict end-to-end flow accumulation. As opposed to wired, congestion-related packet
losses, the losses occurring in a wireless channel are to a large extent random, making
the prediction of flow accumulation more challenging. Flow accumulation prediction
studies in this research demonstrate that, if an accurate predictor is employed, the
increase in prediction error is up to 170% when wireless loss reaches as high as 15%,
as compared to the case of no wireless loss. As the predictor accuracy in the case of
no wireless loss deteriorates, the impact of wireless losses on the flow accumulation
prediction error decreases.
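As a rough illustration of the prediction approach (a sketch, not the author's actual models), the following smooths a noisy synthetic signal with a moving average and fits a multi-step linear predictor by least squares; the model order, horizon and signal are all illustrative:

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

def fit_ar_predictor(y, order, horizon):
    """Least-squares fit predicting y[t + horizon] from the last
    `order` samples y[t - order + 1 .. t]."""
    X = np.array([y[t - order + 1 : t + 1]
                  for t in range(order - 1, len(y) - horizon)])
    target = y[order - 1 + horizon :]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

# Synthetic noisy "flow accumulation" signal, for illustration only.
rng = np.random.default_rng(0)
t = np.arange(5000)
raw = 100 + 20 * np.sin(2 * np.pi * t / 500) + rng.normal(0, 10, t.size)

y = moving_average(raw, 50)              # predict the smoothed signal
coef = fit_ar_predictor(y, order=10, horizon=25)
t0 = 3000
prediction = y[t0 - 9 : t0 + 1] @ coef   # multi-step prediction at t0
print(prediction, y[t0 + 25])            # predicted vs actual value
```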
|
7 |
Congestion Removal in the Next Generation Internet / Suryasaputra, Robert (rsuryasaputra@gmail.com), January 2007 (has links)
The ongoing development of new and demanding Internet applications requires the Internet to deliver service levels significantly better than the best-effort service it currently provides and was built for. These improved service levels include guaranteed delays, jitter and bandwidth. Through extensive research into Quality of Service and Differentiated Services (DiffServ), it has become possible to provide guaranteed services; however, this turns out to be inadequate without the application of Traffic Engineering methodologies and principles. Traffic Engineering is an integral part of network operation. Its major goal is to deliver the best performance from an existing service provider's network resources and, at the same time, to enhance customers' view of network performance. In this thesis, several different traffic engineering methods for optimising the operation of native IP networks and of IP networks employing MPLS are proposed. A feature of these new methods is their fast run times, which opens the way to making them suitable for an online traffic engineering environment. For native IP networks running shortest-path-based routing protocols, we show that an LP-based optimisation built on the well-known multi-commodity flow problem can be effective in removing network congestion. Since Internet service providers are now moving towards migrating their networks to MPLS, we have also formulated optimisation methods to traffic engineer MPLS networks by selecting suitable routing paths and utilising the explicit routing feature of MPLS. Although MPLS is capable of delivering traffic engineering across different classes of traffic, network operators still prefer to rely on the proven and simple IP-based routing protocols for best-effort traffic, and only use MPLS to route traffic requiring special forwarding treatment. Based on this fact, we propose a method that optimises the routing patterns applicable to different classes of traffic based on their bandwidth requirements. A traffic engineering comparison study that evaluates the performance of a neural-network-based method for MPLS networks and an LP-based weight-setting approach for shortest-path-based networks has been performed using the well-known open-source network simulator ns2. The comparative evaluation is based upon the packet loss probability. The final chapter of the thesis describes the software development of a network management application called OptiFlow, which integrates techniques described in earlier chapters, including the LP-based weight-setting optimisation methodology; it also uses the traffic matrix estimation techniques that are required as input to the weight-setting models that have been devised. The motivation for developing OptiFlow was to provide a prototype set of tools that meet the congestion management needs of networking industries (ISPs and telecommunications companies).
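The multi-commodity flow formulation underlying the LP-based congestion removal can be shown on a toy instance: route a single demand over two candidate paths so that the maximum link utilisation is minimised. The topology, capacities and demand are made up for illustration:

```python
from scipy.optimize import linprog

# Variables x = [f_direct, f_via_B, u]:
#   f_direct : flow on path A->C         (link capacity 8)
#   f_via_B  : flow on path A->B->C      (both links capacity 10)
#   u        : maximum link utilisation  (minimised)
c = [0, 0, 1]                  # objective: minimise u
A_ub = [[1, 0, -8],            # f_direct <= 8 * u   (link A->C)
        [0, 1, -10],           # f_via_B  <= 10 * u  (link A->B)
        [0, 1, -10]]           # f_via_B  <= 10 * u  (link B->C)
b_ub = [0, 0, 0]
A_eq = [[1, 1, 0]]             # the paths must carry the whole demand
b_eq = [10]                    # demand A->C of 10 units

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
print(res.x)   # ~[4.44, 5.56, 0.56]: both routes peak at ~56% load
```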
|
8 |
Adaptive Real-time Anomaly Detection for Safeguarding Critical Networks / Ring Burbeck, Kalle, January 2006 (has links)
Critical networks require defence in depth incorporating many different security technologies, including intrusion detection. One important intrusion detection approach is called anomaly detection, where the normal (good) behaviour of users of the protected system is modelled, often using machine learning or data mining techniques. During detection, new data is matched against the normality model, and deviations are marked as anomalies. Since no knowledge of attacks is needed to train the normality model, anomaly detection may detect previously unknown attacks. In this thesis we present ADWICE (Anomaly Detection With fast Incremental Clustering) and evaluate it in IP networks. ADWICE has the following properties: (i) Adaptation - Rather than making use of extensive periodic retraining sessions on stored off-line data to handle changes, ADWICE is fully incremental, making very flexible on-line training of the model possible without destroying what is already learnt. When subsets of the model are not useful anymore, those clusters can be forgotten. (ii) Performance - ADWICE is linear in the number of input data points, thereby heavily reducing training time compared to alternative clustering algorithms. Training time as well as detection time is further reduced by the use of an integrated search index. (iii) Scalability - Rather than keeping all data in memory, only compact cluster summaries are used. The linear time complexity also improves the scalability of training. We have implemented ADWICE and integrated the algorithm in a software agent. The agent is a part of the Safeguard agent architecture, developed to perform network monitoring, intrusion detection and correlation as well as recovery. We have also applied ADWICE to publicly available network data to compare our approach to related work with similar approaches. The evaluation resulted in a high detection rate at a reasonable false-positive rate. / Report code: LiU-Tek-Lic-2006:12.
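A stripped-down sketch of the cluster-summary idea behind ADWICE; this is not the published algorithm (the integrated search index and the forgetting mechanism are omitted, and the radii are illustrative):

```python
import numpy as np

class IncrementalClusters:
    """BIRCH-like summaries: each cluster is stored as [count,
    linear_sum], so the centroid is linear_sum / count and an
    update is O(1); no raw data points are kept in memory."""

    def __init__(self, merge_radius=1.0, anomaly_radius=3.0):
        self.merge_radius = merge_radius
        self.anomaly_radius = anomaly_radius
        self.clusters = []                   # list of [count, linear_sum]

    def _nearest(self, x):
        best, best_d = None, float("inf")
        for c in self.clusters:
            d = np.linalg.norm(x - c[1] / c[0])   # distance to centroid
            if d < best_d:
                best, best_d = c, d
        return best, best_d

    def train(self, x):
        x = np.asarray(x, dtype=float)
        c, d = self._nearest(x)
        if c is not None and d <= self.merge_radius:
            c[0] += 1                        # absorb into nearest cluster
            c[1] = c[1] + x
        else:
            self.clusters.append([1, x.copy()])  # start a new cluster

    def is_anomaly(self, x):
        _, d = self._nearest(np.asarray(x, dtype=float))
        return d > self.anomaly_radius       # far from all normality
```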
|
9 |
Automatic fault detection and localization in IP networks : Active probing from a single node perspective / Pettersson, Christopher, January 2015 (has links)
Fault management is a continuously demanded function in any kind of network management. Commonly it is carried out by a centralized entity on the network, which correlates collected information into likely diagnoses of the current system state. We survey the use of active on-demand measurements, often called active probing, together with passive readings, from the perspective of a single node. The solution is confined to the node and is isolated from the surrounding environment. The utility of this approach to fault diagnosis was found to depend on the environment in which the specific node was located. In conclusion, the less knowledge of the environment is available, the more useful this solution is. Consequently, this approach to fault diagnosis offers limited opportunities in the test environment, but greater prospects were found for it in a heterogeneous customer environment.
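At its simplest, an active probe issued from the node itself is a timed connection attempt. A minimal sketch; the targets and timeout are illustrative:

```python
import socket
import time

def tcp_probe(host, port, timeout_s=1.0):
    """Active probe: attempt a TCP connection and report
    (reachable, round-trip seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True, time.monotonic() - start
    except OSError:
        return False, None

# Probe a few illustrative targets on demand; a real diagnoser would
# correlate these results with passive readings from the same node.
for host, port in [("192.0.2.1", 80), ("192.0.2.53", 53)]:
    ok, rtt = tcp_probe(host, port)
    print(host, "up" if ok else "down", rtt)
```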
|
10 |
An Evaluation of Traffic Matrix Estimation Techniques for Large-Scale IP Networks / Adelani, Titus Olufemi, 09 February 2010
The information on the volume of traffic flowing between all possible origin and destination pairs in an IP network during a given period of time is generally referred to as the traffic matrix (TM). This information, which is very important for various traffic engineering tasks, is very costly and difficult to obtain on large operational IP networks; consequently, it is often inferred from readily available link load measurements.
In this thesis, we evaluated five TM estimation techniques, namely Tomogravity (TG), Entropy Maximization (EM), Quadratic Programming (QP), Linear Programming (LP) and Neural Network (NN), with gravity and worst-case bound (WCB) initial estimates. We found that the EM technique consistently performed best in most of our simulations and that the gravity model yielded better initial estimates than the WCB model. A hybrid of these techniques did not result in a considerable decrease in estimation errors. We achieved the most significant reduction in errors, however, by combining iterative proportional fitting estimates with the EM technique. We therefore propose this technique as a viable approach for estimating the traffic matrix of large-scale IP networks.
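The gravity model used for the initial estimates assumes that traffic from node i to node j is proportional to the product of i's total egress and j's total ingress; a small sketch with made-up totals:

```python
import numpy as np

egress = np.array([120.0, 80.0, 40.0])    # Mbps leaving each node
ingress = np.array([100.0, 90.0, 50.0])   # Mbps entering each node

# Gravity estimate: tm[i, j] = egress[i] * ingress[j] / total traffic.
tm = np.outer(egress, ingress) / ingress.sum()
np.fill_diagonal(tm, 0.0)                 # usually no self-traffic
print(tm)
# Techniques such as tomogravity or EM then refine this starting point
# until it is consistent with the measured link loads.
```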
|