871

Implementation and lessons learned from the Texas Synchrophasor Network

Kai, Moses An 15 February 2013 (has links)
For decades, power engineers have used simulations to predict grid stability and voltage phase angles. Only recently has equipment become available to actually measure phase angles at points hundreds of miles apart. A few of these systems are presently operated in the US by electric grids including the Electric Reliability Council of Texas (ERCOT) and the California Independent System Operator (ISO). However, the systems are in their infancy and are far from being used to improve grid reliability. This thesis describes the only independent synchronized phasor network in the US. Thanks to Schweitzer Engineering Laboratories (SEL), we have been streaming in data points from three locations plus the University of Texas at Austin (UT Austin) since January 2009. This thesis describes this network and the grid analysis done thus far.
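A synchrophasor network compares GPS-time-aligned voltage phase angles measured at buses hundreds of miles apart. As a hedged sketch of that comparison (the function name and the sample readings below are ours for illustration, not values from the thesis):

```python
# Illustrative sketch: comparing voltage phase angles from two
# GPS-time-aligned PMU streams. Sample values are invented.
import math  # kept for angle conversions if radians are needed

def angle_difference_deg(theta_a, theta_b):
    """Smallest signed difference between two phase angles in degrees,
    wrapped into (-180, 180]."""
    d = (theta_a - theta_b) % 360.0
    if d > 180.0:
        d -= 360.0
    return d

# Two simultaneous angle readings (degrees) from distant buses:
austin = 12.4
remote = -171.2
print(round(angle_difference_deg(austin, remote), 1))  # -> -176.4
```

Wrapping into (-180, 180] matters because raw subtraction near the ±180° boundary would otherwise report a spurious swing of nearly 360°.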
872

Performance and security trade-offs in high-speed networks : an investigation into the performance and security modelling and evaluation of high-speed networks based on the quantitative analysis and experimentation of queueing networks and generalised stochastic Petri nets

Miskeen, Guzlan Mohamed Alzaroug January 2013 (has links)
Most security mechanisms used in high-speed networks have been adopted without adequate quantification of their impact on performance degradation. Appropriate quantitative network models may be employed for the evaluation and prediction of 'optimal' performance vs. security trade-offs. Several quantitative models introduced in the literature are based on queueing networks (QNs) and generalised stochastic Petri nets (GSPNs). However, these models do not take into consideration Performance Engineering Principles (PEPs) and the adverse impact of traffic burstiness and security protocols on performance. The contributions of this thesis are based on the development of an effective quantitative methodology for the analysis of arbitrary QN models and GSPNs through discrete-event simulation (DES) and extended applications into performance vs. security trade-offs involving infrastructure and infrastructure-less high-speed networks under bursty traffic conditions. Specifically, investigations are carried out focusing, for illustration purposes, on high-speed network routers subject to Access Control List (ACL) and also Robotic Ad Hoc Networks (RANETs) with Wired Equivalent Privacy (WEP) and Selective Security (SS) protocols, respectively. The Generalised Exponential (GE) distribution is used to model inter-arrival and service times at each node in order to capture the traffic burstiness of the network and predict pessimistic 'upper bounds' of network performance. In the context of a router with an ACL mechanism representing an infrastructure network node, performance degradation is caused by high-speed incoming traffic in conjunction with ACL security computations, making the router a bottleneck in the network. To quantify and predict this degradation trade-off, the proposed quantitative methodology employs a suitable QN model consisting of two queues connected in a tandem configuration.
These queues have single or quad-core CPUs with multiple-classes and correspond to a security processing node and a transmission forwarding node. First-Come-First-Served (FCFS) and Head-of-the-Line (HoL) are the adopted service disciplines together with Complete Buffer Sharing (CBS) and Partial Buffer Sharing (PBS) buffer management schemes. The mean response time and packet loss probability at each queue are employed as typical performance metrics. Numerical experiments are carried out, based on DES, in order to establish a balanced trade-off between security and performance towards the design and development of efficient router architectures under bursty traffic conditions. The proposed methodology is also applied into the evaluation of performance vs. security trade-offs of robotic ad hoc networks (RANETs) with mobility subject to Wired Equivalent Privacy (WEP) and Selective Security (SS) protocols. WEP protocol is engaged to provide confidentiality and integrity to exchanged data amongst robotic nodes of a RANET and thus, to prevent data capturing by unauthorised users. WEP security mechanisms in RANETs, as infrastructure-less networks, are performed at each individual robotic node subject to traffic burstiness as well as nodal mobility. In this context, the proposed quantitative methodology is extended to incorporate an open QN model of a RANET with Gated queues (G-Queues), arbitrary topology and multiple classes of data packets with FCFS and HoL disciplines under bursty arrival traffic flows characterised by an Interrupted Compound Poisson Process (ICPP). SS is included in the Gated-QN (G-QN) model in order to establish an 'optimal' performance vs. security trade-off. For this purpose, PEPs, such as the provision of multiple classes with HoL priorities and the availability of dual CPUs, are complemented by the inclusion of robot's mobility, enabling realistic decisions in mitigating the performance of mobile robotic nodes in the presence of security. 
The mean marginal end-to-end delay was adopted as the performance metric that gives an indication of the security improvement. The proposed quantitative methodology is further enhanced by formulating an advanced hybrid framework for capturing 'optimal' performance vs. security trade-offs for each node of a RANET by taking security control and battery life more explicitly into consideration. Specifically, each robotic node is represented by a hybrid Gated GSPN (G-GSPN) and a QN model. In this context, the G-GSPN incorporates bursty multiple-class traffic flows, nodal mobility, security processing and control, whilst the QN model has, generally, an arbitrary configuration with finite-capacity channel queues reflecting 'intra'-robot (component-to-component) communication and 'inter'-robot transmissions. Two theoretical case studies from the literature are adapted to illustrate the utility of the QN towards modelling 'intra'- and 'inter'-robot communications. Extensions of the combined performance and security metrics (CPSMs) proposed in the literature are suggested to facilitate investigating and optimising RANET performance vs. security trade-offs. This framework has promising potential for modelling more meaningfully and explicitly the behaviour of security processing and control mechanisms as well as capturing robot heterogeneity (in terms of robot architecture and application/task context) in the near future (cf. [1]). Moreover, this framework should enable testing robot configurations during the design and development stages of RANETs as well as modifying and tuning existing configurations of RANETs towards enhanced 'optimal' performance and security trade-offs.
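The GE distribution used above to model bursty traffic admits a simple two-phase sampling interpretation that a DES study could use to generate inter-arrival times. A minimal sketch, assuming the common parameterisation by mean rate and squared coefficient of variation (SCV); the function and parameter names are ours:

```python
# A minimal sketch of sampling from the Generalised Exponential (GE)
# distribution: with probability (scv-1)/(scv+1) the inter-event time
# is zero (modelling a batch arrival), otherwise it is exponential
# with rate 2*rate/(scv+1). Mean is 1/rate, SCV is scv (scv >= 1).
import random

def ge_sample(rate, scv, rng):
    """Draw one GE(rate, scv) inter-event time."""
    if rng.random() < (scv - 1.0) / (scv + 1.0):
        return 0.0
    return rng.expovariate(2.0 * rate / (scv + 1.0))

rng = random.Random(42)
samples = [ge_sample(1.0, 4.0, rng) for _ in range(200_000)]
print(sum(samples) / len(samples))  # close to the mean 1/rate = 1.0
```

The zero-valued draws cluster arrivals into bursts, which is exactly the behaviour that makes GE-based models yield pessimistic 'upper bounds' on queueing performance.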
873

Analysing the impact of disruptions in intermodal transport networks: A micro simulation-based model

Burgholzer, Wolfgang, Bauer, Gerhard, Posset, Martin, Jammernegg, Werner 03 1900 (has links) (PDF)
Transport networks have to provide carriers with time-efficient alternative routes in case of disruptions. It is, therefore, essential for transport network planners and operators to identify sections within the network which, if broken, have a considerable negative impact on the network's performance. Research on transport network analysis provides many different approaches and models for identifying such critical sections. Most of them, however, are only applicable to mono-modal transport networks and calculate indices which represent the criticality of sections by using aggregated data. The model presented here, in contrast, focuses on the analysis of intermodal transport networks by using a traffic micro-simulation. Based on available real-life data, our approach models a transport network as well as its actual traffic participants and their individual decisions in case of a disruption. The resulting transport delay time due to a specific disruption helps to identify critical sections and critical networks as a whole. The results are therefore valuable decision support for transport network planners and operators seeking to make the infrastructure less vulnerable, more attractive for carriers and thus more economically sustainable. To show the applicability of the model we analyse the Austrian intermodal transport network and show how critical sections can be evaluated with this approach. (authors' abstract)
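The underlying idea of ranking sections by the delay their failure adds can be illustrated, far more crudely than the paper's micro-simulation, with a shortest-path toy model (the network and delays below are invented):

```python
# Toy illustration: rank network sections by the extra delay their
# disruption causes on the best route. Not the paper's model.
import heapq

def shortest_delay(graph, src, dst, blocked=None):
    """Dijkstra over a dict {node: {neighbour: delay}}; returns the
    minimum total delay from src to dst, skipping the blocked edge."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if blocked in ((u, v), (v, u)):
                continue  # this section is disrupted
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

net = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "D": 2},
       "C": {"A": 5, "D": 1}, "D": {"B": 2, "C": 1}}
base = shortest_delay(net, "A", "D")
for edge in [("A", "B"), ("B", "D"), ("A", "C")]:
    extra = shortest_delay(net, "A", "D", blocked=edge) - base
    print(edge, extra)  # larger extra delay => more critical section
```

The paper's contribution is precisely to replace this aggregate view with individual traffic participants and their rerouting decisions, but the "remove a section, measure the delay increase" loop is the same.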
874

Performance diagnosis in large operational networks

Mahimkar, Ajay 15 June 2011 (has links)
IP networks have become the unified platform that supports a rich and extremely diverse set of applications and services, including traditional IP data service, Voice over IP (VoIP), smart mobile devices (e.g., iPhone), Internet television (IPTV) and online gaming. Network performance and reliability are critical issues in today's operational networks because many applications impose increasingly stringent reliability and performance requirements. Even the smallest network performance degradation can cause significant customer distress. In addition, new network and service features (e.g., MPLS fast re-route capabilities) are continually rolled out across the network to support new applications, improve network performance, and reduce operational cost. Network operators are challenged with ensuring that network reliability and performance improve over time even in the face of constant change: network and service upgrades and recurring faulty behaviors. It is critical to detect, troubleshoot and repair performance degradations in a timely and accurate fashion. This is extremely challenging in large IP networks due to their massive scale, complicated topology, high protocol complexity, and continuously evolving nature through software or hardware upgrades, configuration changes or traffic engineering. In this dissertation, we first propose a novel infrastructure, NICE (Network-wide Information Correlation and Exploration), that enables detection and troubleshooting of chronic network conditions by analyzing statistical correlations across multiple data sources. NICE uses a novel circular permutation test to determine the statistical significance of correlation. It also allows flexible analysis at various spatial granularities (e.g., link, router, or network level). We validate NICE using real measurement data collected at a tier-1 ISP network. The results are quite positive.
We then apply NICE to troubleshoot real network issues in the tier-1 ISP network. In all three case studies, NICE successfully uncovers previously unknown chronic network conditions, resulting in improved network operations. Second, we extend NICE to detect and troubleshoot performance problems in IPTV networks. Compared to traditional ISP networks, IPTV distribution network typically adopts a different structure (tree-like multicast as opposed to mesh), imposes more restrictive service constraints (both in reliability and performance), and often faces a much larger scalability issue (managing millions of residential gateways versus thousands of provider-edge routers). Tailoring to the scale and structure of IPTV network, we propose a novel multi-resolution data analysis approach Giza that enables fast detection and localization of regions in the multicast tree hierarchy where the problem becomes significant. Furthermore, we develop several statistical data mining techniques to troubleshoot the identified problems and diagnose their root causes. Validation against operational experiences demonstrates the effectiveness of our approach in detecting important performance issues and identifying interesting dependencies. Finally, we design and implement a novel infrastructure MERCURY for detecting the impact of network upgrades on performance. It is crucial to monitor the network when upgrades are made because they can have a significant impact on network performance and if not monitored may lead to unexpected consequences in operational networks. This can be achieved manually for a small number of devices, but does not scale to large networks with hundreds or thousands of routers and extremely large number of different upgrades made on a regular basis. MERCURY extracts interesting triggers from a large number of network maintenance activities. It then identifies behavior changes in network performance caused by the triggers. 
It uses statistical rule mining and network configuration to identify commonality across the behavior changes. We systematically evaluate MERCURY using data collected at a large tier-1 ISP network. By comparing to operational practice, we show that MERCURY is able to capture the interesting triggers and behavior changes induced by the triggers. In some cases, MERCURY also discovers previously unknown network behaviors, demonstrating its effectiveness in identifying network conditions flying under the radar.
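The circular permutation test mentioned above can be sketched as follows; this is our own illustrative reconstruction of the general technique, not AT&T's implementation, and the event series are made up:

```python
# Sketch of a circular permutation significance test: rotate one time
# series against the other, preserving each series' autocorrelation
# (unlike a full random shuffle), and count how often the rotated
# correlation matches or beats the observed one.
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def circular_pvalue(x, y):
    """Fraction of circular shifts of y whose correlation with x is at
    least the unshifted correlation; a value near 0 would suggest the
    co-occurrence is unlikely to be a coincidence."""
    observed = pearson(x, y)
    n = len(y)
    hits = sum(pearson(x, y[k:] + y[:k]) >= observed
               for k in range(1, n))
    return hits / (n - 1)

x = [0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0]  # e.g. link flap events
y = [0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]  # e.g. packet loss spikes
print(circular_pvalue(x, y))
```

Rotating rather than shuffling is the key point: chronic network event series are bursty and autocorrelated, so a naive shuffle would wildly overstate significance.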
875

Ad Hoc Networks Measurement Model and Methods Based on Network Tomography

Yao, Ye 08 July 2011 (has links) (PDF)
The measurability of a Mobile Ad Hoc Network (MANET) is the precondition of its management, performance optimization and network resource re-allocation. However, a MANET is an infrastructure-free, multi-hop, self-organized temporary network, comprised of a group of mobile nodes with wireless communication devices. Not only does its topology vary over time, but the communication protocols used in its network layer or data link layer are diverse and non-standard. In order to solve the problem of measuring interior link performance (such as packet loss rate and delay) in a MANET, this thesis adopts external measurement based on network tomography (NT). To the best of our knowledge, the NT technique is adaptable for ad hoc network measurement. This thesis studies MANET measurement techniques based on NT in depth. The main contributions are: (1) An analysis technique for MANET topology dynamic characteristics based on a mobility model. First, a formalization of ad hoc network mobility models is described. Then a MANET topology snapshot capturing method is proposed to find and verify that MANET topology alternates periodically between steady and non-steady states. At the same time, it is proved that introducing the NT technique into ad hoc network measurement is practicable in theory. Fitness hypothesis verification is adopted to obtain the rule governing the ad hoc network topology dynamic characteristic parameters, and a Markov stochastic process is adopted to analyze MANET topology dynamics. The simulation results show that the method is valid and general enough to be used with all mobility models in the NS-2 tool, and that it yields an experimental formula for topology state holding time and a formula for topology state transition probability.
(2) An analysis technique for MANET topology dynamic characteristics based on measurement samples. When the scenario file of the mobility model cannot be obtained beforehand, end-to-end measurement is used in the MANET to obtain path delay times. The topology steady period of the MANET is then inferred by judging whether the path delay jitter is close to zero. At the same time, the MANET topology is identified using a hierarchical clustering method applied to measurement samples of path performance during the topology steady period, in order to support link performance inference. The simulation results verify that this method can not only detect the measurement window time of the MANET effectively, but also correctly identify the MANET topology during that window. (3) A MANET link performance inference algorithm based on a linear analysis model. The inequality relation between link and path performance, such as MANET loss rate, is deduced according to a linear model. It is shown that the communication characteristics of packets, such as delay and loss rate, are more similar when sub-paths share longer runs of links. When the rank of the routing matrix is equal to that of its augmented matrix, the linear model is used to describe the ad hoc network link performance inference method. The simulation results show that the algorithm is not only effective but also has short computing time. (4) A link performance inference algorithm based on multi-objective optimization. When the rank of the routing matrix is not equal to that of its augmented matrix, link performance inference is converted into a multi-objective optimization problem and a genetic algorithm is used to infer link performance. The probability distribution of link performance at a given time t is obtained by performing more measurements and statistically analyzing the candidate solutions. Through the simulations, it can be safely concluded that internal link performance, such as link loss ratio and link delay, can be inferred correctly even when the rank of the routing matrix is not equal to that of its augmented matrix.
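The linear-model inference step of contribution (3) can be illustrated on a toy instance where the routing matrix has full rank (the link layout and loss rates below are ours, invented for illustration):

```python
# Toy version of linear-model link inference: end-to-end path loss
# measurements give log-success equations b = A x over the links, and
# when rank(A) equals rank([A|b]) the link success rates follow.
import math

# True per-link success rates (unknown to the measurement side).
true_success = [0.99, 0.95, 0.90]

# Routing matrix: rows = measured paths, columns = links on the path.
A = [[1, 1, 0],   # path 1 crosses links 0 and 1
     [0, 1, 1],   # path 2 crosses links 1 and 2
     [1, 1, 1]]   # path 3 crosses all three links

# Path success = product of link successes; taking logs makes the
# system linear: b_i = sum_j A[i][j] * log(success_j).
b = [sum(a * math.log(s) for a, s in zip(row, true_success))
     for row in A]

# This 3x3 instance is full rank, so the solution is closed-form:
# x0 = b3 - b2, x1 = b1 + b2 - b3, x2 = b3 - b1.
log_s = [b[2] - b[1], b[0] + b[1] - b[2], b[2] - b[0]]
inferred = [math.exp(v) for v in log_s]
print([round(s, 4) for s in inferred])  # -> [0.99, 0.95, 0.9]
```

When the routing matrix is rank-deficient, the system no longer pins down a unique solution, which is exactly the case contribution (4) hands over to multi-objective optimization.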
876

The Use of Demand-wise Shared Protection in Creating Topology Optimized High Availability Networks

Todd, Brody Unknown Date
No description available.
877

Facilitating the provision of auxiliary support services for overlay networks

Demirci, Mehmet 20 September 2013 (has links)
Network virtualization and overlay networks have emerged as powerful tools for improving the flexibility of the Internet. Overlays are used to provide a wide range of useful services in today's networking environment, and they are also viewed as important building blocks for an agile and evolvable future Internet. Regardless of the specific service it provides, an overlay needs assistance in several areas in order to perform properly throughout its existence. This dissertation focuses on the mechanisms underlying the provision of auxiliary support services that perform control and management functions for overlays, such as overlay assignment, resource allocation, overlay monitoring and diagnosis. The priorities and objectives in the design of such mechanisms depend on network conditions and the virtualization environment. We identify opportunities for improvements that can help provide auxiliary services more effectively at different overlay life stages and under varying assumptions. The contributions of this dissertation are the following: 1. An overlay assignment algorithm designed to improve an overlay's diagnosability, which is defined as its property to allow accurate and low-cost fault diagnosis. The main idea is to increase meaningful sharing between overlay links in a controlled manner in order to help localize faults correctly with less effort. 2. A novel definition of bandwidth allocation fairness in the presence of multiple resource-sharing overlays, and a routing optimization technique to improve fairness and the satisfaction of overlays. Evaluation analyzes the characteristics of different fair allocation algorithms, and suggests that eliminating bottlenecks via custom routing can be an effective way to improve fairness. 3. An optimization solution to minimize the total cost of monitoring an overlay by determining the optimal mix of overlay and native links to monitor, and an analysis of the effect of topological properties on monitoring cost and the composition of the optimal mix of monitored links. We call our approach multi-layer monitoring and show that it is a flexible approach producing minimal-cost solutions with low errors. 4. A study of virtual network embedding in software-defined networks (SDNs), identifying the challenges and opportunities for embedding in the SDN environment, and presenting two VN embedding techniques and their evaluation. One objective is to balance the stress on substrate components, and the other is to minimize the delays between VN controllers and switches. Each technique optimizes embedding for one objective while keeping the other within bounds.
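The fairness baseline behind contribution 2, max-min fair allocation, can be computed with the classic progressive-filling algorithm. The sketch below is a generic textbook version with invented overlays and links, not the dissertation's custom-routing optimisation:

```python
# Progressive filling for max-min fair bandwidth allocation: raise all
# unfrozen overlay rates together until some link saturates, freeze
# the overlays crossing it, and repeat until everyone is frozen.

def max_min_fair(links, paths):
    """links: {link: capacity}; paths: {overlay: [links it uses]}."""
    rate = {o: 0.0 for o in paths}
    frozen = set()
    cap = dict(links)
    while len(frozen) < len(paths):
        # Equal-share headroom on each link still carrying active overlays.
        shares = {}
        for l in cap:
            n = sum(1 for o in paths
                    if o not in frozen and l in paths[o])
            if n > 0:
                shares[l] = cap[l] / n
        inc = min(shares.values())
        for o in paths:
            if o not in frozen:
                rate[o] += inc
                for l in paths[o]:
                    cap[l] -= inc
        # Freeze every overlay crossing a now-saturated link.
        for l in shares:
            if cap[l] < 1e-9:
                frozen.update(o for o in paths if l in paths[o])
    return rate

links = {"e1": 10.0, "e2": 4.0}
paths = {"ov1": ["e1"], "ov2": ["e1", "e2"], "ov3": ["e2"]}
print(max_min_fair(links, paths))
```

Here ov2 and ov3 split the 4-unit bottleneck e2 equally (2 each), and ov1 absorbs the remaining 8 units of e1; the dissertation's observation is that rerouting to eliminate such bottlenecks can raise everyone's share.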
878

Control de acceso a redes

Esmoris, Daniel Omar January 2010 (has links) (PDF)
This work analyzes the different alternatives offered by the market and examines network access control. A variety of products and technologies are associated with it, and standards are not yet defined in a market that is extremely difficult to understand. This confusion leads to muddled ideas: many people take pieces of information they hear and form incorrect judgments about what the products can do and what threats they actually address.
879

Exploiting the implicit error correcting ability of networks that use random network coding / by Suné von Solms

Von Solms, Suné January 2009 (has links)
In this dissertation, we developed a method that uses the redundant information implicitly generated inside a random network coding network to apply error correction to the transmitted message. The obtained results show that the developed implicit error correcting method can reduce the effect of errors in a random network coding network without the addition of redundant information at the source node. This method presents numerous advantages compared to the documented concatenated error correction methods. We found that various error correction schemes can be implemented without adding redundancy at the source nodes. The decoding ability of this method is dependent on the network characteristics. We found that large networks with a high level of interconnectivity yield more redundant information allowing more advanced error correction schemes to be implemented. Network coding networks are prone to error propagation. We present the results of the effect of link error probability on our scheme and show that our scheme outperforms concatenated error correction schemes for low link error probability. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2010.
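The redundancy this dissertation exploits arises because sinks in a random network coding network often receive more independent coded combinations than there are source packets. A toy GF(2) illustration of that surplus (our own construction, not the author's error-correction scheme):

```python
# Toy random linear network coding over GF(2): intermediate nodes
# forward random XOR combinations of the source packets, and a sink
# keeps collecting combinations until it can decode. Any combinations
# beyond the first full-rank set are the implicit redundancy.
import random

def gf2_solve(rows, rhs):
    """Solve A x = b over GF(2); rows are coefficient bit lists, rhs
    are payload ints XOR-combined alongside. None if rank-deficient."""
    rows = [r[:] for r in rows]
    rhs = rhs[:]
    n = len(rows[0])
    piv = []
    for col in range(n):
        p = next((i for i in range(len(rows))
                  if i not in piv and rows[i][col]), None)
        if p is None:
            return None
        piv.append(p)
        for i in range(len(rows)):
            if i != p and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[p])]
                rhs[i] ^= rhs[p]
    return [rhs[piv[col]] for col in range(n)]

rng = random.Random(7)
payloads = [0b1010, 0b0110, 0b1111]  # three source packets
received = []
while True:
    coeffs = [rng.randrange(2) for _ in payloads]
    if not any(coeffs):
        continue  # an all-zero combination carries no information
    data = 0
    for c, p in zip(coeffs, payloads):
        if c:
            data ^= p
    received.append((coeffs, data))
    decoded = gf2_solve([c for c, _ in received],
                        [d for _, d in received])
    if decoded:
        break
print(decoded)  # -> the original payloads [10, 6, 15]
```

Every combination received after the system first reaches full rank is redundant; the dissertation's point is that this free redundancy, generated inside the network, can be spent on error correction instead of adding redundancy at the source.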
