51

Management and Control of Scalable and Resilient Next-Generation Optical Networks

Liu, Guanglei. 10 January 2007
Two research topics in next-generation optical networks with wavelength-division multiplexing (WDM) technologies were investigated: (1) the scalability of network management and control, and (2) the resilience and reliability of networks under faults and attacks.

In scalable network management, the scalability of management information for inter-domain light-path assessment was studied. Light-path assessment was formulated as a decision problem based on decision theory and probabilistic graphical models. It was found that the partial information available can provide the desired performance, i.e., a small percentage of erroneous decisions can be traded off for a large saving in the amount of management information.

In network resilience under malicious attacks, the resilience of all-optical networks under in-band crosstalk attacks was investigated with probabilistic graphical models. Graphical models provide an explicit view of the spatial dependencies in attack propagation, as well as computationally efficient approaches, e.g., the sum-product algorithm, for studying network resilience. With the proposed cross-layer model of attack propagation, key factors that affect network resilience at the physical layer and the network layer were identified. In addition, analytical results on network resilience were obtained for typical topologies, including ring, star, and mesh-torus networks.

In network performance upon failures, traffic-based network reliability was systematically studied. First, a uniform deterministic traffic model at the network layer was adopted to analyze the impacts of network topology, failure dependency, and failure protection on network reliability. Then a random network-layer traffic model with Poisson arrivals was applied to further investigate the effect of traffic distributions on network reliability. Finally, asymptotic results for network reliability metrics with respect to arrival rate were obtained for typical network topologies under the heavy-load regime.

The main contributions of the thesis are: (1) a fundamental understanding of scalable management and resilience in next-generation optical networks with WDM technologies; and (2) the innovative application of probabilistic graphical models, an emerging approach in machine learning, to the study of communication networks.
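The ring result has a particularly compact form. As a minimal illustration (assuming independent link failures with probability p, an assumption the abstract does not state), an N-link ring stays connected iff at most one link fails:

```python
def ring_reliability(n_links: int, p_fail: float) -> float:
    """Probability that an n-link ring stays connected when each link
    fails independently with probability p_fail: a ring tolerates at
    most one link failure, so sum the 0-failure and 1-failure terms."""
    q = 1.0 - p_fail
    return q ** n_links + n_links * p_fail * q ** (n_links - 1)

# Example: a 16-node (16-link) ring with 1% link failure probability.
print(f"{ring_reliability(16, 0.01):.4f}")  # ≈ 0.9891
```

Closed forms for star and mesh-torus topologies follow the same counting style, though the thesis's exact metrics are not reproduced here.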
52

Tattle - "Here's How I See It": Crowd-Sourced Monitoring and Estimation of Cellular Performance Through Local Area Measurement Exchange

Liang, Huiguang. 01 May 2015
The operating environment of cellular networks is in a constant state of change due to variations and evolutions in technology, subscriber load, and physical infrastructure. One cellular operator we interviewed described two key difficulties. First, they are unable to monitor the performance of their network in a scalable and fine-grained manner. Second, they find it difficult to monitor the service quality experienced by each user equipment (UE), and consequently cannot effectively diagnose performance impairments on a per-UE basis. They currently expend considerable manual effort monitoring their network through controlled, small-scale drive-testing. If this is not performed satisfactorily, they risk losing subscribers as well as possible penalties from regulators.

In this dissertation, we propose Tattle¹, a distributed, low-cost participatory sensing framework for the collection and processing of UE measurements. Tattle is designed to solve three problems: coverage monitoring (CM), service quality monitoring (QM), and per-device service quality estimation and classification (QEC). In Tattle, co-located UEs exchange uncertain location information and measurements using local-area broadcasts, which preserves the co-location context of these measurements. This allows us to develop U-CURE, as well as its delay-adjusted variant, to discard erroneously-localized samples and to reduce localization errors, respectively. Operators can thus generate timely, high-resolution, and accurate monitoring maps, and make informed, expedient network management decisions, from adjusting base-station parameters to making long-term infrastructure investments. We also propose a comprehensive statistical framework that allows an individual UE to estimate and classify its own network performance. In our approach, each UE monitors its recent measurements together with those reported by co-located UEs. Through our framework, UEs can automatically determine whether an observed impairment is endemic among other co-located devices. Subscribers that experience isolated impairments can then take limited remedial steps, such as rebooting their devices.

We demonstrate Tattle's effectiveness by presenting key results over millions of real-world measurements, collected systematically using current generations of commercial-off-the-shelf (COTS) mobile devices.

For CM, we show that in urban built-up areas, GPS locations reported by UEs may have significant uncertainties and can sometimes be several kilometers away from the true locations. We describe how U-CURE uses reported location uncertainty and the knowledge of measurement co-location to remove erroneously-localized readings. This allows us to retain measurements with very high location accuracy, and in turn to derive accurate, fine-grained coverage information, so that operators can react to specific areas with coverage issues in a timely manner. Using our approach, we showcase high-resolution results of actual coverage conditions in selected areas of Singapore.

For QM, we show that localization performance in COTS devices may exhibit non-negligible correlation with network round-trip delay, resulting in localization errors of up to 605.32 m per 1,000 ms of delay. Naïve approaches that blindly accept measurements at their reported locations therefore yield grossly mis-localized data points, which degrades the fidelity of any geo-spatial monitoring information derived from these data sets. We demonstrate that the popular localization approach of combining the Global Positioning System with Network-Assisted Localization may increase the median root-mean-square (rms) error by over 60% compared to using the Global Positioning System alone. We propose a network-delay-adjusted variant of U-CURE to cooperatively improve the localization performance of COTS devices, and show improvements of up to 70% in median rms location error under uncertain real-world network delay conditions with just 3 participating UEs. This allows us to refine the purported locations of delay measurements and, as a result, derive accurate, fine-grained, and actionable cellular quality information. Using this approach, we present cellular network delay maps of much higher spatial resolution than those naively derived from raw data.

For QEC, we report on the delay performance characteristics of co-located devices subscribed to 2 cellular network operators in Singapore. We describe the results of applying our proposed approach to the QEC problem on real-world measurements of over 443,500 data points. We illustrate examples where "normal" and "abnormal" performances occur in real networks, and report instances where a device experiences a complete outage while none of its neighbors are affected. We give quantitative results on how well our algorithm detects an "abnormal" time series, with effectiveness increasing with the number of co-located UEs: with just 3 UEs, we achieve a median detection accuracy of just under 70%; with 7 UEs, just under 90%.

¹ To tattle, as a verb, is to gossip idly. By letting devices communicate their observations with one another, we explore the kinds of insights that can be elicited from this peer-to-peer exchange.
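As a rough sketch of the co-location idea behind U-CURE (the exact algorithm is not given in the abstract, so the filtering rule, names, and threshold below are illustrative assumptions): co-located UEs should report nearby positions, so a fix lying far outside the cluster, relative to its own claimed uncertainty, can be discarded.

```python
import statistics

def filter_colocated(reports, slack_m=50.0):
    """Hypothetical co-location filter in the spirit of U-CURE (not
    the thesis's published rule). Co-located UEs exchange
    (x, y, uncertainty_m) fixes over local-area broadcast; a fix whose
    distance to the cluster median exceeds its own claimed uncertainty
    plus a slack term is treated as erroneously localized."""
    mx = statistics.median(r[0] for r in reports)
    my = statistics.median(r[1] for r in reports)
    kept = []
    for x, y, unc in reports:
        if ((x - mx) ** 2 + (y - my) ** 2) ** 0.5 <= unc + slack_m:
            kept.append((x, y, unc))
    return kept

# Three plausible fixes and one wildly mis-localized one (metres).
reports = [(10, 12, 30), (14, 9, 25), (8, 15, 40), (2400, -900, 20)]
print(filter_colocated(reports))  # the ~2.5 km outlier is dropped
```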
53

Facilitating dynamic network control with software-defined networking

Kim, Hyojoon. 21 September 2015
This dissertation starts from the observation that network management is a complex and error-prone task. The major causes are identified through interviews and systematic analysis of network configuration data from two large campus networks. The dissertation finds that network events, and dynamic reactions to them, should be programmatically encoded in the network control program by operators, and that some events should be handled automatically when the desired reaction is general. It presents two new solutions for managing and configuring networks using the Software-Defined Networking (SDN) paradigm: Kinetic and Coronet. Kinetic is a programming language and central control platform that allows operators to implement traffic-control applications that react to various kinds of network events in a concise, intuitive way. The event-reaction logic is checked for correctness before deployment to prevent misconfigurations. Coronet is a data-plane failure recovery service for arbitrary SDN control applications. Coronet pre-plans primary and backup routing paths for any given topology; such pre-planning guarantees fast recovery when a failure occurs. Multiple techniques are used to ensure that the solution scales to large networks with more than 100 switches. Performance and usability evaluations show that both solutions are feasible and are strong alternatives to current mechanisms for reducing misconfigurations.
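Kinetic's concrete syntax is not shown in the abstract, so the sketch below illustrates only the underlying pattern, network policy as an event-driven finite-state machine, in plain Python rather than Kinetic code; the states, events, and actions are invented for illustration.

```python
# Illustrative event-driven policy FSM, not Kinetic's actual API.
# Each host sits in a state; events move it between states; the state
# determines the forwarding action, which a verifier could model-check.

POLICY = {
    # (state, event) -> next state
    ("allowed", "infected"): "quarantined",
    ("quarantined", "cleaned"): "allowed",
    ("allowed", "auth_expired"): "captive_portal",
    ("captive_portal", "authenticated"): "allowed",
}
ACTIONS = {"allowed": "forward", "quarantined": "drop",
           "captive_portal": "redirect_to_portal"}

class HostPolicy:
    def __init__(self):
        self.state = {}  # host MAC -> current state

    def on_event(self, host, event):
        cur = self.state.get(host, "allowed")
        self.state[host] = POLICY.get((cur, event), cur)

    def action(self, host):
        return ACTIONS[self.state.get(host, "allowed")]

fsm = HostPolicy()
fsm.on_event("aa:bb:cc:dd:ee:ff", "infected")
print(fsm.action("aa:bb:cc:dd:ee:ff"))  # drop
```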
54

Resource Allocation and Survivability in Network Virtualization Environments

Rahman, Muntasir Raihan. January 2010
Network virtualization can offer greater flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem, which deals with the efficient mapping of virtual resources onto InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem under the assumption that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a quota pre-reserved for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms the baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
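The abstract only names the hybrid policy's ingredients, so the following is a sketch under stated assumptions (invented data structures, not the thesis's heuristic): each substrate link pre-reserves a backup-bandwidth quota, and on a link failure the affected flow is re-routed over the shortest detour whose links all retain enough quota.

```python
from collections import deque

def reroute(substrate, failed, demand):
    """Sketch of fast re-routing after a substrate link failure.
    substrate maps each undirected link (u, v) to its remaining backup
    quota; we BFS for the shortest detour whose every link still has
    at least `demand` units of pre-reserved backup bandwidth."""
    adj = {}
    for (u, v), quota in substrate.items():
        if (u, v) != failed and (v, u) != failed and quota >= demand:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    src, dst = failed
    seen, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:  # rebuild the detour path from parent links
            path = []
            while node is not None:
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen[nxt] = node
                queue.append(nxt)
    return None  # no backup path with sufficient quota

quota = {("a", "b"): 5, ("b", "c"): 5, ("a", "d"): 3, ("d", "c"): 3}
print(reroute(quota, failed=("a", "b"), demand=2))  # ['a', 'd', 'c', 'b']
```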
55

EXPERIENCE MANAGEMENT FOR IT MANAGEMENT SUPPORT

Bozdogan, Can. 30 July 2012
This thesis focuses on identifying the experience required for solving IT (Information Technology) problems in small and medium-sized enterprises. The aim is to use information retrieval and data mining techniques to automatically extract information from publicly available experience data on the internet and thereby generate a knowledge base for dynamic IT management support. Similarity distance measures such as the Jaccard index and cosine similarity, and clustering algorithms such as K-Means, EM, DBSCAN, CES, and CES+, are employed on three different datasets to evaluate their performance; the CES+ algorithm gives the highest performance in these evaluations. Moreover, a Multi-Objective Genetic Algorithm (MOGA) is used and evaluated on the three datasets to support the use of CES+ in real-life scenarios by automating the selection of the necessary parameters. Results show that MOGA support not only automates CES+ but also yields higher performance.
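The two baseline similarity measures named above are standard; a compact sketch follows (CES and CES+ are the thesis's own algorithms and are not reproduced here).

```python
import math

def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B| over the term sets of two
    problem reports."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over sparse term-frequency vectors."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = "dns server not resolving hostnames".split()
d2 = "dns resolver fails on some hostnames".split()
print(round(jaccard(set(d1), set(d2)), 3))          # 0.222
tf1 = {t: d1.count(t) for t in d1}
tf2 = {t: d2.count(t) for t in d2}
print(round(cosine(tf1, tf2), 3))                   # 0.365
```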
56

Simulation-Assisted QoS-Aware VHO in Wireless Heterogeneous Networks

Al Ridhawi, Ismaeel. 08 January 2014
The main goal of today's wireless Service Providers (SPs) is to provide optimal, ubiquitous service to roaming users while maximizing their own monetary profits. The fundamental objective is to meet such requirements with solutions that adapt to varying conditions in highly mobile, heterogeneous, and dynamically changing wireless network infrastructures. This can only be achieved through well-designed management systems, yet most techniques fail to exploit the knowledge about system and network behaviour gained from previously tested reconfiguration strategies. This dissertation presents a novel framework that automates cooperation among wireless SPs facing the challenge of meeting strict service demands from a large number of mobile users. The proposed work employs a novel policy-based system configuration model to automate the adoption of new network policies. The framework relies on a real-time simulator that runs as a constant background process to continuously find optimal policy configurations for the SPs' networks. To minimize the computational time needed to find these configurations, a modified tabu-search scheme is proposed; the objective is to efficiently explore the space of network configurations, find optimal network decisions, and deliver service performance that adheres to contracted service-level agreements. The framework also relies on a distributed Quality of Service (QoS) monitoring scheme, which identifies candidate QoS-monitoring users that can efficiently submit QoS-related measurements on behalf of their neighbors. These candidates are chosen according to their devices' residual power, transmission capabilities, and estimated remaining service lifetime. Monitoring users are then selected from these candidates using a novel user-to-user semantic similarity matching algorithm, which ensures that monitoring users report on behalf of other users that are highly similar to them in terms of mobility, services used, and device profiles. Experimental results demonstrate significant gains in reduced traffic overhead and overall device power consumption, while achieving high monitoring accuracy, adaptation-time speedup, base-station load balancing, and individual providers' payoffs.
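The modified tabu-search scheme itself is not specified in the abstract; a generic tabu-search skeleton for searching policy configurations might look like the following, with all names and the toy objective invented for illustration.

```python
def tabu_search(initial, neighbors, cost, iters=200, tenure=10):
    """Generic tabu-search skeleton (illustrative; the dissertation's
    modified scheme is not specified in the abstract). `neighbors`
    yields candidate configurations; recently visited ones are tabu
    for `tenure` iterations, which helps escape local minima."""
    best = current = initial
    tabu = {}
    for step in range(iters):
        candidates = [c for c in neighbors(current)
                      if tabu.get(c, -1) < step or cost(c) < cost(best)]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move
        tabu[current] = step + tenure
        if cost(current) < cost(best):
            best = current
    return best

# Toy use: tune one integer "handoff threshold" toward a target of 42.
found = tabu_search(
    initial=0,
    neighbors=lambda x: [x - 3, x - 1, x + 1, x + 3],
    cost=lambda x: abs(x - 42))
print(found)  # 42
```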
57

Data reliability control in wireless sensor networks for data streaming applications

Le, Dinh Tuan, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2009
This thesis contributes toward the design of a reliable and energy-efficient transport system for Wireless Sensor Networks, which have emerged as a vital new area in networking research. In many Wireless Sensor Network systems, a common task of sensor nodes is to sense the environment and send the sensed data to a sink node; the effectiveness of such a network therefore depends on how reliably the sensor nodes can deliver their data to the sink. However, sensor data are susceptible to loss for various reasons: dynamics in the wireless transmission medium, environmental interference, battery depletion, accidental damage, and so on. Assuring reliable data delivery between the sensor nodes and the sink is therefore a challenging task. The primary contributions of this thesis fall into four parts.

First, we design, implement, and evaluate a cross-layer communication protocol for reliable data transfer for data streaming applications in Wireless Sensor Networks, employing reliability algorithms in each layer of the communication stack. At the MAC layer, a CSMA MAC protocol with explicit hop-by-hop Acknowledgment loss recovery is employed; to ensure end-to-end reliability, the maximum number of retransmissions is estimated and used at each sensor node. At the transport layer, an end-to-end Negative Acknowledgment with an aggregated positive Acknowledgment mechanism is used: by inspecting the sequence numbers on the packets, the sink can detect which packets were lost. In addition, to increase the robustness of the system, a watchdog process is implemented at both the base station and the sensor nodes, enabling them to power-cycle when an unexpected fault occurs. We present extensive evaluations, including theoretical analysis, simulations, and field experiments based on the Fleck-3 platform and the TinyOS operating system. The designed network system has been working in the field for over a year, and the results show that our system is a promising solution for a sustainable irrigation system.

Second, we present the design of a policy-based Sensor Reliability Management framework for Wireless Sensor Networks called SRM. SRM is based on a hierarchical management architecture and on the policy-based network management paradigm. SRM allows network administrators to interact with the Wireless Sensor Network via management policies, and also provides the network with a self-control capability. This thesis restricts SRM to reliability management, but the same framework is applicable to other management services given suitable management policies. Our experimental results show that SRM can offer sufficient reliability to application users while reducing energy consumption by more than 50% compared to other approaches.

Third, we propose an Energy-efficient and Reliable Transport Protocol called ERTP, designed for data streaming applications in Wireless Sensor Networks. ERTP is an adaptive transport protocol based on statistical reliability: it ensures that the number of data packets delivered to the sink exceeds a defined threshold while reducing energy consumption. Using a statistical reliability metric when designing a reliable transport protocol guarantees the delivery of adequate information to the users and reduces energy consumption compared to absolute reliability. ERTP uses hop-by-hop Implicit Acknowledgment with a dynamically updated retransmission timeout for packet loss recovery: in multihop wireless networks, the transmitter can overhear a forwarding transmission and interpret it as an Implicit Acknowledgment. By combining statistical reliability with hop-by-hop Implicit Acknowledgment loss recovery, ERTP offers sufficient reliability to application users with minimal energy expense. Our extensive simulations and experimental evaluations show that ERTP can reduce energy consumption by more than 45% compared to the state-of-the-art protocol; consequently, sensor nodes are more energy-efficient and the lifespan of the unattended Wireless Sensor Network is increased.

In Wireless Sensor Networks, sensor node failures can create network partitions or coverage loss that cannot be solved by providing reliability at higher layers of the protocol stack. In the final part of this thesis, we investigate the problem of maintaining network connectivity and coverage when sensor nodes fail. We consider a hybrid Wireless Sensor Network in which a subset of the nodes can move, at a high energy expense. When a node with low remaining energy (a dying node) is a critical node that the network depends on, such as a cluster head, it seeks a replacement. If a redundant node located within the transmission range of the dying node can fulfill the network connectivity and coverage requirements, it is used as the substitute; otherwise, a protocol must relocate a redundant sensor node for the replacement. We propose a distributed protocol for this Mobile Sensor Relocation problem, called Moser. Moser works in three phases. In the first phase, the dying node determines whether a network partition would occur, finds an available mobile node using a flooding algorithm, and asks for a replacement, deciding the movement schedule of the available mobile node based on certain criteria. In the second phase, the mobile node physically moves toward the location of the dying node. Finally, once the mobile node has reached the transmission range of the dying node, it communicates with the dying node and moves to a desired location where network connectivity and coverage for the neighbors of the dying node are preserved.
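As a sketch of the kind of per-hop retransmission estimate described in the first and third contributions (assuming independent per-hop losses and a multiplicative end-to-end delivery model, which the abstract does not spell out): the smallest per-hop transmission count r satisfying (1 - (1 - p)^r)^H ≥ R meets an end-to-end target R over H hops with per-hop success p.

```python
import math

def retx_limit(p_hop: float, hops: int, target: float) -> int:
    """Smallest per-hop transmission count r such that, assuming
    independent per-hop success probability p_hop, the end-to-end
    delivery probability (1 - (1 - p_hop)**r) ** hops meets `target`.
    A sketch of the kind of estimate the thesis describes, not its
    exact formula."""
    per_hop_needed = target ** (1.0 / hops)  # required per-hop success
    loss = 1.0 - p_hop
    # (1 - loss**r) >= per_hop_needed  =>  loss**r <= 1 - per_hop_needed
    return math.ceil(math.log(1.0 - per_hop_needed) / math.log(loss))

# Example: 70% per-hop success over 5 hops, 95% end-to-end target.
print(retx_limit(0.7, 5, 0.95))  # 4 transmissions per hop suffice
```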
58

Scalable quality of service scheduling in core networks

Xu, Zhe. January 2007
Thesis (Ph.D.), University of Texas at Dallas, 2007. Includes vita and bibliographical references (leaves 124-126).
59

Verhaltensorientierte Steuerung logistischer Netzwerke: eine konzeptionell-theoretische Analyse (Behavior-oriented control of logistics networks: a conceptual-theoretical analysis)

Sonnek, Alexandra. January 2005
Dissertation, University of Dortmund, 2005.
60

A bandwidth market in an IP network

Lusilao-Zodi, Guy-Alain. 2008
Thesis (MSc), University of Stellenbosch, 2008. Includes bibliography.
