About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
311

MediateSpace: applying contextual mediation to the tuple space paradigm

Matthews, Danny January 2015 (has links)
I designed, implemented and evaluated a decentralised context-aware content distribution middleware. It can support a variety of applications, with all network communication handled transparently behind a tuple-space-based interface. Content is inserted into the network with an associated condition stipulating the context that must be matched to receive it. Conditions can be expressed using conjunctions, disjunctions, a form of universal and existential quantification, and nested block scopes. Conditions are mapped onto a set of spatial indexes to enable lookup, and these are inserted into a distributed multi-dimensional spatial data structure (e.g. an R-Tree). They are also translated into an OWL representation to enable evaluation. Nodes bind to their most geographically proximate neighbours, which allows distance-sensitive context sharing. The middleware is capability-aware, pushing computationally expensive tasks onto more capable nodes. I evaluated my system through benchmarks and simulation, defining condition classes which collectively represent a large portion of the condition space. Random conditions were generated from these classes. Node mobility was controlled through a number of probability distributions. Benchmark evaluation times were reasonable, evaluating 500 typical messages in 1.4 seconds each. When the number of stored contexts was reduced, performance improved dramatically, evaluating 500 much more complicated conditions in one-tenth of a second each. The number and complexity of context parameters have a major impact on efficiency. The number of spatial indexes generated was reasonable for most conditions, with a 95th percentile of 6. However, existential quantification was a challenge for both condition evaluation and index generation due to the potentially large number of possible combinations of conditions.
As expected, simulations found that the distribution of workload was very uneven because nodes tend to cluster in large cities, meaning that most communication is localised within these areas. Node density also had a dramatic impact on the number of received messages, as nodes in sparse areas were unable to obtain the context information needed for condition evaluation. I achieved my research goals of developing a distributed context-aware content distribution framework.
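The abstract describes conditions being mapped onto spatial indexes and stored in a multi-dimensional structure such as an R-Tree, so that content can be looked up by the receiver's current context. The sketch below illustrates that idea only in outline: it uses a naive linear scan as a stand-in for the distributed R-Tree, and the `Box`/`NaiveSpatialIndex` names, the two-dimensional (lat, lon) context, and the example regions are all illustrative assumptions, not details from the MediateSpace implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned region over numeric context dimensions (e.g. lat, lon)."""
    lo: tuple
    hi: tuple

    def contains(self, point):
        # Point matches when every coordinate lies inside the box.
        return all(l <= p <= h for l, p, h in zip(self.lo, point, self.hi))

class NaiveSpatialIndex:
    """Stand-in for a distributed R-Tree: a linear scan over condition boxes."""
    def __init__(self):
        self.entries = []   # (box, content) pairs

    def insert(self, box, content):
        self.entries.append((box, content))

    def lookup(self, context_point):
        """Return content whose condition region contains the current context."""
        return [c for (b, c) in self.entries if b.contains(context_point)]

index = NaiveSpatialIndex()
# Content deliverable only to nodes whose (lat, lon) falls in a rectangle.
index.insert(Box((51.0, -0.5), (52.0, 0.5)), "london-area-message")
index.insert(Box((40.0, -75.0), (41.0, -73.0)), "nyc-area-message")

print(index.lookup((51.5, 0.1)))   # context of a node near London
```

A real R-Tree would group boxes hierarchically so lookups avoid scanning every entry, which matters once many conditions are stored.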
312

The role of graph entropy in fault localization and network evolution

Tee, Philip January 2017 (has links)
The design of a communication network has a critical impact on its effectiveness at delivering service to the users of a large scale compute infrastructure. In particular, the reliability of such networks is increasingly vital in the modern world, as more and more of our commercial and social activity is conducted using digital platforms. Systems to assure service availability have been available since the emergence of mainframes, with the System/360 in 1964, and although commercially widespread, the scientific understanding is not as deep as the problem warrants. The basic operating principle of most service assurance systems combines the gathering of status messages, which we term events, with algorithms to deduce from the events where potential failures may be occurring. The algorithms to identify which events are causal, known as root cause analysis or fault localization, usually rely upon a detailed understanding of the network structure in order to determine those events that are most helpful in diagnosing and remediating a service threatening problem. The complex nature of root cause algorithms introduces scalability limits in terms of the number of events that can be processed per second. Unfortunately, as networks grow, the volume of events produced continues to increase, often dramatically. The dependence of root cause analysis algorithms on network structure presents a significant challenge as networks continue to grow in scale and complexity. As a consequence of this, and the growing reliance upon networks as part of the key fabric of the modern economy, the commercial importance and the scale of the engineering challenges are increasing significantly. In this thesis I outline a novel approach to improving the scalability of event processing using a mathematical property of networks, graph entropy.
In the first two papers described in this thesis, I apply an efficiently computable approximation of graph entropy to the problem of identifying important nodes in a network. In this context, importance is a measure of whether the failure of a node is more likely to have a significant impact on the overall connectivity of the network, and therefore to lead to an interruption of service. I show that by ignoring events from unimportant network nodes it is possible to significantly reduce the event rate that a root cause algorithm needs to process. Further, I demonstrate that unimportant nodes produce a great many events but very few root causes. Consequently, although some events relating to root causes are missed, this is compensated for by the reduction in the overall event rate. This leads to a significant reduction of the event processing load on management systems, and therefore increases the effectiveness of current approaches to root cause analysis on large networks. Analysis of the topology data used in the first two papers revealed interesting anomalies in the degree distribution of the network nodes. This motivated the later focus of my research: investigating how graph entropy and network design considerations could be applied to the dynamical evolution of network structures, most commonly described using the preferential attachment model of Barabási and Albert. A common feature of a communication network is the presence of a constraint on the number of logical or physical connections a device can support. In the last of the three papers in the thesis I develop and present a constrained model of network evolution, which demonstrates better quantitative agreement with real world networks than the preferential attachment model. This model, developed using the continuum approach, still does not address a fundamental question of random networks as a model of network evolution.
Why should a node's degree influence the likelihood of it acquiring connections? In the same paper I attempt to answer that question by outlining a model that links vertex entropy to a node's attachment probability. The model successfully reproduces some of the characteristics of preferential attachment, and illustrates the potential for entropic arguments in network science. Put together, the two main bodies of work constitute a practical advance on the state of the art of fault localization, and a theoretical insight into the inner workings of dynamic networks. They open up a number of interesting avenues for further investigation.
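The core operational idea above is that a cheap, entropy-derived importance score can gate which events a root cause engine must process. The sketch below is a loose illustration of that pipeline under stated assumptions: it uses each node's degree-share term of the Shannon entropy of the degree distribution as an importance proxy, which is my simplification, not the thesis's actual graph entropy approximation, and the graph, events, and threshold are invented for the example.

```python
import math
from collections import defaultdict

def degree_entropy_terms(edges):
    """Per-node entropy contribution -p_i * log2(p_i), with p_i = deg(i) / sum(deg).

    A cheap stand-in for vertex importance: hub nodes carry larger terms.
    """
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    return {n: -(d / total) * math.log2(d / total) for n, d in deg.items()}

def filter_events(events, importance, threshold):
    """Drop events raised by nodes whose importance falls below the threshold,
    reducing the load on the downstream root cause analysis algorithm."""
    return [e for e in events if importance.get(e["node"], 0.0) >= threshold]

# Toy topology: "core" is a hub, "c" is a leaf.
edges = [("core", "a"), ("core", "b"), ("core", "c"), ("a", "b")]
imp = degree_entropy_terms(edges)

events = [{"node": "core", "msg": "link down"}, {"node": "c", "msg": "ping loss"}]
kept = filter_events(events, imp, threshold=0.4)
print([e["node"] for e in kept])   # leaf node's event is filtered out
```

The trade-off the thesis quantifies is visible even here: the leaf's event is discarded before root cause analysis runs, which is acceptable precisely because unimportant nodes contribute few root causes.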
313

Design considerations for a network control language (NCL)

Chapin, Wayne Barrett January 2010 (has links)
Typescript, etc. / Digitized by Kansas Correctional Industries
314

Resource optimization of consolidating two coexisting networks with interconnections.

January 2010 (has links)
Xie, Zhenchang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 48-50). / Abstracts in English and Chinese.

Table of contents:
Chapter 1  Introduction (p. 1)
  1.1  Development of fiber optic networks (p. 1)
  1.2  Optical transmission system (p. 2)
  1.3  The motivation of this thesis (p. 7)
  1.4  Outline of this thesis (p. 8)
Chapter 2  The Consolidation of Two Coexisting Networks with Full-Interconnection (p. 10)
  2.1  Assumptions and problem formulation (p. 10)
  2.2  Definitions and notations (p. 12)
  2.3  An algorithm to derive Lmin (p. 13)
  2.4  Example illustrations (p. 17)
  2.5  The number of fiber links required over the number of nodes of a network, L/N (p. 21)
  2.6  Summary (p. 22)
Chapter 3  The Consolidation of Two Coexisting Networks with Two Interconnection Links (p. 23)
  3.1  Assumptions (p. 24)
  3.2  Analysis on the optimal location of the two interconnection links (p. 25)
  3.3  Notations (p. 25)
  3.4  Theorems and corollaries (p. 25)
  3.5  The number of fiber links required over the number of nodes of a network, L/N (p. 35)
  3.6  Summary (p. 36)
Chapter 4  Protection of the Consolidated Network (p. 37)
  4.1  Full-interconnection case (p. 38)
  4.2  Two-interconnection case (p. 39)
  4.3  Summary (p. 44)
Chapter 5  Summary and Future Works (p. 45)
  5.1  Summary (p. 45)
  5.2  Future works (p. 47)
Bibliography (p. 48)
Appendix  List of publications (p. 52)
315

Gossip mechanisms for distributed database systems.

January 2007 (has links)
Yam, Shing Chung Jonathan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 75-79). / Abstracts in English and Chinese.

Table of contents:
Chapter 1  Introduction (p. 1)
  1.1  Motivation (p. 2)
  1.2  Thesis Organization (p. 5)
Chapter 2  Literature Review (p. 7)
  2.1  Data Sharing and Dissemination (p. 7)
  2.2  Data Aggregation (p. 12)
  2.3  Sensor Network Database Systems (p. 13)
  2.4  Data Routing and Networking (p. 23)
  2.5  Other Applications (p. 24)
Chapter 3  Preliminaries (p. 25)
  3.1  Probability Distribution and Gossipee-selection Schemes (p. 25)
  3.2  The Network Models (p. 28)
  3.3  Objective and Problem Statement (p. 30)
  3.4  Two-tier Gossip Mechanism (p. 31)
  3.5  Semantic-dependent Gossip Mechanism (p. 32)
Chapter 4  Results for Two-tier Gossip Mechanisms (p. 34)
  4.1  Background (p. 34)
  4.2  A Time Bound for Solving the Clustered Destination Problem with T (Theorem 1) (p. 39)
  4.3  Further Results (Theorem 2) (p. 49)
  4.4  Experimental Results for Two-tier and N-tier Gossip Mechanisms (p. 51)
    4.4.1  Performance Evaluation of Two-tier Gossip Mechanisms (p. 52)
    4.4.2  Performance Evaluation of N-tier Gossip Mechanisms (p. 56)
  4.5  Discussion (p. 60)
Chapter 5  Results for Semantic-dependent Gossip Mechanisms (p. 62)
  5.1  Background (p. 62)
  5.2  Theory (p. 65)
  5.3  Detection of Single Moving Heat Source with S max(2c1l, c1h) (p. 66)
  5.4  Detection of Multiple Static Heat Sources with Two-tier Gossip Mechanism (p. 69)
  5.5  Discussion (p. 72)
Chapter 6  Conclusion (p. 73)
Chapter 7  References (p. 75)
Appendix  Proof of Result 4.3 (p. 80)
316

An inter-computer communications system for a personal computer

Vestal, Daniel Ray January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
317

CacheCash: A Cryptocurrency-based Decentralized Content Delivery Network

Almashaqbeh, Ghada January 2019 (has links)
Online content delivery has witnessed dramatic growth recently, with traffic consuming over half of today's Internet bandwidth. This escalating demand has motivated content publishers to move beyond the traditional solutions of infrastructure-based content delivery networks (CDNs). Instead, many are employing peer-to-peer data transfers to reduce the service cost and avoid bandwidth over-provisioning to handle peak demands. Unfortunately, the open-access work model of this paradigm, which allows anyone to join, introduces several design challenges related to security, efficiency, and peer availability. In this dissertation, we introduce CacheCash, a cryptocurrency-based decentralized content distribution network designed to address these challenges. CacheCash bypasses the centralized approach of CDN companies for one in which end users organically set up new caches in exchange for cryptocurrency tokens. Thus, it enables publishers to hire caches on an as-needed basis, without constraining these parties with long-term business commitments. To address the challenges encountered as the system evolved, we propose a number of protocols and techniques that represent basic building blocks of CacheCash's design. First, motivated by the observation that conventional security assessment tools do not suit cryptocurrency-based systems, we propose ABC, a threat modeling framework capable of identifying attacker collusion and the new threat vectors that cryptocurrencies introduce. Second, we propose CAPnet, a defense mechanism against cache accounting attacks (i.e., a client pretends to be served, allowing a colluding cache to collect rewards without doing any work). CAPnet features a bandwidth expenditure puzzle that clients must solve over the content before caches are given credit, which bounds the effectiveness of this collusion.
Third, to make it feasible to reward caches per data chunk served, we introduce MicroCash, a decentralized probabilistic micropayment scheme that reduces the overhead of processing these small payments. MicroCash implements several novel ideas that make micropayments more suitable for delay-sensitive applications, such as online content delivery. CacheCash combines the preceding techniques to produce a novel service-payment exchange protocol that secures the content distribution process. This protocol uses gradual content disclosure and partial payment collection to encourage honest collaborative work between participants. We present a detailed game theoretic analysis showing how to exploit rational financial incentives to address several security threats, in addition to various performance optimization mechanisms that promote system efficiency and scalability. Lastly, we evaluate system performance and show that modest machines can serve and retrieve content at a high bitrate with minimal overhead.
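The general idea behind probabilistic micropayment schemes such as the one described above is that, instead of settling a tiny payment for every chunk served, each chunk earns a lottery ticket that wins a larger payout with small probability, keeping the expected payment per chunk unchanged while slashing the number of settled transactions. The sketch below illustrates only that general lottery principle with made-up values; it is not MicroCash's actual protocol, which additionally handles ticket issuance, double-spending, and escrow.

```python
import random

CHUNK_VALUE = 0.001               # intended payment per chunk served (tokens)
WIN_PROB = 0.01                   # probability a ticket pays out
PAYOUT = CHUNK_VALUE / WIN_PROB   # larger payout per winning ticket

def settle(num_chunks, rng):
    """Total paid after serving num_chunks chunks: only winning tickets settle,
    so roughly num_chunks * WIN_PROB transactions occur instead of num_chunks."""
    wins = sum(1 for _ in range(num_chunks) if rng.random() < WIN_PROB)
    return wins * PAYOUT

rng = random.Random(42)
paid = settle(100_000, rng)
expected = 100_000 * CHUNK_VALUE
print(f"paid={paid:.2f} expected={expected:.2f}")
# paid is close to expected for large num_chunks, by the law of large numbers
```

The cache bears payout variance in the short run, but over many chunks the realized total converges on the per-chunk value, which is what makes per-chunk rewards economically feasible.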
318

VPN over a wireless infrastructure: evaluation and performance analysis

Munasinghe, Kumudu S., University of Western Sydney, College of Science, Technology and Environment, School of Computing and Information Technology January 2005 (has links)
This thesis presents the analysis and experimental results for an evaluation of the performance and Quality of Service (QoS) levels of a Virtual Private Network (VPN) implementation over an IEEE 802.11b wireless infrastructure. The VPN tunnelling protocol considered in the study is IP Security (IPSec). The main focus of the research is to identify the major performance limitations, and their underlying causes, for the VPN implementations under study. The experimentation and data collection involved in the study span a number of platforms to suit a range of practical VPN implementations over a wireless medium. The collected data include vital QoS and performance measures such as application throughput, packet loss, jitter, and round-trip delay. Once the baseline measure is established, a series of experiments is conducted to analyse the behaviour of a single IPSec VPN operating over an IEEE 802.11b infrastructure, after which the experimentation is extended to investigate trends in the performance metrics of a setup with multiple simultaneous VPNs. The overall results and analysis of the investigation conclude that CPU processing power, payload data size, packet generation rate and geographical distance are critical factors affecting the performance of such VPN tunnel implementations. Furthermore, it is believed that these results may give vital clues for enhancing and achieving optimal performance and QoS levels for VPN applications over WLANs. / Master of Science (Hons.)
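Two of the QoS measures the study collects, application throughput and jitter, can be computed from a packet trace of send/receive timestamps. The sketch below is an illustration under stated assumptions, not the thesis's measurement code: the trace format (send time, receive time, payload bytes) is invented, and jitter is taken here as the mean absolute variation in one-way delay between consecutive packets, a simpler definition than RFC 3550's smoothed estimator.

```python
def throughput_bps(trace):
    """Application throughput: total payload bits over the receive interval."""
    first = min(recv for _, recv, _ in trace)
    last = max(recv for _, recv, _ in trace)
    bits = sum(size * 8 for _, _, size in trace)
    return bits / (last - first)

def mean_jitter(trace):
    """Mean absolute variation in one-way delay between consecutive packets."""
    delays = [recv - send for send, recv, _ in trace]
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical trace: (send time s, receive time s, payload bytes).
trace = [
    (0.00, 0.020, 1000),
    (0.01, 0.032, 1000),
    (0.02, 0.045, 1000),
    (0.03, 0.052, 1000),
]
print(round(throughput_bps(trace)), "bit/s")
print(round(mean_jitter(trace) * 1000, 2), "ms")
```

Comparing such metrics with and without the IPSec tunnel in place is what exposes the cost of encryption and encapsulation on a wireless link.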
319

Resource Discovery and Fair Intelligent Admission Control over Scalable Internet

January 2004 (has links)
The Internet currently supports a best-effort connectivity service. There has been increasing demand for the Internet to support Quality of Service (QoS), both to satisfy stringent service requirements from many emerging networking applications and to utilize network resources efficiently. However, it has been found that even with an augmented QoS architecture the Internet cannot achieve the desired QoS, and there are concerns about the scalability of the available QoS solutions. If the network is not provisioned adequately, the Internet cannot handle congestion: it is unaware of its internal QoS state, so it cannot provide QoS when the network state changes dynamically. This thesis addresses the following question: is it possible to deliver applications with QoS over the Internet fairly and efficiently while preserving scalability? This dissertation answers the question affirmatively by proposing an innovative service architecture: Resource Discovery (RD) and Fair Intelligent Admission Control (FIAC) over the scalable Internet. The main contributions of this dissertation are as follows. 1. To detect the network QoS state, we propose the Resource Discovery (RD) framework, which provides the network QoS state dynamically. RD adopts a feedback-loop mechanism to collect the network QoS state and report it to the Fair Intelligent Admission Control module, so that FIAC can control resources efficiently and fairly. 2. To facilitate network resource management and flow admission control, two scalable Fair Intelligent Admission Control architectures are designed and analyzed on two levels: per-class and per-flow. Per-class FIAC handles aggregate admission control for certain pre-defined aggregates; per-flow FIAC handles flow admission control in terms of fairness within the class. 3. To further improve scalability, Edge-Aware Resource Discovery and Fair Intelligent Admission Control is proposed, which does not require the involvement of core routers.
We devise and analyze implementations of the proposed solutions and demonstrate the effectiveness of the approach. For Resource Discovery, two closed-loop feedback solutions are designed and investigated. The first is a core-aware solution based on direct QoS state information. To further improve scalability, an edge-aware solution is designed in which only the edges (not the core) are involved in the feedback QoS state estimation. For admission control, the FIAC module bridges the gap between 'external' traffic requirements and the 'internal' network ability. By utilizing the QoS state information from RD, FIAC intelligently allocates resources via per-class admission control and per-flow fairness control. We study the performance and robustness of RD-FIAC through extensive simulations. Our results show that RD can obtain the internal network QoS state and that FIAC can adjust resource allocation efficiently and fairly.
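The per-class admission control described above can be sketched in a few lines: the RD feedback loop reports each class's measured load, and a flow is admitted only if its class stays within its share of capacity. This is a minimal illustration of the general principle; the class names, capacity figures, and shares are invented for the example, and the thesis's FIAC additionally enforces per-flow fairness within each class.

```python
class ClassAdmissionControl:
    """Toy per-class admission control driven by feedback load measurements."""

    def __init__(self, capacity, class_shares):
        self.capacity = capacity          # total link capacity (e.g. Mbps)
        self.shares = class_shares        # class name -> fraction of capacity
        self.used = {c: 0.0 for c in class_shares}

    def update_state(self, cls, measured_load):
        """Feedback from resource discovery: the class's current load."""
        self.used[cls] = measured_load

    def admit(self, cls, demand):
        """Admit a flow only if its class stays within its capacity share."""
        limit = self.shares[cls] * self.capacity
        if self.used[cls] + demand <= limit:
            self.used[cls] += demand
            return True
        return False

ac = ClassAdmissionControl(capacity=100.0,
                           class_shares={"gold": 0.6, "best_effort": 0.4})
ac.update_state("gold", measured_load=50.0)   # feedback from the network
print(ac.admit("gold", 5.0))    # True: 55 Mbps fits the 60 Mbps gold share
print(ac.admit("gold", 10.0))   # False: 55 + 10 would exceed the share
```

Because admission decisions need only the per-class feedback state rather than per-flow state in the core, this style of control scales with the number of classes, not the number of flows.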
320

Simple star multihop optical network

Chonbodeechalermroong, Yongyut, School of Electrical Engineering, UNSW January 2001 (has links)
A new multihop wavelength-division multiplexed (WDM) optical network, designed for uniform traffic with two wavelengths per node, that can give the maximum throughput and minimum delay is proposed. It is called a 'Simple Star' multihop optical network. This network has good traffic-balance characteristics and a small average number of hops. Moreover, Simple Star can be used together with multiple star couplers to reduce the number of wavelengths used. Furthermore, unlike most existing networks, this network does not impose an upper limit on the number of nodes. Another interesting pattern is Simple Star with Center Node (Simple Star CN), particularly for prime numbers of nodes. It can be shown that the average number of hops of Simple Star (normal plus CN) lies between those of ShuffleNet and the Kautz network, while the throughput and delay are better. An associated network called Simple Star Shared Channel (Simple Star SC), for two transceivers per node, is also presented; it too can be used with multiple star couplers to reduce the number of wavelengths. An example of a 16-node Simple Star SC shows that the number of wavelengths used can be eight times smaller than in the normal Simple Star network. The Shared Channel simulation model is based on the concept of CSMA/CD (Carrier Sense Multiple Access with Collision Detection).
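The comparison above between Simple Star, ShuffleNet and Kautz networks rests on the average number of hops, the mean shortest-path length over all source-destination pairs. The sketch below computes that metric by breadth-first search; the 4-node ring used as input is just a stand-in graph to show the calculation, not the Simple Star connection pattern itself, which the abstract does not specify.

```python
from collections import deque

def average_hops(adj):
    """Mean shortest-path length over all ordered node pairs (BFS per source)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Stand-in topology: a bidirectional 4-node ring.
ring4 = {0: [1, 3], 1: [2, 0], 2: [3, 1], 3: [0, 2]}
print(average_hops(ring4))   # two pairs at 1 hop, one at 2, per source
```

For multihop WDM networks a smaller average hop count directly improves throughput and delay, since each hop consumes transceiver capacity and adds queueing, which is why it is the headline figure of merit in such comparisons.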
