  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Multi-resource approach to asynchronous SoC : design and tool support

Golubcovs, Stanislavs January 2011 (has links)
As silicon cost reduces, the demands for higher performance and lower power consumption are ever increasing. The ability to dynamically control the number of resources employed can help balance and optimise a system in terms of its throughput, power consumption, and resilience to errors. Managing multiple resources requires more advanced resource-allocation logic than traditional 1-of-N arbiters, which poses the need for an efficient design flow supporting both the design and verification of such systems. Networks-on-Chip provide a good application example of distributed arbitration, in which the processor cores needing to transmit data are the clients, and the point-to-point links are the resources managed by routers. Building fast and smart arbiters can greatly benefit such systems by providing an efficient and reliable communication service. In this thesis, a multi-resource arbiter was developed based on the Signal Transition Graph (STG) development flow. The arbiter distributes multiple active interchangeable resources that initiate requests when they are ready to be used. It supports concurrent resource utilization, which benefits the creation of asynchronous Multiple-Input-Multiple-Output (MIMO) queues. In order to deal with designs of higher complexity, an arbiter-oriented design flow is proposed. The flow is based on digital circuit components that are represented internally as STGs. This allows circuits to be designed without working directly with STGs, while still allowing STGs to be used for synthesis and formal verification. The interfaces for modelling, simulation, and visual model representation of the flow were implemented on top of an existing modelling framework. As a result, the verification phase of the flow helped to find hazards in existing priority arbiter implementations. Finally, based on the logic-gate flow, the structure of a low-latency general-purpose arbiter was developed. This design supports a wide variety of arbitration problems, including multi-resource management, which can benefit the building of NoCs employing complex and adaptive routing techniques.
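The multi-resource arbitration idea can be illustrated at a high level of abstraction. The sketch below is a hypothetical software model, not the STG-based hardware design of the thesis: waiting clients are matched to any free resource from a pool of interchangeable resources as soon as either side becomes available, which is what allows concurrent utilization.

```python
from collections import deque

class MultiResourceArbiter:
    """Toy model of a multi-resource arbiter: pending client requests
    are granted any free resource from a pool of interchangeable ones."""

    def __init__(self, resources):
        self.free = deque(resources)   # resources that have signalled "ready"
        self.waiting = deque()         # clients waiting for a grant
        self.grants = {}               # client -> resource currently held

    def request(self, client):
        """A client asks for any resource; returns new grants made."""
        self.waiting.append(client)
        return self._match()

    def release(self, client):
        """A client returns its resource; it may be granted elsewhere."""
        self.free.append(self.grants.pop(client))
        return self._match()

    def _match(self):
        # Pair waiting clients with free resources, FIFO on both sides.
        granted = []
        while self.free and self.waiting:
            client, res = self.waiting.popleft(), self.free.popleft()
            self.grants[client] = res
            granted.append((client, res))
        return granted
```

Grants happen greedily on every request or release, so two clients can hold two different resources at the same time, mimicking the MIMO-queue behaviour described above.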

Modelling and performance analysis of mobile ad hoc networks

Younes, Osama January 2013 (has links)
Mobile Ad hoc Networks (MANETs) are becoming very attractive and useful in many kinds of communication and networking applications, due to their efficiency, relatively low cost, and the flexibility provided by their dynamic infrastructure. Performance evaluation of mobile ad hoc networks is needed to compare different network architectures, to study the effect of varying certain network parameters, and to study the interaction between the parameters that characterise the network; it can thereby help in the design and implementation of MANETs. Most research studying the performance of MANETs has relied on discrete event simulation (DES) using a broad range of network simulators. The principal drawback of DES models is the time and resources needed to run such models for large, realistic systems, especially when highly accurate results are desired. In addition, studying typical problems such as deadlock and concurrency in MANETs using DES is hard, because network simulators implement the network at a low abstraction level and cannot support specifications at higher levels. Thanks to quick construction and numerical analysis, analytical modelling techniques, such as stochastic Petri nets and process algebra, have been used for performance analysis of communication systems. Analytical modelling is also a less costly and more efficient method, and it generally provides the best insight into the effects of various parameters and their interactions. Hence, analytical modelling is the method of choice for a fast and cost-effective evaluation of mobile ad hoc networks. To the best of our knowledge, there is no analytical study that analyses the performance of multi-hop ad hoc networks, where mobile nodes move according to a random mobility model, in terms of end-to-end delay and throughput.
This work presents a novel analytical framework, developed using stochastic reward nets and mathematical modelling techniques, for modelling and analysis of multi-hop ad hoc networks based on the IEEE 802.11 DCF MAC protocol, where mobile nodes move according to the random waypoint mobility model. The proposed framework is used to analyse the performance of multi-hop ad hoc networks as a function of network parameters such as the transmission range, carrier sensing range, interference range, number of nodes, network area size, packet size, and packet generation rate. The framework is organised into several models to break up the complexity of modelling the complete network and make it easier to analyse each model as required; this is based on the idea of decomposition and fixed-point iteration of stochastic reward nets. The framework consists of a mathematical model and four stochastic reward net models: the path analysis model, the data link layer model, the network layer model and the transport layer model. These models are arranged in a way similar to the layers of the OSI protocol stack. The mathematical model is used to compute the expected number of hops between any source-destination pair, and the average numbers of carrier-sensing, hidden, and interfering nodes. The path analysis model analyses the dynamics of paths in the network due to node mobility, in terms of path connection availability and the rates of path failure and repair. The data link layer model describes the behaviour of the IEEE 802.11 DCF MAC protocol. The actions in the network layer are modelled by the network layer model, and the transport layer model represents the behaviour of the transport layer protocols. The proposed models are validated using extensive simulations.
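As one example of the kind of quantity the mathematical model computes, the expected number of hops between a random source-destination pair can be estimated by Monte Carlo simulation. The sketch below is a crude simplification (it assumes each hop covers the full transmission range along the straight line, ignoring routing detours and connectivity), not the model from the thesis:

```python
import math
import random

def expected_hops(area_side, tx_range, samples=100_000, seed=1):
    """Monte Carlo estimate of the mean hop count between a uniformly
    random source-destination pair in a square area, approximating each
    hop as advancing a full transmission range toward the destination."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        sx, sy = rng.uniform(0, area_side), rng.uniform(0, area_side)
        dx, dy = rng.uniform(0, area_side), rng.uniform(0, area_side)
        dist = math.hypot(dx - sx, dy - sy)
        total += math.ceil(dist / tx_range)  # straight-line lower bound
    return total / samples
```

For a 1000 m square with a 250 m transmission range this yields roughly 2.6 hops on average, consistent with the mean distance between two uniform points in a square being about 0.52 times the side length.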

Generating networks for performance evaluation of P2P trust path search algorithms

Zhang, Huqiu January 2013 (has links)
Trust has become central in computing science research. The problem of finding trust paths and estimating the trust one can place in a partner arises in various application areas, including virtual organisations, authentication systems and reputation-based trust systems. We study the use of peer-to-peer algorithms for finding trust paths and probabilistically assessing trust values in systems where trust is organised similarly to the 'web of trust'. Empirical results demonstrate that many real-life large networks and systems are scale-free, that is, the node degree follows a power-law distribution. To be able to analyse such networks, "growth algorithms" have been devised that generate scale-free networks by growing the size of the network in a manner that intuitively resembles real networks. Interestingly, the generation of scale-free networks with directed arcs has not been researched extensively, especially for the case that avoids duplicate arcs as well as arcs that connect a node with itself (self-loops). Being able to easily generate scale-free networks with these properties allows more accurate and efficient evaluation and simulation of routing algorithms and applications. We consider various different graph algorithms, which modify existing network generating models for directed graphs. A mathematical framework is presented to prove under which conditions an algorithm can generate networks with the scale-free feature. Since a complete proof is not feasible, we evaluate whether these algorithms generate scale-free networks using statistical tests. We find that removing multiple arcs and self-loops after an entire network has been generated does not affect the scale-free character, but at the cost of the growth nature of the algorithm.
To obtain reliable results with small enough confidence intervals through simulation, one needs to run many simulations and generate many networks, so it is important to generate networks with the desired properties in reasonable time. We implement a set of algorithms and compare them with respect to CPU time and memory use, in terms of both theoretical complexity analysis and experimental results. We show through experiments that, using relatively standard equipment, networks with a million or more nodes can be generated in mere seconds. Finally, we explore the suitability of peer-to-peer algorithms for finding trust paths and inferring the trust value of a set of discovered trust paths. We employ discrete event simulation and Monte Carlo techniques to evaluate these search algorithms, implementing all the relevant methods and search protocols in the Peersim simulation environment. Our main conclusion is that many peer-to-peer algorithms perform similarly, and only differ through (and are sensitive to) parameter choices, such as the number of nodes to which a query is forwarded. We also conclude that flooding is the most practical method if one stresses the requirement of finding all trust paths, and if networks are less densely connected.
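A minimal sketch of the kind of growth algorithm discussed, assuming a simple in-degree preferential attachment rule (the thesis studies several variants; this is only one plausible instance): each new node sends m arcs to distinct existing targets, so duplicate arcs and self-loops never arise during growth, rather than being removed afterwards.

```python
import random

def directed_scale_free(n, m=2, seed=42):
    """Grow a directed network of n nodes: each new node sends m arcs to
    distinct existing nodes chosen preferentially by in-degree, so the
    construction itself excludes self-loops and duplicate arcs."""
    rng = random.Random(seed)
    arcs = set()
    # Urn sampling: a node appears once per unit of (in-degree + 1); the
    # +1 smoothing entry lets nodes with in-degree zero be chosen.
    urn = list(range(m))
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            t = rng.choice(urn)        # probability proportional to in-degree + 1
            if t != new:               # `new` is not in the urn yet, but be explicit
                chosen.add(t)          # a set, so duplicate arcs are impossible
        for t in chosen:
            arcs.add((new, t))
            urn.append(t)              # t's in-degree grew by one
        urn.append(new)                # smoothing entry for the newcomer
    return arcs
```

Because the source of every arc is the newly added node, and the targets within one step form a set that cannot contain the new node, the growth property and the no-duplicate, no-self-loop properties hold simultaneously.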

Self-correcting strategy for networks-on-chip interconnect

Liu, Junxiu January 2015 (has links)
Networks-on-Chip (NoC) interconnection provides an on-chip communication strategy for the large number of processing elements in a System-on-Chip. Fault tolerance is a challenge for modern NoCs due to the increase in physical defects in advanced manufacturing processes. A key requirement for modern NoCs is the ability to detect faults and failures and to self-correct after faults occur, thereby maintaining a level of system functionality. However, existing fault-tolerant approaches cannot fully address system scalability and fault testing with minimal intrusion; in addition, they fail to provide robust self-correction strategies under complex traffic conditions. Therefore, it is necessary to look to new fault detection and self-correction strategies to address this reliability issue and to enable the design of reliable systems on unreliable fabrics. This thesis presents a novel online fault detection strategy in which intrusion into the runtime operation under test is minimised. If a channel is faulty, an alert flag is raised. Using this alert flag mechanism, three novel fault-tolerant adaptive routing algorithms are proposed to provide self-correcting strategies for NoCs. They exploit the status of real-time traffic with look-ahead functions at different levels (local or regional), then calculate weights for output directions or path candidates, and choose the path with the lowest weighting to forward the packets. The key benefit of these routing algorithms is that they bypass a routing path with faulty channels while minimising congestion for the adjacent connected channels. Detailed experimental results are given for a range of testing conditions, traffic patterns and fault rates, which demonstrate that faults can be detected promptly with minimal intrusion, and that the routing algorithms are able to maintain a level of system functionality under high fault rates at low cost.
In particular, experimental results demonstrate that the proposed detection and self-correction strategy achieves an overall improvement of between 24% and 62% in throughput degradation under varied high fault rates, compared to benchmarks. The thesis also presents an open-source monitoring mechanism which provides an evaluation and benchmarking facility to quantitatively analyse a hardware NoC system's fault-tolerant capability. Using this monitoring mechanism, the thesis concludes with hardware verification of the detection and self-correction algorithms on FPGA. The FPGA implementations present the throughput performance, fault-tolerant capabilities and resource costs of the three fault-tolerant adaptive routing algorithms; in particular, they demonstrate the real-time operation of the proposed self-correction strategies in hardware in the presence of varied levels of faults.
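The path-selection step described above — weight the candidate output directions, exclude faulty channels, pick the lowest weight — can be sketched as follows. The weighting here is a placeholder (raw congestion counts); the thesis algorithms combine local or regional look-ahead information into the weights.

```python
def select_output(candidates, faulty, congestion):
    """Choose an output channel for a packet: faulty channels (flagged
    by the online detection mechanism) are excluded, and among the
    usable ones the channel with the lowest weight is selected."""
    usable = [c for c in candidates if c not in faulty]
    if not usable:
        return None  # no route available: packet must wait or be dropped
    return min(usable, key=lambda c: congestion.get(c, 0))
```

A router calling this per packet bypasses faulty channels while steering traffic away from congested neighbours, which is the self-correcting behaviour the abstract describes.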

Ontological interpretation of network monitoring data

Napier, Ian January 2014 (has links)
Interpreting measurement and monitoring data from networks in general and the Internet in particular is a challenge. The motivation for this work has been to investigate new ways to bridge the gap between the kind of data which are available and the more developed information which is needed by network stakeholders to support decision making and network management. Specific problems of syntax, semantics, conflicting data and modeling domain-specific knowledge have been identified. The methods developed and tested have used the Resource Description Framework (RDF) and the ontology languages of the Semantic Web to bring together data from disparate sources into unified knowledgebases in two discrete case studies, both using real network data. Those knowledgebases have then been demonstrated to be usable and valuable sources of information about the networks concerned. Some success has been achieved in overcoming each of the identified problems using these techniques, proving the thesis that taking an ontological approach to the processing of network monitoring data can be a very useful technique for overcoming problems of interpretation and for making information available to those who need it.
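The basic operation underlying the RDF knowledgebases described here is triple pattern matching. A toy illustration in plain Python follows (a real system would use an RDF store and SPARQL; the triples and predicate names are invented for the example): data from two monitoring sources is merged into one set of triples and queried with a single pattern.

```python
def match(triples, s=None, p=None, o=None):
    """Return all (subject, predicate, object) triples matching a
    pattern; None acts as a wildcard, as in an RDF basic graph pattern."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# Hypothetical measurements from two disparate sources, unified as triples.
kb = [
    ("link1", "hasLatency", "12ms"),  # from an active probe
    ("link1", "hasLoss", "0.1%"),     # from a passive monitor
    ("link2", "hasLatency", "40ms"),
]
```

Once both sources share one vocabulary, a single query such as `match(kb, s="link1")` returns everything known about a link regardless of which monitor produced it, which is the interpretive gain the abstract claims.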

Congestion detection within multi-service TCP/IP networks using wavelets

Jarrett, Wayne O'Brian January 2004 (has links)
Using passive observation within the multi-service TCP/IP networking domain, we have developed a methodology that associates the frequency composition of composite traffic signals with the packet transmission mechanisms of TCP. At the core of our design is the Discrete Wavelet Transform (DWT), used to temporally localise the frequency variations of a signal. Our design exploits transmission mechanisms (including Fast Retransmit/Fast Recovery, Congestion Avoidance, Slow Start, and Retransmission Timer Expiry with Exponential Back-off) that are activated in response to changes within this type of network environment. Manipulation of DWT output, combined with the use of novel heuristics, permits shifts in the frequency spectrum of composite traffic signals to be directly associated with these mechanisms. Our methodology can be adapted to accommodate composite traffic signals that contain a substantial proportion of data originating from non-rate-adaptive sources often associated with Long Range Dependence and Self-Similarity (e.g. Pareto sources). We demonstrate the methodology in two ways. First, it is used to design a congestion-indicator tool that can operate with network control mechanisms that dissipate congestion. Second, using a queue management algorithm (Random Early Detection) as a candidate protocol, we show how our methodology can be adapted to produce a performance-monitoring tool. Our approach has both low operational and low implementation intrusiveness with respect to existing network infrastructure. The methodology requires a single parameter (the arrival rate of traffic at a network node), which can be extracted from almost all network-forwarding devices; this simplifies implementation. Our study was performed within the context of fault management, with design requirements and constraints arising from an in-depth study of the Fault Management Systems (FMS) used by British Telecom on regional UK networks up to February 2000.
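The temporal-localisation property the design relies on can be illustrated with the simplest wavelet, a one-level Haar DWT (the thesis is not restricted to this wavelet): the detail coefficients pinpoint where in time an abrupt change in a traffic-rate signal occurs, while the approximation retains the coarse trend.

```python
def haar_dwt(signal):
    """One-level Haar DWT of an even-length signal: returns
    (approximation, detail) coefficient lists. Large detail
    coefficients localise abrupt rate changes in time."""
    assert len(signal) % 2 == 0, "signal length must be even"
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2
              for i in range(0, len(signal), 2)]
    return approx, detail
```

Applied to an arrival-rate series that jumps suddenly, the detail band is near zero everywhere except at the jump, which is exactly the behaviour a congestion-indicator heuristic can threshold on.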

Designing for shareable interfaces in the wild

Morris, Richard January 2014 (has links)
Despite excitement about the potential of interactive tabletops to support collaborative work, there have been few empirical demonstrations of their effectiveness (Marshall et al., 2011). In particular, while lab-based studies have explored the effects of individual design features, there has been a dearth of studies evaluating the success of systems in the wild. For this technology to be of value, designers and systems builders require a better understanding of how to develop and evaluate tabletop applications to be deployed in real-world settings. This dissertation reports on two systems designed through a process that incorporated ethnography-style observations, iterative design and in-the-wild evaluation. The first study focused on collaborative learning in a medical setting. To address the fact that visitors to a hospital emergency ward were leaving with an incomplete understanding of their diagnosis and treatment, a system was prototyped in a working Emergency Room (ER) with doctors and patients. The system was found to be helpful, but adoption issues hampered its impact. The second study focused on a planning application for visitors to a tourist information centre. Issues and opportunities for a successful, contextually fitted system were addressed, and the system was found to be effective in supporting group planning activities by novice users, in particular by facilitating users' first experiences, providing effective signage and offering assistance to guide the user through the application. This dissertation contributes to the understanding of multi-user systems through a literature review of tabletop systems, collaborative tasks, design frameworks and the evaluation of prototypes. Some support was found for the claim that tabletops are a useful technology for collaboration, and several issues were discussed.
Contributions to understanding in this field are delivered through design guidelines, heuristics, frameworks, and recommendations, in addition to the two case studies to help guide future tabletop system creators.

Bayesian estimation of environmental fields using mobile wireless sensor networks

Lu, Bowen January 2014 (has links)
Environmental fields exist widely around us in daily life, and some of them are so important that they cannot be ignored. For instance, temperature distributions, environmental contamination and nuclear leaks can all be categorised as environmental fields. Some fields are invisible, some change dynamically and some are harmful to humans. Deploying a mobile wireless sensor network (WSN) is therefore a better solution than manually sampling and estimating an environmental field. The Bayesian framework is an elegant mathematical model that reflects human recognition procedures and is widely used for iterative learning processes. Based on two regression methods within this framework, a complete field estimation solution for mobile WSNs is proposed. First, two distributed platforms are provided based on support vector regression (SVR), and centroidal Voronoi tessellation (CVT) is employed to optimise the sensor deployment. Second, to overcome the defects in the SVR-CVT solution, Gaussian process regression (GPR) is investigated, as it additionally provides estimation accuracy information. To further improve the performance of this GPR-based solution, a data selection strategy for GPR and a hybrid criterion for CVT are investigated.
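The key advantage of GPR noted above — it returns an estimation-accuracy (variance) signal alongside the mean — can be sketched with a standard RBF-kernel posterior for a 1-D field. This is a generic textbook formulation, not the thesis implementation; kernel length scale and noise level are illustrative.

```python
import numpy as np

def gpr_posterior(X, y, Xs, length=1.0, noise=1e-6):
    """GP regression with an RBF kernel on 1-D inputs: returns the
    posterior mean and variance at test points Xs. The variance is the
    extra accuracy information GPR offers over SVR."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    K = k(X, X) + noise * np.eye(len(X))   # training covariance + noise
    Ks = k(X, Xs)                          # train-test covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                    # posterior mean at Xs
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - np.sum(v * v, axis=0)  # posterior variance
    return mean, var
```

In a mobile WSN setting, the variance map is what a data-selection or deployment criterion can act on: sensors can be steered toward locations where the posterior variance is highest.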

Load disaggregation and monitoring in a smart office space

Zoha, Ahmed January 2014 (has links)
Technological advancements in sensing, networking and computation have opened up possibilities to sense user-centric information and so address problems such as the conservation of energy in commercial buildings. Research on leveraging such capabilities to optimize energy utilization in a facility or building is relatively new. This thesis presents a framework that capitalizes on the heterogeneous sensing infrastructure present in a smart office space to track the operational states of appliances without the need to deploy an energy meter on every device of interest. The study extends techniques from the Non-Intrusive Load Monitoring (NILM) domain, which automates the detection of appliance activity from aggregated load measurements, by employing sophisticated signal processing and machine learning algorithms. It also addresses challenges such as the inability of existing methods to accurately localize and characterize the state-transition events of low-power appliances, due to the similarity of their power consumption profiles. In addition, the study demonstrates how the effectiveness of traditional approaches is compromised by their inability to recognize multi-state appliance operations, due to the lack of robust appliance signatures extracted from low-granularity power measurements. As a result, the study explores event detection and characterization mechanisms that include the application of singular spectrum transformation for improved event localization, and the extraction of new features to enhance class discrimination between target appliances. It also proposes a multi-modal event characterization framework to deal with appliance classes that exhibit ambiguous overlap of power signatures in a feature space. The aim is to create a unified hybrid space by characterizing the power and acoustic profiles of appliances and optimally combining them using a kernel-based feature fusion strategy.
The study demonstrates how the proposed system can better distinguish between appliances of different categories in this new feature space and consequently achieves higher appliance state estimation accuracy. To evaluate the suitability of non-event-based models for load disaggregation, a specialized variant of the hidden Markov model (HMM), the factorial HMM, is investigated for inferring appliance states from aggregated load measurements. To demonstrate this approach in the real world, a mobile phone application was developed and evaluated in actual practice. In addition to load disaggregation, an interrelated challenge is to identify abnormal or unusual consumption patterns within specific energy measurements. Due to the high volume and noise content of sensor readings, data compression and appropriate feature representations are essential for effective analysis of energy measurements. To address these challenges, the study proposes an anomalous load pattern detection framework that performs wavelet approximation of electrical load curves and further reduces their dimensionality using the classical multidimensional scaling method (CMDS). Results show that the low-dimensional projection of features prior to anomaly detection effectively isolates the anomalous patterns and, as a result, improves the performance of the target anomaly detection models.
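The event-localization problem can be illustrated with the most naive detector — thresholding consecutive differences of the aggregate load. The thesis shows precisely why this baseline breaks down for low-power and multi-state appliances and replaces it with singular spectrum transformation; the sketch below is only the baseline, with an invented threshold.

```python
def detect_events(power, threshold=30.0):
    """Naive NILM event detector: flag sample indices where the
    aggregate power changes by more than `threshold` watts between
    consecutive samples (a step edge suggesting an appliance switched
    state)."""
    return [i for i in range(1, len(power))
            if abs(power[i] - power[i - 1]) > threshold]
```

Any appliance whose step is smaller than the threshold, or whose profile resembles another's, is missed or misattributed — exactly the failure mode that motivates the more robust event characterization developed in the thesis.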

Asynchronous two-way relay networks

Li, Yixin January 2014 (has links)
This thesis summarises the work undertaken during a four-year Ph.D. study at the University of Reading, focusing on the design of emerging two-way relay network (TWRN) strategies under various practical and theoretical conditions. Our work concerns four main topics. The first topic is synchronisation for time-domain (TD) physical-layer network coding (PLNC) with timing asynchrony under Rayleigh block-flat-fading channels. In such a system, it is essential to estimate the channel coefficients at the relay to perform PLNC mapping and detection. We have proposed a training-sequence-based delay and channel estimation algorithm and presented a low-complexity estimation design based on an Alamouti-code structure. Among our findings, we revealed that as long as the signals arrive at the relay with symbol alignment and the relative delay information is sent to the destination nodes, timing asynchrony does not affect system performance. The second topic targets interference mitigation schemes in practical PLNC systems. In TD-based PLNC systems, signals may arrive at the relay with a fractional symbol delay, which introduces inter-symbol interference (ISI). Orthogonal frequency-division multiplexing (OFDM) can be combined with PLNC to combat this timing mismatch, but it is in turn sensitive to carrier frequency offset, which introduces inter-carrier interference (ICI). In these systems, ISI and ICI need to be carefully handled, otherwise they cause serious performance degradation. The thesis has looked into both cancellation and mitigation in PLNC systems, and novel schemes were proposed accordingly. For TD-PLNC systems, the first scheme is a multi-dimensional transmission scheme through pre-coding: ISI can be fully avoided through separate decoding.
The second scheme is an iteration-based algorithm, which enables the relay to reconstruct and eliminate the interference, achieving better performance with reduced complexity compared to other existing schemes. This second method is also extended to OFDM-based PLNC systems to mitigate ICI. The third topic concentrates on limited feedback (LFB) power control in PLNC, which has rarely been addressed in the literature. We have proposed a feedback ratio design based on the characteristics of the channels, where each feedback ratio covers a range of equal probability in the cumulative distribution function of the ratio between the two channels' power gains. The proposed LFB power control scheme with 3 bits approaches the optimal power control scheme. The last topic examines relay selection and dynamic power allocation in analogue network coding (ANC) systems. Three novel power allocation schemes are proposed, which show significant performance improvement and provide a trade-off between computational complexity and performance.
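The PLNC mapping performed at the relay can be illustrated for the simplest case: real-valued BPSK with perfectly aligned symbols and known channel gains (the thesis treats the much harder asynchronous, faded case). The relay maps its noisy superposed observation directly to the XOR of the two source bits rather than decoding each bit separately.

```python
def plnc_xor_map(r, h1=1.0, h2=1.0):
    """Denoise-and-forward PLNC mapping for BPSK (bit b -> symbol 1-2b):
    the relay observes r ~ h1*x1 + h2*x2 + noise and outputs the XOR of
    the source-bit pair whose superposed constellation point is nearest."""
    best_bits, best_dist = None, float("inf")
    for b1 in (0, 1):
        for b2 in (0, 1):
            s = h1 * (1 - 2 * b1) + h2 * (1 - 2 * b2)  # noiseless superposition
            if abs(r - s) < best_dist:
                best_bits, best_dist = (b1, b2), abs(r - s)
    return best_bits[0] ^ best_bits[1]
```

With unit gains the superposed constellation is {-2, 0, +2}; observations near ±2 imply equal bits (XOR 0) and observations near 0 imply differing bits (XOR 1), which is why accurate channel estimation at the relay, the subject of the first topic, is essential.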
