81

Adaptive receiver-based preamble-sampling MAC protocol for low power and lossy wireless sensor networks

Akhavan, Mohammad Reza January 2014 (has links)
Low-power and lossy Wireless Sensor Networks (WSNs) consist of a large number of resource-constrained sensor nodes communicating over a lossy wireless channel. The key design criteria in low-power and lossy WSNs are energy efficiency and reliability of data delivery. Sensors are low-cost, battery-powered electronic devices with limited computational and communication capabilities. They are prone to failure due to energy depletion, hardware malfunction, etc.; this causes links to appear or break and hence the connectivity graph to change. In addition, path loss, shadowing and multipath fading make the links unstable. The main energy savings in sensors can be achieved by keeping the radio in sleep mode for the maximum possible duration. The Medium Access Control (MAC) protocol is responsible for controlling the status of the radio; its behaviour consequently affects the energy efficiency of the sensors. In this work a set of energy-efficient and reliable communication mechanisms for low-power and lossy WSNs is proposed; the mechanisms are also applicable to Internet of Things (IoT) and Machine-to-Machine (M2M) systems. The contributions of this thesis are: We propose Receiver-Based MAC (RB-MAC), a preamble-sampling protocol that dynamically elects the next receiver among potential neighbours based on current channel conditions. The proposed scheme is resilient to lossy links and hence reduces the number of retransmissions. We show by analysis, simulation, and practical implementation how it outperforms state-of-the-art sender-based MAC protocols in terms of energy efficiency, delay and reliability. We introduce two extensions of RB-MAC: the adaptive preamble MAC (ap-MAC) and adaptive sampling MAC (as-MAC) protocols. We demonstrate through analysis and simulation that the proposed extensions improve end-to-end energy efficiency and delay while maintaining comparable reliability of data delivery. Finally, we apply RB-MAC to the IETF ROLL RPL routing protocol [RFC6550] to study its multi-hop performance. The analytical and simulation-based results show significant improvements in energy efficiency, delay and reliability over sender-based MAC protocols.
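The receiver election idea lends itself to a compact illustration. The Python sketch below models only the core of the mechanism under simplifying assumptions (Bernoulli link successes, a fixed duty cycle, and invented names such as `rb_mac_election` and `link_quality`); the thesis analyses a much richer protocol, so this is an illustrative model, not the author's implementation.

```python
import random

def rb_mac_election(neighbours, link_quality, duty_cycle=0.5,
                    max_strobes=10, rng=random):
    """Toy receiver-based election: the sender emits preamble strobes and
    the first neighbour that wakes up and decodes one acknowledges,
    becoming the next-hop receiver for this packet."""
    for strobe in range(max_strobes):
        for n in neighbours:
            awake = rng.random() < duty_cycle          # sampling schedule
            decoded = rng.random() < link_quality[n]   # current channel state
            if awake and decoded:
                return n, strobe     # n elected; a lossy link just skips a strobe
    return None, max_strobes         # no receiver reachable; back off and retry

# Example: a neighbour with a poor link is naturally avoided.
quality = {'n1': 0.2, 'n2': 0.9, 'n3': 0.7}
print(rb_mac_election(['n1', 'n2', 'n3'], quality))
```

Because any sufficiently good neighbour can answer, a single bad link costs at most an extra strobe rather than a full retransmission, which is the intuition behind the reliability gains reported above.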
82

Type systems for nominal terms

Fairweather, Elliot Peter Marshall January 2014 (has links)
This thesis concerns type systems for nominal terms, a new syntax, close to informal practice, for representing and reasoning about formal languages that use binders. Nominal techniques allow a first-order approach to binding in formal languages, providing direct access to both binders and bound variables and a formal axiomatisation of α-equivalence. This approach is promising, not least because it has been shown that unification and matching of nominal representations is both decidable and tractable, giving rise to nominal models of computation and programming languages. Nominal terms, a nominal extension of first-order terms, are now well studied, particularly in the context of equational reasoning. However, type systems for nominal terms have not yet been extensively researched. The development of type systems for nominal terms allows the application of the nominal approach to binding to the areas of specification and verification. Programming languages and environments based upon certifying type systems facilitate formal descriptions of operational semantics and the implementation of efficient compilers. Such features are increasingly important, particularly in critical domains where mathematical certainty is a necessity, such as medicine, telecommunications, transport and defence. This work first defines three variations on a simple type system for nominal terms in the style of Church's simply typed lambda calculus. An ML-style polymorphic type system is then studied, for which a type inference algorithm is provided and implemented. This type system is then applied to equational theories. Two formulations of typed rewriting are presented, one more expressive and one more efficient. Finally, a dependent type system is given for nominal terms extended with atom substitution, with a view to developing a nominal logical framework.
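To make the Church-style setting concrete, here is a minimal Python sketch of a checker over nominal-term-like syntax: atoms, atom abstraction [a]t, and function applications. It is a deliberate simplification: permutations, suspensions and moderated variables, which are central to full nominal terms, are omitted, and the tuple encoding is invented purely for illustration.

```python
def check(term, sig, atom_ty):
    """Return the type of `term`, raising TypeError on a mismatch.
    Types are base names ('Nat') or abstraction pairs ('abs', ty_a, ty_t)."""
    kind = term[0]
    if kind == 'atom':                        # an atom has its declared type
        return atom_ty[term[1]]
    if kind == 'abs':                         # [a]t gets type ('abs', ty(a), ty(t))
        _, a, t = term
        return ('abs', atom_ty[a], check(t, sig, atom_ty))
    if kind == 'app':                         # f(t1, ..., tn) per the signature
        _, f, args = term
        arg_tys, res_ty = sig[f]
        got = [check(t, sig, atom_ty) for t in args]
        if got != list(arg_tys):
            raise TypeError(f"{f}: expected {arg_tys}, got {got}")
        return res_ty
    raise ValueError(f"unknown term form: {kind}")

# A signature with one symbol taking an abstraction argument:
sig = {'lam': ([('abs', 'Nat', 'Nat')], 'Fun')}
term = ('app', 'lam', [('abs', 'a', ('atom', 'a'))])   # lam([a]a)
print(check(term, sig, {'a': 'Nat'}))                  # -> 'Fun'
```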
83

Advances in stringology and applications : from combinatorics via genomic analysis to computational linguistics

Alatabbi, Ali January 2015 (has links)
Written text is one of the oldest methods of representing knowledge. A text can be defined as a logical and consistent sequence of symbols which encodes information in a certain language. Straightforward examples are natural languages, which are typically used by humans to communicate in spoken or written form. Other examples are DNA, RNA and protein sequences: DNA and RNA are nucleic acids that carry the genetic instructions which regulate the development and functioning of living organisms and specify the sequences of amino acids within proteins; proteins are molecules consisting of one or more chains of amino acids that participate in virtually every process within cells. DNA and RNA can be represented as sequences of the nucleobases of their nucleotides, and proteins can be represented by the sequence of amino acids encoded in the corresponding gene. A natural problem which emerges when processing such sequences is to determine whether a specific pattern occurs within another string (the exact string matching problem). As far as natural language texts are concerned, an important problem in computational linguistics is finding the occurrences of a given word or sentence in a volume of text; similarly, in computational biology, identifying given features in DNA sequences is a problem of great significance. On the other hand, one is often interested in quantifying the likelihood that two strings share the same underlying features, based on an explicit similarity/dissimilarity measure (approximate string matching). Both instances of the string matching problem have been studied thoroughly since the early 1960s. This thesis contributes several efficient novel and derived solutions (algorithms and/or data structures) for complex problems originating either from theoretical considerations or from practical problems, studies their experimental performance, and compares the proposed solutions with existing ones. Several of the introduced solutions are motivated by real-world problems in the fields of molecular biology and computational linguistics. Although the studied problems differ in motivation, their solutions utilise similar tools and methodologies. For example, the seminal Aho-Corasick automaton is employed both for finding a set of motifs in a biological sequence and for detecting spelling mistakes in Arabic text. Similarly, a bit-masking trick used to extend the DNA alphabet and accelerate equivalency testing of degenerate characters is applied in the same way to extend the Arabic alphabet and measure similarity between a stem and the derived/inflected forms of a given word.
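The bit-masking trick for degenerate DNA symbols mentioned above can be stated in a few lines. The sketch below uses the standard IUPAC nucleotide codes; two degenerate symbols are compatible exactly when their base sets intersect, which a single bitwise AND decides. (The thesis applies the same idea to the Arabic alphabet, not shown here.)

```python
# Each IUPAC symbol is a bit set over the four bases A=1, C=2, G=4, T=8.
IUPAC = {'A': 1, 'C': 2, 'G': 4, 'T': 8,
         'R': 1 | 4, 'Y': 2 | 8, 'S': 2 | 4, 'W': 1 | 8,
         'K': 4 | 8, 'M': 1 | 2,
         'B': 2 | 4 | 8, 'D': 1 | 4 | 8, 'H': 1 | 2 | 8, 'V': 1 | 2 | 4,
         'N': 15}

def compatible(x, y):
    """Degenerate symbols match iff their base sets share a member."""
    return IUPAC[x] & IUPAC[y] != 0

def occurrences(text, pattern):
    """Naive degenerate matching; practical implementations pack whole
    words of masks to test many positions per machine operation."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if all(compatible(text[i + j], pattern[j]) for j in range(m))]

print(occurrences("ACGTACGT", "RCGT"))  # R = A or G, so matches at 0 and 4
```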
84

Automated theory selection using agent based models

Stratton, Robert James January 2015 (has links)
Models are used as a tool for theory induction and decision making in many contexts, including complex and dynamic commercial environments. New technological and social developments, such as the increasing availability of real-time transactional data and the rising use of online social networks, create a trend towards modelling process automation, and a demand for models that can support decision making in the context of social interaction in the target process. There is often no obvious specification for the form that a particular model should take, and some kind of selection procedure is necessary that can evaluate the properties of a model and its associated theoretical implications. Automated theory selection has already proven successful for identifying model specifications in equation based modelling (EBM), but there has been little progress in developing automatic approaches to agent based model (ABM) selection. I analyse some of the automation methods currently used in EBM and consider what innovations would be required to create an automated ABM specification system. I then compare the effectiveness of simple automatically specified ABM and EBM approaches in selecting optimal strategies in a series of encounters between artificial corporations, mediated through a simulated market environment. I find that as the level of interaction increases, agent based models are more successful than equation based methods in identifying optimal decisions. I then propose a fuller framework for automated ABM specification, based around an agent-centric theory representation which incorporates emergent features, a model-to-theory mapping protocol, a set of theory evaluation methods, a search procedure, and a simple recommendation system. I evaluate the approach using empirical data collected at two different levels of aggregation. Using macro-level data, I derive a theory that represents the dynamics of an online social networking site, in which the data generating process involves interaction between users, and draw management recommendations from it. Using micro-level data, I develop a model from individual-level transaction data, drawing on existing statistical techniques (hidden Markov and multinomial discrete choice models). I find that the results at both levels offer insights into the interrelationships between exogenous factors, agent behaviours, and emergent features. From a quantitative perspective, the automated ABM approach shows small but consistent improvements in fit to the target empirical data compared with EBM approaches.
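As a flavour of what automated selection means here, the sketch below scores a toy adoption ABM against an empirical series and picks the best-fitting parameterisation. Everything in it (the adoption rule, the parameter grid, the squared-error score) is a hypothetical stand-in for the far richer theory representation and evaluation methods described above.

```python
import random

def run_abm(p_spont, p_social, steps=50, n_agents=200, seed=0):
    """Toy adoption ABM: agents adopt spontaneously or through social
    exposure proportional to the current share of adopters."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    series = []
    for _ in range(steps):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p_spont + p_social * share:
                adopted[i] = True
        series.append(sum(adopted) / n_agents)
    return series

def select_theory(candidates, target):
    """Return the candidate minimising squared error against the data:
    the bare skeleton of an automated model-selection loop."""
    def sse(params):
        return sum((s - t) ** 2 for s, t in zip(run_abm(*params), target))
    return min(candidates, key=sse)

target = run_abm(0.01, 0.20, seed=1)                 # pretend this is data
grid = [(p, q) for p in (0.005, 0.01, 0.02) for q in (0.1, 0.2, 0.4)]
print(select_theory(grid, target))                   # -> close to (0.01, 0.2)
```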
85

Distributed Denial of Service (DDoS) attack detection and mitigation

Saied, Alan January 2015 (has links)
A Distributed Denial of Service (DDoS) attack is an organised, distributed packet-storming technique that aims to overload network devices and the communication channels between them. Its major objective is to prevent legitimate users from accessing networks, servers, services, or other computer resources. In this thesis, we propose, implement and evaluate a DDoS Detector approach consisting of detection, defence and knowledge-sharing components. The detection component is designed to detect known and unknown DDoS attacks using an Artificial Neural Network (ANN), while the defence component prevents forged DDoS packets from reaching the victim. DDoS Detectors are distributed across one or more networks in order to mitigate the strength of a DDoS attack. The knowledge-sharing component uses encrypted messages to inform other DDoS Detectors when it detects an attack; this sharing increases the efficacy of detection across the DDoS Detectors. The approach has been evaluated and tested against related approaches in terms of Sensitivity, Specificity, False Positive Rate (FPR), Precision, and Detection Accuracy. A major contribution of the research is that this approach achieves 98% DDoS detection and mitigation accuracy, 5% higher than the best result of previous related approaches.
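A minimal sketch of the detection component's shape, assuming scikit-learn and an invented four-feature traffic summary per time window (packet rate, SYN ratio, distinct source count, mean payload size); the thesis's actual feature set, network architecture and training corpus are not reproduced here.

```python
from sklearn.neural_network import MLPClassifier  # any ANN library would do

# Hypothetical per-window features: [pkt rate, SYN ratio, sources, payload].
X_train = [[1200, 0.08, 40, 512],     # benign windows
           [950, 0.05, 35, 480],
           [98000, 0.92, 9000, 60],   # flood windows
           [87000, 0.88, 8500, 64]]
y_train = [0, 0, 1, 1]                # 0 = legitimate, 1 = DDoS

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

window = [91000, 0.90, 8800, 58]      # a new traffic window to classify
if clf.predict([window])[0] == 1:
    print("DDoS suspected: drop matching flows and alert peer detectors")
```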
86

Magnetic resonance compatible tactile force sensing using optical fibres for minimally invasive surgery

Xie, Hui January 2015 (has links)
This thesis presents research in the design, fabrication and testing of magnetic resonance (MR) compatible tactile array sensors based on light intensity modulation using optical fibres. The popularity of minimally invasive surgery (MIS) opens the field of tactile sensing for medical use, especially in the MR environment. The departure from conventional sensing approaches (such as capacitive and piezoresistive) allows the development of tactile sensors which are low cost, small in size, lightweight, free from electromagnetic interference, water and corrosion resistant, and capable of operating in harsh environments. In the framework of this PhD study, a number of MR compatible tactile array sensors have been developed, including uniaxial tactile array sensors and an x- and y-axis lateral contact sensor. Mathematical models for these newly developed tactile sensors have been created and verified. Force is measured through the displacement of a flexible structure with a known stiffness, modulating in turn the light intensity in the employed optical fibres. For the tactile array sensor, a 2D vision system is applied to detect light signals from all sensing elements via the optical fibres; this new approach offers great potential for high-density tactile array sensing while employing a low-cost vision sensor. For the lateral sensor, high-speed, high-sensitivity detectors are utilized to calculate contact force position and magnitude. Combined with 3D printing technology, a miniature tactile probe head capable of palpation in MIS has been designed and tested in ex vivo tissue palpation experiments. All sensor systems developed in this thesis are MR compatible and immune to electromagnetic noise. The proposed sensing structures and principles show high miniaturization and resolution capabilities, making them suitable for integration with medical tools.
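The sensing principle reduces to two linear relationships, sketched below with invented calibration constants: intensity loss maps to flexure displacement, and displacement maps to force through the known stiffness (Hooke's law).

```python
def force_from_intensity(intensity, i_ref, sensitivity, stiffness):
    """Estimate contact force from coupled light intensity.

    Near the operating point, deflection of the flexible structure reduces
    the coupled intensity roughly linearly: x = (I_ref - I) / s, then
    F = k * x. All constants are illustrative, not the thesis's values."""
    displacement = (i_ref - intensity) / sensitivity   # metres
    return stiffness * displacement                    # newtons

# A 5% intensity drop with s = 50 /m and k = 300 N/m reads as 0.3 N.
print(force_from_intensity(0.95, 1.00, 50.0, 300.0))
```

In the array sensor the same calculation runs per element, with the 2D camera supplying one intensity reading per fibre.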
87

Statistical runtime verification of agent-based simulations

Herd, Benjamin January 2015 (has links)
As a consequence of the growing adoption of agent-based simulations as decision making tools in various (potentially also critical) areas, questions of veracity and validity become increasingly important. In general software and hardware development, formal verification – particularly model checking – has been applied successfully to a wide range of problems; due to their immense complexity, however, agent-based simulations lend themselves to conventional formal verification only in very simple cases and at a disproportionately high cost. The purpose of this work is to address this problem and present a statistical runtime verification approach which focusses on the analysis of the temporal behaviour of large-scale probabilistic agent-based simulations. The approach is tailored to the particular mix of characteristics that agent-based simulations typically exhibit: large populations, randomness, heterogeneity, temporal boundedness and the existence of multiple observational levels. It combines the ideas of runtime verification and statistical model checking and allows for the temporal verification of simulations with hundreds or thousands of constituents and probabilistic state transitions. Instead of requiring a formal model, verification is performed upon traces of the original simulation obtained through repeated execution. Properties are checked on-the-fly, i.e. during the execution of the simulation, which is achieved by interleaving simulation and verification. Evaluation is lazy, i.e. a simulation step is performed only if the property has not already been satisfied or refuted. This reduces the amount of simulation to a minimum and restricts state space exploration to the smallest fragment necessary for finding a definite answer to the given property. Verification results are approximate, but the precision is clearly quantifiable and adjustable by varying the number of simulation runs.
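The interleaving of simulation and lazy property evaluation can be captured in a short loop. In the sketch below (hypothetical names; the property returns True/False once decided and None while still open), each run stops as soon as a verdict exists, and the satisfaction probability is estimated from the fraction of satisfying runs.

```python
import random

def check_runs(step, init, prop, horizon, runs):
    """Statistical, on-the-fly verification sketch: execute the simulation
    step by step and stop each run as soon as `prop` yields a verdict,
    rather than always simulating to the horizon."""
    sat = 0
    for r in range(runs):
        state = init(r)
        verdict = prop(state)
        t = 0
        while verdict is None and t < horizon:  # lazy: only while undecided
            state = step(state)
            verdict = prop(state)
            t += 1
        sat += bool(verdict)   # undecided at the horizon counts as refuted
    return sat / runs

# Toy example: probability that a random walk reaches 10 within 100 steps.
est = check_runs(
    step=lambda s: s + random.choice([-1, 1]),
    init=lambda r: 0,
    prop=lambda s: True if s >= 10 else None,   # None = still open
    horizon=100, runs=2000)
print(f"estimated P(reach 10 within 100 steps) ~ {est:.3f}")
```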
88

Self-localization and environment building methods for small non-holonomic manoeuvrable two-wheel mobile robots

Georgiou, Evangelos January 2015 (has links)
The thesis presents a kinematic and dynamic model of the mobile robot platform, derived by Lagrange-D'Alembert methodologies, and system control using a closed-loop PD controller. Innovative research in self-localization is presented with the use of a double compass configuration that exploits a fusion of relative and absolute localization methods to achieve an analytical solution for position. In order to validate this novel double compass self-localization method, an optimized method was proposed in the form of an overhead computer system, and a two-wheel manoeuvrable nonholonomic mobile robot was developed to facilitate research in self-localization methods with shaft encoders, accelerometers, magnetometers, and gyroscopes. The overhead computer system was used to track non-natural markers on the mobile robot; a novel pseudo-random algorithm with a gradient policy, inspired by the skip-list method, was delivered to significantly improve the image scanning performance in finding those markers. The validation, comparing the data collected from the double compass configuration against the visual tracking data, was carried out using a non-parametric single-sample statistical analysis (the Kolmogorov-Smirnov test); the results validated the null hypothesis with a mean error of less than 12 mm. After solving the translational position of the mobile robot on a 2-dimensional plane, the mobile robot needs to be aware of its 3-dimensional orientation. To achieve this, a 9-axis sensor combining an accelerometer, a gyroscope, and a magnetometer was implemented to form an inertial measurement unit capable of returning a highly accurate self-orientation estimate using a directional cosine matrix, which yields a model free from accumulating error. A novel closed-loop PI controller was derived using the directional cosine matrix. In order to validate the directional cosine matrix method, data was collected from the sensor and compared against visual tracking data; a non-parametric single-sample statistical analysis using the Kolmogorov-Smirnov test validated the null hypothesis with a mean error of less than 1°.
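For orientation, the core of a directional-cosine-matrix filter is the gyro integration step sketched below (numpy-based, first-order, with SVD re-orthonormalisation standing in for simpler renormalisation schemes); the accelerometer/magnetometer PI feedback the thesis derives to cancel gyro drift is omitted.

```python
import numpy as np

def dcm_update(R, gyro, dt):
    """One integration step of the direction cosine matrix (illustrative)."""
    wx, wy, wz = gyro                      # body angular rates, rad/s
    omega = np.array([[0.0, -wz,  wy],
                      [wz,  0.0, -wx],
                      [-wy, wx,  0.0]])    # skew-symmetric rate matrix
    R = R @ (np.eye(3) + omega * dt)       # first-order integration
    # Re-orthonormalise so R stays a valid rotation despite numeric drift.
    u, _, vt = np.linalg.svd(R)
    return u @ vt

R = np.eye(3)
for _ in range(100):                       # 1 s of 100 Hz gyro samples
    R = dcm_update(R, gyro=(0.0, 0.0, np.pi / 2), dt=0.01)
print(np.round(R, 3))                      # ~90 degrees of yaw accumulated
```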
89

Algorithmic problems in strings with applications to the analysis of biological sequences

Barton, Carl Samuel January 2015 (has links)
Recent advances in molecular biology have dramatically changed the way biological data analysis is performed [119, 92]. Next-Generation Sequencing (NGS) technologies produce high-throughput data of highly controlled quality, hundreds of times faster and cheaper than a decade ago. Mapping of short reads to a reference sequence is a fundamental problem in NGS technologies. After finding an occurrence of a high-quality fragment of the read, the rest must be approximately aligned, but a good alignment would not be expected to contain a large number of gaps (consecutive insertions or deletions). We present an alternative alignment algorithm which computes the optimal alignment with a bounded number of gaps. Another problem arising from NGS technologies is merging overlapping reads into a single string. We present a data structure which allows for the efficient computation of the overlaps between two strings as well as being applicable to other problems. Weighted strings are a representation of data that allows for a subtle representation of ambiguity in strings. We present algorithms for several problems on weighted strings: the computation of exact and approximate inverted repeats, the computation of repetitions, and the computation of covers. We also investigate the average-case complexity of wildcard matching. Wildcards can be used to model single nucleotide polymorphisms, so efficient algorithms to search for strings with wildcards are necessary; we investigate how efficient algorithms for this problem can be on average. Finally, many organisms, such as viruses, bacteria, eukaryotic cells, and archaea, have a circular DNA structure. If a biologist wishes to find occurrences of a particular virus in a carrier's DNA sequence, which may not itself be circular, it must be possible to efficiently locate occurrences of circular strings. We present a number of algorithms for circular string matching.
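As one concrete instance, circular string matching admits a two-line reduction: every rotation of a pattern P is exactly a length-|P| substring of P+P. The sketch below uses this quadratic-time observation for clarity; the thesis's algorithms are substantially more efficient.

```python
def circular_occurrences(text, pattern):
    """Positions in `text` where some rotation of `pattern` occurs.
    The length-m substrings of pattern+pattern are exactly its rotations,
    so one membership test against the doubled pattern suffices."""
    m, doubled = len(pattern), pattern + pattern
    return [i for i in range(len(text) - m + 1) if text[i:i + m] in doubled]

# "GACA" is a rotation of "ACAG", so it is reported even though "ACAG"
# itself never occurs in the text.
print(circular_occurrences("TTGACATT", "ACAG"))  # -> [2]
```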
90

QoS-aware routing in future all-IP access networks

Jaron, Alexandre January 2015 (has links)
The proliferation of mobile devices over the past several years has created a whole new world of the Internet. The deluge of applications for every aspect of today's life has raised the expectation of ubiquitous connectivity with a desired Quality of Service (QoS). Although appealing, this violates the original Internet design, which was intended to support neither mobility nor better-than-best-effort delivery. Technology is an ever-advancing need of human society, and the Internet undeniably forms a major part of our lives. Every day, more and more users flood the Internet with enormous amounts of data and information, so this information and traffic must be handled in a way that provides high-speed network routing without loss in data transmission. QoS provisioning has been one of the long-lasting focuses of the network research community. While designed for fixed networks, the use of QoS protocols in IP-based mobile networks, where hosts dynamically change their points of attachment, imposes new challenges to be studied and analysed. Furthermore, massive growth in access network traffic, with its highly unpredictable nature, can cause bottlenecks in some links while others are under-utilised, rendering the load skewed and therefore breaching QoS provisioning commitments. The main objective of this research is to propose a novel QoS mechanism for mobile networks. The new scheme is composed of two approaches accountable for QoS provisioning in next-generation access networks. Firstly, a new method is proposed that minimises the signalling overhead, as well as the interruption in QoS at the time of handover. Through a developed analytical framework and simulation scenario, the performance of the new scheme is investigated thoroughly, with a focus on the figures of merit that affect the efficiency of using QoS signalling protocols in access networks. Secondly, a new QoS-aware routing mechanism is proposed, based on the OSPF protocol, intended to minimise congestion on the links while complying with traffic requirements. OSPF was created to provide flexibility and great scalability, and although widely used today, it does not allow arbitrary splitting of traffic. This research delves into the study and development of IP-based networking, built upon an extension to the OSPF routing protocol, that will foster the integrated functioning of technologies that currently lead the vision for novel telecommunication infrastructures and service provision. This novel QoS-aware approach, Multi-Plane Routing (MPR), is applied in the context of access networks for IP routing. MPR divides the physical network into several logical routing planes, each associated with a dedicated link weight configuration. Network topology and node degree distribution directly impact the performance of our strategy. The foundation of this research's vision for future networks lies in the evolution and derivatives of IP routing inherited from the native Internet, which stand as the solution for networking in the sought-after "all-IP" integrated modern telecommunications infrastructures. MPR is proposed to offer a traffic engineering solution for future all-IP access networks that uses QoS awareness and policies for plane selection to maximise path diversity, increase overall throughput and satisfy QoS requirements for sessions.
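A compressed illustration of the plane-selection idea, with an invented four-node topology and two hypothetical weight configurations: each plane runs ordinary shortest-path routing over its own link weights, and a policy maps traffic classes to planes so flows spread across diverse paths.

```python
import heapq

def dijkstra(graph, weights, src):
    """Shortest-path distances over one routing plane's weight configuration."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v in graph[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Plane 0 favours the A-B-D path; plane 1 is weighted to steer traffic
# via C instead, so the two classes never share a bottleneck link.
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
planes = [
    {('A','B'): 1, ('B','A'): 1, ('A','C'): 5, ('C','A'): 5,
     ('B','D'): 1, ('D','B'): 1, ('C','D'): 1, ('D','C'): 1},
    {('A','B'): 5, ('B','A'): 5, ('A','C'): 1, ('C','A'): 1,
     ('B','D'): 5, ('D','B'): 5, ('C','D'): 1, ('D','C'): 1},
]
plane_for_class = {'voice': 0, 'bulk': 1}   # policy-driven plane selection
for cls, p in plane_for_class.items():
    print(cls, 'A->D cost on plane', p, '=', dijkstra(graph, planes[p], 'A')['D'])
```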
