About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

Scalable wireless sensor networks for dynamic communication environments : simulation and modelling

Barbosa, Pedro January 2011 (has links)
This thesis explores the deployment of Wireless Sensor Networks (WSNs) on localised maritime events. In particular, it focuses on the deployment of a WSN at sea, estimating the challenges that derive from the environment and how they affect communication. This research addresses these challenges through simulation and modelling of communication and environment, evaluating the implications of hardware selection and custom algorithm development. The first part of this thesis consists of the analysis of aspects related to the Medium Access Control (MAC) layer of the network stack in large-scale networks. These details are commonly hidden from upper layers, thus resulting in misconceptions of real deployment characteristics. Results show that simple solutions have greater advantages when the number of nodes within a cluster increases. The second part considers routing techniques, with a focus on energy management and packet delivery. It is shown that, under certain conditions, relaying data can increase energy savings, while at the same time allowing a more even distribution of energy usage between nodes. The third part describes the development of a custom-made network simulator. It starts by considering realistic radio, channel and interference models to allow a trustworthy simulation of the deployment environment. The MAC and routing techniques developed thus far are adapted to the simulator in a cross-layer manner. The fourth part consists of adapting the WSN behaviour to the variable weather and topology found in the chosen application scenario. By analysing the algorithms presented in this work, it is possible to find and use the best alternative under any set of environmental conditions. This mechanism, the environment-aware engine, uses both network and sensing data to optimise performance through a set of rules that involve message delivery and distance between origin and cluster head.
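A minimal sketch of what such an environment-aware rule engine might look like is shown below; the protocol names, metrics and thresholds are illustrative assumptions for this listing, not the rule set defined in the thesis.

```python
# Hypothetical sketch of an environment-aware protocol selector for a maritime WSN.
# Protocol names, metrics, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Conditions:
    delivery_ratio: float      # fraction of packets reaching the cluster head
    distance_to_head_m: float  # estimated distance from origin node to cluster head
    wave_height_m: float       # sensed sea state

def select_strategy(c: Conditions) -> str:
    """Pick a MAC/routing combination from observed network and sensing data."""
    if c.delivery_ratio < 0.6 or c.wave_height_m > 2.0:
        # Poor channel: favour relaying through neighbours to shorten hops.
        return "multi-hop relaying, low duty-cycle MAC"
    if c.distance_to_head_m < 50.0:
        # Close to the cluster head: direct transmission is cheapest.
        return "direct transmission, simple CSMA"
    return "multi-hop relaying, simple CSMA"

print(select_strategy(Conditions(delivery_ratio=0.5, distance_to_head_m=120.0, wave_height_m=2.5)))
```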
512

A prototype parallel multi-FPGA accelerator for SPICE CMOS model evaluation

Maache, Ahmed January 2011 (has links)
Due to the ever-increasing complexity of circuits, EDA tools and algorithms are demanding more computational power. This has made transistor-level simulation a growing bottleneck in the circuit development process. This thesis serves as a proof of concept to evaluate and quantify the cost of using multi-FPGA systems in SPICE-like simulations in terms of acceleration, throughput, area, and power. To this end, a multi-FPGA architecture is designed to exploit the inherent parallelism in the device model evaluation phase within the SPICE simulator. A code transformation flow which converts the high-level device model code to structural VHDL was also implemented. This flow showed that an automatic compiler system to design, map, and optimise SPICE-like simulations on FPGAs is feasible. This thesis has two main contributions. The first contribution is the multi-FPGA accelerator of the device model evaluation, which demonstrated a speedup of 10 times over a conventional processor while consuming six times less power. Results also showed that it is feasible to describe and optimise FPGA pipelined implementations to exploit other classes of applications similar to the SPICE device model evaluation. The constant throughput of the pipelined architecture is one of the main factors allowing the FPGA accelerator to outperform conventional processors. The second contribution lies in the use of multi-FPGA synthesis to optimise the inter-FPGA connections through altering the process of mapping partitions to FPGA devices. A novel technique is introduced which reduces the inter-FPGA connections by an average of 18%. The speedup and power efficiency results showed that the proposed multi-FPGA system can be used by the SPICE community to accelerate transistor-level simulation. The experimental results also showed that it is worthwhile continuing this research further to explore the use of FPGAs to accelerate other EDA tools.
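As an illustration of why constant pipeline throughput matters, the back-of-the-envelope comparison below contrasts a pipelined evaluator that retires one device evaluation per cycle with a sequential evaluator; the pipeline depth, clock rates and per-evaluation cycle counts are made-up assumptions, not figures from the thesis.

```python
# Illustrative throughput comparison; all numbers are assumptions, not thesis results.
def pipelined_time(n_devices: int, pipeline_depth: int, clk_hz: float) -> float:
    """After an initial fill of `pipeline_depth` cycles, one result retires per cycle."""
    cycles = pipeline_depth + n_devices - 1
    return cycles / clk_hz

def sequential_time(n_devices: int, cycles_per_eval: int, clk_hz: float) -> float:
    return n_devices * cycles_per_eval / clk_hz

n = 1_000_000  # device model evaluations per simulator iteration (assumed)
t_fpga = pipelined_time(n, pipeline_depth=120, clk_hz=100e6)   # assumed FPGA pipeline
t_cpu = sequential_time(n, cycles_per_eval=600, clk_hz=3e9)    # assumed CPU cost
print(f"speedup ~ {t_cpu / t_fpga:.1f}x")
```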
513

Linear and ellipsoidal pattern separation : theoretical aspects and experimental analysis

Kharechko, Andriy January 2009 (has links)
This thesis deals with a pattern classification problem, which geometrically implies data separation in some Euclidean feature space. The task is to infer a classifier (a separating surface) from a set or sequence of observations. This classifier would later be used to discern observations of different types. In this work, the classification problem is viewed from the perspective of optimization theory: we suggest an optimization problem for the learning model and adapt optimization algorithms for this problem to solve the learning problem. The aim of this research is twofold, so this thesis can be split into two self-contained parts, because it deals with two different types of classifiers, each in a different learning setting. The first part deals with linear classification in the online learning setting and includes analysis of existing polynomial-time algorithms: the ellipsoid algorithm and the perceptron rescaling algorithm. We establish that they are based on different types of the same space dilation technique, and derive the parametric version of the latter algorithm, which allows us to improve its complexity bound and exploit some extra information about the problem. We also carry over some results from information-based complexity theory to the optimization model to suggest tight lower bounds on the learning complexity of this family of problems. To conclude this study, we experimentally test both algorithms on the positive semidefinite constraint satisfaction problem. Numerical results confirm our conjectures on the behaviour of the algorithms as the dimension of the problem grows. In the second part, we shift our focus from linear to ellipsoidal classifiers, which form a subset of second-order decision surfaces, and tackle a pattern separation problem with two concentric ellipsoids, where the inner ellipsoid encloses one class (normally our class of interest, if we have one) and the outer excludes inputs of the other class(es). The classification problem leads to a semidefinite program, which allows us to harness efficient interior-point algorithms for solving it. This part includes analysis of the maximal separation ratio algorithm.
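For the linear part of the problem, a plain perceptron update (a much simpler relative of the perceptron rescaling algorithm analysed in the thesis, shown only to illustrate how a separating hyperplane is inferred from labelled observations) can be sketched as follows; the data are synthetic.

```python
# Plain perceptron on synthetic linearly separable data; illustrates linear
# classification only, not the perceptron rescaling or ellipsoid algorithms.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)

w = np.zeros(3)
for _ in range(100):                    # passes over the data
    mistakes = 0
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i) <= 0:        # misclassified (or on the boundary)
            w += y_i * x_i              # perceptron correction
            mistakes += 1
    if mistakes == 0:
        break

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```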
514

Table-top XUV nanoscope

Grant-Jacob, James January 2011 (has links)
This thesis documents the development of a table-top extreme-ultraviolet (XUV) nanoscope suitable for coherent diffractive imaging (CDI). Intense, spatially coherent, ultrashort XUV and X-ray pulses are desired for nanoscale biological and material imaging. Such radiation can be produced via high harmonic generation (HHG) by focusing a highly intense ultrashort laser pulse into gas. In order to obtain high-flux XUV radiation suitable for CDI, various generation conditions are explored. By observing the fluorescence from an argon gas jet to position the laser focus into different regions within the jet, a fourfold variation in XUV yield is achieved. Maximum output flux is obtained for the 19th harmonic when the laser is focused into the Mach disc of the jet. To further increase the XUV flux, HHG from a larger generation region (an argon-filled pipe) is also demonstrated. The most intense harmonic is nearly fifty times more intense and 10 nm shorter in wavelength compared with the most intense harmonic generated from an argon gas jet. Maximum generated XUV flux occurs when the laser focus is positioned after the pipe. In addition, a reduction in the number of harmonics in the output spectrum is also achieved by positioning the laser focus after the pipe. Using the high harmonics generated from the argon-filled pipe for XUV scattering, CDI is used to reveal the nanoscale structure of micron-sized objects. This thesis demonstrates the imaging of a 5 μm pinhole, a 7.5 μm FIB (focused ion beam) sample and a biological sample using the table-top XUV nanoscope. A maximum reconstructed object resolution of ~300 nm is achieved. The work described here will aid in the development of a table-top nanoscope capable of routine imaging.
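CDI recovers an object from the modulus of its far-field diffraction pattern by iterative phase retrieval. The minimal error-reduction loop below is a textbook version of that idea on a synthetic object, not the reconstruction code or data used in the thesis.

```python
# Minimal error-reduction phase retrieval on a synthetic object; a textbook CDI
# reconstruction loop, not the thesis's actual algorithm or data.
import numpy as np

n = 64
obj = np.zeros((n, n))
obj[24:40, 28:36] = 1.0                      # synthetic "sample"
support = obj > 0                            # assume the support is known
measured_modulus = np.abs(np.fft.fft2(obj))  # detector records only |F|

guess = np.random.default_rng(1).random((n, n))
for _ in range(500):
    F = np.fft.fft2(guess)
    F = measured_modulus * np.exp(1j * np.angle(F))   # enforce measured modulus
    g = np.fft.ifft2(F).real
    guess = np.where(support, g, 0.0)                 # enforce support constraint

err = np.linalg.norm(guess - obj) / np.linalg.norm(obj)
print(f"relative reconstruction error: {err:.3f}")
```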
515

Tracing fine-grained provenance in stream processing systems using a reverse mapping method

Sansrimahachai, Watsawee January 2012 (has links)
Applications that require continuous processing of high-volume data streams have grown in prevalence and importance. These kinds of systems often process streaming data in real time or near real time and provide instantaneous responses in order to support precise and on-time decisions. In such systems it is difficult to know exactly how a particular result is generated. However, such information is extremely important for the validation and verification of stream processing results. Therefore, it is crucial that stream processing systems have a mechanism for tracking provenance - the information pertaining to the process that produced result data - at the level of individual stream elements, which we refer to as fine-grained provenance tracking for streams. The traceability of stream processing systems allows users to validate individual stream elements, to verify the computation that took place and to understand the chain of reasoning that was used in the production of a stream processing result. Several recent solutions to provenance tracking in stream processing systems mainly focus on coarse-grained stream provenance, in which the level of granularity for capturing provenance information is not detailed enough to address our problem. This thesis proposes a novel fine-grained provenance solution for streams that exploits a reverse mapping method to precisely capture dependency relationships for every individual stream element. It is also designed to support a stream-specific provenance query mechanism, which performs provenance queries dynamically over streams of provenance assertions without requiring the assertions to be stored persistently. The dissertation makes four major contributions to the state of the art. The first is a provenance model for streams that allows the provenance of individual stream elements to be obtained. The second is a provenance query method which utilizes a reverse mapping method - stream ancestor functions - in order to obtain the provenance of a particular stream processing result. The third contribution is a stream-specific provenance query mechanism that enables provenance queries to be computed on-the-fly without requiring provenance assertions to be stored persistently. The fourth contribution is the performance characteristics of our stream provenance solution. It is shown that the storage overhead for provenance collection can be reduced significantly by using our storage reduction technique, and that the marginal cost of storage consumption is constant in the number of input stream events. A 4% overhead for the persistent provenance approach and a 7% overhead for the stream-specific query approach are observed as the impact of provenance recording on system performance. In addition, our stream-specific query approach offers low-latency processing (0.3 ms per additional component) with reasonable memory consumption.
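A minimal sketch of the reverse-mapping idea, using a hypothetical sliding-window average operator: given an output element, a stream ancestor function returns the identifiers of the input elements that contributed to it, so provenance queries can be answered without storing every intermediate assertion. The identifiers and window model below are illustrative, not the thesis's API.

```python
# Hypothetical stream ancestor function for a sliding-window average operator.
from typing import List, Tuple

WINDOW = 3  # each output is the average of the last 3 input elements (assumed)

def windowed_average(inputs: List[float]) -> List[Tuple[int, float]]:
    """Produce (output_index, value) pairs for each full window."""
    outs = []
    for i in range(WINDOW - 1, len(inputs)):
        outs.append((i - WINDOW + 1, sum(inputs[i - WINDOW + 1:i + 1]) / WINDOW))
    return outs

def ancestors(output_index: int) -> List[int]:
    """Reverse mapping: indices of the input stream elements behind one output."""
    return list(range(output_index, output_index + WINDOW))

stream = [4.0, 8.0, 6.0, 2.0, 10.0]
print(windowed_average(stream))     # outputs with their indices
print(ancestors(2))                 # provenance of the third output: inputs 2, 3, 4
```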
516

Keystroke dynamics as a biometric

Marsters, John-David January 2009 (has links)
Modern computer systems rely heavily on methods of authentication and identity verification to protect sensitive data. One of the most robust protective techniques involves adding a layer of biometric analysis to other security mechanisms, as a means of establishing the identity of an individual beyond reasonable doubt. In the search for a biometric technique which is both low-cost and transparent to the end user, researchers have considered analysing the typing patterns of keyboard users to determine their characteristic timing signatures. Previous research into keystroke analysis has either required fixed performance of known keyboard input or relied on artificial tests involving the improvisation of a block of text for analysis. It is proposed that this is insufficient to determine the nature of unconstrained typing in a live computing environment. In an attempt to assess the utility of typing analysis for improving intrusion detection on computer systems, we present the notion of ‘genuinely free text’ (GFT). Through the course of this thesis, we discuss the nature of GFT and attempt to address whether it is feasible to produce a lightweight software platform for monitoring GFT keystroke biometrics, while protecting the privacy of users. The thesis documents in depth the design, development and deployment of the multigraph-based BAKER software platform, a system for collecting statistical GFT data from live environments. This software platform has enabled the collection of an extensive set of keystroke biometric data for a group of participating computer users, the analysis of which we also present here. Several supervised learning techniques were used to demonstrate that the richness of keystroke information gathered from BAKER is indeed sufficient to recommend multigraph keystroke analysis as a means of augmenting computer security. In addition, we present a discussion of the feasibility of applying data obtained from GFT profiles in circumventing traditional static and free text analysis biometrics.
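To make the idea of multigraph timing features concrete, the sketch below extracts digraph latencies (the time between consecutive key presses) from a hypothetical event log; the event format and feature choice are assumptions for illustration, not BAKER's actual data model.

```python
# Hypothetical digraph-latency extraction from keystroke events; the event format
# is an assumption for illustration, not the BAKER platform's real schema.
from collections import defaultdict
from statistics import mean

# (key, press_time_ms) events as they might be logged during free typing
events = [("t", 0), ("h", 95), ("e", 180), (" ", 260), ("t", 400), ("h", 490)]

digraph_latencies = defaultdict(list)
for (k1, t1), (k2, t2) in zip(events, events[1:]):
    digraph_latencies[(k1, k2)].append(t2 - t1)

# A simple per-user timing signature: mean latency per digraph
signature = {dg: mean(ts) for dg, ts in digraph_latencies.items()}
print(signature)   # e.g. {('t', 'h'): 92.5, ('h', 'e'): 85, ...}
```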
517

Programming languages and principles for read-write linked data

Horne, Ross J. January 2011 (has links)
This work addresses a gap in the foundations of computer science. In particular, only a limited number of models address design decisions in modern Web architectures. The development of the modern Web architecture tends to be guided by the intuition of engineers. The intuition of an engineer is probably more powerful than any model; however, models are important tools to aid principled design decisions. No model is sufficiently strong to provide absolute certainty of correctness; however, an architecture accompanied by a model is stronger than an architecture accompanied solely by intuition led by the personal, hence subjective, subliminal ego. The Web of Data describes an architecture characterised by key W3C standards. Key standards include a semi-structured data format, entailment mechanism and query language. Recently, prominent figures have drawn attention to the necessity of update languages for the Web of Data, coining the notion of Read-Write Linked Data. A dynamic Web of Data with updates is a more realistic reflection of the Web. An established and versatile approach to modelling dynamic languages is to define an operational semantics. This work provides such an operational semantics for a Read-Write Linked Data architecture. Furthermore, the model is sufficiently general to capture the established standards, including queries and entailments. Each feature is relatively easily modelled in isolation; however, a model which checks that the key standards socialise is a greater challenge, to which operational semantics are suited. The model validates most features of the standards while raising some serious questions. Further to evaluating W3C standards, the operational semantics provides a foundation for static analysis. One approach is to derive an algebra for the model. The algebra is proven to be sound with respect to the operational semantics. Soundness ensures that the algebraic rules preserve operational behaviour. If the algebra establishes that two updates are equivalent, then they have the same operational capabilities. This is useful for optimisation, since the real cost of executing the updates may differ, despite their equivalent expressive powers. A notion of operational refinement is discussed, which allows a non-deterministic update to be refined to a more deterministic update. Another approach to the static analysis of Read-Write Linked Data is through a type system. The simplest type system for this application simply checks that well-understood terms which appear in the semi-structured data, such as numbers and strings of characters, are used correctly. Static analysis then verifies that basic runtime errors in a well-typed program do not occur. Type systems for URIs are also investigated, inspired by W3C standards. Type systems for URIs are controversial, since URIs have no internal structure and thus have no obvious non-trivial types. Thus a flexible type system which accommodates several approaches to typing URIs is proposed.
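One way to picture an update step over Linked Data is as a transition on a store of RDF-style triples. The toy reduction below applies a delete/insert update atomically to a graph held as a Python set; it is an illustrative model only, not the operational semantics defined in the thesis.

```python
# Toy model of a Read-Write Linked Data update: a store of (subject, predicate,
# object) triples and a delete/insert operation applied as one step.
store = {
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:alice", "foaf:knows", "ex:bob"),
}

def update(store: set, delete: set, insert: set) -> set:
    """Apply a delete/insert update atomically, returning the new store."""
    return (store - delete) | insert

store = update(
    store,
    delete={("ex:alice", "foaf:knows", "ex:bob")},
    insert={("ex:alice", "foaf:knows", "ex:carol")},
)
print(sorted(store))
```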
518

Plasma enhanced chemical vapor deposition of nanocrystalline graphene and device fabrication development

Schmidt, Marek E. January 2012 (has links)
Large-area growth of high-quality graphene remains a challenge, and is currently dominated by chemical vapor deposition (CVD) on metal catalyst films. This method requires a transfer of the graphene onto an insulating substrate for electronic applications, and the graphene film quality and performance can vary with the transfer. A more attractive approach is plasma enhanced chemical vapor deposition (PECVD) of graphene and nanocrystalline graphene (NCG) directly on insulating substrates. The aim of this project was to explore the deposition process and microfabrication processes based on these NCG films. A deposition process for nanocrystalline graphene was developed in this work based on parallel-plate PECVD. NCG films with thicknesses between 3 and 35 nm were deposited directly on wet thermally oxidized silicon wafers with a diameter of 150 mm, quartz glass and sapphire glass. High NCG thickness uniformities of 87% over the full wafer were achieved. Surface roughness was measured by atomic force microscopy and shows root mean square (RMS) values of less than 0.23 nm for 3 nm thin films. NCG films deposited on quartz and sapphire show promising performance as a transparent conductor, with 13 kΩ/□ sheet resistance at 85% transparency. Furthermore, the suitability of the developed PECVD NCG films for microfabrication was demonstrated. Microfabrication process development was focused on four device types. NCG membranes were fabricated based on through-wafer inductively coupled plasma etching from the back, and consecutive membrane release by HF vapor etching. The fabrication of suspended NCG strips, based on HF vapor release, shows promising results, but was not entirely successful due to insufficient thickness of the sacrificial oxide. Top-gated NCG strips were successfully fabricated, and the increased modulation by the top gate is demonstrated. Finally, NCG nanowire fabrication was performed on 150 mm wafers. Experiments showed an increased back-gate modulation effect with reduced NCG thickness, although no nanowire formation was observed. A highly accurate focused ion beam (FIB) prototyping technique was developed and applied to exfoliated graphene in this work. This technique systematically avoids any exposure of the graphene to Ga+ ions through the use of an alignment marker system, achieving alignment accuracies better than 250 nm. Contacts were deposited by FIB- or e-beam-assisted tungsten deposition, and FIB trench milling was used to confine conduction to a narrow channel. A channel passivation method based on e-beam-assisted insulator deposition has been demonstrated, and showed a reduction of ion damage to the graphene. Three fabricated transistor structures were electrically characterized.
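To put the transparent-conductor numbers in context, the snippet below applies the standard thin-film figure of merit (the ratio of DC to optical conductivity) to the quoted sheet resistance and transparency; the formula is textbook, not taken from the thesis.

```python
# Standard transparent-conductor figure of merit sigma_dc/sigma_opt, from the
# quoted 13 kOhm/sq sheet resistance at 85% transparency. Uses the textbook
# relation T = (1 + Z0/(2*Rs) * sigma_opt/sigma_dc)^(-2); not from the thesis.
Z0 = 376.73          # impedance of free space in ohms
Rs = 13_000.0        # sheet resistance in ohms per square
T = 0.85             # optical transparency

fom = Z0 / (2 * Rs * (T ** -0.5 - 1))
print(f"sigma_dc / sigma_opt ~ {fom:.2f}")   # ~0.17 for these values
```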
519

Content-driven superpixels and their applications

Lowe, Richard January 2013 (has links)
This thesis develops a new superpixel algorithm, content-driven superpixels (CDS), that displays excellent visual reconstruction of the original image. It achieves high stability across multiple random initialisations by producing superpixels that correspond directly to local image complexity. This is achieved by growing superpixels and dividing them on image variation. Existing analysis was not sufficient to take these properties into account, so new measures of oversegmentation provide new insight into the optimum superpixel representation. As a consequence of the algorithm, it was discovered that CDS have properties that have eluded previous attempts, such as initialisation invariance and stability. The completely unsupervised nature of CDS makes them highly suitable for tasks such as application to a database containing images of unknown complexity. These new superpixel properties have allowed new applications for superpixel pre-processing to be produced: image segmentation, image compression, scene classification and focus detection. In addition, a new method of objectively analysing regions of focus has been developed using light-field photography.
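The divide-on-variation idea can be illustrated with a recursive split on local image variance: regions are divided until each falls below a variance threshold, so more complex areas receive more regions. The quadtree-style sketch below mimics that behaviour only; it is not the CDS algorithm itself.

```python
# Illustrative quadtree-style division on image variance; mimics the
# divide-on-variation idea only, not the CDS algorithm.
import numpy as np

def divide(img, x0, y0, x1, y1, var_thresh=0.01, min_size=4, regions=None):
    if regions is None:
        regions = []
    patch = img[y0:y1, x0:x1]
    if patch.size <= min_size ** 2 or patch.var() < var_thresh:
        regions.append((x0, y0, x1, y1))          # homogeneous enough: keep as one region
        return regions
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2       # otherwise split into four
    for bx0, by0, bx1, by1 in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        divide(img, bx0, by0, bx1, by1, var_thresh, min_size, regions)
    return regions

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:50, 20:50] = rng.random((30, 30))          # a textured patch in a flat image
print(len(divide(img, 0, 0, 64, 64)), "regions")  # more regions where variance is high
```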
520

Behavioural properties and dynamic software update for concurrent programmes

Anderson, Gabrielle January 2013 (has links)
Software maintenance is a major part of the development cycle. The traditional methodology for rolling out an update to existing programs is to shut down the system, modify the binary, and restart the program. Downtime has significant disadvantages. In response to such concerns, researchers and practitioners have investigated how to perform updates on running programs whilst maintaining various desired properties. In a multi-threaded setting this is further complicated by the interleaving of different threads' actions. In this thesis we investigate how to prove that safety and liveness are preserved when updating a program. We present two possible approaches; the main intuition behind each is to find quiescent points where updates are safe. The first approach requires global synchronisation, and is more generally applicable, but can delay updates indefinitely. The second restricts the class of programs that can be updated, but permits update without global synchronisation, and guarantees application of the update. We provide full proofs of all relevant properties.
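A minimal sketch of the quiescent-point idea under global synchronisation, assuming a hypothetical worker loop: the new function body is swapped in only when every thread has signalled that it is between tasks. The names and structure are illustrative, not the thesis's formal model.

```python
# Minimal sketch of update at a quiescent point with global synchronisation:
# the new behaviour is swapped in only when no thread is mid-task.
import threading

pending = {"handler": None}          # update waiting to be applied, if any
handler = lambda x: x + 1            # current program behaviour
barrier = threading.Barrier(3)       # 2 workers + 1 updater meet at quiescent points

def worker():
    for _ in range(5):
        handler(1)                   # do one unit of work with the current code
        barrier.wait()               # quiescent point: safe to update here
        barrier.wait()               # wait until any pending update has been applied

def updater():
    global handler
    for _ in range(5):
        barrier.wait()               # all workers are quiescent now
        if pending["handler"] is not None:
            handler = pending["handler"]   # apply the update atomically
            pending["handler"] = None
        barrier.wait()               # release workers to continue with new code

threads = [threading.Thread(target=worker) for _ in range(2)] + [threading.Thread(target=updater)]
pending["handler"] = lambda x: x * 2       # request an update
for t in threads: t.start()
for t in threads: t.join()
print(handler(3))                    # 6: the updated behaviour is now in effect
```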
