21

Distributable defect localization using Markov models

Portnoy, William, January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 135-146).
22

Using an object-oriented approach to develop a software application

Duvall, Paul. January 2006 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on Aug. 30, 2006). Includes bibliographical references.
23

Efficient streaming for high fidelity imaging

McNamee, Joshua January 2017 (has links)
Researchers and practitioners of graphics, visualisation and imaging have an ever-expanding list of technologies to account for, including (but not limited to) HDR, VR, 4K, 360°, light field and wide colour gamut. As these technologies move from theory to practice, the methods of encoding and transmitting this information need to become more advanced and capable year on year, placing greater demands on latency, bandwidth, and encoding performance. High dynamic range (HDR) video is still in its infancy; the tools for capture, transmission and display of true HDR content are still restricted to professional technicians. Meanwhile, computer graphics are nowadays near-ubiquitous, but to achieve the highest fidelity in real or even reasonable time a user must be located at or near a supercomputer or other specialist workstation. These physical requirements mean that it is not always possible to demonstrate these graphics in any given place at any time, and when the graphics in question are intended to provide a virtual reality experience, the constraints on performance and latency are even tighter.

This thesis presents an overall framework for adapting upcoming imaging technologies for efficient streaming, constituting novel work across three areas of imaging technology. Over the course of the thesis, high dynamic range capture, transmission and display is considered, before specifically focusing on the transmission and display of high fidelity rendered graphics, including HDR graphics. Finally, this thesis considers the technical challenges posed by incoming head-mounted displays (HMDs). In addition, a full literature review is presented across all three of these areas, detailing state-of-the-art methods for approaching all three problem sets.

In the area of high dynamic range capture, transmission and display, a framework is presented and evaluated for efficient processing, streaming and encoding of high dynamic range video using general-purpose graphics processing unit (GPGPU) technologies. For remote rendering, state-of-the-art methods of augmenting a streamed graphical render are adapted to incorporate HDR video and high fidelity graphics rendering, specifically with regard to path tracing. Finally, a novel method is proposed for streaming graphics to an HMD for virtual reality (VR). This method utilises 360° projections to transmit and reproject stereo imagery to an HMD with minimal latency, with an adaptation for the rapid local production of depth maps.
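To make the reprojection idea concrete, here is a minimal Python/NumPy sketch of the core of any 360° (equirectangular) reprojection: generating view rays for an HMD viewport and mapping them to panorama texture coordinates. It is purely illustrative and not the thesis's pipeline; the function names, the camera convention (+y up, -z forward) and the parameters are assumptions.

```python
import numpy as np

def directions_to_equirect_uv(dirs):
    """Map unit view directions (N, 3) to equirectangular texture
    coordinates in [0, 1] x [0, 1].

    Convention (an assumption for this sketch): +y is up, -z is the
    forward direction at the centre of the panorama.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    lon = np.arctan2(x, -z)                     # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(y, -1.0, 1.0))      # latitude in [-pi/2, pi/2]
    u = lon / (2.0 * np.pi) + 0.5
    v = 0.5 - lat / np.pi
    return np.stack([u, v], axis=1)

def viewport_rays(width, height, fov_y_deg, rotation):
    """Unit view directions for the pixel centres of a pinhole viewport,
    rotated by a 3x3 head-pose matrix."""
    fov_y = np.radians(fov_y_deg)
    aspect = width / height
    ys, xs = np.meshgrid(np.linspace(1, -1, height), np.linspace(-1, 1, width),
                         indexing="ij")
    dirs = np.stack([xs * np.tan(fov_y / 2) * aspect,
                     ys * np.tan(fov_y / 2),
                     -np.ones_like(xs)], axis=-1).reshape(-1, 3)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return dirs @ rotation.T

# Toy usage: look up where a 640x360, 90-degree viewport samples the panorama.
uv = directions_to_equirect_uv(viewport_rays(640, 360, 90.0, np.eye(3)))
```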
24

Automated equivalence checking of quantum information systems

Ardeshir-Larijani, Ebrahim January 2014 (has links)
Quantum technologies have progressed beyond the laboratory setting and are beginning to make an impact on industrial development. The construction of practical, general-purpose quantum computers has been challenging, to say the least, but quantum cryptographic and communication devices have been available in the commercial marketplace for a few years. Quantum networks have been built in various cities around the world, and plans are afoot to launch a dedicated satellite for quantum communication. Such new technologies demand rigorous analysis and verification before they can be trusted in safety- and security-critical applications.

In this thesis we investigate the theory and practice of equivalence checking of quantum information systems. We present a tool, Quantum Equivalence Checker (QEC), which uses a concurrent language for describing quantum systems and performs verification by checking equivalence between specification and implementation. For our process algebraic language CCSq, we define an operational semantics and a superoperator semantics. While, in general, simulation of quantum systems using current computing technology is infeasible, we restrict ourselves to the stabilizer formalism, which admits efficient simulation algorithms and representations of quantum states. Using the stabilizer representation of quantum states, we introduce various algorithms for testing equality of stabilizer states.

In this thesis, we consider concurrent quantum protocols that behave functionally in the sense of computing a deterministic input-output relation for all interleavings of a concurrent system. Crucially, these input-output relations can be abstracted by superoperators, enabling us to take advantage of linearity. This allows us to analyse the behaviour of protocols with arbitrary input by simulating their operation on a finite basis set consisting of stabilizer states. We present algorithms for checking the functionality and equivalence of quantum protocols. Despite the limitations of the stabilizer formalism and of the range of protocols that can be analysed using equivalence checking, QEC is applied to specify and verify a variety of interesting and practical quantum protocols, from quantum communication and quantum cryptography to quantum error correction and quantum fault-tolerant computation, where for each protocol different sequential and concurrent models are defined in CCSq. We also explain the implementation details of the QEC tool and report on the experimental results produced by using it on the verification of a number of case studies.
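For illustration only, the sketch below checks equivalence of two tiny quantum circuits by comparing dense unitary matrices up to a global phase, a naive stand-in for the stabilizer-based, superoperator-level checking that QEC actually performs. The circuits, gate matrices and the `equivalent` helper are assumptions for this example.

```python
import numpy as np

# Two-qubit gates in the computational basis |00>, |01>, |10>, |11>.
CNOT_01 = np.array([[1, 0, 0, 0],   # control = qubit 0, target = qubit 1
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=complex)
CNOT_10 = np.array([[1, 0, 0, 0],   # control = qubit 1, target = qubit 0
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def equivalent(u, v, tol=1e-9):
    """Check equality of two unitaries up to a global phase."""
    idx = np.unravel_index(np.argmax(np.abs(u)), u.shape)  # a non-zero entry
    phase = v[idx] / u[idx]
    return np.allclose(u * phase, v, atol=tol)

# "Implementation": three alternating CNOTs; "specification": a SWAP gate.
implementation = CNOT_01 @ CNOT_10 @ CNOT_01
print(equivalent(implementation, SWAP))  # True
```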
25

Monitoring, analysis and optimisation of I/O in parallel applications

Wright, Steven A. January 2014 (has links)
High performance computing (HPC) is changing the way science is performed in the 21st century; experiments that once took enormous amounts of time, were dangerous and often produced inaccurate results can now be performed and refined in a fraction of the time in a simulation environment. Current-generation supercomputers are running in excess of 10¹⁶ floating point operations per second, and the push towards exascale will see this increase by two orders of magnitude. To achieve this level of performance it is thought that applications may have to scale to potentially billions of simultaneous threads, pushing hardware to its limits and severely impacting failure rates. To reduce the cost of these failures, many applications use checkpointing to periodically save their state to persistent storage, such that, in the event of a failure, computation can be restarted without significant data loss. While computational power has grown by approximately 2× every 18-24 months, persistent storage has lagged behind; checkpointing is fast becoming a bottleneck to performance.

Several software and hardware solutions have been proposed to address the I/O problem currently being experienced in the HPC community, and this thesis examines some of these. Specifically, this thesis presents a tool designed for analysing and optimising the I/O behaviour of scientific applications, as well as a tool designed to allow the rapid analysis of one software solution to the problem of parallel I/O, namely the parallel log-structured file system (PLFS). This thesis ends with an analysis of a modern Lustre file system under contention from multiple applications and multiple compute nodes running the same problem through PLFS. The results and analysis presented outline a framework through which application settings and procurement decisions can be made.
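To make the checkpointing trade-off concrete, the sketch below applies Young's classical approximation for the optimal checkpoint interval, t ≈ √(2CM), where C is the checkpoint write time and M is the mean time between failures. This is a standard first-order model used here for illustration; it is not taken from the thesis, and the numbers are invented.

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation for the optimal interval between checkpoints:
    t_opt ~= sqrt(2 * C * M), where C is the time to write one checkpoint
    and M is the mean time between failures."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def wasted_fraction(interval_s, checkpoint_cost_s, mtbf_s):
    """Rough fraction of machine time lost to checkpoint writes plus the
    expected re-computation after a failure (first-order model)."""
    overhead = checkpoint_cost_s / interval_s   # time spent writing checkpoints
    rework = (interval_s / 2.0) / mtbf_s        # average work lost per failure
    return overhead + rework

# Hypothetical numbers: a 10-minute checkpoint on a machine with a 12-hour MTBF.
C, M = 600.0, 12 * 3600.0
t_opt = young_interval(C, M)
print(f"optimal interval: {t_opt / 3600.0:.2f} h, "
      f"waste at optimum: {wasted_fraction(t_opt, C, M):.1%}")
```

With these invented figures the optimal interval comes out at roughly two hours, with about one sixth of machine time lost to I/O and re-computation, which illustrates why slow persistent storage becomes a first-order concern at scale.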
26

Towards scalable adaptive mesh refinement on future parallel architectures

Beckingsale, David Alexander January 2015 (has links)
In the march towards exascale, supercomputer architectures are undergoing a significant change. Limited by power consumption and heat dissipation, future supercomputers are likely to be built around a lower-power many-core model. This shift in supercomputer design will require sweeping code changes in order to take advantage of the highly parallel architectures. Evolving or rewriting legacy applications to perform well on these machines is a significant challenge. Mini-applications, small computer programs that represent the performance characteristics of some larger application, can be used to investigate new programming models and improve the performance of the legacy application by proxy. These applications, being both easy to modify and representative, are essential for establishing a path to move legacy applications into the exascale era.

The focus of the work presented in this thesis is the design, development and employment of a new mini-application, CleverLeaf, for shock hydrodynamics with block-structured adaptive mesh refinement (AMR). We report on the development of CleverLeaf, and show how the fresh start provided by a mini-application can be used to develop an application that is flexible, accurate, and easy to employ in the investigation of exascale architectures. We also detail the development of the first reported resident parallel block-structured AMR library for Graphics Processing Units (GPUs). Extending the SAMRAI library using the CUDA programming model, we develop datatypes that store data only in GPU memory, as well as the necessary operators for moving and interpolating data on an adaptive mesh. We show that executing AMR simulations on a GPU is up to 4.8× faster than on a CPU, and demonstrate scalability on over 4,000 nodes using a combination of CUDA and MPI.

Finally, we show how mini-applications can be employed to improve the performance of production applications on existing parallel architectures by selecting the optimal application configuration. Using CleverLeaf, we identify the most appropriate configurations on three contemporary supercomputer architectures. Selecting the best parameters for our application can reduce run-time by up to 82% and reduce memory usage by up to 32%.
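As a toy illustration of the refinement step in block-structured AMR (not CleverLeaf or SAMRAI code; the gradient criterion and threshold are assumptions), the Python sketch below flags cells of a 2D field for refinement wherever a normalised gradient estimate exceeds a threshold, which is the kind of decision that drives where finer patches are placed.

```python
import numpy as np

def flag_for_refinement(field, threshold=0.1):
    """Return a boolean mask of cells whose normalised gradient magnitude
    exceeds `threshold`; these cells would be covered by finer patches."""
    gy, gx = np.gradient(field)
    grad_mag = np.hypot(gx, gy)
    scale = np.max(np.abs(field)) + 1e-12      # avoid division by zero
    return grad_mag / scale > threshold

# Toy field: a smooth background with a sharp circular "shock front".
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx / 2, y - ny / 2)
field = np.tanh((r - 16.0) / 1.5)              # steep transition near r = 16

flags = flag_for_refinement(field, threshold=0.1)
print(f"{flags.sum()} of {flags.size} cells flagged for refinement")
```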
27

Hand gesture recognition in uncontrolled environments

Yao, Yi January 2014 (has links)
For a long time, Human Computer Interaction has relied on mechanical devices that feed information into computers with low efficiency. With the recent developments in image processing and machine learning methods, the computer vision community is ready to develop the next generation of Human Computer Interaction methods, including Hand Gesture Recognition methods. A comprehensive Hand Gesture Recognition based, semantic-level Human Computer Interaction framework for uncontrolled environments is proposed in this thesis. The framework contains novel methods for Hand Posture Recognition, Hand Gesture Recognition and Hand Gesture Spotting.

The Hand Posture Recognition method in the proposed framework is capable of recognising predefined still hand postures against cluttered backgrounds. Texture features are used in conjunction with Adaptive Boosting to form a novel feature selection scheme, which can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel Hand Tracking method called Adaptive SURF Tracking is proposed in this thesis. Texture key points are used to track multiple hand candidates in the scene. This tracking method matches texture key points of hand candidates between adjacent frames to calculate the movement directions of hand candidates.

With the gesture trajectories provided by the Adaptive SURF Tracking method, a novel classifier called the Partition Matrix is introduced to perform gesture classification for uncontrolled environments with multiple hand candidates. The trajectories of all hand candidates, extracted from the original video under different frame rates, are used to analyse the movements of hand candidates. An alternative gesture classifier based on a Convolutional Neural Network is also proposed. The input images of the neural network are approximate trajectory images reconstructed from the tracking results of the Adaptive SURF Tracking method. For Hand Gesture Spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in continuously signed gesture videos. A Non-Sign Model is also proposed to simulate meaningless hand movements between the meaningful gestures.

The proposed framework performs well with unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions. Moreover, it is invariant to changing scales, speeds and locations of the gesture trajectories.
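The key-point matching idea behind this kind of tracking can be sketched as follows. The snippet uses OpenCV's freely available ORB detector as a stand-in for SURF, and estimates a single dominant motion vector per frame pair rather than tracking multiple hand candidates as the thesis does; all names and parameters are assumptions.

```python
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray):
    """Estimate a dominant motion vector between two grayscale frames by
    matching ORB keypoints (ORB stands in for SURF in this sketch).
    Returns None if too few matches are found."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 10:
        return None
    # Median displacement of matched keypoints approximates the motion.
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)            # (dx, dy) in pixels

# Usage sketch: feed consecutive grayscale frames from a video capture.
# cap = cv2.VideoCapture("gesture.mp4")
# ok, prev = cap.read(); prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# ok, curr = cap.read(); curr = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
# print(dominant_motion(prev, curr))
```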
28

Self organising map machine learning approach to pattern recognition for protein secondary structures and robotic limb control

Hall, Vincent Austin January 2014 (has links)
With every corner of science, engineering and business generating vast amounts of data, it is becoming increasingly important to be able to understand what these data mean and to make sensible decisions based on the findings. One tool that can assist with this aim is the self-organising map (SOM). SOMs are unsupervised Artificial Neural Networks (ANNs) that are used for pattern recognition and dimensionality reduction of datasets, and can give a topology-preserving visual representation of the data. For this project, SOMs were used for pattern recognition on circular dichroism (CD) and myoelectric signal (MES) data, among other applications.

To the first of these SOMs we gave the name SSNN, for Secondary Structure Neural Network, as it analyses CD spectra to find the structures of proteins. CD is a polarised UV light spectroscopy that is useful for estimating the structures (conformations) of chiral molecules in solution; in this work we report on its use with proteins and lipoproteins. The problem with using CD spectra is that they can be difficult to interpret, especially if quantitative results are required. We have improved the structure estimations compared with similar methodologies: the overall error across all structures was 0.2 for SELCON3, 0.3 for CDSSTR and 0.2 for K2d, but 0.1 for our methodology, SSNN.

Another difficult problem the world faces is that thousands more people every year have limb amputations or are born with non-fully-functioning limbs. Robotic limbs can help people with these afflictions, and while many are available, none give much dexterity or natural movement, or are easy to use. To help rectify the situation we adapted the SOM tool we developed, SSNN, to work as part of a software platform that is used to control robotic prostheses, calling it HASSANN (Hand Activation Signals, SOM Artificial Neural Network). The system works by performing pattern recognition on myoelectric signals, which are electrical signals from muscles. The software platform, BioPatRec, was developed by Max Ortiz-Catalan and his collaborators. The SOM HASSANN was written by the author, who also tested how well the software predicts which robotic limb movements are intended.
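The core of a self-organising map is a best-matching-unit search followed by a neighbourhood-weighted update. The minimal NumPy sketch below shows that training loop in the abstract; it is not SSNN or HASSANN code, and the map size, learning-rate and neighbourhood schedules are arbitrary assumptions.

```python
import numpy as np

def train_som(data, rows=10, cols=10, epochs=20, lr0=0.5, seed=0):
    """Train a small rectangular SOM on `data` of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(),
                          size=(rows, cols, data.shape[1]))
    # Grid coordinates of every unit, used by the neighbourhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    sigma0 = max(rows, cols) / 2.0
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * np.exp(-3.0 * t)          # decaying learning rate
            sigma = sigma0 * np.exp(-3.0 * t)    # shrinking neighbourhood
            # Best-matching unit: the unit whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the 2-D grid.
            d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Toy usage: fit a 10x10 map to 200 random 2-D samples; each sample can then
# be assigned to its best-matching unit for visualisation or classification.
som = train_som(np.random.default_rng(1).normal(size=(200, 2)))
```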
29

Scaffolding for social personalised adaptive e-learning

Shi, Lei January 2014 (has links)
This work aims to alleviate the weaknesses and pitfalls of the strong modern trend of e-learning by capitalising on theoretical and implementation advances that have been made in the fields of adaptive hypermedia, social computing, games research and motivation theories. Whilst both demand for and supply of e-learning are growing, especially with the rise of MOOCs, the problems that it faces remain to be addressed, notably isolation, depersonalisation and lack of individual navigation. This often leads to a poor learning experience.

This work explores an innovative method of combining, threading and balancing the amount of adaptation, social interaction, gamification and open learner modelling in e-learning techniques and technologies. As a starting point, a novel combination of classical adaptation based on user modelling, fine-grained social interaction features and a Facebook-like appearance is explored. This has been shown to ensure a high level of effectiveness, efficiency and satisfaction amongst learners when using the e-learning system. Contextual gamification strategies rooted in Self-Determination Theory (SDT) are then proposed, which have been shown to ensure that learners of the system adopt desirable learning behaviours and achieve pre-specified learning goals, thus providing a high level of motivation. Finally, a multifaceted open social learner modelling approach is proposed. This allows visualising both learners' performance and their contributions to a learning community, provides various modes of comparison, and is integrated and adapted to learning content. Evidence has shown that this can provide a high level of effectiveness, efficiency and satisfaction amongst learners.

Two innovative social personalised adaptive e-learning systems, Topolor and Topolor 2, are devised to enable the proposed approach to be tested in the real world. They have been used as online learning environments for undergraduate and postgraduate students at universities in Western and Eastern Europe as well as the Middle East, including the University of Warwick, UK, Jordan University, Jordan, and the Sarajevo School of Science and Technology, Bosnia and Herzegovina. Students' feedback has shown this approach to be very promising, suggesting further implementation of the systems and follow-up research. The worldwide use of Topolor has also promoted international collaborations.
30

Efficient and reliable data dissemination and convergecast in Wireless Sensor Networks

Saginbekov, Sain January 2014 (has links)
With the availability of cheap sensor nodes, it is now possible to use hundreds of nodes in a Wireless Sensor Network (WSN) application, and WSNs are used in a wide range of domains, including environmental, industrial, military, health-care and indoor applications. WSNs are composed of sensor nodes, also known as motes, that are small in size, usually battery powered, and have limited memory and computing capabilities. As opposed to other wireless networks of more powerful nodes such as laptops, cellular phones and PDAs, where communication can occur between any two nodes, in WSNs there are mainly two communication types: (i) broadcast, where a designated node, called a sink, disseminates data to all other nodes, and (ii) convergecast, where all nodes send their generated data to the sink.

After sensor nodes are deployed in an area of interest, they are usually unattended for a long time. Since motes are battery powered, energy conservation is of great importance. Furthermore, due to limited resources such as computing, memory and energy, harsh environmental conditions and buggy programs, wireless sensors may experience a number of different types of faults. Given the characteristics of sensor nodes and the environment they are deployed in, any WSN communication protocol and algorithm should be energy efficient and tolerant to faults. Several efficient communication protocols have been proposed so far. However, there are several aspects that have seen very little activity in the literature: (i) handling transient faults and (ii) dealing with two or more sinks. Therefore, in this thesis, we address some of these open issues, specifically fault tolerance in data dissemination and the development of an infrastructure for two sinks.

In this thesis, (i) we present two algorithms that, when added to fault-intolerant code dissemination protocols, make them resilient to faults that can corrupt values stored in memory and in messages; (ii) we propose a new algorithm that efficiently maintains code updates in WSNs, minimising the drawbacks of existing code update maintenance algorithms; and (iii) we propose an efficient data aggregation convergecast scheduling algorithm for wireless sensor networks with two sinks.
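As a toy illustration of building a convergecast infrastructure for two sinks (not the thesis's aggregation-scheduling algorithm; the topology and names are invented), the sketch below runs a multi-source breadth-first search from both sinks, so every node learns a parent towards, and its hop distance from, its nearest sink.

```python
from collections import deque

def two_sink_trees(adjacency, sink_a, sink_b):
    """Multi-source BFS from two sinks over an undirected graph given as
    {node: [neighbours]}. Returns {node: (assigned_sink, parent, hops)};
    ties between sinks are broken by whichever reaches the node first."""
    info = {sink_a: (sink_a, None, 0), sink_b: (sink_b, None, 0)}
    queue = deque([sink_a, sink_b])
    while queue:
        node = queue.popleft()
        sink, _, hops = info[node]
        for neighbour in adjacency[node]:
            if neighbour not in info:
                info[neighbour] = (sink, node, hops + 1)
                queue.append(neighbour)
    return info

# Hypothetical 8-node topology with sinks S1 and S2.
adjacency = {
    "S1": ["a", "b"], "S2": ["e", "f"],
    "a": ["S1", "c"], "b": ["S1", "c"],
    "c": ["a", "b", "d"], "d": ["c", "e"],
    "e": ["d", "S2"], "f": ["S2"],
}
trees = two_sink_trees(adjacency, "S1", "S2")
for node, (sink, parent, hops) in sorted(trees.items()):
    print(f"{node}: forwards to {sink} via {parent} ({hops} hops)")
```

Each node then forwards its data along its parent link, so the two resulting trees partition the network's convergecast traffic between the sinks; a real scheduling algorithm would additionally assign collision-free transmission slots along these trees.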
