• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world.

Safe and scalable parallel programming with session types

Ng, Chun Wang Nicholas January 2014
Parallel programming is a technique that coordinates and utilises multiple hardware resources simultaneously to improve overall computation performance. However, reasoning about the communication interactions between the resources is difficult. Moreover, scaling an application often leads to an increased number and complexity of interactions, hence we need a systematic way to ensure the correctness of the communication aspects of parallel programs. In this thesis, we take an interaction-centric view of parallel programming, and investigate applying and adapting the theory of Session Types, a formal typing discipline for structured interaction-based communication, to guarantee the absence of communication mismatches and deadlocks in concurrent systems. We focus on scalable, distributed parallel systems that use message-passing for communication. We explore programming language primitives, tools and frameworks to simplify parallel programming. First, we present the design and implementation of Session C, a programming toolchain for message-passing parallel programming. Session C can ensure deadlock freedom, communication safety and global progress through static type checking, and supports optimisations by refinements through session subtyping. Then we introduce Pabble, a protocol description language for designing parametric interaction protocols. The language can capture scalable interaction patterns found in parallel applications, and guarantees communication safety and deadlock freedom despite the undecidability of the underlying parameterised session type theory. Next, we demonstrate an application of Pabble in a workflow that combines Pabble protocols and computation kernel code describing the sequential computation behaviours to generate a Message-Passing Interface (MPI) parallel application. The framework guarantees, by construction, that generated code is free from communication errors and deadlocks.
Finally, we formalise an extension of binary session types and new language primitives for safe and efficient implementations of multiparty parallel applications in a binary server-client programming environment. Our exploration of session-based parallel programming shows that it is a feasible and practical approach to guaranteeing the correctness of the communication aspects of complex, interaction-based, scalable parallel programming.
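The core guarantee session types provide can be illustrated with a minimal, hypothetical sketch (the thesis's Session C and Pabble toolchains are not reproduced here): two endpoint protocols are compatible when every send pairs, in order, with a receive of the same payload type on the other side. The action notation below is invented for illustration.

```python
# Minimal illustration of session-type duality checking. An action is a
# string like '!int' (send an int) or '?bool' (receive a bool). This is
# an illustrative sketch, not the Session C implementation.

def dual(action):
    """Return the dual of a single action, e.g. '!int' <-> '?int'."""
    direction, payload = action[0], action[1:]
    return ('?' if direction == '!' else '!') + payload

def compatible(client, server):
    """Check that two endpoint protocols are exact duals of each other.
    Incompatible protocols would manifest at runtime as a communication
    mismatch (wrong payload type) or a deadlock (e.g. both sides
    blocked waiting to receive)."""
    if len(client) != len(server):
        return False
    return all(dual(c) == s for c, s in zip(client, server))

client = ['!int', '!int', '?bool']   # send two ints, await a bool
server = ['?int', '?int', '!bool']   # mirror image: receive, then reply
assert compatible(client, server)
assert not compatible(['!int', '?bool'], ['?int', '?bool'])  # both receive -> deadlock
```

Static checking of this kind is what lets Session C reject mismatched programs before they run, rather than debugging a hung MPI job.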

Dictionaries for fast and informative dynamic MRI acquisition

Caballero, Jose January 2015
Magnetic resonance (MR) imaging is an invaluable tool for medical research and diagnosis but suffers from inefficiencies. The speed of its acquisition mechanism, based on sequentially probing the interactions between nuclear spins and a changing magnetic field, is limited by atomic properties and scanner physics. Modern sampling techniques termed compressed sensing have nevertheless demonstrated how near-perfect reconstructions are possible from undersampled, accelerated acquisitions, showing promise for more efficient MR acquisition paradigms. At the same time, information extraction from MR images through image analysis implies a considerable dimensionality reduction, in which an image is processed for the extraction of a few clinically useful parameters. This signals an inefficient handling of information in the separated treatment of acquisition and analysis that could be tackled by joining these two essential stages of the imaging pipeline. In this thesis, we explore the use of adaptive sparse modelling for novel acquisition strategies of cardiac cine MR data. Conventional compressed sensing MR acquisition relies on fixed basis transforms for sparse modelling, which are only able to guarantee suboptimal sparse modelling. We introduce spatio-temporal dictionaries that are able to optimally adapt sparse modelling by absorbing salient features of cardiac cine data, and demonstrate how they can outperform sampling methods based on fixed basis transforms. Additionally, we extend the introduced framework to handle parallel data acquisition. Given the flexibility of the formulation, we show how it can be combined with a labelling model that provides a segmentation of the image as a by-product of the reconstruction, hence performing joint reconstruction and analysis.
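As a rough illustration of the sparse modelling that underpins this kind of reconstruction, the following sketch runs greedy matching pursuit over a toy dictionary in plain Python. The atoms and signal are invented for illustration; real spatio-temporal dictionaries are learned from cardiac cine patches and the sparse coding is posed over undersampled k-space data.

```python
# Sketch of greedy sparse coding (matching pursuit) against a small
# dictionary of unit-norm atoms. Illustrative only: it shows why an
# adaptive dictionary matters -- a signal is cheap to represent exactly
# when the dictionary contains atoms aligned with its salient features.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_nonzero):
    """Approximate `signal` as a combination of at most `n_nonzero` atoms."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit([3.0, 0.0, 4.0], atoms, n_nonzero=2)
assert coeffs == [3.0, 0.0, 4.0]          # two atoms represent it exactly
assert all(abs(r) < 1e-9 for r in residual)
```

With a fixed basis (as in conventional compressed sensing) the atoms are predetermined; dictionary learning instead adapts them to the data, which is the source of the improved sparsity the thesis exploits.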

Automated organ localisation in fetal Magnetic Resonance Imaging

Keraudren, Kevin January 2015
Fetal Magnetic Resonance Imaging (MRI) provides an invaluable diagnostic tool complementary to ultrasound due to its high resolution and tissue contrast. In order to accommodate fetal and maternal motion, MR images of the fetus are typically acquired as stacks of two-dimensional (2D) slices that freeze in-plane motion, but may form an inconsistent three-dimensional (3D) volume. Motion correction methods, which reconstruct a high-resolution 3D volume from such motion-corrupted stacks of 2D slices, have revolutionised fetal MRI, enabling detailed studies of fetal brain development. However, such motion correction and reconstruction procedures require a substantial amount of manual data preprocessing in order to isolate fetal tissues from the rest of the image. Besides the presence of motion artefacts, the main challenges when automating the processing of fetal MRI are the unpredictable position and orientation of the fetus, as well as the variability in anatomy due to fetal development. This thesis presents novel methods based on machine learning and prior knowledge of fetal development to automatically localise organs in fetal MRI in order to automate the preprocessing step of motion correction. This localisation can also be used to initialise a segmentation, or to orient images based on the fetal anatomy to facilitate clinical examination. The fetal brain is first localised independently of the orientation of the fetus, and then used as an anchor point to steer features used in the subsequent localisation of the heart, lungs and liver. The localisation results are used to segment fetal tissues in each 2D slice and this segmentation can be further refined throughout the motion correction procedure. The proposed method to segment the fetal brain is shown to perform as well as manual preprocessing. Preliminary results on a similar application to the motion correction of the fetal thorax are also presented.
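The anchor-point idea can be reduced to a very small sketch: once the brain is found, other organs are sought at learned displacements expressed in a brain-centred frame, which removes the unknown global position of the fetus from the search. The offset values below are invented for illustration; the thesis learns such spatial relationships from training data.

```python
# Toy sketch of anchor-based localisation: predict an organ's search
# centre as anchor + learned offset in a brain-centred frame.
# Offsets are hypothetical; in practice they are learned and depend on
# gestational age and the estimated fetal orientation.

def localise(anchor, learned_offset):
    """Predict an organ centre (in voxels) relative to the anchor."""
    return tuple(a + o for a, o in zip(anchor, learned_offset))

brain = (30, 40, 25)           # detected anchor point (voxels)
heart_offset = (0, -12, -8)    # hypothetical learned brain-to-heart offset
assert localise(brain, heart_offset) == (30, 28, 17)
```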

Embedding individual-based plankton ecosystem models in a finite element ocean model

Lange, Michael January 2014
Computational models of the ocean plankton ecosystem are traditionally based on simulating entire populations of microbes using sets of coupled differential equations. However, due to recent advances in high-performance computing, a new class of individual-based models (IBM) has come to the fore, which uses computational agents to model individual sub-populations of marine plankton. Although computationally more expensive, these agent-based models offer features that cannot be re-created using population-level dynamics, such as individual life cycles, intra-population variability and increased stability over parameter ranges. The main focus of this thesis is the implementation and verification of an embedded modelling framework for creating agent-based plankton ecology models in Fluidity-ICOM, a state-of-the-art ocean model that solves the Navier-Stokes equations on adaptive unstructured finite element meshes. Since Fluidity-ICOM provides an interface for creating population-based ecology models, a generic agent-based framework not only enables the integration of existing plankton IBMs with adaptive remeshing technology, but also allows individual and population-based components to be used within a single hybrid ecosystem. This thesis gives a full account of the implementation of such a framework, focusing in particular on the movement and tracking of agents in an unstructured finite element mesh and the coupling mechanism used to facilitate agent-mesh and agent-agent interactions. The correctness of the framework is verified using an existing agent-based ecosystem model with four trophic levels, which is shown to settle on a stationary annual attractor given a stable cycle of annual forcing.
A regular cycle of phytoplankton primary production and zooplankton reproduction is achieved using a purely agent-based implementation and a hybrid food chain version of the model, where the two top-level components of the ecosystem are modelled using Eulerian field equations. Finally, a standalone phytoplankton model is used to investigate the effects of vertical mesh adaptivity on the ecosystem in a three-dimensional mesh.
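The contrast between population-level and individual-based modelling can be sketched in a few lines: an IBM carries per-agent state, so variability within the population emerges naturally, whereas a single differential equation for total biomass averages it away. The growth law and constants below are invented for illustration; the thesis embeds far richer agents in Fluidity-ICOM's finite element meshes.

```python
# Sketch of an individual-based phytoplankton model: each agent has its
# own depth and biomass, and its growth depends on the light it actually
# receives. Illustrative only -- real agents carry full life-cycle state
# and are advected by the resolved flow field.

import math

def light(depth, surface_irradiance=100.0, attenuation=0.1):
    """Exponential decay of light with depth (Beer-Lambert law)."""
    return surface_irradiance * math.exp(-attenuation * depth)

class Agent:
    def __init__(self, depth, biomass=1.0):
        self.depth = depth
        self.biomass = biomass

    def step(self, dt=1.0, growth_per_light=0.001):
        # growth rate depends on this agent's own position in the column
        self.biomass *= 1.0 + growth_per_light * light(self.depth) * dt

agents = [Agent(depth=d) for d in (0.0, 10.0, 20.0)]
for _ in range(10):
    for a in agents:
        a.step()

biomasses = [a.biomass for a in agents]
# shallow agents receive more light and grow faster: intra-population
# variability that a single population-level mean would hide
assert biomasses[0] > biomasses[1] > biomasses[2]
```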

Productive and efficient computational science through domain-specific abstractions

Rathgeber, Florian January 2014
In an ideal world, scientific applications are computationally efficient, maintainable and composable, and allow scientists to work very productively. We argue that these goals are achievable for a specific application field by choosing suitable domain-specific abstractions that encapsulate domain knowledge with a high degree of expressiveness. This thesis demonstrates the design and composition of domain-specific abstractions by abstracting the stages a scientist goes through in formulating a problem of numerically solving a partial differential equation. Domain knowledge is used to transform this problem into a different, lower level representation and decompose it into parts which can be solved using existing tools. A system for the portable solution of partial differential equations using the finite element method on unstructured meshes is formulated, in which contributions from different scientific communities are composed to solve sophisticated problems. The concrete implementations of these domain-specific abstractions are Firedrake and PyOP2. Firedrake allows scientists to describe variational forms and discretisations for linear and non-linear finite element problems symbolically, in a notation very close to their mathematical models. PyOP2 abstracts the performance-portable parallel execution of local computations over the mesh on a range of hardware architectures, targeting multi-core CPUs, GPUs and accelerators. Thereby, a separation of concerns is achieved, in which Firedrake encapsulates domain knowledge about the finite element method separately from its efficient parallel execution in PyOP2, which in turn is completely agnostic to the higher abstraction layer. As a consequence of the composability of those abstractions, optimised implementations for different hardware architectures can be automatically generated without any changes to a single high-level source. Performance matches or exceeds what is realistically attainable by hand-written code.
Firedrake and PyOP2 are combined to form a tool chain that is demonstrated to be competitive with or faster than available alternatives on a wide range of different finite element problems.
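The separation of concerns described above can be caricatured in a few lines: the user writes a local kernel and names the set it iterates over, and the runtime decides how to execute it. The `par_loop` name and signature below are illustrative, not the real PyOP2 API, and the sequential loop stands in for PyOP2's generated multicore/GPU code.

```python
# Sketch of a PyOP2-style parallel loop abstraction: the kernel expresses
# only the local computation; scheduling is the runtime's concern.

def par_loop(kernel, iterset, *args):
    """Apply `kernel` to every element of `iterset`.
    A real implementation would partition and colour `iterset` to avoid
    write conflicts, then generate architecture-specific code."""
    for e in iterset:
        kernel(e, *args)

# local kernel: accumulate each triangle's area into a global total
def area_kernel(cell, coords, total):
    (x0, y0), (x1, y1), (x2, y2) = (coords[v] for v in cell)
    total[0] += abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0)}
cells = [(0, 1, 2), (1, 3, 2)]   # unit square split into two triangles
total = [0.0]
par_loop(area_kernel, cells, coords, total)
assert abs(total[0] - 1.0) < 1e-12
```

Because the kernel never mentions how the loop is run, the same source can in principle be retargeted to different architectures, which is the composability property the thesis exploits.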

Patch-based segmentation with spatial context for medical image analysis

Wang, Zehan January 2014
Accurate segmentations in medical imaging play a crucial role in many applications from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for automatic labelling of large datasets. However, these methods usually require the use of image registration and are dependent on the outcome of the registration. Any registration errors that occur are also propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment, whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis focus on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, this thesis proposes a novel kNN patch-based segmentation framework, which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the potential for patch-based segmentation to tolerate registration errors by redefining the "locality" for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated.
The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, demonstrating their potential with results competitive with state-of-the-art techniques.
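The patch-based labelling idea described above can be sketched very compactly: each target position is labelled by the closest atlas patch found within a spatial search window, so a modest misalignment between target and atlas is tolerated. The sketch below uses 1-D signals and a 1-nearest-neighbour rule purely for illustration; the thesis works on 3-D volumes with kNN and richer spatial context.

```python
# Sketch of patch-based label propagation with a spatial search window.
# Exact registration is not required: the window lets the best-matching
# patch be found even when the target structure is shifted.

def patch(signal, centre, radius):
    return tuple(signal[centre - radius:centre + radius + 1])

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def label_pixel(target, atlas_img, atlas_lab, i, radius=1, window=2):
    """Label target[i] with the label of the closest atlas patch found
    within +/- `window` of the corresponding atlas position."""
    tp = patch(target, i, radius)
    candidates = range(max(radius, i - window),
                       min(len(atlas_img) - radius, i + window + 1))
    best = min(candidates, key=lambda j: dist(tp, patch(atlas_img, j, radius)))
    return atlas_lab[best]

atlas_img = [0, 0, 0, 9, 9, 9, 0, 0]     # bright structure in the middle
atlas_lab = [0, 0, 0, 1, 1, 1, 0, 0]     # expert labels for the atlas
target    = [0, 0, 9, 9, 9, 0, 0, 0]     # same structure, shifted left
assert label_pixel(target, atlas_img, atlas_lab, 3) == 1
assert label_pixel(target, atlas_img, atlas_lab, 6) == 0
```

The `window` parameter is the "locality" being redefined: widening it trades registration tolerance against the risk of matching similar-looking patches from different structures, which is why the thesis adds spatial information to disambiguate them.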

Transparently improving regression testing using symbolic execution

Marinescu, Paul Dan January 2014
Software testing is an expensive and time-consuming process, often involving the manual creation of comprehensive regression test suites. Current testing methodologies, however, do not take full advantage of these tests. In this thesis, we present two techniques for amplifying the effect of existing test suites using a lightweight symbolic execution mechanism. We approach the problem from two complementary perspectives: first, we aim to execute the code that was never executed by the regression tests by combining the existing tests, symbolic execution and a set of heuristics based on program analysis. Second, we thoroughly check all sensitive operations (e.g., pointer dereferences) executed by the test suite for errors, and explore additional paths around sensitive operations. We have implemented these approaches in two tools - katch and zesti - which we have used to test a large body of open-source code. We have applied katch to all the patches written in a combined period of approximately six years for nineteen mature programs from the popular GNU diffutils, GNU binutils and GNU findutils application suites, which are shipped with virtually all UNIX-based distributions. Our results show that katch can automatically synthesise inputs that significantly increase the patch coverage achieved by the existing manual test suites, and find bugs at the moment they are introduced. We have applied zesti to three open-source code bases - GNU Coreutils, libdwarf and readelf - where it found 52 previously unknown bugs, many of which are out of reach of standard symbolic execution. Our technique works transparently to the tester, requiring no additional human effort or changes to source code or tests. Furthermore, we have conducted a systematic empirical study to examine how code and tests co-evolve in six popular open-source systems and assess the applicability of katch and zesti to other systems.
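The first technique can be caricatured as follows: start from an existing test input, find the patched branch it misses, and search for a nearby input that reaches it. Real tools of this kind collect symbolic path constraints and hand them to an SMT solver; in this toy sketch a brute-force search over a small neighbourhood of the existing input stands in for constraint solving, and the function and thresholds are invented.

```python
# Toy sketch of patch-coverage amplification: drive execution from an
# existing regression-test input towards a newly added branch.

def patched_function(x, trace):
    if x > 100:
        trace.append('patch')      # newly added branch we want covered
        return x - 100
    trace.append('old')
    return x

def cover_missing_branch(existing_input, target='patch', radius=200):
    """Search near an existing test input for one that reaches `target`.
    Stand-in for symbolic execution + constraint solving."""
    for delta in range(radius + 1):
        for candidate in (existing_input + delta, existing_input - delta):
            trace = []
            patched_function(candidate, trace)
            if target in trace:
                return candidate
    return None

existing_test_input = 5            # the manual suite only exercises 'old'
new_input = cover_missing_branch(existing_test_input)
assert new_input is not None and new_input > 100
```

The payoff mirrors the thesis's result: the synthesised input exercises the patch at the moment it is introduced, instead of waiting for a human to extend the test suite.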

Patch-based image analysis : application to segmentation and disease classification

Tong, Tong January 2014
In recent years, image analysis using local patches has received significant interest and has been shown to be highly effective in many medical imaging applications. In this work, we investigate machine learning methods which utilise local patches for different discriminative tasks. Specifically, this thesis focuses mainly on the applications of medical image segmentation in different imaging modalities, as well as the classification of Alzheimer's disease (AD) by using patch-based image analysis. The first contribution of the thesis is a novel approach for the segmentation of the hippocampus in brain MR images. This approach utilises local image patches and introduces dictionary learning techniques for supervised image segmentation. The proposed approach is evaluated on two different datasets, demonstrating competitive segmentation performance compared with state-of-the-art techniques. Furthermore, we extend the proposed approach for segmentation of multiple structures and evaluate it in the context of multi-organ segmentation of abdominal CT images. The second contribution of this thesis is a new classification framework for the detection of AD. This framework utilises local intensity patches as features and constructs patch-based graphs for classification. Images from the ADNI study are used for the evaluation of the proposed framework. The experimental results suggest that not only patch intensities but also the relationships among patches are related to the pathological changes of AD and provide discriminative information for classification.

Specification and performance optimisation of real-time trading strategies for betting exchange platforms

Tsirimpas, Polyvios January 2014
Since their introduction in June 2000, betting exchanges have revolutionised the nature and practice of betting. Betting exchange markets share some similarities with financial markets in terms of their operation. However, in stark contrast to financial markets, there are very few quantitative analysis tools available to support the development of automated betting exchange trading strategies. This thesis confronts challenges related to the generic specification, back-testing, optimisation and execution of parameterised automated trading strategies for betting exchange markets, and presents a related framework called SPORTSBET. The framework is built on an open-source event-driven platform called URBI, which, to date, has been mainly used to develop applications in the domains of robotics and artificial intelligence. SPORTSBET consists of three main components, each of which addresses a hitherto-unmet research challenge. The first is UBEL, a novel generic betting strategy specification language based on the event-driven scripting language of URBI, which can be used to specify parameterised betting strategies for markets related to a wide range of sports. The second is a complex event processor which is capable of synchronising multiple data streams and either replaying them on a historical basis with dynamic market reconstruction, in order to quantify strategy performance, or executing them in real time with either real or virtual capital. The final component is an optimisation platform whereby strategy parameters are automatically refined using a stochastic search heuristic in order to improve strategy performance. Specifically, the optimisation process involves stochastic initialisation, intermediate stochastic selection and acceptance of the candidate solution. To demonstrate the applicability and effectiveness of SPORTSBET, case studies are presented for betting strategies for a range of sports.
As illustrated in the case studies, the SPORTSBET optimisation platform implements Walk-Forward Analysis for the robust parameterisation of betting exchange trading strategies without overfitting. Nonetheless, the outcomes should be carefully interpreted, and numerous tests of a strategy are recommended.
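Event-driven back-testing of a parameterised strategy can be sketched in miniature: replay a stream of historical odds updates, apply the strategy's entry and exit rules, and accumulate the result. The strategy, prices and parameters below are entirely hypothetical; SPORTSBET's UBEL specifications and dynamic market reconstruction are far richer.

```python
# Sketch of an event-driven back-test: a toy back-to-lay strategy backs
# a selection when its price drifts above a threshold and lays it back
# at shorter odds, booking the price difference on the stake if the
# selection goes on to win. Illustrative only.

def backtest(price_stream, back_above, lay_below, stake=10.0):
    """Replay historical prices; return profit assuming the selection wins."""
    position = None        # back price of the currently open bet, if any
    profit = 0.0
    for price in price_stream:
        if position is None and price >= back_above:
            position = price                       # back at drifted odds
        elif position is not None and price <= lay_below:
            profit += stake * (position - price)   # lay back at shorter odds
            position = None
    return profit

prices = [2.0, 2.6, 2.4, 2.1, 1.8, 1.9]   # hypothetical odds updates
assert backtest(prices, back_above=2.5, lay_below=2.0) == 10.0 * (2.6 - 1.8)
```

The two thresholds are exactly the kind of strategy parameters that the optimisation platform would tune, and Walk-Forward Analysis would re-fit them on rolling in-sample windows and evaluate only on the subsequent out-of-sample data to guard against overfitting.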

Scalable multithreaded algorithms for mutable irregular data with application to anisotropic mesh adaptivity

Rokos, Georgios January 2014
Anisotropic mesh adaptation is a powerful way to directly minimise the computational cost of mesh based simulation. It is particularly important for multi-scale problems where the required number of floating-point operations can be reduced by orders of magnitude relative to more traditional static mesh approaches. Increasingly, finite element/volume codes are being optimised for modern multicore architectures. Inter-node parallelism for mesh adaptivity has been successfully implemented by a number of groups using domain decomposition methods. However, thread-level parallelism using programming models such as OpenMP is significantly more challenging because the underlying data structures are extensively modified during mesh adaptation and a greater degree of parallelism must be realised while keeping the code race-free. In this thesis we describe a new thread-parallel implementation of four anisotropic mesh adaptation algorithms, namely edge coarsening, element refinement, edge swapping and vertex smoothing. For each of the mesh optimisation phases we describe how safe parallel execution is guaranteed by processing workitems in batches of independent sets and using a deferred-operations strategy to update the mesh data structures in parallel without data contention. Scalable execution is further assisted by creating worklists using atomic operations, which provides a synchronisation-free alternative to reduction-based worklist algorithms. Additionally, we compare graph colouring methods for the creation of independent sets and present an improved version which can run up to 50% faster than existing techniques. Finally, we describe some early work on an interrupt-driven work-sharing for-loop scheduler which is shown to perform better than existing work-stealing schedulers. 
Combining all aforementioned novel techniques, which are generally applicable to other unordered irregular problems, we show that despite the complex nature of mesh adaptation and inherent load imbalances, we achieve a parallel efficiency of 60% on an 8-core Intel(R) Xeon(R) Sandy Bridge and 40% using 16 cores on a dual-socket Intel(R) Xeon(R) Sandy Bridge ccNUMA system.
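The independent-set idea at the heart of the race-free execution scheme can be sketched directly: greedily colour the adjacency graph of work-items, then process each colour class in parallel, since no two items of the same colour share data. This is a generic illustration; the thesis's improved colouring algorithm, deferred-operations strategy and atomic worklist creation are not reproduced here.

```python
# Sketch of greedy graph colouring for parallel mesh adaptation: vertices
# of the same colour form an independent set and can be smoothed (or
# coarsened, refined, swapped) concurrently without data contention.

def greedy_colouring(adjacency):
    """Assign each vertex the smallest colour unused by its neighbours."""
    colour = {}
    for v in sorted(adjacency):
        taken = {colour[u] for u in adjacency[v] if u in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

# adjacency of mesh vertices whose adaptation operations conflict
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colour = greedy_colouring(adjacency)

# same-coloured vertices are independent: safe to process concurrently
for v, nbrs in adjacency.items():
    assert all(colour[v] != colour[u] for u in nbrs)
```

Each colour class then becomes one batch of work-items; structural updates discovered during a batch are deferred and applied between batches, which is what keeps the shared mesh data structures race-free.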
