1

Molecular modelling of the Streptococcus pneumoniae serogroup 6 capsular polysaccharide antigens.

Mathai, Neann 01 January 2013
In this thesis, a systematic study of the structural characterization of the capsular polysaccharides of Streptococcus pneumoniae is conducted using Molecular Modelling methods. S. pneumoniae causes invasive pneumococcal disease (IPD), a leading cause of death in children under five, and the serotypes in group 6 are amongst the most common IPD-causing serotypes. We performed structural characterization of serogroup 6 to understand the structural relationships between serotypes 6A, 6B, 6C and 6D, in an attempt to explain the cross-protection seen within the group. The 6B saccharide was included in the early conjugate vaccine (PCV-7) and has been shown to elicit protection against 6B as well as to offer some cross-protection against 6A. 6A has since been included in later conjugate vaccines in the hope of eliciting stronger protection against 6A and 6C. Molecular Dynamics simulations were used to investigate the conformations of the oligosaccharides, with the aim of elucidating a conformational rationale for why small changes in carbohydrate primary structure result in variable efficacy. We began by examining the Potential of Mean Force (PMF) plots of the disaccharide subunits which make up the serogroup 6 oligosaccharides. The PMFs show the free energy profiles across the torsion-angle space of the disaccharides. This conformational information was then used to build the four oligosaccharides on which simulations were conducted. These simulations showed that the serotype pairs 6A/6C and 6B/6D have similar structures.
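The torsion-space free-energy surfaces described above can be illustrated with a short sketch. Assuming φ/ψ glycosidic torsion-angle time series extracted from an equilibrium MD trajectory, a PMF can be estimated by Boltzmann inversion of the sampled 2D histogram (a minimal illustration only — the thesis may have used enhanced-sampling methods, and the temperature here is an assumption):

```python
import numpy as np

KT = 0.593  # kcal/mol at ~298 K (assumed temperature)

def pmf_2d(phi, psi, bins=72):
    """Boltzmann-invert a 2D histogram of glycosidic torsions into a PMF."""
    H, phi_edges, psi_edges = np.histogram2d(
        phi, psi, bins=bins, range=[[-180, 180], [-180, 180]])
    P = H / H.sum()                    # normalised probability per bin
    with np.errstate(divide="ignore"):
        F = -KT * np.log(P)            # unsampled bins become +inf
    F -= F[np.isfinite(F)].min()       # set the global minimum to zero
    return F, phi_edges, psi_edges
```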
2

Force Field Comparison through Computational Analysis of Capsular Polysaccharides of Streptococcus pneumoniae Serotypes 19A and F

Gordon, Marc 01 August 2014
Modern Molecular Dynamics force fields, such as the CHARMM36 and GLYCAM06 carbohydrate force fields, are parametrised to reproduce the behaviour of specific molecules under specific conditions, in order to predict the behaviour of similar molecular systems for which there is often no experimental data. Coupled with the sheer number available, this makes choosing the appropriate force field a formidable task; for this reason it is important that modern force fields be regularly compared. Streptococcus pneumoniae is a cause of invasive pneumococcal disease (IPD), such as pneumonia and meningitis, in children under five. While there are over 90 pneumococcal serotypes, only a handful of these are responsible for disease. Immunisation with the conjugate vaccine PCV7 has markedly decreased invasive pneumococcal disease. Following PCV7 immunisation, however, incidences of non-vaccine serotypes, especially serotype 19A, have increased. Serotype 19F's capsular polysaccharide differs from 19A's at a single linkage position: where 19A possesses an α-D-Glcp-(1→3)-α-L-Rhap (G13R) linkage, 19F possesses an α-D-Glcp-(1→2)-α-L-Rhap (G12R) linkage. For this reason it was thought that a 19F conjugate would cross-protect against 19A; unfortunately, PCV7 vaccination appears to have been largely ineffective against 19A disease. The lack of conformational information for the G12R and G13R disaccharides provided a good opportunity to compare the CHARMM and GLYCAM force fields, and the dynamics of both disaccharides were investigated under each. While we did identify some discrepancies, overall the force fields agreed in predicting a more flexible G12R than the more restricted G13R. While it is possible that these differences account for the lack of 19F-to-19A cross-protection, further research is required.
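As a sketch of the kind of flexibility comparison described, circular statistics over a glycosidic torsion sampled under each force field give a simple rigidity measure (illustrative only; the thesis's actual analysis may have differed):

```python
import numpy as np

def circular_stats(angles_deg):
    """Mean direction and circular variance of a torsion-angle series.
    Variance near 0 means a rigid linkage; near 1, a highly flexible one."""
    z = np.exp(1j * np.radians(angles_deg)).mean()  # mean resultant vector
    return np.degrees(np.angle(z)), 1.0 - np.abs(z)

# Hypothetical usage with torsion series from two trajectories:
# mean_ch, var_ch = circular_stats(phi_g12r_charmm)
# mean_gl, var_gl = circular_stats(phi_g12r_glycam)
```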
3

Fast Galactic Structure Finding using Graphics Processing Units

Wood, Daniel 01 June 2014
Cosmological simulations are used by astronomers to investigate large-scale structure formation and galaxy evolution. Structure finding, that is, the discovery of gravitationally-bound objects such as dark matter halos, is a crucial step in many such simulations. During recent years, advancing computational capacity has led to halo-finders needing to manage increasingly large simulations. As a result, many multi-core solutions have arisen in an attempt to process these simulations more efficiently. However, a many-core approach to the problem using graphics processing units (GPUs) appears largely unexplored. Since these simulations are inherently n-body problems, they contain a high degree of parallelism, which makes them very well suited to a GPU architecture; it therefore makes sense to determine the potential for further research into halo-finding algorithms on a GPU. We present a number of modified algorithms for accelerating the identification of halos and sub-structures using entry-level graphics hardware. The algorithms are based on an adaptive hierarchical refinement of the friends-of-friends (FoF) method using six phase-space dimensions, which allows for robust tracking of sub-structures. These methods are highly amenable to parallel implementation and run on GPUs. We implemented four separate systems: two on GPUs and two on CPUs. The first system for both CPU and GPU was implemented as a proof-of-concept exercise to familiarise us with the problem; these utilised minimum spanning trees (MSTs) and brute-force methods. Our second implementation, for the CPU and GPU, capitalised on knowledge gained from the proof-of-concept applications, leading us to use kd-trees to solve the problem efficiently. The CPU implementations were intended to serve as benchmarks for our GPU applications. In order to verify the efficacy of the implemented systems, we applied our halo-finders to cosmological simulations of varying size and compared the results obtained to those given by a widely used commercial FoF halo-finder. To conduct a fair comparison, the CPU benchmarks were implemented using well-known libraries optimised for these calculations. The best-performing implementation, with minimal optimisation, used kd-trees on the GPU. This achieved a 12x speed-up over our CPU implementation, which used similar methods. The same GPU implementation was compared with a current, widely used commercial FoF halo-finder, and achieved a 2x speed-up for up to 5 million particles. Results suggest a scalable solution, where speed-up increases with the size of the dataset used. We conclude that there is great potential for future research into an optimised kd-tree implementation on graphics hardware for the problem of structure finding in cosmological simulations.
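As a rough illustration of the underlying clustering step, a plain 3D CPU version of FoF can be written with a kd-tree plus union-find: particles closer than the linking length are "friends", and halos are the connected components of the friendship graph. (The thesis extends this idea hierarchically to six phase-space dimensions on the GPU; the sketch below is not that implementation.)

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length):
    """Label each particle with the id of its FoF halo."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=linking_length)   # all 'friend' pairs

    # Union-find over the friendship graph.
    parent = np.arange(len(positions))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
    return np.array([find(i) for i in range(len(positions))])
```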
4

Addition of flexible linkers to GPU-accelerated coarse-grained simulations of protein-protein docking

Pinska, Adrianna 01 January 2019
Multiprotein complexes are responsible for many vital cellular functions, and understanding their formation has many applications in medical research. Computer simulation has become a valuable tool in the study of biochemical processes, but simulation of large molecular structures such as proteins on a useful scale is computationally expensive. A compromise must be made between the level of detail at which a simulation can be performed, the size of the structures which can be modelled and the time scale of the simulation. Techniques which can be used to reduce the cost of such simulations include the use of coarse-grained models and parallelisation of the code. Parallelisation has recently been made more accessible by the advent of Graphics Processing Units (GPUs), a consumer technology which has become an affordable alternative to more specialised parallel hardware. We extend an existing implementation of a Monte Carlo protein-protein docking simulation using the Kim and Hummer coarse-grained protein model [1] on a heterogeneous GPU-CPU architecture [2]. This implementation has achieved a significant speed-up over previous serial implementations as a result of the efficient parallelisation of its expensive non-bonded potential energy calculation on the GPU. Our contribution is the addition of the optional capability for modelling flexible linkers between rigid domains of a single protein. We implement additional Monte Carlo mutations to allow for movement of residues within linkers, and for movement of domains connected by a linker with respect to each other. We also add potential terms for pseudo-bonds, pseudo-angles and pseudo-torsions between residues to the potential calculation, and include additional residue pairs in the non-bonded potential sum. Our flexible linker code has been tested, validated and benchmarked. We find that the implementation is correct, and that the addition of the linkers does not significantly impact the performance of the simulation. This modification may be used to enable fast simulation of the interaction between component proteins in a multiprotein complex, in configurations which are constrained to preserve particular linkages between the proteins. We demonstrate this utility with a series of simulations of diubiquitin chains, comparing the structure of chains formed through all known linkages between two ubiquitin monomers. We find reasonable agreement between our simulated structures and experimental data on the characteristics of diubiquitin chains in solution.
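One of the added potential terms can be sketched simply. Assuming linker residues are represented as single beads at Cα positions, a harmonic pseudo-bond between consecutive residues looks like the following (the force constant and rest length are illustrative assumptions, not the values used in the thesis):

```python
import numpy as np

K_BOND = 378.0  # kcal/mol/A^2 -- assumed force constant
R0 = 3.81       # A -- canonical Calpha-Calpha virtual bond length

def pseudo_bond_energy(coords):
    """Harmonic pseudo-bond energy over consecutive linker beads.
    coords: (n_residues, 3) array of Calpha positions."""
    lengths = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    return 0.5 * K_BOND * np.sum((lengths - R0) ** 2)
```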
5

Accelerated cooperative co-evolution on multi-core architectures

Moyo, Edmore 01 February 2019
The Cooperative Co-Evolution model has been used in Evolutionary Computation to optimise the training of artificial neural networks (ANNs). This architecture has proven to be a useful extension to domains such as Neuro-Evolution (NE), the training of ANNs using concepts of natural evolution. However, the demand for real-time systems and the ability to solve more complex tasks has prompted a further need to optimise these Cooperative Co-Evolution methods. Cooperative Co-Evolution methods consist of a number of phases; however, the evaluation phase remains the most compute-intensive, for some complex tasks taking as long as weeks to complete. This study uses NE as a test case: we design a parallel Cooperative Co-Evolution processing framework and implement optimised serial and parallel versions using the Golang (Go) programming language. Go is a multi-core programming language with first-class concurrency constructs, channels and goroutines, that make it well suited to parallel programming. Our study focuses on Enforced Subpopulations (ESP) for single-agent systems and Multi-Agent ESP for multi-agent systems. We evaluate the parallel versions on the benchmark tasks of double pole balancing and prey-capture, for single- and multi-agent systems respectively, at increasing levels of task complexity. We observe a maximum speed-up of 20x for the parallel Multi-Agent ESP implementation over our single-core optimised version in the prey-capture task, and a maximum speed-up of 16x for ESP in the harder version of the double pole balancing task. We also observe linear speed-ups for the difficult versions of the tasks over a certain range of cores, indicating that the Go implementations are efficient and that the parallel speed-ups are better for more complex tasks. We find that, for complex tasks, Cooperative Co-Evolution Neuro-Evolution (CCNE) methods are amenable to multi-core acceleration, which provides a basis for the study of even more complex Cooperative Co-Evolution methods in a wider range of domains.
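The parallelism being exploited is the independence of fitness evaluations: each candidate network can be scored in isolation. A sketch of that evaluation phase (shown here in Python rather than the thesis's Go; `run_trial` is a hypothetical task-specific scoring function):

```python
from multiprocessing import Pool

def evaluate(network):
    """Score one assembled network on the task, e.g. a pole-balancing
    trial. `network.run_trial()` is a hypothetical stand-in."""
    return network.run_trial()

def evaluate_population(networks, workers=8):
    """The evaluation phase is embarrassingly parallel: fan the candidate
    networks out across worker processes and collect their fitnesses."""
    with Pool(workers) as pool:
        return pool.map(evaluate, networks)
```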
6

Force-extension of the Amylose Polysaccharide

van den Berg, Rudolf 01 January 2010
Atomic Force Microscopy (AFM) single-molecule stretching experiments have been used in a number of studies to characterise the elasticity of single polysaccharide molecules. Steered molecular dynamics (SMD) simulations can reproduce the force-extension behaviour of polysaccharides while allowing for investigation of the molecular mechanisms behind the macroscopic behaviour. Various stretching experiments on single amylose molecules, using AFM combined with SMD simulations, have shown that the molecular elasticity in saccharides is a function of both rotational motion about the glycosidic bonds and the flexibility of individual sugar rings. This study investigates the molecular mechanisms that determine the elastic properties exhibited by amylose when subjected to deformation, using constant-force SMD simulations. Amylose is a linear polysaccharide of glucose linked mainly by α-(1→4) glycosidic bonds. The elastic properties of amylose are explored by investigating the effect of both stretching speed and strand length on the force-extension profile. On the basis of this work, we confirm that the elastic behaviour of amylose is governed by the mechanics of the pyranose rings and their force-induced conformational transitions. The molecular mechanism can be explained by a combination of syn and anti conformations of the dihedral angles and chair-to-boat transitional changes. Almost half of the chair-to-boat transitions of the pyranose rings occur in quick succession in the first part of the force-extension profile (cooperatively), and the rest follow later (anti-cooperatively) at higher forces, with a much greater interval between them. At low forces, the stretching profile is characterised by the transition of the dihedral angles to the anti conformation, with low elasticities measured for all chain lengths. Chair-to-boat transitions of the pyranose rings of the shorter chains only occurred anti-cooperatively at high stretching forces, whereas much lower forces were recorded for the same conformational change in the longer chains. For the shorter chains, most of these conversions produced the characteristic "shoulder" in the amylose stretching curve. Faster ramping rates were found to increase the force required to reach a particular extension of an amylose fragment. The transitions were similar in shape but occurred at lower forces, confirming that decreasing the ramping rate lowers the expected force. The mechanism was also essentially the same, with very little change between the simulations. Simulations performed with slower ramping rates were found to be adequate for reproduction of the experimental curve.
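The loading-rate dependence reported here is what simple force-activated kinetics predicts. As an illustration only (the Bell–Evans model is a standard textbook description, not the thesis's analysis, and all rate and barrier parameters below are assumed), the most probable transition force under a constant loading rate shows why faster ramps record higher forces:

```python
import numpy as np

KT = 4.11e-21   # J, thermal energy at ~298 K
K0 = 0.1        # 1/s, zero-force chair-to-boat rate (assumed)
DX = 0.5e-10    # m, distance to the transition state (assumed)

def most_probable_force(loading_rate):
    """Bell-Evans peak force F* = (kT/dx) * ln(r*dx / (k0*kT)) for a
    loading rate r = dF/dt (valid once the logarithm is positive)."""
    return (KT / DX) * np.log(loading_rate * DX / (K0 * KT))

for r in (1e-10, 1e-9, 1e-8):  # N/s
    print(f"loading rate {r:.0e} N/s -> F* ~ {most_probable_force(r)*1e12:.0f} pN")
```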
7

Graphics Processing Unit Accelerated Coarse-Grained Protein-Protein Docking

Tunbridge, Ian 01 January 2011
Graphics processing unit (GPU) architectures are increasingly used for general-purpose computing, providing the means to migrate algorithms from the SISD paradigm, synonymous with CPU architectures, to the SIMD paradigm. Generally programmable commodity multi-core hardware can result in significant speed-ups for migrated codes. Because of their computational complexity, molecular simulations in particular stand to benefit from GPU acceleration. Coarse-grained molecular models provide reduced complexity when compared to the traditional, computationally expensive, all-atom models. However, while coarse-grained models are much less computationally expensive than the all-atom approach, the pairwise energy calculations required at each iteration of the algorithm remain a computational bottleneck for a serial implementation. In this work, we describe a GPU implementation of the Kim-Hummer coarse-grained model for protein docking simulations, using a Replica Exchange Monte Carlo (REMC) method. Our highly parallel implementation vastly increases the size and time scales accessible to molecular simulation. We describe in detail the complex process of migrating the algorithm to a GPU, as well as the effect of various GPU approaches and optimisations on algorithm speed-up. Our benchmarking and profiling show that the GPU implementation scales very favourably compared to a CPU implementation. Small reference simulations benefit from a modest speed-up of between 4 and 10 times. However, large simulations, containing many thousands of residues, benefit from asynchronous GPU acceleration to a far greater degree and exhibit speed-ups of up to 1400 times. We demonstrate the utility of our system on some model problems. We investigate the effects of macromolecular crowding, using a repulsive crowder model, and find our results to agree with those predicted by scaled particle theory. We also perform initial studies into the simulation of viral capsid assembly, demonstrating the crude assembly of capsid pieces into a small fragment. This is the first implementation of REMC docking on a GPU, and the resulting speed-ups alter the tractability of large-scale simulations: simulations that would otherwise require months or years can be performed in days or weeks using a GPU.
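The bottleneck being offloaded is an O(N²) sum over residue pairs, where each pair's contribution is independent and can map onto its own GPU thread. A serial sketch of the structure of that sum (a Lennard-Jones term stands in for the Kim-Hummer contact potential, purely for illustration; eps and sigma are placeholder values):

```python
import numpy as np

def pairwise_energy(coords, eps=0.2, sigma=4.7):
    """O(N^2) non-bonded sum over unique residue pairs. Each (i, j) term
    is independent -- exactly the structure a GPU evaluates in parallel.
    coords: (N, 3) array of bead positions."""
    diff = coords[:, None, :] - coords[None, :, :]   # (N, N, 3) displacements
    r = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(coords), k=1)         # unique pairs only
    sr6 = (sigma / r[i, j]) ** 6
    return np.sum(4.0 * eps * (sr6 ** 2 - sr6))
```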
8

GPU-based Acceleration of Radio Interferometry Point Source Visibility Simulations in the MeqTrees Framework

Baxter, Richard 01 January 2013
Modern radio interferometer arrays are powerful tools for obtaining high-resolution images of low-frequency electromagnetic radiation signals from deep space. While single-dish radio telescopes convert the electromagnetic radiation directly into an image of the sky (or sky intensity map), interferometers convert the interference patterns between dishes in the array into samples of the Fourier plane (UV-data or visibilities). A subsequent Fourier transform of the visibilities yields the image of the sky. Conversely, a sky intensity map comprising a collection of point sources can be subjected to an inverse Fourier transform to simulate the corresponding Point Source Visibilities (PSV). Such simulated visibilities are important for testing models of external factors that affect the accuracy of observed data, such as radio frequency interference and interaction with the ionosphere. MeqTrees is a widely used radio interferometry calibration and simulation software package that contains a Point Source Visibility module. Unfortunately, calculation of visibilities is computationally intensive: it requires application of the same Fourier equation to many point sources across multiple frequency bands and time slots. There is great potential for this module to be accelerated by the highly parallel Single-Instruction-Multiple-Data (SIMD) architectures in modern commodity Graphics Processing Units (GPUs). With many traditional high-performance computing techniques requiring high entry and maintenance costs, GPUs have proven to be a cost-effective and high-performance parallelisation tool for SIMD problems such as PSV simulations. This thesis presents a GPU/CUDA implementation of the Point Source Visibility calculation within the existing MeqTrees framework. For a large number of sources, this implementation achieves an 18x speed-up over the existing CPU module. With modifications to the MeqTrees memory management system to reduce overheads by incorporating GPU memory operations, speed-ups of 25x are theoretically achievable. Ignoring all serial overheads, and considering only the parallelisable sections of code, speed-ups reach up to 120x.
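The "same Fourier equation applied to many sources and samples" structure can be made concrete. For point sources with fluxes S_k at direction cosines (l_k, m_k), each visibility sample is V(u, v) = Σ_k S_k · exp(−2πi(u·l_k + v·m_k)). A minimal NumPy sketch (not the MeqTrees or CUDA code, and ignoring the frequency/time axes and the w-term):

```python
import numpy as np

def point_source_visibilities(uv, flux, lm):
    """Evaluate V(u, v) = sum_k S_k * exp(-2j*pi*(u*l_k + v*m_k)) for every
    (u, v) sample. The (sample, source) product structure is what maps onto
    one GPU thread per element in a CUDA version."""
    phase = -2j * np.pi * (uv @ lm.T)   # (n_samples, n_sources) phase terms
    return np.exp(phase) @ flux         # sum over sources -> (n_samples,)

# Hypothetical toy usage: 3 sources, 1000 uv samples.
rng = np.random.default_rng(0)
uv = rng.uniform(-500.0, 500.0, (1000, 2))   # baselines in wavelengths
lm = rng.uniform(-0.01, 0.01, (3, 2))        # source direction cosines
flux = np.array([1.0, 0.5, 0.2])             # source fluxes (Jy)
vis = point_source_visibilities(uv, flux, lm)
```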
9

Lattice Boltzmann Liquid Simulations on Graphics Hardware

Clough, Duncan 01 June 2013
Fluid simulation is widely used in the visual effects industry. The high level of detail required to produce realistic visual effects requires significant computation. Usually, expensive computer clusters are used in order to reduce the time required. However, general purpose Graphics Processing Unit (GPU) computing has potential as a relatively inexpensive way to reduce these simulation times. In recent years, GPUs have been used to achieve enormous speedups via their massively parallel architectures. Within the field of fluid simulation, the Lattice Boltzmann Method (LBM) stands out as a candidate for GPU execution because its grid-based structure is a natural fit for GPU parallelism. This thesis describes the design and implementation of a GPU-based free-surface LBM fluid simulation. Broadly, our approach is to ensure that the steps that perform most of the work in the LBM (the stream and collide steps) make efficient use of GPU resources. We achieve this by removing complexity from the core stream and collide steps and handling interactions with obstacles and tracking of the fluid interface in separate GPU kernels. To determine the efficiency of our design, we perform separate, detailed analyses of the performance of the kernels associated with the stream and collide steps of the LBM. We demonstrate that these kernels make efficient use of GPU resources and achieve speedups of 29.6x and 223.7x, respectively. Our analysis of the overall performance of all kernels shows that significant time is spent performing obstacle adjustment and interface movement as a result of limitations associated with GPU memory accesses. Lastly, we compare our GPU LBM implementation with a single-core CPU LBM implementation. Our results show speedups of up to 81.6x with no significant differences in output from the simulations on both platforms. We conclude that order-of-magnitude speedups are possible using GPUs to perform free-surface LBM fluid simulations, and that GPUs can, therefore, significantly reduce the cost of performing high-detail fluid simulations for visual effects.
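To make the stream and collide steps concrete, here is a minimal D2Q9 lattice-Boltzmann sketch in NumPy with periodic boundaries and BGK collision — a toy CPU illustration of the two kernels the thesis optimises, not its free-surface GPU implementation (the relaxation time tau is an assumed value):

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their quadrature weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def stream(f):
    """Streaming: shift each population f_i one cell along its velocity c_i."""
    return np.stack([np.roll(f[i], tuple(C[i]), axis=(0, 1)) for i in range(9)])

def collide(f, tau=0.6):
    """BGK collision: relax each population toward local equilibrium."""
    rho = f.sum(axis=0)                               # macroscopic density
    u = np.tensordot(C, f, axes=([0], [0])) / rho     # velocity, (2, nx, ny)
    cu = np.tensordot(C, u, axes=([1], [0]))          # c_i . u, (9, nx, ny)
    usq = (u ** 2).sum(axis=0)
    feq = W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return f - (f - feq) / tau

# One time step on a 64x64 grid, starting from rest (equilibrium at u = 0):
f = np.ones((9, 64, 64)) * W[:, None, None]
f = collide(stream(f))
```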
10

Computer-aided Timing Training System for Musicians

Manchip, David 01 November 2011
Traditionally, musicians make use of a metronome for timing training. A typical metronome, whether hardware or software emulation, provides the musician with a regular, metrical click to use as a temporal guide. The musician synchronises his or her actions to the metronome click, thereby producing music that is in time; with regular usage, a musician's sense of time will gradually improve. To investigate the potential benefits offered by computer-assisted instruction, an Alternate Timing Training System was designed and a prototype software implementation developed. The system employed alternative training methods and exercises beyond those offered by a standard metronome. An experiment was conducted with a sample of musicians, attempting to measure and compare improvements in timing accuracy using a standard metronome and the Alternate Timing Training System. The software was also made available for public download and evaluated by a number of musicians who subsequently completed an online survey. A number of limitations were identified in the experiment, including too short a training period, too small a sample size, and subjects who already had a highly developed sense of time. Whilst the results of the experiment were inconclusive, analysis of the survey results indicated a significant preference for the Alternate Timing Training System over a standard metronome as an effective means of timing training.
