1

Addition of flexible linkers to GPU-accelerated coarse-grained simulations of protein-protein docking

Pinska, Adrianna 01 January 2019 (has links)
Multiprotein complexes are responsible for many vital cellular functions, and understanding their formation has many applications in medical research. Computer simulation has become a valuable tool in the study of biochemical processes, but simulation of large molecular structures such as proteins on a useful scale is computationally expensive. A compromise must be made between the level of detail at which a simulation can be performed, the size of the structures which can be modelled and the time scale of the simulation. Techniques which can be used to reduce the cost of such simulations include the use of coarse-grained models and parallelisation of the code. Parallelisation has recently been made more accessible by the advent of Graphics Processing Units (GPUs), a consumer technology which has become an affordable alternative to more specialised parallel hardware. We extend an existing implementation of a Monte Carlo protein-protein docking simulation using the Kim and Hummer coarse-grained protein model [1] on a heterogeneous GPU-CPU architecture [2]. This implementation has achieved a significant speed-up over previous serial implementations as a result of the efficient parallelisation of its expensive non-bonded potential energy calculation on the GPU. Our contribution is the addition of the optional capability for modelling flexible linkers between rigid domains of a single protein. We implement additional Monte Carlo mutations to allow for movement of residues within linkers, and for movement of domains connected by a linker with respect to each other. We also add potential terms for pseudo-bonds, pseudo-angles and pseudo-torsions between residues to the potential calculation, and include additional residue pairs in the non-bonded potential sum. Our flexible linker code has been tested, validated and benchmarked. We find that the implementation is correct, and that the addition of the linkers does not significantly impact the performance of the simulation. 
This modification may be used to enable fast simulation of the interaction between component proteins in a multiprotein complex, in configurations which are constrained to preserve particular linkages between the proteins. We demonstrate this utility with a series of simulations of diubiquitin chains, comparing the structure of chains formed through all known linkages between two ubiquitin monomers. We find reasonable agreement between our simulated structures and experimental data on the characteristics of diubiquitin chains in solution.
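The Metropolis acceptance rule behind the linker moves described above can be illustrated with a minimal sketch. Everything here is a hypothetical toy (a 2-D chain with only harmonic pseudo-bond terms); the thesis's actual implementation evaluates the full Kim and Hummer potential on the GPU and adds further move types for whole domains:

```python
import math
import random

random.seed(42)

K_BOND = 10.0   # pseudo-bond spring constant (arbitrary units; assumed)
R0 = 3.8        # ideal residue spacing, roughly a C-alpha distance in angstroms
KT = 1.0        # thermal energy

def bond_energy(chain):
    """Sum of harmonic pseudo-bond terms between consecutive residues."""
    e = 0.0
    for (x1, y1), (x2, y2) in zip(chain, chain[1:]):
        r = math.hypot(x2 - x1, y2 - y1)
        e += 0.5 * K_BOND * (r - R0) ** 2
    return e

def mc_step(chain):
    """Displace one random linker residue; accept via the Metropolis test."""
    i = random.randrange(len(chain))
    old = chain[i]
    e_old = bond_energy(chain)
    chain[i] = (old[0] + random.uniform(-0.5, 0.5),
                old[1] + random.uniform(-0.5, 0.5))
    e_new = bond_energy(chain)
    if e_new > e_old and random.random() >= math.exp(-(e_new - e_old) / KT):
        chain[i] = old          # reject: restore the previous position
        return False
    return True                 # accept

# A straight five-residue "linker" at the ideal spacing.
linker = [(i * R0, 0.0) for i in range(5)]
accepted = sum(mc_step(linker) for _ in range(1000))
```

The real simulation applies the same acceptance rule, but the energy difference includes pseudo-angle, pseudo-torsion and non-bonded terms, and recomputing the full energy each step (as this toy does) is exactly the cost the GPU parallelisation addresses.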
2

RFI Monitoring for the MeerKAT Radio Telescope

Schollar, Christopher 01 January 2015 (has links)
South Africa is currently building MeerKAT, a 64-dish radio telescope array, as a precursor for the proposed Square Kilometre Array (SKA). Both telescopes will be located at a remote site in the Karoo with a low level of Radio Frequency Interference (RFI). It is important to maintain a low level of RFI to ensure that MeerKAT has an unobstructed view of the universe across its bandwidth. The only way to effectively manage the environment is with a record of RFI around the telescope. The RFI management team on the MeerKAT site has multiple tools for monitoring RFI. There is a 7-dish radio telescope array called KAT7 which is used for bi-weekly RFI scans on the horizon. The team has two RFI trailers which provide a mobile spectrum and transient measurement system. They also have commercial handheld spectrum analysers. Most of these tools are only used sporadically during RFI measurement campaigns. None of the tools provided a continuous record of the environment and none of them performed automatic RFI detection. Here we design and implement an automatic, continuous RFI monitoring solution for MeerKAT. The monitor consists of an auxiliary antenna on site which continuously captures and stores radio spectra. The statistics of the spectra describe the radio frequency environment and identify potential RFI sources. All of the stored RFI data is accessible over the web. Users can view the data using interactive visualisations or download the raw data. The monitor thus provides a continuous record of the RF environment, automatically detects RFI and makes this information easily accessible. This RFI monitor functioned successfully for over a year with minimal human intervention. The monitor assisted RFI management on site during RFI campaigns. The data has proved to be accurate, the RFI detection algorithm has been shown to be effective, and the web visualisations have been tested by MeerKAT engineers and astronomers and proven to be useful.
The monitor represents a clear improvement over previous monitoring solutions used by MeerKAT and is an effective site management tool.
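In its simplest form, detection of narrow-band RFI against the statistics of captured spectra can be sketched as a robust threshold test. This is a generic median/MAD flagger offered as an illustration, not the monitor's actual algorithm:

```python
import statistics

def detect_rfi(spectrum, n_sigma=5.0):
    """Flag channels whose power deviates from the band median by more
    than n_sigma robust standard deviations (median absolute deviation).
    A one-shot stand-in for a production detector."""
    med = statistics.median(spectrum)
    mad = statistics.median(abs(p - med) for p in spectrum)
    robust_sigma = 1.4826 * mad   # MAD-to-sigma factor for Gaussian noise
    return [abs(p - med) > n_sigma * robust_sigma for p in spectrum]

# A quiet band with one strong narrow-band interferer in channel 3.
spectrum = [1.0, 1.1, 0.9, 50.0, 1.05, 0.95, 1.0, 1.02]
flags = detect_rfi(spectrum)
```

A production system would accumulate statistics over both frequency and time before flagging; the sketch only shows the shape of the idea, using the median so that the interferer itself does not inflate the noise estimate.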
3

Force-extension of the Amylose Polysaccharide

van den Berg, Rudolf 01 January 2010 (has links)
Atomic Force Microscopy (AFM) single-molecule stretching experiments have been used in a number of studies to characterise the elasticity of single polysaccharide molecules. Steered molecular dynamics (SMD) simulations can reproduce the force-extension behaviour of polysaccharides, while allowing for investigation of the molecular mechanisms behind the macroscopic behaviour. Various stretching experiments on single amylose molecules, using AFM combined with SMD simulations, have shown that the molecular elasticity in saccharides is a function of both rotational motion about the glycosidic bonds and the flexibility of individual sugar rings. This study investigates the molecular mechanisms that determine the elastic properties exhibited by amylose when subjected to deformations, using constant-force SMD simulations. Amylose is a linear polysaccharide of glucose linked mainly by α(1→4) bonds. The elastic properties of amylose are explored by investigating the effect of both stretching speed and strand length on the force-extension profile. On the basis of this work, we confirm that the elastic behaviour of amylose is governed by the mechanics of the pyranose rings and their force-induced conformational transitions. The molecular mechanism can be explained by a combination of syn and anti-parallel conformations of the dihedral angles and chair-to-boat transitional changes. Almost half the chair-to-boat transitional changes of the pyranose rings occur in quick succession in the first part of the force-extension profile (cooperatively) and the rest follow later (anti-cooperatively) at higher forces, with a much greater interval between them. At low forces, the stretching profile is characterised by the transition of the dihedral angles to the anti-conformation, with low elasticities measured for all the chain lengths.
Chair-to-boat transitional changes of the pyranose rings of the shorter chains only occurred anti-cooperatively at high stretching forces, whereas much lower forces were recorded for the same conformational change in the longer chains. For the shorter chains, most of these conversions produced the characteristic “shoulder” in the amylose stretching curve. Faster ramping rates were found to increase the force required to reach a particular extension of an amylose fragment. The transitions were similar in shape, but occurred at lower forces, confirming that decreasing the ramping rate lowers the expected force. The mechanism was also essentially the same, with very little change between the simulations. Simulations performed with slower ramping rates were found to be adequate for reproduction of the experimental curve.
4

Graphics Processing Unit Accelerated Coarse-Grained Protein-Protein Docking

Tunbridge, Ian 01 January 2011 (has links)
Graphics processing unit (GPU) architectures are increasingly used for general purpose computing, providing the means to migrate algorithms from the SISD paradigm, synonymous with CPU architectures, to the SIMD paradigm. Generally programmable commodity multi-core hardware can result in significant speed-ups for migrated codes. Because of their computational complexity, molecular simulations in particular stand to benefit from GPU acceleration. Coarse-grained molecular models provide reduced complexity when compared to the traditional, computationally expensive, all-atom models. However, while coarse-grained models are much less computationally expensive than the all-atom approach, the pairwise energy calculations required at each iteration of the algorithm continue to cause a computational bottleneck for a serial implementation. In this work, we describe a GPU implementation of the Kim-Hummer coarse-grained model for protein docking simulations, using a Replica Exchange Monte Carlo (REMC) method. Our highly parallel implementation vastly increases the size and time scales accessible to molecular simulation. We describe in detail the complex process of migrating the algorithm to a GPU as well as the effect of various GPU approaches and optimisations on algorithm speed-up. Our benchmarking and profiling shows that the GPU implementation scales very favourably compared to a CPU implementation. Small reference simulations benefit from a modest speed-up of between 4 and 10 times. However, large simulations, containing many thousands of residues, benefit from asynchronous GPU acceleration to a far greater degree and exhibit speed-ups of up to 1400 times. We demonstrate the utility of our system on some model problems. We investigate the effects of macromolecular crowding, using a repulsive crowder model, finding our results to agree with those predicted by scaled particle theory.
We also perform initial studies into the simulation of viral capsid assembly, demonstrating the crude assembly of capsid pieces into a small fragment. This is the first implementation of REMC docking on a GPU, and the resulting speed-ups alter the tractability of large-scale simulations: simulations that would otherwise require months or years can be performed in days or weeks using a GPU.
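The exchange step that distinguishes REMC from plain Monte Carlo is a single acceptance test between neighbouring temperatures. The sketch below uses hypothetical energies and a toy temperature ladder; the bookkeeping of the thesis's GPU implementation is not described in the abstract and is not reproduced here:

```python
import math
import random

random.seed(1)

def swap_accepted(e_i, e_j, beta_i, beta_j):
    """Replica-exchange criterion: accept the swap of configurations with
    probability min(1, exp[(beta_i - beta_j) * (e_i - e_j)])."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or random.random() < math.exp(delta)

# Four replicas on a temperature ladder (hypothetical energies, cold to hot).
betas = [1.0, 0.8, 0.6, 0.4]
energies = [-120.0, -115.0, -90.0, -60.0]

# One sweep of neighbour swap attempts: accepted swaps exchange configurations,
# modelled here by exchanging the energies.
for i in range(len(betas) - 1):
    if swap_accepted(energies[i], energies[i + 1], betas[i], betas[i + 1]):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
```

A swap is always accepted when the colder replica holds the higher energy, which is what lets trapped low-temperature replicas escape via the hot end of the ladder.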
5

GPU-based Acceleration of Radio Interferometry Point Source Visibility Simulations in the MeqTrees Framework

Baxter, Richard 01 January 2013 (has links)
Modern radio interferometer arrays are powerful tools for obtaining high resolution images of low frequency electromagnetic radiation signals in deep space. While single dish radio telescopes convert the electromagnetic radiation directly into an image of the sky (or sky intensity map), interferometers convert the interference patterns between dishes in the array into samples of the Fourier plane (UV-data or visibilities). A subsequent Fourier transform of the visibilities yields the image of the sky. Conversely, a sky intensity map comprising a collection of point sources can be subjected to an inverse Fourier transform to simulate the corresponding Point Source Visibilities (PSV). Such simulated visibilities are important for testing models of external factors that affect the accuracy of observed data, such as radio frequency interference and interaction with the ionosphere. MeqTrees is a widely used radio interferometry calibration and simulation software package that contains a Point Source Visibility module. Unfortunately, calculation of visibilities is computationally intensive: it requires application of the same Fourier equation to many point sources across multiple frequency bands and time slots. There is great potential for this module to be accelerated by the highly parallel Single-Instruction-Multiple-Data (SIMD) architectures in modern commodity Graphics Processing Units (GPUs). With many traditional high performance computing techniques requiring high entry and maintenance costs, GPUs have proven to be a cost effective and high performance parallelisation tool for SIMD problems such as PSV simulations. This thesis presents a GPU/CUDA implementation of the Point Source Visibility calculation within the existing MeqTrees framework. For a large number of sources, this implementation achieves an 18× speed-up over the existing CPU module.
With modifications to the MeqTrees memory management system to reduce overheads by incorporating GPU memory operations, speed-ups of 25× are theoretically achievable. Ignoring all serial overheads, and considering only the parallelisable sections of code, speed-ups reach up to 120×.
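The Fourier equation applied per source is the standard point-source visibility sum. The scalar sketch below is a reference form of that equation, not the MeqTrees or CUDA code:

```python
import cmath

def point_source_visibility(sources, u, v):
    """One (u, v) sample of the point-source visibility sum:
    V(u, v) = sum_k S_k * exp(-2*pi*j * (u*l_k + v*m_k)),
    where each source is a (flux, l, m) tuple with direction cosines l, m."""
    return sum(flux * cmath.exp(-2j * cmath.pi * (u * l + v * m))
               for flux, l, m in sources)

# A single 2 Jy source at the phase centre yields a constant visibility.
v0 = point_source_visibility([(2.0, 0.0, 0.0)], u=150.0, v=-80.0)
```

Because every (source, frequency, time) contribution is independent, the full calculation is embarrassingly parallel, which is precisely the structure the SIMD/GPU implementation exploits.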
6

Acceleration of the noise suppression component of the DUCHAMP source-finder.

Badenhorst, Scott 01 January 2015 (has links)
The next generation of radio interferometer arrays - the proposed Square Kilometre Array (SKA) and its precursor instruments, the Karoo Array Telescope (MeerKAT) and the Australian Square Kilometre Array Pathfinder (ASKAP) - will produce radio observation survey data orders of magnitude larger than current sizes. The sheer size of the imaged data produced necessitates fully automated solutions to accurately locate and produce useful scientific data for radio sources which are (for the most part) partially hidden within inherently noisy radio observations (source extraction). Automated extraction solutions exist but are computationally expensive and do not yet scale to the performance required to process large data in practical time-frames. The DUCHAMP software package is one of the most accurate source extraction packages for general (source shape unknown) source finding. DUCHAMP's accuracy is primarily facilitated by the à trous wavelet reconstruction algorithm, a multi-scale smoothing algorithm which suppresses erratic observation noise. This algorithm is the most computationally expensive and memory intensive within DUCHAMP, and consequently improvements to it greatly improve overall DUCHAMP performance. We present a high performance, multithreaded implementation of the à trous algorithm with a focus on 'desktop' computing hardware to enable standard researchers to do their own accelerated searches. Our solution consists of three main areas of improvement: single-core optimisation, multi-core parallelism and the efficient out-of-core computation of large data sets with memory management libraries. Efficient out-of-core computation (data partially stored on disk when primary memory resources are exceeded) of the à trous algorithm accounts for 'desktop' computing's limited fast memory resources by mitigating the performance bottleneck associated with frequent secondary storage access.
Although this work focuses on 'desktop' hardware, the majority of the improvements developed are general enough to be used within other high performance computing models. Single-core optimisations improved algorithm accuracy by reducing rounding error and achieved a 4× serial performance increase which scales with the filter size used during reconstruction. Multithreading on a quad-core CPU further increased the performance of the filtering operations within reconstruction to 22× (performance scaling approximately linearly with increased CPU cores) and achieved a 13× performance increase overall. All evaluated out-of-core memory management libraries performed poorly with parallelism. Single-threaded memory management partially mitigated the slow disk access bottleneck and achieved a 3.6× increase (uniform for all tested large data sets) for filtering operations and a 1.5× increase overall. Faster secondary storage solutions such as Solid State Drives or RAID arrays are required to process large survey data on 'desktop' hardware in practical time-frames.
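The à trous ("with holes") decomposition itself is compact. The 1-D sketch below uses the common B3-spline kernel with mirrored boundaries; it is a textbook serial form for illustration, not DUCHAMP's optimised multithreaded implementation:

```python
def atrous_smooth(signal, scale):
    """One smoothing pass of the 'a trous' scheme: convolve with the
    B3-spline kernel whose taps sit 2**scale samples apart, reflecting
    indices that fall outside the signal."""
    taps = (1 / 16, 1 / 4, 3 / 8, 1 / 4, 1 / 16)
    gap = 2 ** scale
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(taps):
            j = i + (k - 2) * gap
            while j < 0 or j >= n:            # mirror out-of-range indices
                j = -j if j < 0 else 2 * (n - 1) - j
            acc += w * signal[j]
        out.append(acc)
    return out

def atrous_decompose(signal, n_scales):
    """Split a 1-D signal into wavelet planes plus a smooth residual;
    by construction the planes and residual sum back to the input."""
    planes, current = [], list(signal)
    for scale in range(n_scales):
        smooth = atrous_smooth(current, scale)
        planes.append([a - b for a, b in zip(current, smooth)])
        current = smooth
    return planes, current

signal = [0.0, 1.0, 0.0, 5.0, 0.0, 1.0, 0.0, 0.0]
planes, residual = atrous_decompose(signal, 3)
recon = [sum(t) for t in zip(*planes, residual)]   # exact reconstruction
```

Noise suppression then amounts to thresholding small coefficients in each wavelet plane before summing; the sketch only demonstrates that the decomposition is exactly invertible, which is the property the reconstruction step relies on.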
7

A Parallel Multidimensional Weighted Histogram Analysis Method

Potgieter, Andrew 01 January 2014 (has links)
The Weighted Histogram Analysis Method (WHAM) is a technique used to calculate free energy from molecular simulation data. WHAM recombines biased distributions of samples from multiple Umbrella Sampling simulations to yield an estimate of the global unbiased distribution. The WHAM algorithm iterates two coupled, non-linear equations until convergence at an acceptable level of accuracy. The equations have quadratic time complexity for a single reaction coordinate. However, this increases exponentially with the number of reaction coordinates under investigation, which makes multidimensional WHAM a computationally expensive procedure. There is potential to use general purpose graphics processing units (GPGPUs) to accelerate the execution of the algorithm. Here we develop and evaluate a multidimensional GPGPU WHAM implementation to investigate the potential speed-up attained over its CPU counterpart. In addition, to avoid the cost of multiple Molecular Dynamics simulations and to validate the implementations, we develop a test system to generate samples analogous to Umbrella Sampling simulations. We observe a maximum problem-size-dependent speed-up of approximately 19× for the GPGPU optimised WHAM implementation over our single threaded CPU optimised version. We find that the WHAM algorithm is amenable to GPU acceleration, which provides the means to study ever more complex molecular systems in reduced time periods.
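The two coupled WHAM equations fit in a few lines for a single reaction coordinate. The sketch below is a serial reference version with hypothetical toy histograms and bias energies, not the GPGPU implementation evaluated in the thesis:

```python
import math

def wham(hist, bias, n_samples, beta=1.0, tol=1e-10, max_iter=10000):
    """Iterate the two coupled WHAM equations until the window free
    energies f_i stop changing.
    hist[i][b]:  counts window i recorded in bin b
    bias[i][b]:  bias potential of window i evaluated at bin b
    n_samples[i]: total samples drawn in window i
    Returns (unbiased probability per bin, free energy per window)."""
    n_win, n_bins = len(hist), len(hist[0])
    f = [0.0] * n_win
    total = [sum(hist[i][b] for i in range(n_win)) for b in range(n_bins)]
    for _ in range(max_iter):
        # Equation 1: unbiased distribution given the current f_i.
        p = []
        for b in range(n_bins):
            denom = sum(n_samples[i] * math.exp(beta * (f[i] - bias[i][b]))
                        for i in range(n_win))
            p.append(total[b] / denom)
        # Equation 2: new f_i from the current distribution.
        new_f = [-math.log(sum(p[b] * math.exp(-beta * bias[i][b])
                               for b in range(n_bins))) / beta
                 for i in range(n_win)]
        shift = new_f[0]                       # fix the arbitrary offset
        new_f = [x - shift for x in new_f]
        converged = max(abs(a - b) for a, b in zip(new_f, f)) < tol
        f = new_f
        if converged:
            break
    norm = sum(p)
    return [x / norm for x in p], f

# Two toy umbrella windows over three bins (hypothetical counts and biases).
hist = [[40, 50, 10], [10, 50, 40]]
bias = [[0.0, 0.5, 2.0], [2.0, 0.5, 0.0]]
prob, f = wham(hist, bias, n_samples=[100, 100])
```

The quadratic cost per iteration (every bin sums over every window) is what the GPU version parallelises, and the exponential growth in the number of bins with added reaction coordinates is why the multidimensional case needs it.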
8

A GPU-Based Level of Detail System for the Real-Time Simulation and Rendering of Large-Scale Granular Terrain

Leach, Craig 01 June 2014 (has links)
Real-time computer games and simulations often contain large virtual outdoor environments. Terrain forms an important part of these environments. This terrain may consist of various granular materials, such as sand, rubble and rocks. Previous approaches to rendering such terrains rely on simple textured geometry, with little to no support for dynamic interactions. Recently, particle-based granular terrain simulations have emerged as an alternative method for rendering granular terrain. These systems simulate granular materials by using particles to represent the individual granules, and exhibit realistic, physically correct interactions with dynamic objects. However, they are extremely computationally expensive, and thus may only feasibly be used to simulate small areas of terrain. In order to overcome this limitation, this thesis builds upon a previously created particle-based granular terrain simulation, by integrating it with a heightfield-based terrain system. In this way, we create a level of detail system for simulating large-scale granular terrain. The particle-based terrain system is used to represent areas of terrain around dynamic objects, whereas the heightfield-based terrain is used elsewhere. This allows large-scale granular terrain to be simulated in real-time, with physically correct dynamic interactions. This is made possible by a novel system, which allows for terrain to be converted from one representation to the other in real-time, while maintaining changes made to the particle-based system in the heightfield-based system. The system also allows for updates to particle systems to be paused, creating the illusion that more particle systems are active than actually are. We show that the system is capable of simulating and rendering multiple particle-based simulations across a large-scale terrain, whilst maintaining real-time performance.
However, the number of particles used, and thus the number of particle-based simulations which may be used, is limited by the computational resources of the GPU.
9

Analysis of Particle Precipitation and Development of the Atmospheric Ionization Module OSnabrück - AIMOS

Wissing, Jan Maik 31 August 2011 (has links)
The goal of this thesis is to improve our knowledge of energetic particle precipitation into the Earth's atmosphere, from the thermosphere down to the surface. The particles originate from the Sun or from temporarily trapped populations inside the magnetosphere. The best-documented influence of solar (high-)energetic particles on the atmosphere is ozone depletion at high latitudes, attributed to the generation of HOx and NOx by precipitating particles (Crutzen et al., 1975; Solomon et al., 1981; Reid et al., 1991). In addition, Callis et al. (1996b, 2001) and Randall et al. (2005, 2006) point out the importance of low-energetic precipitating particles of magnetospheric origin, which create NOx in the lower thermosphere that may be transported downwards, where it also contributes to ozone depletion. The incoming particle flux changes dramatically as a function of auroral/geomagnetic activity and in particular during solar particle events. As a result, the degree of ionization and the chemical composition of the atmosphere are substantially affected by the state of the Sun. The direct energetic or dynamical influences of ions on the upper atmosphere therefore depend on solar variability at different time scales. Influences on chemistry have so far been considered with simplified precipitation patterns, a limited energy range and restrictions to certain particle species; see e.g. Jackman et al. (2000) and Sinnhuber et al. (2003b) for solar energetic protons with no spatial differentiation, and Callis et al. (1996b, 2001) for magnetospheric electrons only. A comprehensive atmospheric ionization model with spatially resolved particle precipitation, covering a wide energy range and all main particle species as well as a dynamic magnetosphere, was missing. In the scope of this work, a 3-D precipitation model of solar and magnetospheric particles has been developed. Temporal as well as spatial ionization patterns are discussed.
Apart from that, the ionization data are used in different climate models, allowing (a) simulations of NOx and HOx formation and transport, (b) comparisons to incoherent scatter radar measurements and (c) inter-comparison of the chemistry components of different models and comparison of model results to MIPAS observations. More broadly, the ionization data may be used to better constrain the natural sources of climate change, or the consequences for atmospheric dynamics of local temperature changes caused by precipitating particles and their implications for chemistry. The influence of precipitating energetic particles on the composition and dynamics of the atmosphere is thus a challenging issue in climate modeling. The ionization data are available online and can be adapted automatically to any user-specific model grid.
10

Ab Initio Studies of Surface and Bulk Systems

Greuling, Andreas 21 December 2010 (has links)
In this thesis we apply ab initio methods to the study of several surface systems and one bulk system. We rely chiefly on density functional theory (DFT) and the GW approximation (GWA) within the framework of many-body perturbation theory. We use these methods to investigate the adsorption of TMA on the rutile TiO2 surface, to calculate optical spectra of TiO2, and to understand the adsorption of [7]-HCA on the calcite(10-14) surface. Furthermore, we study in depth PTCDA on Ag(111), which is manipulated with a chemically contacted STM tip.
