  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Force-extension of the Amylose Polysaccharide

van den Berg, Rudolf 01 January 2010 (has links)
Atomic Force Microscopy (AFM) single-molecule stretching experiments have been used in a number of studies to characterise the elasticity of single polysaccharide molecules. Steered molecular dynamics (SMD) simulations can reproduce the force-extension behaviour of polysaccharides while allowing investigation of the molecular mechanisms behind the macroscopic behaviour. Stretching experiments on single amylose molecules using AFM, combined with SMD simulations, have shown that the molecular elasticity of saccharides is a function of both rotational motion about the glycosidic bonds and the flexibility of the individual sugar rings. This study investigates the molecular mechanisms that determine the elastic properties exhibited by amylose under deformation, using constant-force SMD simulations. Amylose is a linear polysaccharide of glucose linked mainly by α-(1→4) glycosidic bonds. The elastic properties of amylose are explored by investigating the effect of both stretching speed and strand length on the force-extension profile. On the basis of this work, we confirm that the elastic behaviour of amylose is governed by the mechanics of the pyranose rings and their force-induced conformational transitions. The molecular mechanism can be explained by a combination of syn and anti conformations of the dihedral angles and chair-to-boat transitions. Almost half of the chair-to-boat transitions of the pyranose rings occur in quick succession in the first part of the force-extension profile (cooperatively); the rest follow later (anti-cooperatively) at higher forces, with much greater intervals between them. At low forces, the stretching profile is characterised by the transition of the dihedral angles to the anti conformation, with low elasticities measured for all chain lengths.
Chair-to-boat transitions of the pyranose rings of the shorter chains occurred only anti-cooperatively at high stretching forces, whereas much lower forces were recorded for the same conformational change in the longer chains. For the shorter chains, most of these conversions produced the characteristic “shoulder” in the amylose stretching curve. Faster ramping rates were found to increase the force required to reach a particular extension of an amylose fragment. The transitions were similar in shape but occurred at lower forces, confirming that decreasing the ramping rate lowers the expected force. The mechanism was also essentially the same, with very little change between the simulations. Simulations performed with slower ramping rates were found to be adequate for reproducing the experimental curve.
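The conformational analysis described above hinges on tracking dihedral angles along the chain. As an illustration only (not the authors' code), the standard four-atom dihedral computation used in this kind of SMD trajectory analysis can be sketched as:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in degrees defined by four atom positions (float
    arrays), e.g. the phi/psi angles about a glycosidic bond.
    Convention: 0 for cis (eclipsed), +/-180 for trans."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))
```

In an analysis like the one above, this angle would be sampled per glycosidic linkage and per trajectory frame, and its population near the syn versus anti values tracked as a function of applied force.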
12

Graphics Processing Unit Accelerated Coarse-Grained Protein-Protein Docking

Tunbridge, Ian 01 January 2011 (has links)
Graphics processing unit (GPU) architectures are increasingly used for general-purpose computing, providing the means to migrate algorithms from the SISD paradigm, synonymous with CPU architectures, to the SIMD paradigm. Generally programmable commodity multi-core hardware can yield significant speed-ups for migrated codes. Because of their computational complexity, molecular simulations in particular stand to benefit from GPU acceleration. Coarse-grained molecular models reduce complexity compared to the traditional, computationally expensive, all-atom models. However, while coarse-grained models are much less expensive than the all-atom approach, the pairwise energy calculations required at each iteration of the algorithm remain a computational bottleneck for a serial implementation. In this work, we describe a GPU implementation of the Kim-Hummer coarse-grained model for protein docking simulations, using a Replica Exchange Monte Carlo (REMC) method. Our highly parallel implementation vastly increases the size and time scales accessible to molecular simulation. We describe in detail the complex process of migrating the algorithm to a GPU, as well as the effect of various GPU approaches and optimisations on algorithm speed-up. Our benchmarking and profiling show that the GPU implementation scales very favourably compared to a CPU implementation. Small reference simulations benefit from a modest speed-up of between 4 and 10 times. However, large simulations, containing many thousands of residues, benefit from asynchronous GPU acceleration to a far greater degree and exhibit speed-ups of up to 1400 times. We demonstrate the utility of our system on some model problems. We investigate the effects of macromolecular crowding, using a repulsive crowder model, finding our results to agree with those predicted by scaled particle theory.
We also perform initial studies into the simulation of viral capsid assembly, demonstrating the crude assembly of capsid pieces into a small fragment. This is the first implementation of REMC docking on a GPU, and the speed-ups achieved alter the tractability of large-scale simulations: simulations that would otherwise require months or years can be performed in days or weeks using a GPU.
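The REMC method used above periodically attempts to swap configurations between replicas run at different temperatures. A minimal sketch of the standard Metropolis swap criterion (the textbook rule, not the thesis's GPU implementation; k_B = 1 units):

```python
import math
import random

def swap_accepted(E_i, E_j, beta_i, beta_j, rng=random.random):
    """Metropolis criterion for exchanging the configurations of two
    replicas with inverse temperatures beta_i, beta_j and current
    energies E_i, E_j. Accept with probability min(1, exp(delta)),
    where delta = (beta_i - beta_j) * (E_i - E_j)."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

A swap that moves the lower-energy configuration to the colder replica (delta >= 0) is always accepted; other swaps are accepted stochastically, which is what lets high-temperature replicas carry configurations over energy barriers.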
13

GPU-based Acceleration of Radio Interferometry Point Source Visibility Simulations in the MeqTrees Framework

Baxter, Richard 01 January 2013 (has links)
Modern radio interferometer arrays are powerful tools for obtaining high-resolution images of low-frequency electromagnetic radiation signals from deep space. While single-dish radio telescopes convert the electromagnetic radiation directly into an image of the sky (or sky intensity map), interferometers convert the interference patterns between dishes in the array into samples of the Fourier plane (UV data, or visibilities). A subsequent Fourier transform of the visibilities yields the image of the sky. Conversely, a sky intensity map comprising a collection of point sources can be subjected to an inverse Fourier transform to simulate the corresponding Point Source Visibilities (PSV). Such simulated visibilities are important for testing models of external factors that affect the accuracy of observed data, such as radio frequency interference and interaction with the ionosphere. MeqTrees is a widely used radio interferometry calibration and simulation software package that contains a Point Source Visibility module. Unfortunately, calculation of visibilities is computationally intensive: it requires applying the same Fourier equation to many point sources across multiple frequency bands and time slots. There is great potential for this module to be accelerated by the highly parallel Single-Instruction-Multiple-Data (SIMD) architectures in modern commodity Graphics Processing Units (GPUs). With many traditional high-performance computing techniques requiring high entry and maintenance costs, GPUs have proven to be a cost-effective and high-performance parallelisation tool for SIMD problems such as PSV simulations. This thesis presents a GPU/CUDA implementation of the Point Source Visibility calculation within the existing MeqTrees framework. For a large number of sources, this implementation achieves an 18× speed-up over the existing CPU module.
With modifications to the MeqTrees memory management system to reduce overheads by incorporating GPU memory operations, speed-ups of 25× are theoretically achievable. Ignoring all serial overheads, and considering only the parallelisable sections of code, speed-ups reach up to 120×.
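The computation being accelerated here is, at its core, a direct Fourier sum over point sources for each (u, v) sample. A sketch of that sum in its simplest scalar form (illustrative only: a single frequency, no time axis, no direction-dependent effects):

```python
import numpy as np

def point_source_visibilities(flux, l, m, u, v):
    """Complex visibilities from a direct Fourier sum over point sources:
    V(u, v) = sum_k S_k * exp(-2*pi*i*(u*l_k + v*m_k)).
    flux, l, m: per-source arrays (flux density and direction cosines);
    u, v: per-sample baseline coordinates in wavelengths."""
    # (samples, sources) phase matrix, then contract over sources
    phase = -2j * np.pi * (np.outer(u, l) + np.outer(v, m))
    return np.exp(phase) @ flux
```

Because every (sample, source) phase term is independent, the sum maps naturally onto SIMD hardware, which is exactly the structure the thesis exploits on the GPU.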
14

Lattice Boltzmann Liquid Simulations on Graphics Hardware

Clough, Duncan 01 June 2013 (has links)
Fluid simulation is widely used in the visual effects industry. The high level of detail required to produce realistic visual effects demands significant computation, so expensive computer clusters are usually used to reduce the time required. However, general-purpose Graphics Processing Unit (GPU) computing has potential as a relatively inexpensive way to reduce these simulation times. In recent years, GPUs have been used to achieve enormous speed-ups via their massively parallel architectures. Within the field of fluid simulation, the Lattice Boltzmann Method (LBM) stands out as a candidate for GPU execution because its grid-based structure is a natural fit for GPU parallelism. This thesis describes the design and implementation of a GPU-based free-surface LBM fluid simulation. Broadly, our approach is to ensure that the steps that perform most of the work in the LBM (the stream and collide steps) make efficient use of GPU resources. We achieve this by removing complexity from the core stream and collide steps and handling interactions with obstacles and tracking of the fluid interface in separate GPU kernels. To determine the efficiency of our design, we perform separate, detailed analyses of the performance of the kernels associated with the stream and collide steps of the LBM. We demonstrate that these kernels make efficient use of GPU resources and achieve speed-ups of 29.6× and 223.7×, respectively. Our analysis of the overall performance of all kernels shows that significant time is spent performing obstacle adjustment and interface movement as a result of limitations associated with GPU memory accesses. Lastly, we compare our GPU LBM implementation with a single-core CPU LBM implementation. Our results show speed-ups of up to 81.6×, with no significant differences in output from the simulations on the two platforms.
We conclude that order of magnitude speedups are possible using GPUs to perform free-surface LBM fluid simulations, and that GPUs can, therefore, significantly reduce the cost of performing high-detail fluid simulations for visual effects.
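The stream and collide steps discussed above can be illustrated with a minimal D2Q9 BGK sketch. This is a generic textbook formulation for reference, not the thesis's GPU kernels, and the relaxation time tau is an illustrative value:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order expansion of the Maxwell-Boltzmann equilibrium."""
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return rho * W[:, None, None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def collide(f, tau=0.6):
    """BGK relaxation toward local equilibrium (the 'collide' step)."""
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    return f + (equilibrium(rho, ux, uy) - f) / tau

def stream(f):
    """Shift each distribution along its lattice velocity (the 'stream' step)."""
    return np.stack([np.roll(np.roll(f[i], C[i, 0], axis=0), C[i, 1], axis=1)
                     for i in range(9)])
```

Both steps are purely local (collide) or fixed-stencil (stream) over a regular grid, which is why, as the abstract argues, they map so well onto GPU parallelism once obstacle and interface handling are moved into separate kernels.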
15

Computer-aided Timing Training System for Musicians

Manchip, David 01 November 2011 (has links)
Traditionally, musicians make use of a metronome for timing training. A typical metronome, whether hardware or software emulation, will provide the musician with a regular, metrical click to use as a temporal guide. The musician will synchronise his or her actions to the metronome click, thereby producing music that is in time. With regular usage, a musician’s sense of time will gradually improve. To investigate potential benefits offered by computer-assisted instruction, an Alternate Timing Training System was designed and a prototype software implementation developed. The system employed alternative training methods and exercises beyond those offered by a standard metronome. An experiment was conducted with a sample of musicians that attempted to measure and compare improvements in timing accuracy using a standard metronome and the Alternate Timing Training System. The software was also made available for public download and evaluated by a number of musicians who subsequently completed an online survey. A number of limitations were identified in the experiment, including too short a training period, too small a sample size and subjects that already had a highly developed sense of time. Whilst the results of the experiment were inconclusive, analysis of survey results indicated a significant preference for the Alternate Timing Training System over a standard metronome as an effective means of timing training.
16

Contribution to real-time air system modeling dedicated to trapped mass estimation

Meddahi, Farouq 12 December 2016 (has links)
Gas dynamics has a strong impact on the air system, and hence on combustion products (emissions), because of the dynamic content of new automotive test cycles such as the WLTC. This makes current real-time 0D models less reliable, as they rely on look-up tables measured at stationary operating points; moreover, wave phenomena and gas inertial effects are inherently neglected, making the estimation of flow into and out of the cylinder inaccurate. This work presents a methodology to efficiently reproduce wave effects along internal combustion engine ducts. The idea relies on combining lumped-parameter and quasi-one-dimensional models; this combination captures the inertial effects of the gas dynamics while avoiding the heavy computational cost of the 1D modeling approach. The first part investigates one-dimensional numerical schemes, with the aim of evaluating their computation time and accuracy for real-time application and of defining a good reference for further numerical validation of the low-order models. The quasi-propagatory model proved the best candidate for modeling waves at low computational cost. To obtain a proper boost pressure estimate, particular attention was paid to the compressor, for which a physics-based model was presented following Martin et al. [55]; results also showed better interpretation and extrapolation ability. Finally, the developments were validated experimentally over the complete engine operating map.
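The lumped-parameter (0D) side of such a combined model typically treats each duct volume as a filling-and-emptying element. A sketch of one explicit Euler step of that element is given below; this is a generic textbook formulation, and the volume, temperature, and gas constants are illustrative values, not taken from the thesis:

```python
def manifold_pressure_step(p, mdot_in, mdot_out, dt,
                           T=300.0, V=2.0e-3, R=287.0, gamma=1.4):
    """One explicit Euler step of the filling-and-emptying model:
    dp/dt = gamma * R * T / V * (mdot_in - mdot_out).
    All SI units: p [Pa], mdot [kg/s], dt [s], T [K], V [m^3]."""
    return p + dt * gamma * R * T / V * (mdot_in - mdot_out)
```

A 0D element like this captures mass storage in a volume but, by construction, no wave propagation or gas inertia along the duct, which is precisely the gap the quasi-one-dimensional part of the model is meant to fill.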
17

Algorithms for efficiently and effectively matching agents in microsimulations of sexually transmitted infections

Geffen, Nathan 01 January 2018 (has links)
Mathematical models of the HIV epidemic have been used to estimate incidence, prevalence and life expectancy, as well as the benefits and costs of public health interventions, such as the provision of antiretroviral treatment. Models of sexually transmitted infection epidemics attempt to account for varying levels of risk across a population based on diverse, or heterogeneous, sexual behaviour. Microsimulations are a type of model that can account for fine-grained heterogeneous sexual behaviour. This requires pairing individuals, or agents, into sexual partnerships whose distribution matches that of the population being studied, to the extent this is known. But pair-matching is computationally expensive, and there is a need for computer algorithms that pair-match quickly. In this work we describe the role of modelling in responses to the South African HIV epidemic. We also chronicle a three-decade debate, greatly influenced since 2008 by a mathematical model, on the optimal time for people with HIV to start antiretroviral treatment. We then present and analyse several pair-matching algorithms and compare them in a microsimulation of a fictitious STI. We find that there are algorithms, such as Cluster Shuffle Pair-Matching, that offer a good compromise between speed and approximating the distribution of sexual relationships of the study population. An interesting further finding is that infection incidence decreases as population increases, all other things being equal. Whether this is an artefact of our methodology or a natural-world phenomenon is unclear and a topic for further research.
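Cluster Shuffle Pair-Matching, as named above, trades matching quality against speed by randomising only within small clusters of similar agents. The following is a simplified, hypothetical sketch of that idea, not a faithful reimplementation of the thesis's algorithm; the cluster size and similarity key are illustrative:

```python
import random

def cluster_shuffle_pairs(agents, key, cluster_size=4, rng=random):
    """Sort agents by a similarity key, shuffle within small clusters to
    inject randomness, then pair adjacent agents. Runs in O(n log n)
    versus the O(n^2) of exhaustively scoring all candidate pairs."""
    ordered = sorted(agents, key=key)
    for start in range(0, len(ordered), cluster_size):
        chunk = ordered[start:start + cluster_size]
        rng.shuffle(chunk)
        ordered[start:start + cluster_size] = chunk
    # Pair neighbours; with an odd count, the last agent stays unmatched
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered) - 1, 2)]
```

The cluster size controls the trade-off: a cluster of 1 reduces to deterministic nearest-neighbour matching, while a cluster spanning the whole population reduces to uniform random matching.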
18

Physical simulation of gravitational flows: the effects of variations in flow concentration and discharge on the generated deposit

Fioriti, Lenita de Souza, 1985- 19 August 2018 (has links)
Advisor: Giorgio Basilici / Master's dissertation (Geology and Natural Resources) - Universidade Estadual de Campinas, Instituto de Geociências / Abstract: Physical simulations of density currents at reduced scale have been developed to study the physical processes that occur in natural events. The present study deals with laboratory models of gravitational flows in a channel-shaped flume (4.5 × 0.15 × 0.5 m). It was devoted to understanding hydrodynamic and depositional processes as functions of variations in the discharge and concentration of density currents. The experimental data were correlated with information derived from outcrops and from the monitoring of natural events, obtained from the literature. The sediments used in the model consisted of 30% coal, 30% silica (ballotini) and 40% kaolin, with grain sizes between clay and fine sand. Currents were run with: i) constant discharge and high concentration (20%); ii) constant discharge and low concentration (10%); iii) varied discharge and high concentration (20%); iv) varied discharge and low concentration (10%). Discharge variations were directly proportional to variations in the height and velocity of the current. The highest discharges and velocities produced a greater resistance force from the ambient fluid, and this reaction of the ambient water favoured the development of the current height. Concentration variations were directly proportional to velocity variations and inversely proportional to variations in current height. This behaviour was explained by the Reynolds number: increasing the flow concentration decreased the turbulence intensity and the heights developed by the body of the current. The lower the viscosity of a flow, the higher its Reynolds number and the more its motion tends toward turbulent flow. The currents showed a density-stratified profile (bipartite flow). Sediments were deposited by progressive aggradation from the basal shear flow, by en-masse "freezing" from the laminar (plug) flow, and by settling and traction from the turbulent/dilute upper flow. Increasing the concentration favoured the development of debris flow, and the upper turbulent flow was replaced by a dilute cloud of fine grains. The sediments tended to accumulate in the proximal portion of the flume, with deposit thickness and grain size decreasing toward the distal portion. Increases in discharge and concentration increased the depositional mass, but the deposit thickness tended to remain constant. The grains were transported over longer distances, so the length of the deposit increased. This was due to interaction among the grains, which increased the carrying capacity and inhibited the settling of sediments (hindered settling). The simulated currents corresponded to subcritical gravity flows monitored in nature, whose sediments range in size between very fine sand and pebbles. The experimental results are analogous to the Apiúna Unit, located in the Itajaí Basin (Santa Catarina, Brazil), which represents a classic sequence of deep-water deposits.
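The Reynolds-number argument used above can be made concrete with the standard bulk definition for a current of a given thickness; the formula is generic and the values below are illustrative, not measurements from the experiments:

```python
def reynolds_number(rho, velocity, depth, mu):
    """Bulk Reynolds number Re = rho * U * h / mu for a density current
    of density rho [kg/m^3], velocity U [m/s], thickness h [m] and
    dynamic viscosity mu [Pa*s]. Larger Re indicates more turbulent flow;
    raising the sediment concentration raises mu and so lowers Re."""
    return rho * velocity * depth / mu
```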
19

Physics-based Simulation of Tablet Disintegration and Dissolution

Yue Li (11202198) 29 July 2021 (has links)
Tablets are the most widely used dosage form in the world and central to the mass production of drugs. The disintegration and dissolution kinetics of tablets play a vital role in the pharmacokinetics and pharmacodynamics of drugs, and are critical for evaluating the quality of drug formulations. This thesis reports a modeling and simulation approach for the tablet disintegration and dissolution processes in a dissolution test device. By coupling the lattice Boltzmann method with the discrete element method, we simulate the hydrodynamics as well as the particle dynamics in the dissolution test device. Our computational methods can model the tablet structure, the disintegration of the tablet in the dissolution device, and the dissolution of particles under the influence of hydrodynamics. The simulation results show that our computational methods can reproduce experimental results, paving the path toward an in-silico platform for tablet formulation design and verification.
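The particle-dissolution side of a coupled LBM-DEM model of this kind is commonly driven by a Noyes-Whitney-type rate law; whether this thesis uses exactly that form is an assumption, and all parameter values below are illustrative:

```python
def dissolve_step(mass, dt, k=1e-3, area_coeff=1.0, c_s=1.0, c_bulk=0.0):
    """One Euler step of Noyes-Whitney dissolution for a shrinking
    particle: dm/dt = -k * A * (c_s - c_bulk), with surface area
    A ~ m^(2/3) for a sphere of fixed density. c_s is the saturation
    concentration, c_bulk the local bulk concentration (here the local
    hydrodynamic value a coupled LBM solver would supply)."""
    if mass <= 0:
        return 0.0
    area = area_coeff * mass ** (2.0 / 3.0)
    return max(0.0, mass - dt * k * area * (c_s - c_bulk))
```

In a coupled scheme, c_bulk would come from the advected solute field around each DEM particle, so regions of strong flow (low local concentration) dissolve particles faster, which is the hydrodynamic influence the abstract refers to.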
20

On the surface quality of continuously cast steels and phosphor bronzes

Saleem, Saud January 2016 (has links)
This thesis concerns the importance of cast surfaces, surface phenomena such as the formation of oscillation marks and exudation, and related defects, including cracks and segregation, that occur during continuous casting. All investigated materials were collected during plant trials, while in-depth analysis was performed at the laboratory scale, with explanations supported by schematic and theoretical models. The work covers different material classes, namely steels and phosphor bronzes, with a focus on surface defects and their improvement. To facilitate theoretical analysis capable of explaining the phenomena suggested in the thesis, a reduced model was developed that requires fewer computational resources and exhibits fewer convergence problems.
