91

Coarse-grained model for a motor protein on a microtubule

Alanazi, Mansour Awadh, Alanazi January 2017 (has links)
No description available.
92

AN APPROACH FOR FINE-GRAINED PROFILING OF PARALLEL APPLICATIONS

DESHMUKH, AMOL S. 07 October 2004 (has links)
No description available.
93

Molecular Investigations into the Titin-Telethonin Complex: A study in Protein-Protein Interactions

Bodmer, Nicholas 16 October 2015 (has links)
No description available.
94

Investigation of sexithiophene properties with Monte Carlo simulations of a coarse-grained model

Almutairi, Amani January 2016 (has links)
No description available.
95

ACCURATE LANGEVIN INTEGRATION METHODS FOR COARSE-GRAINED MOLECULAR DYNAMICS WITH LARGE TIME STEPS

Finkelstein, Joshua January 2020 (has links)
The Langevin equation is a stochastic differential equation frequently used in molecular dynamics for simulating systems at constant temperature. Recent developments have given rise to wide use of Langevin dynamics at different levels of spatial resolution, which necessitates time step and friction parameter choices outside the range for which many existing temporal discretization methods were originally developed. We first study the GJ–F, BAOAB and BBK numerical algorithms, originally developed for atomistic simulations, on a coarse-grained polymer melt, paying close attention to the large time step regime. The results of this study then inspire our search for new algorithms and lead to a general class of velocity Verlet-based time-stepping schemes designed to perform well across all parameter regimes, by ensuring that they faithfully reproduce statistical quantities for the cases of a free particle and a harmonic oscillator. This family of methods depends on the choice of a single free parameter function, and we explore some of the methods defined by particular choices of this parameter on realistic coarse-grained and atomistic molecular systems relevant to materials and biomolecular science. In addition, we provide an equivalent splitting formulation of this one-parameter family, which allows for enhanced insight into the hidden rescaling of the Hamiltonian and stochastic time scales induced by the choice of the free parameter. / Mathematics
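
For readers unfamiliar with these integrators, the following is a minimal sketch of a single BAOAB-style Langevin step for one particle in a harmonic potential. The parameters (m, k, kT, gamma, dt) and the closing consistency check are illustrative assumptions, not the schemes or systems studied in the thesis.

import numpy as np

# Hypothetical parameters for a 1D harmonic oscillator test system (reduced units).
m, k, kT = 1.0, 1.0, 1.0          # mass, spring constant, thermal energy
gamma, dt = 1.0, 0.25             # friction coefficient, time step
force = lambda x: -k * x          # harmonic force

def baoab_step(x, v, rng):
    """One BAOAB Langevin step: B (half kick), A (half drift), O (exact OU noise), A, B."""
    v += 0.5 * dt * force(x) / m                  # B
    x += 0.5 * dt * v                             # A
    c1 = np.exp(-gamma * dt)                      # O: Ornstein-Uhlenbeck update
    v = c1 * v + np.sqrt((1 - c1**2) * kT / m) * rng.standard_normal()
    x += 0.5 * dt * v                             # A
    v += 0.5 * dt * force(x) / m                  # B
    return x, v

# Short trajectory; the configurational estimate k*<x^2> should approach kT.
rng = np.random.default_rng(0)
x, v = 1.0, 0.0
samples = []
for step in range(50000):
    x, v = baoab_step(x, v, rng)
    samples.append(x)
print("k*<x^2> ≈", k * np.mean(np.square(samples)))  # expect ≈ kT = 1.0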
96

The Effects of Molecular Structure and Design on the Plasticizer Performance Through Coarse-Grained Molecular Simulation

Panchal, Kushal January 2018 (has links)
Plasticizers are additives commonly used in the polymer industry to make plastics more pliable by reducing the glass transition temperature, Tg, and Young's modulus, Y. Although the plasticizer aids polymer processability and makes the material suitable for applications ranging from industrial cables to sensitive medical equipment, the mechanism of plasticization is not fully understood. Three theories are used to explain plasticization: lubricity theory, gel theory, and free volume theory. The latter rests on a fundamental concept of polymer science used to calculate many polymer properties, yet none of the three gives a clear picture of plasticization. With molecular dynamics (MD) simulation, a coarse-grained (CG) model - a simple bead-spring model that generalizes groups of atoms as beads connected by finite springs - is used to explore the impact of plasticizer size throughout the polymer system. The interaction characteristics of the plasticizer are explored by representing each plasticizer molecule as a single bead of varying size. This gives better control over the variability of the mixture and helps pinpoint the significant contributions to plasticization. A path to understanding the mechanism of plasticization will give insight into glass formation, and can later be used to find an optimal plasticizer architecture that minimizes migration of the additive by tuning its compatibility. Current results show a decoupling between the Tg and Y of the polymer-additive system. The overall picture of the finite-size effects is as follows: as additives of increasing size are added, the polymer free volume increases, which in turn would decrease Y, but Tg is shown to increase because the polymer and additive are not mobile enough to relieve the caging effect of the monomeric units. / Thesis / Master of Applied Science (MASc)
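
To make the role of Tg concrete, here is a minimal sketch of how a glass transition temperature is often estimated from simulation output: fit straight lines to the glassy and melt branches of a specific-volume versus temperature curve and take their intersection. The data are synthetic stand-ins in reduced units and the branch cut-offs are arbitrary assumptions; this is not the analysis code used in the thesis.

import numpy as np

# Synthetic specific-volume vs. temperature data standing in for bead-spring
# simulation output (reduced Lennard-Jones units; purely illustrative).
rng = np.random.default_rng(1)
T = np.linspace(0.2, 0.8, 25)
true_Tg = 0.45
v = np.where(T < true_Tg,
             1.00 + 0.10 * T,                                # glassy branch
             1.00 + 0.10 * true_Tg + 0.35 * (T - true_Tg))   # melt branch
v += 0.002 * rng.standard_normal(T.size)                     # measurement noise

def estimate_tg(T, v, split_lo=0.35, split_hi=0.55):
    """Fit lines to the low- and high-T branches; Tg is their intersection."""
    a1, b1 = np.polyfit(T[T < split_lo], v[T < split_lo], 1)  # glassy slope/intercept
    a2, b2 = np.polyfit(T[T > split_hi], v[T > split_hi], 1)  # melt slope/intercept
    return (b2 - b1) / (a1 - a2)

print("Estimated Tg ≈", round(estimate_tg(T, v), 3))          # value built into the data: 0.45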
97

M3D: Multimodal MultiDocument Fine-Grained Inconsistency Detection

Tang, Chia-Wei 10 June 2024 (has links)
Validating claims amid misinformation is a highly challenging task that involves understanding how each factual assertion within the claim relates to a set of trusted source materials. Existing approaches often make coarse-grained predictions but fail to identify the specific aspects of the claim that are problematic and the specific evidence relied upon. In this paper, we introduce a method and a new benchmark for this challenging task. Our method predicts the fine-grained logical relationship of each aspect of the claim from a set of multimodal documents, which include text, images, videos, and audio. We also introduce a new benchmark (M^3DC) of claims requiring multimodal multidocument reasoning, which we construct using a novel claim synthesis technique. Experiments show that our approach significantly outperforms state-of-the-art baselines on this challenging task across two benchmarks while providing finer-grained predictions, explanations, and evidence. / Master of Science / In today's world, we are constantly bombarded with information from various sources, making it difficult to distinguish between what is true and what is false. Validating claims and determining their truthfulness is an essential task that helps us separate facts from fiction, but it can be a time-consuming and challenging process. Current methods often fail to pinpoint the specific parts of a claim that are problematic and the evidence used to support or refute them. In this study, we present a new method and benchmark for fact-checking claims using multiple types of information sources, including text, images, videos, and audio. Our approach analyzes each aspect of a claim and predicts how it logically relates to the available evidence from these diverse sources. This allows us to provide more detailed and accurate assessments of the claim's validity. We also introduce a new benchmark dataset called M^3DC, which consists of claims that require reasoning across multiple sources and types of information. To create this dataset, we developed a novel technique for synthesizing claims that mimic real-world scenarios. Our experiments show that our method significantly outperforms existing state-of-the-art approaches on two benchmarks while providing more fine-grained predictions, explanations, and evidence. This research contributes to the ongoing effort to combat misinformation and fake news by providing a more comprehensive and effective approach to fact-checking claims.
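
To give a feel for what "fine-grained" prediction means here, the sketch below scores each claim aspect against each document and keeps the strongest-scoring label. The label set, the NLIScorer signature and the toy_scorer are hypothetical placeholders for illustration only; they are not the M3D model or the paper's benchmark.

from typing import Callable, Dict, List

# Placeholder scorer: in the paper's setting this would be a multimodal model;
# here it is just an assumed signature returning probabilities for
# SUPPORTED / REFUTED / NOT-ENOUGH-INFO given one claim aspect and one document.
NLIScorer = Callable[[str, str], Dict[str, float]]

def fine_grained_verdicts(aspects: List[str], documents: List[str],
                          score: NLIScorer) -> Dict[str, str]:
    """For each claim aspect, keep the label of its strongest-scoring document."""
    verdicts = {}
    for aspect in aspects:
        best_label, best_prob = "NOT-ENOUGH-INFO", 0.0
        for doc in documents:
            probs = score(aspect, doc)
            label = max(probs, key=probs.get)
            if label != "NOT-ENOUGH-INFO" and probs[label] > best_prob:
                best_label, best_prob = label, probs[label]
        verdicts[aspect] = best_label
    return verdicts

# Toy scorer for demonstration only (simple word overlap).
def toy_scorer(aspect: str, doc: str) -> Dict[str, float]:
    hit = any(w in doc.lower() for w in aspect.lower().split())
    return {"SUPPORTED": 0.8 if hit else 0.1,
            "REFUTED": 0.1,
            "NOT-ENOUGH-INFO": 0.1 if hit else 0.8}

print(fine_grained_verdicts(
    ["the bridge opened in 1932", "the bridge is in Sydney"],
    ["The Sydney Harbour Bridge opened in 1932."],
    toy_scorer))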
98

Generalizing the Utility of Graphics Processing Units in Large-Scale Heterogeneous Computing Systems

Xiao, Shucai 03 July 2013 (has links)
Today, heterogeneous computing systems are widely used to meet the increasing demand for high-performance computing. These systems commonly use powerful and energy-efficient accelerators to augment general-purpose processors (i.e., CPUs). The graphics processing unit (GPU) is one such accelerator. Originally designed solely for graphics processing, GPUs have evolved into programmable processors that can deliver massive parallel processing power for general-purpose applications. Using SIMD (single instruction, multiple data) components as building units, the current GPU architecture is well suited for data-parallel applications where the execution of each task is independent. With the delivery of programming models such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), programming GPUs has become much easier than before. However, developing and optimizing an application on a GPU is still a challenging task, even for well-trained computing experts. Such programming tasks will be even more challenging in large-scale heterogeneous systems, particularly in the context of utility computing, where GPU resources are used as a service. These challenges are largely due to the limitations in the current programming models: (1) there are no intra- and inter-GPU cooperative mechanisms that are natively supported; (2) current programming models only support the utilization of GPUs installed locally; and (3) to use GPUs on another node, application programs need to explicitly call application programming interface (API) functions for data communication. To reduce the mapping efforts and to better utilize the GPU resources, we investigate generalizing the utility of GPUs in large-scale heterogeneous systems with GPUs as accelerators. We generalize the utility of GPUs through the transparent virtualization of GPUs, which can enable applications to view all GPUs in the system as if they were installed locally. As a result, all GPUs in the system can be used as local GPUs. Moreover, GPU virtualization is a key capability to support the notion of "GPU as a service." Specifically, we propose the virtual OpenCL (or VOCL) framework for the transparent virtualization of GPUs. To achieve good performance, we optimize and extend the framework in three aspects: (1) optimize VOCL by reducing the data transfer overhead between the local node and remote node; (2) propose GPU synchronization to reduce the overhead of switching back and forth if multiple kernel launches are needed for data communication across different compute units on a GPU; and (3) extend VOCL to support live virtual GPU migration for quick system maintenance and load rebalancing across GPUs. With the above optimizations and extensions, we thoroughly evaluate VOCL along three dimensions: (1) show the performance improvement for each of our optimization strategies; (2) evaluate the overhead of using remote GPUs via several microbenchmark suites as well as a few real-world applications; and (3) demonstrate the overhead as well as the benefit of live virtual GPU migration. Our experimental results indicate that VOCL can generalize the utility of GPUs in large-scale systems at a reasonable virtualization and migration cost. / Ph. D.
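
The transparent-forwarding idea behind such GPU virtualization can be illustrated with a toy sketch: a local proxy serializes each API-style call and ships it to a "remote" executor, so the application never needs to know where the device lives. The class names, the queue-based transport and the launch_kernel call below are invented for illustration; they are not VOCL, OpenCL or any real GPU API.

import json
import threading
from queue import Queue

class RemoteGPUProxy:
    """Local stand-in that forwards every API-style call to the remote side."""
    def __init__(self, wire: Queue, replies: Queue):
        self.wire, self.replies = wire, replies

    def call(self, api_name: str, **kwargs):
        # Serialize the call and ship it over the "network" (a queue here).
        self.wire.put(json.dumps({"api": api_name, "args": kwargs}))
        return json.loads(self.replies.get())

class RemoteGPUServer:
    """Pretend remote node: decodes one request and 'executes' it."""
    def __init__(self, wire: Queue, replies: Queue):
        self.wire, self.replies = wire, replies

    def serve_one(self):
        req = json.loads(self.wire.get())
        # A real server would invoke the native GPU runtime here.
        self.replies.put(json.dumps({"status": "ok", "api": req["api"],
                                     "n_args": len(req["args"])}))

wire, replies = Queue(), Queue()
server = RemoteGPUServer(wire, replies)
threading.Thread(target=server.serve_one, daemon=True).start()

proxy = RemoteGPUProxy(wire, replies)
# The application only ever talks to the local proxy.
print(proxy.call("launch_kernel", kernel="vector_add", global_size=1024))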
99

Molecular insights on the interference of simplified lung surfactant models by gold nanoparticle pollutants

Hossain, S.I., Gandhi, N.S., Hughes, Zak E., Gu, Y.T., Saha, S.C. 01 July 2019 (has links)
Inhaled nanoparticles (NPs) are first encountered by the lung surfactant (LS), the first biological barrier inside the alveolus: a surface-tension-reducing agent consisting of phospholipids and proteins in the form of a monolayer at the air-water interface. The monolayer's surface tension is continuously regulated by the compression and expansion of the alveolus and protects the alveoli from collapsing. Inhaled NPs can reach deep into the lungs and interfere with the biophysical properties of the lung components. The mechanisms by which bare gold nanoparticles (AuNPs) interact with the LS monolayer, and the consequences of these interactions for lung function, are not well understood. Coarse-grained molecular dynamics simulations were carried out to elucidate the interactions of AuNPs with simplified LS monolayers at the nanoscale. It was observed that the interactions of AuNPs with LS components deform the monolayer structure, change the biophysical properties of the LS and create pores in the monolayer, all of which interfere with normal lung function. The results also indicate that AuNP concentrations >0.1 mol% (of AuNPs/lipids) hinder the lowering of the LS surface tension, a prerequisite of the normal breathing process. Overall, these findings could help to identify the possible consequences of airborne NP inhalation and their contribution to the potential development of various lung diseases. / University of Technology Sydney (UTS) FEIT Research Scholarship, UTS IRS (S.I.H.), 2018 Blue Sky scheme–Suvash Saha (Activity 2232368), N.S.G is supported by the Vice-Chancellor fellowship funded by QUT.
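
For context on the surface-tension measurements mentioned above, one common route in interfacial molecular dynamics is the pressure-tensor expression gamma = (Lz / n_interfaces) * <Pzz - (Pxx + Pyy)/2>. The sketch below applies it to synthetic pressure samples; the box height, number of interfaces and pressure values are placeholder assumptions, not data from this study.

import numpy as np

# gamma = (Lz / n_interfaces) * <Pzz - (Pxx + Pyy) / 2>, with z normal to the interface.
Lz_nm = 15.0                 # box height along the interface normal (nm), assumed
n_interfaces = 2             # typical monolayer setups have two air-water interfaces
rng = np.random.default_rng(2)
Pxx = rng.normal(-65.0, 20.0, 5000)   # tangential pressure samples (bar), synthetic
Pyy = rng.normal(-65.0, 20.0, 5000)   # bar
Pzz = rng.normal(  1.0, 20.0, 5000)   # normal pressure samples (bar), synthetic

gamma_bar_nm = (Lz_nm / n_interfaces) * np.mean(Pzz - 0.5 * (Pxx + Pyy))
gamma_mN_per_m = gamma_bar_nm * 0.1   # 1 bar·nm = 1e5 Pa · 1e-9 m = 0.1 mN/m
print(f"surface tension ≈ {gamma_mN_per_m:.1f} mN/m")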
100

MOLECULAR DYNAMICS OF PREDNISOLONE ADSORPTION ON A LUNG SURFACTANT MODEL

EVELINA DUNESKA ESTRADA LOPEZ 28 May 2018 (has links)
The simulation of prednisolone adsorption on a lung surfactant model was successfully performed using coarse-grained molecular dynamics at 310 K (the first such dynamics performed). The coarse-grained model for prednisolone was parameterized using a well-established cholesterol model and validated by calculations of octanol-water partition coefficients and lateral diffusion coefficients. The calculated octanol-water partition coefficient of prednisolone at 298 K is 3.9 ± 1.6, which is in reasonable agreement with experiment. The lateral diffusion coefficient of prednisolone in the DPPC/POPC mixed monolayer is estimated to be (6 ± 4) × 10⁻⁷ cm² s⁻¹ at 20 mN m⁻¹, which is in agreement with that found for cholesterol. The DPPC/POPC mixed monolayer was used as a lung surfactant model onto which prednisolone molecules were adsorbed, forming nanoaggregates. The prednisolone nanoaggregates were transferred into the DPPC/POPC mixed monolayer, spreading at a surface tension of 20 mN m⁻¹. At 0 and 10 mN m⁻¹, the prednisolone nanoaggregates induce the collapse of the DPPC/POPC mixed monolayer, forming a bilayer. The implication of this work is that prednisolone may only be administered with lung surfactant at low mass fractions of prednisolone per lipid (less than 10 percent); at high fractions, the collapse inactivates the properties of the lung surfactant by forming a bilayer. The results of this research can be used to develop new clinical treatments for diseases such as respiratory distress syndrome of the newborn, asthma and chronic obstructive pulmonary disease.
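
The lateral diffusion coefficient quoted above is the kind of quantity obtained from the two-dimensional Einstein relation MSD(t) = 4Dt. Below is a minimal sketch of that analysis on a synthetic in-plane random walk generated to match the reported magnitude; the frame spacing and trajectory are assumptions, not the thesis's trajectory data.

import numpy as np

# MSD(t) = 4 D t in two dimensions; fit the slope to recover D.
rng = np.random.default_rng(3)
dt_ns = 0.1                          # time between stored frames (ns), assumed
D_in_cm2_s = 6e-7                    # magnitude taken from the abstract
D_in_nm2_ns = D_in_cm2_s * 1e5       # 1 cm^2/s = 1e14 nm^2 / 1e9 ns = 1e5 nm^2/ns
steps = rng.normal(0.0, np.sqrt(2 * D_in_nm2_ns * dt_ns), size=(20000, 2))
xy = np.cumsum(steps, axis=0)        # synthetic in-plane trajectory (nm)

def lateral_diffusion(xy, dt, max_lag=200):
    """Average the squared displacement over lag times, then fit MSD = 4 D t."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1)) for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / 4.0               # nm^2/ns

D_est = lateral_diffusion(xy, dt_ns)
print(f"D ≈ {D_est / 1e5:.1e} cm^2/s (input magnitude: 6e-07 cm^2/s)")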
