  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Modélisation et score de complexes protéine-ARN / Modelling and scoring of protein-RNA complexes

Guilhot-Gaudeffroy, Adrien 29 September 2014 (has links)
Cette thèse présente des résultats dans le domaine de la prédiction d’interactions protéine-ARN. C’est un domaine de recherche très actif, pour lequel la communauté internationale organise régulièrement des compétitions pour évaluer différentes techniques de prédiction in silico d’interactions protéine-protéine et protéine-ARN sur des données benchmarks (CAPRI, Critical Assessment of PRedicted Interactions), par prédiction en aveugle et en temps limité. Dans ce cadre, de nombreuses approches reposant sur des techniques d’apprentissage supervisé ont récemment obtenu de très bons résultats. Nos travaux s’inscrivent dans cette démarche. Nous avons travaillé sur des jeux de données de 120 complexes protéine-ARN extraits de la PRIDB non redondante (Protein-RNA Interface DataBase, banque de données de référence pour les interactions protéine-ARN). La méthodologie de prédiction d’interactions protéine-ARN a aussi été testée sur 40 complexes issus de benchmarks de l’état de l’art et indépendants des complexes de la PRIDB non redondante. Le faible nombre de structures natives et la difficulté de générer in silico des structures identiques à la solution in vivo nous ont conduits à mettre en place une stratégie de génération de candidats par perturbation de l’ARN partenaire d’un complexe protéine-ARN natif. Les candidats ainsi obtenus sont considérés comme des conformations presque natives s’ils sont suffisamment proches du natif ; les autres candidats sont des leurres. L’objectif est de pouvoir identifier les presque natifs parmi l’ensemble des candidats potentiels, par apprentissage supervisé d’une fonction de score. Nous avons conçu pour l’évaluation des fonctions de score une méthodologie de validation croisée originale appelée le leave-« one-pdb »-out, où il existe autant de strates que de complexes protéine-ARN et où chaque strate est constituée des candidats générés à partir d’un complexe. 
L’une des approches présentant les meilleures performances à CAPRI est l’approche RosettaDock, optimisée pour la prédiction d’interactions protéine-protéine. Nous avons étendu la fonction de score native de RosettaDock pour résoudre la problématique protéine-ARN. Pour l’apprentissage de cette fonction de score, nous avons adapté l’algorithme évolutionnaire ROGER (ROC-based Genetic LearnER) à l’apprentissage d’une fonction logistique. Le gain obtenu par rapport à la fonction native est significatif. Nous avons aussi mis au point d’autres modèles basés sur des approches de classifieurs et de métaclassifieurs, qui montrent que des améliorations sont encore possibles. Dans un second temps, nous avons introduit et mis en œuvre une nouvelle stratégie pour l’évaluation des candidats qui repose sur la notion de prédiction multi-échelle. Un candidat est représenté à la fois au niveau atomique, c’est-à-dire le niveau de représentation le plus détaillé, et au niveau dit « gros grain », où nous utilisons une représentation géométrique basée sur des diagrammes de Voronoï pour regrouper ensemble plusieurs composants de la protéine ou de l’ARN. L’état de l’art montre que les diagrammes de Voronoï ont déjà permis d’obtenir de bons résultats pour la prédiction d’interactions protéine-protéine. Nous en évaluons donc les performances après avoir adapté le modèle à la prédiction d’interactions protéine-ARN. L’objectif est de pouvoir rapidement identifier la zone d’interaction (épitope) entre la protéine et l’ARN avant d’utiliser l’approche atomique, plus précise, mais plus coûteuse en temps de calcul. L’une des difficultés est alors de pouvoir générer des candidats suffisamment diversifiés. Les résultats obtenus sont prometteurs et ouvrent des perspectives intéressantes. Une réduction du nombre de paramètres impliqués de même qu’une adaptation du modèle de solvant explicite pourraient en améliorer les résultats. 
/ My thesis presents results on the prediction of protein-RNA interactions with machine learning. An international community named CAPRI (Critical Assessment of PRedicted Interactions) regularly assesses in silico methods for predicting interactions between macromolecules. Using blind predictions within time constraints, protein-protein and, more recently, protein-RNA interaction prediction techniques are assessed. In a first stage, we worked on curated protein-RNA benchmarks, including 120 3D structures extracted from the non-redundant PRIDB (Protein-RNA Interface DataBase). We also tested the protein-RNA prediction method we designed on 40 protein-RNA complexes extracted from state-of-the-art benchmarks and independent of the non-redundant PRIDB complexes. Generating candidates identical to the in vivo solution from only a few 3D structures is an issue we tackled with a candidate generation strategy based on perturbing the RNA structure in the protein-RNA complex. Such candidates are either near-native candidates, if they are close enough to the solution, or decoys, if they are too far away. We want to discriminate the near-native candidates from the decoys. For the evaluation, we performed an original cross-validation process we called leave-"one-pdb"-out, with one fold per protein-RNA complex, each fold containing the candidates generated from that complex. One of the gold-standard approaches participating in the CAPRI experiment to date is RosettaDock, which is originally optimized for protein-protein complexes. For the learning step of our scoring function, we adapted and used an evolutionary algorithm called ROGER (ROC-based Genetic LearnER) to learn a logistic function. The results show that our scoring function performs much better than the original RosettaDock scoring function; we thus extend RosettaDock to the prediction of protein-RNA interactions. 
We also evaluated classifier-based and metaclassifier-based approaches, which can lead to further improvements with more investigation. In a second stage, we introduced a new way to evaluate candidates using a multi-scale protocol. A candidate is geometrically represented at the atomic level, the most detailed scale, as well as at a coarse-grained level. The coarse-grained level is based on the construction of a Voronoi diagram over the coarse-grained atoms of the 3D structure. Voronoi diagrams have already successfully modelled coarse-grained interactions for protein-protein complexes in the past. The idea behind the multi-scale protocol is to first find the interaction patch (epitope) between the protein and the RNA before using the time-consuming yet more precise atomic level. We modelled new scoring terms, as well as new scoring functions, to evaluate generated candidates. Results are promising. Reducing the number of parameters involved and optimizing the explicit solvent model may improve the coarse-grained level predictions.
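The leave-"one-pdb"-out protocol amounts to what machine-learning libraries call grouped cross-validation: every candidate generated from a given complex lands in the same fold, so no complex leaks between training and test. A minimal sketch on synthetic data; the scoring terms, labels, and toy nearest-centroid scorer are placeholders, not the thesis's features or the ROGER learner:

```python
import numpy as np

rng = np.random.default_rng(0)
n_complexes, per_complex, n_terms = 5, 20, 8
X = rng.normal(size=(n_complexes * per_complex, n_terms))   # candidate scoring terms
y = rng.integers(0, 2, size=X.shape[0])                     # 1 = near-native, 0 = decoy
groups = np.repeat(np.arange(n_complexes), per_complex)     # one group per complex

def leave_one_pdb_out(X, y, groups):
    """Yield (train, test) index arrays, one fold per complex."""
    for g in np.unique(groups):
        yield np.where(groups != g)[0], np.where(groups == g)[0]

accuracies = []
for train, test in leave_one_pdb_out(X, y, groups):
    # toy scorer: nearest class centroid computed on the training complexes
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1) <
            np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    accuracies.append((pred == y[test]).mean())

print(len(accuracies))  # 5: one held-out estimate per complex
```

Averaging the per-fold scores then estimates performance on a complex never seen during training, which is the point of the protocol.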
12

Multi-scale whole-plant model of Arabidopsis growth to flowering

Chew, Yin Hoon January 2013 (has links)
In this study, theoretical and experimental approaches were combined, using Arabidopsis as the studied species. The multi-scale model incorporates the following existing sub-models: a phenology model that can predict the flowering time of plants grown in the field, a gene circuit of the circadian clock network that regulates flowering through the photoperiod pathway, a process-based model describing carbon assimilation and resource partitioning, and a functional-structural module that determines shoot structure for light interception and root growth. First, the phenology model was assessed for its ability to predict the flowering time of field plantings at different sites and seasons, in light of the specific meteorological conditions that pertained. This analysis suggested that the synchrony of temperature and light cycles is important in promoting floral initiation. New features were incorporated into the phenology model that improved its predictive accuracy across seasons. Using both lab and field data, this study revealed an important seasonal effect of night temperatures on flowering time. Further model adjustments to describe phytochrome (phy) mutants supported the findings and implicated phyB in the temporal gating of temperature-induced flowering. The improved phenology model was next linked to the clock gene circuit model. Simulation of clock mutants with different free-running periods highlighted the complex mechanism underlying daylength responses in the induction of flowering. Finally, the carbon assimilation and functional-structural growth modules were integrated to form the multi-component, whole-plant model. The integrated model was successfully validated with experimental data from several genotypes grown in the laboratory. 
In conclusion, the model has the ability to predict the flowering time, leaf biomass and ecosystem exchange of plants grown under conditions of varying light intensity, temperature, CO2 level and photoperiod, though extensions of some model components to incorporate more biological details would be relevant. Nevertheless, this meso-scale model creates obvious application routes from molecular and cellular biology to crop improvement and biosphere management. It could provide a framework for whole-organism modelling to help address global issues such as food security and the energy crisis.
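The phenology component can be pictured with a much simpler stand-in: a thermal-time (degree-day) model, in which flowering is predicted once accumulated temperature above a base threshold reaches a target. The base temperature and degree-day target below are illustrative placeholders, not the thesis's fitted Arabidopsis values, and the sketch ignores the photoperiod and night-temperature effects the thesis adds:

```python
def days_to_flowering(daily_mean_temps, base_temp=3.0, target_gdd=900.0):
    """Return the day on which accumulated growing degree days (GDD)
    above base_temp first reach target_gdd, or None if never reached."""
    gdd = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        gdd += max(0.0, temp - base_temp)
        if gdd >= target_gdd:
            return day
    return None

# constant 18 degree days accumulate (18 - 3) = 15 GDD/day: 900 / 15 = 60 days
print(days_to_flowering([18.0] * 100))  # 60
```

Real phenology models layer daylength and vernalization responses on top of this accumulator; the thesis's contribution is precisely those seasonal corrections.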
13

CFD evaluation of cluster specific image based asthma lung features on particle transport and hygroscopic particle growth model validation

LeBlanc, Lawrence Joseph 01 May 2017 (has links)
Aerosolized drug delivery to the human lungs for asthma treatment has long been studied, yet the relationship between delivery efficacy and inter-subject variability due to gender, age, and disease severity remains unclear. A recent imaging-based cluster analysis of a population of asthmatic patients identified four clusters with distinct structural and functional characteristics. We propose using cluster membership to explore this inter-subject variability, numerically investigating air flow and particle transport in representative subjects of each asthmatic cluster to assess inhalation drug delivery in asthma sub-populations. Large-eddy simulations using computed tomography (CT)-based airway models were performed with a slow and deep breathing profile corresponding to application of a metered-dose inhaler. Physiologically consistent, subject-specific boundary conditions in peripheral airways were produced using an image registration technique and a resistance-network compliance model. Particle simulations and final deposition statistics were calculated for particle sizes ranging from 1–8 μm. The results underscore the importance of airway constriction for regional particle deposition and the prominent effects of local features in lobar, segmental, and sub-segmental airways on overall deposition patterns. Asthmatic clusters characterized by airway constriction showed increased deposition efficiency in lobar, segmental, and sub-segmental airways. Local constrictions produced jet flows that impinged on distal bifurcations and resulted in large inertial deposition. A decreased right main bronchus (RMB) branching angle decreased the fraction of particles ventilated to the right upper lobe (RUL). Cluster-based computational fluid dynamics results demonstrate particle deposition characteristics associated with imaging-based variables that could be useful for future drug delivery improvements. 
One method for circumventing low deposition in small airways due to constriction in tracheobronchial airways is hygroscopic growth of inhaled aerosols. Hygroscopic materials have an affinity for water and can enlarge significantly as they traverse the respiratory tract. Hygroscopic growth has shown promise as a viable drug delivery method for decreasing deposition in the upper tracheobronchial region and increasing drug penetration and retention in small airways. Current hygroscopic growth models show promise in predicting steady-state final droplet diameters, but much uncertainty remains in predicting transient effects. This thesis discusses one such growth model in detail and modifies it to include realistic spatial temperature and humidity variations associated with the lung. The growth model is simplified by grouping terms and then solved using MATLAB's ode45 solver. The model is compared to experimentally acquired in vitro data for validation. The results do not show good agreement and suggest that additional factors inhibit aerosol droplet growth from commencing immediately upon entering the respiratory tract, as is assumed in the literature. The thesis briefly hypothesizes reasons for the disagreement between model and data and discusses limitations of current growth models.
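The transient-growth question can be illustrated with a heavily simplified stand-in for the full mass- and heat-transfer ODE: droplet diameter relaxing first-order toward an equilibrium size set by local humidity. The time constant, equilibrium diameter, and initial size below are illustrative, not the thesis's fitted values:

```python
def grow(d0, d_eq, tau, dt, t_end):
    """Integrate d'(t) = (d_eq - d) / tau with explicit Euler; return the
    trajectory of diameters. A first-order relaxation stand-in for the
    full hygroscopic growth model."""
    d, t, history = d0, 0.0, [d0]
    while t < t_end:
        d += dt * (d_eq - d) / tau
        t += dt
        history.append(d)
    return history

# 1 µm droplet relaxing toward a 4 µm equilibrium with tau = 0.2 s:
traj = grow(d0=1.0, d_eq=4.0, tau=0.2, dt=0.001, t_end=1.0)
print(round(traj[-1], 2))  # ~3.98 after five time constants
```

The thesis's finding, that measured droplets lag this kind of immediate-onset growth, corresponds to the real trajectory sitting below such a curve at early times.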
14

MULTI-SCALE MODELING OF POLYMERIC MATERIALS: AN ATOMISTIC AND COARSE-GRAINED MOLECULAR DYNAMICS STUDY

Wang, Qifei 01 August 2011 (has links)
Computational study of the structural, thermodynamic, and transport properties of polymeric materials at equilibrium requires multi-scale modeling techniques, because the relevant processes occur across a broad spectrum of time and length scales. Classical molecular-level simulation, such as molecular dynamics (MD), has proved very useful in the study of polymeric oligomers and short chains. However, there is a strong, nonlinear dependence of relaxation time on chain length that requires less computationally demanding techniques to describe the behavior of longer chains. Coarse-graining (CG), one of the mesoscale modeling techniques, has been developed recently to extend molecular simulation to larger time and length scales. With a CG model, the structure and dynamics of long-chain polymeric systems can be studied directly through CG-level simulation. In CG simulations, the generation of the CG potential is an area of active research. The work in this dissertation focused both on developing techniques for generating CG potentials and on applying CG potentials in coarse-grained molecular dynamics (CGMD) simulations to describe the structural, thermodynamic, and transport properties of various polymer systems. First, an improved procedure for generating CG potentials from structural data obtained from atomistic simulation of short chains was developed: the Ornstein-Zernike integral equation with the Percus-Yevick approximation was invoked to solve this inverse problem (OZPY-1). The OZPY-1 method was then applied to CG modeling of polyethylene terephthalate (PET) and polyethylene glycol (PEG). Finally, the CG procedure was applied to a model of sulfonated and cross-linked poly(1,3-cyclohexadiene) (sxPCHD), a polymer designed for future application as a proton exchange membrane material in fuel cells. 
Through these efforts, we developed an understanding of the strengths and limitations of various procedures for generating CG potentials. We were able to simulate entangled polymer chains for PET and study their structure and dynamics as a function of chain length. The work also provides the first glimpses of the nanoscale morphology of the hydrated sxPCHD membrane; an understanding of this structure is important for predicting proton conductivity in the membrane.
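The structure-to-potential inversion can be illustrated with the simplest member of the same family as OZPY-1: direct Boltzmann inversion, which turns a radial distribution function g(r) into a trial CG pair potential. This is not the OZPY-1 method itself (which accounts for many-body correlations through the Ornstein-Zernike relation), and the g(r) values below are synthetic:

```python
import math

kB_T = 1.0  # work in units of kT

def boltzmann_invert(g_values):
    """Direct Boltzmann inversion: U(r) = -kT ln g(r).
    Infinite where g(r) = 0 (hard exclusion at short range)."""
    return [float("inf") if g <= 0 else -kB_T * math.log(g)
            for g in g_values]

r = [0.9, 1.0, 1.1, 1.5, 2.0]          # CG site separations (arbitrary units)
g = [0.0, 1.5, 1.2, 0.9, 1.0]          # contact peak, decaying to 1 at range
U = boltzmann_invert(g)
print(U[1] < 0, abs(U[-1]) < 1e-12)    # attractive well at the g>1 peak; U -> 0 as g -> 1
```

Direct inversion is only exact at infinite dilution; corrections such as OZPY-1 or iterative refinement are what make the potential usable at liquid densities, which is the gap the dissertation addresses.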
15

Multi-scale Modeling of Chemical Vapor Deposition: From Feature to Reactor Scale

Jilesen, Jonathan January 2009 (has links)
Multi-scale modeling of chemical vapor deposition (CVD) is a very broad topic because a large number of physical processes affect the quality and speed of film deposition. These processes have different length scales associated with them, creating the need for a multi-scale model. The three main scales of importance to the modeling of CVD are the reactor scale, the feature scale, and the atomic scale. The reactor scale ranges from meters to millimeters and is so called because it corresponds to the scale of the reactor geometry. The micrometer scale is labeled the feature scale in this study because it is the scale of the feature geometries; it is also the scale at which grain boundaries and surface quality can be discussed. The final scale of importance to the CVD process is the atomic scale. The focus of this study is on the reactor and feature scales, with special attention to the coupling between them. Currently there are two main methods of coupling the reactor and feature scales. The first is mainly applied when a modified line-of-sight feature scale model is used, with coupling occurring through a mass balance performed at the wafer surface. The second is only applicable to Monte Carlo-based feature scale models; coupling is accomplished through a mass balance performed at a plane offset from the surface. During this study, a means of using an offset plane to couple a continuum-based reactor/meso-scale model to a modified line-of-sight feature scale model was developed. This new model is then applied to several test cases and compared with the surface coupling method. To facilitate coupling at an offset plane, a new feature scale model called Ballistic Transport with Local Sticking Factors (BTLSF) was developed. The BTLSF model uses a source plane instead of a hemispherical source to calculate the initial deposition flux arriving from the source volume. 
The advantage of using a source plane is that it can be made the same plane as the coupling plane; having only one interface between the feature and reactor/meso scales simplifies coupling. Modifications were also made to the surface coupling method to allow it to model non-uniformly patterned features. Comparison of the two coupling methods showed that they produce similar results, with a maximum difference of 4.6% in their effective growth rate maps. However, the shapes of the individual effective reactivity functions produced by the offset coupling method are more realistic, without the step functions present in those of the surface coupling method. The cell size of the continuum-based component of the multi-scale model was also shown to be limited when the surface coupling method was used. Thanks to this work, researchers using a modified line-of-sight feature scale model now have a choice between a surface and an offset coupling method to link their reactor/meso and feature scales. Furthermore, the comparative study of the two methods in this thesis highlights their differences, allowing the selection to be an informed decision.
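The line-of-sight flux calculation at the heart of such feature scale models can be sketched in two dimensions: the flux reaching a point on a trench floor is an integral over the visible source plane spanning the opening, weighted by the emission and arrival angles. The kernel below is a generic cosine-weighted one with normalisation omitted; the BTLSF model's re-emission with local sticking factors is not included, and the geometry is purely illustrative:

```python
import math

def floor_flux(x, width=1.0, depth=2.0, n=2000):
    """Relative deposition flux at position x on the floor of a 2-D trench
    (opening of the given width, walls of the given depth), integrating
    line-of-sight contributions from a uniform source plane at the opening.
    Kernel: cos(theta_source) * cos(theta_receiver) / r, unnormalised."""
    total, dxs = 0.0, width / n
    for i in range(n):
        xs = (i + 0.5) * dxs               # sample point on the source plane
        dx, dy = xs - x, depth
        r = math.sqrt(dx * dx + dy * dy)
        total += (dy / r) * (dy / r) / r * dxs
    return total

center, corner = floor_flux(0.5), floor_flux(0.0)
print(center > corner)  # the trench centre sees more of the opening: True
```

The corner-versus-centre asymmetry is exactly what produces non-uniform step coverage inside features, and why the coupling plane (here, the source plane at the opening) is a convenient place to exchange mass balances with the reactor-scale model.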
17

Multi-scale Models of Tumor Growth and Invasion

Soos, Boglarka January 2012 (has links)
Cancer is a complex, multi-scale disease marked by unchecked cellular growth and proliferation. As a tumor grows, it is known to lose its capacity to maintain a compact structure. This stage of development, known as invasion, is marked by the disaggregation and dispersion of peripheral cells and the formation of finger-like margins. This thesis provides an overview of three multi-scale models of tumor growth and invasion. The hybrid discrete-continuum (HDC) model couples a cellular automaton approach, which directs the behavior and interactions of individual cells, with a system of reaction-diffusion-chemotaxis equations that describes the micro-environment. The evolutionary hybrid cellular automaton (EHCA) model maintains the core of the HDC approach but employs an artificial response network to describe cellular dynamics. In contrast to these two, the immersed boundary (IBCell) model describes cells as fully deformable, viscoelastic entities that interact with each other through membrane-bound receptors. As part of this thesis, the HDC model has been modified to examine the role of the extracellular matrix (ECM) as a barrier to cellular expansion. The results of these simulations are presented and discussed in the context of tumor progression.
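The continuum half of the HDC model is a reaction-diffusion system for micro-environmental fields such as oxygen, consumed where automaton sites are occupied by tumour cells. A minimal explicit finite-difference step for one such field, with illustrative coefficients and periodic boundaries (the published model's boundary conditions and parameter values differ):

```python
import numpy as np

def diffuse_react(c, cells, D=0.1, uptake=0.05, dx=1.0, dt=0.1):
    """One explicit Euler step of dc/dt = D * laplacian(c) - uptake * cells
    on a periodic grid (np.roll implements wrap-around neighbours)."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return np.clip(c + dt * (D * lap - uptake * cells), 0.0, None)

c = np.ones((32, 32))            # normalised oxygen field
cells = np.zeros((32, 32))
cells[12:20, 12:20] = 1.0        # a block of occupied automaton sites
for _ in range(50):
    c = diffuse_react(c, cells)
print(c[16, 16] < c[0, 0])       # oxygen depleted under the tumour: True
```

In the full HDC model the automaton cells then read this field back (migrating, proliferating, or dying by local oxygen level), which is what couples the discrete and continuum scales.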
18

Multi-scale methods for stochastic differential equations / Flerskaliga metoder för stokastiska differentialekvationer

Zettervall, Niklas January 2012 (has links)
Standard Monte Carlo methods are used extensively to solve stochastic differential equations. This thesis investigates a Monte Carlo (MC) method called multilevel Monte Carlo, which solves the equations on several grids, each with a specific number of grid points. Multilevel MC reduces the computational cost compared to standard MC; at a fixed computational cost, the variance can be reduced by using the multilevel method instead of the standard one. Discretization and statistical error calculations are also conducted, and the ability to evaluate these errors alongside multilevel MC creates a powerful tool for solving such equations numerically. By using the multilevel MC method together with the error calculations, it is possible to determine efficiently how to spend an extended computational budget. / Standard Monte Carlo-metoder används flitigt för att lösa stokastiska differentialekvationer. Denna avhandling undersöker en Monte Carlo-metod (MC) kallad multilevel Monte Carlo som löser ekvationerna på flera olika rutsystem, var och en med ett specifikt antal punkter. Multilevel MC reducerar beräkningskomplexiteten jämfört med standard MC. För en fixerad beräkningskomplexitet kan variansen reduceras genom att multilevel MC-metoden används istället för standard MC-metoden. Diskretiserings- och statistiska felberäkningar görs också, och möjligheten att evaluera de olika felen, kopplat med multilevel MC-metoden, skapar ett kraftfullt verktyg för numerisk beräkning av ekvationer. Genom att använda multilevel MC tillsammans med felberäkningar är det möjligt att bestämma hur en utökad beräkningsbudget spenderas så effektivt som möjligt.
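The multilevel idea is compact enough to sketch. Below, E[S(T)] for geometric Brownian motion dS = mu·S dt + sigma·S dW is estimated with Euler-Maruyama: a cheap coarse level plus a telescoping sum of corrections, where each correction couples a fine and a coarse path driven by the same Brownian increments so its variance is small. Parameters, level counts, and path counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0

def euler_pair(n_fine, n_paths):
    """Simulate n_paths fine paths (n_fine steps) and coarse paths
    (n_fine // 2 steps) driven by the SAME Brownian increments."""
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    Sf = np.full(n_paths, S0)
    Sc = np.full(n_paths, S0)
    for k in range(n_fine):
        Sf = Sf * (1 + mu * dt + sigma * dW[:, k])
    for k in range(0, n_fine, 2):            # coarse step sums two fine increments
        Sc = Sc * (1 + mu * 2 * dt + sigma * (dW[:, k] + dW[:, k + 1]))
    return Sf, Sc

# Telescoping sum: E[P_0] + sum over levels of E[P_l - P_{l-1}]
est = np.mean(euler_pair(2, 20000)[0])       # level 0: 2-step grid alone
for n_fine in (4, 8, 16):                    # levels 1..3: coupled corrections
    fine, coarse = euler_pair(n_fine, 20000)
    est += np.mean(fine - coarse)

print(abs(est - S0 * np.exp(mu * T)) < 0.02)  # exact mean is S0*e^{mu*T} ~ 1.0513
```

A production estimator would also choose the number of paths per level from the measured variances, which is exactly the "how to spend an extended computational budget" question the thesis addresses.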
19

A Multi-scale Framework for Thermo-viscoelastic Analysis of Fiber Metal Laminates

Sawant, Sourabh P. 14 January 2010 (has links)
Fiber metal laminates (FML) are hybrid composites with alternating layers of orthotropic fiber-reinforced polymers (FRP) and isotropic metal alloys. FML can exhibit nonlinear thermo-viscoelastic behavior under external mechanical and non-mechanical stimuli. Such behavior can be due to stress- and temperature-dependent viscoelastic response in one or all of the constituents, namely the fiber and matrix (within the FRP layers) or the metal layers. To predict the overall thermo-viscoelastic response of FML, it is necessary to incorporate the different responses of the individual constituents through a suitable multi-scale framework. Such a framework is developed here to relate the constituent material responses to the structural response of FML. It consists of a micromechanical model of unidirectional FRP for ply-level homogenization. The upper (structural) level uses a layered composite finite element (FE) with multiple integration points through the thickness; the micromechanical model is implemented at these integration points. An alternative to the layered composite element uses a sublaminate model to homogenize the responses of the FRP and metal layers and integrates it into continuum 3D or shell elements within the FE code. Thermo-viscoelastic constitutive models of homogeneous orthotropic materials are used at the lowest, constituent level, i.e., fiber, matrix, and metal. The nonlinear and time-dependent response of the constituents requires suitable correction algorithms (iterations) at various levels of the multi-scale framework. The framework can be used efficiently to analyze nonlinear thermo-viscoelastic responses of FML structural components. It is also beneficial for designing FML materials and structures, since different FML performances can first be simulated by varying constituent properties and microstructural arrangements.
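The lowest rung of this multi-scale ladder, ply-level homogenization of fiber and matrix, can be illustrated with the classical elastic rule of mixtures (Voigt along the fiber, Reuss transverse). This is a hedged, elastic-only stand-in for the thesis's viscoelastic micromechanics, with illustrative glass/epoxy numbers:

```python
def rom_moduli(E_fibre, E_matrix, v_fibre):
    """Rule-of-mixtures moduli for a unidirectional ply:
    E1 (longitudinal, Voigt: constituents strain together) and
    E2 (transverse, Reuss: constituents carry the same stress)."""
    E1 = v_fibre * E_fibre + (1 - v_fibre) * E_matrix
    E2 = 1.0 / (v_fibre / E_fibre + (1 - v_fibre) / E_matrix)
    return E1, E2

# illustrative glass/epoxy stiffnesses in GPa, 60% fibre volume fraction
E1, E2 = rom_moduli(E_fibre=70.0, E_matrix=3.5, v_fibre=0.6)
print(round(E1, 1), round(E2, 1))  # 43.4 8.1 -> stiff along fibres, matrix-dominated across
```

The thesis replaces these closed-form elastic averages with stress- and temperature-dependent viscoelastic constituents, which is why iterative correction is needed at each integration point.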
20

Multi-scale thermal and circuit analysis for nanometre-scale integrated circuits

Allec, Nicholas 27 September 2008 (has links)
Chip temperature is increasing with continued technology scaling due to increased power density and decreased device feature sizes. Since temperature has a significant impact on performance and reliability, accurate thermal and circuit analysis is of great importance. Because of shrinking device feature sizes, effects occurring at the nanometre scale, such as ballistic transport of energy carriers and electron tunneling, have become increasingly important and must be considered. However, many existing thermal and circuit analysis methods cannot handle these effects efficiently, if at all. This thesis presents methods for accurate and efficient multi-scale thermal and circuit analysis. For circuit analysis, the simulation of single-electron device circuits is specifically studied. For thermal analysis, this work develops ThermalScope, a multi-scale thermal analysis method for nanometre-scale IC design. It unifies microscopic and macroscopic thermal physics modeling methods, i.e., the Boltzmann transport and Fourier modeling methods, and supports adaptive multi-resolution modeling. Together, these ideas enable efficient and accurate characterization of nanometre-scale heat transport as well as chip-package-level heat flow. ThermalScope is designed for full-chip thermal analysis of billion-transistor nanometre-scale IC designs, with accuracy at the scale of individual devices. It has been implemented in software and used for full-chip thermal analysis and temperature-dependent leakage analysis of an IC design with more than 150 million transistors. For circuit analysis, this work develops SEMSIM, a multi-scale single-electron device simulator with an adaptive simulation technique based on the Monte Carlo method. This technique significantly improves time efficiency while maintaining accuracy for single-electron device and circuit simulation. 
It is shown that simulation time can be reduced by a factor of nearly 40 while maintaining an average propagation delay error under 5% compared to a non-adaptive Monte Carlo method. The simulator has been used on large circuit benchmarks with more than 6000 junctions, showing efficiency comparable to SPICE with much better accuracy. In addition, it can characterize important secondary effects, including cotunneling and Cooper pair tunneling, which are critical for device research. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008-09-26
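The macroscopic (Fourier) half of such a multi-scale thermal solver reduces, in one dimension, to a small linear system. A hedged sketch with illustrative material parameters, not ThermalScope's actual discretization: steady conduction k·d²T/dx² + q = 0 with fixed-temperature ends, assembled as a tridiagonal matrix:

```python
import numpy as np

def fourier_1d(n=50, length=1.0, k=1.0, q=100.0, T_left=300.0, T_right=300.0):
    """Solve k * T'' + q = 0 on n interior nodes with Dirichlet ends,
    using a second-order central difference for T''."""
    dx = length / (n + 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    b = -q / k * np.ones(n)          # interior: (T[i-1]-2T[i]+T[i+1])/dx^2 = -q/k
    b[0] -= T_left / dx**2           # move known boundary values to the RHS
    b[-1] -= T_right / dx**2
    return np.linalg.solve(A, b)

T = fourier_1d()
# analytic peak for uniform heating: T_end + q*L^2/(8k) = 312.5
print(round(T[len(T) // 2], 1))  # 312.5
```

The microscopic (Boltzmann transport) regions replace this continuum equation where the mean free path of heat carriers is comparable to the device size; the multi-scale method's job is to stitch the two descriptions together at their interface.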
