11

Computational Investigations of Noise-mediated Cell Population Dynamics

Charlebois, Daniel January 2014 (has links)
Fluctuations, or "noise", can play a key role in determining the behaviour of living systems. The molecular-level fluctuations that occur in genetic networks are of particular importance. Here, noisy gene expression can result in genetically identical cells displaying significant variation in phenotype, even in identical environments. This variation can act as a basis for natural selection and provide a fitness benefit to cell populations under stress. This thesis focuses on the development of new conceptual knowledge about how gene expression noise and gene network topology influence drug resistance, as well as new simulation techniques to better understand cell population dynamics. Network topology may at first seem disconnected from expression noise, but genes in a network regulate each other through their expression products. The topology of a genetic network can thus amplify or attenuate noisy inputs from the environment and influence the expression characteristics of genes serving as outputs to the network. The main body of the thesis consists of five chapters: 1. A published review article on the physical basis of cellular individuality. 2. A published article presenting a novel method for simulating the dynamics of cell populations. 3. A chapter on modeling and simulating replicative aging and competition using an object-oriented framework. 4. A published research article establishing that noise in gene expression can facilitate adaptation and drug resistance independent of mutation. 5. An article submitted for publication demonstrating that gene network topology can affect the development of drug resistance. These chapters are preceded by a comprehensive introduction that covers essential concepts and theories relevant to the work presented.
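The role of intrinsic expression noise summarized above can be illustrated with a minimal stochastic simulation. The sketch below is not code from the thesis; the model (constitutive production and first-order degradation) and the parameter names `k` and `gamma` are illustrative. It uses the standard Gillespie algorithm, the workhorse for simulating noisy gene expression:

```python
import random

def gillespie_expression(k=10.0, gamma=1.0, t_end=200.0, seed=1):
    """Gillespie SSA for constitutive production (rate k) and
    first-order degradation (rate gamma * n) of a gene product."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        a_prod, a_deg = k, gamma * n      # reaction propensities
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)     # waiting time to next event
        if rng.random() * a_total < a_prod:
            n += 1                         # production event
        else:
            n -= 1                         # degradation event
        if t > t_end / 2:                  # discard the initial transient
            samples.append(n)
    return samples

counts = gillespie_expression()
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
fano = var / mean   # near 1 for this Poisson-like process
```

At steady state the mean copy number approaches k/gamma and the Fano factor stays near one; bursty transcription or network feedback pushes it above one, producing the cell-to-cell variability that the thesis links to stress survival and drug resistance.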
12

Multiplexing Techniques and Design-Automation Tools for FRET-Enabled Optical Computing

Mottaghi, Mohammad January 2014 (has links)
FRET-enabled optical computing is a new computing paradigm that uses the energy of incident photons to perform computation in molecular-scale circuits composed of inter-communicating photoactive molecules. Unlike conventional computing approaches, computation in these circuits does not require any electric current; instead, it relies on the controlled migration of energy through the circuit via a phenomenon called Förster Resonance Energy Transfer (FRET). This, coupled with other unique features of FRET circuits, can enable computing in domains that are unreachable for conventional semiconductor-based computing, such as in-cell computing or targeted drug delivery. In this thesis, we explore novel FRET-based multiplexing techniques to significantly increase the storage density of optical storage media. Further, we develop analysis algorithms and computer-aided design tools for FRET circuits. Existing computer-aided design tools for FRET circuits are predominantly ad hoc and specific to particular functionalities. We develop a generic design-automation framework for FRET-circuit optimization that is not limited to any particular functionality. We also show that, within a fixed time budget, the low speed of Monte-Carlo-based FRET-simulation (MCS) algorithms can have a significant negative impact on the quality of the design process; to address this issue, we design and implement a fast FRET-simulation algorithm that is up to several million times faster than existing MCS algorithms. Finally, we exploit the unique features of FRET-enabled optical computing to develop novel multiplexing techniques that enable orders-of-magnitude higher storage density than conventional optical storage media such as DVD or Blu-ray.
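FRET has a simple quantitative core that circuits like these build on: the transfer efficiency between a donor and an acceptor falls off with the sixth power of their separation. The sketch below is an illustration, not the thesis's tooling; `r0` denotes the Förster radius, and the Monte Carlo step is a toy version of the per-photon transfer decision an MCS algorithm makes:

```python
import random

def fret_efficiency(r, r0):
    """Förster transfer efficiency for donor-acceptor separation r and
    Förster radius r0: E = 1 / (1 + (r / r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def monte_carlo_transfer(r, r0, n_photons=100_000, seed=2):
    """Fraction of absorbed photons that hop donor -> acceptor when each
    transfers independently with probability E (one toy MCS step)."""
    rng = random.Random(seed)
    e = fret_efficiency(r, r0)
    return sum(rng.random() < e for _ in range(n_photons)) / n_photons
```

At r = r0 the efficiency is exactly 0.5; the steepness of the distance dependence is what makes FRET usable as a molecular-scale "wire" with a sharply defined range.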
13

Mathematical modelling of oncolytic virotherapy

Shabala, Alexander January 2013 (has links)
This thesis is concerned with mathematical modelling of oncolytic virotherapy: the use of genetically modified viruses to selectively spread, replicate and destroy cancerous cells in solid tumours. Traditional spatially-dependent modelling approaches have previously assumed that virus spread is due to viral diffusion in solid tumours, and also neglect the time delay introduced by the lytic cycle for viral replication within host cells. A deterministic, age-structured reaction-diffusion model is developed for the spatially-dependent interactions of uninfected cells, infected cells and virus particles, with the spread of virus particles facilitated by infected cell motility and delay. Evidence of travelling wave behaviour is shown, and an asymptotic approximation for the wave speed is derived as a function of key parameters. Next, the same physical assumptions as in the continuum model are used to develop an equivalent discrete, probabilistic model that is valid in the limit of low particle concentrations. This mesoscopic, compartment-based model is then validated against known test cases, and it is shown that the localised nature of infected cell bursts leads to inconsistencies between the discrete and continuum models. The qualitative behaviour of this stochastic model is then analysed for a range of key experimentally-controllable parameters. Two-dimensional simulations of in vivo and in vitro therapies are then analysed to determine the effects of virus burst size, length of lytic cycle, infected cell motility, and initial viral distribution on the wave speed, consistency of results and overall success of therapy. Finally, the experimental difficulty of measuring the effective motility of cells is addressed by considering effective medium approximations of diffusion through heterogeneous tumours.
Considering an idealised tumour consisting of periodic obstacles in free space, a two-scale homogenisation technique is used to show the effects of obstacle shape on the effective diffusivity. A novel method for calculating the effective continuum behaviour of random walks on lattices is then developed for the limiting case where microscopic interactions are discrete.
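The travelling-wave behaviour described above can be illustrated on the simplest reaction-diffusion model that supports such waves. The sketch below is a stand-in, not the thesis's age-structured virotherapy model: it integrates the Fisher-KPP equation u_t = D u_xx + r u(1 - u) with explicit finite differences and measures the front speed, which should approach the classical value c = 2*sqrt(D*r):

```python
def fisher_front_speed(D=1.0, r=1.0, dx=0.5, dt=0.05, nx=400, t_end=60.0):
    """Explicit finite-difference integration of the Fisher-KPP equation;
    returns the front speed measured over the second half of the run."""
    u = [1.0 if i < 10 else 0.0 for i in range(nx)]  # step initial data

    def front_pos(state):
        # leftmost point where the wave drops below half its plateau value
        for i, v in enumerate(state):
            if v < 0.5:
                return i * dx
        return nx * dx

    steps = int(t_end / dt)
    x_half = t_half = None
    for s in range(1, steps + 1):
        new = u[:]
        for i in range(1, nx - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
            new[i] = u[i] + dt * (D * lap + r * u[i] * (1.0 - u[i]))
        u = new
        if s == steps // 2:
            x_half, t_half = front_pos(u), s * dt
    return (front_pos(u) - x_half) / (steps * dt - t_half)

speed = fisher_front_speed()   # approaches 2*sqrt(D*r) = 2 from below
```

The measured speed sits slightly below 2 because pulled fronts converge to the asymptotic speed only logarithmically in time; the thesis derives an analogous asymptotic wave-speed approximation for its richer three-species model.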
14

Tvorba spolehlivostních modelů pro pokročilé číslicové systémy / Construction of Reliability Models for Advanced Digital Systems

Trávníček, Jan January 2013 (has links)
This thesis deals with system reliability. First, the concept of reliability itself is discussed, together with the indicators that express it quantitatively. The second chapter describes the different kinds of reliability models for simple and complex systems, and further describes the basic methods for constructing reliability models. The fourth chapter is devoted to the very important Markov models, which are a powerful and versatile tool for calculating the reliability of advanced systems; their suitability for repairable systems, which may contain absorbing states, is explained here. The next chapter describes standby redundancy, discussing the advantages and disadvantages of static, dynamic and hybrid standby, and the influence of different load levels on service life. The sixth chapter is devoted to the implementation, with a description of the application and of the input file in XML format, and discusses the results obtained in experimental calculations.
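In their simplest form, the Markov models discussed above reduce to a two-state chain for a single repairable unit. The sketch below is illustrative, not code from the thesis; the parameter names `lam` (failure rate) and `mu` (repair rate) are assumptions. It integrates the state equations and compares the result with the closed-form limiting availability:

```python
def availability_trajectory(lam, mu, dt=0.001, t_end=50.0):
    """Transient availability of a single repairable unit: a two-state
    continuous-time Markov chain (up -> down at rate lam, down -> up at
    rate mu), integrated with forward Euler from P_up(0) = 1."""
    p_up = 1.0
    for _ in range(int(t_end / dt)):
        # Chapman-Kolmogorov: dP_up/dt = -lam * P_up + mu * (1 - P_up)
        p_up += dt * (-lam * p_up + mu * (1.0 - p_up))
    return p_up

def steady_state_availability(lam, mu):
    """Closed-form limiting availability A = mu / (lam + mu)."""
    return mu / (lam + mu)
```

If repair is impossible (mu = 0), the down state becomes absorbing and availability decays to zero, which is exactly why the thesis singles out repairable systems containing absorbing states.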
15

Développement d'une méthodologie de modélisation cinétique de procédés de raffinage traitant des charges lourdes / Development of a novel methodology for kinetic modelling of heavy oil refining processes

Pereira De Oliveira, Luís Carlos 21 May 2013 (has links)
In the present PhD thesis, a novel methodology for the kinetic modelling of heavy oil refining processes is developed. The methodology models both the feedstock composition and the process reactions at the molecular level.
The composition modelling consists of generating a set of molecules whose properties are close to those obtained from analyses of the process feedstock. The set of molecules is generated by a two-step molecular reconstruction algorithm. In the first step, an equimolar set of molecules is built by assembling structural blocks in a stochastic manner. In the second step, the mole fractions of the molecules are adjusted by maximizing an information entropy criterion. The refining process is then simulated by applying its main reactions, step by step, to the set of molecules using a Monte Carlo method. This methodology has been applied to two refining processes: the hydrotreating (HDT) of Light Cycle Oil (LCO) gas oils and the hydroconversion of vacuum residues (VR). For the HDT of LCO gas oils, the overall properties of the effluent are well predicted. The methodology is also able to predict molecular properties of the effluent that are not accessible from traditional kinetic models. For the hydroconversion of VR, whose molecules are considerably more complex than those of LCO gas oils, the conversion of heavy fractions is correctly predicted. However, the results for the composition of the lighter fractions and the desulfurization yield are less accurate. To improve them, one must on the one hand include new ring-opening reactions, and on the other hand refine the feedstock representation by using additional molecular information from analyses of the process effluents.
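The entropy-maximization step of the reconstruction can be illustrated for a single property constraint. Under the maximum-entropy formalism, the optimal mole fractions take the Gibbs form x_i ∝ exp(-λ f_i), where f_i is a per-molecule property and λ a Lagrange multiplier fixed by the mixture target. This is a sketch under those standard assumptions, not the thesis's actual algorithm, and the property values used are hypothetical:

```python
import math

def max_entropy_fractions(props, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Mole fractions maximizing Shannon entropy subject to one mixture
    constraint sum_i x_i * props[i] = target.  The maximizer has the
    Gibbs form x_i ~ exp(-lam * props[i]); lam is found by bisection."""
    def mixture(lam):
        w = [math.exp(-lam * f) for f in props]
        z = sum(w)
        x = [wi / z for wi in w]
        return x, sum(xi * fi for xi, fi in zip(x, props))

    x = None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        x, m = mixture(mid)
        if m > target:
            lo = mid   # mixture property decreases as lam increases
        else:
            hi = mid
        if hi - lo < tol:
            break
    return x
```

With several constraints (density, sulfur content, distillation curve, ...) the same idea carries over with one multiplier per constraint, which is the multidimensional problem solved in the second reconstruction step.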
16

Gaussian Reaction Diffusion Master Equation: A Reaction Diffusion Master Equation With an Efficient Diffusion Model for Fast Exact Stochastic Simulations

Subic, Tina 13 September 2023 (has links)
Complex spatial structures in biology arise from random interactions of molecules. These molecular interactions can be studied using spatial stochastic models such as the Reaction Diffusion Master Equation (RDME), a mesoscopic model that subdivides the spatial domain into smaller, well-mixed grid cells in which macroscopic diffusion-controlled reactions take place. While RDME has been widely used to study how fluctuations in molecule numbers affect spatial patterns, its simulations are computationally expensive, and it requires a lower bound on grid cell size to avoid an apparently unphysical loss of bimolecular reactions. In this thesis, we propose the Gaussian Reaction Diffusion Master Equation (GRDME), a novel model in the RDME framework based on discretization of the Laplace operator with the Particle Strength Exchange (PSE) method and a Gaussian kernel. We show that GRDME is computationally efficient compared to RDME. We further resolve the controversy regarding the loss of bimolecular reactions and argue that GRDME can flexibly bridge the diffusion-controlled and ballistic regimes in mesoscopic simulations involving multiple species. To simulate GRDME efficiently, we develop the Gaussian Next Subvolume Method (GNSM). GRDME simulated with GNSM has up to six-fold lower computational cost for a three-dimensional simulation, providing a significant advantage for modeling three-dimensional systems. The computational cost can be lowered further by increasing the so-called smoothing length of the Gaussian jumps. We develop a guideline to estimate the grid resolution below which RDME and GRDME exhibit a loss of bimolecular reactions, a loss that others have considered unphysical.
Here we show that this loss of bimolecular reactions is consistent with the well-established theory on diffusion-controlled reaction rates by Collins and Kimball, provided that the bimolecular propensity is interpreted as the rate of the ballistic step rather than the macroscopic reaction rate. We show that the reaction radius is set by the grid resolution. Unlike RDME, GRDME enables us to explicitly model various molecular sizes. Using this insight, we explore the diffusion-limited regime of reaction dynamics and discover that diffusion-controlled systems resemble small, discrete systems. Others have shown that a reaction system can exhibit discreteness-induced state inversion, a phenomenon where the order of the concentrations differs when the system size is small. We show that the same reaction system also exhibits diffusion-controlled state inversion, where the order of concentrations changes when diffusion is slow. In summary, we show that GRDME is a computationally efficient model which enables us to include information about molecular sizes in the model.
Table of contents:
1 Modeling Mesoscopic Biology
  1.1 RDME Models Mesoscopic Stochastic Spatial Phenomena
  1.2 A New Diffusion Model Presents an Opportunity For A More Efficient RDME
  1.3 Can A New Diffusion Model Provide Insights Into The Loss Of Reactions?
  1.4 Overview
2 Preliminaries
  2.1 Reaction Diffusion Master Equation
    2.1.1 Chemical Master Equation
    2.1.2 Diffusion-controlled Bimolecular Reaction Rate
    2.1.3 RDME is an Extension of CME to Spatial Problems
  2.2 Next Subvolume Method
    2.2.1 First Reaction Method
    2.2.2 NSM is an Efficient Spatial Stochastic Algorithm for RDME
  2.3 Discretization of the Laplace Operator Using Particle Strength Exchange
  2.4 Summary
3 Gaussian Reaction Diffusion Master Equation
  3.1 Design Constraints for the Diffusion Model in the RDME Framework
  3.2 Gaussian-jump-based Model for RDME
  3.3 Summary
4 Gaussian Next Subvolume Method
  4.1 Constructing the Neighborhood N
  4.2 Finding the Diffusion Event
  4.3 Comparing GNSM to NSM
  4.4 Summary
5 Limits of Validity for (G)RDME with Macroscopic Bimolecular Propensity Rate
  5.1 Previous Works
  5.2 hmin Based on the Kuramoto Length of a Grid Cell
  5.3 hmin of the Two Limiting Regimes
  5.4 hmin of Bimolecular Reactions for the Three Cases of Dimensionality
  5.5 hmin of GRDME in Comparison to hmin of RDME
  5.6 Summary
6 Numerical Experiments To Verify Accuracy, Efficiency and Validity of GRDME
  6.1 Accuracy of the Diffusion Model
  6.2 Computational Cost
  6.3 hmin and Reaction Loss for (G)RDME With Macroscopic Bimolecular Propensity Rate kCK
    6.3.1 Homobimolecular Reaction With kCK at the Ballistic Limit
    6.3.2 Homobimolecular Reaction With kCK at the Diffusional Limit
    6.3.3 Heterobimolecular Reaction With kCK at the Ballistic Limit
  6.4 Summary
7 (G)RDME as a Spatial Model of Collins-Kimball Diffusion-controlled Reaction Dynamics
  7.1 Loss of Reactions in Diffusion-controlled Reaction Systems
  7.2 The Loss of Reactions in (G)RDME Can Be Explained by Collins-Kimball Theory
  7.3 Cell Width h Sets the Reaction Radius σ∗
  7.4 Smoothing Length ε′ Sets the Size of the Molecules in the System
  7.5 Heterobimolecular Reactions Can Only Be Modeled With GRDME
  7.6 Zeroth Order Reactions Impose a Lower Limit on Diffusivity Dmin
    7.6.1 Consistency of (G)RDME Could Be Improved by Redesigning Zeroth Order Reactions
  7.7 Summary
8 Diffusion-Controlled State Inversion
  8.1 Diffusion-controlled Systems Resemble Small Systems
  8.2 Slow Diffusion Leads to an Inversion of Steady States
  8.3 Summary
9 Conclusion and Outlook
  9.1 Two Physical Interpretations of (G)RDME
  9.2 Advantages of GRDME
  9.3 Towards Numerically Consistent (G)RDME
  9.4 Exploring Mesoscopic Biology With GRDME
Bibliography
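The Collins-Kimball interpretation invoked in chapter 7 combines the intrinsic (ballistic) rate at contact with the diffusional encounter rate in series. A minimal sketch of that rate law follows, with the symbols `sigma` (reaction radius), `D` (relative diffusivity) and `k_a` (intrinsic rate at contact) as in the standard theory; this is an illustration, not code from the thesis:

```python
import math

def collins_kimball_rate(sigma, D, k_a):
    """Effective bimolecular rate from Collins-Kimball theory:
    1/k = 1/k_a + 1/k_D, with diffusional encounter rate
    k_D = 4 * pi * sigma * D."""
    k_d = 4.0 * math.pi * sigma * D
    return k_a * k_d / (k_a + k_d)
```

In the ballistic limit (k_a much smaller than 4*pi*sigma*D) the effective rate equals k_a; in the diffusion-controlled limit it saturates at 4*pi*sigma*D, the regime in which (G)RDME appears to "lose" reactions unless the propensity is read as the ballistic step.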
17

MODELING AND SIMULATION OF CUTTING MECHANICS IN CFRP MACHINING AND ITS MACHINING SOUND ANALYSIS

Kyeongeun Song (13169763) 28 July 2022 (has links)
Carbon fiber bending during milling of Carbon Fiber Reinforced Plastic (CFRP) is an important factor in the quality of the machined surface. As the milling tool rotates, at certain cutting angles the fiber first contacts the rake face rather than the tool edge, and the fiber is bent instead of being cut by the tool. This causes the matrix and fiber to fall out, and fibers break deep within the machined surface. The broken fibers are pulled out as the tool rotates, a defect known as fiber pull-out. Such machining defects are the main cause of deteriorated surface quality. To reduce them, it is important to predict carbon fiber bending during CFRP milling. However, it is difficult to determine the point at which fiber bending occurs, because the fiber cutting angle changes continually as the tool rotates. Therefore, in this study, CFRP milling simulation was performed to numerically analyze machining parameters such as fiber cutting angle, fiber length, and the magnitude of fiber bending under different milling conditions. In addition, the deformation of the matrix between carbon fibers is predicted from the fiber bending information obtained through simulation, and a matrix shear strain energy model is developed. The relationship between matrix shear strain energy and machining quality is then analyzed. Verification experiments under various machining conditions confirmed that the quality of the machined surface deteriorates as the matrix shear strain energy increases. Moreover, this study analyzed the fiber cutting mechanism considering bent fibers during CFRP milling and proposed a method to identify the type of machining mechanism through machining sound analysis. Experiments verified that fiber bending and defects can be identified through machining sound analysis in the high-frequency range between 7,500 Hz and 14,800 Hz.
From this analysis, the effect of the different chip thicknesses in up-milling and down-milling on fiber bending was investigated using simulation and sound signals, and machining experiments verified the effect of this difference on cutting force and machining quality. Lastly, we developed a minimum chip thickness and fiber fracture model for CFRP milling and analyzed the effect of fractured fibers on the machining sound. Carbon fibers located below the minimum chip thickness do not contact the tool edge; they are compressed by the bottom face of the tool, excessively bent, and broken. As these broken fibers are discharged while scratching the flank face of the tool, a loud machining sound is generated. A verification experiment confirmed that the number of broken fibers is proportional to the loudness of the sound, and that the number of broken fibers per second calculated with the fiber fracture model coincides with the high-frequency machining sound range of 7,500 Hz to 14,800 Hz.
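Monitoring the 7,500-14,800 Hz band described above can be sketched with a standard narrow-band detector. The example below uses the Goertzel algorithm, a common low-cost way to measure signal power at a few frequencies without a full FFT; it is an illustration under that assumption, not the study's actual processing chain:

```python
import math

def goertzel_power(signal, sample_rate, freq):
    """Signal power at one frequency bin via the Goertzel recurrence."""
    n = len(signal)
    k = round(freq * n / sample_rate)          # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the k-th DFT coefficient
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def band_energy(signal, sample_rate, f_lo=7_500, f_hi=14_800, step=100):
    """Total power over the high-frequency band flagged in the study."""
    return sum(goertzel_power(signal, sample_rate, f)
               for f in range(f_lo, f_hi + 1, step))
```

Summing the band power over short windows yields a single scalar per time step that could be tracked against the predicted number of fiber fractures per second.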
