251 |
Modelling and analysis of oscillations in gene expression through neural development. Phillips, Nick. January 2016
The timing of differentiation underlies the development of any organ system. In neural development, the expression of the transcription factor Hes1 has been shown to be oscillatory in neural progenitors, but at a low steady state in differentiated neurons. This change in the dynamics of expression marks the timing of differentiation. We previously constructed a mathematical model, using deterministic delay differential equations, to test the experimental hypothesis that the topology of the miR-9/Hes1 network, and specifically the accumulation of the micro-RNA miR-9, could terminate Hes1 oscillations and account for the timing of neuronal differentiation. However, biochemical reactions are the result of random encounters between discrete numbers of molecules, and some of these molecules may be present at low numbers. The finite number of molecules interacting within the system leads to inherent randomness, known as intrinsic stochasticity. The stochastic model predicts that low molecule numbers cause the time to differentiation to follow a distribution, which agrees with recent experimental evidence and is considered important for generating cell-type diversity. For the same model, fewer reacting molecules decrease the average time to differentiation, showing that the number of molecules can systematically change the timing of differentiation.
Oscillations are important for a wide range of biological processes, but current methods for discovering oscillatory genes have primarily been designed for measurements performed on a population of cells. We introduce a new approach for analysing biological time series data designed for cases where the underlying dynamics of gene expression is inherently noisy at the single-cell level.
Our analysis method combines mechanistic stochastic modelling with the powerful methods of Bayesian nonparametric regression, and can distinguish oscillatory expression in single-cell data from random fluctuations of nonoscillatory gene expression, despite peak-to-peak variability in the period and amplitude of single-cell oscillations. Models of gene expression commonly involve delayed biological processes, but the combination of stochasticity, delay and nonlinearity leads to emergent dynamics that are not understood at a theoretical level. We develop a theory to explain these effects and apply it to a simple model of gene regulation. The new theory can account for the long-time-scale dynamics and nonlinear character of the system that emerge when the number of interacting molecules becomes low. Both the absolute length and the uncertainty in the delay time are shown to be crucial in controlling the magnitude of nonlinear effects.
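The delayed negative-feedback mechanism described above can be illustrated with a minimal stochastic simulation. The sketch below is a toy tau-leaping model of a single species whose production is repressed by its own level a fixed delay in the past; all parameter values and the Hill-repression form are illustrative assumptions, not those of the thesis's Hes1/miR-9 model.

```python
import numpy as np

def simulate_delayed_feedback(p_init=100, alpha=10.0, mu=0.03, P0=100.0, h=4,
                              tau=30.0, dt=0.5, t_end=1000.0, seed=1):
    """Tau-leap simulation of a birth-death process whose production rate
    is repressed by the molecule count a delay tau in the past (a toy
    stand-in for delayed transcriptional auto-repression)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    lag = int(tau / dt)
    p = np.empty(n_steps + 1)
    p[0] = p_init
    for i in range(n_steps):
        p_delayed = p[max(i - lag, 0)]            # level tau time units ago
        birth_rate = alpha / (1.0 + (p_delayed / P0) ** h)  # Hill repression
        births = rng.poisson(birth_rate * dt)
        deaths = rng.poisson(mu * p[i] * dt)
        p[i + 1] = max(p[i] + births - deaths, 0.0)
    return p

p = simulate_delayed_feedback()
```

Because birth and death events are drawn from Poisson counts, repeated runs with different seeds give different trajectories, which is the intrinsic stochasticity the abstract refers to.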
|
252 |
Exploring time-dependent approaches towards the calculation of dynamics and spectroscopic signals: A mixed quantum/semiclassical wave packet method and the theory of transient absorption and femtosecond stimulated Raman scattering. Kovac, Philip. 10 April 2018
We present a time-dependent mixed quantum/semiclassical approach to calculating linear absorption spectra. Applying Variational Fixed Vibrational Basis/Gaussian Bath theory (FVB/GB) to small molecules isolated in an extended cryogenic medium, we exploit an assumed time-scale separation between the few rapid, largely intramolecular modes of the guest and the many slower extended modes of the medium to partition a system from the surrounding bath. The system dynamics are handled with basis-set methods, while the bath degrees of freedom are subject to a semiclassical thawed Gaussian ansatz. The linear absorption spectrum for a realistic model system is calculated using FVB/GB results and then compared with a numerically exact calculation. Also contained in this dissertation are previously published theoretical works on Transient Absorption and Femtosecond Stimulated Raman Spectroscopy. Both encompass a rebuilding of the theory and elucidate the information content of the respective spectroscopic signals.
This dissertation includes previously published co-authored material.
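As background to the linear absorption calculation, a standard time-dependent route (generic textbook material, not the FVB/GB method itself) obtains the spectrum from the Fourier transform of the dipole autocorrelation function, sigma(omega) ~ omega * Re \int_0^inf e^{i omega t} C(t) dt. The sketch below applies this to a model damped-oscillation correlation function; the frequency omega0 and dephasing time T2 are arbitrary choices, and the omega prefactor is omitted for simplicity.

```python
import numpy as np

# Model dipole autocorrelation: a damped oscillation at frequency omega0
# with dephasing time T2 (both arbitrary illustrative values).
omega0, T2 = 2.0, 5.0
t = np.linspace(0.0, 100.0, 4001)
dt = t[1] - t[0]
C = np.exp(-t / T2) * np.cos(omega0 * t)

# Real part of the half-Fourier transform, evaluated by direct quadrature;
# the result is a Lorentzian-like line peaked near omega0 with width ~1/T2.
omegas = np.linspace(0.0, 5.0, 501)
spectrum = np.array([(C * np.cos(w * t)).sum() * dt for w in omegas])
peak = omegas[np.argmax(spectrum)]
```

The same machinery applies when C(t) comes from a wave-packet propagation instead of a closed-form model, which is the setting of time-dependent spectrum calculations.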
|
253 |
Gaussian process emulators for the analysis of complex models in engineering. Diaz De la O, Francisco Alejandro. January 2011
No description available.
|
254 |
Uma introdução a teoria das partições / An introduction to the theory of partitions. Andrade, Cecília Pereira de, 1983-. 14 August 2018
Advisor: Jose Plinio de Oliveira Santos / Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica
Previous issue date: 2009 / Abstract: This work is divided into two parts. The first refers to partitions, covering the main results, some representations of partitions, and an important tool, the generating functions. The second part presents the Gaussian polynomials and some important theorems, as well as the Rogers-Ramanujan identities. / Master's / Combinatorics / Master in Applied Mathematics
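The generating-function tool mentioned in the abstract can be made concrete: Euler's generating function for the partition numbers p(n) is the product over k >= 1 of 1/(1-q^k), and expanding the product one factor at a time gives a short dynamic program. A sketch:

```python
def partition_counts(nmax):
    """Coefficients p(0..nmax) of prod_{k>=1} 1/(1-q^k), computed by
    multiplying the factors into the series one at a time."""
    p = [0] * (nmax + 1)
    p[0] = 1
    for k in range(1, nmax + 1):        # multiply the series by 1/(1-q^k)
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p

p100 = partition_counts(100)
```

For example, p(5) = 7 counts the seven partitions 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1.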
|
255 |
Bases mixtes ondelettes-gaussiennes pour le calcul de structures électroniques / Galerkin method using optimized wavelet-Gaussian mixed bases for electronic structure calculations in quantum chemistry. Pham, Dinh Huong. 30 June 2017
This thesis is a contribution to numerical methods for ab initio molecular simulation, and more specifically for electronic structure calculations by means of the Schrödinger equation, or by formalisms such as Hartree-Fock theory or Density Functional Theory. It puts forward a strategy for building mixed wavelet-Gaussian bases for the Galerkin approximation, combining the respective advantages of these two types of bases in order to better capture the cusps of the wave function. Numerous software programs are currently available to chemists in this field (VASP, Gaussian, ABINIT...) and differ from each other by various methodological choices, notably that of the basis functions used to express atomic orbitals. A newcomer to this market, the massively parallel BigDFT code has opted for a wavelet basis. Because the number of multiresolution levels is limited for performance reasons, users cannot benefit from the full potential of wavelets. The question is thus how to improve the accuracy of all-electron calculations in the neighborhood of the cusp-type singularities of the solution, without excessively increasing the complexity of BigDFT. The answer we propose is to enrich the scaling-function basis (the low resolution level of the wavelet basis) with Gaussian functions centered on each nucleus position. The main difficulty in constructing such a mixed basis lies in optimally determining the number of Gaussians required and their standard deviations, so that the additional Gaussians are compatible in the best possible way with the existing basis under the constraint of an error threshold given in advance.
We advocate the joint use of an a posteriori estimate of the decrease in the energy level and a greedy algorithm, which results in a quasi-optimal incremental sequence of additional Gaussians. This idea is directly inspired by reduced-basis techniques. We develop the theoretical foundations of this strategy on two 1-D linear models that are simplified versions of the Schrödinger equation for one electron, posed on an infinite or a periodic domain. These prototype models are investigated in depth in the first part. The definition of the a posteriori estimate as a residual dual norm, as well as the implementation of the greedy philosophy in various concrete algorithms, are presented in the second part, along with extensive numerical results. The proposed algorithms save more and more CPU time and become more and more empirical, in the sense that they rely increasingly on the intuitions with which chemists are familiar. In particular, the last algorithm, for several nuclei, partly assumes the validity of the atom/molecule transfer and is to some extent reminiscent of atomic orbital bases.
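A toy analogue of the greedy enrichment idea can be sketched as follows. This is a plain least-squares stand-in, not the thesis's a posteriori energy estimator: we approximate the cusp function e^{-|x|} by greedily adding, one at a time, the Gaussian width from a candidate dictionary whose atom best explains the current residual, refitting all coefficients after each addition.

```python
import numpy as np

# Target with a cusp at the origin, discretized on a grid.
x = np.linspace(-5.0, 5.0, 1001)
target = np.exp(-np.abs(x))

# Candidate dictionary: Gaussians with log-spaced standard deviations.
sigmas = np.geomspace(0.02, 5.0, 40)
dictionary = {s: np.exp(-x**2 / (2 * s**2)) for s in sigmas}

chosen, residual = [], target.copy()
for _ in range(6):
    # Greedy step: pick the unused width most correlated with the residual.
    best = max((s for s in sigmas if s not in chosen),
               key=lambda s: abs(dictionary[s] @ residual)
                             / np.linalg.norm(dictionary[s]))
    chosen.append(best)
    # Refit all coefficients on the enlarged basis (orthogonal greedy).
    A = np.column_stack([dictionary[s] for s in chosen])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    residual = target - A @ coef

error = np.linalg.norm(residual) / np.linalg.norm(target)
```

The greedy selection naturally mixes broad and narrow widths, the narrow ones resolving the neighborhood of the cusp, which mirrors the role the additional nucleus-centered Gaussians play in the mixed basis.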
|
256 |
Some non-standard statistical dependence problems. Bere, Alphonce. January 2016
Philosophiae Doctor - PhD / The major result of this thesis is the development of a framework for the application
of pair-mixtures of copulas to model asymmetric dependencies in bivariate data. The main motivation is the inadequacy of mixtures of bivariate Gaussian models, which are commonly fitted to data. Mixtures of rotated single-parameter Archimedean and Gaussian copulas are fitted to real data sets. The method of maximum likelihood is used for parameter estimation. Goodness-of-fit tests performed on the models with the highest log-likelihood values show that the models fit the data well. We use mixtures of univariate Gaussian models and mixtures of regression models to investigate the existence of bimodality in the distribution of the widths of autocorrelation functions in a sample of 119 gamma-ray bursts. Contrary to previous findings, our results do not reveal any evidence of bimodality. We extend a study by Genest et al. (2012) of the power and significance levels of tests of copula symmetry to two copula models which have not been considered previously. Our results confirm that for small sample sizes, these tests fail to maintain their 5% significance level and that the Cramér-von Mises-type statistics are the most powerful.
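Maximum-likelihood fitting of a single-parameter Archimedean copula, the kind combined into pair-mixtures above, can be sketched for the Clayton family; a pair-mixture would combine such densities and their rotations with mixing weights. The sampler uses the standard conditional-distribution method.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_logpdf(u, v, theta):
    """Log-density of the Clayton copula with parameter theta > 0."""
    s = u**(-theta) + v**(-theta) - 1.0
    return (np.log1p(theta) - (theta + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(s))

def clayton_sample(n, theta, rng):
    """Sample (u, v) pairs via the conditional-distribution method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

rng = np.random.default_rng(0)
u, v = clayton_sample(5000, theta=2.0, rng=rng)

# One-dimensional maximum likelihood over the copula parameter.
fit = minimize_scalar(lambda th: -clayton_logpdf(u, v, th).sum(),
                      bounds=(0.05, 20.0), method="bounded")
theta_hat = fit.x
```

With several components, the same log-densities (rotated by transforming u to 1-u and/or v to 1-v) would enter a weighted mixture likelihood, typically maximized with a numerical optimizer or an EM scheme.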
|
257 |
Optical vortex detection and strongly scintillated beam correction using Vortex Dipole Annihilation. Chen, Mingzhou. 06 May 2009
Please read the abstract on page i of this thesis / Thesis (PhD)--University of Pretoria, 2009. / Electrical, Electronic and Computer Engineering / unrestricted
|
258 |
Valid estimation and prediction inference in analysis of a computer model. Nagy, Béla. 11 1900
Computer models or simulators are becoming increasingly common in many fields in science and engineering, powered by the phenomenal growth in computer hardware over the
past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output.
Often running the code involves some computationally expensive tasks, such as solving complex systems of partial differential equations numerically. When simulator runs become too long, this may limit their usefulness. In order to overcome time or budget constraints by making the most of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments".
The main idea is to run the expensive simulator at only relatively few, carefully chosen design points in the input space, and based on the outputs construct an emulator (statistical model) that can emulate (predict) the output at new, untried
locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response
surface of the original computer model.
One way to quantify emulator error is to construct pointwise prediction bands designed to envelop the response surface and make
assertions that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able
to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy, used here, is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function, which is ideal when the response function is infinitely differentiable.
In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments in terms of matching
coverage probabilities of the prediction bands, and the associated reparameterizations can also aid parameter uncertainty assessments. / Science, Faculty of / Statistics, Department of / Graduate
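A minimal sketch of the kind of emulator described above: Gaussian-process regression with the Gaussian (squared-exponential) covariance and pointwise two-standard-deviation prediction bands. Hyperparameters are fixed by hand here, which is exactly the gap a method such as the proposed FBI would fill with proper uncertainty assessments; the sine "simulator" is a stand-in for an expensive code.

```python
import numpy as np

def gauss_kernel(a, b, length=0.2, var=1.0):
    """Gaussian (squared-exponential) covariance between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def emulate(x_design, y_design, x_new, nugget=1e-8):
    """GP posterior mean and pointwise two-sigma prediction band."""
    K = gauss_kernel(x_design, x_design) + nugget * np.eye(len(x_design))
    k = gauss_kernel(x_new, x_design)
    alpha = np.linalg.solve(K, y_design)
    mean = k @ alpha
    var = gauss_kernel(x_new, x_new).diagonal() \
          - np.einsum("ij,ij->i", k, np.linalg.solve(K, k.T).T)
    sd = np.sqrt(np.clip(var, 0.0, None))
    return mean, mean - 2 * sd, mean + 2 * sd

x_d = np.linspace(0.0, 1.0, 8)          # few, cheap design points
y_d = np.sin(2 * np.pi * x_d)           # stand-in "simulator" output
x_n = np.linspace(0.0, 1.0, 101)        # new, untried locations
mean, lo, hi = emulate(x_d, y_d, x_n)
```

Because the emulator interpolates a deterministic code, the band collapses to zero width at the design points and widens between them, which is the behavior the coverage-probability assessments above examine.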
|
259 |
A Gaussian approximation to the effective potential. Morgan, David C. January 1987
This thesis investigates some of the properties of a variational approximation to scalar field theories: a trial wavefunctional which has a gaussian form is used as a ground state ansatz for an interacting scalar field theory, and the expectation value of the Hamiltonian in this state is then minimized. This we call the Gaussian Approximation; the resulting effective potential we follow others by calling the Gaussian Effective Potential (GEP). An equivalent but more general finite temperature formalism is then reviewed and used for the calculations of the GEP in this thesis. Two scalar field theories are described: ϕ⁴ theory in four dimensions (ϕ⁴₄) and ϕ⁶ theory in three dimensions (ϕ⁶₃). After showing what the Gaussian Approximation does in terms of Feynman diagrams, renormalized GEP's are calculated for both theories. Dimensional Regularization is used in the renormalization, and this is especially convenient for the GEP in ϕ⁶₃ theory because it becomes trivially renormalizable. It is noted that ϕ⁶₃ loses its infrared asymptotic freedom in the Gaussian Approximation. Finally, it is shown how a finite temperature GEP can be calculated by finding low and high temperature expansions of the temperature terms in ϕ⁶₃ theory. / Science, Faculty of / Physics and Astronomy, Department of / Graduate
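In standard notation (schematic, not taken verbatim from the thesis), the variational setup behind the GEP can be written as a Gaussian trial wavefunctional whose covariance G is minimized over at each fixed mean field:

```latex
% Gaussian trial state centered on the mean field \bar\varphi,
% with variational covariance kernel G(x,y):
\Psi_{\bar\varphi,G}[\phi] \;\propto\;
  \exp\!\left[ -\tfrac{1}{4} \int d^{d}x\, d^{d}y\,
  \bigl(\phi(x)-\bar\varphi\bigr)\, G^{-1}(x,y)\,
  \bigl(\phi(y)-\bar\varphi\bigr) \right].
% The GEP is the minimized energy density at fixed \bar\varphi
% (\mathcal{V} denotes the spatial volume, \Psi normalized):
V_{G}(\bar\varphi) \;=\; \frac{1}{\mathcal{V}}\,
  \min_{G}\, \langle \Psi_{\bar\varphi,G} \,|\, H \,|\, \Psi_{\bar\varphi,G} \rangle .
```

Minimizing over G at each value of the mean field is what produces the self-consistent mass gap equations evaluated, after renormalization, for the ϕ⁴₄ and ϕ⁶₃ theories above.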
|
260 |
Embedded Feature Selection for Model-based Clustering. January 2020
abstract: Model-based clustering is a sub-field of statistical modeling and machine learning. Mixture models use probabilities to describe the degree to which a data point belongs to each cluster, and these probabilities are updated iteratively during clustering. While mixture models have demonstrated superior performance in handling noisy data in many fields, challenges remain for high-dimensional datasets. Among a large number of features, some may not actually contribute to delineating the cluster profiles. The inclusion of these "noisy" features confuses the model in identifying the real structure of the clusters and costs more computational time. Recognizing this issue, in this dissertation I first propose a new feature selection algorithm for continuous datasets and then extend it to mixed data types. Finally, I conduct uncertainty quantification for the feature selection results as the third topic.
The first topic is an embedded feature selection algorithm termed the Expectation-Selection-Maximization (ESM) model, which can automatically select features while optimizing the parameters of a Gaussian Mixture Model. I introduce a relevancy index (RI) revealing the contribution of each feature to the clustering process to assist feature selection. I demonstrate the efficacy of the ESM by studying two synthetic datasets, four benchmark datasets, and an Alzheimer's Disease dataset.
The second topic focuses on extending the ESM algorithm to handle mixed data types. The Gaussian mixture model is generalized to the Generalized Model of Mixture (GMoM), which can handle not only continuous features but also binary and nominal features.
The last topic concerns Uncertainty Quantification (UQ) of the feature selection. A new algorithm termed ESOM is proposed, which takes variance information into consideration while conducting feature selection. A set of outliers is also generated in the feature selection process to infer the uncertainty in the input data. Finally, the selected features and detected outlier instances are evaluated by visualization comparison. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2020
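The kind of screening a relevancy index enables can be illustrated with a simple proxy score (this is not the dissertation's RI): fit a Gaussian Mixture Model to data containing informative and pure-noise features, then rank features by how far apart the fitted component means sit relative to the within-component spread.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: two informative features separating two groups,
# plus three pure-noise features that carry no cluster structure.
rng = np.random.default_rng(0)
n = 300
informative = np.vstack([rng.normal(0, 1, (n, 2)),
                         rng.normal(4, 1, (n, 2))])
noise = rng.normal(0, 1, (2 * n, 3))
X = np.hstack([informative, noise])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Proxy relevance: separation of the two fitted component means per
# feature, in units of the pooled within-component standard deviation.
spread = np.abs(gmm.means_[0] - gmm.means_[1])
within = np.sqrt(gmm.covariances_.diagonal(axis1=1, axis2=2).mean(axis=0))
relevance = spread / within
```

Informative features score far above the noise features, so thresholding such a score would discard the "noisy" dimensions before or during clustering, which is the effect the embedded selection above achieves within the EM iterations themselves.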
|