141

High dimensional Bayesian computation / Computation bayésienne en grande dimension

Buchholz, Alexander 22 November 2018 (has links)
Computational Bayesian statistics builds approximations to the posterior distribution either by sampling or by constructing tractable approximations. The contribution of this thesis to the field of Bayesian statistics is the development of new methodology by combining existing methods. Our approaches either scale better with the dimension or result in reduced computational cost compared to existing methods. Our first contribution improves approximate Bayesian computation (ABC) by using quasi-Monte Carlo (QMC). ABC allows Bayesian inference in models with intractable likelihoods. QMC is a variance reduction technique that yields precise estimations of integrals. Our second contribution takes advantage of QMC for Variational Inference (VI). VI is a method for constructing tractable approximations to the posterior distribution. The third contribution develops an approach for tuning Sequential Monte Carlo (SMC) samplers when using Hamiltonian Monte Carlo (HMC) mutation kernels. SMC samplers allow the unbiased estimation of the model evidence but tend to struggle with increasing dimension. HMC is a Markov chain Monte Carlo technique that has appealing properties when the dimension of the target space increases but is difficult to tune. By combining the two we construct a sampler that takes advantage of both.
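As a rough illustration of the QMC idea behind the first contribution, the sketch below (not taken from the thesis) estimates a simple integral with plain Monte Carlo and with a scrambled Sobol sequence; the test function, dimension and sample size are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # Product test function on [0, 1]^d whose exact integral equals 1.
    return np.prod(x + 0.5, axis=1)

d, n = 10, 2**12
rng = np.random.default_rng(0)

# Plain Monte Carlo: i.i.d. uniform points.
mc_est = integrand(rng.random((n, d))).mean()

# Quasi-Monte Carlo: scrambled Sobol points cover the cube more evenly,
# which typically reduces the integration error for smooth integrands.
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = integrand(sobol.random(n)).mean()

print(f"MC estimate:  {mc_est:.5f}")
print(f"QMC estimate: {qmc_est:.5f}  (exact value is 1)")
```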
142

Improved Statistical Methods for Elliptic Stochastic Homogenization Problems: Application of Multi Level- and Multi Index Monte Carlo on Elliptic Stochastic Homogenization Problems

Daloul, Khalil January 2023 (has links)
In numerical multiscale methods, one relies on a coupling between a macroscopic model and a microscopic model. The macroscopic model does not include the microscopic properties that the microscopic model provides and that are vital for the desired solution. Such microscopic properties include parameters like material coefficients and fluxes, which may vary microscopically in the material. The effective values of these data can be computed by running local microscale simulations while averaging the microscopic data. One desires the effect of the microscopic coefficients on the macroscopic scale, and this can be obtained using classical homogenisation theory. One method in homogenisation theory is to use local elliptic cell problems in order to compute the homogenised constants; this results in an error of order λ/R, where λ is the wavelength of the microscopic variations and R is the size of the simulation domain. However, the accuracy can be greatly improved by a slight modification of the homogenisation elliptic PDE and the use of a filter in the averaging process, yielding much better orders of error. The modification relates the elliptic PDE to a parabolic one, which can be solved and integrated in time to recover the elliptic PDE's solution.
In this thesis I apply the modified elliptic cell homogenisation method with a qth-order filter to compute the homogenised diffusion constant in a 2d Poisson equation on a rectangular domain. Two cases were simulated. The diffusion coefficient used in the first case was a deterministic 2d matrix function; in the second case I used a stochastic 2d matrix function, which results in a 2d stochastic differential equation (SDE). In the second case, two methods were used to determine the expected value of the homogenised constants: firstly the multilevel Monte Carlo (MLMC) method and secondly its generalisation, the multi-index Monte Carlo (MIMC) method. The performance of MLMC and MIMC is then compared when used in the homogenisation process.
In the homogenisation process, a 2d finite element discretisation was used to estimate the solution of the Poisson equation. The grid spatial steps were varied with first-order differences in MLMC (square mesh) and first-order mixed differences in MIMC (which allows for a rectangular mesh).
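The sketch below illustrates the telescoping structure of a multilevel Monte Carlo estimator of the kind compared in this thesis; the function `qoi_on_level` is a hypothetical stand-in for the homogenised-coefficient computation on a mesh of width 2^-level, and all constants are illustrative assumptions.

```python
import numpy as np

def qoi_on_level(level, omega):
    # Hypothetical stand-in for the homogenised coefficient computed on a
    # mesh of width h = 2**-level for the random realization `omega`.
    # Assumed toy model: true value 1.0, O(h) discretization bias, and a
    # sample-dependent fluctuation carried by omega.
    h = 2.0 ** -level
    return (1.0 + 0.5 * h) * (1.0 + 0.2 * omega)

def mlmc(max_level, samples_per_level, seed=0):
    """Multilevel Monte Carlo: telescoping sum of level corrections."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        omegas = rng.normal(size=n)            # one realization per sample
        fine = qoi_on_level(level, omegas)
        # The coarse solve reuses the SAME realization, so the correction
        # fine - coarse has small variance and few samples are needed on
        # the expensive fine levels.
        coarse = qoi_on_level(level - 1, omegas) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate

# Many cheap coarse samples, few expensive fine ones.
print(mlmc(max_level=4, samples_per_level=[4096, 1024, 256, 64, 16]))
```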
143

Kinetic Monte Carlo simulations of autocatalytic protein aggregation

Eden-Jones, Kym Denys January 2014 (has links)
The self-assembly of proteins into filamentous structures underpins many aspects of biology, from dynamic cell scaffolding proteins such as actin, to the amyloid plaques responsible for a number of degenerative diseases. Typically, these self-assembly processes have been treated as nucleated, reversible polymerisation reactions, where dynamic fluctuations in a population of monomers eventually overcome an energy barrier, forming a stable aggregate that can then grow and shrink by the addition and loss of more protein from its ends. The nucleated, reversible polymerisation framework is very successful in describing a variety of protein systems such as the cell scaffolds actin and tubulin, and the aggregation of haemoglobin. Historically, amyloid fibrils were also thought to be described by this model, but measurements of their aggregation kinetics failed to match the model's predictions. Instead, recent work indicates that autocatalytic polymerisation - a process by which the number of growth competent species is increased through secondary nucleation, in proportion to the amount already present - is better at describing their formation. In this thesis, I will extend the predictions made in this mean-field, autocatalytic polymerisation model through use of kinetic Monte Carlo simulations. The ubiquitous sigmoid-like growth curve of amyloid fibril formation often possesses a notable quiescent lag phase which has been variously attributed to primary and secondary nucleation processes. Substantial variability in the length of this lag phase is often seen in replicate experimental growth curves, and naively may be attributed to fluctuations in one or both of these nucleation processes. By comparing analytic waiting-time distributions, to those produced by kinetic Monte Carlo simulation of the processes thought to be involved, I will demonstrate that this cannot be the case in sample volumes comparable with typical laboratory experiments. Experimentally, the length of the lag phase, or "lag time", is often found to scale with the total protein concentration, according to a power law with exponent γ. The models of nucleated polymerisation and autocatalytic polymerisation predict different values for this scaling exponent, and these are sometimes used to identify which of the models best describes a given protein system. I show that this approach is likely to result in a misidentification of the dominant mechanisms under conditions where the lag phase is dominated by a different process to the rest of the growth curve. Furthermore, I demonstrate that a change of the dominant mechanism associated with total protein concentration will produce "kinks" in the scaling of lag time with total protein concentration, and that these may be used to greater effect in identifying the dominant mechanisms from experimental kinetic data. Experimental data for bovine insulin aggregation, which is well described by the autocatalytic polymerisation model for low total protein concentrations, displays an intriguing departure from the predicted behaviour at higher protein concentrations. Additionally, the protein concentration at which the transition occurs, appears to be affected by the presence of salt. Coincident with this, an apparent change in the fibril structure indicates that different aggregation mechanisms may operate at different total protein concentrations. 
I demonstrate that a transition whereby the self-assembly mechanisms change once a critical concentration of fibrils or fibrillar protein is reached can explain the observed behaviour, and that this predicts a substantially higher abundance of shorter filaments - which are thought to be pathogenic - at lower total protein concentrations than if self-assembly were consistently autocatalytic at all protein concentrations. Amyloid-like loops have been observed in electron and atomic-force micrographs, together with non-looped fibrils, for a number of different proteins including ovalbumin. This implies that fibrils formed of these proteins are able to grow by fibrillar end-joining, and not only by monomer addition as is more commonly assumed. I develop a simple analytic expression for polymerisation by monomer addition and fibrillar end-joining (without autocatalysis) and show that this is not sufficient to explain the growth curves obtained experimentally for ovalbumin. I then demonstrate that the same data can be explained by combining fibrillar end-joining and fragmentation. Through the use of an analytic expression, I estimate the kinetic rates from the experimental growth curves and, via simulation, investigate the distribution of filament and loop lengths. Together, my findings demonstrate the relative importance of different molecular mechanisms in amyloid fibril formation, how these might be affected by various environmental parameters, and characteristic behaviours by which their involvement might be detected experimentally.
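For readers unfamiliar with kinetic Monte Carlo, the following is a minimal Gillespie-type sketch of nucleated polymerisation with elongation and fragmentation (the secondary, autocatalytic pathway); the rate constants and the two-monomer nucleus are illustrative assumptions, not the parameters used in the thesis.

```python
import numpy as np

def gillespie_fibrils(m_total=10000, k_nuc=1e-8, k_elong=1e-4,
                      k_frag=1e-6, t_end=5e4, seed=0):
    """Stochastic simulation of nucleation, elongation and fragmentation.

    State: free monomers m, number of fibrils n, polymerised mass p.
    Fragmentation creates a new growth-competent fibril, which is the
    autocatalytic (secondary) pathway discussed above.
    """
    rng = np.random.default_rng(seed)
    m, n, p, t = m_total, 0, 0, 0.0
    history = [(t, m, n, p)]
    while t < t_end and m > 1:
        rates = np.array([
            k_nuc * m * (m - 1),          # primary nucleation (dimer nucleus)
            k_elong * m * 2 * n,          # elongation at both fibril ends
            k_frag * max(p - 2 * n, 0),   # fragmentation at internal bonds
        ])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to next event
        event = rng.choice(3, p=rates / total)   # which reaction fires
        if event == 0:      # nucleation consumes two monomers
            m -= 2; n += 1; p += 2
        elif event == 1:    # elongation consumes one monomer
            m -= 1; p += 1
        else:               # fragmentation creates a new fibril
            n += 1
        history.append((t, m, n, p))
    return history

traj = gillespie_fibrils()
print("final time, monomers, fibrils, polymer mass:", traj[-1])
```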
144

Enhancement of thermionic cooling using Monte Carlo simulation

Stephen, Alexander January 2014 (has links)
Advances in the field of semiconductor physics have allowed for rapid development of new, more powerful devices. New fabrication techniques allow for reductions in device geometry, increasing the possible wafer packing density. The increased output power comes at the price of excessive heat generation, the removal of which proves problematic at such scales for conventional cooling systems. Consequently, there is a rising demand for new cooling systems, preferably those that do not add large amounts of additional bulk to the system. One promising system is the thermoelectric (TE) cooler, which is small enough to be integrated onto the device wafer. Unlike more traditional gas and liquid coolers, TE coolers do not require moving parts or external liquid reservoirs, relying only on the flow of electrons to transport heat energy away from the device. Although TE cooling provides a neat solution for the extraction of heat from micron-scale devices, it can normally produce only small amounts of cooling of 1-2 Kelvin, limiting its application to low-power devices. This research aimed to find ways to enhance the performance of the TE cooler using detailed simulation analysis. For this, a self-consistent, semi-classical ensemble Monte Carlo model was designed to investigate the operation of the TE cooler in greater detail than would be possible with experimental measurements alone. As part of its development, the model was validated on a variety of devices, including a Gunn diode and two micro-cooler designs from the literature, one of which had previously been simulated and another which had been experimentally analysed. When applied to the TE cooler of interest, novel operational data were obtained, and significant improvements in cooling power were found with only minor alterations to the device structure and without the need for an increase in volume.
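A minimal sketch of the ensemble Monte Carlo idea (free flight in a field interrupted by random scattering events) is shown below; the parabolic band, single constant-rate isotropic scattering mechanism and all numerical values are simplifying assumptions for illustration, not the self-consistent model developed in the thesis.

```python
import numpy as np

Q = 1.602e-19                 # electron charge (C)
M_EFF = 0.063 * 9.109e-31     # GaAs-like effective mass (kg), assumed
GAMMA = 1e13                  # total scattering rate (1/s), assumed constant
E_FIELD = 1e5                 # electric field (V/m)
N_ELECTRONS, T_END, DT = 5000, 2e-12, 1e-15

rng = np.random.default_rng(0)
v = np.zeros(N_ELECTRONS)                               # drift velocities (m/s)
t_scatter = rng.exponential(1.0 / GAMMA, N_ELECTRONS)   # next scattering times

t = 0.0
while t < T_END:
    # Free flight: accelerate each electron in the field during the time step.
    v -= Q * E_FIELD / M_EFF * DT
    t += DT
    # Electrons whose scattering time has arrived are scattered: here the
    # velocity is simply randomised (an isotropic, energy-relaxing toy model).
    hit = t >= t_scatter
    v[hit] = rng.normal(scale=1e5, size=hit.sum())
    t_scatter[hit] = t + rng.exponential(1.0 / GAMMA, hit.sum())

print(f"mean drift velocity: {v.mean():.3e} m/s")
```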
145

A Model for Cyber Attack Risks in Telemetry Networks

Shourabi, Neda Bazyar 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / This paper develops a method for analyzing, modeling and simulating cyber threats in a networked telemetry environment as part of a risk management model. The paper includes an approach for incorporating a Monte Carlo computer simulation of this model, together with sample results.
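A generic sketch of how a Monte Carlo simulation can sit on top of a risk model of this kind is given below; the Poisson attack-frequency and lognormal impact assumptions, and all parameter values, are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_annual_loss(n_trials=20000, attack_rate=3.0, success_prob=0.15,
                         impact_median=50e3, impact_sigma=1.0, seed=0):
    """Monte Carlo estimate of annual loss from cyber attacks on a network.

    Assumed model (illustrative only): attempted attacks per year are
    Poisson(attack_rate), each succeeds with probability success_prob,
    and each successful attack causes a lognormally distributed loss.
    """
    rng = np.random.default_rng(seed)
    attempts = rng.poisson(attack_rate, n_trials)        # attacks attempted
    successes = rng.binomial(attempts, success_prob)      # attacks that succeed
    losses = np.array([
        rng.lognormal(np.log(impact_median), impact_sigma, k).sum()
        for k in successes
    ])
    return losses

losses = simulate_annual_loss()
print(f"expected annual loss:  {losses.mean():,.0f}")
print(f"95th percentile loss:  {np.percentile(losses, 95):,.0f}")
```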
146

Parameterisation of a nitrogen cycle model

Burgoyne, Calum K. January 2012 (has links)
No description available.
147

Uncertainty Determination with Monte-Carlo Based Algorithm

Leite, Nelson Paiva Oliveira, Sousa, Lucas Benedito dos Reis 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The measurement result is complete only if it contains the measurand and its units, uncertainty and coverage factor. The uncertainty estimation for the parameters acquired by the FTI is a known process. To execute this task, the Institute of Research and Flight Test (IPEV) developed the SALEV© system, which is fully compliant with the applicable standards. But the measurement set also includes derived parameters. The uncertainty evaluation of these parameters can be carried out with cumbersome partial derivatives. The search for a simpler solution leads us to a Monte-Carlo based algorithm. The results of using this approach are presented and discussed.
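The sketch below shows the general Monte Carlo approach to uncertainty evaluation of a derived parameter: sample the acquired parameters from their uncertainty distributions, propagate them through the derived-parameter equation, and summarise the output. The derived quantity and the input uncertainties are hypothetical illustrations, not those of the SALEV system.

```python
import numpy as np

def derived_parameter(p_static, p_total, temperature):
    # Hypothetical derived parameter: an airspeed-like quantity computed
    # from measured static pressure, total pressure and temperature.
    gamma, R = 1.4, 287.05
    return np.sqrt(2 * gamma / (gamma - 1) * R * temperature *
                   ((p_total / p_static) ** ((gamma - 1) / gamma) - 1))

# Acquired parameters with assumed standard uncertainties.
rng = np.random.default_rng(0)
n = 200000
p_static = rng.normal(95_000.0, 50.0, n)     # Pa
p_total  = rng.normal(101_000.0, 50.0, n)    # Pa
temp     = rng.normal(288.0, 0.5, n)         # K

# Propagate: evaluate the derived parameter on every sampled input set;
# mean, standard uncertainty and a 95% coverage interval follow directly,
# with no partial derivatives required.
samples = derived_parameter(p_static, p_total, temp)
mean, std = samples.mean(), samples.std(ddof=1)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"derived parameter: {mean:.2f} ± {std:.2f} (95% interval: {lo:.2f}..{hi:.2f})")
```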
148

Monte Carlo simulation for confined electrolytes

Lee, Ming, Ripman, 李明 January 2000 (has links)
Chemistry / Doctoral / Doctor of Philosophy
149

A Monte Carlo study of the statistical properties of gamma-ray pulsars in the Gould Belt

梁寶, Leung, Po. January 2003 (has links)
Physics / Master / Master of Philosophy
150

A study of nonparametric inference problems using Monte Carlo methods

Ho, Hoi-sheung., 何凱嫦. January 2005 (has links)
Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
