21

Chiral description and physical limit of pseudoscalar decay constants with four dynamical quarks and applicability of quasi-Monte Carlo for lattice systems

Ammon, Andreas 10 June 2015 (has links)
This work determines the masses and decay constants of pseudoscalar mesons, in particular the pion and the D_s meson, within quantum chromodynamics (QCD). These quantities are computed in lattice QCD, a lattice-regularised formulation of QCD, with four dynamical twisted-mass fermions (up, down, strange and charm quark), a setup that offers automatic O(a) improvement. The lattice spacing a is determined using the pion mass and decay constant, extrapolated to the physical point defined by the physical ratio f_pi/m_pi. For this, formulae from chiral perturbation theory (ChiPT) that account for the particular discretisation effects of the twisted-mass formalism are employed. The resulting lattice spacings, a = 0.0899(13) fm (at beta = 1.9), a = 0.0812(11) fm (at beta = 1.95) and a = 0.0624(7) fm (at beta = 2.1), are approximately five per cent larger than previous determinations (Baron et al. 2010). This shift is explained mainly by a study of the range of up/down quark masses over which the extrapolation formulae are applicable. To study the physical limit of f_{D_s}, formulae from heavy-meson chiral perturbation theory (HMChiPT) are employed. The final result, f_{D_s} = 248.9(5.3) MeV, lies somewhat above previous determinations (ETMC 2009, arXiv:0904.095; HPQCD 2010, arXiv:1008.4018) and about two standard deviations below the average of experimental values (PDG 2012). A further part of this work addresses the generally difficult computation of disconnected contributions, which enter, for example, the calculation of the neutral pion mass. A new method for approximating such contributions is presented, based on the quasi-Monte Carlo (QMC) method, which holds the potential for enormous savings in computing time.
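
As a pointer to why quasi-Monte Carlo promises such savings in computing time, the following minimal Python sketch (an illustration under stated assumptions, not the thesis' lattice code) compares plain Monte Carlo with a scrambled Sobol sequence on a smooth toy integral with known value; the QMC error typically decays close to O(1/N) instead of O(1/sqrt(N)).

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth 4-dimensional test integrand with known integral:
    # int over [0,1]^4 of prod_j (3/2) sqrt(x_j) dx = 1.
    return np.prod(1.5 * np.sqrt(x), axis=1)

dim, exact = 4, 1.0
rng = np.random.default_rng(0)
for m in range(8, 15, 2):                         # N = 256 ... 16384
    n = 2**m
    mc_est = f(rng.random((n, dim))).mean()
    qmc_est = f(qmc.Sobol(d=dim, scramble=True, seed=0).random(n)).mean()
    print(f"N={n:6d}  MC err={abs(mc_est - exact):.2e}  "
          f"QMC err={abs(qmc_est - exact):.2e}")
```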
22

Construction of point sets based on linear recurrences in a finite field of characteristic 2 for Monte Carlo simulation and quasi-Monte Carlo integration

Panneton, François January 2004 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
23

Evaluation of path-dependent securities with low-discrepancy methods

Krykova, Inna 13 January 2004 (has links)
The objective of this thesis is the implementation of Monte Carlo and quasi-Monte Carlo methods for the valuation of financial derivatives. Advantages and disadvantages of each method are stated, based both on the literature and on independent computational experiments by the author. Various methods to generate pseudo-random and quasi-random sequences are implemented in a computationally uniform way to enable objective comparisons. Code is developed in VBA and C++, with the C++ code converted to a COM object to make it callable from Microsoft Excel and Matlab. From the simulated random sequences, Brownian motion paths are built using various constructions and variance-reduction techniques, including the Brownian bridge and Latin hypercube sampling. The power and efficiency of the methods are compared on four financial securities pricing problems: European options, Asian options, barrier options and mortgage-backed securities. A detailed step-by-step algorithm is given for each method (construction of pseudo- and quasi-random sequences, Brownian motion paths for some stochastic processes, variance- and dimension-reduction techniques, evaluation of some financial securities using different variance-reduction techniques, etc.).
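
As a concrete illustration of the kind of machinery this abstract describes, here is a hedged Python sketch (the thesis itself uses VBA and C++; this is an independent reconstruction, and all parameter values are assumptions) that prices an arithmetic-average Asian call with scrambled Sobol points and a Brownian bridge path construction:

```python
import numpy as np
from scipy.stats import norm, qmc

def brownian_bridge_paths(z, T=1.0):
    # Turn standard-normal draws z (n_paths x d, d a power of two) into
    # Brownian paths on a uniform grid, filling coarse-to-fine so that the
    # first (best-distributed) Sobol coordinates carry the most variance.
    n, d = z.shape
    t = T * np.arange(1, d + 1) / d
    w = np.empty((n, d))
    w[:, d - 1] = np.sqrt(T) * z[:, 0]            # terminal value first
    k, h = 1, d
    while h > 1:
        h //= 2
        for m in range(h - 1, d - 1, 2 * h):      # midpoints at this level
            l, r = m - h, m + h
            wl = w[:, l] if l >= 0 else 0.0       # W(0) = 0 left of the grid
            tl = t[l] if l >= 0 else 0.0
            mean = ((t[r] - t[m]) * wl + (t[m] - tl) * w[:, r]) / (t[r] - tl)
            std = np.sqrt((t[m] - tl) * (t[r] - t[m]) / (t[r] - tl))
            w[:, m] = mean + std * z[:, k]
            k += 1
    return w

def asian_call_qmc(n=2**14, d=16, s0=100.0, strike=100.0, r=0.05,
                   sigma=0.2, T=1.0):
    u = qmc.Sobol(d=d, scramble=True, seed=42).random(n)
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))    # uniforms -> normals
    w = brownian_bridge_paths(z, T)
    t = T * np.arange(1, d + 1) / d
    s = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * w)
    payoff = np.maximum(s.mean(axis=1) - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(asian_call_qmc())  # discounted price estimate for the assumed parameters
```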
24

Quasi Importance Sampling

Hörmann, Wolfgang, Leydold, Josef January 2005 (has links) (PDF)
Two problems arise when the expectation of some function with respect to a nonuniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded, and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method for overcoming both problems. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
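
A minimal sketch of the idea, assuming a standard-normal target p and a componentwise logistic proposal q (both choices are illustrative, not taken from the paper): map a Sobol sequence through the inverse CDF of q, then weight by p/q. This handles both problems at once, since points can be placed in an unbounded domain and only q, not p, needs to be sampled directly.

```python
import numpy as np
from scipy.stats import norm, logistic, qmc

def quasi_importance_sampling(f, dim, n=2**13):
    u = qmc.Sobol(d=dim, scramble=True, seed=1).random(n)
    u = np.clip(u, 1e-12, 1 - 1e-12)
    x = logistic.ppf(u)                    # proposal: iid logistic marginals
    log_p = norm.logpdf(x).sum(axis=1)     # target: standard normal density
    log_q = logistic.logpdf(x).sum(axis=1)
    w = np.exp(log_p - log_q)              # importance weights p/q
    return np.mean(f(x) * w)

# E[||X||^2] = dim for a standard-normal target; here the exact value is 2.
print(quasi_importance_sampling(lambda x: (x**2).sum(axis=1), dim=2))
```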
25

A Study of Adaptation Mechanisms for Simulation Algorithms

Esteves Jaramillo, Rodolfo Gabriel 07 August 2012 (has links)
The performance of a program can sometimes improve greatly if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on are known in advance. However, this information is typically not available until too late in the program's operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to this late-arriving information and whose role in the solution of decision-making, inference and valuation problems is crucial. To overcome this limitation, a program needs the flexibility to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi-) Monte Carlo sampling and dynamic computation offloading, and evaluate their benefits on Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering, in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism is the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration (a toy sketch of this idea follows below). The third adaptation mechanism treated here, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option-valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi-) Monte Carlo simulations on dynamic Bayesian networks (DBNs). The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.
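
The functional-ANOVA machinery of the thesis is considerably more sophisticated than this, but a toy Python sketch can convey the idea of a model-specific QMC sampler: crudely estimate which inputs carry the most variance, then assign those inputs to the leading Sobol coordinates, which are the best-distributed ones. Everything here (the binning estimator, the constants) is an assumption for illustration.

```python
import numpy as np
from scipy.stats import qmc

def first_order_sensitivities(f, dim, n=4096, seed=0):
    # Crude main-effect estimate: the variance of the conditional mean
    # E[f | x_j], approximated by binning each coordinate.
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))
    y = f(x)
    edges = np.linspace(0.0, 1.0, 17)
    sens = np.empty(dim)
    for j in range(dim):
        idx = np.digitize(x[:, j], edges[1:-1])   # 16 bins per coordinate
        sens[j] = np.var([y[idx == b].mean() for b in range(16)])
    return sens

def model_specific_sobol(f, dim, n_points):
    # Give the most influential inputs the leading Sobol coordinates.
    order = np.argsort(-first_order_sensitivities(f, dim))
    u = qmc.Sobol(d=dim, scramble=True, seed=0).random(n_points)
    x = np.empty_like(u)
    x[:, order] = u          # input order[k] is driven by Sobol column k
    return x

# Example: coordinate 2 dominates this model, so it gets Sobol column 0.
pts = model_specific_sobol(lambda x: np.sin(6 * x[:, 2]) + 0.1 * x[:, 0],
                           dim=4, n_points=2**10)
```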
27

Stochastic routing models in sensor networks

Keeler, Holger Paul January 2010 (has links)
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms. / In this thesis stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework that is based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model. / Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop-length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions. / The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis concludes with future research tasks and directions.
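
A minimal Python sketch of the basic model ingredient (an illustration under assumed parameter values, not the thesis code): nodes form a homogeneous spatial Poisson process, and one greedy hop forwards the message to the in-range neighbour closest to the sink.

```python
import numpy as np

def mean_greedy_advancement(lam=5.0, radius=1.0, sink=np.array([20.0, 0.0]),
                            trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    d0 = np.linalg.norm(sink)              # sender sits at the origin
    adv = []
    for _ in range(trials):
        # Homogeneous Poisson process of intensity lam on the square
        # [-radius, radius]^2; nodes outside the radius cannot be reached.
        n = rng.poisson(lam * (2 * radius) ** 2)
        pts = rng.uniform(-radius, radius, size=(n, 2))
        pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= radius]
        if len(pts) == 0:
            adv.append(0.0)                # dead end: no forward progress
            continue
        d = np.linalg.norm(pts - sink, axis=1)
        adv.append(max(0.0, d0 - d.min())) # progress of the best neighbour
    return np.mean(adv)

print(mean_greedy_advancement())           # mean single-hop advancement
```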
29

Variance reduction methods for numerical solution of plasma kinetic diffusion

Höök, Lars Josef January 2012 (has links)
Performing detailed simulations of plasma kinetic diffusion is a challenging task and currently requires the largest computational facilities in the world. The reason for this is that the physics in a confined, heated plasma occurs on a broad range of temporal and spatial scales. It is therefore of interest to improve the computational algorithms together with the development of more powerful computational resources. Kinetic diffusion processes in plasmas are commonly simulated with the Monte Carlo method, where a discrete set of particles is sampled from a distribution function and advanced in a Lagrangian frame according to a set of stochastic differential equations. The Monte Carlo method introduces computational error in the form of statistical random noise produced by a finite number of particles (or markers) N, and the error scales as α N^(−β), where β = 1/2 for the standard Monte Carlo method. A large number of simulated particles is therefore needed to obtain a sufficiently low numerical noise level, which makes it essential to use techniques that reduce the noise. Such methods are commonly called variance reduction methods. In this thesis, we have developed new variance reduction methods with application to plasma kinetic diffusion. The methods are suitable for simulation of RF heating and transport, but are not limited to these types of problems. We have derived a novel variance reduction method that minimizes the number of required particles via an optimization model. This implicitly reduces the variance when calculating the expected value of the distribution, since for a fixed error the optimization model ensures that a minimal number of particles is needed. Techniques that reduce the noise by improving the order of convergence have also been considered. Two different methods have been tested on a neutral-beam-injection scenario: the scrambled Brownian bridge method and a method here called the sorting and mixing method of Lécot and Khettabi [1999]. Both methods converge faster than the standard Monte Carlo method for a modest number of time steps, but fail to converge correctly for the large numbers of time steps required for detailed plasma kinetic simulations. Different techniques are discussed that have the potential to extend the convergence to this range of time steps. / QC 20120314
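
The α N^(−β) scaling with β = 1/2 is easy to exhibit numerically. The following Python sketch (an illustration with assumed parameters, not the thesis code) advances markers through an Euler-Maruyama discretisation of a simple drift-diffusion SDE and measures the root-mean-square error of the estimated mean against the number of markers N:

```python
import numpy as np

n_steps, T = 100, 1.0
dt = T / n_steps
# Reference value is the mean of the discretised chain itself, so only the
# statistical (marker-count) error remains, not the time-discretisation bias.
exact = (1.0 - dt) ** n_steps
rng = np.random.default_rng(0)

def estimate_mean(n_markers):
    x = np.ones(n_markers)                 # all markers start at X(0) = 1
    for _ in range(n_steps):               # Euler-Maruyama for dX = -X dt + dW
        x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_markers)
    return x.mean()

for n in (10**3, 10**4, 10**5):
    errs = [estimate_mean(n) - exact for _ in range(20)]
    print(f"N={n:6d}  RMSE={np.sqrt(np.mean(np.square(errs))):.2e}")
# The RMSE shrinks by roughly sqrt(10) per tenfold increase in N, i.e. beta = 1/2.
```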
30

Bayesian and Quasi-Monte Carlo spherical integration for global illumination

Marques, Ricardo 22 October 2013 (has links) (PDF)
The spherical sampling of the incident radiance function entails a high computational cost. Therefore the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. In this thesis, we show that existing Monte Carlo-based approaches can be improved by fully exploiting the available information, which is then used for careful sample placement and weighting. The first contribution of this thesis is a strategy for producing high-quality Quasi-Monte Carlo (QMC) sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to the rendering integral, are very simple to generate and consistently outperform existing approaches. Furthermore, we introduce theoretical aspects of QMC spherical integration that, to our knowledge, have never been used in the graphics community, such as the spherical cap discrepancy and the spherical energy of a point set. These metrics allow assessing the quality of a spherical point set for a QMC estimate of a spherical integral. In the next part of the thesis, we propose a new theoretical framework for computing the Bayesian Monte Carlo (BMC) quadrature rule. Our contribution includes a novel method of quadrature computation based on spherical Gaussian functions that can be generalized to a broad class of BRDFs (any BRDF which can be approximated as a sum of one or more spherical Gaussian functions) and potentially to other rendering applications. We account for the BRDF sharpness by using a new computation method for the prior mean function. Lastly, we propose a fast hyperparameter evaluation method that avoids the learning step. Our last contribution is the application of BMC with an adaptive approach for evaluating the illumination integral. The idea is to compute a first BMC estimate (using a first sample set) and, if the quality criterion is not met, directly inject the result as prior knowledge into a new estimate (using another sample set). The new estimate refines the previous one using a new set of samples, and the process is repeated until a satisfying result is achieved.
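
A spherical Fibonacci point set of the kind used in the first contribution can be generated in a few lines. The Python sketch below (the construction follows the usual golden-ratio formula and is an assumption, not code from the thesis) uses the points as QMC nodes to estimate a clamped-cosine-lobe integral with a known closed form:

```python
import numpy as np

def spherical_fibonacci(n):
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n            # uniform strata in z = cos(theta)
    phi = 2.0 * np.pi * i / golden           # golden-angle azimuth increments
    r = np.sqrt(1.0 - z * z)
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

# QMC estimate of a spherical integral: for f(w) = max(0, w_z)^s, a clamped
# cosine lobe standing in for a glossy BRDF, the exact value is 2*pi/(s+1).
n, s = 1024, 10
pts = spherical_fibonacci(n)
estimate = 4.0 * np.pi * np.mean(np.maximum(pts[:, 2], 0.0) ** s)
print(estimate, 2.0 * np.pi / (s + 1))       # the two values agree closely
```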
