21

Analys av osäkerheter vid hydraulisk modellering av torrfåror / Analysis of uncertainties for hydraulic modelling of dry river stretches

Ene, Simon, January 2021
Hydraulic modelling is an important tool when measures for dry river stretches are assessed. The modelling is, however, always affected by uncertainties, and if these are large the simulation results of a model can become unreliable. It may therefore be important to present simulation results together with their uncertainties. This study addresses various types of uncertainties that may affect the simulation results of hydraulic models. In addition, a sensitivity analysis is conducted in which a share of the uncertainty in the simulation result is attributed to each of the input variables considered.
The parameters included in the analysis are the resolution of the terrain model, the resolution of the hydraulic model's computational mesh, the inflow to the model and the roughness through Manning's roughness coefficient. The study object was a dry river stretch located downstream of Sandforsdammen in the river Skellefteälven, Sweden, and the software TELEMAC-MASCARET was used for all hydraulic simulations. To analyse the uncertainties related to the resolution of the terrain model and the mesh, a qualitative approach was used: a number of simulations were run in which all parameters except those linked to resolution were fixed, and the results were illustrated through profiles, sections, individual rasters and rasters showing the difference between simulations. The analysis showed that low resolution in terrain models and meshes can introduce local uncertainties where water velocities are higher and where the geometry varies strongly, but no significant effects could be discerned on a larger scale. Separately, quantitative uncertainty and sensitivity analyses were performed for the water depth and water velocity in the dry river stretch. The inflow to the model and Manning's roughness coefficient were judged to have the largest influence, so all other input parameters were fixed. Using scripts written in Python together with the library OpenTURNS, a large sample of possible combinations of inflow magnitude and Manning's coefficient was created, and this sample was assumed to fully cover the uncertainty of the input parameters. Running the model over the sample then allowed the uncertainty of the simulation results to be described as well. Uncertainty analyses were carried out both through classical calculation of statistical moments and through Polynomial Chaos Expansion. A sensitivity analysis followed, in which Polynomial Chaos Expansion was used to compute Sobol sensitivity indices for the inflow and Manning's coefficient at each control point. The quantitative uncertainty analysis showed relatively large uncertainties for both water depth and water velocity at the studied site. The inflow contributed most to the uncertainty, while the influence of Manning's coefficient was insignificant in comparison, apart from one area of the model where its impact increased markedly.
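The sensitivity step this abstract describes (Sobol indices via a Polynomial Chaos Expansion in OpenTURNS) can be sketched as follows. This is a minimal illustration, not the thesis's scripts: the input ranges are invented, and a one-line Manning-like formula stands in for the TELEMAC-MASCARET model, which in the study would supply each water depth.

    # Minimal sketch: first-order Sobol indices via a polynomial chaos
    # expansion in OpenTURNS. Ranges and the toy depth formula are
    # illustrative assumptions, not values from the thesis.
    import openturns as ot

    # Hypothetical input distributions: inflow Q (m^3/s) and Manning's coefficient.
    Q = ot.Uniform(0.5, 5.0)
    n = ot.Uniform(0.02, 0.08)
    inputs = ot.ComposedDistribution([Q, n])

    # Toy stand-in for "water depth at one control point"; in the study each
    # evaluation would be a full hydraulic simulation.
    def depth(x):
        q, man = x[0], x[1]
        return [(man * q) ** 0.6]  # Manning-like monotone relation

    model = ot.PythonFunction(2, 1, depth)

    # Sample the inputs, run the model, fit a polynomial chaos expansion.
    X = inputs.getSample(200)
    Y = model(X)
    algo = ot.FunctionalChaosAlgorithm(X, Y, inputs)
    algo.run()
    chaos = algo.getResult()

    # First-order Sobol indices: each input's share of the output variance.
    sobol = ot.FunctionalChaosSobolIndices(chaos)
    print("S_inflow  =", sobol.getSobolIndex(0))
    print("S_manning =", sobol.getSobolIndex(1))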
22

Construction d'ensembles de points basée sur des récurrences linéaires dans un corps fini de caractéristique 2 pour la simulation Monte Carlo et l'intégration quasi-Monte Carlo / Construction of point sets based on linear recurrences over a finite field of characteristic 2 for Monte Carlo simulation and quasi-Monte Carlo integration

Panneton, François January 2004 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
23

Evaluating of path-dependent securities with low discrepancy methods

Krykova, Inna, 13 January 2004
The objective of this thesis is the implementation of Monte Carlo and quasi-Monte Carlo methods for the valuation of financial derivatives. Advantages and disadvantages of each method are stated, based both on the literature and on independent computational experiments by the author. Various methods to generate pseudo-random and quasi-random sequences are implemented in a computationally uniform way to enable objective comparisons. Code is developed in VBA and C++, with the C++ code converted to a COM object so that it is callable from Microsoft Excel and Matlab. From the simulated random sequences, Brownian motion paths are built using various constructions and variance-reduction techniques, including the Brownian bridge and Latin hypercube sampling. The power and efficiency of the methods are compared on four financial securities pricing problems: European options, Asian options, barrier options and mortgage-backed securities. A detailed step-by-step algorithm is given for each method (construction of pseudo- and quasi-random sequences, Brownian motion paths for some stochastic processes, variance- and dimension-reduction techniques, evaluation of some financial securities using different variance-reduction techniques, etc.).
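As a rough illustration of the comparison this abstract describes, the sketch below prices a European call with plain Monte Carlo and with a scrambled Sobol (quasi-Monte Carlo) sequence. It is not the thesis's VBA/C++ code: the market parameters are arbitrary assumptions, and SciPy's qmc module stands in for the sequence generators implemented in the thesis.

    # Minimal sketch: European call priced by MC and by QMC (scrambled
    # Sobol points), both mapped to normals with the inverse CDF.
    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T, N = 100.0, 105.0, 0.05, 0.2, 1.0, 2**14  # assumed

    def discounted_payoff(z):
        # Terminal price under geometric Brownian motion, then call payoff.
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(ST - K, 0.0)

    # Pseudo-random estimate.
    rng = np.random.default_rng(0)
    mc = discounted_payoff(norm.ppf(rng.random(N))).mean()

    # Quasi-random estimate from the same inverse-CDF construction.
    sobol = qmc.Sobol(d=1, scramble=True, seed=0)
    qmc_est = discounted_payoff(norm.ppf(sobol.random(N).ravel())).mean()

    print(f"MC: {mc:.4f}   QMC: {qmc_est:.4f}")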
24

Quasi Importance Sampling

Hörmann, Wolfgang; Leydold, Josef, January 2005
Two problems arise when the expectation of some function with respect to a non-uniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded, and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method for overcoming both problems. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
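The idea can be stated in one line: E_f[g(X)] = E_h[g(X) f(X)/h(X)] for any proposal density h that is positive wherever f is. A minimal sketch follows, with illustrative densities chosen here rather than taken from the paper:

    # Minimal sketch of importance sampling: estimate E_f[g(X)] by drawing
    # from an easy proposal h and weighting by f/h. Target, proposal and
    # integrand are illustrative assumptions.
    import numpy as np
    from scipy import stats

    g = lambda x: x**2                 # integrand
    f = stats.gamma(3.0)               # target density (pretend it is hard to sample)
    h = stats.expon(scale=4.0)         # easy proposal covering f's support

    rng = np.random.default_rng(1)
    x = h.rvs(size=100_000, random_state=rng)
    w = f.pdf(x) / h.pdf(x)            # importance weights
    estimate = np.mean(g(x) * w)       # (1/N) * sum of g(x_i) * f(x_i)/h(x_i)

    print(estimate, "(exact value: 12.0)")  # E[X^2] = 12 for Gamma(3, 1)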
25

A Study of Adaptation Mechanisms for Simulation Algorithms

Esteves Jaramillo, Rodolfo Gabriel, 07 August 2012
The performance of a program can sometimes improve greatly if the features of the input it is supposed to process, the actual operating parameters it is supposed to work with, or the specific environment it is to run on are known in advance. However, this information typically becomes available too late in the program's operation to take advantage of it. This is especially true for simulation algorithms, which are sensitive to such late-arriving information and whose role in the solution of decision-making, inference and valuation problems is crucial. To overcome this limitation, we need to give a program the flexibility to adapt its behaviour to late-arriving information once it becomes available. In this thesis, I study three adaptation mechanisms: run-time code generation, model-specific (quasi-) Monte Carlo sampling and dynamic computation offloading, and evaluate their benefits on Monte Carlo algorithms. First, run-time code generation is studied in the context of Monte Carlo algorithms for time-series filtering, in the form of the Input-Adaptive Kalman filter, a dynamically generated state estimator for non-linear, non-Gaussian dynamic systems. The second adaptation mechanism is the application of the functional-ANOVA decomposition to generate model-specific QMC samplers, which can then be used to improve Monte Carlo-based integration. The third adaptation mechanism, dynamic computation offloading, is applied to wireless communication management, where network conditions are assessed via option-valuation techniques to determine whether a program should offload computations or carry them out locally in order to achieve higher run-time (and correspondingly battery-usage) efficiency. This ability makes the program well suited for operation in mobile environments. At their core, all these applications carry out or make use of (quasi-) Monte Carlo simulations on dynamic Bayesian networks (DBNs). The DBN formalism and its associated simulation-based algorithms are of great value in the solution of problems with a large uncertainty component. This characteristic makes adaptation techniques like those studied here likely to gain relevance in a world where computers are endowed with perception capabilities and are expected to deal with an ever-increasing stream of sensor and time-series data.
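A toy sketch of the first mechanism, run-time code generation (purely illustrative; the thesis's Input-Adaptive Kalman filter is far more elaborate): once a parameter becomes known at run time, the program emits and compiles a function specialized to it instead of re-reading the parameter on every call.

    # Minimal sketch of run-time code generation: bake a late-arriving
    # constant into freshly generated source, then compile and use it.
    def specialize_scale(c):
        src = f"def scaled(x):\n    return {c!r} * x\n"
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)
        return namespace["scaled"]

    scaled = specialize_scale(2.5)  # 2.5 known only at run time
    print(scaled(4.0))              # 10.0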
26

Stochastic routing models in sensor networks

Keeler, Holger Paul, January 2010
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis, stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed, revealing that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop-length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed for multihop advancements and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent density that is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis is concluded by giving future research tasks and directions.
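A minimal sketch of one greedy hop under the model assumptions in this abstract (homogeneous Poisson node placement, fixed transmission radius, progress measured toward the sink); the density, radius and positions are invented for illustration:

    # Minimal sketch: nodes from a homogeneous spatial Poisson process, and
    # one greedy hop to the neighbour giving the most progress to the sink.
    import numpy as np

    rng = np.random.default_rng(2)
    lam, radius, side = 5.0, 1.0, 20.0          # assumed density, range, region
    count = rng.poisson(lam * side**2)          # Poisson number of nodes
    nodes = rng.uniform(0.0, side, size=(count, 2))

    msg = np.array([2.0, 10.0])                 # current message position
    sink = np.array([18.0, 10.0])               # sink position

    dist = np.linalg.norm(nodes - msg, axis=1)
    neighbours = nodes[(dist <= radius) & (dist > 0)]
    if neighbours.size:
        # Progress = reduction in distance to the sink after the hop.
        progress = np.linalg.norm(sink - msg) - np.linalg.norm(neighbours - sink, axis=1)
        next_node = neighbours[np.argmax(progress)]
        print("hop to", next_node, "advancement:", progress.max())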
27

Variance reduction methods for numerical solution of plasma kinetic diffusion

Höök, Lars Josef, January 2012
Performing detailed simulations of plasma kinetic diffusion is a challenging task that currently requires the largest computational facilities in the world. The reason is that the physics in a confined heated plasma occurs on a broad range of temporal and spatial scales. It is therefore of interest to improve the computational algorithms along with the development of more powerful computational resources. Kinetic diffusion processes in plasmas are commonly simulated with the Monte Carlo method, where a discrete set of particles is sampled from a distribution function and advanced in a Lagrangian frame according to a set of stochastic differential equations. The Monte Carlo method introduces a computational error in the form of statistical random noise produced by a finite number of particles (or markers) N, and the error scales as αN^(−β), where β = 1/2 for the standard Monte Carlo method. This requires a large number of simulated particles in order to obtain a sufficiently low numerical noise level, so it is essential to use techniques that reduce the noise. Such methods are commonly called variance reduction methods. In this thesis, we have developed new variance reduction methods with application to plasma kinetic diffusion. The methods are suitable for simulation of RF heating and transport, but are not limited to these types of problems. We have derived a novel variance reduction method that minimizes the number of required particles from an optimization model. This implicitly reduces the variance when calculating the expected value of the distribution, since for a fixed error the optimization model ensures that a minimal number of particles is needed. Techniques that reduce the noise by improving the order of convergence have also been considered. Two different methods have been tested on a neutral beam injection scenario: the scrambled Brownian bridge method and a method here called the sorting and mixing method of Lécot and Khettabi [1999]. Both methods converge faster than the standard Monte Carlo method for a modest number of time steps, but fail to converge correctly for the large numbers of time steps required for detailed plasma kinetic simulations. Different techniques are discussed that have the potential of extending the convergence to this range of time steps.
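The Brownian bridge construction mentioned above can be sketched as follows (without the scrambling step, and with made-up inputs): the path is filled in by successive midpoint conditioning, so the first coordinates of each (quasi-random) point carry most of the path's variance.

    # Minimal sketch: build a Wiener path from m = 2^k standard-normal
    # draws by midpoint conditioning (endpoint first, then halving).
    import numpy as np

    def brownian_bridge(z, T=1.0):
        m = len(z)                          # must be a power of two
        t = np.linspace(0.0, T, m + 1)
        w = np.zeros(m + 1)
        w[m] = np.sqrt(T) * z[0]            # endpoint uses the first draw
        idx, step = 1, m
        while step > 1:
            half = step // 2
            for left in range(0, m, step):
                right, mid = left + step, left + half
                span = t[right] - t[left]
                mean = ((t[right] - t[mid]) * w[left] + (t[mid] - t[left]) * w[right]) / span
                var = (t[mid] - t[left]) * (t[right] - t[mid]) / span
                w[mid] = mean + np.sqrt(var) * z[idx]  # conditional midpoint draw
                idx += 1
            step = half
        return t, w

    t, w = brownian_bridge(np.random.default_rng(3).standard_normal(8))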
28

Bayesian and Quasi-Monte Carlo spherical integration for global illumination

Marques, Ricardo, 22 October 2013
The spherical sampling of the incident radiance function entails a high computational cost. The illumination integral must therefore be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. In this thesis, we show that existing Monte Carlo-based approaches can be improved by fully exploiting the information available, which is then used for careful sample placement and weighting.

The first contribution of this thesis is a strategy for producing high-quality Quasi-Monte Carlo (QMC) sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to the rendering integral, are very simple to generate and consistently outperform existing approaches. Furthermore, we introduce theoretical aspects of QMC spherical integration that, to our knowledge, have never been used in the graphics community, such as spherical cap discrepancy and point-set spherical energy. These metrics allow assessing the quality of a spherical point set for a QMC estimate of a spherical integral.

In the next part of the thesis, we propose a new theoretical framework for computing the Bayesian Monte Carlo quadrature rule. Our contribution includes a novel method of quadrature computation based on spherical Gaussian functions that can be generalized to a broad class of BRDFs (any BRDF which can be approximated by a sum of one or more spherical Gaussian functions) and potentially to other rendering applications. We account for the BRDF sharpness by using a new computation method for the prior mean function. Lastly, we propose a fast hyperparameter evaluation method that avoids the learning step.

Our last contribution is the application of BMC with an adaptive approach for evaluating the illumination integral. The idea is to compute a first BMC estimate (using a first sample set) and, if the quality criterion is not met, directly inject the result as prior knowledge into a new estimate (using another sample set). The new estimate refines the previous one using a new set of samples, and the process is repeated until a satisfying result is achieved.
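A minimal sketch of a spherical Fibonacci point set of the kind described above, used here for a QMC estimate of a spherical integral (this is one common construction; the thesis's exact definition may differ, and the test integrand is an invented toy):

    # Minimal sketch: spherical Fibonacci points and a QMC estimate of a
    # spherical integral (average of f over the points, times 4*pi).
    import numpy as np

    def spherical_fibonacci(n):
        phi = (1.0 + np.sqrt(5.0)) / 2.0      # golden ratio
        i = np.arange(n)
        theta = 2.0 * np.pi * i / phi         # longitudes via the golden angle
        z = 1.0 - (2.0 * i + 1.0) / n         # evenly spaced heights in (-1, 1)
        rho = np.sqrt(1.0 - z * z)
        return np.column_stack([rho * np.cos(theta), rho * np.sin(theta), z])

    pts = spherical_fibonacci(1024)
    f = lambda p: np.maximum(p[:, 2], 0.0)    # clamped cosine, a toy "radiance" lobe
    print(4.0 * np.pi * f(pts).mean(), "(exact:", np.pi, ")")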
