About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport

Solomon, Clell J. Jr. January 1900 (has links)
Doctor of Philosophy / Department of Mechanical and Nuclear Engineering / J. Kenneth Shultis / A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance-reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D three-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
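The cost criterion used above can be made concrete with a short sketch. Assuming the per-history variance and per-history time have been predicted deterministically for a few candidate scalings of the weight-window lower bounds, the cost (inverse figure of merit) is essentially their product, and the optimal scaling is the one that minimizes it. The numerical values below are placeholders, not results from the thesis.

```python
import numpy as np

def expected_cost(var_per_history, time_per_history):
    """Cost as the inverse figure of merit.

    With relative error R^2 proportional to (variance / N) and total time T = N * t,
    the product R^2 * T is independent of the number of histories N, so costs can
    be compared using per-history quantities alone.
    """
    return var_per_history * time_per_history

# Hypothetical deterministic predictions for several scalings s of the
# weight-window lower bounds (placeholder numbers for illustration only).
scalings = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
var_per_history = np.array([3.1, 2.2, 1.8, 2.0, 2.9])
time_per_history = np.array([0.9, 1.0, 1.2, 1.6, 2.3])

cost = expected_cost(var_per_history, time_per_history)
print("optimal weight-window scaling:", scalings[np.argmin(cost)])
```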
12

Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming

Pierre-Louis, Péguy January 2012 (has links)
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework in which a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, then the algorithm stops and outputs the candidate solution along with an approximate (1 - α) confidence interval on its optimality gap. The first set of algorithms proposed, which we refer to as the fixed-width sequential sampling methods, generates a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants: the fully sequential procedures use deterministic, non-decreasing sample-size schedules, whereas in the other variant the sample size at the next iteration is determined from current statistical estimates. We establish the desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show that, asymptotically, this algorithm finds a solution within the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
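The fixed-width sequential idea can be sketched on a toy newsvendor problem, where the sampling approximation has a closed-form solution (a sample quantile). The loop below generates a candidate from one sample, builds a one-sided confidence interval on its optimality gap from an independent sample, and grows the sample size until the gap estimate plus the interval half-width falls below a tolerance; the inflation factor of the actual procedures is omitted, and all problem data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
c, b, h = 1.0, 4.0, 0.5                 # order, shortage, and holding costs (toy newsvendor)

def cost(x, xi):
    """Two-stage cost f(x, xi): order quantity x, then demand xi is observed."""
    return c * x + b * np.maximum(xi - x, 0.0) + h * np.maximum(x - xi, 0.0)

def saa_solution(sample):
    """Exact minimizer of the sample-average problem: an order statistic (sample quantile)."""
    q = (b - c) / (b + h)
    return np.sort(sample)[int(np.ceil(q * len(sample))) - 1]

n, eps, z = 200, 1.0, 1.645             # sample size, tolerance, one-sided 95% normal critical value
for _ in range(30):                      # iteration cap for this sketch
    x_hat = saa_solution(rng.exponential(scale=100.0, size=n))   # candidate solution
    xi = rng.exponential(scale=100.0, size=n)                    # independent evaluation sample
    diffs = cost(x_hat, xi) - cost(saa_solution(xi), xi)         # pointwise optimality-gap terms
    gap, hw = diffs.mean(), z * diffs.std(ddof=1) / np.sqrt(n)
    if gap + hw <= eps:                  # fixed-width stopping rule (no inflation factor here)
        break
    n = int(1.2 * n)                     # deterministic, non-decreasing sample-size schedule

print(f"accepted x = {x_hat:.1f}, gap CI upper bound {gap + hw:.2f}, n = {n}")
```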
13

Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems

Ben Rached, Nadhir 08 October 2018 (has links)
The probability that a sum of random variables (RVs) exceeds (respectively, falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and a naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is to use variance reduction techniques, which are known for achieving a given accuracy requirement with fewer computations. For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed in which the twisting is applied only to the components with the greatest impact on the probability of interest. Another challenging problem arises when the components are correlated and distributed according to the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean-shifting and covariance-matrix-scaling techniques, and we prove that logarithmic efficiency again holds for two particular instances. We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys the logarithmic efficiency criterion, whereas the second satisfies the bounded relative error criterion under a mild assumption but is only applicable to the case of independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered. In fact, we construct an estimator combining an existing mean-shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property for the Generalized Gamma case and logarithmic efficiency for the Log-normal case.
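To illustrate the general importance sampling idea behind these right-tail estimators, the sketch below estimates P(X_1 + ... + X_n > γ) for i.i.d. exponential RVs using plain exponential twisting (a simpler relative of the hazard rate twisting developed in the thesis) and compares it with naive Monte Carlo. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, gamma = 1.0, 10, 30.0            # rate, number of summands, rare threshold
n_samples = 100_000

# Exponential twisting: sample each X_i with the twisted rate (lam - theta) and reweight.
# A common heuristic picks theta so that the twisted mean of the sum equals gamma.
theta = lam - n / gamma
assert 0.0 < theta < lam

x = rng.exponential(scale=1.0 / (lam - theta), size=(n_samples, n))
s = x.sum(axis=1)
# Likelihood ratio prod_i f(x_i)/f_theta(x_i) = M(theta)^n * exp(-theta * s),
# with moment generating function M(theta) = lam / (lam - theta).
log_lr = n * np.log(lam / (lam - theta)) - theta * s
is_estimate = np.mean((s > gamma) * np.exp(log_lr))

# Naive MC rarely observes the event, so its estimate is unreliable at this sample size.
naive = np.mean(rng.exponential(scale=1.0 / lam, size=(n_samples, n)).sum(axis=1) > gamma)
print(f"IS estimate {is_estimate:.3e}, naive estimate {naive:.3e}")
```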
14

Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité / Convergence acceleration in the Monte-Carlo particle transport code TRIPOLI-4® in criticality

Dehaye, Benjamin 05 December 2014 (has links)
A number of fields, such as criticality studies, require the computation of certain neutronic quantities of interest. Two kinds of codes exist: deterministic codes and stochastic codes. The latter are reputed to simulate the physics of the configuration under study exactly; however, the computation time needed can be very high. The work carried out in this thesis aims to build a strategy to accelerate the convergence of criticality calculations in the TRIPOLI-4® code. We wish to implement the zero-variance game, which requires computing the adjoint flux. The originality of this thesis is to compute the adjoint flux directly from a forward Monte Carlo simulation, without resorting to an external code, by means of the fission matrix method. This adjoint flux is then used as an importance map to accelerate the convergence of the simulation.
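The fission matrix route to the adjoint flux can be sketched in a few lines: once a fission matrix F has been tallied (F[i, j] ≈ expected fission neutrons produced in region i per fission neutron born in region j), the fundamental forward mode is the dominant eigenvector of F and the importance map is the dominant eigenvector of its transpose. The 4-region matrix below is a made-up placeholder, not TRIPOLI-4® output.

```python
import numpy as np

def dominant_eigvec(A, iters=500):
    """Power iteration for the dominant eigenvector of a non-negative matrix."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

# Toy fission matrix for a 4-region slab (placeholder values; in practice the
# entries are tallied during the forward Monte Carlo simulation).
F = np.array([[1.0, 0.5, 0.1, 0.0],
              [0.3, 1.1, 0.5, 0.1],
              [0.1, 0.3, 1.1, 0.5],
              [0.0, 0.1, 0.3, 1.0]])

forward_mode = dominant_eigvec(F)     # fundamental fission-source shape
importance = dominant_eigvec(F.T)     # adjoint fundamental mode, usable as an importance map
print("importance map:", importance / importance.max())
```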
15

Monte Carlo Methods for Multifactor Portfolio Credit Risk

Lee, Yi-hsi 08 February 2010 (has links)
This study develops a dynamic importance sampling (DIS) method for numerical simulations of rare events. The DIS method is flexible, fast, and accurate; most importantly, it is very easy to implement. It can be applied to any multifactor copula model constructed from arbitrary independent random variables. First, the key common factor (KCF) is determined by the maximum value among the factor-loading coefficients. Second, by locating the indicator through order statistics and applying truncated sampling techniques, the probability of large losses (PLL) and the expected excess loss above a threshold (EELAT) can be estimated precisely. Apart from the assumption that the factor loadings of the KCF contain no zero elements, we do not impose any restrictions on the composition of the portfolio. The DIS method developed in this study can therefore be applied to a very wide range of credit risk models. In numerical experiments comparing the DIS method with the method of Glasserman, Kang and Shahabuddin (2008) under the multifactor Gaussian copula model and a high-market-impact condition (marketwide factor loadings of 0.8), both the variance reduction ratio and the efficiency ratio of the DIS method are much better than those of Glasserman et al. (2008). The two methods give comparable results when the marketwide factor loadings decrease to the range of 0.5 to 0.25. However, the DIS method is superior to the method of Glasserman et al. (2008) in terms of practicality. Numerical simulation results demonstrate that the DIS method is feasible not only under general market conditions but particularly under high-market-impact conditions, especially in credit contagion or market collapse environments. The numerical results also indicate that the DIS estimators exhibit bounded relative error.
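For context, the sketch below simulates the multifactor Gaussian copula portfolio loss model that both estimators target, using naive Monte Carlo; for a rare loss threshold the naive estimate is unreliable, which is precisely what the importance sampling schemes address. Portfolio size, exposures, and loadings are placeholders, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, d, n = 100, 3, 200_000                 # obligors, factors, scenarios
p = np.full(m, 0.01)                      # marginal default probabilities
expo = rng.uniform(0.5, 1.5, size=m)      # exposures (placeholders)

A = rng.uniform(0.2, 0.6, size=(m, d))    # factor loadings
A *= 0.7 / np.sqrt((A**2).sum(axis=1, keepdims=True))   # scale rows so the systematic share is 0.49
b = np.sqrt(1.0 - (A**2).sum(axis=1))     # idiosyncratic weights, so X_i is standard normal

thresh = norm.ppf(p)                      # latent-variable default thresholds
Z = rng.standard_normal((n, d))           # systematic factors
eps = rng.standard_normal((n, m))         # idiosyncratic noise
X = Z @ A.T + eps * b                     # latent creditworthiness variables
loss = (X < thresh) @ expo                # portfolio loss in each scenario

x = 10.0                                  # loss threshold of interest
print("naive P(L > x) estimate:", np.mean(loss > x))
```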
16

A study on the parameter estimation based on rounded data

Li, Gen-liang 21 January 2011 (has links)
Most recorded data are rounded to the nearest decimal place due to the precision of the recording mechanism. This rounding entails errors in estimation and measurement. In this paper, we compare the performance of three types of estimators based on rounded data from time series models, namely the A-K corrected estimator, the approximate MLE, and the SOS estimator. In order to perform the comparison, the A-K corrected estimators for the MA(1) model are derived theoretically. To improve the efficiency of the estimation, two types of variance-reduction estimators are further proposed, based on linear combinations of the aforementioned three estimators. Simulation results show that the proposed variance-reduction estimators significantly improve estimation efficiency.
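The variance-reduction idea of combining several estimators can be sketched generically: given independent replications of k unbiased estimators of the same quantity, the weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1), with Σ the estimators' covariance matrix, minimize the variance of the weighted combination. The covariance values below are made up; they do not come from the thesis's simulations.

```python
import numpy as np

def combine_estimators(replications):
    """Minimum-variance linear combination of unbiased estimators of one quantity.

    `replications` has shape (R, k): R independent replications of k estimators.
    The weights solve min_w w' Sigma w subject to sum(w) = 1.
    """
    sigma = np.cov(replications, rowvar=False)
    w = np.linalg.solve(sigma, np.ones(sigma.shape[0]))
    w /= w.sum()
    return replications @ w, w

# Toy data: three correlated unbiased estimators of a parameter equal to 0.7.
rng = np.random.default_rng(3)
cov = [[1.0, 0.6, 0.3],
       [0.6, 1.5, 0.4],
       [0.3, 0.4, 2.0]]
reps = 0.7 + rng.multivariate_normal(np.zeros(3), cov, size=1000)

combined, w = combine_estimators(reps)
print("weights:", np.round(w, 3))
print("individual variances:", np.round(reps.var(axis=0, ddof=1), 3),
      "combined variance:", round(float(combined.var(ddof=1)), 3))
```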
17

On the estimation of time series regression coefficients with long range dependence

Chiou, Hai-Tang 28 June 2011 (has links)
In this paper, we study parameter estimation for the multiple linear time series regression model with long-memory stochastic regressors and innovations. Robinson and Hidalgo (1997) and Hidalgo and Robinson (2002) proposed a class of frequency-domain weighted least squares estimates, which are shown to achieve the Gauss-Markov bound with the standard convergence rate. In this study, we propose a time-domain generalized LSE approach, in which the inverse autocovariance matrix of the innovations is estimated via autoregressive coefficients. Simulation studies are performed to compare the proposed estimates with those of Robinson and Hidalgo (1997) and Hidalgo and Robinson (2002). The results show that the time-domain generalized LSE is comparable to the frequency-domain estimates and attains higher efficiency when the autoregressive or moving average coefficients of the FARIMA models have larger values. A variance-reduction estimator, called the TF estimator, based on a linear combination of the proposed estimator and that of Hidalgo and Robinson (2002), is further proposed to improve efficiency. A bootstrap method is applied to estimate the weights of the linear combination. Simulation results show that the TF estimator outperforms both the frequency-domain and the time-domain approaches.
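A time-domain generalized LSE of the kind described can be sketched as a two-step (feasible GLS / prewhitening) procedure: fit OLS, fit AR coefficients to the residuals via Yule-Walker, then rerun OLS on AR-filtered data. The toy example uses AR(1) errors as a short-memory stand-in for the FARIMA innovations studied in the thesis; it is not the thesis's exact estimator.

```python
import numpy as np

def yule_walker(u, p):
    """AR(p) coefficients from sample autocovariances (Yule-Walker equations)."""
    u = u - u.mean()
    n = len(u)
    gamma = np.array([u[:n - k] @ u[k:] / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

def ar_whiten(z, phi):
    """Apply the filter z_t - sum_k phi_k * z_{t-k}, dropping the first p values."""
    p = len(phi)
    out = z[p:].astype(float)
    for k, c in enumerate(phi, start=1):
        out -= c * z[p - k:len(z) - k]
    return out

def feasible_gls(y, X, p=4):
    """Two-step time-domain GLS: OLS residuals -> AR(p) fit -> OLS on whitened data."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    phi = yule_walker(y - X @ beta_ols, p)
    yw = ar_whiten(y, phi)
    Xw = np.column_stack([ar_whiten(X[:, j], phi) for j in range(X.shape[1])])
    beta_gls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta_gls

# Toy regression with AR(1) errors; true coefficients are (1.0, 2.0).
rng = np.random.default_rng(4)
n = 2000
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])
print(feasible_gls(y, X))
```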
18

Mixing Processes for Ground Improvement by Deep Mixing

Larsson, Stefan January 2003 (has links)
The thesis deals with mixing processes having application to ground improvement by deep mixing. The main objectives of the thesis are to make a contribution to knowledge of the basic mechanisms in mixing binding agents into soil and to improve the knowledge concerning factors that influence the uniformity of stabilised soil.

A great part of the work consists of a literature survey with particular emphasis on literature from the process industries. This review forms a basis for a profound description and discussion of the mixing process and the factors affecting it in connection with deep mixing methods.

The thesis presents a method for a simple field test for the study of influential factors in the mixing process. A number of factors in the installation process of lime-cement columns have been studied in two field tests using statistical multifactor experiment design. The effects of retrieval rate, number of mixing blades, rotation speed, air pressure in the storage tank, and diameter of the binder outlet on the stabilisation effect and on the coefficient of variation, determined by hand-operated penetrometer tests on excavated lime-cement columns, were studied.

The literature review, the description of the mixing process, and the results from the field tests provide a more balanced picture of the mixing process and are expected to be useful in connection with ground improvement projects and the development of mixing equipment.

The concept of sufficient mixture quality, i.e. the interaction between the mixing process and the mechanical system, is discussed in the last section. By means of geostatistical methods, the analysis considers the volume-variability relationship with reference to strength properties. According to the analysis, the design values for strength properties depend on the mechanical system, the scale of scrutiny, the spatial correlation structure, and the concept of safety; i.e., the concept of sufficient mixture quality is problem specific.

Key words: Deep Mixing, Lime cement columns, Mixing mechanisms, Mixture quality, Field test, ANOVA, Variance reduction.
19

Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs

Stockbridge, Rebecca January 2013 (has links)
Stochastic programming combines ideas from deterministic optimization with probability and statistics to produce more accurate models of optimization problems involving uncertainty. However, due to their size, stochastic programming problems can be extremely difficult to solve, and approximate solutions are used instead. Therefore, there is a need for methods that can accurately identify optimal or near-optimal solutions. In this dissertation, we focus on improving Monte Carlo sampling-based methods that assess the quality of potential solutions to stochastic programs by estimating optimality gaps. In particular, we aim to reduce the bias and/or variance of these estimators. We first propose a technique to reduce the bias of optimality gap estimators, based on probability metrics and stability results in stochastic programming. This method, which requires the solution of a minimum-weight perfect matching problem, can be run in time polynomial in the sample size. We establish asymptotic properties and present computational results. We then investigate the use of sampling schemes to reduce the variance of optimality gap estimators, focusing in particular on antithetic variates and Latin hypercube sampling. We also combine these methods with the bias reduction technique discussed above. Asymptotic properties of the resulting estimators are presented, and computational results on a range of test problems are discussed. Finally, we apply methods of assessing solution quality using antithetic variates and Latin hypercube sampling within a sequential sampling procedure to solve stochastic programs. In this setting, we use Latin hypercube sampling when generating the sequence of candidate solutions that is input to the procedure. We prove that these procedures asymptotically produce a high-quality solution with high probability and terminate in a finite number of iterations. Computational results are presented.
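The two sampling schemes themselves are easy to sketch: antithetic variates pair each uniform draw U with 1 − U, and Latin hypercube sampling places exactly one point in each of n equal strata per coordinate. The comparison below uses a toy integrand rather than an optimality-gap estimator, and all settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def latin_hypercube(n, d, rng):
    """n Latin hypercube points in [0, 1)^d: one point per stratum in every coordinate."""
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

def f(xi):
    """Toy integrand standing in for a second-stage objective (placeholder)."""
    return np.exp(0.3 * xi).sum(axis=1)

d, n, reps = 5, 64, 500
est = {"iid": [], "antithetic": [], "lhs": []}
for _ in range(reps):
    u = rng.random((n, d))
    est["iid"].append(f(norm.ppf(u)).mean())
    est["antithetic"].append(0.5 * (f(norm.ppf(u)) + f(norm.ppf(1.0 - u))).mean())
    est["lhs"].append(f(norm.ppf(latin_hypercube(n, d, rng))).mean())

for name, vals in est.items():
    print(f"{name:10s} estimator variance: {np.var(vals, ddof=1):.2e}")
```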
