1 | Importance Sampling of Rare Events in Chaotic Systems. Leitão, Jorge C. (30 August 2016)
Rare events play a crucial role in our society, and great effort has been dedicated to studying them numerically in different contexts. This thesis proposes a numerical methodology based on the Metropolis-Hastings Monte Carlo algorithm to efficiently sample rare events in chaotic systems. It starts by reviewing the relevance of rare events in chaotic systems, focusing on two types: states in closed systems with rare chaoticities, characterised by a finite-time Lyapunov exponent in a tail of its distribution, and states in transiently chaotic systems, characterised by an escape time in the tail of its distribution.
This thesis argues that these two problems can be interpreted as a traditional problem of statistical physics: sampling states that are exponentially rare in phase space (states in the tail of the density of states) as a parameter (the system size) increases. This is used as the starting point to review the Metropolis-Hastings algorithm, a traditional and flexible importance-sampling methodology in statistical physics. An analytical argument shows that the chaoticity of the system hinders the direct application of Metropolis-Hastings techniques to sampling these states efficiently, because the acceptance rate is low. It is argued that a crucial step in overcoming the low acceptance rate is to construct a proposal distribution that uses information about the system to bound the acceptance rate. Using generic properties of chaotic systems, such as the exponential divergence of initial conditions and the fractal structures embedded in their phase spaces, a proposal distribution that guarantees a bounded acceptance rate is derived for each type of rare event. This proposal is numerically tested in simple chaotic systems, and the efficiency of the resulting algorithm is measured across numerous examples of both types of rare events.
The results confirm the dramatic improvement of Monte Carlo importance sampling with the derived proposals over traditional methodologies: the number of samples required to sample an exponentially rare state increases polynomially, as opposed to the exponential increase observed in uniform sampling. This thesis then analyses the sub-optimal (polynomial) efficiency of this algorithm in a simple system and shows analytically how the correlations induced by the proposal distribution can be detrimental to the efficiency of the algorithm. It also analyses the effect of high-dimensional chaos on the proposal distribution and concludes that an anisotropic proposal, which takes advantage of the different rates of expansion along the different unstable directions, is able to find rare states efficiently.
The applicability of this methodology to sampling rare states in non-hyperbolic systems is also discussed, with focus on three systems: the logistic map, the Pomeau-Manneville map, and the standard map. Here, it is argued that the different origins of non-hyperbolicity require different proposal distributions. Overall, the results show that by incorporating specific information about the system into the proposal distribution of the Metropolis-Hastings algorithm, it is possible to efficiently find and sample rare events in chaotic systems. This improved methodology should be useful for a large class of problems where the numerical characterisation of rare events is important.
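As a rough illustration of the idea (a sketch of mine, not code from the thesis), the snippet below runs a Metropolis-Hastings chain over initial conditions of the logistic map, with the target biased towards large finite-time Lyapunov exponents; the proposal width shrinks as exp(-λT), mirroring the exponential-divergence argument above. The map, the bias strength beta, and the horizon T are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta = 20, 30.0          # trajectory length and bias strength (illustrative)

def ftle(x0, r=4.0):
    """Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x, s = x0, 0.0
    for _ in range(T):
        s += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard against x = 0.5
        x = r * x * (1.0 - x)
    return s / T

def prop_scale(lam):
    # proposal width ~ exp(-lambda*T): counteracts exponential divergence
    return np.exp(-lam * T)

x, lam = 0.3, ftle(0.3)
samples = []
for _ in range(50_000):
    x_new = (x + rng.normal(0.0, prop_scale(lam))) % 1.0
    lam_new = ftle(x_new)
    # target ∝ exp(beta*lambda); Hastings term corrects the state-dependent width
    d2 = (x_new - x) ** 2                 # wrap-around neglected in this sketch
    log_acc = (beta * (lam_new - lam)
               + 0.5 * d2 / prop_scale(lam) ** 2
               - 0.5 * d2 / prop_scale(lam_new) ** 2
               + np.log(prop_scale(lam) / prop_scale(lam_new)))
    if np.log(rng.random()) < log_acc:
        x, lam = x_new, lam_new
    samples.append(lam)
print(np.mean(samples), np.max(samples))  # chain concentrates in the upper tail
```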

2 | Driving efficiency in design for rare events using metamodeling and optimization. Morrison, Paul (8 April 2016)
Rare events have a very low probability of occurrence but can have significant impact. Earthquakes, volcanoes, and stock market crashes can be devastating to those affected. In industry, engineers evaluate rare events to design better high-reliability systems. The objective of this work is to increase the efficiency of design optimization for rare events using metamodeling and variance reduction techniques. There is an opportunity to increase the efficiency of deterministic optimization by leveraging Design of Experiments to build an accurate metamodel of the system that is less resource-intensive to evaluate than the real system. For computationally expensive models, running many trials impedes fast design iteration. Accurate metamodels can be used in place of these expensive models to probabilistically optimize the system and efficiently quantify rare-event risk. Monte Carlo simulation is traditionally used for this risk quantification, but variance reduction techniques such as importance sampling allow accurate quantification with fewer model evaluations.
Metamodel techniques are the thread that ties together deterministic optimization using Design of Experiments and probabilistic optimization using Monte Carlo and variance reduction. This work explores metamodeling theory and implementation, and outlines a framework for efficient deterministic and probabilistic system optimization. The overall conclusion is that deterministic and probabilistic simulation can be combined through metamodeling and used to drive efficiency in design optimization.
The framework is demonstrated on a gas turbine combustion autoignition application in which user-controllable independent variables are optimized in mean and variance to maximize system performance while observing a constraint on the allowable probability of a rare autoignition event.
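To make the workflow concrete, here is a hedged sketch (entirely synthetic; the model, design, threshold, and shift vector are placeholders, not Morrison's application): a cheap polynomial metamodel is fitted to a small Design of Experiments, then used for rare-event quantification by crude Monte Carlo and by a mean-shifted importance sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):            # stand-in for a costly simulation
    return x[..., 0] ** 2 + 0.5 * np.sin(3 * x[..., 1]) + x[..., 1]

# 1. Small design of experiments (here a simple random space-filling design)
X_doe = rng.uniform(-2, 2, size=(40, 2))
y_doe = expensive_model(X_doe)

# 2. Quadratic response-surface metamodel fitted by least squares
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X_doe), y_doe, rcond=None)
surrogate = lambda X: features(X) @ coef

# 3. Rare-event probability on the cheap surrogate: crude MC vs. importance
#    sampling with the input density mean-shifted toward the failure region.
threshold, N = 6.0, 200_000
X_mc = rng.normal(0.0, 1.0, size=(N, 2))
p_crude = np.mean(surrogate(X_mc) > threshold)

shift = np.array([2.0, 1.0])                  # assumed shift, for illustration
X_is = rng.normal(0.0, 1.0, size=(N, 2)) + shift
log_w = -X_is @ shift + 0.5 * shift @ shift   # N(0,I) / N(shift,I) weights
p_is = np.mean((surrogate(X_is) > threshold) * np.exp(log_w))
print(p_crude, p_is)
```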

3 | Advances in Cross-Entropy Methods. Taimre, Thomas (date unknown)
The cross-entropy method is an established technique for solving difficult estimation, simulation, and optimisation problems. The method has its origins in an adaptive importance sampling procedure for rare-event estimation published by R. Y. Rubinstein in 1997. In that publication, the adaptive procedure produces a parametric probability density function whose parameters minimise the variance of the associated likelihood-ratio estimator. This variance minimisation can also be viewed as minimising a measure of divergence to the minimum-variance importance sampling density over all members of the parametric family in question. Soon thereafter it was realised that the same adaptive importance sampling procedure could be used to solve combinatorial optimisation problems by viewing the set of solutions to the optimisation problem as a rare event. This realisation led to the debut of the cross-entropy method in 1999, where it was introduced as a modification of the existing adaptive importance sampling procedure with a different choice of directed divergence measure, namely the Kullback-Leibler cross-entropy. The contributions of this thesis are threefold. Firstly, in a review capacity, it provides an up-to-date consolidation of material on the cross-entropy method and its generalisations, as well as a collation of background material on importance sampling and Monte Carlo methods. The reviews are elucidated with original commentary and examples. Secondly, two major new applications of the cross-entropy methodology to optimisation problems are presented, advancing the boundary of knowledge on cross-entropy in the applied arena. Thirdly, two contributions are made on the methodological front: (a) an original extension of the generalised cross-entropy framework which enables one to construct state- and time-dependent importance sampling algorithms, and (b) a new algorithm for counting solutions to difficult binary-encoded problems.
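A minimal sketch of the rare-event estimation procedure described above (illustrative only; the exponential model, the level gamma, and the elite fraction rho are my assumptions, not choices from the thesis): the sampling parameter v is updated iteratively via the cross-entropy rule until the adaptive level reaches gamma, after which a likelihood-ratio estimator is evaluated.

```python
import numpy as np

rng = np.random.default_rng(2)
u, gamma, N, rho = 1.0, 20.0, 10_000, 0.1  # true mean, level, samples, elite frac

v = u
for _ in range(50):
    X = rng.exponential(v, N)
    level = min(np.quantile(X, 1 - rho), gamma)  # adaptive intermediate level
    elite = X[X >= level]
    # CE update = weighted MLE for the exponential family, w = f_u / f_v
    w = np.exp(-elite / u + elite / v) * (v / u)
    v = np.sum(w * elite) / np.sum(w)
    if level >= gamma:
        break

# final likelihood-ratio estimator under the tilted density f_v
X = rng.exponential(v, N)
w = np.exp(-X / u + X / v) * (v / u)
p_hat = np.mean((X >= gamma) * w)
print(p_hat, np.exp(-gamma / u))             # estimate vs. exact e^{-gamma/u}
```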

4 | Path Properties of Rare Events. Collingwood, Jesse (January 2015)
Simulation of rare events can be costly with respect to time and computational resources. For certain processes it may be more efficient to begin at the rare event and simulate a kind of reversal of the process. This approach is particularly well suited to reversible Markov processes, but it holds much more generally. This more general result is formulated precisely in the language of stationary point processes, proven, and applied to some examples. An interesting question is whether this technique can be applied to Markov processes which are substochastic, i.e., processes which may die if a graveyard state is ever reached. First, some of the theory of substochastic processes is developed; in particular, a slightly surprising result is proved about the rate at which the distribution π(n) of the process at time n, conditioned to stay alive, converges to the quasi-stationary distribution, or Yaglom limit. This result is then verified with some illustrative examples. Next, it is demonstrated with an explicit example that on infinite state spaces the reversal approach to analyzing both the rate of convergence to the Yaglom limit and the likely path of rare events can fail due to transience.
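For intuition (a toy example of mine, not one from the thesis), the sketch below conditions a killed chain on survival and compares the resulting distribution with the quasi-stationary distribution, obtained as the normalised left Perron eigenvector of the substochastic transition matrix.

```python
import numpy as np

# Killed random walk on {0,1,2,3}: P is substochastic because row 0 sums
# to 0.7, the missing 0.3 being the probability of jumping to the graveyard.
P = np.array([
    [0.1, 0.6, 0.0, 0.0],
    [0.3, 0.1, 0.6, 0.0],
    [0.0, 0.3, 0.1, 0.6],
    [0.0, 0.0, 0.3, 0.7],
])

pi = np.array([1.0, 0.0, 0.0, 0.0])        # start in state 0
for n in range(200):
    pi = pi @ P
    pi = pi / pi.sum()                     # condition on survival up to time n

# Yaglom limit = normalised left Perron eigenvector of P
vals, vecs = np.linalg.eig(P.T)
qsd = np.real(vecs[:, np.argmax(vals.real)])
qsd = qsd / qsd.sum()
print(pi, qsd)                             # on this finite chain the two agree
```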

5 | Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems. Ben Rached, Nadhir (8 October 2018)
The probability that a sum of random variables (RVs) exceeds, or respectively falls below, a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is the use of variance reduction techniques, known for their efficiency in requiring fewer computations to achieve the same accuracy.
For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed wherein the twisting is applied only to the components having more impact on the probability of interest than the others. Another challenging problem arises when the components are correlated and distributed according to the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean-shifting and covariance-matrix-scaling techniques, and we prove that logarithmic efficiency again holds for two particular instances.
We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys the logarithmic efficiency criterion, whereas the second satisfies the bounded relative error criterion under a mild assumption but is only applicable to the case of independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered: we construct an estimator combining an existing mean-shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property in the Generalized Gamma case and logarithmic efficiency in the Log-normal case.
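As a concrete, hedged illustration of the right-tail idea (mine, not the thesis's unified construction): for Exp(1) variables the hazard function is Λ(x) = x, so hazard-rate twisting by θ reduces to sampling Exp(1 − θ); the mean-matching choice of θ below is a standard heuristic, not the thesis's optimised rule.

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma, N = 10, 60.0, 100_000          # summands, threshold, sample size

theta = max(0.0, 1.0 - n / gamma)        # twisted mean n/(1-theta) = gamma
scale = 1.0 / (1.0 - theta)              # twisted density is Exp(rate 1-theta)

X = rng.exponential(scale, size=(N, n))
S = X.sum(axis=1)
# likelihood ratio: prod_i f(x_i)/f_theta(x_i) = (1-theta)^{-n} exp(-theta*S)
log_w = -n * np.log(1.0 - theta) - theta * S
vals = (S > gamma) * np.exp(log_w)
print(vals.mean(), vals.std() / np.sqrt(N))   # estimate and its standard error
```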

6 | Beyond Risk Factors: The Theoretical Contextualization of Illicit ADHD Medication Use Among High School Students. Watkins, William Christopher (31 October 2008)
Prescription ADHD medication is on the rise as a drug of abuse among young people. Unlike other drugs, which serve only the purpose of achieving a high, this particular substance can also be perceived as useful and beneficial by those who abuse it. It is these positive attributes ascribed to the illicit use of these drugs that make them so dangerous, especially in the hands of youths. To date, extant research has made little effort to contextualize this type of drug use within theories of deviance. This study looks to fill that void, as well as to bridge the gap between current epidemiological studies on this topic and future etiological studies looking to assess causation within a theoretical context. Examining a national sample of 12th-grade students (N = 2,384), this study looks at which risk factors and predictors exist for the illicit use of ADHD medication. By testing aspects of social bonding and social learning theories, the goal is to assess which theory best predicts this type of drug use. Due to the low proportion of users, a rare-events logistic regression is utilized in the analysis. While social learning items were able to account for the greatest share of variance in use, many of the findings contradict the theory, and therefore no theoretically based conclusions can be drawn at this time. Overall, more research is needed on this topic using better-fitting data tailored for theoretical interpretation. Considerations for future studies are also discussed.
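For readers unfamiliar with the estimator, here is a minimal sketch of a rare-events logistic regression in the King-Zeng prior-correction style, on synthetic data; the covariates, the prevalence tau, and the use of statsmodels are all assumptions of mine, not details of this study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2384                                    # sample size matching the study
X = sm.add_constant(rng.normal(size=(n, 3)))          # hypothetical predictors
beta_true = np.array([-4.0, 0.8, 0.5, -0.3])          # makes the outcome rare
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)

fit = sm.Logit(y, X).fit(disp=0)
beta = np.asarray(fit.params).copy()

# King-Zeng prior correction of the intercept: tau is the assumed population
# prevalence, ybar the sample prevalence. Here the data are a random sample,
# so ybar ~ tau and the correction is near zero; it matters when rare cases
# are deliberately oversampled.
tau, ybar = 0.03, y.mean()
beta[0] -= np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
print(y.mean(), beta)
```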

7 | Efficient Monte Carlo Simulations for the Estimation of Rare Events Probabilities in Wireless Communication Systems. Ben Issaid, Chaouki (12 November 2019)
Simulation methods are used when closed-form solutions do not exist. An interesting simulation method that has been widely used in many scientific fields is the Monte Carlo method. Not only is it a simple technique for estimating the quantity of interest, but it can also provide relevant information about the value to be estimated through its confidence interval. However, the classical Monte Carlo method is not a reasonable choice when dealing with rare-event probabilities: very small probabilities require a huge number of simulation runs, and thus the computational time of the simulation increases significantly. This observation is the main motivation of the present work. In this thesis, we propose efficient importance sampling estimators to evaluate rare-event probabilities.
In the first part of the thesis, we consider a variety of turbulence regimes, and we study the outage probability of free-space optics communication systems under a generalized pointing-error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results. We also show that our approach extends to the multihop scenario.
In the second part of the thesis, we are interested in assessing the outage probability achieved by some diversity techniques over generalized fading channels. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of random variables. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma, α − µ, κ − µ, and η − µ fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both the maximum ratio combining and equal gain combining cases. We show the accuracy and efficiency of our approach compared to naive Monte Carlo via selected numerical simulations in both case studies.
In the last part of this thesis, we propose efficient importance sampling estimators for the left tail of positive Gaussian quadratic forms in both real and complex settings. We show that these estimators possess the bounded relative error property. These estimators are then used to estimate the outage probability of maximum ratio combining diversity receivers over correlated Nakagami-m or correlated Rician fading channels.
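To illustrate the flavour of such an estimator (a simplified stand-in of mine, not one of the thesis's bounded-relative-error constructions): outage of an n-branch receiver modelled as the left tail of a sum of i.i.d. Gamma gains, estimated with an exponentially twisted Gamma proposal whose mean is matched to the outage threshold.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, s = 8, 2.0, 1.0            # branches, Gamma shape (≈ Nakagami m), scale
gamma_th, N = 1.0, 100_000       # outage threshold, sample size

# left-tail twist (theta < 0): twisted mean n*k*s/(1 - theta*s) = gamma_th
theta = (1.0 - n * k * s / gamma_th) / s
scale_tw = s / (1.0 - theta * s)

G = rng.gamma(k, scale_tw, size=(N, n))
S = G.sum(axis=1)
# likelihood ratio: prod_i f/f_theta = (1 - theta*s)^{-n*k} * exp(-theta*S)
log_w = -n * k * np.log1p(-theta * s) - theta * S
p_out = np.mean((S < gamma_th) * np.exp(log_w))
print(p_out)
```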

8 | Essays on Asset Pricing and Econometrics. Jin, Tao (6 June 2014)
This dissertation presents three essays on asset pricing and econometrics. The first chapter identifies rare events and long-run risks simultaneously from a rich data set (the Barro-Ursua macroeconomic data set) and evaluates their contributions to asset pricing in a unified framework. The proposed model of rare events and long-run risks is estimated using a Bayesian Markov chain Monte Carlo method, and the estimates for the disaster process are closer to the data than those in previous studies. Major evaluation results in asset pricing include: (1) for the unleveraged annual equity premium, the predicted values are 4.8%, 4.2%, and 1.0%, respectively; (2) for the Sharpe ratio, the corresponding values are 0.72, 0.66, and 0.15.