501

The resampling weights in sampling-importance resampling algorithm.

January 2006 (has links)
Au Siu Chun Brian. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 54-57). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Related sampling methods --- p.4 / Chapter 2.1 --- Introduction --- p.4 / Chapter 2.2 --- Gibbs sampler --- p.4 / Chapter 2.3 --- Importance sampling --- p.5 / Chapter 2.4 --- Sampling-importance resampling (SIR) --- p.7 / Chapter 2.5 --- Inverse Bayes formulae sampling (IBF sampling) --- p.10 / Chapter 3 --- Resampling weights in the SIR algorithm --- p.13 / Chapter 3.1 --- Resampling weights --- p.13 / Chapter 3.2 --- Problem in IBF sampling --- p.18 / Chapter 3.3 --- Adaptive finite mixture of distributions --- p.18 / Chapter 3.4 --- Allowing general distribution of θ --- p.21 / Chapter 3.5 --- Examples and graphical comparison --- p.24 / Chapter 4 --- Resampling weight in Gibbs sampling --- p.32 / Chapter 4.1 --- Introduction --- p.32 / Chapter 4.2 --- Use Gibbs sampler to obtain ISF --- p.33 / Chapter 4.3 --- How many iterations? --- p.36 / Chapter 4.4 --- Applications --- p.41 / Chapter 4.4.1 --- The genetic linkage model --- p.41 / Chapter 4.4.2 --- Example --- p.43 / Chapter 4.4.3 --- The probit binary regression model --- p.44 / Chapter 5 --- Conclusion and discussion --- p.49 / Appendix A: Exact bias of the SIR --- p.52 / References --- p.54
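The record above lists only chapter headings, so as general background: below is a minimal sketch of the standard sampling-importance resampling (SIR) step whose resampling weights the thesis studies. The target and proposal densities are toy choices made purely for illustration; only the weight-ratio-and-resample structure is the textbook SIR algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):
    # Toy target: equal-weight mixture of N(-2, 1) and N(2, 1).
    return 0.5 * (np.exp(-0.5 * (x + 2) ** 2) + np.exp(-0.5 * (x - 2) ** 2)) / np.sqrt(2 * np.pi)

def proposal_pdf(x):
    # Proposal: N(0, 3^2), wider than the target so the weights stay well behaved.
    return np.exp(-0.5 * (x / 3) ** 2) / (3 * np.sqrt(2 * np.pi))

# Step 1: draw from the proposal.
n, m = 100_000, 10_000
draws = rng.normal(0.0, 3.0, size=n)

# Step 2: compute the importance (resampling) weights and normalize them.
weights = target_pdf(draws) / proposal_pdf(draws)
weights /= weights.sum()

# Step 3: resample with probability proportional to the weights.
resampled = rng.choice(draws, size=m, replace=True, p=weights)

print("target mean ~ 0, SIR estimate:", resampled.mean())
```

The density ratio above is only the default choice of resampling weight; how that weight is chosen, and the bias it induces (cf. "Appendix A: Exact bias of the SIR"), is the subject of the thesis.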
502

Exact simulation and importance sampling of diffusion process. / CUHK electronic theses & dissertations collection

January 2012 (has links)
With ever-increasing innovation and intensifying competition in the global financial market, financial products have become more and more structurally complex, placing ever higher demands on the mathematical techniques used for their pricing, hedging and risk management. Among the techniques currently in use, Monte Carlo simulation is especially popular because of its broad applicability. This thesis studies two problems that have attracted wide attention in both financial engineering and industry: localization and exact sampling of stochastic differential equations driven by Brownian motion; and Brownian meanders, importance sampling and unbiased estimation of diffusion extremes. / The first essay considers using Monte Carlo simulation to generate sample paths of stochastic differential equations. Discretization has been the commonly used approximate approach: it is easy to implement but introduces sampling bias. The essay proposes a simulation method for exact sampling of SDE paths. A crucial observation is that the law of an SDE can be decomposed into the product of two parts: the law of standard Brownian motion and that of a doubly stochastic Poisson process. Based on this decomposition and a localization technique, the essay proposes an acceptance-rejection algorithm. Numerical experiments verify that the mean-square-error versus computing-time convergence rate of the method reaches O(t^{-1/2}), better than that of conventional discretization methods. A further advantage is that the method can exactly sample SDEs with boundaries, precisely the situation in which conventional discretization methods often run into difficulty. / The second essay studies how to compute functionals of the extreme values of diffusion processes. Conventional discretization methods converge very slowly. The essay proposes an unbiased Monte Carlo estimator based on a decomposition of the Wiener measure. Using importance sampling and the Williams decomposition of Brownian paths, it reduces the sampling of the extremes of a general diffusion process to the sampling of two Brownian meanders. Numerical experiments also verify the accuracy and computational efficiency of the proposed method. / With increased innovation and competition in the current financial market, financial products have become more and more complicated, which requires advanced techniques in pricing, hedging and risk management. Monte Carlo simulation is among the most popular ones due to its great flexibility. This dissertation contains two problems that have recently arisen and received much attention from both the financial engineering and simulation communities: Localization and Exact Simulation of Brownian Motion Driven Stochastic Differential Equations; and Brownian Meanders, Importance Sampling and Unbiased Simulation of Diffusion Extremes. / The first essay considers generating sample paths of stochastic differential equations (SDEs) using the Monte Carlo method. Discretization is a popular approximate approach to generating those paths: it is easy to implement but prone to simulation bias. This essay presents a new simulation scheme to exactly generate samples for SDEs. The key observation is that the law of a general SDE can be decomposed into a product of the law of standard Brownian motion and the law of a doubly stochastic Poisson process. An acceptance-rejection algorithm is devised based on the combination of this decomposition and a localization technique. The numerical results corroborate that the mean-square error of the proposed method decays at rate O(t^{-1/2}) in computing time t, which is superior to the conventional discretization schemes. Furthermore, the proposed method can also generate exact samples for SDEs with boundaries, which the discretization schemes usually have difficulty dealing with. / The second essay considers computing expected values of functions involving extreme values of diffusion processes. The conventional discretization Monte Carlo simulation schemes often converge very slowly. In this essay, we propose a Wiener measure decomposition-based approach to construct unbiased Monte Carlo estimators. Combined with the importance sampling technique and the celebrated Williams path decomposition of Brownian motion, this approach transforms the task of simulating extreme values of a general diffusion process into the simulation of two Brownian meanders. The numerical experiments show the accuracy and efficiency of our Poisson-kernel unbiased estimators. / Huang, Zhengyu. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 107-115). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
/ Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- SDEs and Discretization Methods --- p.4 / Chapter 1.3 --- The Beskos-Roberts Exact Simulation --- p.15 / Chapter 1.4 --- Major Contributions --- p.19 / Chapter 1.5 --- Organization --- p.26 / Chapter 2 --- Localization and Exact Simulation of SDEs --- p.27 / Chapter 2.1 --- Main Result: A Localization Technique --- p.27 / Chapter 2.1.1 --- Sampling of ζ --- p.33 / Chapter 2.1.2 --- Sampling of Wζ^(T-t) --- p.35 / Chapter 2.1.3 --- Sampling of the Bernoulli I --- p.38 / Chapter 2.1.4 --- Comparison Involving Infinite Sums --- p.40 / Chapter 2.2 --- Discussions --- p.43 / Chapter 2.2.1 --- One Extension: SDEs with Boundaries --- p.43 / Chapter 2.2.2 --- Simulation Efficiency --- p.45 / Chapter 2.2.3 --- Extension to Multi-dimensional SDE --- p.48 / Chapter 2.3 --- Numerical Examples --- p.52 / Chapter 2.3.1 --- Ornstein-Uhlenbeck Mean-Reverting Process --- p.52 / Chapter 2.3.2 --- A Double-Well Potential Model --- p.56 / Chapter 2.3.3 --- Cox-Ingersoll-Ross Square-Root Process --- p.56 / Chapter 2.3.4 --- Linear-Drift CEV-Type-Diffusion Model --- p.62 / Chapter 2.4 --- Appendix --- p.62 / Chapter 2.4.1 --- Simulation of Brownian Bridges --- p.62 / Chapter 2.4.2 --- Proofs of Main Results --- p.64 / Chapter 2.4.3 --- The Oscillating Property of the Series --- p.71 / Chapter 3 --- Unbiased Simulation of Diffusion Extremes --- p.79 / Chapter 3.1 --- A Wiener Measure Decomposition --- p.79 / Chapter 3.2 --- Brownian Meanders and Importance Sampler of Diffusion Extremes --- p.81 / Chapter 3.2.1 --- Exact Simulation of (θT, KT, WT) --- p.83 / Chapter 3.2.2 --- Simulating Importance Sampling Weight --- p.84 / Chapter 3.3 --- Some Extensions --- p.88 / Chapter 3.3.1 --- Variance Reduction --- p.88 / Chapter 3.3.2 --- Double Barrier Options --- p.90 / Chapter 3.4 --- Numerical Examples --- p.94 / Chapter 3.5 --- Appendix --- p.98 / Chapter 3.5.1 --- Brownian Bridges and Meanders --- p.98 / Chapter 3.5.2 --- Proofs of Main Results --- p.101 / Bibliography --- p.107
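As background for the record above: a minimal sketch of the Beskos-Roberts-style acceptance-rejection idea that the thesis builds on (see Chapter 1.3 in the contents), not the thesis's own localized algorithm. The sketch assumes the toy SDE dX = sin(X) dt + dW, for which φ = (a² + a′)/2 is globally bounded, so no localization is needed; the drift, bounds, and time horizon are all choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SDE: dX = sin(X) dt + dW, X_0 = x0, on [0, T].
# phi(x) = (sin(x)^2 + cos(x)) / 2 lies in [-1/2, 5/8], so phi - m <= 9/8 =: M.
x0, T = 0.0, 1.0
A = lambda x: -np.cos(x)                      # antiderivative of the drift sin(x)
phi = lambda x: 0.5 * (np.sin(x) ** 2 + np.cos(x))
m, M = -0.5, 9.0 / 8.0

def sample_endpoint():
    # Draw X_T from h(y) ∝ exp(A(y) - (y - x0)^2 / (2T)) by rejection against N(x0, T).
    while True:
        y = rng.normal(x0, np.sqrt(T))
        if rng.random() < np.exp(A(y) - 1.0):   # sup_y A(y) = 1
            return y

def exact_skeleton():
    while True:
        y = sample_endpoint()
        # Mark a unit-rate Poisson process on [0, T] x [0, M].
        k = rng.poisson(T * M)
        times = np.sort(rng.uniform(0.0, T, size=k))
        marks = rng.uniform(0.0, M, size=k)
        # Brownian bridge from (0, x0) to (T, y) evaluated at the Poisson times.
        bridge = np.empty(k)
        t_prev, x_prev = 0.0, x0
        for i, t in enumerate(times):
            mean = x_prev + (y - x_prev) * (t - t_prev) / (T - t_prev)
            var = (T - t) * (t - t_prev) / (T - t_prev)
            x_prev = rng.normal(mean, np.sqrt(var))
            bridge[i] = x_prev
            t_prev = t
        # Accept the skeleton if every mark lies above phi - m at the bridge point,
        # which happens with probability exp(-integral of (phi - m)) by Poisson thinning.
        if np.all(marks >= phi(bridge) - m):
            return times, bridge, y

times, bridge, x_T = exact_skeleton()
print("exact draw of X_T:", x_T)
```

The localization technique developed in the thesis is what removes the boundedness requirement on φ; the sketch above covers only the globally bounded case.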
503

Measuring Data Abstraction Quality in Multiresolution Visualizations

Cui, Qingguang 11 April 2007 (has links)
Data abstraction techniques are widely used in multiresolution visualization systems to reduce visual clutter and facilitate analysis from overview to detail. However, analysts are usually unaware of how well the abstracted data represent the original dataset, which can undermine the reliability of results gleaned from the abstractions. In this thesis, we define three types of data abstraction quality measures that compute the degree to which the abstraction conveys the original dataset: the Histogram Difference Measure, the Nearest Neighbor Measure, and the Statistical Measure. They have been integrated into XmdvTool, a public-domain multiresolution visualization system for multivariate data analysis that supports both sampling and clustering to simplify data. Several interactive operations are provided, including adjusting the data abstraction level, changing selected regions, and setting the acceptable data abstraction quality level. Using these operations, analysts can select an optimal data abstraction level. We conducted an evaluation to check how well the data abstraction measures conform to the abstraction quality perceived by users, and adjusted the measures based on its results. We also experimented with different distance methods and different computing mechanisms in order to find the best-performing variant of each type of measure. Finally, we developed two case studies to demonstrate how analysts can use the measures to compare different abstraction methods, see how well relative data density and outliers are maintained, and select an abstraction method that meets the requirements of their analytic tasks.
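As a rough illustration of what a histogram-based abstraction quality measure can look like (the exact definitions used in XmdvTool are those in the thesis; the bin count, normalization, and total-variation scoring below are assumptions made for this sketch), one can compare per-dimension histograms of the abstracted subset against those of the original data:

```python
import numpy as np

def histogram_difference_measure(original, abstracted, bins=20):
    """Return a [0, 1] score per dimension: 1 means the abstraction's histogram
    matches the original's exactly, 0 means no overlap.
    Illustrative variant only, not XmdvTool's exact formula."""
    original = np.asarray(original, dtype=float)
    abstracted = np.asarray(abstracted, dtype=float)
    scores = []
    for d in range(original.shape[1]):
        lo, hi = original[:, d].min(), original[:, d].max()
        h_orig, edges = np.histogram(original[:, d], bins=bins, range=(lo, hi))
        h_abs, _ = np.histogram(abstracted[:, d], bins=edges)
        p = h_orig / h_orig.sum()
        q = h_abs / max(h_abs.sum(), 1)
        scores.append(1.0 - 0.5 * np.abs(p - q).sum())   # 1 minus total variation distance
    return np.array(scores)

rng = np.random.default_rng(3)
data = rng.normal(size=(5000, 4))
sample = data[rng.choice(len(data), size=200, replace=False)]   # a sampled abstraction
print(histogram_difference_measure(data, sample))
```

Such a score can be recomputed as the user changes the abstraction level or switches between sampling and clustering, which is the kind of interactive comparison the thesis describes.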
504

Bayesian Inference of a Finite Population under Selection Bias

Xu, Zhiqing 01 May 2014 (has links)
Length-biased sampling yields samples from a weighted distribution. Given the underlying distribution of the population, one can estimate the attributes of the population by converting the weighted samples back. In this thesis, the generalized gamma distribution is taken as the underlying distribution of the population and inference is carried out on the weighted distribution. Models with both known and unknown finite population size are considered. In the models with known finite population size, maximum likelihood estimation and bootstrapping methods are applied to derive the distributions of the parameters and the population mean. For the sake of comparison, models with and without the selection bias are both built. Computer simulation results show that the model accounting for selection bias gives better predictions of the population mean. In the model with unknown finite population size, the distributions of the population size as well as of the sample complements are derived. Bayesian analysis is performed using numerical methods: both the Gibbs sampler and a random sampling method are employed to generate the parameters from their joint posterior distribution. The fit of the size-biased samples is checked using the conditional predictive ordinate.
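To make the selection-bias mechanism concrete, here is a small sketch: it uses an assumed plain gamma population rather than the thesis's generalized gamma, and a simple frequentist weight correction rather than the full Bayesian model, just to show how length-biased draws overestimate the population mean and how inverse-size weighting recovers it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Population: Gamma(shape=2, scale=3), true mean = 6.
shape, scale = 2.0, 3.0
population = rng.gamma(shape, scale, size=200_000)

# Length-biased sampling: inclusion probability proportional to the value itself,
# so the sample follows the weighted density x * f(x) / mean.
probs = population / population.sum()
biased_sample = rng.choice(population, size=2_000, replace=True, p=probs)

naive = biased_sample.mean()                                    # biased upward
weighted = len(biased_sample) / np.sum(1.0 / biased_sample)     # harmonic-mean correction

print(f"true mean ~ {shape * scale:.2f}, naive {naive:.2f}, weight-corrected {weighted:.2f}")
```

The harmonic-mean correction is only the point-estimate analogue of undoing the size bias; the thesis instead places priors on the generalized gamma parameters (and, where unknown, on the population size) and works with the resulting posterior.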
505

Effect fusion using model-based clustering

Malsiner-Walli, Gertraud, Pauger, Daniela, Wagner, Helga 01 April 2018 (has links) (PDF)
In social and economic studies many of the collected variables are measured on a nominal scale, often with a large number of categories. The definition of the categories can be ambiguous, and different classification schemes using either a finer or a coarser grid are possible. Categorization has an impact when such a variable is included as a covariate in a regression model: too fine a grid will result in imprecise estimates of the corresponding effects, whereas too coarse a grid will miss important effects, resulting in biased effect estimates and poor predictive performance. To achieve an automatic grouping of the levels of a categorical covariate with essentially the same effect, we adopt a Bayesian approach and specify the prior on the level effects as a location mixture of spiky Normal components. Model-based clustering of the effects during MCMC sampling allows us to simultaneously detect categories that have essentially the same effect size and to identify variables with no effect at all. Fusion of level effects is induced by a prior on the mixture weights which encourages empty components. The properties of this approach are investigated in simulation studies. Finally, the method is applied to analyse the effects of high-dimensional categorical predictors on income in Austria.
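As a loose sketch of the fusion idea only (not the authors' mixture prior with sparse weights, and with simulated stand-ins for the posterior draws), one can take MCMC draws of the level effects and merge levels whose pairwise differences are credibly zero:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pretend these are MCMC draws of the effects of 6 levels of a categorical covariate:
# levels 0-2 share one effect, levels 3-4 share another, level 5 has (almost) no effect.
true_effects = np.array([1.0, 1.0, 1.0, -0.8, -0.8, 0.0])
draws = true_effects + rng.normal(scale=0.1, size=(4000, 6))

def fuse_levels(draws, level=0.95):
    """Greedily merge levels whose pairwise effect difference has a credible
    interval containing zero. Illustrative rule only; the paper's fusion comes
    from the spiky-mixture prior itself, and a zero-effect check against the
    spike at zero would be a separate step."""
    k = draws.shape[1]
    groups = list(range(k))                      # group label per level
    alpha = (1.0 - level) / 2.0
    for i in range(k):
        for j in range(i + 1, k):
            lo, hi = np.quantile(draws[:, i] - draws[:, j], [alpha, 1.0 - alpha])
            if lo <= 0.0 <= hi:                  # difference credibly zero -> fuse
                groups[j] = groups[i]
    return groups

print(fuse_levels(draws))   # e.g. [0, 0, 0, 3, 3, 5]
```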
506

Driving efficiency in design for rare events using metamodeling and optimization

Morrison, Paul 08 April 2016 (has links)
Rare events have a very low probability of occurrence but can have significant impact. Earthquakes, volcanoes, and stock market crashes can be devastating for those affected. In industry, engineers evaluate rare events to design better high-reliability systems. The objective of this work is to increase efficiency in design optimization for rare events using metamodeling and variance reduction techniques. Opportunity exists to increase deterministic optimization efficiency by leveraging Design of Experiments to build an accurate metamodel of the system that is less resource-intensive to evaluate than the real system. For computationally expensive models, running many trials will impede fast design iteration, so accurate metamodels can be used in place of these expensive models to probabilistically optimize the system and efficiently quantify rare event risk. Monte Carlo simulation is traditionally used for this risk quantification, but variance reduction techniques such as importance sampling allow accurate quantification with fewer model evaluations. Metamodel techniques are the thread that ties together deterministic optimization using Design of Experiments and probabilistic optimization using Monte Carlo and variance reduction. This work explores metamodeling theory and implementation, and outlines a framework for efficient deterministic and probabilistic system optimization. The overall conclusion is that deterministic and probabilistic simulation can be combined through metamodeling and used to drive efficiency in design optimization. The applications are demonstrated on a gas turbine combustion autoignition problem in which user-controllable independent variables are optimized in mean and variance to maximize system performance while observing a constraint on the allowable probability of a rare autoignition event.
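A compact sketch of the pipeline the abstract describes, under toy assumptions: a one-dimensional "expensive" model, a quadratic polynomial metamodel fit on a small Design of Experiments, and a mean-shifted Gaussian importance distribution for the rare failure event. None of these choices come from the thesis itself.

```python
import numpy as np

rng = np.random.default_rng(6)

def expensive_model(x):
    # Stand-in for a costly simulation, e.g. an autoignition response.
    return 0.3 * x ** 2 + 0.5 * x

# 1. Design of Experiments: a handful of runs of the expensive model.
doe_x = np.linspace(-4.0, 4.0, 9)
doe_y = expensive_model(doe_x)

# 2. Metamodel: a quadratic polynomial surrogate fit to the DOE data.
surrogate = np.poly1d(np.polyfit(doe_x, doe_y, deg=2))

# 3. Rare event: surrogate response exceeding a high threshold when X ~ N(0, 1).
threshold = 6.0

# Crude Monte Carlo on the surrogate (needs a huge sample for a rare event).
x_mc = rng.normal(0.0, 1.0, size=1_000_000)
p_mc = np.mean(surrogate(x_mc) > threshold)

# Importance sampling: shift the sampling distribution toward the failure region
# (shift chosen by hand here) and reweight by the likelihood ratio f(x)/g(x).
shift = 3.5
x_is = rng.normal(shift, 1.0, size=100_000)
lr = np.exp(-0.5 * x_is ** 2) / np.exp(-0.5 * (x_is - shift) ** 2)
p_is = np.mean((surrogate(x_is) > threshold) * lr)

print(f"crude MC: {p_mc:.2e}  importance sampling: {p_is:.2e}")
```

In the framework the thesis outlines, the surrogate and the importance distribution would themselves be tuned, and the same metamodel serves both the deterministic and the probabilistic optimization loops.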
507

Comparison studies of Dowex MSA-1 resin and Scott impregnated charcoal for iodine adsorbents in an iodine air monitor system

Green, Daniel George January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
508

Some developments of local quasi-likelihood estimation and optimal Bayesian sampling plans for censored data. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 1999 (has links)
by Jian Wei Chen. / "May 1999." / Thesis (Ph.D.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (p. 178-180). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
509

Improving inspection performance

Joshi, Arun Shridhar January 2011 (has links)
Digitized by Kansas Correctional Industries
510

The optional sampling theorem for partially ordered time processes and multiparameter stochastic calculus

Washburn, Robert Buchanan January 1979 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1979. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND SCIENCE. / Vita. / Bibliography: leaves 364-373. / by Robert Buchanan Washburn, Jr. / Ph.D.
