621.
Some aspects on Bayesian analysis of the LISREL model. January 2002
Tse Ka Ling Carol. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 72-76). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Factor Analysis Model --- p.1 / Chapter 1.2 --- Main Objectives --- p.2 / Chapter 1.2.1 --- Investigate the distribution of the estimated Factor Scores --- p.2 / Chapter 1.2.2 --- Propose an alternative method for getting the estimates of the LISREL model --- p.4 / Chapter 1.3 --- Summary --- p.4 / Chapter 2 --- Joint Bayesian Approach of the Factor Analysis Model --- p.6 / Chapter 2.1 --- Conditional Distribution --- p.7 / Chapter 2.1.1 --- Conditional distribution of Z given Y and θ --- p.7 / Chapter 2.1.2 --- Conditional distribution of θ given Y and Z --- p.7 / Chapter 2.2 --- Implementation of the Gibbs sampler for generating the random observations --- p.11 / Chapter 2.3 --- Bayesian Estimates and their Statistical Properties --- p.13 / Chapter 2.3.1 --- Estimates of unknown parameter --- p.13 / Chapter 2.3.2 --- Estimates of Factor Score --- p.14 / Chapter 3 --- Examine the distribution of the estimated factor scores --- p.15 / Chapter 3.1 --- The 1st Simulation Study --- p.15 / Chapter 3.2 --- The 2nd Simulation Study --- p.30 / Chapter 3.3 --- The 3rd Simulation Study --- p.31 / Chapter 4 --- An Alternative method for getting the parameter estimates in the LISREL Model --- p.44 / Chapter 4.1 --- Full LISREL model --- p.44 / Chapter 4.2 --- Our proposed method --- p.46 / Chapter 4.3 --- Simulation Studies --- p.49 / Chapter 4.3.1 --- The 1st Simulation Study --- p.49 / Chapter 4.3.2 --- The 3rd Simulation Study --- p.50 / Chapter 4.4 --- Conclusion --- p.53 / Appendix --- p.56 / Bibliography --- p.72
622.
Studies on the minority game and traffic flow models. January 2002
Lee Kuen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 123-128). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- The Minority Game: A Review --- p.3 / Chapter 2.1 --- The basic MG --- p.9 / Chapter 2.2 --- The basic features of MG --- p.11 / Chapter 2.3 --- Crowd-Anticrowd Theory --- p.15 / Chapter 2.4 --- Some variation on the Minority Game --- p.20 / Chapter 2.4.1 --- The Thermal Minority Game (TMG) --- p.20 / Chapter 2.4.2 --- The Evolutionary Minority Game (EMG) --- p.21 / Chapter 3 --- The Minority Game with different payoff functions --- p.23 / Chapter 3.1 --- Review --- p.24 / Chapter 3.1.1 --- Models of Savit et al [48] --- p.24 / Chapter 3.1.2 --- Results --- p.25 / Chapter 3.2 --- Applying Crowd-anticrowd theory to the models --- p.27 / Chapter 4 --- The Minority Game with k-sided imitation in regular networks --- p.33 / Chapter 4.1 --- Review --- p.34 / Chapter 4.1.1 --- 1-sided follow-action model --- p.34 / Chapter 4.1.2 --- Results --- p.36 / Chapter 4.2 --- Follow-action model --- p.37 / Chapter 4.2.1 --- 2-sided model --- p.37 / Chapter 4.2.2 --- Results --- p.38 / Chapter 4.2.3 --- k-sided model and results --- p.40 / Chapter 4.3 --- Follow-strategy model --- p.43 / Chapter 4.3.1 --- 1-sided and 2-sided models --- p.43 / Chapter 4.3.2 --- Results --- p.45 / Chapter 4.3.3 --- k-sided model and results --- p.47 / Chapter 4.4 --- Summary --- p.51 / Chapter 5 --- One-lane traffic flow models --- p.53 / Chapter 5.1 --- Introduction --- p.54 / Chapter 5.2 --- NS dynamics --- p.56 / Chapter 5.3 --- FI dynamics --- p.60 / Chapter 6 --- One-lane traffic flow models with anticipation effects --- p.63 / Chapter 6.1 --- Review --- p.64 / Chapter 6.1.1 --- Model using NS dynamics --- p.64 / Chapter 6.1.2 --- Results --- p.65 / Chapter 6.2 --- Models using FI dynamics --- p.65 / Chapter 6.2.1 --- Models --- p.65 / Chapter 6.2.2 --- Results and Discussion --- p.68 / Chapter 6.2.3 --- Mean Field Theory --- p.76 / Chapter 6.3 --- Summary --- p.89 / Chapter 7 --- Two-route Models with Global Information --- p.91 / Chapter 7.1 --- Review: Two-route model with global information using NS dynamics --- p.92 / Chapter 7.1.1 --- Announcing transit time as global information --- p.92 / Chapter 7.1.2 --- Results --- p.93 / Chapter 7.2 --- Announcing instantaneous average speed model using NS dynamics [87] --- p.95 / Chapter 7.2.1 --- Model --- p.95 / Chapter 7.2.2 --- Results --- p.95 / Chapter 7.2.3 --- Discussion --- p.99 / Chapter 7.3 --- Two-route models with global information using FI dynamics --- p.103 / Chapter 7.3.1 --- Models --- p.103 / Chapter 7.3.2 --- Results --- p.105 / Chapter 7.3.3 --- Discussion --- p.110 / Chapter 7.4 --- Summary --- p.120 / Chapter 8 --- Conclusion --- p.121 / Bibliography --- p.123
623.
Estimation of value at risk using parametric regression techniques. January 2003
Chan Wing-Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 43-45). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Estimation of Volatility --- p.5 / Chapter 2.1 --- A revisit to the RiskMetrics --- p.6 / Chapter 2.2 --- Predicting Multiple-period of Volatilities --- p.7 / Chapter 2.3 --- Performance Measures --- p.11 / Chapter 2.4 --- Nonparametric Estimation of Quantiles --- p.13 / Chapter 3 --- Univariate Prediction --- p.15 / Chapter 3.1 --- Piecewise Constant Technique --- p.16 / Chapter 3.2 --- Piecewise Linear Technique --- p.22 / Chapter 4 --- Bivariate Prediction --- p.27 / Chapter 4.1 --- Model Selection --- p.28 / Chapter 4.2 --- Piecewise Linear with Discontinuity --- p.29 / Chapter 4.3 --- Piecewise Linear Technique --- p.35 / Chapter 5 --- Conclusions --- p.41 / Bibliography --- p.43
624.
Discriminant feature pursuit: from statistical learning to informative learning. January 2006
Lin Dahua. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 233-250). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Problem We are Facing --- p.1 / Chapter 1.2 --- Generative vs. Discriminative Models --- p.2 / Chapter 1.3 --- Statistical Feature Extraction: Success and Challenge --- p.3 / Chapter 1.4 --- Overview of Our Works --- p.5 / Chapter 1.4.1 --- New Linear Discriminant Methods: Generalized LDA Formulation and Performance-Driven Subspace Learning --- p.5 / Chapter 1.4.2 --- Coupled Learning Models: Coupled Space Learning and Inter-Modality Recognition --- p.6 / Chapter 1.4.3 --- Informative Learning Approaches: Conditional Infomax Learning and Information Channel Model --- p.6 / Chapter 1.5 --- Organization of the Thesis --- p.8 / Chapter I --- History and Background --- p.10 / Chapter 2 --- Statistical Pattern Recognition --- p.11 / Chapter 2.1 --- Patterns and Classifiers --- p.11 / Chapter 2.2 --- Bayes Theory --- p.12 / Chapter 2.3 --- Statistical Modeling --- p.14 / Chapter 2.3.1 --- Maximum Likelihood Estimation --- p.14 / Chapter 2.3.2 --- Gaussian Model --- p.15 / Chapter 2.3.3 --- Expectation-Maximization --- p.17 / Chapter 2.3.4 --- Finite Mixture Model --- p.18 / Chapter 2.3.5 --- A Nonparametric Technique: Parzen Windows --- p.21 / Chapter 3 --- Statistical Learning Theory --- p.24 / Chapter 3.1 --- Formulation of Learning Model --- p.24 / Chapter 3.1.1 --- Learning: Functional Estimation Model --- p.24 / Chapter 3.1.2 --- Representative Learning Problems --- p.25 / Chapter 3.1.3 --- Empirical Risk Minimization --- p.26 / Chapter 3.2 --- Consistency and Convergence of Learning --- p.27 / Chapter 3.2.1 --- Concept of Consistency --- p.27 / Chapter 3.2.2 --- The Key Theorem of Learning Theory --- p.28 / Chapter 3.2.3 --- VC Entropy --- p.29 / Chapter 3.2.4 --- Bounds on Convergence --- p.30 /
Chapter 3.2.5 --- VC Dimension --- p.35 / Chapter 4 --- History of Statistical Feature Extraction --- p.38 / Chapter 4.1 --- Linear Feature Extraction --- p.38 / Chapter 4.1.1 --- Principal Component Analysis (PCA) --- p.38 / Chapter 4.1.2 --- Linear Discriminant Analysis (LDA) --- p.41 / Chapter 4.1.3 --- Other Linear Feature Extraction Methods --- p.46 / Chapter 4.1.4 --- Comparison of Different Methods --- p.48 / Chapter 4.2 --- Enhanced Models --- p.49 / Chapter 4.2.1 --- Stochastic Discrimination and Random Subspace --- p.49 / Chapter 4.2.2 --- Hierarchical Feature Extraction --- p.51 / Chapter 4.2.3 --- Multilinear Analysis and Tensor-based Representation --- p.52 / Chapter 4.3 --- Nonlinear Feature Extraction --- p.54 / Chapter 4.3.1 --- Kernelization --- p.54 / Chapter 4.3.2 --- Dimension reduction by Manifold Embedding --- p.56 / Chapter 5 --- Related Works in Feature Extraction --- p.59 / Chapter 5.1 --- Dimension Reduction --- p.59 / Chapter 5.1.1 --- Feature Selection --- p.60 / Chapter 5.1.2 --- Feature Extraction --- p.60 / Chapter 5.2 --- Kernel Learning --- p.61 / Chapter 5.2.1 --- Basic Concepts of Kernel --- p.61 / Chapter 5.2.2 --- The Reproducing Kernel Map --- p.62 / Chapter 5.2.3 --- The Mercer Kernel Map --- p.64 / Chapter 5.2.4 --- The Empirical Kernel Map --- p.65 / Chapter 5.2.5 --- Kernel Trick and Kernelized Feature Extraction --- p.66 / Chapter 5.3 --- Subspace Analysis --- p.68 / Chapter 5.3.1 --- Basis and Subspace --- p.68 / Chapter 5.3.2 --- Orthogonal Projection --- p.69 / Chapter 5.3.3 --- Orthonormal Basis --- p.70 / Chapter 5.3.4 --- Subspace Decomposition --- p.70 / Chapter 5.4 --- Principal Component Analysis --- p.73 / Chapter 5.4.1 --- PCA Formulation --- p.73 / Chapter 5.4.2 --- Solution to PCA --- p.75 / Chapter 5.4.3 --- Energy Structure of PCA --- p.76 / Chapter 5.4.4 --- Probabilistic Principal Component Analysis --- p.78 / Chapter 5.4.5 --- Kernel Principal Component Analysis --- p.81 / Chapter 5.5 --- Independent 
Component Analysis --- p.83 / Chapter 5.5.1 --- ICA Formulation --- p.83 / Chapter 5.5.2 --- Measurement of Statistical Independence --- p.84 / Chapter 5.6 --- Linear Discriminant Analysis --- p.85 / Chapter 5.6.1 --- Fisher's Linear Discriminant Analysis --- p.85 / Chapter 5.6.2 --- Improved Algorithms for Small Sample Size Problem --- p.89 / Chapter 5.6.3 --- Kernel Discriminant Analysis --- p.92 / Chapter II --- Improvement in Linear Discriminant Analysis --- p.100 / Chapter 6 --- Generalized LDA --- p.101 / Chapter 6.1 --- Regularized LDA --- p.101 / Chapter 6.1.1 --- Generalized LDA Implementation Procedure --- p.101 / Chapter 6.1.2 --- Optimal Nonsingular Approximation --- p.103 / Chapter 6.1.3 --- Regularized LDA algorithm --- p.104 / Chapter 6.2 --- A Statistical View: When is LDA optimal? --- p.105 / Chapter 6.2.1 --- Two-class Gaussian Case --- p.106 / Chapter 6.2.2 --- Multi-class Cases --- p.107 / Chapter 6.3 --- Generalized LDA Formulation --- p.108 / Chapter 6.3.1 --- Mathematical Preparation --- p.108 / Chapter 6.3.2 --- Generalized Formulation --- p.110 / Chapter 7 --- Dynamic Feedback Generalized LDA --- p.112 / Chapter 7.1 --- Basic Principle --- p.112 / Chapter 7.2 --- Dynamic Feedback Framework --- p.113 / Chapter 7.2.1 --- Initialization: K-Nearest Construction --- p.113 / Chapter 7.2.2 --- Dynamic Procedure --- p.115 / Chapter 7.3 --- Experiments --- p.115 / Chapter 7.3.1 --- Performance in Training Stage --- p.116 / Chapter 7.3.2 --- Performance on Testing set --- p.118 / Chapter 8 --- Performance-Driven Subspace Learning --- p.119 / Chapter 8.1 --- Motivation and Principle --- p.119 / Chapter 8.2 --- Performance-Based Criteria --- p.121 / Chapter 8.2.1 --- The Verification Problem and Generalized Average Margin --- p.122 / Chapter 8.2.2 --- Performance Driven Criteria based on Generalized Average Margin --- p.123 / Chapter 8.3 --- Optimal Subspace Pursuit --- p.125 / Chapter 8.3.1 --- Optimal threshold --- p.125 / Chapter 8.3.2 --- Optimal projection matrix --- p.125 / Chapter 8.3.3 --- Overall procedure --- p.129 / Chapter 8.3.4 --- Discussion of the Algorithm --- p.129 / Chapter 8.4 --- Optimal Classifier Fusion --- p.130 / Chapter 8.5 --- Experiments --- p.131 / Chapter 8.5.1 --- Performance Measurement --- p.131 / Chapter 8.5.2 --- Experiment Setting --- p.131 / Chapter 8.5.3 --- Experiment Results --- p.133 / Chapter 8.5.4 --- Discussion --- p.139 / Chapter III --- Coupled Learning of Feature Transforms --- p.140 / Chapter 9 --- Coupled Space Learning --- p.141 / Chapter 9.1 --- Introduction --- p.142 / Chapter 9.1.1 --- What is Image Style Transform --- p.142 / Chapter 9.1.2 --- Overview of our Framework --- p.143 / Chapter 9.2 --- Coupled Space Learning --- p.143 / Chapter 9.2.1 --- Framework of Coupled Modelling --- p.143 / Chapter 9.2.2 --- Correlative Component Analysis --- p.145 / Chapter 9.2.3 --- Coupled Bidirectional Transform --- p.148 / Chapter 9.2.4 --- Procedure of Coupled Space Learning --- p.151 / Chapter 9.3 --- Generalization to Mixture Model --- p.152 / Chapter 9.3.1 --- Coupled Gaussian Mixture Model --- p.152 / Chapter 9.3.2 --- Optimization by EM Algorithm --- p.152 / Chapter 9.4 --- Integrated Framework for Image Style Transform --- p.154 / Chapter 9.5 --- Experiments --- p.156 / Chapter 9.5.1 --- Face Super-resolution --- p.156 / Chapter 9.5.2 --- Portrait Style Transforms --- p.157 / Chapter 10 --- Inter-Modality Recognition --- p.162 / Chapter 10.1 --- Introduction to the Inter-Modality Recognition Problem --- p.163 / Chapter 10.1.1 --- What is Inter-Modality Recognition --- p.163 / Chapter 10.1.2 --- Overview of Our Feature Extraction Framework
--- p.163 / Chapter 10.2 --- Common Discriminant Feature Extraction --- p.165 / Chapter 10.2.1 --- Formulation of the Learning Problem --- p.165 / Chapter 10.2.2 --- Matrix-Form of the Objective --- p.168 / Chapter 10.2.3 --- Solving the Linear Transforms --- p.169 / Chapter 10.3 --- Kernelized Common Discriminant Feature Extraction --- p.170 / Chapter 10.4 --- Multi-Mode Framework --- p.172 / Chapter 10.4.1 --- Multi-Mode Formulation --- p.172 / Chapter 10.4.2 --- Optimization Scheme --- p.174 / Chapter 10.5 --- Experiments --- p.176 / Chapter 10.5.1 --- Experiment Settings --- p.176 / Chapter 10.5.2 --- Experiment Results --- p.177 / Chapter IV --- A New Perspective: Informative Learning --- p.180 / Chapter 11 --- Toward Information Theory --- p.181 / Chapter 11.1 --- Entropy and Mutual Information --- p.181 / Chapter 11.1.1 --- Entropy --- p.182 / Chapter 11.1.2 --- Relative Entropy (Kullback-Leibler Divergence) --- p.184 / Chapter 11.2 --- Mutual Information --- p.184 / Chapter 11.2.1 --- Definition of Mutual Information --- p.184 / Chapter 11.2.2 --- Chain rules --- p.186 / Chapter 11.2.3 --- Information in Data Processing --- p.188 / Chapter 11.3 --- Differential Entropy --- p.189 / Chapter 11.3.1 --- Differential Entropy of Continuous Random Variable --- p.189 / Chapter 11.3.2 --- Mutual Information of Continuous Random Variable
--- p.190 / Chapter 12 --- Conditional Infomax Learning --- p.191 / Chapter 12.1 --- An Overview --- p.192 / Chapter 12.2 --- Conditional Informative Feature Extraction --- p.193 / Chapter 12.2.1 --- Problem Formulation and Features --- p.193 / Chapter 12.2.2 --- The Information Maximization Principle --- p.194 / Chapter 12.2.3 --- The Information Decomposition and the Conditional Objective --- p.195 / Chapter 12.3 --- The Efficient Optimization --- p.197 / Chapter 12.3.1 --- Discrete Approximation Based on AEP --- p.197 / Chapter 12.3.2 --- Analysis of Terms and Their Derivatives --- p.198 / Chapter 12.3.3 --- Local Active Region Method --- p.200 / Chapter 12.4 --- Bayesian Feature Fusion with Sparse Prior --- p.201 / Chapter 12.5 --- The Integrated Framework for Feature Learning --- p.202 / Chapter 12.6 --- Experiments --- p.203 / Chapter 12.6.1 --- A Toy Problem --- p.203 / Chapter 12.6.2 --- Face Recognition --- p.204 / Chapter 13 --- Channel-based Maximum Effective Information --- p.209 / Chapter 13.1 --- Motivation and Overview --- p.209 / Chapter 13.2 --- Maximizing Effective Information --- p.211 / Chapter 13.2.1 --- Relation between Mutual Information and Classification --- p.211 / Chapter 13.2.2 --- Linear Projection and Metric --- p.212 / Chapter 13.2.3 --- Channel Model and Effective Information --- p.213 / Chapter 13.2.4 --- Parzen Window Approximation --- p.216 / Chapter 13.3 --- Parameter Optimization on Grassmann Manifold --- p.217 / Chapter 13.3.1 --- Grassmann Manifold --- p.217 / Chapter 13.3.2 --- Conjugate Gradient Optimization on Grassmann Manifold --- p.219 / Chapter 13.3.3 --- Computation of Gradient --- p.221 / Chapter 13.4 --- Experiments --- p.222 / Chapter 13.4.1 --- A Toy Problem --- p.222 / Chapter 13.4.2 --- Face Recognition --- p.223 / Chapter 14 --- Conclusion --- p.230
625.
Bayesian analysis in censored rank-ordered probit model with applications. / CUHK electronic theses & dissertations collection. January 2013
Vast amounts of preference data arise in daily life and scientific research, where observations consist of preferences on a set of available objects. The observations are usually recorded as ranking data or multinomial data. Sometimes there is no clear preference between two objects, which results in ranking data with ties, also called censored rank-ordered data. To study such data, we develop a symmetric Bayesian probit model based on Thurstone's random utility (discriminal process) assumption. However, parameter identification is always an unavoidable problem for the probit model, i.e., determining the location and scale of the latent utilities. The standard identification method needs to specify one of the utilities as a base and then model the differences between the other utilities and the base. However, Bayesian predictions have been verified to be sensitive to the specification of the base in the case of multinomial data. In this thesis, we set the average of the whole set of utilities as the base, which is symmetric under any relabeling of objects. Based on this new base, we propose a symmetric identification approach to fully identify the multinomial probit model. Furthermore, we design a Bayesian algorithm to fit that model. By simulation study and real data analysis, we find that this new probit model not only can be identified well, but also removes the sensitivities mentioned above. In what follows, we generalize this probit model to fit general censored rank-ordered data. Correspondingly, we obtain the symmetric Bayesian censored rank-ordered probit model. Finally, we apply this model successfully to the analysis of Hong Kong horse racing data. / Pan, Maolin. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 50-55). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview --- p.2 / Chapter 1.1.1 --- The Ranking Model --- p.2 / Chapter 1.1.2 --- Discrete Choice Model --- p.4 / Chapter 1.2 --- Methodology --- p.7 / Chapter 1.2.1 --- Data Augmentation --- p.8 / Chapter 1.2.2 --- Marginal Data Augmentation --- p.8 / Chapter 1.3 --- An Outline --- p.9 / Chapter 2 --- Bayesian Multinomial Probit Model Based On Symmetric Identification --- p.11 / Chapter 2.1 --- Introduction --- p.11 / Chapter 2.2 --- The MNP Model --- p.14 / Chapter 2.3 --- Symmetric Identification and Bayesian Analysis --- p.17 / Chapter 2.3.1 --- Symmetric Identification --- p.18 / Chapter 2.3.2 --- Bayesian Analysis --- p.21 / Chapter 2.4 --- Case Studies --- p.25 / Chapter 2.4.1 --- Simulation Study --- p.25 / Chapter 2.4.2 --- Clothes Detergent Purchases Data --- p.27 / Chapter 2.5 --- Summary --- p.29 / Chapter 3 --- Symmetric Bayesian Censored Rank-Ordered Probit Model --- p.30 / Chapter 3.1 --- Introduction --- p.30 / Chapter 3.2 --- Ranking Model --- p.33 / Chapter 3.2.1 --- Ranking Data --- p.33 / Chapter 3.2.2 --- Censored Rank-Ordered Probit Model --- p.35 / Chapter 3.2.3 --- Symmetrically Identified CROP Model --- p.36 / Chapter 3.3 --- Bayesian Analysis on Symmetrically Identified CROP Model --- p.37 / Chapter 3.3.1 --- Model Estimation --- p.38 / Chapter 3.4 --- Application: Hong Kong Horse Racing
--- p.41 / Chapter 3.5 --- Summary --- p.44 / Chapter 4 --- Conclusion and Further Studies --- p.45 / Chapter A --- Prior for covariance matrix with trace augmented restriction --- p.47 / Chapter B --- Derivation of sampling intervals --- p.49 / Bibliography --- p.50
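The symmetric identification described in this record can be illustrated with a small sketch (the utilities below are simulated purely for illustration; this is not the thesis's data or algorithm): differencing latent utilities against the average over all objects, rather than against one distinguished object, yields a representation that commutes with any relabeling of the objects, which is exactly the property the fixed-base identification lacks.

```python
import numpy as np

def identify_base(U, base=0):
    # Standard identification: difference against one chosen object's utility,
    # then drop that object's (now identically zero) column.
    return np.delete(U - U[:, [base]], base, axis=1)

def identify_symmetric(U):
    # Symmetric identification: difference against the average utility of all
    # objects, so no single object plays a distinguished role.
    return U - U.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 4))        # latent utilities: 5 subjects, 4 objects
perm = np.array([2, 0, 3, 1])      # an arbitrary relabeling of the objects

# Symmetric identification commutes with relabeling of objects ...
sym = identify_symmetric(U)
assert np.allclose(identify_symmetric(U[:, perm]), sym[:, perm])

# ... and preserves each subject's implied preference ordering.
assert np.array_equal(np.argsort(-sym, axis=1), np.argsort(-U, axis=1))

# A fixed-object base, by contrast, produces a genuinely different
# representation depending on which object is designated as the base.
assert not np.allclose(identify_base(U, base=0), identify_base(U, base=1))
```

The label-invariance in the first assertion is the reason a base-free identification can remove the sensitivity of Bayesian predictions to which object is chosen as the base.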
626.
Monte Carlo simulation in risk estimation. / CUHK electronic theses & dissertations collection. January 2013
This dissertation mainly consists of two parts: a generalized infinitesimal perturbation analysis (IPA) approach for American option sensitivities estimation, and a multilevel Monte Carlo simulation approach for portfolio risk estimation. / In the first part, we develop efficient Monte Carlo methods for estimating American option sensitivities. The problem can be reformulated as how to perform sensitivity analysis for a stochastic optimization problem that has model uncertainty. We introduce a generalized IPA approach to resolve the difficulty caused by the discontinuity of the optimal decision with respect to the underlying parameter. The unbiased price-sensitivity estimators yielded by this approach demonstrate significant numerical advantages in both high-dimensional environments and various process settings. They can easily be embedded into many of the most popular pricing algorithms, without extra simulation effort, to obtain sensitivities as a by-product of the option price. This generalized approach also offers new insight into how to perform sensitivity analysis using IPA: pathwise differentiability is not needed to apply it. Another contribution of this chapter is to investigate how the estimation quality of sensitivities is affected by the quality of the approximated exercise times. / In the second part, we propose a multilevel nested simulation approach to estimate the expectation of a nonlinear function of a conditional expectation, which has a direct application in portfolio risk estimation problems under various risk measures. Our estimator consists of a linear combination of several standard nested estimators. It is very simple to implement and universally applicable across various problem settings. Theoretical analysis shows that the algorithmic complexities of our estimators are independent of the problem dimensionality and better than other alternatives in the literature. Numerical experiments, in both low- and high-dimensional settings, verify our theoretical analysis. / Liu, Yanchu. / "December 2012." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 89-96). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Abstract --- p.i / Abstract in Chinese --- p.iii / Acknowledgements --- p.v / Contents --- p.vii / List of Tables --- p.ix / List of Figures --- p.xii / Chapter 1. --- Overview --- p.1 / Chapter 2. --- American Option Sensitivities Estimation via a Generalized IPA Approach --- p.4 / Chapter 2.1. --- Introduction --- p.4 / Chapter 2.2. --- Formulation of the American Option Pricing Problem --- p.10 / Chapter 2.3. --- Main Results --- p.14 / Chapter 2.3.1. --- A Generalized IPA Approach in the Presence of a Decision Variable --- p.16 / Chapter 2.3.2. --- Unbiased First-Order Sensitivity Estimators --- p.21 / Chapter 2.4.
--- Implementation Issues and Error Analysis --- p.23 / Chapter 2.5. --- Numerical Results --- p.26 / Chapter 2.5.1. --- Effects of Dimensionality --- p.27 / Chapter 2.5.2. --- Performance under Various Underlying Processes --- p.29 / Chapter 2.5.3. --- Effects of Exercising Policies --- p.31 / Chapter 2.6. --- Conclusion Remarks and Future Work --- p.33 / Chapter 2.7. --- Appendix --- p.35 / Chapter 2.7.1. --- Proofs of the Main Results --- p.35 / Chapter 2.7.2. --- Likelihood Ratio Estimators --- p.43 / Chapter 2.7.3. --- Derivation of Example 2.3 --- p.49 / Chapter 3. --- Multilevel Monte Carlo Nested Simulation for Risk Estimation --- p.52 / Chapter 3.1. --- Introduction --- p.52 / Chapter 3.1.1. --- Examples --- p.53 / Risk Measurement of Financial Portfolios --- p.53 / Derivatives Pricing --- p.55 / Partial Expected Value of Perfect Information --- p.56 / Chapter 3.1.2. --- A Standard Nested Estimator --- p.57 / Chapter 3.1.3. --- Literature Review --- p.59 / Chapter 3.1.4. --- Summary of Our Contributions --- p.61 / Chapter 3.2. --- The Multilevel Approach --- p.63 / Chapter 3.2.1. --- Motivation --- p.63 / Chapter 3.2.2. --- Multilevel Construction --- p.65 / Chapter 3.2.3. --- Theoretical Analysis --- p.67 / Chapter 3.2.4. --- Further Improvement by Extrapolation --- p.69 / Chapter 3.3. --- Numerical Experiments --- p.72 / Chapter 3.3.1. --- Single Asset Setting --- p.73 / Chapter 3.3.2. --- Multiple Asset Setting --- p.74 / Chapter 3.4. --- Concluding Remarks --- p.77 / Chapter 3.5. --- Appendix: Technical Assumptions and Proofs of the Main Results --- p.79 / Bibliography --- p.89
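The multilevel nested simulation idea in the second part of this record can be sketched on a toy problem (the model, risk functional, and sample sizes below are my own illustrative choices, not the dissertation's setting): a cheap, inner-biased nested estimate at level 0 is corrected by a telescoping sum of level differences, with inner sample sizes doubling and outer sample sizes shrinking across levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(s):
    # An illustrative nonlinear "risk" functional (positive part / excess loss).
    return np.maximum(s, 0.0)

def nested_samples(n_outer, n_inner):
    # Toy model: Y ~ N(0,1) outer scenario, X | Y ~ N(Y, 1) inner payoff,
    # so the inner conditional expectation E[X | Y] = Y is known exactly.
    y = rng.normal(size=n_outer)
    return y[:, None] + rng.normal(size=(n_outer, n_inner))

def multilevel_estimate(n_outer0=40000, n_inner0=8, levels=4):
    # Level 0: a cheap but inner-biased nested estimate of E[f(E[X|Y])].
    x = nested_samples(n_outer0, n_inner0)
    est = f(x.mean(axis=1)).mean()
    # Levels 1..L: telescoping corrections E[f(S_l) - f(S_{l-1})], where the
    # coarse estimator reuses the first half of each row's inner draws, and
    # fewer outer samples are spent as the levels become more expensive.
    for level in range(1, levels + 1):
        x = nested_samples(n_outer0 // 2**level, n_inner0 * 2**level)
        fine = f(x.mean(axis=1))
        coarse = f(x[:, : x.shape[1] // 2].mean(axis=1))
        est += (fine - coarse).mean()
    return est

ml = multilevel_estimate()
# Here E[X|Y] = Y, so the target is E[max(Y, 0)] = 1/sqrt(2*pi) ≈ 0.3989,
# which the multilevel estimate should approach.
```

Because each correction level has a small variance, accuracy comparable to a single large nested run is obtained at a fraction of its cost; this is the mechanism behind the complexity results cited in the abstract.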
627.
A SNP-based method for determining the origin of MRSA isolates. Sciberras, James. January 2016
Advances in whole genome sequencing (WGS) have increased the amount of genomic information available for epidemiological analyses. WGS opens many avenues for investigating the tracking of pathogens, but its rapid advancement could soon lead to a situation where traditional analytical techniques become computationally impractical. For example, the traditional method for determining the origin of an isolate is phylogenetic analysis. However, phylogenetic analyses become computationally prohibitive with larger datasets and are best suited to retrospective epidemiology. I therefore investigated whether less computationally demanding methods of analysing the same data could yield similar conclusions. This thesis describes a proof-of-principle method for evaluating whether such alternative analysis techniques are viable. Methicillin-resistant Staphylococcus aureus (MRSA) was used, together with single nucleotide polymorphism (SNP) and insertion/deletion (indel) genomic variation. I move away from whole-genome analysis techniques, such as phylogenetic analysis, and instead focus on individual SNPs. I show that genetic signals (such as SNPs and indels) can be used in novel ways to rapidly produce a summary of the possible geographic origin of an isolate with minimal demand on computational power. The methods described could be added to the suite of analytical epidemiological tools and are a promising indication of the viability of developing cheap, rapid diagnostic tools for healthcare institutions. Furthermore, the principles behind these methods could have much wider applications than MRSA alone; applying them to other pathogens could prove a promising avenue of investigation.
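The kind of lightweight, per-SNP scoring the abstract describes can be sketched as follows (the SNP sites, regions, and allele frequencies below are invented for illustration; this is not the thesis's actual method or data): score an isolate's binary SNP calls against per-region allele-frequency tables and report the best-supported origin, which requires only a table lookup per SNP rather than a full phylogenetic reconstruction.

```python
import math

# Hypothetical per-region frequencies of the derived allele at three SNP sites.
REFERENCE = {
    "region_A": {"snp1": 0.90, "snp2": 0.10, "snp3": 0.80},
    "region_B": {"snp1": 0.20, "snp2": 0.85, "snp3": 0.30},
}

def origin_scores(isolate):
    """Log-likelihood of the isolate's binary SNP calls under each region,
    treating SNP sites as independent given the region."""
    scores = {}
    for region, freqs in REFERENCE.items():
        ll = 0.0
        for snp, present in isolate.items():
            p = freqs[snp]
            ll += math.log(p if present else 1.0 - p)
        scores[region] = ll
    return scores

# A toy isolate whose SNP profile matches the region_A frequency pattern.
isolate = {"snp1": True, "snp2": False, "snp3": True}
scores = origin_scores(isolate)
best = max(scores, key=scores.get)
```

The cost is linear in the number of SNPs consulted, which is what makes this style of summary attractive as a rapid screening step ahead of, or instead of, full phylogenetic analysis.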
628.
A comparison of the power of the Wilcoxon test to that of the t-test under Lehmann's alternatives. Hwang, Chern-Hwang. January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
629.
Statistical and probabilistic methods for design of reinforced concrete structures. Kumar, T. S. S. January 2010
Digitized by Kansas Correctional Industries
630.
Approximation methods and inference for stochastic biochemical kinetics. Schnoerr, David Benjamin. January 2016
Recent experiments have shown the fundamental role that random fluctuations play in many chemical systems in living cells, such as gene regulatory networks. Mathematical models are thus indispensable for describing such systems and extracting relevant biological information from experimental data. Recent decades have seen a considerable amount of modelling effort devoted to this task. However, current methodologies still present outstanding mathematical and computational hurdles. In particular, models which retain the discrete nature of particle numbers necessarily incur severe computational overheads, greatly complicating the tasks of statistically characterising the noise in cells and inferring parameters from data. In this thesis we study analytical approximations and inference methods for stochastic reaction dynamics. The chemical master equation is the accepted description of stochastic chemical reaction networks whenever spatial effects can be ignored. Unfortunately, for most systems no analytic solutions are known and stochastic simulations are computationally expensive, making analytic approximations appealing alternatives. In the case where spatial effects cannot be ignored, such systems are typically modelled by means of stochastic reaction-diffusion processes. As in the non-spatial case, an analytic treatment is rarely possible and simulations quickly become infeasible. In particular, the calibration of models to data constitutes a fundamental unsolved problem. In the first part of this thesis we study two approximation methods of the chemical master equation: the chemical Langevin equation and moment closure approximations. The chemical Langevin equation approximates the discrete-valued process described by the chemical master equation by a continuous diffusion process. Despite being frequently used in the literature, it remains unclear how the boundary conditions behave under this transition from discrete to continuous variables.
We show that this boundary problem renders the chemical Langevin equation mathematically ill-defined if it is posed in real space, owing to the occurrence of square roots of negative expressions. We show that this problem can be avoided by extending the state space from real to complex variables. We prove that this approach gives rise to real-valued moments and thus admits a probabilistic interpretation. Numerical examples demonstrate better accuracy of the developed complex chemical Langevin equation than various real-valued implementations proposed in the literature. Moment closure approximations aim at directly approximating the moments of a process, rather than its distribution. The chemical master equation gives rise to an infinite system of ordinary differential equations for the moments of a process. Moment closure approximations close this infinite hierarchy of equations by expressing moments above a certain order in terms of lower-order moments. This is an ad hoc approximation without systematic justification, and the question arises whether the resulting equations always lead to physically meaningful results. We find that this is indeed not always the case. Rather, moment closure approximations may give rise to diverging time trajectories or otherwise unphysical behaviour, such as negative mean values or unphysical oscillations. In these cases they fail to admit a probabilistic interpretation, and care is needed when using them so as not to draw wrong conclusions. In the second part of this work we consider systems where spatial effects have to be taken into account. In general, such stochastic reaction-diffusion processes are defined only in an algorithmic sense, without an analytic description, and it is hence not even conceptually clear how to define likelihoods for experimental data for such processes. Calibration of such models to experimental data thus constitutes a highly non-trivial task.
We derive here a novel inference method by establishing a basic relationship between stochastic reaction-diffusion processes and spatio-temporal Cox processes, two classes of models that until now were considered distinct from each other. This novel connection naturally allows one to compute approximate likelihoods and thus to perform inference for stochastic reaction-diffusion processes. The accuracy and efficiency of this approach are demonstrated by means of several examples. Overall, this thesis advances the state of the art of modelling methods for stochastic reaction systems. It advances the understanding of several existing methods by elucidating their fundamental limitations, and it develops several novel approximation and inference methods.
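The complex-valued chemical Langevin equation discussed in this record can be sketched on a toy production-dimerisation system (the reactions, rate constants, and step sizes below are my own illustrative choices, not the thesis's implementation): near the low-copy-number boundary the dimerisation propensity k2*x*(x-1) can go transiently negative, so the noise amplitude's square root is taken in the complex plane, and physical moments are read off from the real parts.

```python
import numpy as np

def complex_cle(k1=0.4, k2=0.05, x0=2.0, T=20.0, dt=0.005, n_paths=400, seed=2):
    """Euler-Maruyama integration of the chemical Langevin equation for the
    reactions 0 -> X (propensity k1) and 2X -> 0 (propensity k2*x*(x-1)),
    carried out in complex state space so that square roots of transiently
    negative propensities remain well defined."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=np.complex128)
    for _ in range(int(T / dt)):
        a1 = k1                        # production propensity (constant)
        a2 = k2 * x * (x - 1.0)        # dimerisation propensity; negative for 0 < x < 1
        dw1 = rng.normal(scale=np.sqrt(dt), size=n_paths)
        dw2 = rng.normal(scale=np.sqrt(dt), size=n_paths)
        # The complex square root below is the point of the construction:
        # a real-space implementation is undefined whenever a2 < 0.
        x = x + (a1 - 2.0 * a2) * dt + np.sqrt(a1) * dw1 - 2.0 * np.sqrt(a2) * dw2
    return x

paths = complex_cle()
mean = paths.mean()
# Physical moments are recovered from the real part of the sample mean; the
# imaginary part should be comparatively small, and the real part should sit
# near the deterministic balance point where k1 = 2*k2*x*(x-1).
```

For these illustrative parameters the trajectories fluctuate around a mean of order 2-3 molecules and occasionally dip into the region 0 < x < 1, which is exactly where a real-valued implementation would fail; the complex extension integrates through such excursions without modification.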