121

An econometric estimation of the demand for clothing in South Africa

11 September 2012 (has links)
M.A. / The purpose of this study is to document and build an econometric model of the demand in the South African clothing industry. It is important to study the clothing industry because it is labour intensive, and thus its growth and development could contribute positively toward eradicating the unemployment problem in South Africa. With the globalization of world economies and South Africa being a signatory to the GATT/WTO, the implications for this industry are manifold. The opening chapter states the problem, identifies the method of research utilised and sets out the relevance of the study. Chapter two looks at demand theory, particularly with regard to the quantitative techniques involved in its estimation; it focuses on regression theory and the evaluation of the results generated. The third chapter gives a background to the South African clothing industry and touches on, among other things, aspects of current importance such as trade reform, international best practice and the key issues the industry has to deal with. Chapter four covers the econometric aspects of the study. A near-perfect forecast was obtained, which attests to the stability and superiority of the model presented. The main finding of this study is that supply-side considerations, such as the wage bill and the costs of inputs (e.g. textile materials), play an important part in the survival and prosperity of the industry. It also reveals that low productivity levels could be rectified easily and quickly through the introduction of new organizational practices, human resource development, the development of quick-response relationships, and training to support the new practices. Finally, the study asserts that, while trade reform could necessitate painful adjustments, the industry could come out of them a stronger world player.
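The demand model itself is not reproduced in this record, but a common single-equation form for such studies regresses log clothing demand on log price and log income, so that the coefficients read directly as elasticities. The sketch below is purely illustrative: the variable names, coefficients and synthetic data are assumptions, not the author's specification.

```python
# Illustrative log-log clothing demand regression (not the thesis's actual specification).
# In a log-log form the fitted coefficients are read directly as elasticities.
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_price = rng.normal(0.0, 0.3, n)    # log real clothing price index (synthetic)
log_income = rng.normal(2.0, 0.5, n)   # log real disposable income (synthetic)
log_demand = 1.5 - 0.8 * log_price + 1.1 * log_income + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), log_price, log_income])
beta, *_ = np.linalg.lstsq(X, log_demand, rcond=None)
intercept, price_elasticity, income_elasticity = beta
print(f"price elasticity ~ {price_elasticity:.2f}, income elasticity ~ {income_elasticity:.2f}")
```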
122

Data Mining for Car Insurance Claims Prediction

Huangfu, Dan 27 April 2015 (has links)
A key challenge for the insurance industry is to charge each customer an appropriate price for the risk they represent. Risk varies widely from customer to customer, and a deep understanding of different risk factors helps predict the likelihood and cost of insurance claims. The goal of this project is to see how well various statistical methods perform in predicting bodily injury liability insurance claim payments based on the characteristics of the insured customer's vehicles, for this particular dataset from Allstate Insurance Company. We tried several statistical methods, including logistic regression, Tweedie's compound gamma-Poisson model, principal component analysis (PCA), response averaging, and regression and decision trees. Of all the models we tried, PCA combined with a regression tree produced the best results. This is somewhat surprising given the widespread use of the Tweedie model for insurance claim prediction problems.
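The best-performing combination reported above, PCA followed by a regression tree, can be written as a short pipeline. The sketch below is a generic illustration of that combination; the synthetic features, target and parameter choices stand in for the Allstate data and are not the settings used in the project.

```python
# Sketch of the PCA + regression-tree combination described above,
# using synthetic stand-ins for the vehicle features and claim payments.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 30))  # vehicle characteristics (synthetic placeholders)
y = np.maximum(0.0, X[:, :3].sum(axis=1) + rng.normal(scale=2.0, size=1000))  # claim payments

model = make_pipeline(PCA(n_components=10),
                      DecisionTreeRegressor(max_depth=5, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
print("cross-validated mean squared error:", -scores.mean())
```

In practice the number of retained components and the tree depth would be chosen by cross-validation rather than fixed as above.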
123

Applications of statistical techniques to mine valuation problems: brief review of the background

Krige, D G January 1963 (has links)
Thesis (D.Sc.)--University of the Witwatersrand, 1963
124

Statistics and structures in turbulent thermal convection. / 热对流湍流中的统计特性与结构 / CUHK electronic theses & dissertations collection

January 2007 (has links)
In turbulent thermal convection, velocity and temperature measurements taken at a point display complex fluctuations in time. On the other hand, visualization of the flow reveals recurring coherent structures. One prominent flow structure is a plume, which is generated from the thermal boundary layers by buoyancy. Another flow structure is a large-scale mean circulation that spans the entire convection cell. At least two strategies can be employed to study turbulent thermal convection, or turbulent flows in general. One is to analyze and understand the fluctuations of the local measurements. The other is to characterize the coherent structures and study and understand their dynamics. These two approaches are not independent but provide complementary knowledge of the flows. Interesting questions hence include whether and how information about the ordered flow structures can be extracted from the fluctuating local measurements, and how the presence of the ordered flow structures might affect the statistics of the fluctuations. / In this thesis, we attempt to address some of these questions. First, we have devised a scheme to extract information about the plumes from simultaneous velocity and temperature measurements. Our method makes explicit use of the physical intuition that the velocity of the buoyant structures, e.g. plumes, should be related to the temperature fluctuation in some a priori unknown manner, as they are generated by buoyancy. Our scheme involves a decomposition of the local velocity measurement into two parts. The part that is correlated with some function of the temperature fluctuation measured at the same time is taken as the velocity of the plumes. Applying this scheme to measurements taken at the center and near the sidewall of the convection cell, where the dominant buoyant structures are plumes, we have found the temperature dependence of the plume velocity at these two locations and understood our results from the equations of motion. Using these results on the temperature dependence of the plume velocity, we (i) conclude that heat is not mainly transported through the central region of the convection cell and (ii) obtain a relation between the scaling behavior of the plume velocity structure functions and the temperature structure functions that is different from what is implied by Bolgiano-Obukhov scaling. We have then studied the possible effects of the large-scale mean circulation on the velocity and temperature statistics using simplified shell models of turbulent convection. We have introduced a large-scale mean flow into two shell models and found that its presence does not change the scaling behavior of velocity and temperature. / Guo, Hao (郭昊). / Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6036. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 62-66). / Title and abstract in English and Chinese. / School code: 1307.
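The decomposition scheme described above can be illustrated with a minimal numerical sketch: the part of the velocity signal explained by the simultaneous temperature fluctuation is estimated here by conditional averaging, and the remainder is treated as background. The synthetic signals and the use of a conditional average as the "function of the temperature fluctuation" are assumptions for illustration; the thesis derives the actual functional form from the equations of motion.

```python
# Illustrative decomposition of a velocity signal into a part explained by the
# simultaneous temperature fluctuation (interpreted as the plume contribution)
# and a residual, using conditional averaging in temperature bins.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
delta_T = rng.normal(0.0, 1.0, n)  # temperature fluctuation (synthetic)
v = 0.6 * np.sign(delta_T) * np.abs(delta_T) ** 0.5 + rng.normal(0.0, 0.3, n)  # velocity (synthetic)

bins = np.quantile(delta_T, np.linspace(0.0, 1.0, 41))          # 40 equal-population bins
idx = np.clip(np.digitize(delta_T, bins) - 1, 0, len(bins) - 2)
cond_mean = np.array([v[idx == k].mean() for k in range(len(bins) - 1)])

v_plume = cond_mean[idx]    # part of v explained by the simultaneous temperature signal
v_background = v - v_plume  # residual, with the mean dependence on delta_T removed
print("correlation of residual with delta_T:", np.corrcoef(v_background, delta_T)[0, 1])
```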
125

Analysis of square tables with ordered categories.

January 1993 (has links)
by Vincent Hung Hin Yan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 76-77). / Chapter Chapter 1 --- Introduction --- p.1 / Chapter §1.1 --- Classical approaches and their limitations --- p.1 / Chapter §1.2 --- New approach --- p.3 / Chapter Chapter 2 --- Two-dimensional Ordinal Square Tables --- p.5 / Chapter §2.1 --- Model --- p.5 / Chapter §2.2 --- Maximum likelihood estimator --- p.7 / Chapter §2.3 --- Optimization procedure --- p.8 / Chapter §2.4 --- Useful hypotheses --- p.9 / Chapter §2.5 --- Simulation study --- p.11 / Chapter §2.6 --- A real example --- p.18 / Chapter §2.7 --- Comparison of new and classical approaches --- p.22 / Chapter Chapter 3 --- Multi-dimensional Ordinal Tables --- p.25 / Chapter §3.1 --- Partition maximum likelihood estimator --- p.26 / Chapter §3.2 --- Optimization procedure --- p.28 / Chapter §3.3 --- Useful hypotheses --- p.37 / Chapter §3.4 --- Simulation study --- p.39 / Chapter Chapter 4 --- Conclusion --- p.45 / Tables --- p.48 / Appendix --- p.74 / References --- p.77
126

Estimation of value at risk using parametric regression techniques.

January 2003 (has links)
Chan Wing-Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 43-45). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Estimation of Volatility --- p.5 / Chapter 2.1 --- A revisit to the RiskMetrics --- p.6 / Chapter 2.2 --- Predicting Multiple-period of Volatilities --- p.7 / Chapter 2.3 --- Performance Measures --- p.11 / Chapter 2.4 --- Nonparametric Estimation of Quantiles --- p.13 / Chapter 3 --- Univariate Prediction --- p.15 / Chapter 3.1 --- Piecewise Constant Technique --- p.16 / Chapter 3.2 --- Piecewise Linear Technique --- p.22 / Chapter 4 --- Bivariate Prediction --- p.27 / Chapter 4.1 --- Model Selection --- p.28 / Chapter 4.2 --- Piecewise Linear with Discontinuity --- p.29 / Chapter 4.3 --- Piecewise Linear Technique --- p.35 / Chapter 5 --- Conclusions --- p.41 / Bibliography --- p.43
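The starting point revisited in Chapter 2.1, the RiskMetrics exponentially weighted moving average, updates a variance forecast recursively and converts it into a Value-at-Risk figure through a quantile of the assumed return distribution. The sketch below uses the conventional decay factor 0.94, a normal quantile and synthetic returns; the thesis's parametric-regression refinements are not reproduced here.

```python
# RiskMetrics-style EWMA volatility forecast and a one-day 99% VaR (illustrative only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, 1000)  # daily returns (synthetic)

lam = 0.94                    # conventional RiskMetrics decay factor
var_t = returns[:30].var()    # initialise with a short-sample variance
for r in returns[30:]:
    var_t = lam * var_t + (1.0 - lam) * r ** 2  # sigma_t^2 = lam*sigma_{t-1}^2 + (1-lam)*r_t^2

sigma = np.sqrt(var_t)
var_99 = -norm.ppf(0.01) * sigma  # one-day 99% VaR, reported as a positive loss quantile
print(f"forecast sigma = {sigma:.4f}, one-day 99% VaR = {var_99:.4f}")
```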
127

Discriminant feature pursuit: from statistical learning to informative learning.

January 2006 (has links)
Lin Dahua. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 233-250). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Problem We are Facing --- p.1 / Chapter 1.2 --- Generative vs. Discriminative Models --- p.2 / Chapter 1.3 --- Statistical Feature Extraction: Success and Challenge --- p.3 / Chapter 1.4 --- Overview of Our Works --- p.5 / Chapter 1.4.1 --- New Linear Discriminant Methods: Generalized LDA Formulation and Performance-Driven Sub space Learning --- p.5 / Chapter 1.4.2 --- Coupled Learning Models: Coupled Space Learning and Inter Modality Recognition --- p.6 / Chapter 1.4.3 --- Informative Learning Approaches: Conditional Infomax Learning and Information Chan- nel Model --- p.6 / Chapter 1.5 --- Organization of the Thesis --- p.8 / Chapter I --- History and Background --- p.10 / Chapter 2 --- Statistical Pattern Recognition --- p.11 / Chapter 2.1 --- Patterns and Classifiers --- p.11 / Chapter 2.2 --- Bayes Theory --- p.12 / Chapter 2.3 --- Statistical Modeling --- p.14 / Chapter 2.3.1 --- Maximum Likelihood Estimation --- p.14 / Chapter 2.3.2 --- Gaussian Model --- p.15 / Chapter 2.3.3 --- Expectation-Maximization --- p.17 / Chapter 2.3.4 --- Finite Mixture Model --- p.18 / Chapter 2.3.5 --- A Nonparametric Technique: Parzen Windows --- p.21 / Chapter 3 --- Statistical Learning Theory --- p.24 / Chapter 3.1 --- Formulation of Learning Model --- p.24 / Chapter 3.1.1 --- Learning: Functional Estimation Model --- p.24 / Chapter 3.1.2 --- Representative Learning Problems --- p.25 / Chapter 3.1.3 --- Empirical Risk Minimization --- p.26 / Chapter 3.2 --- Consistency and Convergence of Learning --- p.27 / Chapter 3.2.1 --- Concept of Consistency --- p.27 / Chapter 3.2.2 --- The Key Theorem of Learning Theory --- p.28 / Chapter 3.2.3 --- VC Entropy --- p.29 / Chapter 3.2.4 --- Bounds on Convergence --- p.30 / Chapter 3.2.5 --- VC Dimension --- p.35 / Chapter 4 --- History of Statistical Feature Extraction --- p.38 / Chapter 4.1 --- Linear Feature Extraction --- p.38 / Chapter 4.1.1 --- Principal Component Analysis (PCA) --- p.38 / Chapter 4.1.2 --- Linear Discriminant Analysis (LDA) --- p.41 / Chapter 4.1.3 --- Other Linear Feature Extraction Methods --- p.46 / Chapter 4.1.4 --- Comparison of Different Methods --- p.48 / Chapter 4.2 --- Enhanced Models --- p.49 / Chapter 4.2.1 --- Stochastic Discrimination and Random Subspace --- p.49 / Chapter 4.2.2 --- Hierarchical Feature Extraction --- p.51 / Chapter 4.2.3 --- Multilinear Analysis and Tensor-based Representation --- p.52 / Chapter 4.3 --- Nonlinear Feature Extraction --- p.54 / Chapter 4.3.1 --- Kernelization --- p.54 / Chapter 4.3.2 --- Dimension reduction by Manifold Embedding --- p.56 / Chapter 5 --- Related Works in Feature Extraction --- p.59 / Chapter 5.1 --- Dimension Reduction --- p.59 / Chapter 5.1.1 --- Feature Selection --- p.60 / Chapter 5.1.2 --- Feature Extraction --- p.60 / Chapter 5.2 --- Kernel Learning --- p.61 / Chapter 5.2.1 --- Basic Concepts of Kernel --- p.61 / Chapter 5.2.2 --- The Reproducing Kernel Map --- p.62 / Chapter 5.2.3 --- The Mercer Kernel Map --- p.64 / Chapter 5.2.4 --- The Empirical Kernel Map --- p.65 / Chapter 5.2.5 --- Kernel Trick and Kernelized Feature Extraction --- p.66 / Chapter 5.3 --- Subspace Analysis --- p.68 / Chapter 5.3.1 --- Basis and Subspace --- p.68 / Chapter 5.3.2 --- Orthogonal Projection --- p.69 / Chapter 5.3.3 --- 
Orthonormal Basis --- p.70 / Chapter 5.3.4 --- Subspace Decomposition --- p.70 / Chapter 5.4 --- Principal Component Analysis --- p.73 / Chapter 5.4.1 --- PCA Formulation --- p.73 / Chapter 5.4.2 --- Solution to PCA --- p.75 / Chapter 5.4.3 --- Energy Structure of PCA --- p.76 / Chapter 5.4.4 --- Probabilistic Principal Component Analysis --- p.78 / Chapter 5.4.5 --- Kernel Principal Component Analysis --- p.81 / Chapter 5.5 --- Independent Component Analysis --- p.83 / Chapter 5.5.1 --- ICA Formulation --- p.83 / Chapter 5.5.2 --- Measurement of Statistical Independence --- p.84 / Chapter 5.6 --- Linear Discriminant Analysis --- p.85 / Chapter 5.6.1 --- Fisher's Linear Discriminant Analysis --- p.85 / Chapter 5.6.2 --- Improved Algorithms for Small Sample Size Problem . --- p.89 / Chapter 5.6.3 --- Kernel Discriminant Analysis --- p.92 / Chapter II --- Improvement in Linear Discriminant Analysis --- p.100 / Chapter 6 --- Generalized LDA --- p.101 / Chapter 6.1 --- Regularized LDA --- p.101 / Chapter 6.1.1 --- Generalized LDA Implementation Procedure --- p.101 / Chapter 6.1.2 --- Optimal Nonsingular Approximation --- p.103 / Chapter 6.1.3 --- Regularized LDA algorithm --- p.104 / Chapter 6.2 --- A Statistical View: When is LDA optimal? --- p.105 / Chapter 6.2.1 --- Two-class Gaussian Case --- p.106 / Chapter 6.2.2 --- Multi-class Cases --- p.107 / Chapter 6.3 --- Generalized LDA Formulation --- p.108 / Chapter 6.3.1 --- Mathematical Preparation --- p.108 / Chapter 6.3.2 --- Generalized Formulation --- p.110 / Chapter 7 --- Dynamic Feedback Generalized LDA --- p.112 / Chapter 7.1 --- Basic Principle --- p.112 / Chapter 7.2 --- Dynamic Feedback Framework --- p.113 / Chapter 7.2.1 --- Initialization: K-Nearest Construction --- p.113 / Chapter 7.2.2 --- Dynamic Procedure --- p.115 / Chapter 7.3 --- Experiments --- p.115 / Chapter 7.3.1 --- Performance in Training Stage --- p.116 / Chapter 7.3.2 --- Performance on Testing set --- p.118 / Chapter 8 --- Performance-Driven Subspace Learning --- p.119 / Chapter 8.1 --- Motivation and Principle --- p.119 / Chapter 8.2 --- Performance-Based Criteria --- p.121 / Chapter 8.2.1 --- The Verification Problem and Generalized Average Margin --- p.122 / Chapter 8.2.2 --- Performance Driven Criteria based on Generalized Average Margin --- p.123 / Chapter 8.3 --- Optimal Subspace Pursuit --- p.125 / Chapter 8.3.1 --- Optimal threshold --- p.125 / Chapter 8.3.2 --- Optimal projection matrix --- p.125 / Chapter 8.3.3 --- Overall procedure --- p.129 / Chapter 8.3.4 --- Discussion of the Algorithm --- p.129 / Chapter 8.4 --- Optimal Classifier Fusion --- p.130 / Chapter 8.5 --- Experiments --- p.131 / Chapter 8.5.1 --- Performance Measurement --- p.131 / Chapter 8.5.2 --- Experiment Setting --- p.131 / Chapter 8.5.3 --- Experiment Results --- p.133 / Chapter 8.5.4 --- Discussion --- p.139 / Chapter III --- Coupled Learning of Feature Transforms --- p.140 / Chapter 9 --- Coupled Space Learning --- p.141 / Chapter 9.1 --- Introduction --- p.142 / Chapter 9.1.1 --- What is Image Style Transform --- p.142 / Chapter 9.1.2 --- Overview of our Framework --- p.143 / Chapter 9.2 --- Coupled Space Learning --- p.143 / Chapter 9.2.1 --- Framework of Coupled Modelling --- p.143 / Chapter 9.2.2 --- Correlative Component Analysis --- p.145 / Chapter 9.2.3 --- Coupled Bidirectional Transform --- p.148 / Chapter 9.2.4 --- Procedure of Coupled Space Learning --- p.151 / Chapter 9.3 --- Generalization to Mixture Model --- p.152 / Chapter 9.3.1 --- Coupled Gaussian Mixture Model --- 
p.152 / Chapter 9.3.2 --- Optimization by EM Algorithm --- p.152 / Chapter 9.4 --- Integrated Framework for Image Style Transform --- p.154 / Chapter 9.5 --- Experiments --- p.156 / Chapter 9.5.1 --- Face Super-resolution --- p.156 / Chapter 9.5.2 --- Portrait Style Transforms --- p.157 / Chapter 10 --- Inter-Modality Recognition --- p.162 / Chapter 10.1 --- Introduction to the Inter-Modality Recognition Problem . . . --- p.163 / Chapter 10.1.1 --- What is Inter-Modality Recognition --- p.163 / Chapter 10.1.2 --- Overview of Our Feature Extraction Framework . . . . --- p.163 / Chapter 10.2 --- Common Discriminant Feature Extraction --- p.165 / Chapter 10.2.1 --- Formulation of the Learning Problem --- p.165 / Chapter 10.2.2 --- Matrix-Form of the Objective --- p.168 / Chapter 10.2.3 --- Solving the Linear Transforms --- p.169 / Chapter 10.3 --- Kernelized Common Discriminant Feature Extraction --- p.170 / Chapter 10.4 --- Multi-Mode Framework --- p.172 / Chapter 10.4.1 --- Multi-Mode Formulation --- p.172 / Chapter 10.4.2 --- Optimization Scheme --- p.174 / Chapter 10.5 --- Experiments --- p.176 / Chapter 10.5.1 --- Experiment Settings --- p.176 / Chapter 10.5.2 --- Experiment Results --- p.177 / Chapter IV --- A New Perspective: Informative Learning --- p.180 / Chapter 11 --- Toward Information Theory --- p.181 / Chapter 11.1 --- Entropy and Mutual Information --- p.181 / Chapter 11.1.1 --- Entropy --- p.182 / Chapter 11.1.2 --- Relative Entropy (Kullback Leibler Divergence) --- p.184 / Chapter 11.2 --- Mutual Information --- p.184 / Chapter 11.2.1 --- Definition of Mutual Information --- p.184 / Chapter 11.2.2 --- Chain rules --- p.186 / Chapter 11.2.3 --- Information in Data Processing --- p.188 / Chapter 11.3 --- Differential Entropy --- p.189 / Chapter 11.3.1 --- Differential Entropy of Continuous Random Variable . --- p.189 / Chapter 11.3.2 --- Mutual Information of Continuous Random Variable . 
--- p.190 / Chapter 12 --- Conditional Infomax Learning --- p.191 / Chapter 12.1 --- An Overview --- p.192 / Chapter 12.2 --- Conditional Informative Feature Extraction --- p.193 / Chapter 12.2.1 --- Problem Formulation and Features --- p.193 / Chapter 12.2.2 --- The Information Maximization Principle --- p.194 / Chapter 12.2.3 --- The Information Decomposition and the Conditional Objective --- p.195 / Chapter 12.3 --- The Efficient Optimization --- p.197 / Chapter 12.3.1 --- Discrete Approximation Based on AEP --- p.197 / Chapter 12.3.2 --- Analysis of Terms and Their Derivatives --- p.198 / Chapter 12.3.3 --- Local Active Region Method --- p.200 / Chapter 12.4 --- Bayesian Feature Fusion with Sparse Prior --- p.201 / Chapter 12.5 --- The Integrated Framework for Feature Learning --- p.202 / Chapter 12.6 --- Experiments --- p.203 / Chapter 12.6.1 --- A Toy Problem --- p.203 / Chapter 12.6.2 --- Face Recognition --- p.204 / Chapter 13 --- Channel-based Maximum Effective Information --- p.209 / Chapter 13.1 --- Motivation and Overview --- p.209 / Chapter 13.2 --- Maximizing Effective Information --- p.211 / Chapter 13.2.1 --- Relation between Mutual Information and Classification --- p.211 / Chapter 13.2.2 --- Linear Projection and Metric --- p.212 / Chapter 13.2.3 --- Channel Model and Effective Information --- p.213 / Chapter 13.2.4 --- Parzen Window Approximation --- p.216 / Chapter 13.3 --- Parameter Optimization on Grassmann Manifold --- p.217 / Chapter 13.3.1 --- Grassmann Manifold --- p.217 / Chapter 13.3.2 --- Conjugate Gradient Optimization on Grassmann Manifold --- p.219 / Chapter 13.3.3 --- Computation of Gradient --- p.221 / Chapter 13.4 --- Experiments --- p.222 / Chapter 13.4.1 --- A Toy Problem --- p.222 / Chapter 13.4.2 --- Face Recognition --- p.223 / Chapter 14 --- Conclusion --- p.230
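As a point of reference for the linear methods surveyed in Chapters 4-5 and extended in Part II, a minimal two-class Fisher linear discriminant can be written in a few lines: the discriminant direction is proportional to the inverse within-class scatter matrix times the difference of class means. The sketch below is the generic textbook procedure on synthetic data, not the generalized, regularized or performance-driven variants developed in the thesis.

```python
# Minimal two-class Fisher LDA: w is proportional to Sw^{-1} (mu1 - mu0).
import numpy as np

rng = np.random.default_rng(4)
cov = [[1.0, 0.3], [0.3, 1.0]]
X0 = rng.multivariate_normal([0.0, 0.0], cov, 200)  # class 0 samples (synthetic)
X1 = rng.multivariate_normal([2.0, 1.0], cov, 200)  # class 1 samples (synthetic)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
w = np.linalg.solve(Sw, mu1 - mu0)    # discriminant direction
threshold = w @ (mu0 + mu1) / 2.0     # midpoint rule, assuming equal priors

pred = np.vstack([X0, X1]) @ w > threshold
labels = np.concatenate([np.zeros(len(X0)), np.ones(len(X1))])
print("training accuracy:", (pred == labels).mean())
```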
128

Monte Carlo simulation in risk estimation. / CUHK electronic theses & dissertations collection

January 2013 (has links)
This thesis mainly studies two classes of risk estimation problems: the estimation of the sensitivities of American option prices with respect to model parameters, and the estimation of portfolio risk. For these two classes of problems we propose correspondingly efficient Monte Carlo simulation methods, which constitute the two main parts of the thesis. / Chapter 2 is the first part. In this chapter, the sensitivity estimation problem for American options is recast as a more general estimation problem: when a stochastic optimization problem depends on certain model parameters, how should we estimate the sensitivity of its optimal objective value with respect to those parameters? Because the optimal decision may be discontinuous in the model parameters, the traditional infinitesimal perturbation analysis (IPA) approach cannot be applied directly. To overcome this difficulty, we propose a generalized IPA approach that yields unbiased sensitivity estimators. Our approach shows that pathwise differentiability with respect to the parameters is in fact not needed for sensitivity estimation, which is a new theoretical finding. The method is also very easy to apply to American option sensitivities: in practice the unbiased estimators can be embedded directly into popular American option pricing algorithms, so that the option price and its sensitivities to model parameters are obtained at the same time. Numerical experiments, including high-dimensional problems and a variety of stochastic process models, show that the estimators have significant computational advantages. Finally, we theoretically characterize the effect of approximate optimal exercise policies on the sensitivity estimates and give an upper bound on the error. / Chapter 3 is the second part. In this chapter we study portfolio risk estimation, which can be generalized to the problem of estimating the expectation of a nonlinear functional of a conditional expectation. For this class of estimation problems we propose a multilevel simulation method whose estimator is in fact a linear combination of simple nested estimators. The method is very easy to implement and can be applied widely across different problem structures. Theoretical analysis shows that it is suitable for problems of different dimensions and that its algorithmic complexity is lower than that of existing methods in the literature. Numerical experiments in both low- and high-dimensional settings verify the theoretical analysis. / This dissertation mainly consists of two parts: a generalized infinitesimal perturbation analysis (IPA) approach for American option sensitivities estimation and a multilevel Monte Carlo simulation approach for portfolio risk estimation. / In the first part, we develop efficient Monte Carlo methods for estimating American option sensitivities. The problem can be re-formulated as how to perform sensitivity analysis for a stochastic optimization problem when it has model uncertainty. We introduce a generalized IPA approach to resolve the difficulty caused by discontinuity of the optimal decision with respect to the underlying parameter. The unbiased price-sensitivity estimators yielded from this approach demonstrate significant advantages numerically in both high dimensional environments and various process settings. We can easily embed them into many of the most popular pricing algorithms without extra simulation effort to obtain sensitivities as a by-product of the option price. This generalized approach also casts new insights on how to perform sensitivity analysis using IPA: we do not need pathwise differentiability to apply it. Another contribution of this chapter is to investigate how the estimation quality of sensitivities will be affected by the quality of approximated exercise times. / In the second part, we propose a multilevel nested simulation approach to estimate the expectation of a nonlinear function of a conditional expectation, which has a direct application in portfolio risk estimation problems under various risk measures. Our estimator consists of a linear combination of several standard nested estimators. It is very simple to implement and universally applicable across various problem settings. The results of theoretical analysis show that the algorithmic complexities of our estimators are independent of the problem dimensionality and are better than other alternatives in the literature. Numerical experiments, in both low and high dimensional settings, verify our theoretical analysis. / Liu, Yanchu. / "December 2012." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 89-96). / Abstract also in Chinese. / Abstract --- p.i / Abstract in Chinese --- p.iii / Acknowledgements --- p.v / Contents --- p.vii / List of Tables --- p.ix / List of Figures --- p.xii / Chapter 1. --- Overview --- p.1 / Chapter 2. --- American Option Sensitivities Estimation via a Generalized IPA Approach --- p.4 / Chapter 2.1. --- Introduction --- p.4 / Chapter 2.2. --- Formulation of the American Option Pricing Problem --- p.10 / Chapter 2.3. --- Main Results --- p.14 / Chapter 2.3.1. --- A Generalized IPA Approach in the Presence of a Decision Variable --- p.16 / Chapter 2.3.2. --- Unbiased First-Order Sensitivity Estimators --- p.21 / Chapter 2.4. --- Implementation Issues and Error Analysis --- p.23 / Chapter 2.5. --- Numerical Results --- p.26 / Chapter 2.5.1. --- Effects of Dimensionality --- p.27 / Chapter 2.5.2. --- Performance under Various Underlying Processes --- p.29 / Chapter 2.5.3. --- Effects of Exercising Policies --- p.31 / Chapter 2.6. --- Conclusion Remarks and Future Work --- p.33 / Chapter 2.7. --- Appendix --- p.35 / Chapter 2.7.1. --- Proofs of the Main Results --- p.35 / Chapter 2.7.2. --- Likelihood Ratio Estimators --- p.43 / Chapter 2.7.3. --- Derivation of Example 2.3 --- p.49 / Chapter 3. --- Multilevel Monte Carlo Nested Simulation for Risk Estimation --- p.52 / Chapter 3.1. --- Introduction --- p.52 / Chapter 3.1.1. --- Examples --- p.53 / Risk Measurement of Financial Portfolios --- p.53 / Derivatives Pricing --- p.55 / Partial Expected Value of Perfect Information --- p.56 / Chapter 3.1.2. --- A Standard Nested Estimator --- p.57 / Chapter 3.1.3. --- Literature Review --- p.59 / Chapter 3.1.4. --- Summary of Our Contributions --- p.61 / Chapter 3.2. --- The Multilevel Approach --- p.63 / Chapter 3.2.1. --- Motivation --- p.63 / Chapter 3.2.2. --- Multilevel Construction --- p.65 / Chapter 3.2.3. --- Theoretical Analysis --- p.67 / Chapter 3.2.4. --- Further Improvement by Extrapolation --- p.69 / Chapter 3.3. --- Numerical Experiments --- p.72 / Chapter 3.3.1. --- Single Asset Setting --- p.73 / Chapter 3.3.2. --- Multiple Asset Setting --- p.74 / Chapter 3.4. --- Concluding Remarks --- p.77 / Chapter 3.5. --- Appendix: Technical Assumptions and Proofs of the Main Results --- p.79 / Bibliography --- p.89
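The quantity estimated in the second part is the expectation of a nonlinear function of a conditional expectation. The standard nested estimator sketched below (outer scenarios, inner conditional samples) is the building block; the thesis combines several such estimators, run at different inner sample sizes, into a multilevel estimator. The toy model, the payoff-style function and the independent two-level combination at the end are illustrative assumptions and do not reproduce the thesis's multilevel construction.

```python
# Standard nested Monte Carlo estimator of E[ f( E[X | Y] ) ] for a nonlinear f.
# Toy model: Y ~ N(0,1), X | Y ~ N(Y,1), f(z) = max(z, 0); all choices are illustrative.
import numpy as np

def nested_estimate(n_outer: int, n_inner: int, rng) -> float:
    y = rng.normal(0.0, 1.0, n_outer)                                   # outer scenarios
    x = rng.normal(loc=y[:, None], scale=1.0, size=(n_outer, n_inner))  # inner samples given each y
    inner_means = x.mean(axis=1)                                        # estimates of E[X | Y = y]
    return float(np.maximum(inner_means, 0.0).mean())                   # apply f, average scenarios

rng = np.random.default_rng(5)
crude = nested_estimate(n_outer=10_000, n_inner=100, rng=rng)

# Telescoping illustration of the multilevel idea: a cheap coarse level plus a sampled
# correction between a finer and a coarser level. (The thesis couples the levels to reduce
# variance; here the levels are sampled independently purely to show the structure.)
coarse = nested_estimate(10_000, 10, rng)
correction = nested_estimate(2_000, 100, rng) - nested_estimate(2_000, 10, rng)
print(f"standard nested: {crude:.4f}, two-level combination: {coarse + correction:.4f}")
```

The complexity advantage of the multilevel approach comes from the correction terms being cheap and low-variance when the levels are coupled, so that only a few outer scenarios are needed at the fine levels.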
129

Statistical and probabilistic methods for design of reinforced concrete structures

Kumar, T. S. S. January 2010 (has links)
Digitized by Kansas Correctional Industries
130

Statistical methods to account for different sources of bias in Genome-Wide association studies / Méthodes statistiques pour la prise en compte de différentes sources de biais dans les études d'association à grande échelle

Bouaziz, Matthieu 22 November 2012 (has links)
Genome-wide association studies have become a very powerful tool for detecting genetic variants associated with diseases. This doctoral thesis addresses several key aspects of the new computational and statistical problems that have emerged from such research. The results of genome-wide association studies have been criticized, in part, because of the bias induced by population stratification. We propose a comparison study of the existing strategies for dealing with this problem; their advantages and limitations are discussed under various population structure scenarios in order to offer practical guidelines. We then turn to the inference of population structure, which has many applications in genetic research. During this thesis we developed a new algorithm called SHIPS (Spectral Hierarchical clustering for the Inference of Population Structure). This algorithm was applied to a collection of simulated and real datasets, alongside many other algorithms used in practice, to compare their performances. Finally, the issue of multiple testing in these association studies is addressed at several levels. We give a general presentation of multiple-testing methods and discuss their validity for different study designs. We then focus on obtaining results that are interpretable at the gene level, which corresponds to a multiple-testing problem with dependent tests, and we discuss and analyse the different approaches dedicated to this purpose. / Genome-wide association studies have become powerful tools to detect genetic variants associated with diseases. This PhD thesis focuses on several key aspects of the new computational and methodological problems that have arisen with such research. The results of genome-wide association studies have been questioned, in part because of the bias induced by population stratification. Many strategies are available to account for population stratification; their advantages and limitations are highlighted under various population structure scenarios in order to propose practical guidelines. We then focus on the inference of population structure, which has many applications for genetic research. We have developed, and present in this manuscript, a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS). This algorithm was applied to simulated and real datasets, along with many other algorithms used in the field, to propose a comparison of their performances. Finally, the issue of multiple testing in genome-wide association studies is discussed on several levels. We propose a review of multiple-testing corrections and discuss their validity for different study settings. We then focus on deriving gene-wise interpretations of the findings, which corresponds to a multiple-testing problem with dependent tests, and we discuss the strategies available to obtain valid gene-disease association measures.
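For the multiple-testing part, the two corrections most often contrasted in genome-wide studies are the family-wise Bonferroni bound and the Benjamini-Hochberg false-discovery-rate procedure. The sketch below applies both to simulated p-values; it is a generic illustration and does not reproduce the gene-level, dependent-test methods analysed in the thesis.

```python
# Bonferroni and Benjamini-Hochberg corrections applied to simulated GWAS-style p-values.
import numpy as np

rng = np.random.default_rng(6)
m = 100_000
pvals = rng.uniform(size=m)              # null markers
pvals[:50] = rng.uniform(0.0, 1e-7, 50)  # a handful of truly associated markers (synthetic)

alpha = 0.05
bonferroni_hits = int(np.sum(pvals < alpha / m))  # family-wise error control

# Benjamini-Hochberg: largest k with p_(k) <= (k/m)*alpha; reject the k smallest p-values.
sorted_p = np.sort(pvals)
thresholds = alpha * np.arange(1, m + 1) / m
below = np.nonzero(sorted_p <= thresholds)[0]
bh_hits = int(below[-1] + 1) if below.size else 0

print(f"Bonferroni rejections: {bonferroni_hits}, Benjamini-Hochberg rejections: {bh_hits}")
```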
