1

Application of partial consistency for the semi-parametric models

Zhao, Jingxin 30 August 2017 (has links)
Semi-parametric models combine a relatively flexible structure with much of the simplicity of parametric analysis, and they have accordingly been discussed extensively in the literature. The concept of partial consistency was first raised by Neyman and Scott (1948): when infinitely many parameters are involved, consistent estimators remain attainable for the "structural" parameters, which are finite in number and govern infinitely many observations. Since a nonparametric model can be regarded as a parametric model with infinitely many parameters, a semi-parametric model can likewise be reformulated as an infinite-parameter model with a small set of structural parameters. Building on this idea, we develop several new methods for estimation and model checking in semi-parametric models. Partial consistency is applied through a "local average" approach: the nonparametric part is treated as piecewise constant, which generates the infinitely many parameters, while the structural parameters are the parametric part, the residual variance and so on. Because of the partial consistency phenomenon, classical statistical tools can then be used to obtain consistent estimators of the structural parameters, and the remaining parameters can be exploited to estimate the nonparametric part. In this thesis we take the varying coefficient model as the running example: the estimation of its functional coefficients is discussed and related model checking methods are presented. The proposed methods, for both estimation and testing, markedly reduce computational complexity while achieving satisfactory asymptotic properties, and the simulations we conducted support the asymptotic results with efficient and accurate finite-sample performance. Moreover, the local average method is easy to understand and can be applied flexibly to other types of models, leaving room for further development.

In Chapter 2, we introduce the local average method for estimating the functional coefficients of the varying coefficient model. As a typical semi-parametric model, the varying coefficient model is widely applied; it can be seen as a more flexible version of the classical linear model that works well when the regression coefficients do not stay constant. We also extend the local average method to the semi-varying coefficient model, which consists of a linear part and a varying coefficient part. The estimation procedures are developed and their statistical properties investigated, and extensive simulations together with a real data application study the performance of the proposed method.

Chapter 3 applies the local average method to variance estimation. Variance estimation is a fundamental problem in statistical modeling and plays an important role in inference, model selection, and estimation. We discuss the problem in several nonparametric and semi-parametric models. The proposed method avoids estimating the nonparametric function, reduces computational cost, and extends easily to more complex settings; asymptotic normality is established for the proposed local average estimators. Numerical simulations and a real data analysis illustrate its finite-sample performance.

Chapter 4 turns to the model checking problem, again using varying coefficient models as an example. A frequently asked question is whether an estimated coefficient is significant or genuinely "varying". The hypothesis tests in the literature usually require fitting the whole model, including the nuisance coefficients, so the procedure can be computationally intensive and time-consuming. We therefore propose several tests that avoid unnecessary function estimation. The proposed tests are easy to implement, their asymptotic null distributions are derived, and simulations illustrate their properties.
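The piecewise-constant "local average" idea described above lends itself to a short illustration. The sketch below is not the author's estimator, only a minimal version of the idea under stated assumptions: the index variable is cut into quantile bins (the bin count and binning scheme are arbitrary choices for the example) and the coefficients are fitted by ordinary least squares within each bin, so each bin contributes its own incidental constants while quantities such as the residual variance remain estimable from the whole sample.

```python
import numpy as np

def local_average_vcm(y, X, u, n_bins=20):
    """Piecewise-constant ("local average") estimate of the coefficient
    functions a_j(u) in the varying coefficient model
        y_i = sum_j a_j(u_i) * x_ij + e_i.
    Observations are grouped into bins of u; within each bin the coefficients
    are treated as constant and estimated by ordinary least squares."""
    y, X, u = np.asarray(y), np.asarray(X), np.asarray(u)
    edges = np.quantile(u, np.linspace(0.0, 1.0, n_bins + 1))
    bin_id = np.digitize(u, edges[1:-1])          # bin index 0 .. n_bins-1
    coefs = np.full((n_bins, X.shape[1]), np.nan)
    for b in range(n_bins):
        idx = bin_id == b
        if idx.sum() >= X.shape[1]:               # enough points for least squares
            coefs[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return edges, coefs

# Toy usage: one coefficient a(u) = sin(2*pi*u), recovered as a step function.
rng = np.random.default_rng(0)
n = 2000
u = rng.uniform(0.0, 1.0, n)
x = rng.normal(size=(n, 1))
y = np.sin(2.0 * np.pi * u) * x[:, 0] + 0.1 * rng.normal(size=n)
edges, coefs = local_average_vcm(y, x, u, n_bins=25)
```

With more observations per bin and more bins, the step function tracks the true coefficient curve more closely, which is the intuition behind the asymptotic results referred to in the abstract.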
2

Model selection criteria based on Kullback information measures for Weibull, logistic, and nonlinear regression frameworks

Kim, Hyun-Joo, January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 104-107). Also available on the Internet.
3

Finding the optimal dynamic anisotropy resolution for grade estimation improvement at Driefontein Gold Mine, South Africa

Mandava, Senzeni Maggie January 2016 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Science in Mining Engineering. February 2016 / Mineral Resource estimation provides an assessment of the quantity, quality, shape and grade distribution of a mineralised deposit. The process involves assessing the available data, creating geological and/or grade models of the deposit, performing statistical and geostatistical analyses, and determining appropriate grade interpolation methods. In grade estimation, grades are interpolated or extrapolated into a two- or three-dimensional resource block model of the deposit, using a search volume ellipsoid centred on each block to select the samples used for estimation. Traditionally, a globally orientated search ellipsoid is used throughout. Estimation can be improved if the direction and continuity of mineralisation are acknowledged by aligning the search ellipsoid accordingly; misaligning the ellipsoid by just a few degrees can affect the results. Representing grade continuity in undulating and folded structures is a particular challenge for accurate grade estimation. One solution is Dynamic Anisotropy, which allows the anisotropy rotation angles defining the search ellipsoid and variogram model to follow the trend of the mineralisation for each cell in the block model. This research report describes the application of Dynamic Anisotropy, with Ordinary Kriging as the estimation method, to a slightly undulating area lying on a gently folded limb of a syncline at Driefontein gold mine. In addition, the optimal Dynamic Anisotropy resolution that improves grade estimates is determined by running the estimation on various block model grid sizes. The geostatistical literature reviewed for this report highlights the importance of Dynamic Anisotropy in resource estimation, and the application to a real-life dataset puts theories and opinions about Dynamic Anisotropy to the test.
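As a rough, hypothetical illustration of what dynamic anisotropy involves (not the software workflow used in the report), the sketch below rotates a search ellipsoid to block-specific azimuth and dip angles and selects the samples that fall inside it. The rotation convention, anisotropy ranges and sample limits are assumptions made for the example, the per-block angles would in practice come from the modelled trend of the mineralisation, and the kriging step itself is omitted.

```python
import numpy as np

def rotation_matrix(azimuth_deg, dip_deg):
    """Rotation aligning the ellipsoid's axes with a local strike/dip
    (a simple Z-then-X rotation convention is assumed here)."""
    az, dp = np.radians(azimuth_deg), np.radians(dip_deg)
    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(dp), -np.sin(dp)],
                   [0.0, np.sin(dp),  np.cos(dp)]])
    return Rz @ Rx

def anisotropic_neighbours(block_xyz, samples_xyz, azimuth_deg, dip_deg,
                           ranges=(100.0, 60.0, 15.0), n_max=12):
    """Select the samples closest to a block in anisotropic distance, with the
    search ellipsoid rotated to the block's own (dynamic) anisotropy angles."""
    R = rotation_matrix(azimuth_deg, dip_deg)
    local = (samples_xyz - block_xyz) @ R            # offsets in ellipsoid frame
    d = np.sqrt(((local / np.asarray(ranges)) ** 2).sum(axis=1))
    order = np.argsort(d)[:n_max]
    return order[d[order] <= 1.0]                    # indices inside the ellipsoid

# Toy usage: one block whose local anisotropy follows a gently dipping plane.
rng = np.random.default_rng(1)
samples = rng.uniform(0.0, 200.0, size=(500, 3))
picked = anisotropic_neighbours(np.array([100.0, 100.0, 100.0]),
                                samples, azimuth_deg=35.0, dip_deg=12.0)
```

Changing the block model grid size changes how finely the azimuth and dip can vary from block to block, which is the "resolution" question the report investigates.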
4

Application of indicator kriging and conditional simulation in assessment of grade uncertainty in Hunters road magmatic sulphide nickel deposit in Zimbabwe

Chiwundura, Phillip January 2017 (has links)
A research project report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2017 / The assessment of local and spatial uncertainty associated with a regionalised variable, such as nickel grade at the Hunters Road magmatic sulphide deposit, is a critical element of resource estimation. The study focused on the application of Multiple Indicator Kriging (MIK) and Sequential Gaussian Simulation (SGS) to the estimation of recoverable resources and the assessment of grade uncertainty in Hunters Road's Western orebody. The Western orebody was divided into two domains, the Eastern and the Western, and was evaluated on the basis of 172 drill holes; MIK and SGS were performed using the Datamine Studio RM module. The combined Mineral Resource estimate for the Western orebody at a cut-off grade of 0.40%Ni is 32.30Mt at an average grade of 0.57%Ni, equivalent to 183kt of contained nickel metal. SGS results indicated low uncertainty for the Hunters Road nickel project, with a 90% probability that the average true grade above cut-off lies within +/-3% of the estimated block grade. The estimate of the mean based on SGS was 0.55%Ni and 0.57%Ni for the Western and Eastern domains respectively. MIK results were highly comparable with the SGS E-type estimates, while the most recent Ordinary Kriging (OK) estimates by BNC, dated May 2006, overstated the resource tonnage and understated the grade relative to the MIK estimates. It was concluded that MIK produced better estimates of recoverable resources than OK. However, since MIK produced only E-type estimates, post-processing of the "composite" conditional cumulative distribution function (ccdf) results with a suitable change-of-support algorithm, such as affine correction, is recommended. Although SGS provided a good measure of uncertainty around nickel grades, post-processing of the realisations in different software such as Isatis is recommended, together with combined simulation of both grade and tonnage. / XL2018
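A small sketch of how uncertainty figures like those quoted above (E-type estimates and probabilities relative to a cut-off) are typically derived from conditional simulation output. This is generic post-processing assumed for illustration, run on synthetic realizations, and is not the Datamine Studio RM procedure used in the study.

```python
import numpy as np

def sgs_block_summaries(realizations, cutoff=0.40):
    """Summarise per-block uncertainty from conditional simulation output.
    `realizations` has shape (n_realizations, n_blocks), each row one simulated
    grade field (e.g. %Ni). Returns the E-type estimate (mean over realizations)
    and the probability that each block grade exceeds the cut-off."""
    realizations = np.asarray(realizations)
    e_type = realizations.mean(axis=0)
    prob_above = (realizations > cutoff).mean(axis=0)
    return e_type, prob_above

# Toy usage: 100 synthetic realizations of 1000 block grades.
rng = np.random.default_rng(2)
sims = rng.lognormal(mean=np.log(0.55), sigma=0.2, size=(100, 1000))
e_type, p_above = sgs_block_summaries(sims, cutoff=0.40)
```

Comparing such E-type estimates with MIK or OK block estimates is the kind of comparison the abstract reports.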
5

Searching for the contemporary and temporal causal relations from data (Chinese parallel title: 数据中的时间因果关联分析, "Temporal causal relation analysis in data") / CUHK electronic theses & dissertations collection

January 2012 (has links)
Causal analysis has drawn a lot of attention because it provides deep insight into the relations between random events, and the graphical model is a dominant tool for representing causal relations. Under the graphical model framework, the causal relations implied in a data set are captured by a Bayesian network defined on that data set, and causal discovery is achieved by constructing a Bayesian network from the data. Bayesian network learning therefore plays an important role in causal relation discovery. In this thesis, we develop a Two-Phase Bayesian network learning algorithm: phase one learns Markov random fields from data, and phase two constructs Bayesian networks from the Markov random fields obtained. We show that the Two-Phase algorithm provides state-of-the-art accuracy, and that the techniques proposed in this work can easily be adopted by other Bayesian network learning algorithms. Furthermore, we show that the Two-Phase algorithm can be used for time series analysis by evaluating it against a series of time series causal learning algorithms, including VAR and SVAR. Its practical applicability is also demonstrated through empirical evaluation on a real-world data set.

We start by presenting a constraint-based Bayesian network learning framework that generalises the SGS algorithm [86]. We show that the key step in making Bayesian network learning efficient is restricting the search space of conditioning sets, which leads to the core of this thesis: the Two-Phase Bayesian network learning algorithm. Here we show that learning Bayesian networks from Markov random fields efficiently reduces the computational complexity and enhances the reliability of the algorithm. Besides proposing this Bayesian network learning algorithm, we use zero partial correlation as an indicator of conditional independence, show that partial correlation can be applied to arbitrary distributions provided the data are generated by linear models, and prove that the Gaussian distribution is a special case of the linear structural equation model. We then compare our Two-Phase algorithm to other state-of-the-art Bayesian network algorithms on several real-world Bayesian networks that are used as benchmarks by many related works.

Having built an efficient and accurate Bayesian network learning algorithm, we apply it to causal relation discovery in time series. First we show that the SVAR model is incapable of identifying contemporaneous causal order for Gaussian processes, because it fails to discover the structures faithful to the underlying distributions. We also develop a framework, distinct from existing works, for learning the true SVAR and VAR using Bayesian networks, and finally show its applicability to a real-world problem. / Wang, Zhenxing. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 184-195). / Abstract also in Chinese. / Contents: Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Causal Relation and Directed Graphical Model --- p.1 / Chapter 1.2 --- A Brief History of Bayesian Network Learning --- p.3 / Chapter 1.3 --- Some Important Issues for Causal Bayesian Network Learning --- p.5 / Chapter 1.3.1 --- Learning Bayesian network locally --- p.6 / Chapter 1.3.2 --- Conditional independence test --- p.7 / Chapter 1.3.3 --- Causation discovery for time series --- p.8 / Chapter 1.4 --- Road Map of the Thesis --- p.10 / Chapter 1.5 --- Summary of the Remaining Chapters --- p.12 / Chapter 2 --- Background Study --- p.14 / Chapter 2.1 --- Notations --- p.14 / Chapter 2.2 --- Formal Preliminaries --- p.15 / Chapter 2.3 --- Constraint-Based Bayesian Network Learning --- p.24 / Chapter 3 --- Two-Phase Bayesian Network Learning --- p.33 / Chapter 3.1 --- Two-Phase Bayesian Network Learning Algorithm --- p.35 / Chapter 3.1.1 --- Basic Two-Phase algorithm --- p.37 / Chapter 3.1.2 --- Two-Phase algorithm with Markov blanket information --- p.59 / Chapter 3.2 --- Correctness Proof and Complexity Analysis --- p.73 / Chapter 3.2.1 --- Correctness proof --- p.73 / Chapter 3.2.2 --- Complexity analysis --- p.81 / Chapter 3.3 --- Related Works --- p.83 / Chapter 3.3.1 --- Search-and-score algorithms --- p.84 / Chapter 3.3.2 --- Constraint-based algorithms --- p.85 / Chapter 3.3.3 --- Other algorithms --- p.86 / Chapter 4 --- Measuring Conditional Independence --- p.88 / Chapter 4.1 --- Formal Definition of Conditional Independence --- p.88 / Chapter 4.2 --- Measuring Conditional Independence --- p.96 / Chapter 4.2.1 --- Measuring independence with partial correlation --- p.96 / Chapter 4.2.2 --- Measuring independence with mutual information --- p.104 / Chapter 4.3 --- Non-Gaussian Distributions and Equivalent Class --- p.108 / Chapter 4.4 --- Heuristic CI Tests Under Monotone Faithfulness Condition --- p.116 / Chapter 5 --- Empirical Results of Two-Phase Algorithms --- p.125 / Chapter 5.1 --- Experimental Setup --- p.126 / Chapter 5.2 --- Structure Error After Each Phase of Two-Phase Algorithms --- p.129 / Chapter 5.3 --- Maximal and Average Sizes of Conditioning Sets --- p.131 / Chapter 5.4 --- Comparison of the Number of CI Tests Required by Dependency Analysis Approaches --- p.133 / Chapter 5.5 --- Reason for Which the Number of CI Tests Required Grows with Sample Size --- p.135 / Chapter 5.6 --- Two-Phase Algorithms on Linear Gaussian Data --- p.136 / Chapter 5.7 --- Two-Phase Algorithms on Linear Non-Gaussian Data --- p.139 / Chapter 5.8 --- Comparing Two-Phase Algorithms with Search-and-Score Algorithms and Lasso Regression --- p.142 / Chapter 6 --- Causal Mining in Time Series Data --- p.146 / Chapter 6.1 --- A Brief Review of Causation Discovery in Time Series --- p.146 / Chapter 6.2 --- Limitations of Constructing SVAR from VAR --- p.150 / Chapter 6.3 --- SVAR Being Incapable of Identifying Contemporaneous Causal Order for Gaussian Processes --- p.152 / Chapter 6.4 --- Estimating the SVARs by Bayesian Network Learning Algorithm --- p.157 / Chapter 6.4.1 --- Represent SVARs by Bayesian networks --- p.158 / Chapter 6.4.2 --- Getting back SVARs and VARs from Bayesian networks --- p.159 / Chapter 6.5 --- Experimental Results --- p.162 / Chapter 6.5.1 --- Experiment on artificial data --- p.162 / Chapter 6.5.2 --- Application in finance --- p.172 / Chapter 6.6 --- Comparison with Related Works --- p.174 / Chapter 7 --- Concluding Remarks --- p.178 / Bibliography --- p.184
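The zero-partial-correlation conditional independence test mentioned in the abstract is commonly implemented with Fisher's z-transform. The sketch below shows that standard construction as an illustration of the kind of CI test involved; it is assumed for the example and is not taken from the thesis itself.

```python
import numpy as np
from scipy import stats

def partial_corr_ci_test(data, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given X_cond via zero partial correlation.
    `data` is an (n, p) array; `cond` lists the conditioning columns.
    Returns (independent?, partial correlation, p-value)."""
    n = data.shape[0]
    cols = [i, j] + list(cond)
    corr = np.corrcoef(data[:, cols], rowvar=False)
    prec = np.linalg.pinv(corr)                     # inverse correlation matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1.0 + r) / (1.0 - r))         # Fisher z-transform
    se = 1.0 / np.sqrt(n - len(cond) - 3)
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(z) / se))
    return p_value > alpha, r, p_value

# Toy usage: X -> Z -> Y, so X and Y are dependent marginally but not given Z.
rng = np.random.default_rng(3)
x = rng.normal(size=5000)
z = x + 0.5 * rng.normal(size=5000)
y = z + 0.5 * rng.normal(size=5000)
d = np.column_stack([x, y, z])
print(partial_corr_ci_test(d, 0, 1, []))      # dependent
print(partial_corr_ci_test(d, 0, 1, [2]))     # conditionally independent
```

As the abstract notes, such a test is justified for arbitrary error distributions when the data are generated by a linear model, which is the setting the thesis analyses.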
6

Estimating the growth rate of harmful algal blooms using a model averaged method

Cohen, Margaret A. January 2009 (has links) (PDF)
Thesis (M.S.)--University of North Carolina Wilmington, 2009. / Title from PDF title page (January 19, 2010). Includes bibliographical references (p. 32-33).
7

Comparative analysis of ordinary kriging and sequential Gaussian simulation for recoverable reserve estimation at Kayelekera Mine

Gulule, Ellasy Priscilla 16 September 2016 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 2016 / Minimising the misclassification of ore and waste during grade control is of great importance to a mining operation. This research report compares two recoverable reserve estimation techniques, Sequential Gaussian Simulation (SGS) and Ordinary Kriging (OK), for ore classification at Kayelekera Uranium Mine. The comparison was performed on two data sets taken from the pit with different grade distributions, and the resulting estimates were examined to establish which method gives the more accurate results. Based on the profit-and-loss results and the grade-tonnage curves, the difference between the techniques is very small. It was concluded that the similarity arose because the SGS estimates were averages of 100 simulations, which turned out to be close to the Ordinary Kriging estimates, and because of the closely spaced intervals of the blast hole/sample data used. Whilst OK generally produced acceptable results like SGS, it did not adequately reproduce the local variability of grades. Consequently, if variability is not a major concern, for example when large blocks are to be mined, either technique can be used and will yield similar results. / M T 2016
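As an illustration of the grade-tonnage comparison described above, the sketch below computes tonnes and mean grade above a series of cut-offs. The grades, tonnages and cut-offs are entirely synthetic and stand in for block estimates from OK and for the average of SGS realizations; they are not the Kayelekera data.

```python
import numpy as np

def grade_tonnage(block_grades, block_tonnes, cutoffs):
    """Grade-tonnage curve: for each cut-off, the tonnes above cut-off and the
    tonnage-weighted mean grade of those tonnes."""
    block_grades = np.asarray(block_grades)
    block_tonnes = np.asarray(block_tonnes)
    tonnes, grades = [], []
    for c in cutoffs:
        sel = block_grades >= c
        t = block_tonnes[sel].sum()
        g = (block_grades[sel] * block_tonnes[sel]).sum() / t if t > 0 else np.nan
        tonnes.append(t)
        grades.append(g)
    return np.array(tonnes), np.array(grades)

# Toy comparison of two hypothetical block models over the same cut-off grid.
rng = np.random.default_rng(4)
ok_est = rng.lognormal(np.log(0.08), 0.5, size=2000)     # synthetic OK estimates
sgs_avg = ok_est * rng.normal(1.0, 0.02, size=2000)      # synthetic SGS averages
cutoffs = np.linspace(0.02, 0.20, 10)
t_ok, g_ok = grade_tonnage(ok_est, np.full(2000, 1000.0), cutoffs)
t_sg, g_sg = grade_tonnage(sgs_avg, np.full(2000, 1000.0), cutoffs)
```

Near-identical curves from the two sets of estimates correspond to the very small difference between techniques reported in the abstract.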
