1 |
Using the piecewise exponential distribution to model the length of stay in a manpower planning system / Gillan, Catherine C. January 1997
No description available.
|
2 |
The use of sample spacings in parameter estimation with applications / Thornton, K. M. January 1989
No description available.
|
3 |
USE OF COMPUTER GENERATED HOLOGRAMS FOR OPTICAL ALIGNMENT / Zehnder, Rene January 2011
The necessity to align a multi-component null corrector used to test the 8.4 m off-axis parabola segments of the primary mirror of the Giant Magellan Telescope (GMT) initiated this work. Computer Generated Holograms (CGHs) are often a component of these null correctors, and their capability for multiple functionality allows them not only to contribute to the measurement wavefront but also to support the alignment. The CGH can also be used as an external tool to support the alignment of complex optical systems, although, for the applications shown in this work, the CGH is always a component of the optical system. In general, CGHs change the shape of the illuminating wavefront, which can then produce optical references. The uncertainty of position of those references depends not only on the uncertainty of position of the CGH with respect to the illuminating wavefront but also on the uncertainty of the shape of the illuminating wavefront. A complete analysis of the uncertainty of the position of the projected references therefore includes the illuminating optical system, which is typically an interferometer. This work provides the relationships needed to calculate the combined propagation of uncertainties on the projected optical references. This includes a geometrical-optics description of how light carries information about position and how diffraction may alter it. Any optical reference must be transferred to a mechanically tangible quantity for the alignment. The process to obtain the position of spheres relative to the CGH pattern, where the spheres are attached to the CGH, is provided and applied to the GMT null corrector. Knowing the location of the spheres relative to the CGH pattern is equivalent to knowing the location of the spheres with respect to the wavefront the pattern generates. This work provides various tools for the design and analysis of CGHs for optical alignment, including the statistical foundation that goes with them.
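The full propagation relationships are derived in the thesis; as a rough illustration only, independent 1-sigma contributions to the position of a projected reference combine in quadrature. The contribution values below are hypothetical, not numbers from the GMT null corrector.

```python
import numpy as np

# Hypothetical 1-sigma contributions (micrometres) to the lateral position of one
# projected optical reference: CGH placement, illuminating-wavefront shape, and
# the transfer to the mechanical reference sphere.  Assuming independence, the
# combined standard uncertainty is the root sum of squares -- a crude stand-in
# for the full propagation derived in the thesis.
contributions_um = np.array([1.5, 2.0, 1.0])
combined_um = np.sqrt(np.sum(contributions_um ** 2))
print(round(combined_um, 2))   # ~2.69 micrometres combined
```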
|
4 |
Optimal designs for maximum likelihood estimation and factorial structure design / Chowdhury, Monsur 06 September 2016
This thesis develops methodologies for the construction of various types of optimal designs with applications in maximum likelihood estimation and factorial structure design. The methodologies are applied to some real data sets throughout the thesis.
We start with a broad review of optimal design theory including various types of optimal designs along with some fundamental concepts. We then consider a class of optimization problems and determine the optimality conditions. An important tool is the directional derivative of a criterion function. We study extensively the properties of the directional derivatives. In order to determine the optimal designs, we consider a class of multiplicative algorithms indexed by a function, which satisfies certain conditions. The most important and popular design criterion in applications is D-optimality. We construct such designs for various regression models and develop some useful strategies for better convergence of the algorithms.
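As an illustration of the multiplicative weight updates discussed above, the following is a minimal sketch of the classical multiplicative algorithm for a D-optimal design on a finite candidate set. The quadratic regression model and the grid of candidate points are assumptions for the example, not the specific models or algorithm variants treated in the thesis.

```python
import numpy as np

def d_optimal_weights(F, n_iter=1000, tol=1e-9):
    """Multiplicative algorithm for an approximate D-optimal design.

    F : (n, p) array whose rows are the regression vectors f(x_i)
        of n candidate design points.
    Returns the design weights w (length n, summing to 1).
    """
    n, p = F.shape
    w = np.full(n, 1.0 / n)            # start from the uniform design
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)     # information matrix M(w) = sum_i w_i f_i f_i^T
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)  # variance function d(x_i, w)
        w_new = w * d / p              # multiplicative update; sums to 1 since sum_i w_i d_i = p
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Quadratic regression on 21 equally spaced points in [-1, 1] (illustrative candidates).
x = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(x), x, x**2])
w = d_optimal_weights(F)
print(np.round(w[w > 1e-3], 3), x[w > 1e-3])   # mass concentrates near -1, 0, 1
```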
The remainder of the thesis is devoted to some important applications of optimal design theory. We first consider the problem of determining maximum likelihood estimates of the cell probabilities under the hypothesis of marginal homogeneity in a square contingency table. We formulate the Lagrangian function and remove the Lagrange parameters by substitution. We then transform the problem into one of maximizing some functions of the cell probabilities simultaneously. We apply this approach to some real data sets, namely US migration data and data on grading of unaided distance vision. We then solve another estimation problem: determining the maximum likelihood estimates of the parameters of latent variable models such as the Bradley-Terry model, where the data come from a paired-comparison experiment. We approach this problem by treating the observed frequencies as binomially distributed and then expressing the binomial parameters in terms of optimal design weights. We apply this approach to a data set on American League baseball teams.
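A minimal sketch of fitting the Bradley-Terry model by the standard iterative (MM) scheme from a matrix of win counts; the four-team win matrix below is hypothetical, and the design-weight reparametrization used in the thesis is not reproduced here.

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=500, tol=1e-10):
    """MM iteration for Bradley-Terry strengths.

    wins[i, j] = number of times team i beat team j.
    Returns strengths pi (normalized to sum to 1); the fitted
    probability that i beats j is pi[i] / (pi[i] + pi[j]).
    """
    t = wins.shape[0]
    games = wins + wins.T              # total games between each pair
    w = wins.sum(axis=1)               # total wins of each team
    pi = np.ones(t) / t
    for _ in range(n_iter):
        denom = np.array([
            sum(games[i, j] / (pi[i] + pi[j]) for j in range(t) if j != i)
            for i in range(t)
        ])
        pi_new = w / denom             # standard MM update for the Bradley-Terry likelihood
        pi_new /= pi_new.sum()
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi

# Hypothetical 4-team win matrix (not the baseball data used in the thesis).
wins = np.array([[0, 7, 6, 8],
                 [5, 0, 7, 6],
                 [6, 5, 0, 7],
                 [4, 6, 5, 0]])
print(np.round(bradley_terry_mm(wins), 3))
```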
Finally, we construct some optimal structure designs for comparing test treatments with a control. We introduce different structure designs and establish their properties using the incidence and characteristic matrices. We also develop methods of obtaining optimal R-type structure designs and show how such designs are trace-, A- and MV-optimal. / October 2016
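The R-type structure designs and their incidence and characteristic matrices are specific to the thesis, but the A- and MV-criteria for test-treatment-versus-control comparisons can be illustrated generically: given the information matrix of the treatment effects, evaluate the variance matrix of the contrasts with the control. The replication pattern in the example is hypothetical.

```python
import numpy as np

def control_contrast_criteria(M):
    """A- and MV-criteria for comparing test treatments with a control.

    M : (v, v) information matrix for the treatment effects, with the
        control indexed as treatment 0.  The variance matrix of the
        contrasts tau_i - tau_0 (i = 1..v-1) is C M^+ C'; the A-criterion
        is its trace and the MV-criterion its largest diagonal element.
    """
    v = M.shape[0]
    C = np.hstack([-np.ones((v - 1, 1)), np.eye(v - 1)])   # rows e_i - e_0
    V = C @ np.linalg.pinv(M) @ C.T
    return np.trace(V), np.max(np.diag(V))

# Toy information matrix for one control and three test treatments
# (hypothetical replication pattern, not a design from the thesis).
r = np.array([6, 3, 3, 3])                 # control replicated more heavily
M = np.diag(r) - np.outer(r, r) / r.sum()  # information matrix of a completely randomized design
A, MV = control_contrast_criteria(M)
print(round(A, 3), round(MV, 3))           # 1.5 and 0.5 for this replication pattern
```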
|
5 |
A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater / Li, Cong January 2012
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. Measurements below certain values are usually recorded only as quantification limits, and this thesis is aimed at handling such data. It does so by using the EM algorithm to obtain maximum likelihood estimates, and the distributions of the censored trace-element data are then estimated. Related simulations show that the estimation is acceptable.
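A minimal sketch of one EM iteration scheme for data left-censored at a detection limit, assuming a normal model on the log scale (matching lognormal-type concentrations). The data and detection limit below are simulated, not taken from the SGU database.

```python
import numpy as np
from scipy.stats import norm

def em_left_censored_normal(obs, n_cens, dl, n_iter=200, tol=1e-8):
    """EM estimates of (mu, sigma) for normal data left-censored at dl.

    obs    : observed (uncensored) values
    n_cens : number of observations reported only as "< dl"
    dl     : detection limit
    """
    obs = np.asarray(obs, float)
    n = obs.size + n_cens
    mu, sigma = obs.mean(), obs.std(ddof=0) + 1e-6
    for _ in range(n_iter):
        a = (dl - mu) / sigma
        lam = norm.pdf(a) / norm.cdf(a)                   # inverse Mills ratio for X < dl
        m1 = mu - sigma * lam                             # E[X   | X < dl]
        m2 = mu**2 + sigma**2 - sigma * (dl + mu) * lam   # E[X^2 | X < dl]
        mu_new = (obs.sum() + n_cens * m1) / n
        ss = np.sum((obs - mu_new) ** 2) + n_cens * (m2 - 2 * mu_new * m1 + mu_new**2)
        sigma_new = np.sqrt(ss / n)
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma

# Log-scale example with a made-up detection limit: lognormal concentrations
# censored at exp(dl), estimated on the log scale.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=1.0, size=500)
dl = 0.0
print(em_left_censored_normal(x[x >= dl], int(np.sum(x < dl)), dl))
```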
|
6 |
Estimation of long-range dependence / Vivero, Oskar January 2010
A set of observations from a random process which exhibit correlations that decay slower than an exponential rate is regarded as long-range dependent. This phenomenon has stimulated great interest in the scientific community as it appears in a wide range of areas of knowledge. For example, this property has been observed in data pertaining to electronics, econometrics, hydrology and biomedical signals.

There exist several estimation methods for finding model parameters that help explain the set of observations exhibiting long-range dependence. Among these methods, maximum likelihood is attractive, given its desirable statistical properties such as asymptotic consistency and efficiency. However, its computational complexity makes the implementation of maximum likelihood prohibitive.

This thesis presents a group of computationally efficient estimators based on the maximum likelihood framework. The thesis consists of two main parts. The first part is devoted to developing a computationally efficient alternative to the maximum likelihood estimate. This alternative is based on the circulant embedding concept and it is shown to maintain the desirable statistical properties of maximum likelihood.

Interesting results are obtained by analysing the circulant embedding estimate. In particular, this thesis shows that the maximum likelihood based methods are ill-conditioned; the estimators' performance will deteriorate significantly when the set of observations is corrupted by errors. The second part of this thesis focuses on developing computationally efficient estimators with improved performance under the presence of errors in the observations.
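The circulant-embedding estimator is the thesis's own construction; as a simplified stand-in, the sketch below shows a Whittle-type (frequency-domain) approximation to the Gaussian likelihood for an ARFIMA(0, d, 0) spectrum, with the innovation variance profiled out. It illustrates likelihood-based estimation of the memory parameter d, not the estimators developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Whittle-type estimate of the memory parameter d for ARFIMA(0, d, 0).

    Minimizes the profiled Whittle objective
        log( mean_j I(l_j) / g_j(d) ) + mean_j log g_j(d),
    where I is the periodogram and g_j(d) = |2 sin(l_j / 2)|^(-2d)
    is the spectral shape (innovation variance profiled out).
    """
    x = np.asarray(x, float)
    n = x.size
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)

    def objective(d):
        g = np.abs(2 * np.sin(freqs / 2)) ** (-2 * d)
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    res = minimize_scalar(objective, bounds=(-0.49, 0.49), method='bounded')
    return res.x

# Sanity check on white noise, whose true memory parameter is d = 0.
rng = np.random.default_rng(0)
print(round(whittle_d(rng.normal(size=4096)), 3))   # should be close to 0
```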
|
7 |
The covariance structure of conditional maximum likelihood estimates / Strasser, Helmut 11 1900 (PDF)
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml-estimates. The approximation is stated as a limit theorem when the number of item parameters goes to infinity. The results contain precise mathematical information on the order of approximation.

The results enable the analysis of the covariance structure of cml-estimates when the number of items is large. Let us give a rough picture. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml-estimation when the distribution of the subject parameter is known. Except for very small numbers n of item parameters, the variances are hardly affected by n. The covariances are more or less negligible when the number of item parameters is large. Although this picture is intuitively not surprising, it has to be established in precise mathematical terms. This has been done in the present paper.

The paper is based on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials. The mathematical background is Edgeworth expansions for the central limit theorem. These previous results are the basis of approximations for the Fisher information matrices of cml-estimates. The main results of the present paper are concerned with the approximation of the covariance matrices.

Numerical illustrations of the results and numerical experiments based on the results are presented in Strasser [6].
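A minimal sketch of the conditional maximum likelihood machinery the paper builds on: the elementary symmetric functions of the item parameters and the conditional log-likelihood of the Rasch model given the raw scores. The simulated item parameters and sample size are illustrative only; the covariance approximations of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def elem_sym(eps):
    """Elementary symmetric functions gamma_0..gamma_k of eps_1..eps_k."""
    g = np.array([1.0])
    for e in eps:
        g = np.concatenate([g, [0.0]]) + e * np.concatenate([[0.0], g])
    return g

def cml_neg_loglik(beta_free, X):
    """Negative conditional log-likelihood of the Rasch model.

    X : (persons x items) 0/1 response matrix.  The first item parameter
    is fixed to 0 for identifiability; beta_free holds the remaining ones.
    """
    beta = np.concatenate([[0.0], beta_free])
    eps = np.exp(-beta)
    gamma = elem_sym(eps)
    scores = X.sum(axis=1).astype(int)
    # log P(pattern | raw score r) = -sum_i x_i beta_i - log gamma_r
    return np.sum(X @ beta) + np.sum(np.log(gamma[scores]))

# Simulate a small Rasch data set (illustrative parameters only).
rng = np.random.default_rng(2)
true_beta = np.array([0.0, -0.5, 0.3, 0.8, -1.0])
theta = rng.normal(size=1000)
P = 1 / (1 + np.exp(-(theta[:, None] - true_beta[None, :])))
X = (rng.uniform(size=P.shape) < P).astype(float)

fit = minimize(cml_neg_loglik, np.zeros(len(true_beta) - 1), args=(X,), method='BFGS')
print(np.round(fit.x, 2))   # estimates of the remaining item parameters, with beta_1 fixed to 0
```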
|
8 |
Maximum likelihood estimation of phylogenetic tree with evolutionary parameters / Wang, Qiang 19 May 2004
No description available.
|
9 |
Multiple imputation in the presence of a detection limit, with applications : an empirical approach / Liebenberg, Shawn Carl January 2014
Scientists often encounter unobserved or missing measurements that are typically reported as less than a fixed detection limit. This especially occurs in the environmental sciences, where detection of low exposures is not possible due to limitations of the measuring instrument, and the resulting data are often referred to as type I and II left-censored data. Observations lying below this detection limit are therefore often ignored, or 'guessed', because they cannot be measured accurately. However, reliable estimates of the population parameters are nevertheless required to perform statistical analysis. The problem of dealing with values below a detection limit becomes increasingly complex when a large number of observations are present below this limit. Researchers therefore have an interest in developing statistically robust estimation procedures for dealing with left- or right-censored data sets (Singh and Nocerino, 2002). This study focuses on several main components regarding the problems mentioned above. The imputation of censored data below a fixed detection limit is studied, particularly using the maximum likelihood procedure of Cohen (1959), and several variants thereof, in combination with four new variations of the multiple imputation concept found in the literature. Furthermore, the focus also falls strongly on estimating the density of the resulting imputed, 'complete' data set by applying various kernel density estimators. It should be noted that bandwidth selection issues are not of importance in this study and are left for further research. In this study, the maximum likelihood estimation method of Cohen (1959) will be compared with several variant methods, to establish which of these maximum likelihood estimation procedures for censored data estimates the population parameters of three chosen Lognormal distributions most reliably in terms of well-known discrepancy measures. These methods will be implemented in combination with four new multiple imputation procedures, respectively, to assess which of these nonparametric methods is most effective at imputing the 12 censored values below the detection limit with regard to the global discrepancy measures mentioned above. Several variations of the Parzen-Rosenblatt kernel density estimate will be fitted to the completed, filled-in data sets obtained from the previous methods, to establish which is the preferred data-driven method to estimate these densities. The primary focus of the current study will therefore be the performance of the four chosen multiple imputation methods, as well as the recommendation of methods and procedural combinations to deal with data in the presence of a detection limit. An extensive Monte Carlo simulation study was performed to compare the various methods and procedural combinations. Conclusions and recommendations regarding the best of these methods and combinations are made based on the study's results. / MSc (Statistics), North-West University, Potchefstroom Campus, 2014
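A minimal sketch of the two building blocks combined in the study: maximum likelihood for left-censored lognormal data (here by direct numerical maximization of the censored likelihood, used as a stand-in for Cohen's (1959) estimator and its variants) and a single imputation pass that draws the censored values from the fitted truncated distribution; repeating the draw yields multiple imputations. The data and detection limit below are made up.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_lognormal_mle(obs, n_cens, dl):
    """ML estimates of (mu, sigma) of log X for lognormal X left-censored at dl."""
    y, log_dl = np.log(obs), np.log(dl)

    def neg_loglik(p):
        mu, log_sigma = p
        sigma = np.exp(log_sigma)
        # observed values contribute the normal density of log X,
        # censored values contribute P(X < dl) = Phi((log dl - mu) / sigma)
        return -(np.sum(norm.logpdf(y, mu, sigma))
                 + n_cens * norm.logcdf(log_dl, mu, sigma))

    res = minimize(neg_loglik, x0=[y.mean(), np.log(y.std() + 1e-6)], method='Nelder-Mead')
    return res.x[0], np.exp(res.x[1])

def impute_censored(n_cens, dl, mu, sigma, rng):
    """Draw censored values from the fitted lognormal truncated to (0, dl)."""
    u = rng.uniform(0, norm.cdf(np.log(dl), mu, sigma), size=n_cens)
    return np.exp(norm.ppf(u, mu, sigma))

# Made-up example: lognormal sample censored at a detection limit of 1.0.
rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.2, sigma=0.8, size=300)
dl = 1.0
obs, n_cens = x[x >= dl], int(np.sum(x < dl))
mu, sigma = censored_lognormal_mle(obs, n_cens, dl)
completed = np.concatenate([obs, impute_censored(n_cens, dl, mu, sigma, rng)])
print(round(mu, 2), round(sigma, 2), completed.size)
```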
|