About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using the piecewise exponential distribution to model the length of stay in a manpower planning system

Gillan, Catherine C. January 1997 (has links)
No description available.
2

The use of sample spacings in parameter estimation with applications

Thornton, K. M. January 1989 (has links)
No description available.
3

Use of computer generated holograms for optical alignment

Zehnder, Rene January 2011 (has links)
The necessity to align a multi-component null corrector that is used to test the 8.4 m off-axis parabola segments of the primary mirror of the Giant Magellan Telescope (GMT) initiated this work. Computer Generated Holograms (CGHs) are often a component of these null correctors, and their capability for multiple functionality allows them not only to contribute to the measurement wavefront but also to support the alignment. The CGH can also be used as an external tool to support the alignment of complex optical systems, although, for the applications shown in this work, the CGH is always a component of the optical system. In general, CGHs change the shape of the illuminating wavefront, which can then produce optical references. The uncertainty in the position of those references depends not only on the uncertainty in the position of the CGH with respect to the illuminating wavefront but also on the uncertainty in the shape of the illuminating wavefront. A complete analysis of the uncertainty in the position of the projected references therefore includes the illuminating optical system, which is typically an interferometer. This work provides the relationships needed to calculate the combined propagation of uncertainties on the projected optical references. This includes a geometrical-optics description of how light carries information about position and how diffraction may alter it. Any optical reference must be transferred to a mechanically tangible quantity for the alignment. The process of obtaining the position of spheres attached to the CGH relative to the CGH pattern is provided and applied to the GMT null corrector. Knowing the location of the spheres relative to the CGH pattern is equivalent to knowing their location with respect to the wavefront the pattern generates. This work provides various tools for the design and analysis of CGHs used for optical alignment, including the statistical foundation that goes with them.
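The combined propagation of uncertainties mentioned above reduces, in the simplest independent-error case, to a root-sum-square of sensitivity-weighted contributions. The sketch below illustrates only that generic GUM-style combination; the source names, sensitivities and magnitudes are hypothetical and are not taken from the thesis.

```python
import numpy as np

def combined_reference_uncertainty(contributions):
    """Root-sum-square combination of independent 1-sigma error sources
    contributing to the lateral position of a CGH-projected reference.
    contributions: {source: (sensitivity, sigma)}; all values hypothetical."""
    return np.sqrt(sum((c * s) ** 2 for c, s in contributions.values()))

# Hypothetical error budget for one projected optical reference
sources = {
    "CGH decentre":         (1.0, 2.0),   # 1 um of reference motion per um, 2 um uncertainty
    "illuminating tilt":    (0.5, 3.0),   # 0.5 um per urad of wavefront tilt, 3 urad uncertainty
    "interferometer focus": (0.1, 5.0),   # weak coupling, 5 um uncertainty
}
print(combined_reference_uncertainty(sources))  # ~2.5 um combined standard uncertainty
```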
4

Optimal designs for maximum likelihood estimation and factorial structure design

Chowdhury, Monsur 06 September 2016 (has links)
This thesis develops methodologies for the construction of various types of optimal designs with applications in maximum likelihood estimation and factorial structure design. The methodologies are applied to some real data sets throughout the thesis. We start with a broad review of optimal design theory, including various types of optimal designs along with some fundamental concepts. We then consider a class of optimization problems and determine the optimality conditions. An important tool is the directional derivative of a criterion function, and we study its properties extensively. In order to determine the optimal designs, we consider a class of multiplicative algorithms indexed by a function that satisfies certain conditions. The most important and popular design criterion in applications is D-optimality. We construct such designs for various regression models and develop some useful strategies for better convergence of the algorithms. The remainder of the thesis is devoted to some important applications of optimal design theory. We first consider the problem of determining maximum likelihood estimates of the cell probabilities under the hypothesis of marginal homogeneity in a square contingency table. We formulate the Lagrangian function, remove the Lagrange parameters by substitution, and then transform the problem into one of maximizing some functions of the cell probabilities simultaneously. We apply this approach to some real data sets, namely US migration data and data on the grading of unaided distance vision. We solve another estimation problem, determining the maximum likelihood estimates of the parameters of latent variable models such as the Bradley-Terry model, where the data come from a paired-comparison experiment. We approach this problem by treating the observed frequencies as binomially distributed and then expressing the binomial parameters in terms of optimal design weights. We apply this to a data set on American League Baseball teams. Finally, we construct some optimal structure designs for comparing test treatments with a control. We introduce different structure designs and establish their properties using the incidence and characteristic matrices. We also develop methods of obtaining optimal R-type structure designs and show how such designs are trace-, A- and MV-optimal. / October 2016
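A minimal sketch of the kind of multiplicative algorithm referred to above is given below for the D-optimality criterion, using the standard update in which each weight is multiplied by its normalized variance function (a Titterington/Silvey-type scheme on a fixed candidate grid). The grid, stopping rule and function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def d_optimal_weights(X, n_iter=500, tol=1e-9):
    """Multiplicative algorithm for D-optimal design weights on a fixed
    candidate set X (rows are candidate design points x_i of length p)."""
    n, p = X.shape
    w = np.full(n, 1.0 / n)                  # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)           # information matrix M(w) = sum_i w_i x_i x_i'
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)  # variance function d_i(w)
        w_new = w * d / p                    # multiplicative update; weights still sum to 1
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Quadratic regression on 21 equally spaced points in [-1, 1]:
# the D-optimal design puts weight ~1/3 on each of -1, 0 and 1.
x = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(x), x, x**2])
w = d_optimal_weights(X)
print(x[w > 1e-3], np.round(w[w > 1e-3], 3))
```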
5

A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater

Li, Cong January 2012 (has links)
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. The data are often recorded only as quantification limits when values fall below certain thresholds, and the thesis is aimed at handling that kind of left-censored data. The approach uses the EM algorithm to obtain maximum likelihood estimates, and estimates of the distributions of the censored trace-element data are then presented. Related simulations show that the estimation is acceptable.
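A minimal sketch of an EM iteration for left-censored data of this kind is given below, assuming a plain normal model and known quantification limits (in practice trace-element concentrations would usually be log-transformed first); the function name and defaults are illustrative, not the thesis code.

```python
import numpy as np
from scipy.stats import norm

def em_left_censored_normal(obs, limits, n_iter=200, tol=1e-8):
    """EM estimates of (mu, sigma) for a normal sample in which some values
    are left-censored and only their quantification limits are recorded.
    obs: fully observed values; limits: limits of the censored values."""
    obs, limits = np.asarray(obs, float), np.asarray(limits, float)
    n = obs.size + limits.size
    mu, sigma = obs.mean(), obs.std()            # crude starting values
    for _ in range(n_iter):
        beta = (limits - mu) / sigma
        lam = norm.pdf(beta) / norm.cdf(beta)    # inverse Mills ratio
        # E-step: conditional moments of each censored value given X < limit
        e1 = mu - sigma * lam
        e2 = sigma**2 * (1 - beta * lam - lam**2) + e1**2
        # M-step: update mean and variance from the completed sufficient statistics
        mu_new = (obs.sum() + e1.sum()) / n
        sigma_new = np.sqrt((np.sum(obs**2) + e2.sum()) / n - mu_new**2)
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma
```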
6

Towards smooth particle filters for likelihood estimation with multivariate latent variables

Lee, Anthony 11 1900 (has links)
In parametrized continuous state-space models, one can obtain estimates of the likelihood of the data for fixed parameters via the Sequential Monte Carlo methodology. Unfortunately, even if the likelihood is continuous in the parameters, the estimates produced by practical particle filters are not, even when common random numbers are used for each filter. This is because the same resampling step which drastically reduces the variance of the estimates also introduces discontinuities in the particles that are selected across filters when the parameters change. When the state variables are univariate, a method exists that gives an estimator of the log-likelihood that is continuous in the parameters. We present a non-trivial generalization of this method using tree-based o(N²) (and as low as O(N log N)) resampling schemes that induce significant correlation amongst the selected particles across filters. In turn, this reduces the variance of the difference between the likelihood evaluated for different values of the parameters and the resulting estimator is considerably smoother than naively running the filters with common random numbers. Importantly, in practice our methods require only a change to the resample operation in the SMC framework without the addition of any extra parameters and can therefore be used for any application in which particle filters are already used. In addition, excepting the optional use of interpolation in the schemes, there are no regularity conditions for their use although certain conditions make them more advantageous. In this thesis, we first introduce the relevant aspects of the SMC methodology to the task of likelihood estimation in continuous state-space models and present an overview of work related to the task of smooth likelihood estimation. Following this, we introduce theoretically correct resampling schemes that cannot be implemented and the practical tree-based resampling schemes that were developed instead. After presenting the performance of our schemes in various applications, we show that two of the schemes are asymptotically consistent with the theoretically correct but unimplementable methods introduced earlier. Finally, we conclude the thesis with a discussion.
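For context, the sketch below is the naive baseline discussed in the abstract: a bootstrap particle filter whose log-likelihood estimate uses common random numbers (a fixed seed), with ordinary systematic resampling, which is exactly the step that breaks continuity in the parameters. The scalar linear-Gaussian model, argument names and seed handling are illustrative assumptions; the thesis's tree-based smooth resampling schemes are not reproduced here.

```python
import numpy as np

def pf_loglik(y, phi, sigma_v, sigma_w, n_particles=200, seed=0):
    """Bootstrap particle filter log-likelihood estimate for the model
        x_t = phi * x_{t-1} + v_t,  v_t ~ N(0, sigma_v^2)
        y_t = x_t + w_t,            w_t ~ N(0, sigma_w^2)."""
    rng = np.random.default_rng(seed)        # common random numbers across calls
    N = n_particles
    x = rng.normal(0.0, sigma_v, N)          # initial particle cloud
    loglik = 0.0
    for y_t in np.asarray(y, float):
        x = phi * x + sigma_v * rng.normal(size=N)                    # propagate
        logw = -0.5 * ((y_t - x) / sigma_w) ** 2 - np.log(sigma_w * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                # log p(y_t | y_1:t-1)
        # systematic resampling: the discontinuity-inducing step
        u = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(w / w.sum()), u), N - 1)
        x = x[idx]
    return loglik

# Simulate data from the model and evaluate the estimate at two nearby parameter
# values; even with the same seed the estimate is not smooth in phi.
rng = np.random.default_rng(1)
x_true, y = 0.0, []
for _ in range(100):
    x_true = 0.8 * x_true + rng.normal(0.0, 0.5)
    y.append(x_true + rng.normal())
print(pf_loglik(y, 0.80, 0.5, 1.0), pf_loglik(y, 0.81, 0.5, 1.0))
```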
7

Communicating risk in intelligence forecasts: The consumer's perspective

Dieckmann, Nathan F. 12 1900 (has links)
xv, 178 p. : ill. A print copy of this title is available through the UO Libraries under the call number: KNIGHT HM1101 .D54 2007 / The main goal of many political and intelligence forecasts is to effectively communicate risk information to decision makers (i.e., consumers). Standard reporting most often consists of a narrative discussion of relevant evidence concerning a threat, and rarely involves numerical estimates of uncertainty (e.g., a 5% chance). It is argued that numerical estimates of uncertainty will lead to more accurate representations of risk and improved decision making on the part of intelligence consumers. Little work has focused on how well consumers understand and use forecasts that include numerical estimates of uncertainty. Participants were presented with simulated intelligence forecasts describing potential terrorist attacks. These forecasts consisted of a narrative summary of the evidence related to the attack and numerical estimates of likelihood and potential harm. The primary goals were to explore how the structure of the narrative summary, the format of likelihood information, and the numerical ability (numeracy) of consumers affected perceptions of intelligence forecasts. Consumers perceived forecasts with numerical estimates of likelihood and potential harm as more useful than forecasts with only a narrative evidence summary. However, consumers' risk and likelihood perceptions were affected more strongly by the narrative evidence summary than by the stated likelihood information. These results show that even "precise" numerical estimates of likelihood are not necessarily evaluable by consumers and that perceptions of likelihood are affected by supporting narrative information. Numeracy also moderated the effects of stated likelihood and the narrative evidence summary: consumers higher in numeracy were more likely to use the stated likelihood information, and consumers lower in numeracy were more likely to use the narrative evidence to inform their judgments. The moderating effect of likelihood format and consumers' perceptions of forecasts in hindsight are also explored. Explicit estimates of uncertainty are not necessarily useful to all intelligence consumers, particularly when presented with supporting narrative evidence. How consumers respond to intelligence forecasts depends on the structure of any supporting narrative information, the format of the explicit uncertainty information, and the numerical ability of the individual consumer. Forecasters should be sensitive to these three issues when presenting forecasts to consumers. / Adviser: Paul Slovic
8

Estimation of long-range dependence

Vivero, Oskar January 2010 (has links)
A set of observations from a random process which exhibits correlations that decay more slowly than an exponential rate is regarded as long-range dependent. This phenomenon has stimulated great interest in the scientific community as it appears in a wide range of areas of knowledge. For example, this property has been observed in data pertaining to electronics, econometrics, hydrology and biomedical signals. There exist several estimation methods for finding model parameters that help explain a set of observations exhibiting long-range dependence. Among these methods, maximum likelihood is attractive, given its desirable statistical properties such as asymptotic consistency and efficiency. However, its computational complexity makes the implementation of maximum likelihood prohibitive. This thesis presents a group of computationally efficient estimators based on the maximum likelihood framework. The thesis consists of two main parts. The first part is devoted to developing a computationally efficient alternative to the maximum likelihood estimate. This alternative is based on the circulant embedding concept and is shown to maintain the desirable statistical properties of maximum likelihood. Interesting results are obtained by analysing the circulant embedding estimate. In particular, this thesis shows that the maximum likelihood based methods are ill-conditioned: the estimators' performance will deteriorate significantly when the set of observations is corrupted by errors. The second part of this thesis focuses on developing computationally efficient estimators with improved performance under the presence of errors in the observations.
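As an illustration of the kind of computationally cheap, FFT-based alternative to exact maximum likelihood that this problem area motivates, the sketch below computes a Whittle estimate of the memory parameter d under an assumed ARFIMA(0, d, 0)-type spectrum. It is offered only as a hedged example of the problem class; it is not the circulant-embedding estimator developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Whittle estimate of the long-memory parameter d, assuming the
    ARFIMA(0, d, 0) spectral shape g(l) = |2 sin(l/2)|^(-2d)."""
    x = np.asarray(x, float)
    n = x.size
    k = np.arange(1, n // 2 + 1)
    lam = 2.0 * np.pi * k / n                                     # Fourier frequencies
    I = np.abs(np.fft.rfft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2.0 * np.pi * n)

    def profile_obj(d):
        g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)
        # scale parameter profiled out of the Whittle likelihood
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    return minimize_scalar(profile_obj, bounds=(-0.49, 0.49), method="bounded").x

# White noise has no long-range dependence, so the estimate should be near 0:
print(whittle_d(np.random.default_rng(0).normal(size=4096)))
```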
