About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using the piecewise exponential distribution to model the length of stay in a manpower planning system

Gillan, Catherine C. January 1997
No description available.
2

The use of sample spacings in parameter estimation with applications

Thornton, K. M. January 1989
No description available.
3

Aspects of Composite Likelihood Estimation and Prediction

Xu, Ximing 08 January 2013
A composite likelihood is usually constructed by multiplying a collection of lower-dimensional marginal or conditional densities. In recent years, composite likelihood methods have received increasing interest for modeling complex data arising from various application areas, where the full likelihood function is analytically unknown or computationally prohibitive due to the structure of dependence, the dimension of the data, or the presence of nuisance parameters. In this thesis we investigate some theoretical properties of the maximum composite likelihood estimator (MCLE). In particular, we obtain the limit of the MCLE in a general setting and set out a framework for understanding the notion of robustness in the context of composite likelihood inference. We also study improving the efficiency of a composite likelihood by incorporating additional component likelihoods, or by using component likelihoods of higher dimension. We show through illustrative examples that such strategies do not always work and may impair the efficiency. We also show that the MCLE of the parameter of interest can be less efficient when the nuisance parameters are known than when they are unknown.

In addition to the theoretical study of composite likelihood estimation, we explore the possibility of using composite likelihood for predictive inference in computer experiments. The Gaussian process model is widely used to build statistical emulators for computer experiments. However, when the number of trials is large, both estimation and prediction based on a Gaussian process can be computationally intractable due to the dimension of the covariance matrix. To address this problem, we propose prediction methods based on different composite likelihood functions, which do not require the evaluation of the large covariance matrix and hence alleviate the computational burden. Simulation studies show that the blockwise composite likelihood-based predictors perform well and are competitive with the optimal predictor based on the full likelihood.
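The blockwise construction described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's implementation: it assumes a zero-mean Gaussian process with a squared-exponential kernel on one-dimensional inputs, and the function names, kernel choice, and jitter term are assumptions made for the example.

    import numpy as np

    def sq_exp_kernel(x1, x2, lengthscale=1.0, variance=1.0):
        # Squared-exponential covariance between two 1-D input vectors.
        d = x1[:, None] - x2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def block_composite_loglik(x, y, block_size, lengthscale=1.0, variance=1.0, jitter=1e-6):
        # Sum of exact Gaussian log-likelihoods over disjoint blocks of the
        # data. Only block_size x block_size covariance matrices are ever
        # factorized -- the computational saving over the full n x n likelihood.
        total = 0.0
        for start in range(0, len(y), block_size):
            idx = slice(start, min(start + block_size, len(y)))
            K = sq_exp_kernel(x[idx], x[idx], lengthscale, variance)
            K += jitter * np.eye(K.shape[0])
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L, y[idx])   # alpha = L^{-1} y_block
            m = K.shape[0]
            total += (-0.5 * alpha @ alpha
                      - np.log(np.diag(L)).sum()
                      - 0.5 * m * np.log(2 * np.pi))
        return total

With n observations in blocks of size m, each term costs O(m^3) rather than the O(n^3) of the full multivariate normal likelihood, which is the trade-off the abstract describes.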
4

Distributed and parallel algorithms and systems for inference of huge phylogenetic trees based on the maximum likelihood method

Stamatakis, Alexandros. January 2004
Doctoral dissertation, Technische Universität München, 2004.
5

Generalized maximum likelihood methods and the self-informative limit [Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert]

Johannes, Jan. January 2002
Doctoral dissertation, Humboldt-Universität zu Berlin, 2002.
6

On approximate likelihood in survival models

Läuter, Henning January 2006
We give a common framework for different estimators in survival models. For models with nuisance parameters we approximate the profile likelihood and derive estimators, in particular for the proportional hazards model.
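To make the profiling step concrete: in the proportional hazards model, profiling the nonparametric baseline hazard out of the full likelihood leaves the Cox partial log-likelihood as a function of the regression coefficients alone. The sketch below is an illustration under stated assumptions (no tied event times; the function name and data layout are invented for the example), not the paper's method.

    import numpy as np

    def cox_partial_loglik(beta, times, events, X):
        # Cox partial log-likelihood: what remains of the proportional
        # hazards likelihood after the baseline hazard is profiled out.
        # times: follow-up times; events: 1 = observed, 0 = censored;
        # X: (n, p) covariate matrix. Assumes no tied event times.
        eta = X @ beta
        order = np.argsort(times)          # sort subjects by increasing time
        eta, events = eta[order], events[order]
        # Risk set at an event time = everyone with time >= it; after
        # sorting that is a suffix, so accumulate log-sum-exp from the end.
        log_risk = np.logaddexp.accumulate(eta[::-1])[::-1]
        return float(np.sum(events * (eta - log_risk)))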
7

Diagnostics after a Signal from Control Charts in a Normal Process

Lou, Jianying 03 October 2008
Control charts are fundamental SPC tools for process monitoring. When a control chart or combination of charts signals, knowing the change point, which distributional parameter changed, and the change size helps to identify the cause of the change and to remove it or adjust the process back into control correctly and immediately. In this study, we propose using maximum likelihood (ML) estimation of the current process parameters, together with their ML confidence intervals, after a signal to identify and estimate the changed parameters. The performance of this ML diagnostic procedure is evaluated for several different charts or chart combinations and for both sample-size cases considered, and compared to traditional approaches to diagnostics. Neither the ML nor the traditional estimators perform well for all patterns of shifts, but the ML estimator has the best overall performance. The ML confidence-interval diagnostics are better overall at determining which parameter has shifted than traditional diagnostics based on which chart signals. The performance of the generalized likelihood ratio (GLR) chart in shift detection and in ML diagnostics is comparable to that of the best EWMA chart combination. Because the ML diagnostics follow naturally from a GLR chart, the use of GLR charts in process monitoring deserves further study.
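A minimal sketch of the ML change-point idea in its simplest case: a sustained shift in a normal mean with known in-control parameters. This is an illustration only, not the dissertation's procedure (which also covers variance shifts and confidence intervals); the function name and the profiling of the post-change mean are assumptions of the example.

    import numpy as np

    def ml_change_point_mean(x, mu0, sigma0):
        # ML estimate of the change point for a sustained shift in a normal
        # mean, with in-control values mu0, sigma0 known. For each candidate
        # change point tau the unknown post-change mean is profiled out by
        # the tail sample mean; the tau maximizing the likelihood is returned.
        best_tau, best_llr = 1, -np.inf
        for tau in range(1, len(x)):       # shift starts at index tau
            tail = x[tau:]
            mu1_hat = tail.mean()
            # log-likelihood ratio of (shift at tau) vs. (no shift)
            llr = np.sum((tail - mu0) ** 2 - (tail - mu1_hat) ** 2) / (2 * sigma0 ** 2)
            if llr > best_llr:
                best_tau, best_llr = tau, llr
        return best_tau, x[best_tau:].mean()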
8

Towards a Bayesian framework for optical tomography

Kwee, Ivo Widjaja January 2000
No description available.
9

Tests of Independence in a Single 2x2 Contingency Table with Random Margins

Yu, Yuan 01 May 2014
In the analysis of contingency tables, Fisher's exact test is an important significance test commonly used to assess independence between two variables. However, Fisher's exact test is based on the assumption of fixed margins; that is, it uses information beyond the table, which makes it conservative. To address this, we allow the margins to be random: instead of fitting the count data to the hypergeometric distribution as in Fisher's exact test, we model the margins and one cell with a multinomial distribution, and then use the likelihood ratio to test the hypothesis of independence. Furthermore, using Bayesian inference, we consider the Bayes factor as another test statistic. To judge test performance, we compare the power of the likelihood ratio test, the Bayes factor test, and Fisher's exact test. In addition, we apply our methodology to data from the Worcester Heart Attack Study to assess gender differences in the therapeutic management of patients with acute myocardial infarction (AMI) by selected demographic and clinical characteristics.
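A brief sketch of the contrast the abstract draws. The thesis parametrizes the margins and one cell with a multinomial; the simpler variant below treats all four cells as one unconditional multinomial and applies the likelihood-ratio (G) test of independence, compared against the conditional Fisher test from SciPy. The table values are invented for the example.

    import numpy as np
    from scipy.stats import chi2, fisher_exact

    def lr_test_independence(table):
        # Unconditional likelihood-ratio (G) test of independence in a 2x2
        # table: all four cells are treated as multinomial counts rather
        # than conditioning on fixed margins as Fisher's exact test does.
        table = np.asarray(table, dtype=float)
        n = table.sum()
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
        mask = table > 0                   # convention: 0 * log(0) = 0
        g = 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))
        return g, chi2.sf(g, df=1)         # asymptotic chi-square, 1 df

    table = [[12, 5], [6, 14]]
    g, p_lr = lr_test_independence(table)
    _, p_fisher = fisher_exact(table)
    print(f"G = {g:.3f}, LR p = {p_lr:.4f}, Fisher exact p = {p_fisher:.4f}")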
