151

Statistical Methods for Dealing with Outcome Misclassification in Studies with Competing Risks Survival Outcomes

Mpofu, Philani Brian 02 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In studies with competing risks outcomes, misidentifying the event type responsible for the observed failure is, by definition, an act of misclassification. Several authors have established that such misclassification can bias competing risks analyses, and have proposed statistical remedies to aid correct modeling. Generally, these rely on adjusting the estimation process using information about outcome misclassification, but they invariably assume that the misclassification is non-differential among study subjects, regardless of their individual characteristics. In addition, current methods tend to adjust for misclassification within a semi-parametric framework for modeling competing risks data. Building on the existing literature, in this dissertation we explore parametric modeling of competing risks data in the presence of outcome misclassification, be it differential or non-differential. Specifically, we develop parametric pseudo-likelihood-based approaches for modeling cause-specific hazards while adjusting for misclassification information obtained from data either internal or external to the current study (internal- or external-validation sampling, respectively). Data from either type of validation sampling are used to model predictive values or misclassification probabilities, which, in turn, are used to adjust the cause-specific hazard models. We show that the resulting pseudo-likelihood estimates are consistent and asymptotically normal, and we verify these theoretical properties using simulation studies. Lastly, we illustrate the proposed methods using data from a study involving people living with HIV/AIDS (PLWH) in the East African consortium of the International Epidemiologic Databases to Evaluate AIDS (IeDEA EA). In this setting, death is frequently misclassified as disengagement from care because many deaths go unreported to the health facilities caring for these patients. We model the cause-specific hazards of death and disengagement from care among PLWH after they initiate anti-retroviral treatment, while adjusting for death misclassification. / 2021-03-10
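The general idea of adjusting a parametric cause-specific-hazard likelihood with predictive values from a validation sample can be illustrated with a toy sketch. The snippet below is an illustration under strong simplifying assumptions (two causes, constant exponential hazards, known predictive values ppv1 and ppv2, simulated data); it is not the dissertation's estimator.

```python
# Toy pseudo-likelihood for two competing risks with constant (exponential)
# cause-specific hazards, where the recorded cause may be misclassified.
# The predictive values ppv1/ppv2 (probability that a recorded cause is the
# true cause) are assumed known from a validation sample.
import numpy as np
from scipy.optimize import minimize

def neg_pseudo_loglik(params, time, status, cause, ppv1, ppv2):
    """status: 1 if a failure was observed, 0 if censored.
    cause: recorded cause (1 or 2) for failures; ignored when censored."""
    h1, h2 = np.exp(params)            # hazards kept positive via log-parameterization
    surv = np.exp(-(h1 + h2) * time)   # overall survival under exponential hazards
    ll = np.where(
        status == 0,
        np.log(surv),                                          # censored observations
        np.where(
            cause == 1,
            np.log(ppv1 * h1 * surv + (1 - ppv1) * h2 * surv),  # recorded cause 1
            np.log(ppv2 * h2 * surv + (1 - ppv2) * h1 * surv),  # recorded cause 2
        ),
    )
    return -np.sum(ll)

# toy data
rng = np.random.default_rng(0)
t = rng.exponential(1.0, 200)
status = rng.binomial(1, 0.8, 200)
cause = rng.integers(1, 3, 200)
fit = minimize(neg_pseudo_loglik, x0=[0.0, 0.0],
               args=(t, status, cause, 0.9, 0.95), method="Nelder-Mead")
print("estimated cause-specific hazards:", np.exp(fit.x))
```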
152

Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model

Manabe, Sho 26 March 2018 (has links)
Kyoto University / 0048 / New doctoral program (course doctorate) / Doctor of Medical Science / Kou No. 21024 / Ikahaku No. 85 / New program || Medical Science || 6 (University Library) / Kyoto University Graduate School of Medicine, Medical Science Program / (Chief examiner) Professor Koji Kawakami, Professor Tomohiro Kuroda, Professor Satoshi Morita / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
153

The Effect of Counterfactual Potency on Behavioral Intentions

Kim, Woo J. 28 October 2019 (has links)
No description available.
154

Local Distance Correlation: An Extension of Local Gaussian Correlation

Hamdi, Walaa Ahmed 06 August 2020 (has links)
No description available.
155

Applications of Empirical Likelihood to Zero-Inflated Data and Epidemic Change Point

Pailden, Junvie Montealto 07 May 2013 (has links)
No description available.
156

Logspline Density Estimation with an Application to the Study of Survival Data of Lung Cancer Patients.

Chen, Yong 18 August 2004 (has links) (PDF)
A logspline method for estimating an unknown density function f from sample data is studied. Our approach uses maximum likelihood estimation to estimate the unknown density from a space of linear splines with a finite number of fixed, uniform knots. At the end of the thesis, the method is applied to a real survival data set of lung cancer patients.
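As a rough illustration of the logspline idea (not the thesis code), the sketch below models the log-density as a linear spline with fixed, uniform knots and chooses the coefficients by maximum likelihood, normalizing numerically so the density integrates to one; the knot count, support interval, and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def make_basis(x, knots):
    # linear "hat function" spline basis evaluated at x
    return np.column_stack([np.interp(x, knots, np.eye(len(knots))[k])
                            for k in range(len(knots))])

def neg_loglik(coef, x, knots, grid):
    log_f = make_basis(x, knots) @ coef
    # numerical normalizing constant computed over a fine grid
    log_norm = np.log(np.trapz(np.exp(make_basis(grid, knots) @ coef), grid))
    return -(np.sum(log_f) - len(x) * log_norm)

data = np.random.default_rng(1).gamma(2.0, 1.0, 300)      # toy sample
lo, hi = 0.0, data.max() * 1.1
knots = np.linspace(lo, hi, 8)                             # fixed uniform knots
grid = np.linspace(lo, hi, 400)
fit = minimize(neg_loglik, x0=np.zeros(len(knots)), args=(data, knots, grid))
density = np.exp(make_basis(grid, knots) @ fit.x)
density /= np.trapz(density, grid)                         # normalized density estimate
```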
157

Food Shelf Life: Estimation and Experimental Design

Larsen, Ross Allen Andrew 15 May 2006 (has links) (PDF)
Shelf life is a parameter of the lifetime distribution of a food product, usually the time until a specified proportion (1-50%) of the product has spoiled according to taste. The data used to estimate shelf life typically come from a planned experiment with sampled food items observed at specified times. The observation times are usually selected adaptively using ‘staggered sampling.’ Ad-hoc methods based on linear regression have been recommended to estimate shelf life. However, other methods based on maximizing a likelihood (MLE) have been proposed, studied, and used. Both methods assume the Weibull distribution. The observed lifetimes in shelf life studies are censored, a fact that the ad-hoc methods largely ignore. One purpose of this project is to compare the statistical properties of the ad-hoc estimators and the maximum likelihood estimator. The simulation study showed that the MLE methods have higher coverage than the regression methods, better asymptotic properties with regard to bias, and lower median squared errors, especially when shelf life is defined by smaller percentiles. Thus, they should be used in practice. A genetic algorithm (Hamada et al. 2001) was used to find near-optimal sampling designs. This was successfully programmed for general shelf life estimation. The genetic algorithm generally produced designs that had much smaller median squared errors than the staggered design that is used commonly in practice. These designs were radically different from the standard designs. Thus, the genetic algorithm may be used to plan studies in the future that have good estimation properties.
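A hedged sketch of the likelihood-based approach (not the thesis code, and a simplification of its censoring scheme): fit a Weibull distribution treating unspoiled items as right-censored and spoilage times as exact, then read off a percentile as the estimated shelf life. The toy data and the 25th-percentile definition of shelf life are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, time, failed):
    shape, scale = np.exp(params)                        # keep parameters positive
    logpdf = weibull_min.logpdf(time, shape, scale=scale)
    logsf = weibull_min.logsf(time, shape, scale=scale)  # censored items contribute survival only
    return -np.sum(np.where(failed == 1, logpdf, logsf))

# toy staggered-sampling data: observation times and whether the item had spoiled
time = np.array([5, 5, 10, 10, 15, 15, 20, 20, 25, 25], dtype=float)
failed = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

fit = minimize(neg_loglik, x0=[0.0, np.log(time.mean())], args=(time, failed),
               method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
shelf_life_25 = weibull_min.ppf(0.25, shape_hat, scale=scale_hat)
print(f"estimated shelf life (25th percentile): {shelf_life_25:.1f}")
```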
158

Modeling Distributions of Test Scores with Mixtures of Beta Distributions

Feng, Jingyu 08 November 2005 (has links) (PDF)
Test score distributions are used to make important instructional decisions about students. The test scores usually do not follow a normal distribution. In some cases, the scores appear to follow a bimodal distribution that can be modeled with a mixture of beta distributions. This bimodality may be due to different levels of student ability. The purpose of this study was to develop and apply statistical techniques for fitting beta mixtures and detecting bimodality in test score distributions. Maximum likelihood and Bayesian methods were used to estimate the five parameters of the beta mixture distribution for scores in four quizzes in a cell biology class at Brigham Young University. The mixing proportion was examined to draw conclusions about bimodality. We were successful in fitting the beta mixture to the data, but the methods were only partially successful in detecting bimodality.
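The maximum likelihood side of this can be sketched as follows (an illustration, not the study's code): fit a two-component beta mixture by direct maximization of the log-likelihood and inspect the estimated mixing proportion. Scores are assumed rescaled to the open interval (0, 1).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta

def neg_loglik(params, x):
    a1, b1, a2, b2 = np.exp(params[:4])      # shape parameters kept positive
    p = expit(params[4])                     # mixing proportion kept in (0, 1)
    mix = p * beta.pdf(x, a1, b1) + (1 - p) * beta.pdf(x, a2, b2)
    return -np.sum(np.log(mix + 1e-300))

rng = np.random.default_rng(2)
scores = np.concatenate([rng.beta(8, 3, 120), rng.beta(2, 6, 80)])   # toy bimodal scores
x0 = np.concatenate([np.log([5.0, 2.0, 2.0, 5.0]), [0.0]])
fit = minimize(neg_loglik, x0=x0, args=(scores,), method="Nelder-Mead",
               options={"maxiter": 2000})
p_hat = expit(fit.x[4])
print(f"mixing proportion: {p_hat:.2f}")  # values near 0 or 1 suggest little evidence of bimodality
```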
159

Parameter Estimation for the Beta Distribution

Owen, Claire Elayne Bangerter 20 November 2008 (has links) (PDF)
The beta distribution is useful in modeling continuous random variables that lie between 0 and 1, such as proportions and percentages. The beta distribution takes on many different shapes and may be described by two shape parameters, alpha and beta, that can be difficult to estimate. Maximum likelihood and method of moments estimation are possible, though method of moments is much more straightforward. We examine both of these methods here, and compare them to three more proposed methods of parameter estimation: 1) a method used in the Program Evaluation and Review Technique (PERT), 2) a modification of the two-sided power distribution (TSP), and 3) a quantile estimator based on the first and third quartiles of the beta distribution. We find the quantile estimator performs as well as maximum likelihood and method of moments estimators for most beta distributions. The PERT and TSP estimators do well for a smaller subset of beta distributions, though they never outperform the maximum likelihood, method of moments, or quantile estimators. We apply these estimation techniques to two data sets to see how well they approximate real data from Major League Baseball (batting averages) and the U.S. Department of Energy (radiation exposure). We find the maximum likelihood, method of moments, and quantile estimators perform well with batting averages (sample size 160), and the method of moments and quantile estimators perform well with radiation exposure proportions (sample size 20). Maximum likelihood estimators would likely do fine with such a small sample size were it not for the iterative method needed to solve for alpha and beta, which is quite sensitive to starting values. The PERT and TSP estimators do more poorly in both situations. We conclude that in addition to maximum likelihood and method of moments estimation, our method of quantile estimation is efficient and accurate in estimating parameters of the beta distribution.
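The method-of-moments estimators mentioned above follow from matching the sample mean and variance to the beta mean a/(a+b) and variance ab/((a+b)^2(a+b+1)) and solving for the two shape parameters. A short sketch with toy data (standard formulas, not the thesis code):

```python
import numpy as np

def beta_method_of_moments(x):
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1 - m) / v - 1          # equals alpha + beta under the moment equations
    return m * common, (1 - m) * common   # (alpha_hat, beta_hat)

# toy stand-in for a sample of 160 batting averages
batting = np.random.default_rng(3).beta(40, 110, 160)
print(beta_method_of_moments(batting))
```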
160

Parameter Estimation for the Lognormal Distribution

Ginos, Brenda Faith 13 November 2009 (has links) (PDF)
The lognormal distribution is useful in modeling continuous random variables that take only positive values. Example scenarios in which the lognormal distribution is used include, among many others: in medicine, latent periods of infectious diseases; in environmental science, the distribution of particles, chemicals, and organisms in the environment; in linguistics, the number of letters per word and the number of words per sentence; and in economics, age of marriage, farm size, and income. The lognormal distribution is also useful in modeling data that would be considered normally distributed except for being skewed (Limpert, Stahel, and Abbt 2001). Appropriately estimating the parameters of the lognormal distribution is vital for the study of these and other subjects. Depending on the values of its parameters, the lognormal distribution takes on various shapes, including a bell curve similar to the normal distribution. This paper contains a simulation study concerning the effectiveness of various estimators for the parameters of the lognormal distribution. A comparison is made between such parameter estimators as Maximum Likelihood estimators, Method of Moments estimators, estimators by Serfling (2002), and estimators by Finney (1941). A simulation is conducted to determine which parameter estimators work better in various parameter combinations and sample sizes of the lognormal distribution. We find that the Maximum Likelihood and Finney estimators perform the best overall, with a preference given to Maximum Likelihood over the Finney estimators because of its relative simplicity. The Method of Moments estimators seem to perform best when σ is less than or equal to one, and the Serfling estimators are quite accurate in estimating μ but not σ in all regions studied. Finally, these parameter estimators are applied to a data set counting the number of words in each sentence for various documents, following which a review of each estimator's performance is conducted. Again, we find that the Maximum Likelihood estimators perform best for the given application, but that Serfling's estimators are preferred when outliers are present.
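Two of the estimators compared above have closed forms that are easy to sketch (standard formulas, not the thesis code): maximum likelihood uses the mean and standard deviation of the logged data, while the method of moments solves for μ and σ from the raw-data mean and variance. The word-count data below are a toy stand-in.

```python
import numpy as np

def lognormal_mle(x):
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=0)          # (mu_hat, sigma_hat)

def lognormal_method_of_moments(x):
    m, v = x.mean(), x.var(ddof=0)
    sigma2 = np.log(1 + v / m**2)                 # from Var/E[X]^2 = exp(sigma^2) - 1
    mu = np.log(m) - sigma2 / 2                   # from E[X] = exp(mu + sigma^2/2)
    return mu, np.sqrt(sigma2)

words_per_sentence = np.random.default_rng(4).lognormal(2.8, 0.5, 500)   # toy data
print(lognormal_mle(words_per_sentence))
print(lognormal_method_of_moments(words_per_sentence))
```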
