About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Generalized rank tests for univariate and bivariate interval-censored failure time data

Sun, De-Yu 20 June 2003 (has links)
In Part 1 of this paper, we adapt Turnbull's algorithm to estimate the distribution function of univariate interval-censored and truncated failure time data. We also propose four non-parametric tests of whether two groups of the data come from the same distribution. The powers of the proposed test statistics are compared by simulation under different distributions. The proposed tests are then used to analyze an AIDS study. In Part 2, for bivariate interval-censored data, we propose some models for generating the data and several methods to measure the correlation between the two variates. We also propose several nonparametric tests to determine whether the two variates are mutually independent or whether they have the same distribution. We demonstrate the performance of these tests by simulation and give an application to an AIDS study (ACTG 181).
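The self-consistency (EM) iteration that Turnbull's algorithm is built on can be sketched as follows. This is a generic illustration of the nonparametric MLE for interval-censored data, not the thesis' adapted version for truncated data; the function name and the endpoint-based support set are illustrative simplifications.

```python
import numpy as np

def turnbull_npmle(intervals, n_iter=500):
    """Self-consistency (EM) iteration for the nonparametric MLE of an
    event-time distribution under interval censoring (Turnbull, 1976).
    Each observation is a censoring interval (l, r] known to contain
    the unobserved event time."""
    # Candidate support points: all interval endpoints (a simplification;
    # the NPMLE actually concentrates on so-called innermost intervals).
    pts = np.unique([b for iv in intervals for b in iv])
    # alpha[i, j] = 1 if support point j is compatible with interval i.
    alpha = np.array([[l < t <= r for t in pts] for l, r in intervals], float)
    p = np.full(len(pts), 1.0 / len(pts))      # initial uniform masses
    for _ in range(n_iter):
        w = alpha * p                          # E-step: membership weights
        w /= w.sum(axis=1, keepdims=True)
        p = w.mean(axis=0)                     # M-step: average memberships
    return pts, p
```

Each iteration redistributes every observation's unit mass over the support points compatible with its interval, then averages; the fixed point is the self-consistent estimate of the distribution.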
12

Discrete Weibull regression model for count data

Kalktawi, Hadeel Saleh January 2017 (has links)
Data can be collected in the form of counts in many situations. For example, the number of deaths from an accident, the number of days until a machine stops working or the number of annual visitors to a city may all be considered interesting variables for study. This study is motivated by two facts: first, the vital role of the continuous Weibull distribution in survival analyses and failure time studies; hence, the discrete Weibull (DW) is introduced analogously to the continuous Weibull distribution (see Nakagawa and Osaki (1975) and Kulasekera (1994)). Second, researchers usually focus on modeling count data, which take only non-negative integer values, as a function of other variables. Therefore, the DW, introduced by Nakagawa and Osaki (1975), is considered to investigate the relationship between count data and a set of covariates. In particular, this DW is generalised by allowing one of its parameters to be a function of covariates. Although Poisson regression can be considered the most common model for count data, it is constrained by its equi-dispersion (the assumption of equal mean and variance). Thus, negative binomial (NB) regression has become the most widely used method for count data regression. However, even though the NB can be suitable for over-dispersion, it cannot be considered the best choice for modeling under-dispersed data. Hence, models are required that deal with the problem of under-dispersion, such as the generalized Poisson regression model (Efron (1986) and Famoye (1993)) and COM-Poisson regression (Sellers and Shmueli (2010) and Sáez-Castillo and Conde-Sánchez (2013)). Generally, all of these models can be considered modifications and developments of Poisson models. However, this thesis develops a model based on a simple distribution with no modification.
Thus, if the data do not follow the dispersion pattern of the Poisson or NB, the true structure generating the data should be detected. Applying a model that has the ability to handle different dispersions would be of great interest. Thus, in this study, the DW regression model is introduced. Besides the flexibility of the DW to model under- and over-dispersion, it is a good model for inhomogeneous and highly skewed data, such as those with excessive zero counts, which are more dispersed than the Poisson. Although these data can be fitted well using some developed models, namely the zero-inflated and hurdle models, the DW demonstrates a good fit and has less complexity than these modified models. However, there could be cases where a special model that separates the probability of zeros from that of the other positive counts must be applied. Then, to cope with the problem of too many observed zeros, two modifications of the DW regression are developed, namely the zero-inflated discrete Weibull (ZIDW) and hurdle discrete Weibull (HDW) models. Furthermore, this thesis considers another type of data, where the response count variable is censored from the right, as observed in many experiments. Applying the standard models to these types of data without considering the censoring may yield misleading results. Thus, the censored discrete Weibull (CDW) model is employed for this case. In addition, this thesis introduces the median discrete Weibull (MDW) regression model for investigating the effect of covariates on the count response through the median, which is more appropriate for the skewed nature of count data. In other words, the likelihood of the DW model is re-parameterized to explain the effect of the predictors directly on the median.
Thus, in comparison with the generalized linear models (GLMs), MDW and GLMs both investigate the relation to a set of covariates via certain location measures; however, GLMs consider the mean, which is not the best way to represent skewed data. These DW regression models are investigated through simulation studies to illustrate their performance. In addition, they are applied to some real data sets and compared with the related count models, mainly the Poisson and NB models. Overall, the DW models provide a good fit to the count data as an alternative to the NB models in the over-dispersion case and fit much better than the Poisson models. Additionally, contrary to the NB model, the DW can be applied in the under-dispersion case.
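For reference, the type-I discrete Weibull of Nakagawa and Osaki that this work builds on has a simple closed form. The sketch below is generic, not the thesis' code; the complementary log-log link used to tie q to covariates is an illustrative assumption about how such a regression can be set up.

```python
import math

def dw_pmf(x, q, beta):
    """Type-I discrete Weibull pmf (Nakagawa and Osaki, 1975):
    P(X = x) = q**(x**beta) - q**((x+1)**beta), x = 0, 1, 2, ...,
    with 0 < q < 1 and beta > 0."""
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dw_cdf(x, q, beta):
    """P(X <= x) = 1 - q**((x+1)**beta)."""
    return 1.0 - q ** ((x + 1) ** beta)

def q_from_covariates(xrow, gamma):
    """One way to let q depend on covariates while keeping 0 < q < 1
    (an illustrative complementary log-log link, not necessarily the
    thesis' choice): log(-log q) = x' gamma."""
    eta = sum(a * b for a, b in zip(xrow, gamma))
    return math.exp(-math.exp(eta))
```

Setting beta = 1 recovers the geometric distribution, and beta controls whether the model is under- or over-dispersed relative to the Poisson, which is what lets a single family cover both regimes.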
13

Empirical Likelihood Confidence Intervals for ROC Curves with Missing Data

An, Yueheng 25 April 2011 (has links)
The receiver operating characteristic (ROC) curve is widely utilized to evaluate the diagnostic performance of a test, in other words, the accuracy of a test in discriminating normal cases from diseased cases. In biomedical studies, we often encounter missing data, to which the regular inference procedures cannot be applied directly. In this thesis, random hot deck imputation is used to obtain a 'complete' sample. Then empirical likelihood (EL) confidence intervals are constructed for ROC curves. The empirical log-likelihood ratio statistic is derived, and its asymptotic distribution is proved to be a weighted chi-square distribution. The results of a simulation study show that the EL confidence intervals perform well in terms of coverage probability and average length for various sample sizes and response rates.
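Random hot deck imputation, as used here, is simple to state: each missing response is replaced by a value drawn at random from the observed donors in the same sample. A minimal sketch (the None-for-missing convention and function name are illustrative choices):

```python
import random

def random_hot_deck(values, rng=random):
    """Random hot deck imputation: replace each missing entry (None)
    with a draw, uniformly at random with replacement, from the
    observed donor values in the same sample."""
    donors = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(donors) for v in values]
```

After imputing a 'complete' sample this way, the EL interval construction can proceed as for fully observed data, with the asymptotics adjusted for the imputation (hence the weighted chi-square limit mentioned above).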
14

A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater

Li, Cong January 2012 (has links)
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. When a measurement falls below a certain value, only the quantification limit is recorded, and this thesis aims at handling such data. It does so by using the EM algorithm to obtain maximum likelihood estimates. Estimates of the distributions of the censored trace-element data are then presented. Related simulations show that the estimation is acceptable.
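The EM approach described above can be illustrated on the simplest case: normal data where some values are reported only as lying below a detection limit (trace-element concentrations are typically modeled on the log scale, which reduces to this case). A generic sketch with hypothetical function names, not the thesis' implementation:

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def em_left_censored(obs, n_cens, limit, n_iter=200):
    """EM for N(mu, sigma^2) when, besides the fully observed values
    `obs`, n_cens observations are only known to lie below `limit`
    (reported as '<limit').  The E-step replaces each censored value
    by its conditional first and second moments."""
    n = len(obs) + n_cens
    mu = sum(obs) / len(obs)
    var = sum((y - mu) ** 2 for y in obs) / len(obs)
    for _ in range(n_iter):
        s = math.sqrt(var)
        a = (limit - mu) / s
        lam = phi(a) / Phi(a)              # inverse Mills ratio for Z < a
        ez = -lam                          # E[Z | Z < a]
        ez2 = 1.0 - a * lam                # E[Z^2 | Z < a]
        m1 = mu + s * ez                   # E[Y | Y < limit]
        m2 = mu * mu + 2.0 * mu * s * ez + var * ez2   # E[Y^2 | Y < limit]
        mu = (sum(obs) + n_cens * m1) / n
        var = (sum(y * y for y in obs) + n_cens * m2) / n - mu * mu
    return mu, math.sqrt(var)
```

The M-step is just the complete-data normal MLE with the censored values' sufficient statistics replaced by their conditional expectations, which is what makes the EM formulation attractive for this kind of data.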
15

Essays on Choice and Demand Analysis of Organic and Conventional Milk in the United States

Alviola IV, Pedro A. 2009 December 1900 (has links)
This dissertation comprises four interrelated studies: (1) the characterization of milk purchase choices, namely the purchase of organic milk only, of both organic and conventional milk, and of conventional milk only; (2) the estimation of a single-equation household demand function for organic and conventional milk; (3) the assessment of binary choice models for organic milk using the Brier probability score and its Yates partition; and (4) the estimation of demand systems that address the censoring issue through econometric techniques. In the first paper, the study estimated both multinomial logit and probit models to examine a set of socio-demographic variables explaining the three milk purchase outcomes. These variables include income, household size, education level and employment of the household head, race, ethnicity and region. Using the 2004 Nielsen Homescan Panel, the second study used the Heckman two-step procedure to calculate own-price, cross-price, and income elasticities by estimating the demand relationships for both organic and conventional milk. Results indicated that organic and conventional milk are substitutes, and that the substitution patterns between the two milk types are asymmetric. Likewise, the third study showed that predictive outcomes from binary choice models for organic milk can be enhanced with the Brier score method. In this case, specifications omitting important socio-demographic variables reduced the variability of the predicted probabilities and therefore limited their sorting ability. The last study estimated both censored Almost Ideal Demand System (AIDS) and Quadratic Almost Ideal Demand System (QUAIDS) specifications in modeling nonalcoholic beverages.
In this research, five estimation techniques were used: Iterated Seemingly Unrelated Regression (ITSUR); the two-stage methods of Heien and Wessells (1990) and of Shonkwiler and Yen (1999); Generalized Maximum Entropy; and the Dong, Gould and Kaiser (2004a) method. The findings showed that, across the censoring techniques, price elasticity estimates had greater variability for highly censored nonalcoholic beverage items such as tea, coffee and bottled water.
16

A study of statistical distribution of a nonparametric test for interval censored data

Chang, Ping-chun 05 July 2005 (has links)
A nonparametric test for interval-censored failure time data is proposed for determining whether p lifetime populations come from the same distribution. For this comparison problem based on interval-censored failure time data, Sun proposed some nonparametric test procedures in recent years. In this paper, we present simulation procedures to verify the test proposed by Sun. The simulation results indicate that the proposed test statistic is not approximately chi-square with p-1 degrees of freedom, but rather a constant multiple of a chi-square distribution with p-1 degrees of freedom.
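The claim that the null distribution is a constant multiple of a chi-square with p-1 degrees of freedom can be checked by moment matching: if T ~ c·χ²(df), then E[T] = c·df and Var[T] = 2c²·df. A generic sketch of such a check (illustrative, not the thesis' simulation code):

```python
import random

def fit_scaled_chisq(stats, df):
    """If T ~ c * chisq(df), then E[T] = c*df and Var[T] = 2*c^2*df.
    Estimate c from the sample mean and return it together with the
    sample variance and the variance implied by the fitted scale."""
    n = len(stats)
    mean = sum(stats) / n
    var = sum((t - mean) ** 2 for t in stats) / (n - 1)
    c = mean / df
    return c, var, 2.0 * c * c * df
```

Agreement between the sample variance and the implied variance supports the scaled chi-square description; a clear mismatch would point to a different limiting law.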
17

Gibbs sampling's application in censored regression model and unit root test

Wu, Wei-Lun 02 September 2005 (has links)
Generally speaking, when the given data are incomplete or hidden, the analysis we can perform is limited, and statistics computed from such data can be misleading. This thesis adopts a Gibbs sampling approach to recover the hidden part of the data. When we test whether a time series has a unit root, the behavior of the simulated series is similar to that of the true values. Comparing unit root tests based on the hidden data with those based on the recovered data, we find that the hidden data lead to larger size and weaker power than the recovered data. Finally, as an example, we analyze unsecured loans in the Japanese money market from January 1999 to July 2004, where the loan value is zero in several months of recent years. If we use the zero-loan data as recorded and apply the traditional unit root test without taking a model for the mean into account, the result is I(0); if we instead simulate the hidden data with Gibbs sampling and test without a model for the mean, the result is also I(0). However, when a model for the mean is taken into account, the result for the Japanese money market is I(1), and it remains I(1) when the hidden data are simulated with Gibbs sampling.
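The data-augmentation idea used here, redrawing the hidden (censored-at-zero) values from their conditional distribution and then updating the model parameters, can be sketched for the simplest case of a normal mean with observations censored below zero. This is a generic illustration with known variance and a flat prior, not the thesis' full time-series model:

```python
import random
from statistics import NormalDist

def gibbs_impute(observed, n_cens, sigma=1.0, n_iter=1000, seed=1):
    """Data-augmentation Gibbs sampler for a normal mean when n_cens
    observations are censored at zero (only known to be <= 0):
    alternately redraw the hidden values from the normal truncated
    below zero, then draw mu from its full conditional (flat prior,
    known sigma)."""
    rng = random.Random(seed)
    std = NormalDist()
    n = len(observed) + n_cens
    mu = sum(observed) / len(observed)
    draws = []
    for _ in range(n_iter):
        # Impute: inverse-CDF draw from N(mu, sigma^2) truncated to (-inf, 0].
        pc = std.cdf((0.0 - mu) / sigma)
        imputed = [mu + sigma * std.inv_cdf(max(rng.uniform(0.0, pc), 1e-300))
                   for _ in range(n_cens)]
        # Update: mu | data ~ N(ybar, sigma^2 / n) under a flat prior.
        ybar = (sum(observed) + sum(imputed)) / n
        mu = rng.gauss(ybar, sigma / n ** 0.5)
        draws.append(mu)
    return draws
```

In the thesis' setting, each Gibbs sweep would additionally update the autoregressive parameters given the completed series, so the unit root test is applied to recovered rather than truncated data.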
18

On the consistency of a simulation procedure and the construction of a non-parametric test for interval-censored data

Sen, Ching-Fu 14 June 2001 (has links)
In this paper, we prove that the simulation method for interval-censored data proposed by Fay (1999) is consistent in the sense that, if we select a sample, the estimate obtained from Turnbull's (1974) EM algorithm converges to the true parameter as the sample size tends to infinity. We also propose a non-parametric rank test for interval-censored data to determine whether two populations come from the same distribution. Simulation results show that the proposed test statistic performs satisfactorily.
19

Distributed Sequential Detection using Censoring Schemes in Wireless Sensor Networks

Kang, Shih-jhang 05 September 2008 (has links)
This thesis considers the problem of distributed sequential detection in wireless sensor networks (WSNs), where the number of operating sensors is unknown to the fusion center. Since the energy and bandwidth of the communication channels are limited in WSNs, we employ a censoring scheme in the sequential detection to achieve energy efficiency and a low communication rate. Specifically, we show by simulations that employing a censoring scheme can reduce the number of local decisions required for the fusion center to make a final decision. The results imply that energy conservation does not necessarily degrade the performance of sequential detection in terms of the expected number of local decisions required for making a final decision.
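The combination of censoring with sequential detection can be sketched as follows: each sensor computes a per-sample log-likelihood ratio but transmits it only when it is informative enough, and the fusion center runs a Wald sequential test on what arrives. A toy Gaussian-shift illustration; the thresholds and the N(1,1)-versus-N(0,1) hypothesis pair are illustrative assumptions, not the thesis' setup:

```python
def censored_sprt(samples, censor_thresh, a=-4.0, b=4.0):
    """SPRT at a fusion center where a sensor transmits its per-sample
    log-likelihood ratio (LLR) only if it falls outside the 'no-send'
    region [-censor_thresh, censor_thresh]; uninformative LLRs are
    censored.  Toy case H1: N(1,1) vs H0: N(0,1), so llr(x) = x - 0.5."""
    llr_sum, sent = 0.0, 0
    for x in samples:
        llr = x - 0.5                  # per-sample LLR for this toy pair
        if abs(llr) > censor_thresh:   # censoring rule: send only if informative
            llr_sum += llr
            sent += 1
            if llr_sum >= b:
                return "H1", sent
            if llr_sum <= a:
                return "H0", sent
    return "undecided", sent
```

Setting censor_thresh to 0 recovers the ordinary SPRT with every sample transmitted; raising it trades transmissions (energy) against decision delay, which is the trade-off the simulations above quantify.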
20

Empirical Likelihood Confidence Intervals for the Ratio and Difference of Two Hazard Functions

Zhao, Meng 21 July 2008 (has links)
In biomedical research and lifetime data analysis, the comparison of two hazard functions usually plays an important role in practice. In this thesis, we consider the standard independent two-sample framework under right censoring. We construct efficient and useful confidence intervals for the ratio and difference of two hazard functions using smoothed empirical likelihood methods. The empirical log-likelihood ratio is derived and its asymptotic distribution is a chi-squared distribution. Furthermore, the proposed method can be applied to medical diagnosis research. Simulation studies show that the proposed EL confidence intervals have better performance in terms of coverage accuracy and average length than the traditional normal approximation method. Finally, our methods are illustrated with real clinical trial data. It is concluded that the empirical likelihood methods provide better inferential outcomes.
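The cumulative hazards underlying such a comparison are typically estimated by the Nelson-Aalen estimator, whose smoothed increments give the hazard estimates that methods like the above build on. A minimal sketch for right-censored data (generic, assuming distinct observation times for simplicity):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard under right censoring:
    H(t) = sum over failure times t_i <= t of d_i / n_i, where n_i is
    the number still at risk just before t_i.  events[i] is 1 for an
    observed failure and 0 for a right-censored time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_risk = len(times)
    H = 0.0
    curve = []                     # (failure time, H at that time)
    for i in order:
        if events[i] == 1:
            H += 1.0 / n_risk      # one failure among n_risk at risk
            curve.append((times[i], H))
        n_risk -= 1                # subject leaves the risk set
    return curve
```

Censored subjects contribute by shrinking the risk set without adding a jump, which is how right censoring enters the estimate.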
