171 |
Nonparametric Confidence Intervals for the Reliability of Real Systems Calculated from Component Data. Spooner, Jean, 01 May 1987 (has links)
A methodology that calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach that does not require the component times to failure to follow a known reliability distribution.
The proposed methods have accuracy similar to the traditional parametric approaches, can be used when the distribution of component reliability is unknown or only a limited amount of sample component data is available, are simpler to compute, and use fewer computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability. Their test systems are used here for comparison with the published results. Four systems with sample sizes per component of 10, 50, and 100 were studied.
The test systems were complex systems made up of I components, each with n observed (or estimated) times to failure. An efficient method for calculating a point estimate of system reliability was developed, based on counting the minimum cut sets that cause system failure.
Five nonparametric approaches to calculating confidence intervals on system reliability from one test sample of components were proposed and evaluated. Four of these were based on binomial theory and the Kolmogorov empirical cumulative distribution theory. Six hundred Monte Carlo simulations generated 600 new sets of component failure data from the population, with corresponding point estimates of system reliability and confidence intervals. The accuracy of these confidence intervals was assessed by computing the fraction that included the true system reliability.
The bootstrap method was also studied to calculate confidence intervals from one sample. The bootstrap method is computer intensive and involves generating many sets of component samples using only the failure data from the initial sample. The empirical cumulative distribution function of 600 bootstrapped point estimates was examined to calculate confidence intervals for the 68, 80, 90, 95, and 99 percent confidence levels.
The accuracy of the bootstrap confidence intervals was determined by comparison with the distribution of 600 point estimates of system reliability generated from the Monte Carlo simulations.
The confidence intervals calculated from the Kolmogorov empirical distribution function and the bootstrap method were very accurate. Sample sizes of 10 were not always sufficient for systems with reliabilities close to one.
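The percentile-bootstrap procedure described in this abstract can be sketched as follows. The two-component series system, the exponential failure data, and all numeric values below are illustrative assumptions, not the test systems studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_reliability(comp_times, t):
    """Point estimate of system reliability at mission time t for a
    two-component series system: the system works only if every
    component survives past t (illustrative system structure)."""
    surv = [np.mean(times > t) for times in comp_times]
    return float(np.prod(surv))

def bootstrap_ci(comp_times, t, level=0.90, n_boot=600):
    """Percentile-bootstrap interval: resample each component's
    failure times with replacement and recompute the estimate."""
    boot_rng = np.random.default_rng(42)
    est = np.empty(n_boot)
    for b in range(n_boot):
        resampled = [boot_rng.choice(times, size=len(times), replace=True)
                     for times in comp_times]
        est[b] = system_reliability(resampled, t)
    alpha = 1.0 - level
    lo, hi = np.quantile(est, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)

# Hypothetical exponential failure data, n = 50 observations per component.
comp_times = [rng.exponential(100.0, size=50), rng.exponential(80.0, size=50)]
point = system_reliability(comp_times, t=10.0)
lo, hi = bootstrap_ci(comp_times, t=10.0)
```

The 600 resamples mirror the 600 bootstrap replicates used in the thesis; the interval endpoints are simply quantiles of the bootstrapped point estimates.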
|
172 |
Modeling longitudinal data with interval censored anchoring events. Chu, Chenghao, 01 March 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In many longitudinal studies, the time scales upon which we assess the primary outcomes are anchored by pre-specified events. However, these anchoring events are often not observable, and they are randomly distributed with unknown distribution. Without direct observations of the anchoring events, the time scale used for analysis is not available, and analysts cannot use traditional longitudinal models to describe the temporal changes as desired. Existing methods often make either ad hoc or strong assumptions on the anchoring events, which are unverifiable and prone to biased estimation and invalid inference.
Although unable to observe them directly, researchers can often ascertain an interval that includes the unobserved anchoring event, i.e., the anchoring events are interval censored. In this research, we proposed a two-stage method to fit commonly used longitudinal models with interval-censored anchoring events. In the first stage, we obtain an estimate of the anchoring-event distribution by a nonparametric method using the interval-censored data; in the second stage, we obtain the parameter estimates as stochastic functionals of the estimated distribution. The construction of the stochastic functional depends on the model setting. In this research, we considered two types of models. The first was a distribution-free model, in which no parametric assumption was made on the distribution of the error term. The second was likelihood based, extending the classic mixed-effects models to the situation in which the origin of the time scale for analysis is interval censored. For the purpose of large-sample statistical inference in both models, we studied the asymptotic properties of the proposed functional estimator using empirical process theory. Theoretically, our method provides a general approach to studying semiparametric maximum pseudo-likelihood estimators in similar data situations. Finite-sample performance of the proposed method was examined through simulation studies. Computationally efficient algorithms for computing the parameter estimates are provided. We applied the proposed method to a real data analysis and obtained new findings that were unattainable with traditional mixed-effects models. / 2 years
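The two-stage idea can be illustrated with a toy sketch. Here the nonparametric first-stage estimate is replaced by a crude uniform approximation over each subject's censoring interval (an assumption made only for brevity; the thesis uses a nonparametric estimate), and the linear model, data, and parameter values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_ij = beta * (t_ij - a_i) + noise, where the anchoring
# time a_i is unobserved and only known to lie in [L_i, R_i].
n_sub, n_obs = 40, 5
a_true = rng.uniform(0.0, 2.0, n_sub)          # unobserved anchors
L = a_true - rng.uniform(0.0, 0.5, n_sub)      # censoring interval ends
R = a_true + rng.uniform(0.0, 0.5, n_sub)
t_obs = rng.uniform(2.0, 6.0, (n_sub, n_obs))  # calendar-time visits
beta_true = 1.5
y = beta_true * (t_obs - a_true[:, None]) + rng.normal(0.0, 0.1, (n_sub, n_obs))

# Stage 1 (simplified): approximate the anchoring-event distribution by
# uniform draws over each subject's interval.
# Stage 2: least-squares slope on the shifted time scale, averaged over
# the stage-1 draws -- a functional of the estimated distribution.
n_draws = 200
slopes = np.empty(n_draws)
for d in range(n_draws):
    a_draw = rng.uniform(L, R)                 # candidate anchors
    x = (t_obs - a_draw[:, None]).ravel()
    slopes[d] = np.dot(x, y.ravel()) / np.dot(x, x)
beta_hat = float(np.mean(slopes))
```

The point is structural: the slope estimate is computed as an average over the (here crudely estimated) anchoring-event distribution rather than from a single imputed anchoring time.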
|
173 |
Statistical comparisons for nonlinear curves and surfaces. Zhao, Shi, 31 May 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Estimation of nonlinear curves and surfaces has long been the focus of semiparametric
and nonparametric regression. The advances in related model fitting methodology
have greatly enhanced the analyst’s modeling flexibility and have led to scientific discoveries
that would be otherwise missed by the traditional linear model analysis. What has
been less forthcoming are the testing methods concerning nonlinear functions, particularly
for comparisons of curves and surfaces. Few of the existing methods have been carefully disseminated, and most are subject to important limitations. On the implementation side, few off-the-shelf computational tools with syntax similar to the commonly used model-fitting packages have been developed, which leaves these methods less accessible to practical data analysts. In this dissertation, I reviewed and tested the existing methods for nonlinear function comparison and examined their operational characteristics. Some theoretical justifications were provided for the new testing procedures. Real data examples were included illustrating the use of the newly developed software. A new R package and a more user-friendly interface were created for enhanced accessibility. / 2020-08-22
|
174 |
On Applications of Semiparametric Methods. Li, Zhijian, 01 October 2018 (has links)
No description available.
|
175 |
Do Economic Factors Help Forecast Political Turnover? Comparing Parametric and Nonparametric Approaches. Burghart, Ryan A., 22 April 2021 (has links)
No description available.
|
176 |
On Non-Parametric Confidence Intervals for Density and Hazard Rate Functions & Trends in Daily Snow Depths in the United States and Canada. Xu, Yang, 09 December 2016 (has links)
The nonparametric confidence interval for an unknown function is quite a useful tool in statistical inferential procedures, and thus there exists a wide body of literature on the topic. The primary issues are smoothing parameter selection using an appropriate criterion, and then the coverage probability and length of the associated confidence interval. Here our focus is on the interval length in general and, in particular, on the variability in the lengths of nonparametric intervals for probability density and hazard rate functions. We start with the analysis of a nonparametric confidence interval for a probability density function, noting that the confidence interval length is directly proportional to the square root of the density function. That is, the variability of the length of the confidence interval is driven by the variance of the estimator used to estimate the square root of the density function. Therefore, we propose and use a kernel-based constant-variance estimator of the square root of a density function. The performance of the confidence intervals so obtained is studied through simulations. The methodology is then extended to nonparametric confidence intervals for the hazard rate function.
Changing direction somewhat, the second part of this thesis presents a statistical study of daily snow trends in the United States and Canada from 1960 to 2009. A storage model balance equation with periodic features is used to describe the daily snow depth process. Changepoints (inhomogeneity features) are permitted in the model in the form of mean level shifts. The results show that snow depths are mostly declining in the United States. In contrast, snow depths seem to be increasing in Canada, especially in north-western areas of the country. On the whole, more grids are estimated to have an increasing snow trend than a decreasing trend. The changepoint component in the model serves to lessen the overall magnitude of the trends in most locations.
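The square-root idea behind the constant-variance interval can be sketched as follows. The Gaussian kernel, the fixed bandwidth, and the simulated normal sample are illustrative assumptions; the estimator proposed in the thesis may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 400)      # hypothetical sample
n, h = len(x), 0.4                 # fixed bandwidth, chosen for illustration
grid = np.linspace(-2.0, 2.0, 81)

def kde(grid, x, h):
    """Gaussian-kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

f_hat = kde(grid, x, h)
RK = 1.0 / (2.0 * np.sqrt(np.pi))  # R(K) = int K^2 for the Gaussian kernel
z = 1.96                           # 95% normal quantile

# Naive pointwise interval: its half-width varies with sqrt(f_hat),
# which is the source of the length variability discussed above.
naive_lo = f_hat - z * np.sqrt(f_hat * RK / (n * h))
naive_hi = f_hat + z * np.sqrt(f_hat * RK / (n * h))

# Variance-stabilised interval: sqrt(f_hat) has asymptotically constant
# variance RK / (4 n h), so the band for sqrt(f) has constant width;
# squaring the endpoints maps it back to the density scale.
half = z * np.sqrt(RK / (4.0 * n * h))
root = np.sqrt(f_hat)
vs_lo = np.maximum(root - half, 0.0) ** 2
vs_hi = (root + half) ** 2
```

The contrast is between `naive_lo`/`naive_hi`, whose width tracks the density itself, and `vs_lo`/`vs_hi`, obtained from a constant-width band on the square-root scale.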
|
177 |
Nonparametric geostatistical estimation of soil physical properties. Ghassemi, Ali, January 1987 (has links)
No description available.
|
178 |
A Nonparametric Test for the Non-Decreasing Alternative in an Incomplete Block Design. Ndungu, Alfred Mungai, January 2011 (has links)
The purpose of this paper is to present a new nonparametric test statistic for testing against ordered alternatives in a Balanced Incomplete Block Design (BIBD). This test is then compared with the Durbin test, which tests for differences between treatments in a BIBD but without regard to order. For the comparison, Monte Carlo simulations were used to generate the BIBDs. Random samples were simulated from the normal distribution, the exponential distribution, and the t distribution with three degrees of freedom. The numbers of treatments considered were three, four, and five, with all the possible combinations necessary for a BIBD. Small sample sizes were 20 or less and large sample sizes were 30 or more. The powers and alpha values were then estimated after 10,000 repetitions. The results of the study show that the new test proposed is more powerful than the Durbin test. Regardless of the distribution, sample size, or number of treatments, the new test tended to have higher power than the Durbin test.
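The Durbin statistic that serves as the comparison baseline can be sketched as follows; the proposed ordered-alternative statistic itself is not reproduced here. The particular BIBD, the treatment effects, and the noise level are made-up illustrations.

```python
import numpy as np

def durbin_statistic(blocks, values, t):
    """Durbin rank statistic for a balanced incomplete block design.
    blocks: tuples of treatment labels (0..t-1) in each block;
    values: matching tuples of responses. Under the null of no
    treatment differences the statistic is approximately chi-square
    with t - 1 degrees of freedom."""
    k = len(blocks[0])                  # block size (treatments per block)
    b = len(blocks)                     # number of blocks
    r = b * k // t                      # replications per treatment
    rank_sums = np.zeros(t)
    for trts, obs in zip(blocks, values):
        ranks = np.argsort(np.argsort(obs)) + 1   # within-block ranks 1..k
        for trt, rk in zip(trts, ranks):
            rank_sums[trt] += rk
    centred = rank_sums - r * (k + 1) / 2.0
    return 12.0 * (t - 1) / (r * t * (k - 1) * (k + 1)) * np.sum(centred**2)

# Hypothetical BIBD: t = 4 treatments, b = 4 blocks of size k = 3 (r = 3).
blocks = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
rng = np.random.default_rng(3)
effects = np.array([0.0, 0.5, 1.0, 1.5])          # an ordered alternative
values = [tuple(effects[list(blk)] + rng.normal(0.0, 0.3, 3))
          for blk in blocks]
D = durbin_statistic(blocks, values, t=4)
```

With noiseless, perfectly ordered responses this design attains its maximum statistic value of 7.5; the noisy draw above will usually come close, illustrating why a test that ignores the ordering still has power against ordered shifts.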
|
179 |
Estimation For The Cox Model With Various Types Of Censored Data. Riddlesworth, Tonya, 01 January 2011 (has links)
In survival analysis, the Cox model is one of the most widely used tools. However, up to now there has not been any published work on the Cox model with complicated types of censored data, such as doubly censored data, partly interval-censored data, etc., while these types of censored data have been encountered in important medical studies of cancer, heart disease, diabetes, etc. In this dissertation, we first derive the bivariate nonparametric maximum likelihood estimator (BNPMLE) F_n(t,z) for the joint distribution function F_0(t,z) of survival time T and covariate Z, where T is subject to right censoring, noting that such a BNPMLE F_n has not been studied in the statistical literature. Then, based on this BNPMLE F_n, we derive an empirical likelihood-based (Owen, 1988) confidence interval for the conditional survival probabilities, which is an important and difficult problem in statistical analysis that also has not been studied in the literature. Finally, with this BNPMLE F_n as a starting point, we extend the weighted empirical likelihood method (Ren, 2001 and 2008a) to the multivariate case, and obtain a weighted empirical likelihood-based estimation method for the Cox model. This estimation method is given in a unified form and is applicable to the various types of censored data mentioned above.
|
180 |
Confronting Theory with Evidence: Methods & Applications. Thomas, Stephanie, January 2016 (has links)
Empirical economics frequently involves testing whether a theoretical proposition is evident in a data set. This thesis explores methods for confronting such theoretical propositions with evidence. Chapter 1 develops a methodological framework for assessing whether binary ("Yes"/"No") observations exhibit a discrete change, confronting a theoretical model with data from an experiment investigating the effect of introducing a private finance option into a public system of finance. Chapter 2 expands the framework to identify two discrete changes, applying the method to the evaluation of adherence to clinical practice guidelines. The framework uses a combination of existing analytical techniques and provides results that are robust and visually intuitive. The overall result is a methodology for evaluating guideline adherence that leverages existing patient care records and is generalizable across clinical contexts. An application to field data on the supplemental oxygen administration decisions of volunteer medical first responders illustrates the approach.
Chapter 3 compares the results of two mechanisms used to control industrial emissions. Cap and Trade imposes an absolute cap on emissions, and any emission capacity not utilized by a firm can be sold to other firms via tradable permits. Under Intensity Targets, firms earn (owe) tradable credits for emissions below (above) a baseline implied by a relative intensity target. Cap and Trade is commonly believed to be superior to Intensity Targets because the relative intensity target subsidizes emissions. Chapter 3 reports on an experiment designed to test theoretical predictions in a long-run laboratory environment in which firms make emission-abatement technology and output production decisions when demand for output is uncertain, and banking of tradable permits may or may not be permitted. Particular focus is placed on testing whether the flexibility inherent to Intensity Targets can make them superior to Cap and Trade when demand is stochastic. / Thesis / Doctor of Philosophy (PhD)
|