1. A Study of the Mean Residual Life Function and Its Applications

Mbowe, Omar B 12 June 2006 (has links)
The mean residual life (MRL) function is an important tool for characterizing lifetimes in survival analysis, actuarial science, economics and other social sciences, and reliability. Different methods have been proposed for inference on the MRL, but their coverage probabilities for small sample sizes are unsatisfactory. In this thesis we apply the empirical likelihood method and carry out a simulation study of the MRL function using different statistical distributions. The simulation study compares the empirical likelihood method with the normal approximation method, based on the average lengths and coverage probabilities of confidence intervals; we also compare median lengths of confidence intervals for the MRL. We found that the empirical likelihood method gives better coverage probability and shorter confidence intervals than the normal approximation method for almost all the distributions considered. Applying the two methods to real data, we also found that the empirical likelihood method gives narrower pointwise confidence bands.
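For concreteness, the empirical MRL estimator itself is straightforward. The sketch below is illustrative only (it is not the thesis's empirical likelihood procedure, which requires profiling a constrained likelihood); it checks the estimator against the exponential distribution, whose memorylessness makes the MRL constant and equal to the scale parameter:

```python
import numpy as np

def mean_residual_life(sample, t):
    """Empirical MRL at t: average of (X - t) over observations with X > t."""
    survivors = sample[sample > t]
    if survivors.size == 0:
        return float("nan")
    return float(np.mean(survivors - t))

# Sanity check: for an exponential lifetime, memorylessness implies the
# MRL is the same constant (the scale) at every time point t.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)
mrl = mean_residual_life(x, t=1.0)
```

The empirical likelihood and normal-approximation intervals studied in the thesis would both be centered on estimates like `mrl`; they differ in how the interval around it is calibrated.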
2. Inference procedures based on order statistics

Frey, Jesse C. 01 August 2005 (has links)
No description available.
3. Equivariant Functional Shape Analysis in SO(3) with Applications to Gait Analysis

Telschow, Fabian Joachim Erich 16 September 2016 (has links)
No description available.
4. Quantile-based methods for prediction, risk measurement and inference

Ally, Abdallah K. January 2010 (has links)
The focus of this thesis is the use of theoretical and practical quantile methods to address prediction, risk measurement and inference problems. From a prediction perspective, we consider the problem of creating model-free prediction intervals for a future unobserved value of a random variable drawn from a sample distribution. With the objective of reducing prediction coverage error, two common distribution transformation methods based on the normal and exponential distributions are presented, and they are theoretically shown to attain exact and error-free prediction intervals respectively. The second problem studied is the estimation of expected shortfall via kernel smoothing. The goal here is to introduce methods that reduce the estimation bias of expected shortfall. To this end, several one-step bias-corrected expected shortfall estimators are presented, investigated via simulation studies and compared with one-step estimators. The third problem is the construction of simultaneous confidence bands for quantile regression functions when the predictor variables are constrained within a region. In this context, a method is introduced that makes use of asymmetric Laplace errors in conjunction with a simulation-based algorithm to create confidence bands for quantile and interquantile regression functions. Furthermore, the simulation approach is extended to an ordinary least squares framework to build simultaneous bands for the quantile functions of the classical regression model, both when the model errors are normally distributed and when this assumption is not fulfilled. Finally, attention is directed towards the construction of prediction intervals for realised volatility, exploiting an alternative volatility estimator based on the difference of two extreme quantiles.
The proposed approach uses an AR-GARCH procedure to model time series of intraday quantiles and to forecast the predictive distribution of intraday returns. Moreover, two simple adaptations of an existing model are also presented.
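As a rough illustration of the quantile-based ideas above, the sketch below builds a model-free prediction interval from empirical quantiles and derives a quantile-difference volatility proxy. The function names and the Gaussian calibration constant 1.96 are assumptions of this sketch, not details taken from the thesis:

```python
import numpy as np

def quantile_prediction_interval(sample, alpha=0.05):
    """Model-free (1 - alpha) prediction interval for a future draw,
    taken as the empirical alpha/2 and 1 - alpha/2 quantiles."""
    lo, hi = np.quantile(sample, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 1.0, size=50_000)
lo, hi = quantile_prediction_interval(returns)

# Quantile-difference volatility proxy: for N(0, sigma^2) data the
# 2.5%-97.5% interquantile range is about 2 * 1.96 * sigma.
sigma_hat = (hi - lo) / (2 * 1.96)
```

The appeal of such interval estimators is that they make no parametric assumption about the sampling distribution; the calibration step (here, the Gaussian constant) is where distributional assumptions re-enter.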
5. Effect Sizes, Significance Tests, and Confidence Intervals: Assessing the Influence and Impact of Research Reporting Protocol and Practice

Hess, Melinda Rae 30 October 2003 (has links)
This study addresses research reporting practices and protocols, bridging the gap between the theoretical and conceptual debates typically found in the literature and more realistic applications using data from published research. Specifically, the practice of using findings of statistical analysis as the primary, and often only, basis for results and conclusions of research is investigated by computing effect sizes and confidence intervals and considering how their use might affect the strength of inferences and conclusions reported. Using a sample of published manuscripts from three peer-reviewed journals, central quantitative findings were expressed as dichotomous hypothesis test results, point estimates of effect sizes, and confidence intervals. Studies using three types of statistical analyses were considered for inclusion: t-tests, regression, and Analysis of Variance (ANOVA). The differences in the substantive interpretations of results from these published studies were then examined as a function of these different analytical approaches. Both quantitative and qualitative techniques were used to examine the findings. General descriptive statistical techniques were employed to capture the number of studies and analyses that might have different interpretations if alternative methods of reporting findings were used in addition to traditional tests of statistical significance. Qualitative methods were then used to gauge the impact of these other forms of reporting on the wording of research conclusions. It was found that non-significant results were more likely than significant ones to require supporting effect-size evidence. Regardless of the significance tests, adding information from confidence intervals tended to heavily affect the findings resulting from those tests. The results were interpreted in terms of improving reporting practices in applied research.
Issues noted in this study that are relevant to its primary focus are discussed, with implications for future research. Recommendations are made regarding editorial and publishing practices, both for primary researchers and editors.
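As a small illustration of the reporting practices the study examines, the following sketch computes a standardized effect size (Cohen's d) and a normal-approximation confidence interval for a mean difference on simulated data. The helper names, the simulated group sizes, and the 1.96 critical value are assumptions of this sketch, not taken from the dissertation:

```python
import math
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return float((np.mean(a) - np.mean(b)) / math.sqrt(pooled_var))

def mean_diff_ci(a, b, z=1.96):
    """Normal-approximation 95% CI for the difference in means."""
    se = math.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    diff = float(np.mean(a) - np.mean(b))
    return diff - z * se, diff + z * se

rng = np.random.default_rng(2)
treat = rng.normal(0.3, 1.0, 200)  # hypothetical treatment group
ctrl = rng.normal(0.0, 1.0, 200)   # hypothetical control group
d = cohens_d(treat, ctrl)
lo, hi = mean_diff_ci(treat, ctrl)
```

Reporting `d` and `(lo, hi)` alongside a p-value conveys both the magnitude of the effect and the precision of its estimate, which a dichotomous significant/non-significant verdict alone does not.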
6. Effect sizes, significance tests, and confidence intervals [electronic resource] : assessing the influence and impact of research reporting protocol and practice / by Melinda Rae Hess.

Hess, Melinda Rae. January 2003 (has links)
Includes vita. / Title from PDF of title page. / Document formatted into pages; contains 223 pages. / Thesis (Ph.D.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
7. Omnibus Tests for Comparison of Competing Risks with Covariate Effects via Additive Risk Model

Nguyen, Duytrac Vu 03 May 2007 (has links)
Researchers are often interested in studying competing risks, in which subjects may fail from any one of K causes. Comparing two competing risks in the presence of covariate effects is very important in medical studies. This thesis develops omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. The omnibus tests are derived under the additive risk model, an alternative to the proportional hazards model, using a weighted difference of estimates of cumulative cause-specific hazard rates. Simultaneous confidence bands for the difference of two conditional cumulative incidence functions are also constructed. A simulation procedure is used to sample from the null distribution of the test process, and graphical and numerical techniques are used to detect significant differences between the risks. A melanoma data set is used for illustration.
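A heavily simplified sketch of the cumulative incidence function for two competing risks, with no censoring and no covariates (so none of the thesis's additive-risk-model machinery), checked against the closed form for constant cause-specific hazards. All names and parameter values here are illustrative:

```python
import numpy as np

def cumulative_incidence(times, causes, cause, t):
    """Nonparametric cumulative incidence of one cause by time t
    (uncensored data): fraction of subjects failing from that cause by t."""
    times = np.asarray(times)
    causes = np.asarray(causes)
    return float(np.mean((causes == cause) & (times <= t)))

# Two competing constant hazards h1 = 0.5, h2 = 1.0: each subject fails at
# the smaller of two latent exponential times, and the earlier cause wins.
rng = np.random.default_rng(3)
n = 100_000
t1 = rng.exponential(scale=1 / 0.5, size=n)
t2 = rng.exponential(scale=1 / 1.0, size=n)
fail_time = np.minimum(t1, t2)
fail_cause = np.where(t1 < t2, 1, 2)

# Closed form: CIF_k(t) = h_k / (h1 + h2) * (1 - exp(-(h1 + h2) * t)),
# so CIF_1(1.0) = (0.5 / 1.5) * (1 - exp(-1.5)) ~ 0.259.
cif1 = cumulative_incidence(fail_time, fail_cause, cause=1, t=1.0)
```

The omnibus tests in the thesis compare such incidence functions (and cause-specific hazards) across causes while adjusting for covariates, which this no-covariate sketch does not attempt.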
8. Nonparametric Estimation of Derivatives with Applications

Hall, Benjamin 01 January 2010 (has links)
We review several nonparametric regression techniques and discuss their various strengths and weaknesses with an emphasis on derivative estimation and confidence band creation. We develop a generalized C(p) criterion for tuning parameter selection when interest lies in estimating one or more derivatives and the estimator is both linear in the observed responses and self-consistent. We propose a method for constructing simultaneous confidence bands for the mean response and one or more derivatives, where simultaneous now refers both to values of the covariate and to all derivatives under consideration. In addition we generalize the simultaneous confidence bands to account for heteroscedastic noise. Finally, we consider the characterization of nanoparticles and propose a method for identifying a proper subset of the covariate space that is most useful for characterization purposes.
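Local polynomial regression is one standard technique in this family. Below is a minimal sketch of derivative estimation via a locally weighted quadratic fit; this is a generic textbook construction under a Gaussian kernel, not necessarily the estimator developed in the dissertation, and the bandwidth is chosen by hand rather than by a C(p)-type criterion:

```python
import numpy as np

def local_poly_derivative(x, y, x0, bandwidth, degree=2):
    """Estimate f'(x0) by weighted least squares on a local polynomial in
    (x - x0); the coefficient of the linear term is the derivative estimate."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)      # Gaussian kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)  # [1, (x-x0), (x-x0)^2]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return float(beta[1])

# Noisy observations of sin(x); the true derivative at x0 = 1 is cos(1).
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 2000))
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)
dhat = local_poly_derivative(x, y, x0=1.0, bandwidth=0.3)
```

Tuning-parameter selection matters more for derivatives than for the mean response, since differentiation amplifies noise; that is the motivation for a generalized C(p) criterion targeted at derivative estimation.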
9. Non-asymptotic bounds for prediction problems and density estimation.

Minsker, Stanislav 05 July 2012 (has links)
This dissertation investigates learning scenarios where a high-dimensional parameter has to be estimated from a sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning. Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. A prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X which it considers to be the most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods. Second, we propose an active learning algorithm which attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem. The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on a penalized empirical risk minimization algorithm, and we show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance.
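The active-learning idea described above (query the instance the current learner finds most informative) can be sketched for a toy one-dimensional threshold class. Everything below, including the bisection-style query rule, is an illustrative construction and not the dissertation's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
pool = rng.uniform(0.0, 1.0, 2000)  # pool of unlabeled instances

def oracle(x):
    """Hidden labeling rule the learner may query one point at a time."""
    return int(x > 0.5)

# Maintain the interval of uncertainty [lo, hi] around the unknown
# threshold and always query the pool point nearest its midpoint: the
# "most informative" instance for this hypothesis class.
lo, hi = 0.0, 1.0
unqueried = set(range(len(pool)))
for _ in range(12):
    mid = (lo + hi) / 2
    i = min(unqueried, key=lambda j: abs(pool[j] - mid))
    unqueried.remove(i)
    if oracle(pool[i]):
        hi = pool[i]  # labeled 1: threshold lies below this point
    else:
        lo = pool[i]  # labeled 0: threshold lies above this point
threshold_estimate = (lo + hi) / 2
```

With noiseless labels the uncertainty interval roughly halves per query, so label complexity is logarithmic in the target accuracy; the dissertation's contribution concerns what is achievable in the far harder noisy, distribution-adaptive setting.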
10. Change point estimation in noisy Hammerstein integral equations

Frick, Sophie 02 December 2010 (has links)
No description available.
