111

On a general repair model for repairable systems

Unknown Date (has links)
The minimal repair process assumes that upon repair a system is restored to its functioning condition just before failure. For systems with few vulnerable components it is more reasonable to assume that repair actually brings the state of the system to a level that is between "completely new" and "prior to failure". Kijima (1989) introduced models for such a repair process based on the notion of age reduction. Under age reduction, the system, upon repair, is functionally the same as an identical system of lesser age. An alternative to age reduction is the notion of extra life. Under this notion, the system, upon repair, enjoys a longer expected remaining life than it would have had under a minimal repair.

In this dissertation, we introduce a repair model that generalizes Kijima's models so as to include both the notions of age reduction and extra life. We then look at the problem of estimating system reliability based on observations of the repair process from several systems working independently. We make use of counting processes and martingales to derive large sample properties of the estimator.

Source: Dissertation Abstracts International, Volume: 56-07, Section: B, page: 3837.
Major Professors: Myles Hollander; Jayaram Sethuraman.
Thesis (Ph.D.)--The Florida State University, 1995.
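As a concrete illustration of the age-reduction idea (Kijima's Type I virtual-age model), the following sketch simulates successive failure times of a repairable system with a Weibull baseline life distribution. It illustrates only the notion discussed above, not the generalized repair model developed in the dissertation; the function name, the Weibull shape and scale values, and the repair-effectiveness parameter q are arbitrary choices for the example.

```python
import numpy as np

def simulate_kijima_type1(shape=1.5, scale=100.0, q=0.5, n_failures=10, rng=None):
    """Simulate successive failure times under Kijima Type I age reduction.

    After the n-th repair the virtual age is V_n = V_{n-1} + q * X_n, where
    X_n is the n-th inter-failure time and 0 <= q <= 1 (q = 1 corresponds to
    minimal repair, q = 0 to perfect repair).
    """
    rng = np.random.default_rng(rng)
    failure_times = []
    t, v = 0.0, 0.0                     # calendar time and virtual age
    for _ in range(n_failures):
        # Draw the next inter-failure time from the Weibull distribution
        # conditional on survival past the current virtual age v
        # (inverse-transform sampling of the conditional distribution).
        u = rng.uniform()
        x = scale * (-np.log(1 - u) + (v / scale) ** shape) ** (1.0 / shape) - v
        t += x                          # calendar time of this failure
        v += q * x                      # repair reduces the age increment by q
        failure_times.append(t)
    return np.array(failure_times)

print(simulate_kijima_type1(rng=42))
```

Setting q = 1 recovers minimal repair, while q = 0 corresponds to a repair that restores the system to new condition.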
112

Robust Variable Selection

Schumann, David Heinz 20 April 2009 (has links)
The prevalence of extreme outliers in many regression data sets has led to the development of robust methods that can handle these observations. While much attention has been placed on the problem of estimating regression coefficients in the presence of outliers, few methods address variable selection. We develop and study robust versions of the forward selection algorithm, one of the most popular standard variable selection techniques. Specifically we modify the VAMS procedure, a version of forward selection tuned to control the false selection rate, to simultaneously select variables and eliminate outliers. In an alternative approach, robust versions of the forward selection algorithm are developed using the robust forward addition sequence associated with the generalized score statistic. Combining the robust forward addition sequence with robust versions of BIC and the VAMS procedure, a final model is obtained. Monte Carlo simulation compares these robust methods to current robust methods like the LSA and LAD-LASSO. Further simulation investigates the relationship between the breakdown point of the estimation methods central to each procedure and the breakdown point of the final variable selection method.
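The VAMS-based procedure itself is not reproduced here, but the following sketch shows the general shape of a robust forward selection: each candidate model is fit and scored with a robust (Huber) criterion rather than the ordinary residual sum of squares, so extreme outliers carry less weight in the selection. The `huber_fit` helper, the tuning constant 1.345, and the fixed number of steps are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def huber_fit(X, y, delta=1.345, n_iter=50):
    """Huber M-estimate of the regression coefficients via iteratively
    reweighted least squares (a simple robust alternative to OLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12            # robust (MAD) scale
        w = np.minimum(1.0, delta * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

def robust_forward_selection(X, y, max_terms=5):
    """Greedy forward selection scored by a robust (Huber) loss instead of
    the residual sum of squares.  Column 0 of X is assumed to be the intercept."""
    selected, remaining = [0], list(range(1, X.shape[1]))
    for _ in range(min(max_terms, len(remaining))):
        def huber_loss(j):
            cols = selected + [j]
            r = y - X[:, cols] @ huber_fit(X[:, cols], y)
            s = np.median(np.abs(r)) / 0.6745 + 1e-12
            u = r / s
            return np.sum(np.where(np.abs(u) <= 1.345,
                                   0.5 * u**2, 1.345 * (np.abs(u) - 0.5 * 1.345)))
        best = min(remaining, key=huber_loss)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In practice a stopping rule, such as one tuned to control the false selection rate, would replace the fixed `max_terms`.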
113

Fast FSR Methods for Second-Order Linear Regression Models

Crews, Hugh Bates 13 May 2008 (has links)
Many variable selection techniques have been developed that focus on first-order linear regression models. In some applications, such as modeling response surfaces, fitting second-order terms can improve predictive accuracy. However, the number of spurious interactions can be large, leading to poor results with many methods. We focus on forward selection, describing algorithms that use the natural hierarchy existing in second-order linear regression models to limit spurious interactions. We then develop stopping rules by extending False Selection Rate methodology to these algorithms. In addition, we describe alternative estimation methods for fitting regression models, including the LASSO, CART, and MARS. We also propose a general method for controlling multiple-group false selection rates, which we apply to second-order linear regression models. By estimating a separate entry level for first-order and second-order terms, we obtain equal contributions to the false selection rate from each group. We compare the methods via Monte Carlo simulation and apply them to optimizing response surface experimental designs.
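As a sketch of how the natural hierarchy can limit spurious interactions, the following forward-selection routine only admits a two-way interaction to the candidate set once both of its parent main effects are already in the model (strong heredity). The helper names and the residual-sum-of-squares scoring are illustrative assumptions; the stopping rules based on False Selection Rate methodology are not reproduced.

```python
import numpy as np
from itertools import combinations

def build_column(X, term):
    """A term is a tuple of predictor indices: (j,) is a main effect and
    (j, k) is the two-way interaction X_j * X_k."""
    col = np.ones(X.shape[0])
    for j in term:
        col = col * X[:, j]
    return col

def hierarchical_forward_selection(X, y, max_terms=6):
    """Forward selection for a second-order model under strong hierarchy:
    an interaction (j, k) only becomes a candidate once both parent main
    effects (j,) and (k,) are already in the model."""
    n, p = X.shape
    selected, design = [], np.ones((n, 1))          # start with the intercept
    for _ in range(max_terms):
        mains_in = sorted(t[0] for t in selected if len(t) == 1)
        candidates = [(j,) for j in range(p) if (j,) not in selected]
        candidates += [pair for pair in combinations(mains_in, 2)
                       if pair not in selected]
        if not candidates:
            break
        def rss(term):
            A = np.column_stack([design, build_column(X, term)])
            beta = np.linalg.lstsq(A, y, rcond=None)[0]
            return np.sum((y - A @ beta) ** 2)
        best = min(candidates, key=rss)
        selected.append(best)
        design = np.column_stack([design, build_column(X, best)])
    return selected
```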
114

Improving the Efficiency of Tests and Estimators of Treatment Effect with Auxiliary Covariates in the Presence of Censoring

Lu, Xiaomin 30 May 2007 (has links)
In most randomized clinical trials, the primary response variable, for example the survival time, is not observed directly when patients enroll in the study but rather after some period of time (lag time). Such a response variable is often missing for some patients because of censoring: administrative censoring occurs when the study ends before all patients have had the opportunity to observe their response, and censoring may also result from patient dropout. It is often assumed that censoring occurs at random, referred to as noninformative censoring; in many cases, however, such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimate or test for the treatment effect may be biased. In this work, we consider two situations. In the first, we consider the special case where the censoring time is noninformative and the survival time itself is the time-lagged response. We use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the unconditional log-hazard ratio parameter. Prognostic auxiliary covariates are used to derive estimators that are more efficient than the traditional maximum partial likelihood estimator, and the corresponding Wald tests are more powerful than the logrank test. In the second situation, we extend the results of the first to the general case where the censoring time can be informative and the time-lagged response can be of any type. We again use semiparametric theory to derive a class of consistent and asymptotically normal estimators of the treatment effect. Prognostic baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are used to derive estimators that both account for informative censoring and are more efficient than estimators that do not use the auxiliary covariates.
115

Sparse Estimation and Inference for Censored Median Regression

Shows, Justin Hall 20 July 2009 (has links)
Censored median regression models, which enjoy a robustness property, have been shown to be useful for analyzing a variety of censored survival data. We study sparse estimation and inference for censored median regression. The new method minimizes an inverse censoring probability weighted least absolute deviation subject to the adaptive LASSO penalty. We show that, with a proper choice of the tuning parameter, the proposed estimator has desirable theoretical properties such as root-n consistency and asymptotic normality. The estimator can also identify the underlying sparse model consistently. We propose using a resampling method to estimate the variance of the proposed estimator. Furthermore, the new procedure enjoys great computational advantages, since its entire solution path can be obtained efficiently. The method can also be extended to multivariate survival data, where there is a natural or artificial clustering structure. The performance of our estimator is evaluated by extensive simulations and two real data applications.
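A minimal sketch of the weighted criterion, assuming the censoring-survival function is estimated by Kaplan-Meier and the response is log survival time; the adaptive LASSO penalty, tuning-parameter selection, and resampling variance estimation described above are omitted, and the function names are hypothetical.

```python
import numpy as np

def censoring_km_survival(times, delta):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    treating censoring (delta == 0) as the 'event'.  Ties and left limits
    are ignored in this sketch."""
    order = np.argsort(times)
    d = delta[order]
    n = len(d)
    at_risk = n - np.arange(n)
    surv = np.cumprod(1.0 - (d == 0) / at_risk)
    G = np.ones(n)
    G[order] = surv                      # G evaluated at each subject's own time
    return G

def weighted_lad_objective(beta, X, times, delta):
    """Inverse-censoring-probability weighted least absolute deviation for a
    median regression of log(T) on X; only uncensored subjects (delta == 1)
    receive positive weight delta_i / G(T_i)."""
    G = censoring_km_survival(times, delta)
    w = delta / np.maximum(G, 1e-8)
    return np.sum(w * np.abs(np.log(times) - X @ beta))
```

The objective could be minimized with a general-purpose optimizer, and the adaptive LASSO penalty would then be added as a weighted sum of the absolute coefficients.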
116

Variable Selection in Linear Mixed Model for Longitudinal Data

Lan, Lan 19 May 2006 (has links)
Fan and Li (JASA, 2001) proposed a family of variable selection procedures for certain parametric models via a nonconcave penalized likelihood approach, where significant variable selection and parameter estimation were done simultaneously, and the procedures were shown to have the oracle property. In this presentation, we extend the nonconcave penalized likelihood approach to linear mixed models for longitudinal data. Two new approaches are proposed to select significant covariates and estimate fixed effect parameters and variance components. In particular, we show the new approaches also possess the oracle property when the tuning parameter is chosen appropriately. We assess the performance of the proposed approaches via simulation and apply the procedures to data from the Multicenter AIDS Cohort Study.
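A minimal sketch of the nonconcave (SCAD) penalty that underlies this approach, written as it might be added to a negative log-likelihood; the combination with the linear mixed model likelihood and the treatment of variance components are not reproduced here, and `negloglik` is a placeholder for whatever likelihood routine is used.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), applied elementwise to the
    fixed-effect coefficients; a = 3.7 is their suggested default."""
    b = np.abs(beta)
    p1 = lam * b
    p2 = -(b**2 - 2 * a * lam * b + lam**2) / (2 * (a - 1))
    p3 = (a + 1) * lam**2 / 2
    return np.where(b <= lam, p1, np.where(b <= a * lam, p2, p3))

def penalized_negloglik(beta, negloglik, lam, a=3.7):
    """Objective of the nonconcave penalized likelihood approach:
    negative log-likelihood plus the SCAD penalty on the coefficients."""
    return negloglik(beta) + np.sum(scad_penalty(beta, lam, a))
```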
117

A Stationary Stochastic Approximation Algorithm for Estimation in the GLMM

Chang, Sheng-Mao 18 May 2007 (has links)
Estimation in generalized linear mixed models is challenging because the marginal likelihood is an integral without closed form. In many of the leading approaches, such as Laplace approximation and Monte Carlo integration, the marginal likelihood is approximated, and the maximum likelihood estimate (MLE) can only be reached with error. An alternative, the simultaneous perturbation stochastic approximation (SPSA) algorithm, is designed to maximize an integral and can be employed to find the exact MLE under the same circumstances. However, the SPSA does not directly provide an error estimate if the algorithm is stopped after a finite number of steps. In order to estimate the MLE properly with a statistical error bound, we propose the stationary SPSA (SSPSA) algorithm. Assuming that the marginal likelihood (the objective function) is quadratic around the MLE, the SSPSA takes the form of a random-coefficient vector autoregressive process. Under mild conditions, the algorithm yields a strictly stationary sequence whose mean is asymptotically unbiased for the MLE and has a closed-form variance. The SSPSA sequence is also ergodic, provided certain constraints hold on the step size (a parameter of the algorithm) and on the mechanism that directs the algorithm's search of the parameter space. Sufficient conditions for stationarity and ergodicity are provided as a guideline for choosing the step size. Several implementation issues are addressed in the thesis: paired numerical derivatives, scaling, and importance sampling. Following the simulation study, we apply the SSPSA to several GLMMs: epilepsy seizure data, lung cancer data, and salamander mating data. For the first two cases, SSPSA estimates are similar to published results, whereas for the salamander data our solution differs greatly from others.
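For reference, a minimal sketch of the basic SPSA recursion that the SSPSA modifies: the gradient is approximated from two function evaluations under a single simultaneous random perturbation of all coordinates. The gain sequences, the Rademacher perturbations, and the toy objective are standard illustrative choices, not the stationary algorithm or the GLMM likelihood from the thesis.

```python
import numpy as np

def spsa_maximize(f, theta0, n_iter=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, rng=None):
    """Basic simultaneous perturbation stochastic approximation (SPSA).

    At each step the gradient of the (possibly noisy) objective f is
    approximated from just two evaluations, using a single random
    simultaneous perturbation of all coordinates.
    """
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        a_k = a / (k + 1) ** alpha                  # step-size (gain) sequence
        c_k = c / (k + 1) ** gamma                  # perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher signs
        g_hat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2 * c_k * delta)
        theta = theta + a_k * g_hat                 # ascent step (maximization)
    return theta

# Toy check on a smooth concave surrogate (not a GLMM likelihood):
print(spsa_maximize(lambda t: -np.sum((t - 3.0) ** 2), np.zeros(2), rng=1))
```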
118

Statistical inference for correlated data based on censored observations

Park, Jung Wook 14 June 2005 (has links)
Many physical quantities measured over time and space are observed with data irregularities, such as truncation (detection limits) or censoring. Practitioners often disregard censored data cases, which may result in inefficient estimates; on the other hand, treating censored data as observed values will lead to biased estimates. For instance, the values collected by a monitoring device may be subject to a specific detection limit, and the device records the limit itself, or a constant exceeding it, whenever the real value exceeds that limit. We present an attractive remedy for handling censored or truncated data collected over time or space. Our method produces (asymptotically) unbiased estimates that are more efficient than estimates based on treating censored observations as completely observed. In particular, we introduce an imputation method particularly well suited for fitting statistical models to correlated observations in the presence of censored data. The proposed imputation method involves generating random samples from the conditional distribution of the censored data given the (completely) observed data and current estimates of the parameters. The parameter estimates are then updated based on the imputed and completely observed data until convergence. Under Gaussian processes, such a conditional distribution turns out to be a truncated multivariate normal distribution, and we use a Gibbs sampling method to generate samples from it. We demonstrate the effectiveness of the technique for a problem common to many correlated data sets and describe its application to several other frequently encountered situations. First, we discuss the use of the imputation technique for stationary time series data assuming an autoregressive moving average model. Then, we relax the model assumption and discuss how the imputation method works with a nonparametric estimate of the covariance matrix. The use of the imputation method is not limited to time series models and can be applied to other types of correlated data, such as spatial data; a lattice model is discussed as another field of application. For pedagogic purposes, our illustration of the approach based on a simulation study is limited to some simple models, such as a first-order autoregressive time series model, a first-order moving average time series model, and a first-order simultaneous autoregressive error model, with left or right censoring. However, the method can easily be extended to more complicated models. We also derive the Fisher information matrix for an AR(1) process containing censored observations and explain the effect of censoring on the efficiency gain of the estimates using the trace of the Fisher information matrix.
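A minimal sketch of the imputation idea for the simplest case discussed above, a mean-zero AR(1) series with an upper detection limit: censored values are redrawn from their conditional (truncated) normal distribution given their neighbours and the current parameter estimates, and the AR(1) parameters are then re-estimated. The function name, the fixed number of sweeps, and the simple moment-based updates are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np
from scipy.stats import truncnorm

def impute_censored_ar1(y, censored, limit, n_sweeps=200, rng=None):
    """Imputation for a mean-zero AR(1) series right-censored at an upper
    detection limit: values recorded at `limit` are repeatedly redrawn from
    their conditional normal distribution given the neighbouring values and
    the current (phi, sigma2) estimates, truncated to [limit, inf); the AR(1)
    parameters are then re-estimated from the completed series."""
    rng = np.random.default_rng(rng)
    x = np.asarray(y, dtype=float).copy()
    phi, sigma2 = 0.0, float(np.var(x))
    for _ in range(n_sweeps):
        for t in np.where(censored)[0]:
            if 0 < t < len(x) - 1:
                # conditional law of x_t given its two neighbours under AR(1)
                m = phi * (x[t - 1] + x[t + 1]) / (1.0 + phi**2)
                v = sigma2 / (1.0 + phi**2)
            elif t == 0:
                m, v = phi * x[1], sigma2
            else:
                m, v = phi * x[t - 1], sigma2
            a = (limit - m) / np.sqrt(v)              # lower truncation point
            x[t] = truncnorm.rvs(a, np.inf, loc=m, scale=np.sqrt(v),
                                 random_state=rng)
        # re-estimate the AR(1) parameters from the completed series
        phi = float(np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1]))
        sigma2 = float(np.mean((x[1:] - phi * x[:-1]) ** 2))
    return x, phi, sigma2
```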
119

Multivariate Spatial Temporal Statistical Models for Applications in Coastal Ocean Prediction

Foley, Kristen Madsen 06 July 2006 (has links)
Estimating the spatial and temporal variation of surface wind fields plays an important role in modeling atmospheric and oceanic processes. This is particularly true for hurricane forecasting, where numerical ocean models are used to predict the height of the storm surge and the degree of coastal flooding. We use multivariate spatial-temporal statistical methods to improve coastal storm surge prediction using disparate sources of observation data. An Ensemble Kalman Filter is used to assimilate water elevation data into a three-dimensional primitive-equation ocean model. We find that data assimilation improves the water elevation estimates in a case study of Hurricane Charley (2004). In addition, we investigate the impact of inaccuracies in the wind field inputs, which are the main forcing of the numerical model in storm surge applications. A new multivariate spatial statistical framework is developed to improve the estimation of these wind inputs. A spatial linear model of coregionalization (LMC) is used to account for the cross-dependency between the two orthogonal wind components. A Bayesian approach is used to estimate the parameters of the multivariate spatial model and of a physically based wind model, while accounting for potential additive and multiplicative bias in the observed wind data. This spatial model consistently improves parameter estimation and prediction for surface wind data in the Hurricane Charley case study when compared to the original physical wind model. These methods are also shown to improve storm surge estimates when used as the forcing fields for the coastal ocean model. Finally, we describe a new framework for estimating multivariate nonstationary spatial-temporal processes based on an extension of the LMC model. We compare this approach to other multivariate spatial models and describe an application to surface wind fields from Hurricane Floyd (1999).
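A minimal sketch of a stochastic Ensemble Kalman Filter analysis step of the kind used to assimilate observed water elevations; the observation operator `H`, the error covariance `R`, and the perturbed-observation formulation are generic placeholders, not the configuration used with the primitive-equation ocean model.

```python
import numpy as np

def enkf_analysis(X_f, y, H, R, rng=None):
    """Stochastic EnKF analysis step with perturbed observations.

    X_f : (n_state, n_ens) forecast ensemble
    y   : (n_obs,) observation vector (e.g., water elevations)
    H   : (n_obs, n_state) linear observation operator
    R   : (n_obs, n_obs) observation-error covariance
    """
    rng = np.random.default_rng(rng)
    n_state, n_ens = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_f = A @ A.T / (n_ens - 1)                        # sample forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    # Perturb the observations so the analysis ensemble has the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X_f + K @ (Y - H @ X_f)
```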
120

Numerical Differentiation Using Statistical Design

Bodily, Chris H 18 July 2002 (has links)
Derivatives are frequently required by numerical procedures across many disciplines. Numerical differentiation can be useful for approximating derivatives. This dissertation introduces computational differentiation (the process by which derivatives are obtained with a computer), focusing on response surface methodology (RSM) designs for approximating derivatives. The RSM designs are compared with two competing numerical methods: a rival saturated statistical design approach and a method employing finite differencing. A covariance model incorporating function curvature and computer round-off error is proposed for estimating the variances of the derivative approximations. These variances and the computational workload each method requires become the basis for comparing the derivative approximations. A diagnostic test for variable scaling errors is also described.
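For comparison with the finite-differencing competitor mentioned above, a minimal central-difference gradient sketch; the default step size is a standard rule of thumb that balances truncation against round-off error, and it is not the RSM-design approach developed in the dissertation.

```python
import numpy as np

def central_difference_gradient(f, x, h=None):
    """Approximate the gradient of f at x with central differences.

    The default step h ~ eps**(1/3) * max(1, |x_i|) balances the O(h^2)
    truncation error against round-off error in evaluating f.
    """
    x = np.asarray(x, dtype=float)
    if h is None:
        h = np.finfo(float).eps ** (1 / 3) * np.maximum(1.0, np.abs(x))
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h[i] if np.ndim(h) else h
        g[i] = (f(x + e) - f(x - e)) / (2 * e[i])
    return g

# Example: gradient of f(x) = x0^2 + 3*x1 at (2, 1) is (4, 3)
print(central_difference_gradient(lambda v: v[0]**2 + 3 * v[1], [2.0, 1.0]))
```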
