131

Multi-Systems Operation and Control

Fenner, Joel S. 07 April 2000 (has links)
The typical process modeling and control approach of dealing with each manufacturing process stage in isolation is re-evaluated, and a new approach is developed for the case of a series of process stages and the case of several similar processes operating in parallel at different sites. The serial multi-stage modeling framework uses the possible correlation between stages as a tool to achieve tighter control of a nonstationary process line. The case of several similar processes operating in parallel at different sites is termed global manufacturing. Global manufacturing allows updated estimation of the sensitivity or slope parameters over the course of the process in some cases, since the site effect causes a spread in the range of the inputs. This spread in the inputs provides an opportunity for stable sensitivity estimates without perturbing the process, as is usually necessary. The Bayesian parallel-site estimation procedure is shown to have broad application to any scenario where the data are collected at various related but not identical sites. Specifically, uniformity modeling is explored using the Bayesian estimation procedure. The multi-systems operation and control methodologies developed for the serial and global manufacturing cases provide valuable tools for the improvement of manufacturing processes in many industries.
132

A Proposed Framework for Establishing Optimal Genetic Designs for Estimating Narrow-sense Heritability

Silva, Carlos H. 14 April 2000 (has links)
We develop a framework for establishing sample sizes in breeding plans, so that one is able to estimate narrow-sense heritability with the smallest possible variance for a given amount of effort. We call this an optimal genetic design. The framework allows one to compare the variances of estimators of narrow-sense heritability, when estimated from each of the alternative plans under consideration, and does not require data simulation, but does require computer programming. We apply it to the study of a peanut (Arachis hypogaea) breeding example, in order to determine the ideal number of plants to be selected at each generation. We also propose a methodology that allows one to estimate the additive genetic variance for the estimation of narrow-sense heritability using MINQUE and REML, without an analysis of variance model. It requires that one can build the matrix of genetic variances and covariances among the subjects on which observations are taken. This methodology can be easily adapted to ANOVA-based methods, and we illustrate this using Henderson's Method III. We compare Henderson's Method III, MINQUE, and REML under the proposed methodology, with an emphasis on comparing these estimation methods with non-normally distributed data and unbalanced designs. A location-scale transformation of the beta density is proposed for simulation of non-normal data.
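The abstract mentions simulating non-normal data through a location-scale transformation of the beta density. The sketch below is one possible reading of that idea, not the author's implementation: a Beta(a, b) draw is shifted and rescaled to a requested mean and standard deviation; the function name and example parameters are hypothetical.

```python
import numpy as np

def scaled_beta(a, b, mean, sd, size, rng=None):
    """Draw non-normal data from a location-scale transform of Beta(a, b).

    A Beta(a, b) variate has mean m = a/(a+b) and variance
    v = a*b / ((a+b)**2 * (a+b+1)); shifting and rescaling gives a
    bounded, possibly skewed variable with the requested mean and sd.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = a / (a + b)
    v = a * b / ((a + b) ** 2 * (a + b + 1))
    x = rng.beta(a, b, size=size)
    return mean + sd * (x - m) / np.sqrt(v)

# Example: right-skewed "errors" with mean 0 and standard deviation 1
errors = scaled_beta(2.0, 8.0, mean=0.0, sd=1.0, size=1000)
```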
133

Information-based Group Sequential Tests With Lagged or Censored Data

Kung, Meifen 23 June 2000 (has links)
Conventionally, the values of nuisance parameters assumed in a statistical design are often erroneous, which may result in overpowering or underpowering a test based on traditional sample size calculations. In this thesis, we propose to use Fisher information data monitoring in group sequential studies, not only to allow early stopping in a clinical trial but also to maintain the desired power of the test for all values of the nuisance parameters. Simulation studies for the simple case of comparing two response rates are used to demonstrate that a test of a single parameter of interest with a specified alternative achieves the desired power under information-based monitoring regardless of the values of the nuisance parameters, provided that this parameter of interest can be estimated efficiently. The emphasis in this part is to show how information-based monitoring can be implemented in practice and to demonstrate the accuracy of the corresponding operating characteristics in simulation studies. When there is lag time in reporting, standard statistical techniques often lead to biased inferences on interim data. A maximum lag estimator ensures complete information by using data collected before a lag-time period; the estimator is unbiased but less powerful. We propose an inverse probability weighted estimator which accounts for censoring and is consistent and asymptotically normal in estimating the mean of dichotomous variables. The joint distribution of the test statistics at different times has the covariance structure of a sequential process with independent increments, which allows the use of information-based monitoring. Simulation studies show that our estimator preserves the type I and type II errors and reduces the number of participants required in a trial. A future approach to finding an efficient estimator is also suggested in Chapter 3.
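As a minimal sketch of the information-based monitoring idea for two response rates, the code below computes the standard maximum-information target for a given effect size, power, and level, and the information accrued at an interim look as the reciprocal of the variance of the estimated difference. It is an illustration of the general device, not the thesis's procedure; all numerical values are hypothetical.

```python
from scipy.stats import norm

def max_information(delta, alpha=0.05, power=0.90):
    """Target (maximum) Fisher information for detecting a difference
    `delta` in response rates with a two-sided level-alpha test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ((z_a + z_b) / delta) ** 2

def accrued_information(p1_hat, n1, p2_hat, n2):
    """Estimated information about p1 - p2 at an interim look:
    the reciprocal of the variance of the estimated difference."""
    var = p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2
    return 1.0 / var

# Hypothetical interim look: monitoring continues until the observed
# information reaches the pre-specified target, whatever the nuisance
# parameters (the individual response rates) turn out to be.
I_max = max_information(delta=0.15)
I_now = accrued_information(p1_hat=0.42, n1=180, p2_hat=0.30, n2=175)
print(f"information fraction: {I_now / I_max:.2f}")
```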
134

Computational approaches for maximum likelihood estimation for nonlinear mixed models

Hartford, Alan Hughes 19 July 2000 (has links)
The nonlinear mixed model is an important tool for analyzing pharmacokinetic and other repeated-measures data. In particular, these models are used when the measured response for an individual has a nonlinear relationship with unknown, random, individual-specific parameters. Ideally, the method of maximum likelihood is used to find estimates for the parameters of the model after integrating out the random effects in the conditional likelihood. However, closed-form solutions to the integral are generally not available. As a result, methods have previously been developed to find approximate maximum likelihood estimates for the parameters in the nonlinear mixed model. These approximate methods include first-order linearization, Laplace's approximation, importance sampling, and Gaussian quadrature. The methods are available today in several software packages for models of limited sophistication; constant conditional error variance is required for proper use of most software. In addition, distributional assumptions are needed. This work investigates how robust two of these methods, first-order linearization and Laplace's approximation, are to these assumptions. The finding is that Laplace's approximation performs well, resulting in better estimation than first-order linearization when both models converge to a solution.

A method must provide good estimates of the likelihood at points in the parameter space near the solution. This work compares this ability among the numerical integration techniques Gaussian quadrature, importance sampling, and Laplace's approximation. A new "scaled" and "centered" version of Gaussian quadrature is found to be the most accurate technique. In addition, the technique requires evaluation of the integrand at only a few abscissas. Laplace's method also performs well; it is more accurate than importance sampling with even 100 importance samples over two dimensions. Even so, Laplace's method still does not perform as well as Gaussian quadrature. Overall, Laplace's approximation performs better than expected and is shown to be a reliable method while still being computationally less demanding.

This work also introduces a new method to maximize the likelihood. This method can be sharpened to any desired level of accuracy. Stochastic approximation is incorporated to continue sampling until enough information is gathered to result in accurate estimation. This new method is shown to work well for linear mixed models, but is not yet successful for the nonlinear mixed model.
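To make the quadrature step concrete, here is a small sketch of plain (non-adaptive) Gauss-Hermite quadrature for integrating out a single normal random effect in a subject's marginal likelihood contribution. It is not the "scaled and centered" variant the thesis develops, and the conditional likelihood used in the example is a hypothetical lognormal-response stand-in.

```python
import numpy as np

def marginal_loglik_gh(y, cond_lik, sigma_b, n_nodes=9):
    """Approximate log of the integral of p(y | b) * N(b; 0, sigma_b^2) db
    for one subject by Gauss-Hermite quadrature, using the change of
    variable b = sqrt(2) * sigma_b * x to absorb the normal kernel.

    `cond_lik(y, b)` must return the conditional likelihood p(y | b).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma_b * nodes
    vals = np.array([cond_lik(y, bi) for bi in b])
    return np.log(np.sum(weights * vals) / np.sqrt(np.pi))

# Hypothetical example: lognormal responses whose log-median depends on
# a subject-specific random effect b.
def cond_lik(y, b, theta=1.0, sigma_e=0.3):
    mu = theta + b
    return np.prod(np.exp(-(np.log(y) - mu) ** 2 / (2 * sigma_e ** 2))
                   / (y * sigma_e * np.sqrt(2 * np.pi)))

y_subject = np.array([2.5, 3.1, 2.8])
print(marginal_loglik_gh(y_subject, cond_lik, sigma_b=0.5))
```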
135

Evaluation of Frequentist and Bayesian Inferences by Relevant Simulation

Kim, Yuntae 03 August 2000 (has links)
<p>Statistical inference procedures in real situations often assume not only the basic assumptions on which the justification of the inference is based, but also some additional assumptions which could affect the justification of the inference. In many cases, the inference procedure is too complicated to be evaluated analytically. Simulation-based evaluation for such an inference could be an alternative, but generally simulation results would be valid only under specific simulated circumstances. For simulation in the parametric model set-up, the simulation result may depend on the chosen parameter value. In this study, we suggest an evaluation methodology relying on an observation-based simulation for frequentist and Bayesian inferences on parametric models. We describe our methodology with the suggestions for three aspects: factors to be measured for the evaluation, measurement of the factors, and evaluation of the inference based on the factors measured. Additionally, we provide an adjustment method for inferences found to be invalid. The suggested methodology can be applied to any inference procedure, no matter how complicated as long as it can be codified for repetition on the computer. Unlike general simulation in the parametric model, the suggested methodology provides results which do not depend on specific parameter values.The argument about the new EPA standards for particulate matter (PM) has led to some statistical analyses for the effect of PM on mortality. Typically, regression model of mortality of the elderly is constructed. The covariates of the regression model includes a suite of highly correlated particulate matter related variables in addition to trend and meteorology variables as the confounding nuisance variables. The classical strategy for the regression model having a suite of highly correlated covariates is to select a model based on a subset of the covariates and then make inference assuming the subset model. However, one might be concerned about the validity of the inferences under his classical strategy since it ignores the uncertainty in the choice of a subset model. The suggested evaluation methodology was applied to evaluate the inference procedures for the PM-mortality regression model with some data from Phoenix, Arizona taken from 1995 to early 1998 . The suggested methodology provided valid evaluation results for various inference procedures and also provided adjusted inferences superior to Bonferroni adjustment. Based on our results, we are able to make conclusions about the short-term effect of PM which does not suffer from validity concerns.<P>
136

Nonparametric Spatial Analysis in Spectral and Space Domains

Kim, Hyon-Jung 23 August 2000 (has links)
KIM, HYON-JUNG. Variance Estimation in Spatial Regression Using a Nonparametric Semivariogram Based on Residuals. (Under the direction of Professor Dennis D. Boos.) The empirical semivariogram of residuals from a regression model with stationary errors may be used to estimate the covariance structure of the underlying process. For prediction (kriging), the bias of the semivariogram estimate induced by using residuals instead of errors has only a minor effect because the bias is small for small lags. However, for estimating the variance of estimated regression coefficients and of predictions, the bias due to using residuals can be quite substantial. Thus we propose a method for reducing the bias in empirical semivariogram estimates based on residuals. The adjusted empirical semivariogram is then isotonized, made positive definite, and used to estimate the variance of estimated regression coefficients in a general estimating equations setup. Simulation results for least squares and robust regression show that the proposed method works well in linear models with stationary correlated errors.

Spectral Analysis with Spatial Periodogram and Data Tapers. (Under the direction of Professor Montserrat Fuentes.) The spatial periodogram is a nonparametric estimate of the spectral density, which is the Fourier transform of the covariance function. The periodogram is a useful tool to explain the dependence structure of a spatial process. Tapering (data filtering) is an effective technique to remove edge effects, even in high-dimensional problems, and can be applied to spatial data in order to reduce the bias of the periodogram. However, the variance of the periodogram increases as the bias is reduced. We present a method to choose an appropriate smoothing parameter for data tapers and obtain better estimates of the spectral density by improving the properties of the periodogram. The smoothing parameter is selected taking into account the trade-off between bias and variance of the tapered periodogram. We introduce a new asymptotic approach for spatial data called "shrinking asymptotics", which combines increasing-domain and fixed-domain asymptotics. With this approach, the tapered spatial periodogram can be used to determine uniquely the spectral density of the stationary process, avoiding the aliasing problem.
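For reference, the quantity the first part of the thesis adjusts is the ordinary empirical semivariogram computed from regression residuals. The sketch below computes that raw, unadjusted estimate (no bias reduction, isotonization, or positive-definiteness correction); the synthetic residuals and bin choices are purely illustrative.

```python
import numpy as np

def empirical_semivariogram(coords, resid, bins):
    """Raw empirical semivariogram of regression residuals:
    gamma(h) = (1 / (2 N(h))) * sum of (r_i - r_j)^2 over pairs whose
    distance falls in lag bin h.  `coords` is (n, d), `resid` is (n,).
    """
    n = len(resid)
    i, j = np.triu_indices(n, k=1)
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = (resid[i] - resid[j]) ** 2
    gamma = np.empty(len(bins) - 1)
    for k in range(len(bins) - 1):
        in_bin = (dists >= bins[k]) & (dists < bins[k + 1])
        gamma[k] = 0.5 * sqdiff[in_bin].mean() if in_bin.any() else np.nan
    return gamma

# Hypothetical use: residuals from a spatial regression fit
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
resid = rng.normal(size=200)          # stand-in for regression residuals
bins = np.linspace(0, 5, 11)
print(empirical_semivariogram(coords, resid, bins))
```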
137

Parametric Modeling in the Presence of Measurement Error: Monte Carlo Corrected Scores

Novick, Steven Jon 23 October 2000 (has links)
Parametric estimation is complicated when data are measured with error. The problem of regression modeling when one or more covariates are measured with error is considered in this paper. It is often the case that, evaluated at the observed error-prone data, the unbiased true-data estimating equations yield an inconsistent estimator. The proposed method is a variant of Nakamura's (1990) method of corrected scores and is closely related to the simulation-based algorithm introduced by Cook and Stefanski (1994). The corrected-score method depends critically on finding a function of the observed data having the property that its conditional expectation given the true data equals a true-data, unbiased score function. Nakamura (1990) gives corrected score functions for special cases, but offers no general solution. It is shown that for a certain class of smooth true-data score functions, a corrected score can be determined by Monte Carlo methods, if not analytically. The relationship between the corrected-score method and Cook and Stefanski's (1994) simulation method is studied in detail. The properties of the Monte Carlo generated corrected score functions, and of the estimators they define, are also given considerable attention. Special cases are examined in detail, comparing the proposed method with established methods.
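A small numerical illustration of the complex-variable device underlying Monte Carlo corrected scores, for the toy target g(x) = x^2 with additive normal measurement error of known standard deviation: the real part of g evaluated at W + i*sigma*Z has conditional expectation g(X) given the true covariate, whereas the naive g(W) is biased by sigma^2. This is a sketch of the general idea only, not the estimating-equation machinery of the thesis; the numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.8                        # assumed known measurement-error sd
x = 2.0                            # true (unobserved) covariate value
n = 200_000

u = rng.normal(size=n)             # measurement errors
z = rng.normal(size=n)             # extra Monte Carlo normals
w = x + sigma * u                  # observed, error-prone measurements

# Naive use of w^2 is biased upward for x^2 by sigma^2 ...
print(np.mean(w ** 2))             # approx. x^2 + sigma^2 = 4.64

# ... while the real part of (w + i*sigma*z)^2 = w^2 - sigma^2 * z^2 is a
# corrected version of g(x) = x^2: its conditional expectation given x is x^2.
corrected = np.real((w + 1j * sigma * z) ** 2)
print(np.mean(corrected))          # approx. x^2 = 4.00
```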
138

Data Reduction and Model Selection with Wavelet Transforms

Martell, Leah A. 07 November 2000 (has links)
Martell, Leah. Data Reduction and Model Selection with Wavelet Transforms. (Under the direction of Dr. J. C. Lu.) With modern technology, massive quantities of data are being collected continuously. The purpose of our research has been to develop a method for data reduction and model selection applicable to large data sets and replicated data. We propose a novel wavelet shrinkage method by introducing a new model selection criterion. The proposed shrinkage rule has at least two advantages over current shrinkage methods. First, it is adaptive to the smoothness of the signal regardless of whether the signal has a sparse wavelet representation, since we consider both the deterministic and the stochastic cases. The wavelet decomposition not only catches the signal components for a pure signal, but de-noises and extracts these signal components for a signal contaminated by external influences. Second, the proposed method allows for fine "tuning" based on the particular data at hand. Our simulation study shows that the methods based on the model selection criterion have better mean square error (MSE) than the methods currently known.

Two aspects make wavelet analysis the analytical tool of choice. First, the largest-in-magnitude wavelet coefficients in the discrete wavelet transform (DWT) of the data extract the relevant information, while discarding the rest eliminates the noise component. Second, the DWT allows for a fast algorithm with computational complexity O(n). For the deterministic case we derive a bound on the approximation error of the nonlinear wavelet estimate determined by the largest-in-magnitude discrete wavelet coefficients. Upper bounds for the approximation error and for the rate of increase of the number of wavelet coefficients in the model are obtained for the new wavelet shrinkage estimate. When the signal comes from a stochastic process, a bound for the MSE is found, as well as for the bias of its estimate. A corrected version of the model selection criterion is introduced and some of its properties are studied. The new wavelet shrinkage is employed in the case of replicated data. An algorithm for model selection is proposed, based on which a manufacturing process can be automatically supervised for quality and efficiency. We apply it to two real-life examples.
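For orientation, the sketch below shows the classical DWT shrinkage pipeline (decompose, threshold the detail coefficients, reconstruct) using the standard universal threshold rather than the model-selection criterion proposed in the thesis. It assumes the PyWavelets package (`pywt`) is available; the signal and wavelet choices are illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db4", level=4):
    """De-noise a signal by soft-thresholding its DWT detail coefficients.

    Uses the universal (VisuShrink) threshold sigma * sqrt(2 * log n),
    with sigma estimated from the finest-scale coefficients via the MAD.
    """
    coeffs = pywt.wavedec(y, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)

# Example: noisy piecewise-smooth signal of length 1024
t = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * t) * (t > 0.5)
noisy = signal + 0.2 * np.random.default_rng(2).normal(size=t.size)
denoised = wavelet_denoise(noisy)
```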
139

Statistical Inference for Gap Data

Yang, Liqiang 13 November 2000 (has links)
This thesis research is motivated by a special type of missing data, gap data, which was first encountered in a cardiology study conducted at Duke Medical School. This type of data includes multiple observations of a certain event time (in this medical study the event is the reopening of a certain artery), some of which may have one or more missing periods, called "gaps", before the "first" event is observed. Therefore, for those observations, the observed first event may not be the true first event, because the true first event might have happened in one of the missing gaps. Due to this missing information, estimating the survival function of the true first event becomes very difficult. No research or discussion of this type of data has appeared to date. In this thesis, the author introduces a new nonparametric estimating method to solve this problem, currently called the Imputed Empirical Estimating (IEE) method. According to the simulation studies, the IEE method provides a very good estimate of the survival function of the true first event; it significantly outperforms all the existing estimating approaches in our simulation studies.

Besides the new IEE method, this thesis also explores the maximum likelihood estimate in the gap data case. Gap data are introduced as a special type of interval-censored data for the first time. The dependence between the censoring interval (in the gap data case, the observed first event time point) and the event (in the gap data case, the true first event) makes gap data different from the well-studied regular interval-censored data. This thesis points out the only difference between gap data and regular interval-censored data, and provides an MLE for gap data under certain assumptions.

The third estimating method discussed in this thesis is the Weighted Estimating Equation (WEE) method. The WEE estimate is a very popular nonparametric approach currently used in many survival analysis studies. In this thesis the consistency and asymptotic properties of the WEE estimate applied to gap data are discussed. Finally, in the gap data case, the WEE estimate is shown to be equivalent to the Kaplan-Meier estimate. Numerical examples are provided in this thesis to illustrate the algorithms of the IEE and MLE approaches. The author also provides an IEE estimate of the survival function based on the real-life data from Duke Medical School. A series of simulation studies is conducted to assess the goodness-of-fit of the new IEE estimate. Plots and tables of the results of the simulation studies are presented in the second chapter of this thesis.
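As a baseline for the equivalence result mentioned above, here is a minimal Kaplan-Meier estimator of the survival function for right-censored event times. It is the standard estimator, not the IEE or WEE procedure of the thesis, and the example times are hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    `times` are observed times; `events` is 1 for an observed event and
    0 for a right-censored observation.
    """
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    surv, out = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                  # subjects still at risk
        d = np.sum((times == t) & (events == 1))      # events at time t
        surv *= 1.0 - d / at_risk
        out.append((t, surv))
    return out

# Hypothetical reopening times (months); 0 marks a censored observation
times  = [3, 5, 5, 8, 12, 12, 15, 20]
events = [1, 1, 0, 1,  1,  0,  1,  0]
for t, s in kaplan_meier(times, events):
    print(f"S({t}) = {s:.3f}")
```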
140

Sample Size Determination and Stationarity Testing in the Presence of Trend Breaks

Huh, Seungho 23 February 2001 (has links)
Traditionally it is believed that most macroeconomic time series represent stationary fluctuations around a deterministic trend. However, simple applications of the Dickey-Fuller test have, in many cases, been unable to show that major macroeconomic variables have a stationary univariate time series structure. One possible reason for the non-rejection of unit roots is that the simple mean or linear trend function used by the tests is not sufficient to describe the deterministic part of the series. To address this possibility, unit root tests in the presence of trend breaks have been studied by several researchers. In our work, we deal with some issues associated with unit root testing in time series with a trend break. The performance of various unit root test statistics is compared with respect to the break-induced size distortion problem. We examine the effectiveness of tests based on symmetric estimators as compared to those based on the least squares estimator. In particular, we show that tests based on the weighted symmetric estimator not only eliminate the spurious rejection problem but also have reasonably good power properties when modified to allow for a break. We suggest alternative test statistics for testing the unit root null hypothesis in the presence of a trend break. Our new test procedure, which we call the "bisection" method, is based on the idea of subgrouping. It is simpler than other methods, since the necessity of searching for the break is avoided.

Using stream flow data from the US Geological Survey, we perform a temporal analysis of some hydrologic variables. We first show that the time series for the target variables are stationary, then focus on finding the sample size necessary to detect a mean change if one occurs. Three different approaches are used to solve this problem: OLS, GLS, and a frequency domain method. A cluster analysis of stations is also performed using these sample sizes as data. We investigate whether available geographic variables can be used to predict cluster membership.
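For context, the sketch below runs the standard augmented Dickey-Fuller test (as implemented in statsmodels) on a simulated series that is stationary around a trend but contains a one-time level shift, the setting whose size and power problems the thesis studies. It uses the conventional test only, not the weighted-symmetric or bisection procedures proposed above; the simulated data are hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)

# Hypothetical series: stationary AR(1) fluctuations around a linear
# trend with a one-time level shift (trend break) at t = 100.
n, phi = 200, 0.6
e = rng.normal(scale=0.5, size=n)
ar = np.zeros(n)
for i in range(1, n):
    ar[i] = phi * ar[i - 1] + e[i]
t = np.arange(n)
y = 0.02 * t + 1.5 * (t >= 100) + ar

# Standard augmented Dickey-Fuller test with constant and linear trend;
# the break itself is not modeled in this specification.
stat, pvalue, *rest = adfuller(y, regression="ct")
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```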
