21

Design of Adaptive Derivative Estimator Using Sliding Mode Technique

Wu, Peir-Cherng 01 September 2003 (has links)
This thesis is concerned with the design of an nth-order adaptive integral variable structure derivative estimator (AIVSDE). The proposed scheme is a modified and extended version of the existing AIVSDE and can be used as a direct nth-order differentiator for a smooth signal that has n continuous and bounded derivatives. An adaptive algorithm adjusts the switching gain so that no a priori knowledge of the upper bound on the derivative of the input signal is required. The stability of the redesigned first-order, second-order, and nth-order derivative estimation is guaranteed by the proposed scheme. An example demonstrates the applicability of the proposed AIVSDE.
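As a rough illustration of the underlying idea only (not the AIVSDE designed in the thesis), the sketch below implements a generic first-order sliding-mode derivative estimator whose switching gain adapts online, so no bound on the signal's derivative needs to be supplied in advance. The function name and the tuning parameters gamma, k0, and tau are assumptions for illustration.

```python
import numpy as np

def adaptive_smc_derivative(y, dt, gamma=5.0, k0=1.0, tau=0.02):
    """First-order sliding-mode derivative estimator with an adaptive switching gain
    (illustrative sketch, not the AIVSDE from the thesis).

    y     : samples of the signal to differentiate
    dt    : sampling interval
    gamma : adaptation rate of the switching gain (assumed tuning parameter)
    k0    : initial switching gain
    tau   : time constant of the low-pass filter extracting the equivalent control
    """
    x_hat, k, d_filt = y[0], k0, 0.0
    deriv = np.zeros(len(y))
    alpha = dt / (tau + dt)                      # first-order low-pass coefficient
    for i, yi in enumerate(y):
        e = yi - x_hat                           # sliding variable (tracking error)
        k += gamma * abs(e) * dt                 # adapt the gain: no derivative bound needed a priori
        v = k * np.sign(e)                       # switching control
        x_hat += v * dt                          # the observer integrates v to track the signal
        d_filt += alpha * (v - d_filt)           # filtered switching signal approximates dy/dt
        deriv[i] = d_filt
    return deriv

# usage: differentiate a noisy sine; the estimate should approximate cos(t)
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1e-3)
y = np.sin(t) + 1e-4 * rng.normal(size=t.size)
dy = adaptive_smc_derivative(y, dt=1e-3)
```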
22

Evaluating the Quality Payment Program in Taiwan for Treating Tuberculosis

Hsieh, Yu-Ting 22 July 2007 (has links)
none
23

Kalibraciniai įvertiniai baigtinių populiacijų statistikoje / Calibrated estimators in finite population statistics

Pumputis, Dalius 10 June 2004 (has links)
Calibrated estimators of the population total and of the ratio of two totals are presented and analysed in the thesis. Calibrated estimators are estimators in which auxiliary information is used in order to obtain more accurate estimates of the parameters. Different distance measures were used to construct calibrated estimators by the Lagrange multiplier method. It is known that in some cases calibrated estimators coincide with ratio estimators, which are more accurate when the study variable is well correlated with the known auxiliary variable. Three estimators of totals and three estimators of a ratio are presented in the work, together with their approximate variances and variance estimators. The approximate variances were derived using the Taylor linearization technique. An experimental comparison of the considered estimators is presented for correlations of 0.8, 0.6, 0.4, and 0.2 between the study variable and the known auxiliary variable. The computer program for the calculations was written in MATLAB, the Language of Technical Computing.
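For context, a minimal sketch of one calibration construction is given below: the chi-square distance with a single auxiliary variable, whose Lagrange-multiplier solution has a closed form. The thesis considers several distance measures; the function name and the toy data here are illustrative assumptions.

```python
import numpy as np

def calibrated_total(y, x, d, tx):
    """Calibrated estimator of a population total (chi-square distance, one auxiliary variable).

    y  : study variable on the sample
    x  : auxiliary variable on the sample (population total tx known)
    d  : design weights (inverse inclusion probabilities)
    tx : known population total of x
    """
    # Lagrange-multiplier solution for the chi-square distance:
    # w_k = d_k * (1 + lam * x_k), with lam chosen so that sum(w * x) == tx
    lam = (tx - np.sum(d * x)) / np.sum(d * x**2)
    w = d * (1.0 + lam * x)
    return np.sum(w * y)

# usage with toy data
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=50)
y = 2.0 * x + rng.normal(0, 1, size=50)   # study variable well correlated with x
d = np.full(50, 20.0)                     # e.g. SRS of n = 50 from N = 1000
print(calibrated_total(y, x, d, tx=1000 * 5.5))
```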
24

A Monte Carlo Study of Single Imputation in Survey Sampling

Xu, Nuo January 2013 (has links)
Missing values in sample surveys can lead to biased estimation if not treated. Imputation has been proposed as a popular way to deal with missing values. In this paper, based on Särndal's (1994, 2005) research, a Monte Carlo simulation is conducted to study how the estimators perform in different situations and how different imputation methods work for different response distributions.
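A minimal sketch in the same spirit (not Särndal's exact setup) is shown below: it compares mean imputation with ratio imputation for estimating a mean under responses missing completely at random; the model, sample size, and response rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_once(n=200, miss_rate=0.3):
    x = rng.uniform(1, 10, n)
    y = 2.0 * x + rng.normal(0, 1, n)          # full data; its mean is the benchmark
    respond = rng.random(n) > miss_rate        # response indicator (MCAR)
    # mean imputation: fill missing y with the respondent mean
    y_mean = y.copy()
    y_mean[~respond] = y[respond].mean()
    # ratio imputation: fill missing y with (respondent ratio) * x
    ratio = y[respond].sum() / x[respond].sum()
    y_ratio = y.copy()
    y_ratio[~respond] = ratio * x[~respond]
    return y.mean(), y_mean.mean(), y_ratio.mean()

results = np.array([simulate_once() for _ in range(2000)])
full_mean, mean_imp, ratio_imp = results.mean(axis=0)
print("full-data mean  :", full_mean)
print("mean imputation :", mean_imp)
print("ratio imputation:", ratio_imp)
```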
25

WALD TYPE TESTS WITH THE WRONG DISPERSION MATRIX

Rajapaksha, Kosman Watte Gedara Dimuthu Hansana 01 September 2021 (has links)
A Wald-type test with the wrong dispersion matrix is used when the dispersion matrix is not a consistent estimator of the asymptotic covariance matrix of the test statistic. One class of such tests occurs when there are k groups and the population covariance matrices of the k groups are assumed to be equal, but the common covariance matrix assumption does not hold. The pooled t test, the one-way ANOVA F test, and the one-way MANOVA F test are examples of this class. Two bootstrap confidence regions are modified to obtain large-sample Wald-type tests with the wrong dispersion matrix.
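The pooled two-sample t test is the simplest member of this class. The sketch below, which illustrates the general idea rather than the confidence-region constructions studied in the dissertation, computes the pooled-variance statistic under unequal group variances and contrasts the nominal t reference with a bootstrap calibration of the statistic's null distribution.

```python
import numpy as np
from scipy import stats

def pooled_t_wald(x1, x2):
    """Two-sample pooled t statistic: a Wald-type statistic whose dispersion
    (the pooled variance) is 'wrong' when the two group variances differ."""
    n1, n2 = len(x1), len(x2)
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    return (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

rng = np.random.default_rng(2)
x1 = rng.normal(0, 1, 30)
x2 = rng.normal(0, 4, 60)   # unequal variances: the pooled dispersion is misspecified

t_obs = pooled_t_wald(x1, x2)
# bootstrap the null distribution of the statistic from mean-centered groups
centered1, centered2 = x1 - x1.mean(), x2 - x2.mean()
boot = np.array([pooled_t_wald(rng.choice(centered1, 30, replace=True),
                               rng.choice(centered2, 60, replace=True))
                 for _ in range(2000)])
pval_boot = np.mean(np.abs(boot) >= abs(t_obs))
print("p-value, t reference (df = 88):", 2 * stats.t.sf(abs(t_obs), 88))
print("p-value, bootstrap calibration:", pval_boot)
```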
26

An Alternative Estimate of Preferred Direction for Circular Data

Otieno, Bennett Sango 30 July 2002 (has links)
Circular or angular data occur in many fields of applied statistics. A common problem of interest with circular data is estimating a preferred direction and its corresponding distribution. The problem is complicated by the so-called wrap-around effect, which exists because there is no minimum or maximum on the circle. The usual statistics employed for linear data are inappropriate for directional data, as they do not account for its circular nature. Common choices for summarizing the preferred direction are the sample circular mean and the sample circular median. A circular analog of the Hodges-Lehmann estimator is proposed as an alternative estimate of preferred direction. The new measure of preferred direction is a robust compromise between the circular mean and the circular median. Theoretical results show that the new measure is asymptotically more efficient than the circular median and that its asymptotic efficiency relative to the circular mean is quite comparable. Descriptions of how to use the methods for constructing confidence intervals and testing hypotheses are provided. Simulation results demonstrate the relative strengths and weaknesses of the new approach for a variety of distributions. / Ph. D.
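One plausible construction of a Hodges-Lehmann-type direction estimate, which may differ in detail from the estimator proposed in the dissertation, takes the mean direction of all pairwise circular averages (Walsh-type pairs). A minimal sketch:

```python
import numpy as np
from itertools import combinations

def circular_mean(theta):
    """Mean direction: the angle of the resultant vector of the unit vectors."""
    return np.angle(np.mean(np.exp(1j * np.asarray(theta))))

def circular_hl(theta):
    """Hodges-Lehmann-style direction estimate (illustrative sketch):
    mean direction of all pairwise circular averages."""
    theta = np.asarray(theta)
    pair_means = [circular_mean([a, b]) for a, b in combinations(theta, 2)]
    return circular_mean(pair_means)

# usage: unimodal sample around 0.5 rad contaminated by one outlying direction
sample = np.array([0.4, 0.5, 0.6, 0.45, 0.55, 3.0])
print("circular mean:", circular_mean(sample))
print("circular HL  :", circular_hl(sample))
```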
27

Enhancing and Reconstructing Digitized Handwriting

Swain, David James 15 August 1997 (has links)
This thesis involves the restoration, reconstruction, and enhancement of a digitized library of handwritten documents. Imaging systems that perform this digitization often degrade the quality of the original documents. Many techniques exist for reconstructing, restoring, and enhancing digital images; however, many require a priori knowledge of the imaging system. In this study, only partial a priori knowledge is available, and therefore unknown parameters must be estimated before restoration, reconstruction, or enhancement is possible. The imaging system used to digitize the document library has degraded the images in several ways. First, it has introduced a ringing that is apparent around each stroke. Second, it has eliminated strokes of narrow width. To restore these images, the imaging system is modeled by estimating the point spread function from sample impulse responses, and the image noise is estimated in an attempt to apply standard linear restoration techniques. The applicability of these techniques is investigated in the first part of this thesis. Then nonlinear filters, structural techniques, and enhancement techniques are applied to obtain substantial improvements in image quality. / Master of Science
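The linear restoration step, estimating the point spread function and inverting the blur, can be sketched with a frequency-domain Wiener filter. The PSF, the constant noise-to-signal ratio, and the synthetic stroke image below are assumptions for illustration, not the thesis's estimated system model.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=0.01):
    """Frequency-domain Wiener deconvolution (illustrative sketch).

    blurred         : degraded image (2-D array)
    psf             : estimated point spread function (padded to the image shape)
    noise_to_signal : assumed constant noise-to-signal power ratio
    """
    H = np.fft.fft2(psf, s=blurred.shape)                 # transfer function of the imaging system
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# usage: blur a synthetic "stroke" image with a small Gaussian PSF, then restore it
img = np.zeros((64, 64))
img[30:34, 10:54] = 1.0
xx, yy = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
psf = np.exp(-(xx**2 + yy**2) / 2.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)
```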
28

Sequential Procedures for Nonparametric Kernel Regression

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali, Sandamali.dharmasena@rmit.edu.au January 2008 (has links)
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in data without any assumptions about the shape of the underlying regression function. In regression, random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular. Kernel-type estimators provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be adapted for any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel-type regression estimators called local polynomial kernel estimators. A closely related problem is the determination of an appropriate sample size required to achieve a desired level of accuracy for the nonparametric regression estimators. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, the application of sequential procedures to a nonparametric regression model at a given point or series of points is considered. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; thus, it is reasonable to invoke a sequential procedure of estimation that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable. These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators of nonparametric kernel regression with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely the two-stage procedure, a modified two-stage procedure, and a purely sequential procedure. The proposed methodology is first tested by employing a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing its coverage accuracy with the corresponding preset confidence coefficient, by examining how closely the computed sample sizes match the optimal sample sizes, and by contrasting the estimated values obtained from the two nonparametric methods with actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, which is considered an alternative method of estimating properties of unknown distributions. Resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are utilized to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the extent of oversampling that is normally known to plague Stein's two-stage sequential procedure.
The procedure developed is validated using an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, applications of our proposed sequential nonparametric kernel regression methods are made to some problems in software reliability and finance.
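A compact sketch of the purely sequential idea for the Nadaraya-Watson estimator at a single point is given below: sampling continues one observation at a time until the estimated half-width of the asymptotic confidence interval falls below a preset value. The bandwidth rule, the local variance estimate, and the simulated model are simplified assumptions rather than the procedures developed in the thesis.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of m(x0) = E[Y | X = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def sequential_fixed_width(x0, draw_pair, half_width=0.03, z=1.96, n0=30):
    """Purely sequential rule (sketch): draw one (X, Y) pair at a time until the
    estimated half-width of the asymptotic CI for m(x0) is at most `half_width`."""
    data = [draw_pair() for _ in range(n0)]
    while True:
        x, y = map(np.asarray, zip(*data))
        n = len(x)
        h = np.std(x) * n ** (-0.2)                        # rule-of-thumb bandwidth
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        fitted = np.array([nadaraya_watson(t, x, y, h) for t in x])
        sigma2 = np.sum(w * (y - fitted) ** 2) / np.sum(w) # local residual variance at x0
        f_hat = np.mean(w) / (h * np.sqrt(2 * np.pi))      # kernel density estimate at x0
        R_K = 1.0 / (2.0 * np.sqrt(np.pi))                 # roughness of the Gaussian kernel
        hw = z * np.sqrt(sigma2 * R_K / (n * h * f_hat))   # estimated CI half-width
        if hw <= half_width:
            return n, nadaraya_watson(x0, x, y, h), hw
        data.append(draw_pair())

# usage on a random-design model Y = X + 0.5*sin(pi*X) + noise, estimating m(0.5)
rng = np.random.default_rng(3)
def draw_pair():
    xi = rng.uniform(0.0, 1.0)
    return xi, xi + 0.5 * np.sin(np.pi * xi) + 0.1 * rng.normal()

print(sequential_fixed_width(0.5, draw_pair))
```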
29

Estimation in partly parametric additive Cox models

Läuter, Henning January 2003 (has links)
The dependence between survival times and covariates is described, for example, by proportional hazard models. We consider partly parametric Cox models and discuss here the estimation of the interesting parameters. We present the maximum likelihood approach and extend the results of Huang (1999) from linear to nonlinear parameters. Then we investigate the least squares estimation and formulate conditions for the a.s. boundedness and consistency of these estimators.
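For reference, the standard Cox model with a linear predictor can be fitted by maximizing the partial likelihood numerically, as in the sketch below; the partly parametric additive structure and the least squares estimation discussed in the paper are not reproduced, and the simulated data and the true coefficient value are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_partial_loglik(beta, t, d, X):
    """Negative Cox partial log-likelihood (continuous times, no tie correction).

    t : survival or censoring times, d : event indicators (1 = event), X : covariates."""
    beta = np.atleast_1d(beta)
    eta = X @ beta
    order = np.argsort(t)                         # risk set of subject i: all j with t_j >= t_i
    t, d, eta = t[order], d[order], eta[order]
    # sum of exp(eta) over each risk set via a reverse cumulative sum
    risk_sums = np.cumsum(np.exp(eta)[::-1])[::-1]
    return -np.sum(d * (eta - np.log(risk_sums)))

# usage on simulated proportional-hazards data with one covariate, true beta = 0.7
rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 1))
t_event = -np.log(rng.uniform(size=n)) / np.exp(0.7 * X[:, 0])  # exponential baseline hazard
c = rng.exponential(2.0, size=n)                                # independent censoring
t = np.minimum(t_event, c)
d = (t_event <= c).astype(float)
fit = minimize(neg_partial_loglik, x0=np.zeros(1), args=(t, d, X))
print("estimated beta:", fit.x)
```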
30

A study on the parameter estimation based on rounded data

Li, Gen-liang 21 January 2011 (has links)
Most recorded data are rounded to the nearest decimal place due to the precision of the recording mechanism. This rounding entails errors in estimation and measurement. In this paper, we compare the performance of three types of estimators based on rounded data from time series models, namely the A-K corrected estimator, the approximate MLE, and the SOS estimator. In order to perform the comparison, the A-K corrected estimators for the MA(1) model are derived theoretically. To improve the efficiency of the estimation, two types of variance-reduction estimators are further proposed, based on linear combinations of the aforementioned three estimators. Simulation results show that the proposed variance-reduction estimators significantly improve estimation efficiency.
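The A-K corrected, approximate MLE, and SOS estimators compared in the paper are not reproduced here. The sketch below, assuming statsmodels is available, only demonstrates the underlying problem: the bias that rounding induces in the ordinary Gaussian MLE of an MA(1) coefficient.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

def ma1_mle(y):
    """Gaussian MLE of the MA(1) coefficient via statsmodels (no trend term)."""
    res = ARIMA(pd.Series(y), order=(0, 0, 1), trend="n").fit()
    return res.params["ma.L1"]

def compare_rounding(theta=0.5, n=400, reps=50, decimals=0):
    """Average MA(1) estimate on exact data versus data rounded to `decimals` places."""
    exact, rounded = [], []
    for _ in range(reps):
        e = rng.normal(size=n + 1)
        y = e[1:] + theta * e[:-1]                # MA(1): y_t = e_t + theta * e_{t-1}
        exact.append(ma1_mle(y))
        rounded.append(ma1_mle(np.round(y, decimals)))
    return np.mean(exact), np.mean(rounded)

print("mean estimate (exact data, rounded data):", compare_rounding())
```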
