101 |
Nonparametric item response modeling for identifying differential item functioning in the moderate-to-small-scale testing context / Witarsa, Petronilla Murlita, 11 1900 (has links)
Differential item functioning (DIF) can occur across age, gender, ethnic, and/or linguistic groups of examinee populations. Therefore, whenever more than one group of examinees is involved in a test, a possibility of DIF exists. It is important to detect items with DIF using accurate and powerful statistical methods. While finding a proper DIF method is essential, until now most of the available methods have been dominated by applications to large-scale testing contexts. Since the early 1990s, Ramsay has developed a nonparametric item response methodology and computer software, TestGraf (Ramsay, 2000). The nonparametric item response theory (IRT) method requires fewer examinees and items than other item response theory methods and was also designed to detect DIF. However, nonparametric IRT's Type I error rate for DIF detection had not been investigated.
The present study investigated the Type I error rate of the nonparametric IRT DIF detection method when applied to a moderate-to-small-scale testing context wherein there were 500 or fewer examinees in a group. In addition, the Mantel-Haenszel (MH) DIF detection method was included.
A three-parameter logistic item response model was used to generate data for the two population groups. Each population corresponded to a test of 40 items. Item statistics for the first 34 non-DIF items were randomly chosen from the mathematics test of the 1999 TIMSS (Third International Mathematics and Science Study) for grade eight, whereas item statistics for the last six studied items were adopted from the DIF items used in the study of Muniz, Hambleton, and Xing (2001). These six items were the focus of this study. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
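The design sketched above (item responses generated from a three-parameter logistic model for two groups, with the Mantel-Haenszel procedure applied as a comparison) can be illustrated in outline. The snippet below is a minimal sketch, not the study's actual simulation: the item parameters, group sizes, 1.7 scaling constant, and rest-score stratification rule are illustrative assumptions rather than values taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_3pl(theta, a, b, c):
    # Three-parameter logistic item response function (1.7 scaling constant assumed).
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

def simulate(n_examinees, a, b, c):
    theta = rng.normal(0.0, 1.0, size=n_examinees)    # examinee abilities
    p = p_3pl(theta[:, None], a, b, c)                # examinee-by-item probabilities
    return (rng.random(p.shape) < p).astype(int)      # dichotomous responses

# Illustrative item parameters for a short 10-item test (not the TIMSS-based values).
a = rng.uniform(0.8, 1.6, 10)
b = rng.uniform(-1.5, 1.5, 10)
c = np.full(10, 0.2)
ref = simulate(500, a, b, c)    # reference group
foc = simulate(500, a, b, c)    # focal group with the same parameters, i.e. no DIF

def mantel_haenszel_chi2(ref, foc, item):
    # Stratify examinees on the rest score (total score excluding the studied item).
    strata_r = ref.sum(axis=1) - ref[:, item]
    strata_f = foc.sum(axis=1) - foc[:, item]
    num = den = 0.0
    for s in np.union1d(strata_r, strata_f):
        r = ref[strata_r == s, item]
        f = foc[strata_f == s, item]
        n = r.size + f.size
        if r.size == 0 or f.size == 0 or n < 2:
            continue
        t = r.sum() + f.sum()                          # correct responses in the stratum
        num += r.sum() - r.size * t / n                # observed minus expected, reference group
        den += r.size * f.size * t * (n - t) / (n * n * (n - 1.0))
    return (abs(num) - 0.5) ** 2 / den                 # continuity-corrected MH chi-square

print(mantel_haenszel_chi2(ref, foc, item=9))          # compare against the chi-square(1) critical value
```

Repeating such a run many times and recording how often the statistic exceeds the critical value is one way to estimate a Type I error rate for a DIF procedure under a no-DIF data-generating model.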
|
102 |
Nonparametric geostatistical estimation of soil physical properties / Ghassemi, Ali, January 1987 (has links)
No description available.
|
103 |
A Nonparametric Test for the Non-Decreasing Alternative in an Incomplete Block Design / Ndungu, Alfred Mungai, January 2011 (has links)
The purpose of this paper is to present a new nonparametric test statistic for testing against ordered alternatives in a Balanced Incomplete Block Design (BIBD). This test is then compared with the Durbin test, which tests for differences between treatments in a BIBD but without regard to order. For the comparison, Monte Carlo simulations were used to generate the BIBD data. Random samples were simulated from the normal distribution, the exponential distribution, and the t distribution with three degrees of freedom. The number of treatments considered was three, four and five, with all the possible combinations necessary for a BIBD. Small sample sizes were 20 or less and large sample sizes were 30 or more. The powers and alpha values were then estimated after 10,000 repetitions. The results of the study show that the new test proposed is more powerful than the Durbin test. Regardless of the distribution, sample size or number of treatments, the new test tended to have higher powers than the Durbin test.
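A rough sketch of the Monte Carlo comparison described above is given below for the Durbin comparator only; the new ordered-alternative test proposed in the paper is not reproduced. The particular BIBD (four treatments in blocks of three), the normal error model, and the treatment shifts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import rankdata, chi2

rng = np.random.default_rng(1)

# A balanced incomplete block design: t = 4 treatments, b = 4 blocks,
# k = 3 treatments per block, r = 3 replications per treatment.
blocks = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
t, k, r = 4, 3, 3

def durbin_statistic(data):
    # data[block] holds the observations for the treatments appearing in that block.
    rank_sums = np.zeros(t)
    for trts, obs in zip(blocks, data):
        ranks = rankdata(obs)                 # within-block ranks
        for trt, rk in zip(trts, ranks):
            rank_sums[trt] += rk
    centered = rank_sums - r * (k + 1) / 2.0
    return 12.0 * (t - 1) / (r * t * (k - 1) * (k + 1)) * np.sum(centered ** 2)

def estimate_power(shifts, reps=10_000, alpha=0.05):
    # Monte Carlo rejection rate of the Durbin test (chi-square approximation)
    # when treatment effects follow the given shifts with standard normal errors.
    crit = chi2.ppf(1 - alpha, df=t - 1)
    hits = 0
    for _ in range(reps):
        data = [rng.normal([shifts[j] for j in trts], 1.0) for trts in blocks]
        hits += durbin_statistic(data) > crit
    return hits / reps

print(estimate_power([0.0, 0.0, 0.0, 0.0]))   # size under the null, roughly alpha
print(estimate_power([0.0, 0.5, 1.0, 1.5]))   # power under non-decreasing treatment effects
```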
|
104 |
Estimation For The Cox Model With Various Types Of Censored Data / Riddlesworth, Tonya, 01 January 2011 (has links)
In survival analysis, the Cox model is one of the most widely used tools. However, up to now there has not been any published work on the Cox model with complicated types of censored data, such as doubly censored data, partly interval-censored data, etc., while these types of censored data have been encountered in important medical studies, such as cancer, heart disease, diabetes, etc. In this dissertation, we first derive the bivariate nonparametric maximum likelihood estimator (BNPMLE) Fₙ(t,z) for the joint distribution function F₀(t,z) of survival time T and covariate Z, where T is subject to right censoring, noting that such a BNPMLE Fₙ has not been studied in the statistical literature. Then, based on this BNPMLE Fₙ, we derive empirical likelihood-based (Owen, 1988) confidence intervals for the conditional survival probabilities, which is an important and difficult problem in statistical analysis that also has not been studied in the literature. Finally, with this BNPMLE Fₙ as a starting point, we extend the weighted empirical likelihood method (Ren, 2001 and 2008a) to the multivariate case, and obtain a weighted empirical likelihood-based estimation method for the Cox model. This estimation method is given in a unified form, and is applicable to the various types of censored data mentioned above.
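For orientation only, the sketch below fits the standard Cox model to right-censored data by maximising the usual partial likelihood; it does not implement the dissertation's bivariate NPMLE or weighted empirical likelihood estimators. The simulated data, single covariate, and censoring rate are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy right-censored data from a Cox model with one covariate Z and an assumed
# true coefficient beta = 0.7 (baseline hazard taken as 1 for simplicity).
n, beta_true = 200, 0.7
z = rng.normal(size=n)
t_event = rng.exponential(1.0 / np.exp(beta_true * z))   # event times
t_cens = rng.exponential(1.5, size=n)                    # censoring times
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)                # 1 = observed, 0 = censored

def neg_log_partial_likelihood(beta, time, delta, z):
    # Cox partial likelihood for right-censored data (no tied event times assumed):
    # sum over events of [beta*z_i - log(sum over the risk set of exp(beta*z_j))].
    order = np.argsort(time)
    time, delta, z = time[order], delta[order], z[order]
    eta = beta[0] * z
    # With times sorted ascending, risk-set sums are reverse cumulative sums of exp(eta).
    risk = np.cumsum(np.exp(eta)[::-1])[::-1]
    return -np.sum(delta * (eta - np.log(risk)))

fit = minimize(neg_log_partial_likelihood, x0=np.zeros(1),
               args=(time, delta, z), method="BFGS")
print(fit.x)   # should land reasonably near 0.7 for a sample of this size
```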
|
105 |
Generalized Laguerre Series for Empirical Bayes Estimation: Calculations and Proofs / Connell, Matthew Aaron, 18 May 2021 (has links)
No description available.
|
106 |
The small-sample power of some nonparametric tests / Gibbons, Jean Dickinson, January 1962 (has links)
I. Small-Sample Power of the One-Sample Sign Test for Approximately Normal Distributions. The power function of the one-sided, one-sample sign test is studied for populations which deviate from exact normality, either by skewness, kurtosis, or both. The terms of the Edgeworth asymptotic expansion of order more than N^(-3/2) are used to represent the population density. Three sets of hypotheses and alternatives, concerning the location of (1) the median, (2) the median as approximated by the mean and coefficient of skewness, and (3) the mean, are considered in an attempt to make valid comparisons between the power of the sign test and Student's t test under the same conditions. Numerical results are given for samples of size 10, significance level .05, and for several combinations of the coefficients of skewness and kurtosis.
II. Power of Two-Sample Rank Tests on the Equality of Two Distribution Functions. A comparative study is made of the power of two-sample rank tests of the hypothesis that both samples are drawn from the same population. The general alternative is that the variables from one population are stochastically larger than the variables from the other.
One of the alternatives considered is that the variables in the first sample are distributed as the smallest of k variates with distribution F, and the variables in the second sample are distributed as the largest of these k: H₁ : H = 1 - (1-F)^k, G = F^k. These two alternative distributions are mutually symmetric if F is symmetrical. Formulae are presented, which are independent of F, for the evaluation of the probability under H₁ of any joint arrangement of the variables from the two samples. A theorem is proved concerning the equality of the probabilities of certain pairs of orderings under assumptions of mutually symmetric populations. The other alternative is that both samples are normally distributed with the same variance but different means, the standardized difference between the two extreme distributions in the first alternative corresponding to the difference between the means. Numerical results of power are tabulated for small sample sizes, k = 2, 3 and 4, and significance levels .01, .05 and .10. The rank tests considered are the most powerful rank test, the one- and two-sided Wilcoxon tests, Terry's c₁ test, the one- and two-sided median tests, the Wald-Wolfowitz runs test, and two new tests called the Psi test and the Gamma test.
The two-sample rank test which is locally most powerful against any alternative expressing an arbitrary functional relationship between the two population distribution functions and an unspecified parameter θ is derived and its asymptotic properties are studied. The method is applied to two specific functional alternatives, H₁* : H = (1-θ)F^k + θ[1 - (1-F)^k], G = F^k, and H₁** : H = 1 - (1-F)^(1+θ), G = F^(1+θ), where θ ≥ 0, which are similar to the alternative of two extreme distributions. The resulting test statistics are the Gamma test and the Psi test, respectively. The latter test is shown to have desirable small-sample properties.
The asymptotic power functions of the Wilcoxon and Wald-Wolfowitz tests are compared for the alternative of two extreme distributions with k = 2, equal sample sizes and significance level .05. / Ph. D.
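The "two extreme distributions" alternative above (first sample distributed as the minimum, second as the maximum, of k variates from F) is straightforward to simulate. The sketch below estimates the power of the one-sided Wilcoxon (Mann-Whitney) rank-sum test under this alternative by Monte Carlo, taking F to be the standard normal and using illustrative sample sizes; it does not reproduce the thesis's exact small-sample power calculations, nor the Psi and Gamma tests.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

def draw_extreme_samples(m, n, k):
    # First sample: minima of k standard normal variates (distribution 1 - (1-F)^k).
    # Second sample: maxima of k standard normal variates (distribution F^k).
    x = rng.normal(size=(m, k)).min(axis=1)
    y = rng.normal(size=(n, k)).max(axis=1)
    return x, y

def wilcoxon_power(m, n, k, alpha=0.05, reps=10_000):
    # Monte Carlo power of the one-sided Wilcoxon (Mann-Whitney) rank-sum test
    # against the two-extreme-distributions alternative.
    hits = 0
    for _ in range(reps):
        x, y = draw_extreme_samples(m, n, k)
        _, p = mannwhitneyu(x, y, alternative="less")   # H1: X stochastically smaller than Y
        hits += p < alpha
    return hits / reps

print(wilcoxon_power(m=5, n=5, k=2))   # small-sample power under illustrative settings
```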
|
107 |
An exploration of parametric versus nonparametric statistics in occupational therapy clinical research / Royeen, Charlotte Brasic, January 1986 (has links)
Data sets from research in clinical practice professions often do not meet assumptions necessary for appropriate use of parametric statistics (Lezak and Gray, 1984). When assumptions underlying the use of the parametric tests are violated or cannot be documented, the power of the parametric test may be invalidated and consequently, the significance levels inaccurate (Gibbons, 1976). Much research has investigated the relative merits of parametric versus nonparametric procedures using simulation studies, but little has been done using actual data sets from a particular discipline. This study compared the application of parametric and nonparametric statistics using a body of literature in clinical occupational therapy. The most common parametric procedures in occupational therapy research literature from 1980 - 1984 were identified using methodology adapted from Goodwin and Goodwin (1985). Five small sample size data sets from published occupational therapy research articles typifying the most commonly used univariate parametric procedures were obtained, and subjected to exploratory data analyses (Tukey, 1977) in order to evaluate whether or not assumptions underlying appropriate use of the respective parametric procedures had been met. Subsequently, the nonparametric analogue test was identified and computed.
Results revealed that in three of the five cases (paired t-test, one-factor ANOVA and Pearson correlation coefficient) the assumptions underlying the use of the parametric test were not met. In one case (independent t-test) the assumptions were met with a minor qualification. In only one case (simple linear regression) were the assumptions clearly met. It was also found that in each of the two cases where parametric assumptions were met, no significant differences in p values between the parametric and the nonparametric tests were found. Conversely, in each of the three cases where parametric assumptions were not met, significant differences between the parametric and nonparametric results were found. These findings indicate that, when the cases are considered as a whole, there was one hundred percent agreement between whether or not parametric assumptions were violated and whether or not differences were discovered between the parametric and nonparametric results.
Other findings regarding (a) non-normality, (b) outliers, (c) multiple violation of assumptions for a given procedure, and (d) research designs employed are discussed and implications identified. Suggestions for future research are put forth. / Ph. D.
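The kind of assumption checking and parametric-versus-nonparametric comparison described above can be illustrated with a small paired example. The data, the use of a Shapiro-Wilk test as a stand-in for the exploratory data analysis used in the study, and the specific test pairing (paired t versus Wilcoxon signed-rank) are illustrative assumptions, not the study's data sets.

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

rng = np.random.default_rng(4)

# Illustrative small paired sample (n = 12) with skewed differences, standing in
# for the kind of small clinical data set examined in the study.
pre = rng.normal(50.0, 8.0, size=12)
post = pre + rng.exponential(4.0, size=12)       # skewed improvement scores

diff = post - pre
W, p_norm = shapiro(diff)                        # normality check on the differences
t_stat, p_t = ttest_rel(post, pre)               # parametric: paired t-test
w_stat, p_w = wilcoxon(post, pre)                # nonparametric analogue: signed-rank test

print(f"Shapiro-Wilk p = {p_norm:.3f} (a small p suggests the t-test's normality assumption fails)")
print(f"paired t: p = {p_t:.3f}   Wilcoxon signed-rank: p = {p_w:.3f}")
```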
|
108 |
HATLINK: a link between least squares regression and nonparametric curve estimation / Einsporn, Richard L., January 1987 (has links)
For both least squares and nonparametric kernel regression, prediction at a given regressor location is obtained as a weighted average of the observed responses. For least squares, the weights used in this average are a direct consequence of the form of the parametric model prescribed by the user. If the prescribed model is not exactly correct, then the resulting predictions and subsequent inferences may be misleading. On the other hand, nonparametric curve estimation techniques, such as kernel regression, obtain prediction weights solely on the basis of the distance of the regressor coordinates of an observation to the point of prediction. These methods therefore ignore information that the researcher may have concerning a reasonable approximate model. In overlooking such information, the nonparametric curve fitting methods often fit anomalous patterns in the data.
This paper presents a method for obtaining an improved set of prediction weights by striking the proper balance between the least squares and kernel weighting schemes. The method is called "HATLINK," since the appropriate balance is achieved through a mixture of the hat matrices corresponding to the least squares and kernel fits. The mixing parameter is determined adaptively through cross-validation (PRESS) or by a version of the Cp statistic. Predictions obtained through the HATLINK procedure are shown through simulation studies to be robust to model misspecification by the researcher. It is also demonstrated that the HATLINK procedure can be used to perform many of the usual tasks of regression analysis, such as estimating the error variance, providing confidence intervals, testing for lack of fit of the user's prescribed model, and assisting in the variable selection process. In accomplishing all of these tasks, the HATLINK procedure provides a model-robust alternative to the standard model-based approach to regression. / Ph. D.
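A minimal sketch of the hat-matrix mixture idea follows: the least squares hat matrix of a prescribed linear model is blended with a kernel-smoother hat matrix, and the mixing parameter is chosen by PRESS. The data, Gaussian kernel, fixed bandwidth, and grid search below are illustrative assumptions; the dissertation's exact weighting scheme and its Cp-based selection are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative data: a straight-line working model with a mild unmodeled bend.
n = 60
x = np.sort(rng.uniform(0.0, 10.0, n))
y = 1.0 + 0.5 * x + 0.8 * np.sin(x) + rng.normal(0.0, 0.5, n)

# Hat matrix of the user's prescribed least squares model (here: simple linear).
X = np.column_stack([np.ones(n), x])
H_ls = X @ np.linalg.solve(X.T @ X, X.T)

def kernel_hat(x, bandwidth):
    # Nadaraya-Watson smoother matrix with a Gaussian kernel.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return w / w.sum(axis=1, keepdims=True)

H_k = kernel_hat(x, bandwidth=0.8)       # bandwidth chosen for illustration only

def press(H, y):
    # Leave-one-out prediction error sum of squares for a linear smoother
    # (the e_i / (1 - h_ii) shortcut is exact for least squares and a standard
    # approximation for general linear smoothers).
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

# Mix the two hat matrices and pick the mixing parameter by PRESS.
lams = np.linspace(0.0, 1.0, 101)
scores = [press(lam * H_ls + (1.0 - lam) * H_k, y) for lam in lams]
lam_best = lams[int(np.argmin(scores))]
y_hat = (lam_best * H_ls + (1.0 - lam_best) * H_k) @ y
print(f"PRESS-selected mixing parameter: {lam_best:.2f}")
```

A mixing parameter near 1 leans on the prescribed parametric model, while a value near 0 defers to the kernel smoother, which is the balance the abstract describes.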
|
109 |
Empirical Bayesian Smoothing Splines for Signals with Correlated Errors: Methods and Applications / Rosales Marticorena, Luis Francisco, 22 June 2016 (has links)
No description available.
|
110 |
Heuristics for offline rectangular packing problems / Ortmann, Frank, 03 1900 (has links)
Thesis (PhD (Logistics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Packing problems are common in industry and there is a large body of literature on the subject. Two packing problems are considered in this dissertation: the strip packing problem and the bin packing problem. The aim in both problems is to pack a specified set of small items, the dimensions of which are all known prior to packing (hence giving rise to an offline problem), into larger objects, called bins. The strip packing problem requires packing these items into a single bin, one dimension of which is unbounded (the bin is therefore referred to as a strip). In two dimensions the width of the strip is typically specified and the aim is to pack all the items into the strip, without overlapping, so that the resulting packing height is a minimum. The bin packing problem, on the other hand, is the problem of packing the items into a specified set of bins (all of whose dimensions are bounded) so that the wasted space remaining in the bins (which contain items) is a minimum. The bins may all have the same dimensions (in which case the problem is known as the single bin size bin packing problem), or may have different dimensions, in which case the problem is called the multiple bin size bin packing problem (MBSBPP). In two dimensions the wasted space is the sum total of areas of the bins (containing items) not covered by items.
Many solution methodologies have been developed for the above-mentioned problems, but the scope of the solution methodologies considered in this dissertation is restricted to heuristics. Packing heuristics follow a fixed set of rules to pack items in such a manner as to find good, feasible (but not necessarily optimal) solutions to the strip and bin packing problems within as short a time span as possible. Three types of heuristics are considered in this dissertation: (i) those that pack items into levels (the heights of which are determined by the heights of the tallest items in these levels) in such a manner that all items are packed along the bottom of the level, (ii) those that pack items into levels in such a manner that items may be packed anywhere between the horizontal boundaries that define the levels, and (iii) those heuristics that do not restrict the packing of items to levels. These three classes of heuristics are known as level algorithms, pseudolevel algorithms and plane algorithms, respectively.
A computational approach is adopted in this dissertation in order to evaluate the performances of 218 new heuristics for the strip packing problem in relation to 34 known heuristics from the literature with respect to a set of 1 170 benchmark problem instances. It is found that the new level-packing heuristics do not yield significantly better solutions than the known heuristics, but several of the newly proposed pseudolevel heuristics do yield significantly better results than the best of the known pseudolevel heuristics in terms of both packing densities achieved and computation times expended. During the evaluation of the plane algorithms two classes of heuristics were identified for packing problems, namely sorting-dependent and sorting-independent algorithms. Two new sorting techniques are proposed for the sorting-independent algorithms and one of them yields the best-performing heuristic overall.
A new heuristic approach for the MBSBPP is also proposed, which may be combined with level and pseudolevel algorithms for the strip packing problem in order to find solutions to the problem very rapidly. The best-performing plane-packing heuristic is modified to pack items into the largest bins first, followed by an attempted repacking of the items in those bins into smaller bins with the aim of further minimising wasted space. It is found that the resulting plane-packing algorithm yields the best results in terms of time and packing density, but that the solution differences between pseudolevel algorithms are not as marked for the MBSBPP as for the strip packing problem. / AFRIKAANSE OPSOMMING: Packing problems occur commonly in industry and there is a considerable volume of literature on this subject. Two packing problems are considered in this dissertation, namely the strip packing problem and the bin packing problem. In both problems the aim is to pack a specified collection of small items, all of whose dimensions are known before packing takes place (the problem is therefore a so-called offline problem), into one or more larger bins. In the strip packing problem these items are packed into a single bin of which one dimension is unbounded (this bin is therefore called a strip). In two dimensions the width of the strip is usually specified and the aim is to pack all the items into the strip, without overlapping, in such a way that the total packing height is minimised. In the bin packing problem, by contrast, the aim is to pack the items into a specified number of bins (all of whose dimensions are bounded) in such a way that the wasted or remaining space in the bins (that do contain items) is a minimum. The bins may all have the same dimensions (in which case the problem is known as the single bin size bin packing problem), or may have different dimensions (in which case the problem is known as the multiple bin size bin packing problem, abbreviated as MBSBPP). In two dimensions the wasted space is taken as the sum total of those parts of the areas of the bins (that do contain items) in which no items are placed.
Several solution methodologies have already been developed for the two packing problems above, but the scope of the methodologies considered in this dissertation is restricted to heuristics. A packing heuristic follows a fixed set of rules according to which items are packed into bins so as to find good, feasible (but not necessarily optimal) solutions to the strip packing problem and the bin packing problem as quickly as possible. Three types of packing heuristics are considered in this dissertation, namely (i) heuristics that pack items along the bottom edges of horizontal levels in the bins (the heights of these levels are determined by the height of the tallest item in each level), (ii) heuristics that pack items anywhere within horizontal strips in the bins, and (iii) heuristics in which packing is not restricted to horizontal levels or strips. These three classes of heuristics are known as level algorithms, pseudolevel algorithms and plane algorithms, respectively.
A computational approach is followed in this dissertation by comparing the performance of the 218 new heuristics for the strip packing problem with that of 34 known heuristics from the literature, applying all of the heuristics to 1 170 test problem instances. It is found that the new level algorithms do not deliver a noteworthy improvement in solution quality compared with similar existing algorithms in the literature, but that several new pseudolevel algorithms do deliver noteworthy improvements in terms of both packing densities and solution times compared with the best existing algorithms in the literature. Assessment of the plane algorithms led to the identification of two subclasses of algorithms, namely sorting-dependent and sorting-independent algorithms. Two new sorting techniques are also proposed for the subclass of sorting-independent algorithms, and one of them yields the overall best packing heuristic.
A new heuristic approach is also developed for the MBSBPP. This approach can be combined with level or pseudolevel algorithms for the strip packing problem in order to find solutions to the MBSBPP very quickly. The best plane heuristic for the strip packing problem is also adapted to pack items into the largest bins first, and thereafter to repack them into smaller bins with the aim of further minimising wasted space. It is found that the resulting plane algorithm yields the best results in terms of both packing density and solution time, but that the solution differences between the pseudolevel algorithms are not as noticeable for the MBSBPP as was the case with the strip packing problem.
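As a concrete illustration of the level-algorithm class described in the abstract, the sketch below implements the classical first-fit decreasing height (FFDH) heuristic for two-dimensional strip packing: items are sorted by decreasing height and each is placed on the first level with enough remaining width, with a new level opened when none fits. FFDH is a well-known heuristic from the literature, not one of the 218 new heuristics proposed in the dissertation; the item list and strip width are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Item:
    width: float
    height: float

def first_fit_decreasing_height(items, strip_width):
    """Basic level heuristic for 2D strip packing. Each level's height is fixed by
    its first (tallest remaining) item; returns the total packing height."""
    levels = []   # each level is [remaining_width, level_height]
    for item in sorted(items, key=lambda it: it.height, reverse=True):
        if item.width > strip_width:
            raise ValueError("item wider than the strip")
        for level in levels:
            if item.width <= level[0]:
                level[0] -= item.width    # place the item on this existing level
                break
        else:
            # No existing level can accommodate the item: open a new level.
            levels.append([strip_width - item.width, item.height])
    return sum(level[1] for level in levels)

items = [Item(4, 3), Item(3, 5), Item(2, 2), Item(5, 4), Item(1, 6), Item(2, 3)]
print(first_fit_decreasing_height(items, strip_width=8))
```

Pseudolevel and plane heuristics relax, respectively, the bottom-of-level placement rule and the level structure itself, which is where the dissertation's new heuristics gain their improved packing densities.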
|