21

A qualitative survey of current practices in missionary candidate assessment

Chira, Roberta M. January 2002 (has links) (PDF)
Thesis (Psy. D.)--Wheaton College Graduate School, Wheaton, IL, 2002. / Abstract. Includes bibliographical references (leaves 79-82).
22

The dimensions of grief among Chinese in Hong Kong

Tsui, Ka-yee, Yenny. January 2005 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2005. / Title proper from title frame. Also available in printed format.
23

Reliability testing of a life script questionnaire: a research report submitted in partial fulfillment ...

Cigan, Erica Rosaline. January 1981 (has links)
Thesis (M.S.)--University of Michigan, 1981.
24

An investigation of faking good responding on MMPI-II, PSI and PCRI

Carney, Shawn. January 2005 (has links)
Thesis (M.A.)--Rowan University, 2005. / Typescript. Includes bibliographical references.
26

An investigation of the effect of the type of music upon mental test performance of high school students

Merrell, Edgar Johnston. January 1943 (has links)
[No abstract submitted] / Faculty of Education / Graduate
27

Accuracy of parameter estimation on polytomous IRT models

Park, Chung 01 January 1997 (has links)
Procedures based on item response theory (IRT) are widely accepted for solving various measurement problems which cannot be solved using classical test theory (CTT) procedures. The desirable features of dichotomous IRT models over CTT are well known and have been documented by Hambleton, Swaminathan, and Rogers (1991). However, dichotomous IRT models are inappropriate for situations where items need to be scored in more than two categories. For example, most scoring rubrics for performance assessments require scoring examinees' responses in ordered categories. In addition, polytomous IRT models are useful for assessing an examinee's partial knowledge or levels of mastery. However, the successful application of polytomous IRT models to practical situations depends on the availability of reasonable and well-behaved estimates of the parameters of the models. Therefore, in this study, the behavior of estimators of the parameters of polytomous IRT models was examined. In the first study, factors that affected the accuracy, variance, and bias of the marginal maximum likelihood (MML) estimators in the generalized partial credit model (GPCM) were investigated. Overall, the results showed that the MML estimators of the GPCM parameters, as obtained with the computer program PARSCALE, performed well under various conditions. However, there was considerable bias in the estimates of the category parameters under all conditions investigated. The average bias did not decrease when sample size and test length increased, and this bias contributed to large RMSE in the estimation of category parameters. Further studies are needed to examine the effect of bias in the parameter estimates on ability estimation, the development of item banks, and adaptive testing based on polytomous IRT models. In the second study, the effectiveness of Bayesian procedures for estimating parameters in the GPCM was examined. The results showed that Bayesian procedures provided more accurate estimates of parameters with small data sets. Priors on the slope parameters, while having only a modest effect on the accuracy of estimation of slope parameters, had a very positive effect on the accuracy of estimation of the step difficulty parameters.
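For context, the generalized partial credit model named in this abstract is conventionally written as below (standard form of the model, supplied here for readers; it is not quoted from the dissertation). Here $a_j$ is the item slope, $b_{jv}$ are the category (step) parameters, $m_j$ is the highest score category of item $j$, and the empty sum for $k = 0$ is defined to be zero; some presentations also include a scaling constant $D$.

```latex
% GPCM category response function (standard form):
% probability that an examinee with ability theta scores in category k of item j.
P_{jk}(\theta) \;=\;
  \frac{\exp\!\Bigl(\sum_{v=1}^{k} a_j\,(\theta - b_{jv})\Bigr)}
       {\sum_{c=0}^{m_j} \exp\!\Bigl(\sum_{v=1}^{c} a_j\,(\theta - b_{jv})\Bigr)},
\qquad k = 0, 1, \dots, m_j .
```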
28

Linking multiple-choice and constructed-response items to a common proficiency scale

Bastari, B 01 January 2000 (has links)
Tests consisting of both multiple-choice and constructed-response items have gained in popularity in recent years, and many assessment programs now administer the two item formats in the same test. However, linking these two item formats on a common scale has not been thoroughly studied. Even though several methods for linking scales under item response theory (IRT) have been developed, most studies have addressed multiple-choice items only, a few have addressed constructed-response items, and none has addressed both item formats in the same assessment. The purpose of this study was to investigate the effects of several factors on the accuracy of linking item parameter estimates onto a common scale using the combination of the three-parameter logistic (3-PL) model for multiple-choice items and the graded response model (GRM) for constructed-response items. Working with an anchor-test design, the factors considered were: (1) test length, (2) proportion of items of each format in the test, (3) anchor test length, (4) sample size, (5) ability distributions, and (6) method of equating. The data for dichotomous and polytomous responses for unique and anchor items were simulated to vary as a function of these factors. The main findings were as follows: the constructed-response items had a large influence on parameter estimation for both item formats. Generally, the slope parameters were estimated with small bias but large variance. Threshold parameters were also estimated with small bias but large variance for constructed-response items, whereas the opposite pattern was obtained for multiple-choice items. The guessing parameters were recovered relatively well, and the coefficients of transformation were also estimated relatively accurately. Overall, the following conditions led to more effective results: (1) a long test, (2) a large proportion of multiple-choice items in the test, (3) a long anchor test, (4) a large sample size, (5) no ability differences between the groups used in linking the two tests, and (6) the method of concurrent calibration. At the same time, more research is needed to expand the conditions under which linking item formats to a common scale is evaluated, such as the introduction of multidimensional data.
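As a concrete illustration of the mixed-format model combination this abstract names, the sketch below computes response probabilities under the 3-PL model for a multiple-choice item and under the graded response model for a constructed-response item. It is a minimal sketch of the standard model equations, not code from the dissertation; the function names and all parameter values are illustrative assumptions.

```python
import numpy as np

D = 1.7  # scaling constant commonly used with logistic IRT models

def p_3pl(theta, a, b, c):
    """3-PL probability of answering a multiple-choice item correctly."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def p_grm(theta, a, thresholds):
    """Graded response model category probabilities for a polytomous item.

    thresholds: ordered boundary parameters b_1 < ... < b_m.
    Returns probabilities for score categories 0..m.
    """
    # Cumulative probabilities P(X >= k) for k = 1..m, bracketed by 1 and 0.
    star = [1.0] + [1.0 / (1.0 + np.exp(-D * a * (theta - b))) for b in thresholds] + [0.0]
    return np.array([star[k] - star[k + 1] for k in range(len(thresholds) + 1)])

# Illustrative parameter values only:
theta = 0.5
print(p_3pl(theta, a=1.2, b=0.0, c=0.2))                  # multiple-choice item
print(p_grm(theta, a=1.0, thresholds=[-1.0, 0.0, 1.0]))   # 4-category constructed-response item
```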
29

Evaluating the effects of several multi-stage testing design variables on selected psychometric outcomes for certification and licensure assessment

Zenisky, April L 01 January 2004 (has links)
Computer-based testing is becoming popular with credentialing agencies because new test designs are possible, and the evidence is clear that these new designs can increase the reliability and validity of candidate scores and pass/fail decisions. Research on multi-stage testing (MST) to date suggests that the measurement quality of MST results is comparable to that of full-fledged computer-adaptive tests and improved over computerized fixed-form tests. MST's promise lies in this potential for improved measurement with greater control over test-form construction than other adaptive approaches. Recommending use of the MST design and advising how best to set up the design, however, are two different things. The purpose of the current simulation study was to advance an established line of research on MST methodology by enhancing understanding of how several important design variables affect outcomes for high-stakes credentialing. Modeling of the item bank, the candidate population, and the statistical characteristics of test items reflected the conditions of an operational credentialing exam. The studied variables were module arrangement (4 designs), amount of overall test information (4 levels), distribution of information over stages (2 variations), strategies for between-stage routing (4 levels), and pass rates (3 levels), for 384 conditions in total. Results showed that high levels of decision accuracy (DA) and decision consistency (DC) were consistently observed, even when test information was reduced by as much as 25%. No differences due to the choice of module arrangement were found. With high overall test information, results were optimal when test information was divided equally among stages; with reduced test information, gathering more test information at Stage 1 provided the best results. Generalizing simulation study findings is always problematic: in practice, psychometric models never completely explain candidate performance, and with MST there is always a potential psychological impact on candidates if shifts in test difficulty are noticed. At the same time, two findings stand out in this research: (1) with limited amounts of overall test information, it may be best to capitalize on the available information with accurate branching decisions early, and (2) there may be little statistical advantage in increasing test information much above 10, as gains in reliability and validity appear minimal.
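The decision accuracy (DA) and decision consistency (DC) outcomes reported above are typically estimated from simulated pass/fail classifications. The sketch below shows one common way to compute them once true abilities, ability estimates from two parallel simulated administrations, and a cut score are available. It is a generic illustration rather than the study's own code, and the function and variable names are assumptions.

```python
import numpy as np

def classification_rates(theta_true, theta_hat_1, theta_hat_2, cut):
    """Estimate decision accuracy and decision consistency for a pass/fail cut.

    theta_true    : true abilities of simulated candidates
    theta_hat_1/2 : ability estimates from two parallel (simulated) administrations
    cut           : pass/fail cut score on the theta scale
    """
    true_pass = theta_true >= cut
    pass_1 = theta_hat_1 >= cut
    pass_2 = theta_hat_2 >= cut

    decision_accuracy = np.mean(pass_1 == true_pass)   # agreement with the true classification
    decision_consistency = np.mean(pass_1 == pass_2)   # agreement across the two administrations
    return decision_accuracy, decision_consistency

# Illustrative use with made-up data:
rng = np.random.default_rng(0)
theta = rng.normal(size=10_000)
da, dc = classification_rates(theta,
                              theta + rng.normal(scale=0.3, size=theta.size),
                              theta + rng.normal(scale=0.3, size=theta.size),
                              cut=0.0)
print(round(da, 3), round(dc, 3))
```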
30

Weighting procedures for robust ability estimation in item response theory

Skorupski, William P 01 January 2004 (has links)
Methods of ability parameter estimation in educational testing are subject to the biases inherent in various estimation procedures. This is especially true for tests whose properties do not meet the asymptotic assumptions of procedures such as maximum likelihood estimation. The item weighting procedures in this study were developed as a means to improve the robustness of such ability estimates. A series of procedures for weighting the contribution of items to examinees' scores is described and empirically tested in a simulation study under a variety of reasonable conditions. Item weights are determined so as to minimize the contribution of some items while simultaneously maximizing the contribution of others. These procedures differentially weight the contribution of items to examinees' scores by accounting for either (1) the amount of information each item provides with respect to trait estimation or (2) the relative precision of the item parameter estimates. Results indicate that weighting by item information produced ability estimates that were moderately less biased at the tails of the ability distribution and had substantially lower standard errors than scores derived from a traditional item response theory framework. Areas for future research using this scoring method are suggested.
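To make the idea of "weighting by item information" concrete, the sketch below computes Fisher information for 2-PL items at a provisional ability value and uses it to weight each item's contribution to a weighted log-likelihood, which is then maximized over a grid. This is only one plausible realization of such a scheme, stated under the assumption of a 2-PL model with normalized weights and a provisional theta of zero; it is not the dissertation's actual procedure, and all names and parameter values are illustrative.

```python
import numpy as np

D = 1.7  # logistic scaling constant

def p_2pl(theta, a, b):
    """2-PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-D * a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2-PL item at theta: I(theta) = (D a)^2 P (1 - P)."""
    p = p_2pl(theta, a, b)
    return (D * a) ** 2 * p * (1.0 - p)

def weighted_loglik(theta, responses, a, b, theta_provisional):
    """Log-likelihood with items weighted by their information at a provisional theta."""
    p = p_2pl(theta, a, b)
    w = item_information(theta_provisional, a, b)
    w = w / w.sum()  # normalize the weights so they sum to 1
    return np.sum(w * (responses * np.log(p) + (1 - responses) * np.log(1.0 - p)))

# Illustrative use with made-up item parameters and a short response vector:
a = np.array([0.8, 1.2, 1.5, 0.6])
b = np.array([-1.0, 0.0, 0.5, 1.2])
x = np.array([1, 1, 0, 0])
grid = np.linspace(-4, 4, 161)
theta_hat = grid[np.argmax([weighted_loglik(t, x, a, b, theta_provisional=0.0) for t in grid])]
print(theta_hat)
```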
