91

Using Item Response Theory to Develop a Shorter Version of the Transition Readiness Assessment Questionnaire (TRAQ)

Johnson, Kiana, McBee, R., Wood, David L. 01 January 2016 (has links)
No description available.
92

Item response theory

Inman, Robin F. 01 January 2001 (has links)
This study was performed to show the advantages of Item Response Theory (IRT) over Classical Test Theory (CTT). Item Response Theory is a complex theory with many applications; this study used one of them, test analysis. Ten items from a social psychology midterm were analyzed to show how IRT is more accurate than CTT, because IRT allows individual items to be added and deleted. IRT also features the Item Characteristic Curve (ICC), which gives an easy-to-read interpretation of the results. The results showed the levels of the three indexes (item discrimination, difficulty, and guessing) and indicated in which area each item was weak or strong. With this information, suggestions can be made to improve an item and ultimately improve the measurement accuracy of the entire test. Classical Test Theory cannot do this on an individual-item basis without changing the accuracy of the entire test. The results of this study confirm that IRT can be used to analyze individual items and allow for their improvement or revision, meaning that IRT supports test analysis more efficiently and accurately than CTT. This study provides an introduction to Item Response Theory in the hope that more research will establish IRT as a commonly used tool for improving testing measurement.
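Since the abstract names the ICC and the three indexes (discrimination, difficulty, and guessing) that parameterize the three-parameter logistic (3PL) model, a minimal sketch of that curve may clarify what such an analysis plots. The abstract does not state which model was fit, and the parameter values below are illustrative assumptions, not the thesis's estimates:

```python
import numpy as np
import matplotlib.pyplot as plt

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic ICC: probability of a correct response
    at ability theta, given discrimination a, difficulty b, and
    guessing (lower asymptote) c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative parameters only: a moderately discriminating item of
# average difficulty on which low-ability examinees guess correctly
# about 20% of the time.
theta = np.linspace(-4, 4, 200)
plt.plot(theta, icc_3pl(theta, a=1.2, b=0.0, c=0.2))
plt.xlabel("Ability (theta)")
plt.ylabel("P(correct)")
plt.title("Item characteristic curve (3PL)")
plt.show()
```

In a 3PL fit, b locates the item on the ability scale, a governs the slope of the curve at that point, and c sets the lower asymptote due to guessing, which is how the three indexes named in the abstract map onto a single readable curve.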
93

The Effects Of Differential Item Functioning On Predictive Bias

Bryant, Damon 01 January 2004 (has links)
The purpose of this research was to investigate the relation between measurement bias at the item level (differential item functioning, DIF) and predictive bias at the test-score level. DIF was defined as a difference in the probability of getting a test item correct for examinees with the same ability but from different subgroups. Predictive bias was defined as a difference in subgroup regression intercepts and/or slopes in predicting a criterion. Data were simulated by computer. Two hypothetical subgroups (a reference group and a focal group) were used. The predictor was a composite score on a dimensionally complex test with 60 items. Sample size (35, 70, and 105 per group), validity coefficient (.3 or .5), and the mean difference on the predictor (0, .33, .66, and 1 standard deviation, SD) and the criterion (0 and .35 SD) were manipulated. The percentage of items showing DIF (0%, 15%, and 30%) and the effect size of DIF (small = .3, medium = .6, and large = .9) were also manipulated. Each of the 432 conditions in the 3 x 2 x 4 x 2 x 3 x 3 design was replicated 500 times. For each replication, a predictive bias analysis was conducted, and the detection of predictive bias against each subgroup was the dependent variable. The percentage of DIF and the effect size of DIF were hypothesized to influence the detection of predictive bias; hypotheses were also advanced about the influence of sample size and mean subgroup differences on the predictor and criterion. Results indicated that DIF was not related to the probability of detecting predictive bias against either subgroup. The results were inconsistent with the notion that measurement bias and predictive bias are mutually supportive, i.e., that the presence (or absence) of one type of bias is evidence for the presence (or absence) of the other. Sample size and mean differences on the predictor/criterion had direct and indirect effects on the probability of detecting predictive bias against both the reference and focal groups. Implications for future research are discussed.
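A minimal sketch of the 432-condition grid implied by the 3 x 2 x 4 x 2 x 3 x 3 design described above. The variable names and the simulate_and_test placeholder are assumptions, since the abstract reports only the factor levels:

```python
from itertools import product

# Factor levels as reported in the abstract.
sample_sizes = (35, 70, 105)              # examinees per group
validities   = (0.3, 0.5)                 # predictor-criterion correlation
predictor_d  = (0.0, 0.33, 0.66, 1.0)     # subgroup mean difference (SD units)
criterion_d  = (0.0, 0.35)                # subgroup mean difference (SD units)
pct_dif      = (0.0, 0.15, 0.30)          # proportion of the 60 items showing DIF
dif_effect   = (0.3, 0.6, 0.9)            # small, medium, large DIF effect size

conditions = list(product(sample_sizes, validities, predictor_d,
                          criterion_d, pct_dif, dif_effect))
assert len(conditions) == 432  # matches the 3 x 2 x 4 x 2 x 3 x 3 design

REPLICATIONS = 500
for n, rho, d_x, d_y, p_dif, b_dif in conditions:
    for rep in range(REPLICATIONS):
        # simulate_and_test() is a hypothetical placeholder for one
        # replication: generate reference- and focal-group data, score
        # the 60-item test, and run the predictive bias (moderated
        # regression) analysis, recording whether bias was detected.
        pass
```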
94

An IRT Investigation of Common LMX Measures

Howald, Nicholas 29 November 2017 (has links)
No description available.
95

Type I Error Rates and Power Estimates for Several Item Response Theory Fit Indices

Schlessman, Bradley R. 29 December 2009 (has links)
No description available.
96

Do Applicants and Incumbents Respond to Personality Items Similarly? A Comparison Using an Ideal Point Response Model

O'Brien, Erin L. 09 July 2010 (has links)
No description available.
97

A Bifactor Model of Burnout? An Item Response Theory Analysis of the Maslach Burnout Inventory – Human Services Survey.

Periard, David Andrew 05 August 2016 (has links)
No description available.
98

Detecting Insufficient Effort Responding: An Item Response Theory Approach

Barnes, Tyler Douglas January 2016 (has links)
No description available.
99

Case and covariate influence: implications for model assessment

Duncan, Kristin A. 12 October 2004 (has links)
No description available.
100

A semi-parametric approach to estimating item response functions

Liang, Longjuan 22 June 2007 (has links)
No description available.
