111. Detecting Insufficient Effort Responding: An Item Response Theory Approach. Barnes, Tyler Douglas, January 2016.
No description available.
112. Case and covariate influence: implications for model assessment. Duncan, Kristin A., 12 October 2004.
No description available.
113. A semi-parametric approach to estimating item response functions. Liang, Longjuan, 22 June 2007.
No description available.
114. The Coupled Water-Protein Dynamics within Hydration Layer surrounding Protein and Semiclassical Approximation for Optical Response Function. Li, Tanping, 26 September 2011.
No description available.
115. The Effect of Item Parameter Uncertainty on Test Reliability. Bodine, Andrew James, 24 August 2012.
No description available.
116. Constructing an Estimate of Academic Capitalism and Explaining Faculty Differences through Multilevel Analysis. Kniola, David J., 24 November 2009.
Two broad influences have converged to shape a new environment in which universities must now compete and operate. Shrinking financial resources and a global economy have arguably compelled universities to adapt. The concept of academic capitalism helps explain the new realities and places universities in the context of a global, knowledge-based economy (Slaughter & Leslie, 1997). Prior to this theory, the role of universities in the knowledge economy was largely undocumented. Academic capitalism is a measurable concept defined by the mechanisms and behaviors of universities that seek to generate new sources of revenue, and it is best revealed through faculty work. This study was designed to create empirical evidence of academic capitalism through the behaviors of faculty members at research universities. Using a large-scale, national database, the researcher created a new measure, an estimate of academic capitalism, at the individual faculty member level and then used multilevel analysis to explain variation among these individual faculty members. This study will increase our understanding of the changing nature of faculty work, will lead to future studies on academic capitalism that involve longitudinal analysis and important sub-populations, and will likely influence institutional and public policy. / Ph. D.
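To illustrate the kind of multilevel analysis this abstract describes, a two-level random-intercept model with faculty members nested in institutions is a common formulation (the notation below is standard and assumed here, not taken from the study):

```latex
% Level 1: faculty member i within institution j
Y_{ij} = \beta_{0j} + \beta_{1} X_{ij} + r_{ij}, \qquad r_{ij} \sim N(0, \sigma^{2})
% Level 2: institution j
\beta_{0j} = \gamma_{00} + \gamma_{01} W_{j} + u_{0j}, \qquad u_{0j} \sim N(0, \tau_{00})
```

Here Y_{ij} would be the academic-capitalism estimate for faculty member i at institution j, X_{ij} a faculty-level predictor, and W_{j} an institution-level predictor; the variance components sigma^2 and tau_{00} partition individual from institutional variation.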
117. Latent trait, factor, and number endorsed scoring of polychotomous and dichotomous responses to the Common Metric Questionnaire. Becker, R. Lance, 28 July 2008.
Although job analysis is basic to almost all human resource functions, little attention has been given to the response format and scoring strategy of job analysis instruments. This study investigated three approaches to scoring polychotomous and dichotomous responses from the frequency and importance scales of the Common Metric Questionnaire (CMQ). Factor, latent trait, and number-endorsed scores were estimated from the responses of 2,684 job incumbents in six organizations. Scores from four of the CMQ scales were used in linear and nonlinear multiple regression equations to predict pay. The results demonstrated that: (a) simple number-endorsed scoring of dichotomous responses was superior to the other scoring strategies; (b) scoring of dichotomous responses was superior to scoring of polychotomous responses for each scoring technique; (c) scores estimated from the importance scale were better predictors of pay than scores from the frequency scale; (d) the relationship between latent trait and factor scores is nonlinear; (e) latent trait scores estimated with the two-parameter logistic model were superior to latent trait scores from the three-parameter model; (f) test information functions for each scale demonstrated that the CMQ scales accurately measured a relatively narrow range of theta; and (g) the reliability of factor scores estimated from dichotomous data is superior to that of factor scores from polychotomous data. Issues regarding the construction of job analysis instruments and the use of item response theory are discussed. / Ph. D.
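For reference, the two- and three-parameter logistic models this abstract compares have standard forms (generic IRT notation; the CMQ-specific parameter estimates are not reproduced here):

```latex
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}                        % 2PL
P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}            % 3PL
I_i(\theta) = \frac{\left[P_i'(\theta)\right]^{2}}{P_i(\theta)\,\left[1 - P_i(\theta)\right]}  % item information
```

Here a_i is discrimination, b_i difficulty, and c_i pseudo-guessing; summing I_i(theta) over items yields the test information function whose relatively narrow peak the abstract refers to in finding (f).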
118. Gender and Ethnicity-Based Differential Item Functioning on the Myers-Briggs Type Indicator. Gratias, Melissa B., 7 May 1997.
Item Response Theory (IRT) methodologies were employed to examine the Myers-Briggs Type Indicator (MBTI) for differential item functioning (DIF) on the basis of crossed gender and ethnicity variables. White males were the reference group, and the focal groups were black females, black males, and white females. The MBTI was predicted to show DIF in all comparisons; in particular, DIF on the Thinking-Feeling scale was hypothesized, especially in the comparisons between white males and black females and between white males and white females. A sample of 10,775 managers who took the MBTI at assessment centers provided the data. The Mantel-Haenszel procedure and an IRT-based area technique were the methods of DIF detection.
Results showed several biased items on all scales for all comparisons. Ethnicity-based bias was seen in the white male vs. black female and white male vs. black male comparisons. Gender-based bias was seen particularly in the white male vs. white female comparisons. However, the Thinking-Feeling scale showed the least DIF of all scales across comparisons, and only one of the items differentially scored by gender was found to be biased. Findings indicate that the gender-based differential scoring system is not defensible in managerial samples, and there is a need for further research into differential item functioning with regard to ethnicity. / Master of Science
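As a sketch of the Mantel-Haenszel procedure named above (a textbook implementation, not the study's code; function and variable names are illustrative):

```python
import numpy as np

def mantel_haenszel_dif(correct, group, total_score):
    """Mantel-Haenszel common odds ratio for one item.

    correct:     0/1 item responses (1 = keyed response endorsed)
    group:       0 = reference group, 1 = focal group
    total_score: matching variable (e.g., total scale score)

    Returns the MH odds-ratio estimate; values far from 1.0
    suggest DIF after matching on the total score.
    """
    correct = np.asarray(correct)
    group = np.asarray(group)
    total_score = np.asarray(total_score)

    num, den = 0.0, 0.0
    for s in np.unique(total_score):          # one 2x2 table per score stratum
        stratum = total_score == s
        ref = stratum & (group == 0)
        foc = stratum & (group == 1)
        A = np.sum(correct[ref] == 1)         # reference, keyed response
        B = np.sum(correct[ref] == 0)         # reference, other response
        C = np.sum(correct[foc] == 1)         # focal, keyed response
        D = np.sum(correct[foc] == 0)         # focal, other response
        T = A + B + C + D
        if T == 0:
            continue
        num += A * D / T
        den += B * C / T
    return num / den if den > 0 else np.nan
```

An odds ratio near 1.0 indicates no DIF once examinees are matched on the trait proxy; strata containing only one group contribute nothing to either sum and so are effectively skipped.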
119. Item response theory. Inman, Robin F., 1 January 2001.
This study was performed to show advantages of Item Response Theory (IRT) over Classical Test Theory (CTT). Item Response Theory is a complex theory with many applications; this study used one application, test analysis. Ten items from a social psychology midterm were analyzed to show how IRT is more accurate than CTT, because IRT has the ability to add and delete individual items. IRT also features the Item Characteristic Curve (ICC), which gives an easy-to-read interpretation of the results. The results showed the levels of the three indices: item discrimination, difficulty, and guessing. The results indicated in which area each item was weak or strong. With this information, suggestions can be made to improve an item and ultimately improve the measurement accuracy of the entire test; CTT cannot do this on an item-by-item basis without changing the accuracy of the entire test. The results of this study confirm that IRT can be used to analyze individual items and allow for their improvement or revision, meaning that IRT supports test analysis in a more efficient and accurate manner than CTT. This study provides an introduction to Item Response Theory in the hope that further research will establish IRT as a commonly used tool for improving test measurement.
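A minimal sketch of the item characteristic curve the abstract describes, built from the three indices it names (the parameter values below are invented for illustration, not taken from the study):

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic ICC: P(correct | theta).

    a = discrimination, b = difficulty, c = guessing (lower asymptote).
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3.0, 3.0, 7)           # examinee trait levels
print(icc_3pl(theta, a=1.2, b=0.5, c=0.2))  # rises from ~0.2 toward 1.0
```

Reading such a curve shows where an item discriminates well (the steep region near b) and where guessing dominates (the lower asymptote c), which is the item-by-item diagnosis the abstract credits IRT with.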
120. A generalized partial credit FACETS model for investigating order effects in self-report personality data. Hayes, Heather, 5 July 2012.
Despite its convenience, the process of self-report in personality testing can be impacted by a variety of cognitive and perceptual biases. One bias that violates local independence, a core criterion of modern test theory, is the order effect. In this bias, characteristics of an item response are impacted not only by the content of the current item but also by accumulated exposure to previous, similar-content items. The bias is manifested as increasingly stable item responses for items that appear later in a test. Previous investigations of this effect have been rooted in classical test theory (CTT) and have consistently found that item reliabilities, or corrected item-total score correlations, increase with an item's serial position in the test. The purpose of the current study was to examine order effects more rigorously via item response theory (IRT). To this end, the FACETS modeling approach (Linacre, 1989) was combined with the Generalized Partial Credit Model (GPCM; Muraki, 1992) to produce a new model, the Generalized Partial Credit FACETS Model (GPCFM). The serial position of an item serves as a facet that contributes to the item response, not only via its impact on an item's location on the latent trait continuum but also via its discrimination. Thus, the GPCFM differs from previous generalizations of the FACETS model (Wang & Liu, 2007) in that the item discrimination parameter is modified to include a serial position effect. This parameter is important because it reflects the extent to which the purported underlying trait is represented in an item score. Two sets of analyses were conducted. First, a simulation study demonstrated effective parameter recovery, though error estimates were affected by sample size for all parameters, by test length and the size of the order effect for trait-level estimates, and by an interaction between sample size and test length for item discrimination. Second, with respect to real self-report personality data, the GPCFM demonstrated good fit, as well as superior fit relative to competing nested models, while also identifying order effects in some traits, particularly Neuroticism, Openness, and Agreeableness.
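For context, the Generalized Partial Credit Model (Muraki, 1992) that the GPCFM extends can be written in standard notation as:

```latex
P(X_i = k \mid \theta) =
  \frac{\exp \sum_{v=1}^{k} a_i \left(\theta - b_{iv}\right)}
       {\sum_{c=0}^{m_i} \exp \sum_{v=1}^{c} a_i \left(\theta - b_{iv}\right)},
  \qquad \textstyle\sum_{v=1}^{0} (\cdot) \equiv 0
```

A FACETS-style serial-position effect could then enter both parameters, for example by replacing a_i with a_i + \lambda\,s_i and b_{iv} with b_{iv} - \delta\,s_i, where s_i is item i's serial position. This is a plausible sketch rather than the dissertation's exact parameterization; the key point matching the abstract is that discrimination, not just location, is allowed to vary with serial position.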