81
The Development, Pilot, and Field Test of the Core HIV/AIDS Knowledge Assessment for Undergraduate and Graduate Students in Counseling-Related Degree Programs. Acklin, Carrie L. 01 May 2016
The purpose of this study was to develop a core HIV/AIDS knowledge assessment (CHAKA) for students enrolled in counseling-related degree programs. Although there are studies that have examined HIV/AIDS knowledge among counselors, the instruments that were used were limited in ways that may compromise the accuracy of the inferences that were made. This study was carried out in three phases. Phase 1 involved developing an initial pool of items; Phase 2 involved an expert review for content validation as well as a pilot test; Phase 3 involved field testing the CHAKA. The field test involved 343 undergraduate and graduate students at Southern Illinois University. Item response theory (IRT) was used to analyze the data. Before the data were analyzed, they were examined to determine whether the CHAKA was a unidimensional test. The factor analysis indicated that the CHAKA may not be unidimensional; however, internal consistency was acceptable (α = .734). A two-parameter logistic (2PL) model was fit to the data. The item parameter estimates showed relatively low discrimination and difficulty values, along with some problematic items (i.e., negative discrimination estimates, unusually large difficulty values). Additional analyses revealed that locally dependent items may have accounted for the possible multidimensionality, low discrimination indices, and inflated difficulty values. The low discrimination values likely affected the information values of the items and the test: all item information values were less than 1. Last, both uniform and non-uniform differential item functioning (DIF) were present between undergraduate and graduate students. IRT appears to be a promising approach to instrument development in counseling-related programs. Although the CHAKA's properties were not ideal, further revisions and a larger sample size may contribute to the overall improvement of this instrument.
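As a concrete illustration of the 2PL quantities reported here (discrimination, difficulty, and item information), the following minimal Python sketch uses hypothetical parameter values, not the CHAKA estimates; items with negative discrimination or extreme difficulty are the kind flagged as problematic above.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by one 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item parameter estimates (discrimination a, difficulty b);
# a negative a or an extreme b would be flagged as a problematic item.
a_hat = np.array([0.45, 0.60, -0.20, 0.35])
b_hat = np.array([0.10, 1.20, 2.80, -0.50])

theta_grid = np.linspace(-3, 3, 121)
info = np.array([item_information(theta_grid, a, b) for a, b in zip(a_hat, b_hat)])

print("max item information:", info.max(axis=1))   # values below 1 mirror the low-information finding
print("flagged items:", np.where((a_hat < 0) | (np.abs(b_hat) > 2.5))[0])
```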
82
Non-parametric item response theory applications in the assessment of dementia. McGrory, Sarah January 2015
This thesis sought to address the application of non-parametric item response theory (NIRT) to cognitive and functional assessment in dementia. Performance on psychometric tests is key to the diagnosis and monitoring of dementia. NIRT can be used to improve the psychometric properties of tests used in dementia assessment in multiple ways: confirming an underlying unidimensional structure, establishing formal item hierarchies that reflect patterns of decline, increasing insight by examining item parameters such as difficulty and discrimination, and creating shorter tests. From a NIRT perspective, item difficulty refers to the ease with which an item is endorsed. Discrimination is an index of how well an item can differentiate between patients of varying levels of severity. First, I carried out a systematic review to identify applications of both parametric and non-parametric IRT to measures assessing global cognitive functioning in people with dementia. This review demonstrated that IRT can increase the interpretive power of cognitive assessment scales and confirmed the limited number of IRT analyses of cognitive scales in dementia populations. This thesis extended this approach by applying Mokken scaling analysis to commonly used measures of current cognitive ability (Addenbrooke’s Cognitive Examination-Revised (ACE-R)) and of premorbid cognitive ability (National Adult Reading Test (NART)). Differential item functioning (DIF) by diagnosis identified slight variations in the patterns of hierarchical decline in the ACE-R. These disease-specific sequences of decline could serve as an adjunct to diagnosis; for example, where learning a name and address is a more difficult task than being orientated in time, late-onset Alzheimer’s disease is a more probable diagnosis than mixed Alzheimer’s and vascular dementia. These analyses also allowed key items to be identified that can be used to create briefer scales (the mini-ACE and Mini-NART) with good psychometric properties. These scales are clinically relevant, comprising highly discriminatory, invariantly ordered items. They also allow sensitive measurement and adaptive testing and can reduce test administration time and patient stress. Impairment of functional abilities represents a crucial component of dementia diagnosis, with performance on these functional tasks predictive of overall disease. A second aspect of this thesis, therefore, was the application of Mokken scaling analyses to measures of functional decline in dementia, specifically the Lawton Instrumental Activities of Daily Living (IADL) scale and the Physical Self-Maintenance Scale (PSMS). While gender DIF was observed for several items, implying that men and women at the same level of functioning are not equally likely to endorse those items, a generally consistent pattern of impairment in functional ability was observed across different types of dementia.
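As an illustration of the scalability coefficients underlying Mokken scaling, the sketch below computes Loevinger's H from a binary person-by-item matrix; the data are simulated stand-ins rather than ACE-R or NART responses, and the NIRT notion of difficulty appears simply as the proportion of respondents endorsing each item.

```python
import numpy as np

def loevinger_H(X):
    """Scalability coefficients for a binary item-response matrix X (persons x items).

    Returns (pairwise H, item-level H, total H) based on observed vs. maximum covariances.
    """
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)                        # NIRT "difficulty": proportion endorsing each item
    cov = np.cov(X, rowvar=False, bias=True)  # observed item covariances
    cov_max = np.minimum.outer(p, p) - np.outer(p, p)  # maximum covariance given the marginals
    np.fill_diagonal(cov, 0.0)
    np.fill_diagonal(cov_max, 0.0)
    H_pair = np.divide(cov, cov_max, out=np.zeros_like(cov), where=cov_max > 0)
    H_item = cov.sum(axis=1) / cov_max.sum(axis=1)
    H_total = cov.sum() / cov_max.sum()
    return H_pair, H_item, H_total

# Simulated stand-in data: 200 "patients" by 10 dichotomously scored items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
difficulty = np.linspace(-1.5, 1.5, 10)
X = (rng.random((200, 10)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)

_, H_item, H_total = loevinger_H(X)
print(np.round(H_item, 2), round(H_total, 2))  # H_i >= .3 is a common minimum for Mokken scales
```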
83
Measurement of Stigma and Relationships Between Stigma, Depression, and Attachment Style Among People with HIV and People with Hepatitis C. Cabrera, Christine M. January 2014
This dissertation is composed of three studies that examined illness-related stigma, depressive symptoms, and attachment style among patients living with HIV and Hepatitis C (HCV). The first study examined the psychometric properties of a brief HIV Stigma Scale (B-HSS) in a sample of adult patients living with HIV (PHA) (n = 94). The second study developed and explored the psychometric properties of the HCV Stigma Scale in a sample of adult patients living with HCV (PHC) (n = 92). Psychometric properties were evaluated with classical test theory and item response theory methodology. The third study explored whether illness-related stigma mediated the relationship between insecure attachment styles (anxious attachment or avoidant attachment) and depressive symptoms among PHA (n = 72) and PHC (n = 83). From June to December 2008, patients were recruited to participate in a questionnaire study at the outpatient clinics of The Ottawa Hospital. Findings indicated that the 9-item B-HSS is a reliable and valid measure of HIV stigma with highly discriminatory items, that is, items that are highly effective at differentiating patients with different levels of stigma. The 9-item HCV Stigma Scale was also found to be reliable and valid, with highly discriminatory items that effectively differentiate PHC. Construct validity for both scales was supported by relationships with theoretically related constructs: depression and quality of life. Among PHA, when HIV stigma was controlled, the relationship between anxious attachment style and depressive symptoms was not significant; the relationship between avoidant attachment style and depressive symptoms decreased but remained significant. Among PHC, when HCV stigma was controlled, the relationship between insecure attachment styles and depressive symptoms was not significant. Dissertation results emphasize the importance of identifying patients experiencing illness-related stigma and the relevance of addressing stigma and attachment style when treating depressive symptoms among PHA and PHC.
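The mediation logic reported here (attachment predicts depressive symptoms, and the association weakens or disappears once stigma is controlled) can be sketched with two regressions. This is a simplified, Baron-and-Kenny-style illustration on simulated data; the variable names and effect sizes are hypothetical, not the dissertation's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 80                                        # roughly the size of the PHA/PHC subsamples
anxious = rng.normal(size=n)                  # hypothetical attachment-style score
stigma = 0.6 * anxious + rng.normal(size=n)   # proposed mediator
depression = 0.5 * stigma + 0.1 * anxious + rng.normal(size=n)

# Step 1: total effect of attachment style on depressive symptoms.
total = sm.OLS(depression, sm.add_constant(anxious)).fit()
# Step 2: direct effect after controlling for stigma; a non-significant
# attachment coefficient here is consistent with full mediation.
direct = sm.OLS(depression, sm.add_constant(np.column_stack([anxious, stigma]))).fit()

print("total effect: ", round(total.params[1], 2), "p =", round(total.pvalues[1], 3))
print("direct effect:", round(direct.params[1], 2), "p =", round(direct.pvalues[1], 3))
```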
84
Nonparametric item response modeling for identifying differential item functioning in the moderate-to-small-scale testing context. Witarsa, Petronilla Murlita 11 1900
Differential item functioning (DIF) can occur across age, gender, ethnic, and/or linguistic groups of examinee populations. Therefore, whenever there is more than one group of examinees involved in a test, a possibility of DIF exists. It is important to detect items with DIF with accurate and powerful statistical methods. While finding a proper DIF method is essential, until now most of the available methods have been dominated by applications to large-scale testing contexts. Since the early 1990s, Ramsay has developed a nonparametric item response methodology and computer software, TestGraf (Ramsay, 2000). The nonparametric item response theory (IRT) method requires fewer examinees and items than other item response theory methods and was also designed to detect DIF. However, nonparametric IRT's Type I error rate for DIF detection had not been investigated. The present study investigated the Type I error rate of the nonparametric IRT DIF detection method when applied to a moderate-to-small-scale testing context wherein there were 500 or fewer examinees in a group. In addition, the Mantel-Haenszel (MH) DIF detection method was included. A three-parameter logistic item response model was used to generate data for the two population groups. Each population corresponded to a test of 40 items. Item statistics for the first 34 non-DIF items were randomly chosen from the mathematics test of the 1999 TIMSS (Third International Mathematics and Science Study) for grade eight, whereas item statistics for the last six studied items were adopted from the DIF items used in the study of Muniz, Hambleton, and Xing (2001). These six items were the focus of this study.
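The Mantel-Haenszel procedure mentioned above can be sketched directly: examinees are stratified on total score, and a common odds ratio and chi-square statistic are accumulated over strata. The function below is a minimal illustration for a single dichotomous item (the names and the simple stratification rule are illustrative); the chi-square is referred to a 1-df distribution, so values above roughly 3.84 would flag DIF at α = .05.

```python
import numpy as np

def mantel_haenszel_dif(responses, group, scores):
    """Mantel-Haenszel DIF statistics for one dichotomous item.

    responses : 0/1 item responses; group : 0 = reference, 1 = focal;
    scores    : matching variable (e.g., total test score).
    Returns the continuity-corrected MH chi-square and the common odds ratio.
    """
    A_sum = E_sum = V_sum = num = den = 0.0
    for k in np.unique(scores):
        s = scores == k
        r, f = s & (group == 0), s & (group == 1)
        if r.sum() == 0 or f.sum() == 0:
            continue                                   # skip strata missing one group
        A, B = responses[r].sum(), (1 - responses[r]).sum()   # reference right/wrong
        C, D = responses[f].sum(), (1 - responses[f]).sum()   # focal right/wrong
        N = A + B + C + D
        if N < 2:
            continue
        m1, m0, nR, nF = A + C, B + D, A + B, C + D
        A_sum += A
        E_sum += nR * m1 / N
        V_sum += nR * nF * m1 * m0 / (N**2 * (N - 1))
        num += A * D / N
        den += B * C / N
    chi2 = (abs(A_sum - E_sum) - 0.5) ** 2 / V_sum
    return chi2, num / den

# Illustrative usage with simulated data (no DIF built in):
rng = np.random.default_rng(3)
theta = rng.normal(size=600)
group = rng.integers(0, 2, 600)
items = (rng.random((600, 40)) < 1 / (1 + np.exp(-(theta[:, None] - np.linspace(-2, 2, 40))))).astype(int)
chi2, odds = mantel_haenszel_dif(items[:, 0], group, items.sum(axis=1))
print(round(chi2, 2), round(odds, 2))   # compare chi2 to 3.84 (1 df, alpha = .05)
```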
85
A comparison of traditional and IRT factor analysis. Kay, Cheryl Ann 12 1900
This study investigated the item parameter recovery of two methods of factor analysis. The methods researched were a traditional factor analysis of tetrachoric correlation coefficients and an IRT approach to factor analysis that utilizes marginal maximum likelihood estimation with an EM algorithm (MMLE-EM). Dichotomous item response data were generated under the 2-parameter normal ogive model (2PNOM) using PARDSIM software. Examinee abilities were sampled from both the standard normal and uniform distributions. True item discrimination, a, was normal with a mean of .75 and a standard deviation of .10. True item difficulty, b, was specified as uniform on [-2, 2]. The two distributions of abilities were completely crossed with three test lengths (n = 30, 60, and 100) and three sample sizes (N = 50, 500, and 1000). Each of the 18 conditions was replicated 5 times, resulting in 90 datasets. PRELIS software was used to conduct a traditional factor analysis on the tetrachoric correlations. The IRT approach to factor analysis was conducted using BILOG 3 software. Parameter recovery was evaluated in terms of root mean square error, average signed bias, and Pearson correlations between estimated and true item parameters. ANOVAs were conducted to identify systematic differences in error indices. Based on many of the indices, it appears the IRT approach to factor analysis recovers item parameters better than the traditional approach studied. Future research should compare other methods of factor analysis to MMLE-EM under various non-normal distributions of abilities.
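A minimal sketch of the simulation design and recovery indices described here: data are generated under the 2PNOM, P(correct) = Φ(a(θ − b)), and estimates from any estimation run (PRELIS, BILOG, or another routine) are scored with RMSE, average signed bias, and the Pearson correlation. This is an illustrative Python stand-in for the PARDSIM step, not the study's actual code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_items, n_persons = 30, 500
a_true = rng.normal(0.75, 0.10, n_items)   # discrimination, as in the simulation design
b_true = rng.uniform(-2, 2, n_items)       # difficulty
theta = rng.normal(size=n_persons)         # standard-normal abilities

# 2-parameter normal ogive model: P(correct) = Phi(a * (theta - b))
prob = norm.cdf(a_true * (theta[:, None] - b_true))
data = (rng.random((n_persons, n_items)) < prob).astype(int)

def recovery_indices(est, true):
    """RMSE, average signed bias, and Pearson correlation between estimates and truth."""
    err = est - true
    return np.sqrt(np.mean(err**2)), np.mean(err), np.corrcoef(est, true)[0, 1]

# e.g., rmse_b, bias_b, r_b = recovery_indices(b_est, b_true)
# once b_est has been obtained from an estimation run on `data`.
```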
86
Designing Software to Unify Person-Fit Assessment. Pfleger, Phillip Isaac 10 December 2020
Item response theory (IRT) assumes that the model fits the data. One commonly overlooked aspect of model-fit assessment is an examination of person fit, or person-fit assessment (PFA). One reason that PFA lacks popularity among psychometricians is that comprehensive software is not present. This dissertation outlines the development and testing of a new software package, called wizirt, that will begin to meet this need. This software package provides a wide gamut of tools to the user but is currently limited to unidimensional, dichotomous, and parametric models. The wizirt package is built in the open-source language R, where it combines the capabilities of a number of other R packages under a single syntax. In addition to the wizirt package, I have created a number of resources to help users learn to use the package. This includes support for individuals who have never used R before, as well as more experienced R users.
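The abstract does not describe wizirt's interface, so no attempt is made to reproduce it here. As an example of the kind of statistic a person-fit assessment computes, the sketch below implements lz, the standardized log-likelihood person-fit index for dichotomous IRT models; the response pattern and probabilities are made up for illustration.

```python
import numpy as np

def lz_statistic(u, p):
    """Standardized log-likelihood person-fit statistic (lz) for one examinee.

    u : 0/1 response vector; p : model-implied probabilities of a correct response
    at the examinee's estimated ability. Large negative lz values suggest an
    aberrant (misfitting) response pattern.
    """
    u, p = np.asarray(u, float), np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

# Example: probabilities from a fitted model at theta-hat, and a suspicious pattern
p_hat = np.array([0.90, 0.80, 0.65, 0.50, 0.35, 0.20])
u_obs = np.array([0,    0,    0,    1,    1,    1])     # misses easy items, answers hard ones
print(round(lz_statistic(u_obs, p_hat), 2))             # clearly negative -> flagged
```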
88
Regularization Methods for Detecting Differential Item Functioning. Jiang, Jing January 2019
Differential item functioning (DIF) occurs when examinees of equal ability from different groups have different probabilities of correctly responding to certain items. DIF analysis aims to identify potentially biased items to ensure the fairness and equity of instruments, and has become a routine procedure in developing and improving assessments. This study proposed a DIF detection method using regularization techniques, which allows for simultaneous investigation of all items on a test for both uniform and nonuniform DIF. In order to evaluate the performance of the proposed DIF detection models and understand the factors that influence that performance, comprehensive simulation studies and empirical data analyses were conducted. Under various conditions, including test length, sample size, sample size ratio, percentage of DIF items, DIF type, and DIF magnitude, the operating characteristics of three kinds of regularized logistic regression models (lasso, elastic net, and adaptive lasso), each characterized by its penalty function, were examined and compared. Selection of the optimal tuning parameter was investigated using two well-known information criteria, AIC and BIC, as well as cross-validation. The results revealed that BIC outperformed the other model selection criteria: it not only flagged high-impact DIF items precisely but also prevented over-identification of DIF items, with few false alarms. Among the regularization models, the adaptive lasso model achieved superior performance relative to the other two models in most conditions. The performance of the regularized DIF detection model using the adaptive lasso was then compared to two commonly used DIF detection approaches: the logistic regression method and the likelihood ratio test. The proposed model was applied to empirical datasets to demonstrate the applicability of the method in real settings.
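A minimal sketch of the regularized logistic-regression idea: each item is modeled on ability, a group main effect (uniform DIF), and an ability-by-group interaction (nonuniform DIF), with an L1 penalty shrinking the DIF terms toward zero. The Python/scikit-learn stand-in below penalizes all slopes rather than only the DIF parameters, and the names and simulated effect are illustrative; tuning the penalty strength (C here) plays the role of the tuning-parameter selection by AIC, BIC, or cross-validation discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
ability = rng.normal(size=n)
group = rng.integers(0, 2, n)                   # 0 = reference, 1 = focal
# Simulate one item with uniform DIF: the focal group is disadvantaged by 0.8 logits.
logit = 1.2 * ability - 0.2 - 0.8 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Design matrix: ability, group main effect (uniform DIF), ability x group (nonuniform DIF).
X = np.column_stack([ability, group, ability * group])

# Lasso-penalized logistic regression; in this simplified sketch the penalty is applied
# to all slopes, whereas the dissertation's models penalize only the DIF parameters.
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(np.round(fit.coef_, 2))   # nonzero group coefficient -> item flagged for uniform DIF
```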
89
Measuring Procedural Justice: A Case Study in Criminometrics. Graham, Amanda K. 01 October 2019
No description available.
90
ITEM RESPONSE MODELS AND CONVEX OPTIMIZATION. Lewis, Naama 01 May 2020
Item response theory (IRT) models, such as the one-parameter, two-parameter, or normal ogive models, have been discussed for many years. These models represent a rich area of investigation due to their complexity as well as the large amount of data collected in relation to model parameter estimation. Here we propose a new way of looking at IRT models using I-projections and duality. We use convex optimization methods to derive these models. The Kullback-Leibler divergence is used as a metric, and specific constraints are proposed for the various models. With this approach, the dual problem is shown to be much easier to solve than the primal problem. In particular, when there are many constraints, we propose the application of a projection algorithm for solving these types of problems. We also consider re-framing the problem and utilizing a decomposition algorithm to solve for parameters. Both of these methods will be compared to the Rasch and 2-parameter logistic models using established computer software, where estimation of model parameters is done under the maximum likelihood estimation framework. We will also compare the appropriateness of these techniques on multidimensional item response data sets and propose new models with the use of I-projections.
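The dual/I-projection idea can be illustrated on a toy problem: the I-projection of a base distribution q onto a set of moment constraints has the exponential-tilting form p* ∝ q · exp(λ·f), and λ is found by minimizing a smooth convex dual, which is typically easier than the constrained primal. The sketch below is generic; it does not reproduce the dissertation's specific constraint sets for the Rasch or 2PL models.

```python
import numpy as np
from scipy.optimize import minimize

def i_projection(q, F, c):
    """I-projection of base distribution q onto {p : F @ p = c}, solved via the dual.

    q : base probabilities over a finite support (length m)
    F : constraint functions evaluated on the support, shape (k, m)
    c : target expectations, shape (k,)
    The projection has the exponential-family form p* proportional to q * exp(lambda @ F).
    """
    q, F, c = np.asarray(q, float), np.asarray(F, float), np.asarray(c, float)

    def dual(lam):
        # log-partition of the tilted distribution minus lambda . c (convex in lambda)
        w = np.log(q) + lam @ F
        return np.log(np.sum(np.exp(w))) - lam @ c

    res = minimize(dual, np.zeros(len(c)), method="BFGS")
    w = np.log(q) + res.x @ F
    p = np.exp(w - w.max())
    return p / p.sum()

# Toy example: project a uniform distribution on {0,...,5} onto distributions with mean 2.0.
support = np.arange(6)
q = np.full(6, 1 / 6)
p_star = i_projection(q, support[None, :], np.array([2.0]))
print(np.round(p_star, 3), round(float(support @ p_star), 3))  # the mean constraint is satisfied
```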