  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Comparison of Methods for Detecting Differential Item Functioning: An Examination Using BILOG-MG and SIBTEST (特異項目機能検出方法の比較 : BILOG-MGとSIBTESTを用いた検討)

KUMAGAI, Ryuichi (熊谷, 龍一); WAKITA, Takafumi (脇田, 貴文) 25 December 2003
Uses content digitized by the National Institute of Informatics.
2

Analytic Selection of a Valid Subtest for DIF Analysis when DIF has Multiple Potential Causes among Multiple Groups

January 2014
The study examined how ATFIND, Mantel-Haenszel, SIBTEST, and Crossing SIBTEST function when items in the dataset are modelled to differentially advantage a lower-ability focal group over a higher-ability reference group. The primary purpose of the study was to examine ATFIND's usefulness as a valid subtest selection tool, but it also explored the influence of DIF items, item difficulty, and the presence of multiple examinee populations with different ability distributions, both on its selection of the assessment test (AT) and partitioning test (PT) lists and on all three differential item functioning (DIF) analysis procedures. The results of SIBTEST were also combined with those of Crossing SIBTEST, as might be done in practice. ATFIND was found to be a less-than-effective matching subtest selection tool with DIF items that are modelled unidimensionally. If an item was modelled with uniform DIF, or if it had a referent difficulty parameter in the medium range, it was selected slightly more often for the AT list than the PT list; these trends increased as sample size increased. All three DIF analyses, and the combined SIBTEST and Crossing SIBTEST, generally performed worse as DIF contaminated the matching subtest, as well as when DIF was modelled less severely or when the focal group ability distribution was skewed. While the combined SIBTEST and Crossing SIBTEST procedure had the highest power among the DIF analyses, its Type I error rates were sometimes extremely high. (Doctoral dissertation, Educational Psychology, 2014)
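As a rough illustration of the SIBTEST statistic these procedures build on, the sketch below computes a simplified beta-hat: a focal-group-weighted difference in studied-item performance between reference and focal examinees matched on a valid-subtest score. The data are synthetic, the function name is invented for this example, and the regression correction applied by the full SIBTEST procedure is omitted:

```python
import numpy as np

def simple_sibtest_beta(match_ref, item_ref, match_foc, item_foc):
    """Simplified SIBTEST-style beta-hat: the focal-group-weighted
    difference in mean studied-item score between reference and focal
    examinees who are matched on their valid-subtest score.

    NOTE: illustration only -- this omits the regression correction
    that the full SIBTEST procedure applies within each score stratum.
    """
    beta = 0.0
    # Only compare strata that contain examinees from both groups.
    levels = np.intersect1d(np.unique(match_ref), np.unique(match_foc))
    n_foc = len(match_foc)
    for k in levels:
        ref_k = item_ref[match_ref == k]   # studied-item scores, reference group
        foc_k = item_foc[match_foc == k]   # studied-item scores, focal group
        weight = len(foc_k) / n_foc        # weight by focal-group stratum size
        beta += weight * (ref_k.mean() - foc_k.mean())
    return beta

# Toy data: a 0-3 matching score and a 0/1 studied item on which the
# reference group is 10 points more likely to answer correctly at every
# matching level, so beta-hat should come out positive (near 0.10).
rng = np.random.default_rng(0)
match_ref = rng.integers(0, 4, 2000)
match_foc = rng.integers(0, 4, 2000)
item_ref = (rng.random(2000) < 0.30 + 0.15 * match_ref).astype(int)
item_foc = (rng.random(2000) < 0.20 + 0.15 * match_foc).astype(int)
beta = simple_sibtest_beta(match_ref, item_ref, match_foc, item_foc)
```

A positive beta-hat indicates the item favours the reference group among matched examinees; Crossing SIBTEST extends this idea to differences that change sign across the ability range.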
3

The Impact of Multidimensionality on the Detection of Differential Bundle Functioning Using SIBTEST

Raiford-Ross, Terris 12 February 2008
In response to public concern over fairness in testing, conducting a differential item functioning (DIF) analysis is now standard practice for many large-scale testing programs (e.g., Scholastic Aptitude Test, intelligence tests, licensing exams). As highlighted by the Standards for Educational and Psychological Testing manual, the legal and ethical need to avoid bias when measuring examinee abilities is essential to fair testing practices (AERA-APA-NCME, 1999). Likewise, the development of statistical and substantive methods of investigating DIF is crucial to the goal of designing fair and valid educational and psychological tests. Douglas, Roussos and Stout (1996) introduced the concept of item bundle DIF and the implications of differential bundle functioning (DBF) for identifying the underlying causes of DIF. Since then, several studies have demonstrated DIF/DBF analyses within the framework of “unintended” multidimensionality (Oshima & Miller, 1992; Russell, 2005). Russell (2005), in particular, examined the effect of secondary traits on DBF/DTF detection. Like Russell, this study created item bundles by including multidimensional items on a simulated test designed, in theory, to be unidimensional. Simulating reference group members to have a higher mean ability than the focal group on the nuisance secondary dimension resulted in DIF for each of the multidimensional items, which, when examined together, produced differential bundle functioning. The purpose of this Monte Carlo simulation study was to assess the Type I error and power performance of SIBTEST (Simultaneous Item Bias Test; Shealy & Stout, 1993a) for DBF analysis under various conditions with simulated data. The variables of interest included sample size and the ratio of reference to focal group sample sizes, the correlation between primary and secondary dimensions, the magnitude of DIF/DBF, and angular item direction.
Results showed SIBTEST to be quite powerful in detecting DBF and controlling Type I error for almost all of the simulated conditions. Specifically, power rates were .80 or above for 84% of all conditions and the average Type I error rate was approximately .05. Furthermore, the combined effect of the studied variables on SIBTEST power and Type I error rates provided much needed information to guide further use of SIBTEST for identifying potential sources of differential item/bundle functioning.
4

Electric Girls and Mechanical Boys: On Group Differences in Tests - a Method Development and a Study of Differences Between Girls and Boys on National Tests in Physics (Elektriska flickor och mekaniska pojkar : Om gruppskillnader på prov - en metodutveckling och en studie av skillnader mellan flickor och pojkar på centrala prov i fysik)

Ramstedt, Kristian January 1996
This dissertation served two purposes. The first was to develop a method of detecting differential item functioning (DIF) within tests containing both dichotomously and polytomously scored items. The second was related to gender and aimed (a) to investigate whether the items that functioned differently for girls and boys showed any characteristic properties and, if so, (b) to determine whether these properties could be used to predict which items would be flagged for DIF. The method development was based on the Mantel-Haenszel (MH) method used for dichotomously scored items. By dichotomizing the polytomously scored items, both types of item could be compared on the same statistical level, as either solved or non-solved items. It was not possible to compare the internal score structures for the two gender groups; only overall score differences were detected. By modelling the empirical item characteristic curves, it was possible to develop an MH method for identifying nonuniform DIF. Both internal and external ability criteria were used. Total test score with no purification was used as the internal criterion; purification was not done for validity reasons, as no items were judged to be biased. Teacher-set marks were used as the external criterion. The marking scale had to be transformed for either boys or girls, since a comparison of scores for boys and girls with the same marks showed that boys always got higher mean scores. The results of the two MH analyses, based on the internal and external criteria, were compared with results from P-SIBTEST. All three methods corresponded well, although P-SIBTEST flagged considerably more items in favour of the reference group (boys), which exhibited a higher overall ability.
All 200 items included in the last 15 annual national tests in physics were analysed for DIF and classified by ten criteria. The most significant result was that items on electricity were, to a significantly higher degree, flagged as DIF in favour of girls, whilst items on mechanics were flagged in favour of boys. Items in other content areas showed no significant pattern. Multiple-choice items were flagged in favour of boys. Regardless of the degree of significance with which items from different content areas were flagged at the group level, it was not possible to predict which single item would be flagged for DIF; the most probable prediction was always that an item was neutral. Some possible interpretations of DIF as an effect of multidimensionality were discussed, as were some hypotheses about why boys did better in mechanics and girls in electricity.
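The Mantel-Haenszel approach that the method development above builds on can be sketched as follows. The stratified counts below are invented for illustration; the common odds-ratio pooling and the -2.35·ln(α) ETS delta transformation follow the standard formulation (Holland & Thayer, 1988):

```python
import math

def mantel_haenszel_ddif(tables):
    """Mantel-Haenszel common odds-ratio estimate and the ETS D-DIF
    transformation commonly used to flag DIF items.

    `tables` holds one (A, B, C, D) count tuple per matching-score stratum:
      A = reference correct, B = reference incorrect,
      C = focal correct,     D = focal incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    alpha = num / den               # alpha > 1: item favours the reference group
    ddif = -2.35 * math.log(alpha)  # ETS delta scale; negative = DIF against the focal group
    return alpha, ddif

# Illustrative counts for three score strata in which the reference
# group outperforms the matched focal group on the studied item:
tables = [(40, 10, 30, 20), (60, 20, 45, 35), (80, 10, 70, 20)]
alpha, ddif = mantel_haenszel_ddif(tables)
```

Here the pooled odds ratio comes out above 1 and D-DIF is negative, i.e. the item favours the reference group among matched examinees; dichotomizing polytomous items, as in the method above, makes them eligible for exactly this kind of stratified 2x2 analysis.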
5

Differential Item Functioning Analysis of the Herrmann Brain Dominance Instrument

Lees, Jared Andrew 12 September 2007
Differential item functioning (DIF) is present when examinees who have the same level of a trait have different probabilities of correctly answering a test item intended to measure that trait (Shepard & Averill, 1981). The following study is a DIF analysis of the Herrmann Brain Dominance Instrument (HBDI), a preference profiling instrument developed by Herrmann International to help individuals identify their dominant preferences and classify their level of dominance into four preference quadrants. Examinees who completed the American English version of the instrument were classified as the reference group, and examinees who completed the International English version were classified as the focal group. Out of 105 items, 11 manifested a large amount of DIF and were flagged for further review. The POLYSIBTEST procedure was used to carry out the DIF analysis. POLYSIBTEST is an extension of the SIBTEST procedure, a conceptually simple method for analyzing DIF that uses a latent trait measure rather than an observed total score. The latent trait measure helps detect both uniform and nonuniform DIF, and the POLYSIBTEST procedure handles both dichotomous and polytomous items. Each of the four preference quadrants was analyzed separately to reduce incorrect findings resulting from ipsative scoring. The process used to complete the DIF analysis was documented so that additional language groups may be analyzed by Herrmann International.
