About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Evaluation of RELATE Using Rasch Analysis

Yoshida, Keitaro 30 November 2010 (has links) (PDF)
The importance of valid and reliable couple assessment has grown alongside research on couple and family relationships and therapeutic and educational interventions for couples and families. However, self-report instruments, the most popular type of couple assessment, have been criticized at least partly because of limitations of Classical Test Theory (CTT), which for decades was the sole framework used to develop and evaluate couple assessments. To address the limitations of relying solely on CTT, the present study applied a modern test theory, Item Response Theory (IRT), to evaluate the properties of subscales in the RELATionship Evaluation (RELATE) using existing data from 4,784 participants. Using the Rasch rating scale and partial credit models, members of the IRT family, the author demonstrated that some RELATE subscales contained items and response categories that functioned suboptimally or unexpectedly. The results suggested that some items misfit the model or overlapped with other items, that many scales did not cover the full range of the measured construct, and that response categories for many items malfunctioned. The author recommends possible remedies for improving the function of individual scales and items.
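The Rasch rating scale model this abstract applies assigns each respondent a probability of choosing each response category from a person parameter, an item location, and a set of category thresholds shared across items. A minimal Python sketch of those category probabilities (function and parameter names are illustrative, not taken from the study):

```python
from math import exp

def rating_scale_probs(theta, b, taus):
    """Category probabilities under the Rasch rating scale model.

    theta: person ability; b: item location; taus: threshold parameters
    for categories 1..m (category 0 carries no threshold).
    Returns P(X = k) for k = 0..m.
    """
    # Cumulative sums of (theta - b - tau_j) give the category numerators.
    logits = [0.0]  # exponent for category 0
    for tau in taus:
        logits.append(logits[-1] + (theta - b - tau))
    denom = sum(exp(l) for l in logits)
    return [exp(l) / denom for l in logits]
```

Higher ability shifts probability mass toward the higher categories, which is exactly the behavior the category-functioning diagnostics in the study check for.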
472

Multidimensional Item Response Theory in Clinical Measurement: A Bifactor Graded-Response Model Analysis of the Outcome-Questionnaire-45.2

Berkeljon, Arjan 22 May 2012 (has links) (PDF)
Bifactor Item Response Theory (IRT) models are presented as a plausible structure for psychological measures with a primary scale and two or more subscales. A bifactor graded response model, appropriate for polytomous categorical data, was fit to two university counseling center datasets (N = 4,679 and N = 4,500) of Outcome-Questionnaire-45.2 (OQ) psychotherapy intake data. The bifactor model showed superior fit compared to a unidimensional IRT model. Item parameters derived from the bifactor model show that items discriminate well on the primary scale, and items on the OQ's subscales retain some discriminating ability over and above it. However, reliability estimates for the subscales, controlling for the primary scale, suggest that clinical use should likely proceed with caution. Item difficulty (severity) parameters reflected item content well: items tapping severe symptomatology were endorsed with high probability only at high levels of distress, while items tapping milder symptomatology were endorsed at lower levels. Analysis of measurement invariance showed that item parameters hold equally across gender for most OQ items, though a subset of items was non-invariant across gender. Implications for research and practice are discussed, and directions for future work are given.
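In a bifactor graded response model of this kind, every item loads on the general (primary) factor and on one specific subscale factor, and ordered category probabilities come from differences of adjacent cumulative logistic curves. A hedged sketch of one item's category probabilities (the parameterization is assumed, not taken from the OQ analysis):

```python
from math import exp

def logistic(x):
    return 1.0 / (1.0 + exp(-x))

def bifactor_grm_probs(theta_g, theta_s, a_g, a_s, b_thresholds):
    """Category probabilities for one polytomous item under a bifactor
    graded response model: the logit combines a general-factor score and
    one specific (subscale) factor score.

    b_thresholds must be increasing; returns P(X = k) for k = 0..m.
    """
    # P(X >= k) for k = 1..m, bracketed by the boundary values 1 and 0.
    cum = [1.0] + [logistic(a_g * theta_g + a_s * theta_s - b)
                   for b in b_thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b_thresholds) + 1)]
```

The subscale's added value, as the abstract notes, lives in `a_s`: if specific loadings shrink toward zero after controlling for the general factor, subscale scores carry little extra information.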
473

A sentiment analysis approach to manage the new item problem of Slope One / En ansats att använda attitydsanalys för att hantera problemet med nya föremål i Slope one

Johansson, Jonas, Runnman, Kenneth January 2017 (has links)
This report targets a specific problem for recommender algorithms, the new item problem, and proposes a method with sentiment analysis as the main tool. Collaborative filtering algorithms base their predictions on a database of users and their ratings of items. The new item problem occurs when a new item is introduced into the database: because the item has no ratings, it remains unavailable as a recommendation until it has gathered some. Products that users rate in online communities often have expert reviewers who get access to them before the consumer release date, and recommender systems can take advantage of this by using the experts as initial guides for predictions. The method used in this report relies on sentiment analysis to translate written expert reviews into ratings based on the sentiment of the text, so that a new item is added together with the ratings of experts in the field. The results of this study show that the recommender algorithm Slope One generates more reliable recommendations for a newly added item with a group of expert users than without. The added expert users must have ratings for other items as well as for the new item for the recommendations to be accurate. / This report studies the impact of the new item problem on the recommendation algorithm Slope One and proposes a method for solving it. The problem arises when a new item is added to a database, since no ratings have yet been given to the item or product. Because recommendation algorithms such as Slope One base their recommendations on the relations between users' ratings of movies, accuracy is low when recommending a movie with few ratings. The proposed method uses sentiment analysis as the main tool for obtaining information that can substitute for the actual ratings users give a product. When products can be rated by users in various online forums, there are often experts who get access to the product before its public release; the information these experts have can fill the information gap that exists when a new item is added. These experts initially serve as a guide for the recommender system: a new item is added together with expert ratings to obtain more accurate recommendations. The results of this study show that Slope One generates more accurate recommendations when a new product is added to the database with ratings generated through sentiment analysis of experts' written reviews. Notably, ratings from the expert users for the new item alone are not enough; the experts must also have rated other products in the same area in order to influence recommendations for the new product.
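The idea of seeding a new item with expert ratings drops straight into the standard weighted Slope One predictor: once expert rows exist in the ratings table, the new item has co-rating deviations to draw on. A minimal sketch (the data layout and names are illustrative, not the authors' code):

```python
def slope_one_predict(ratings, user, target):
    """Weighted Slope One prediction of `target` for `user`.

    ratings: {user: {item: rating}}. Expert reviews of a new item can be
    injected as ordinary rows in `ratings`, which is what lets the new
    item receive predictions at all.
    """
    num, den = 0.0, 0
    for other, r_u in ratings[user].items():
        if other == target:
            continue
        # Average deviation dev(target, other) over users rating both items.
        diffs = [r[target] - r[other]
                 for r in ratings.values()
                 if target in r and other in r]
        if diffs:
            dev = sum(diffs) / len(diffs)
            num += (dev + r_u) * len(diffs)  # weight by co-rating count
            den += len(diffs)
    return num / den if den else None
```

Note that the sketch reproduces the abstract's caveat: if the experts had rated only the new item, no `(target, other)` pair would have co-raters and the prediction would fall back to `None`.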
474

The Use of Item Response Theory in Developing a Phonics Diagnostic Inventory

Pirani-McGurl, Cynthia A. 01 May 2009 (has links)
This study investigated the reliability of the Phonics Diagnostic Inventory (PDI), a curriculum-based, specific-skill mastery measurement tool for diagnosing and informing the treatment of decoding weaknesses. First, a modified one-parameter item response theory model was used to identify the properties of candidate items for each subtest, which then informed the construction of subtests from the most reliable items. Second, the properties of each subtest were estimated and examined; the test information and test characteristic curves (TCCs) for the newly developed forms are reported. Finally, the accuracy and sensitivity of the PDI cut scores for each subtest were examined: based on established cut scores, the study investigated how accurately students were classified as in need of support or not. The PDI generated from this research diagnosed specific decoding deficits in mid-year second-grade students more reliably than the initially constructed forms. The research also indicates that further examination of the cut scores is warranted to maximize decision consistency. Implications for future studies are discussed.
475

An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

Liang, Tie 01 February 2010 (has links)
As item response theory (IRT) has developed and become widely applied, investigating the fit of a parametric model has become an important part of the measurement process. The usefulness of IRT applications relies heavily on the extent to which the model reflects the data, so model-data fit must be evaluated with sufficient evidence before any model application. Yet promising solutions for detecting model misfit in IRT are scarce, and commonly used fit statistics are unsatisfactory: they often lack desirable statistical properties and offer no means of examining the magnitude of misfit (e.g., via graphical inspection). This dissertation thoroughly studied a newly proposed nonparametric approach, RISE. Specifically, the purposes of the study were to (a) examine the fit procedure RISE, (b) compare its statistical properties with those of commonly used goodness-of-fit procedures, and (c) investigate how RISE may be used to examine the consequences of model misfit. Both a simulation study and an empirical study were conducted. The simulation study varied four factors that may influence the performance of a fit statistic: ability distribution, sample size, test length, and model. The results demonstrated that RISE outperformed G² and S-X² in that it controlled Type I error rates and provided adequate power under all conditions. In the empirical study, the three fit statistics were applied to one empirical dataset and the misfitting items were flagged. RISE and S-X² detected reasonable numbers of misfitting items, while G² flagged almost all items when the sample size was large. To further demonstrate an advantage of RISE, the residual plot for each misfitting item was shown: compared to G² and S-X², RISE gave a much clearer picture of the location and magnitude of misfit. Beyond statistical properties and graphical displays, the score distribution and test characteristic curve (TCC) were investigated as consequences of model misfit. For the given data, replacing the misfitting items detected by the three fit statistics had no practical consequence on classification.
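The core of RISE is a comparison between a nonparametric, kernel-smoothed estimate of an item's characteristic curve and the parametric curve, summarized as the root of the squared differences. A rough sketch of that idea, assuming a Gaussian kernel and evaluation at the examinees' ability estimates; the published procedure differs in details such as weighting and score grouping:

```python
from math import exp, sqrt

def rise_statistic(thetas, responses, model_icc, bandwidth=0.5):
    """RISE-style misfit index for one item: the root of the average
    squared gap between a kernel-smoothed empirical ICC and the
    parametric ICC, evaluated at the examinees' ability estimates.

    thetas: ability estimates; responses: item scores aligned with
    thetas; model_icc: callable giving the parametric P(correct | theta).
    """
    def smooth(t0):
        # Gaussian-kernel regression of the responses on ability.
        ws = [exp(-((t - t0) / bandwidth) ** 2 / 2) for t in thetas]
        return sum(w * x for w, x in zip(ws, responses)) / sum(ws)

    gaps = [(smooth(t) - model_icc(t)) ** 2 for t in thetas]
    return sqrt(sum(gaps) / len(gaps))
```

The same smoothed-versus-model gaps, plotted against theta, give the residual plots the study uses to show where along the ability scale an item misfits.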
476

Item Parameter Drift as an Indication of Differential Opportunity to Learn: An Exploration of Item Flagging Methods & Accurate Classification of Examinees

Sukin, Tia M. 01 September 2010 (has links)
The presence of outlying anchor items is an issue faced by many testing agencies. The decision to retain or remove an item is difficult, especially when removal calls the content representation of the anchor set into question. Moreover, the reason for the aberrancy is not always clear; if an item's performance has changed because instruction improved, removing the anchor item may be inappropriate and might produce misleading conclusions about examinee proficiency. This study was conducted in two parts, a simulation and an empirical data analysis, investigating the effect on examinee classification of removing or retaining aberrant anchor items. Three detection methods were explored: (1) the delta plot, (2) IRT b-parameter plots, and (3) the RPU method. The simulation study manipulated the degree of aberrancy and the examinee ability distribution and employed five aberrant-item schemes. The empirical analysis re-examined archived statewide science achievement data suspected of differential opportunity to learn between administrations, using the various item parameter drift detection methods. The results of both studies support removing flagged items from linking when a matrix-sampling design with a large anchor is used: classification accuracy increases when flagged items are removed, and growth is usually not misrepresented by doing so. Although neither the delta plot nor the IRT b-parameter plot method produced results that overwhelmingly support its use, it is recommended that both be employed in practice until further research establishes alternatives such as the RPU method.
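Of the three flagging methods, the delta plot is the most mechanical: item p-values from the two administrations are mapped onto Angoff's delta scale, a principal-axis line is fit to the paired deltas, and items far from that line are flagged as candidates for drift. A sketch under those assumptions (the flagging threshold itself is omitted):

```python
from math import sqrt
from statistics import NormalDist, mean

def delta(p):
    """Angoff delta transform of an item p-value (larger = harder)."""
    return 13.0 + 4.0 * NormalDist().inv_cdf(1.0 - p)

def delta_plot_distances(p_admin1, p_admin2):
    """Perpendicular distance of each item from the principal axis of
    the (delta at administration 1, delta at administration 2) scatter.
    Large distances flag candidate drifting items."""
    xs = [delta(p) for p in p_admin1]
    ys = [delta(p) for p in p_admin2]
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Principal-axis (major-axis) slope and intercept.
    b = (syy - sxx + sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return [abs(b * x - y + a) / sqrt(b * b + 1) for x, y in zip(xs, ys)]
```

An item whose difficulty changed between administrations, relative to the rest of the anchor, lands off the principal axis and gets a large distance.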
477

Measuring the perceptual and mnemonic effect of contextual information on individual item representation

Choi, Yong Min 27 October 2022 (has links)
No description available.
478

Count-Techniken: Eine Lösung für heikle Fragen? / Count Techniques: A Solution for Sensitive Questions?

Junkermann, Justus 25 November 2022 (has links)
Bias from socially desirable responding to sensitive questions is a major problem in empirical social research: in surveys it produces item and unit nonresponse as well as dishonest answers. This social desirability bias (SD bias) often causes socially desirable behaviors or opinions (e.g., voting, blood donation, volunteering) to be overreported and socially undesirable behaviors or attitudes (e.g., drug use, racism, theft) to be underreported. This dissertation uses a survey experiment to test whether the Item Count Technique (ICT) and related count techniques (e.g., the Person Count Technique and the Item Sum Technique) yield better results for sensitive questions than direct questioning. The experiment was run on an online access panel (n = 3,044) in which the same sensitive questions were asked with count techniques in the experimental group (n = 2,527) and asked directly in the control group (n = 517). Overall, the count techniques produced no better results than direct questioning.
Table of contents: 1 Problem statement (1.1 Effects of sensitive questions); 2 Dimensions of sensitive questions (2.1 Social desirability; 2.2 Invasion of privacy; 2.3 Risk of disclosure; 2.4 Psychological costs; 2.5 Definition); 3 Social desirability (3.1 Social desirability propensity; 3.2 Measuring social desirability: the Crowne-Marlowe scale and the Balanced Inventory of Desirable Responding (BIDR); 3.3 Social desirability belief and trait desirability); 4 Action theory and respondent behavior (4.1 SEU theory; 4.2 The model of frame selection); 5 External effects and sensitive questions (5.1 Interviewer effects; 5.2 Bystander effects; 5.3 Mode effects); 6 Classical solutions for sensitive questions (6.1 Bogus pipeline; 6.2 Sealed envelope; 6.3 Forgiving wording); 7 The Randomized Response Technique (7.1 Origin and basic principle after Warner (1965); 7.2 The "unrelated question" technique; 7.3 The "forced response" technique; 7.4 Takahasi's technique; 7.5 The "two-step procedure"; 7.6 Kuk's card method; 7.7 Multivariate analysis with the RRT; 7.8 Empirical tests of the RRT; 7.9 Action theory and the RRT; 7.10 General problems of the RRT); 8 The crosswise model (8.1 Origin and basic principle; 8.2 Empirical test with meta-analysis); 9 The Item Count Technique (9.1 Problems and question design; 9.2 Action theory and the ICT; 9.3 A brief history of the ICT; 9.4 The double list design; 9.5 The Person Count Technique; 9.6 The Fixed and Variable Person Count Techniques; 9.7 The Item Sum Technique; 9.8 The Person Sum Technique; 9.9 Multivariate analysis of ICT data; 9.10 Previous empirical tests of the ICT with meta-study); 10 Empirical test of the count techniques (hypotheses, design, data collection, operationalization, and separate tests of the Item Count, Person Count, Fixed and Variable Person Count, Item Sum, and Person Sum Techniques); 11 Testing theories of respondent behavior (SEU theory, the model of frame selection, and the action-theoretic account of the count techniques); 12 Conclusion; 13 Appendix (descriptive statistics: BIDR16, SD belief, survey attitude); References
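The basic estimator behind the item count technique compared here is a difference in means: the treatment group's list contains one extra, sensitive item, so the gap between mean reported counts estimates the prevalence of the sensitive trait. A minimal sketch (names illustrative, not from the dissertation):

```python
from statistics import mean

def ict_prevalence(control_counts, treatment_counts):
    """Item count (list experiment) estimator.

    control_counts: counts of endorsed items from the short list;
    treatment_counts: counts from the list with the sensitive item added.
    Respondents never reveal which items they endorsed, only how many,
    yet the mean difference estimates the sensitive item's prevalence.
    """
    return mean(treatment_counts) - mean(control_counts)
```

The privacy protection comes at the cost of higher variance than direct questioning, which is one reason count techniques can fail to beat direct questions in practice.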
479

Bayesian Model Checking Strategies for Dichotomous Item Response Theory Models

Toribio, Sherwin G. 16 June 2006 (has links)
No description available.
480

A Framework for Psychometric Analysis of Student Performance Across Time: An Illustration with National Educational Longitudinal Study Data

Hart, Raymond C., Jr 04 May 2007 (has links)
No description available.
