  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Constructing an Estimate of Academic Capitalism and Explaining Faculty Differences through Multilevel Analysis

Kniola, David J. 24 November 2009 (has links)
Two broad influences have converged to shape a new environment in which universities must now compete and operate. Shrinking financial resources and a global economy have arguably compelled universities to adapt. The concept of academic capitalism helps explain the new realities and places universities in the context of a global, knowledge-based economy (Slaughter & Leslie, 1997). Prior to this theory, the role of universities in the knowledge economy was largely undocumented. Academic capitalism is a measurable concept defined by the mechanisms and behaviors of universities that seek to generate new sources of revenue and are best revealed through faculty work. This study was designed to create empirical evidence of academic capitalism through the behaviors of faculty members at research universities. Using a large-scale, national database, the researcher created a new measure—an estimate of academic capitalism—at the individual faculty member level and then used multi-level analysis to explain variation among these individual faculty members. This study will increase our understanding of the changing nature of faculty work, will lead to future studies on academic capitalism that involve longitudinal analysis and important sub-populations, and will likely influence institutional and public policy. / Ph. D.
102

Latent trait, factor, and number endorsed scoring of polychotomous and dichotomous responses to the Common Metric Questionnaire

Becker, R. Lance 28 July 2008 (has links)
Although job analysis is basic to almost all human resource functions, little attention has been given to the response format and scoring strategy of job analysis instruments. This study investigated three approaches to scoring polychotomous and dichotomous responses from the frequency and importance scales of the Common Metric Questionnaire (CMQ). Factor, latent trait, and number-endorsed scores were estimated from the responses of 2,684 job incumbents in six organizations. Scores from four of the CMQ scales were used in linear and nonlinear multiple regression equations to predict pay. The results demonstrated that: (a) simple number-endorsed scoring of dichotomous responses was superior to the other scoring strategies; (b) scoring of dichotomous responses was superior to scoring of polychotomous responses for each scoring technique; (c) scores estimated from the importance scale were better predictors of pay than scores from the frequency scale; (d) the relationship between latent trait and factor scores is nonlinear; (e) latent trait scores estimated with the two-parameter logistic model were superior to latent trait scores from the three-parameter model; (f) test information functions for each scale demonstrated that the CMQ scales accurately measured a relatively narrow range of theta; (g) the reliability of factor scores estimated from dichotomous data is superior to that of factor scores from polychotomous data. Issues regarding the construction of job analysis instruments and the use of item response theory are discussed. / Ph. D.
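The two- and three-parameter logistic models compared in point (e), and the number-endorsed scoring in point (a), can be sketched as follows. This is a minimal illustration rather than the study's actual estimation code, and the example parameter values are hypothetical.

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic model: probability of endorsing an item
    with discrimination a and difficulty b at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: adds a lower asymptote c
    (a pseudo-guessing parameter) to the 2PL curve."""
    return c + (1.0 - c) * p_2pl(theta, a, b)

def number_endorsed(responses):
    """Simple number-endorsed score: the count of endorsed (1) responses."""
    return sum(responses)

# For a hypothetical item with a = 1.2 and b = 0.0, the 2PL probability
# at theta = b is exactly 0.5, while the 3PL probability is c + (1 - c)/2.
```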
103

Gender and Ethnicity-Based Differential Item Functioning on the Myers-Briggs Type Indicator

Gratias, Melissa B. 07 May 1997 (has links)
Item Response Theory (IRT) methodologies were employed to examine the Myers-Briggs Type Indicator (MBTI) for differential item functioning (DIF) on the basis of crossed gender and ethnicity variables. White males were the reference group, and the focal groups were black females, black males, and white females. The MBTI was predicted to show DIF in all comparisons; in particular, DIF on the Thinking-Feeling scale was hypothesized, especially in the comparisons between white males and black females and between white males and white females. A sample of 10,775 managers who took the MBTI at assessment centers provided the data for the present study. The Mantel-Haenszel procedure and an IRT-based area technique were the methods of DIF detection. Results showed several biased items on all scales for all comparisons. Ethnicity-based bias was seen in the white male vs. black female and white male vs. black male comparisons. Gender-based bias was seen particularly in the white male vs. white female comparisons. However, the Thinking-Feeling scale showed the least DIF of all scales across comparisons, and only one of the items differentially scored by gender was found to be biased. Findings indicate that the gender-based differential scoring system is not defensible in managerial samples and that there is a need for further research into differential item functioning with regard to ethnicity. / Master of Science
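The Mantel-Haenszel procedure named above pools 2x2 (group by correct/incorrect) tables across ability strata into a common odds ratio. A minimal sketch, with made-up table counts; the ETS delta transform shown is a standard rescaling, not something specific to this study.

```python
import math

def mh_common_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across ability strata.
    Each table is (A, B, C, D):
      A = reference group correct,   B = reference group incorrect,
      C = focal group correct,       D = focal group incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def ets_delta(odds_ratio):
    """ETS delta scale: values near 0 suggest negligible DIF."""
    return -2.35 * math.log(odds_ratio)

# Hypothetical strata in which both groups perform identically,
# so the common odds ratio is 1 and delta is 0.
tables = [(10, 10, 10, 10), (20, 5, 20, 5)]
```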
104

Item response theory

Inman, Robin F. 01 January 2001 (has links)
This study was performed to show the advantages of Item Response Theory (IRT) over Classical Test Theory (CTT). Item Response Theory is a complex theory with many applications; this study used one of them, test analysis. Ten items from a social psychology midterm were analyzed to show how IRT is more accurate than CTT, because IRT makes it possible to add and delete individual items. IRT also features the Item Characteristic Curve (ICC), which gives an easy-to-read interpretation of the results. The results showed the levels of the three indexes (item discrimination, difficulty, and guessing) and indicated where each item was weak or strong. With this information, suggestions can be made to improve an item and ultimately improve the measurement accuracy of the entire test; Classical Test Theory cannot do this on an individual-item basis without changing the accuracy of the entire test. The results of this study confirm that IRT can be used to analyze individual items and allow for their improvement or revision, meaning IRT can be used for test analysis more efficiently and accurately than CTT. This study provides an introduction to Item Response Theory in the hope that more research will establish IRT as a commonly used tool for improving test measurement.
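For contrast with the IRT indexes above, the classical (CTT) item statistics can be computed directly from a response matrix. A sketch with a tiny hypothetical dataset, showing proportion-correct difficulty and point-biserial discrimination; the data values are invented.

```python
from statistics import mean, pstdev
from math import sqrt

def item_difficulty(item_scores):
    """Classical difficulty: proportion of examinees answering correctly."""
    return mean(item_scores)

def point_biserial(item_scores, total_scores):
    """Classical discrimination: correlation between a dichotomous item
    and the total test score."""
    p = mean(item_scores)
    m1 = mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    m0 = mean(t for i, t in zip(item_scores, total_scores) if i == 0)
    return (m1 - m0) / pstdev(total_scores) * sqrt(p * (1.0 - p))

# Hypothetical 4-person, 3-item response matrix (rows are examinees).
matrix = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]]
totals = [sum(row) for row in matrix]
first_item = [row[0] for row in matrix]
```

Note that these statistics depend on the particular sample and test, which is exactly the sample-dependence the abstract contrasts with IRT's item-level invariance.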
105

A generalized partial credit FACETS model for investigating order effects in self-report personality data

Hayes, Heather 05 July 2012 (has links)
Despite its convenience, the process of self-report in personality testing can be affected by a variety of cognitive and perceptual biases. One bias that violates local independence, a core assumption of modern test theory, is the order effect: characteristics of an item response are shaped not only by the content of the current item but also by accumulated exposure to previous, similar-content items. This bias manifests as increasingly stable responses to items that appear later in a test. Previous investigations of the effect have been rooted in classical test theory (CTT) and have consistently found that item reliabilities, or corrected item-total correlations, increase with an item's serial position in the test. The purpose of the current study was to examine order effects more rigorously via item response theory (IRT). To this end, the FACETS modeling approach (Linacre, 1989) was combined with the Generalized Partial Credit Model (GPCM; Muraki, 1992) to produce a new model, the Generalized Partial Credit FACETS model (GPCFM). An item's serial position serves as a facet that contributes to the item response via its impact not only on the item's location on the latent trait continuum but also on its discrimination. Thus, the GPCFM differs from previous generalizations of the FACETS model (Wang & Liu, 2007) in that the item discrimination parameter is modified to include a serial position effect. This parameter is important because it reflects the extent to which the purported underlying trait is represented in an item score. Two sets of analyses were conducted. First, a simulation study demonstrated effective parameter recovery, though estimation error was affected by sample size for all parameters, by test length and the size of the order effect for trait-level estimates, and by an interaction between sample size and test length for item discrimination. 
Second, with real self-report personality data, the GPCFM demonstrated good fit, as well as superior fit relative to competing nested models, while also identifying order effects for some traits, particularly Neuroticism, Openness, and Agreeableness.
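The Generalized Partial Credit Model at the core of the GPCFM assigns category probabilities from cumulative discrimination-weighted step terms. A minimal sketch, without the FACETS serial-position facet and with hypothetical parameters:

```python
import math

def gpcm_probabilities(theta, a, steps):
    """Category probabilities under the Generalized Partial Credit Model.
    steps: step (threshold) parameters d_1..d_m for categories 1..m.
    Returns probabilities for categories 0..m at trait level theta."""
    cumulative = [0.0]
    for d in steps:
        cumulative.append(cumulative[-1] + a * (theta - d))
    numerators = [math.exp(z) for z in cumulative]
    total = sum(numerators)
    return [n / total for n in numerators]
```

In the GPCFM described above, the discrimination a would additionally depend on the item's serial position; that extension is omitted here.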
106

Using Posterior Predictive Checking of Item Response Theory Models to Study Invariance Violations

Xin, Xin 05 1900 (has links)
The common practice for testing measurement invariance is to constrain parameters to be equal across groups and then evaluate model-data fit to reject, or fail to reject, the restrictive model. Posterior predictive checking (PPC) provides an alternative approach to evaluating model-data discrepancy. This paper explores the utility of PPC in assessing measurement invariance. The simulation results show that the posterior predictive p (PP p) values of item parameter estimates respond to various invariance violations, whereas the PP p values of an item-fit index may fail to detect such violations. The paper suggests comparing group estimates and restrictive-model estimates against posterior predictive distributions in order to display the pattern of misfit graphically.
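A posterior predictive check of this kind can be sketched generically: for each posterior draw, simulate a replicated data set and compare a discrepancy statistic to the observed one; the PP p-value is the share of replications at or beyond the observed value. The function names and toy inputs below are illustrative, not from the paper.

```python
def posterior_predictive_p(observed_stat, posterior_draws, simulate, statistic):
    """PP p-value: proportion of posterior draws whose replicated
    statistic is at least as extreme as the observed statistic.
    Values near 0 or 1 flag model-data discrepancy."""
    exceed = sum(
        1 for params in posterior_draws
        if statistic(simulate(params)) >= observed_stat
    )
    return exceed / len(posterior_draws)

# Toy check with a deterministic "simulator": each draw replicates itself,
# so the p-value is just the share of draws at or above the observed value.
draws = [1.0, 2.0, 3.0, 4.0]
p = posterior_predictive_p(2.5, draws, simulate=lambda d: [d], statistic=sum)
```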
107

Modelagem para construção de escalas avaliativas e classificatórias em exames seletivos utilizando teoria da resposta ao item uni e multidimensional / Modeling for the construction of evaluative and classificatory scales in selective exams using uni- and multidimensional item response theory

Quaresma, Edilan de Sant'Ana 28 May 2014 (has links)
O uso de provas elaboradas na forma de itens, em processos de avaliação para classificação, é uma herança histórica dos séculos XVI e XVII, ainda em uso nos dias atuais tanto na educação formal quanto em processos seletivos, a exemplo dos exames vestibulares. Elaboradas para mensurar conhecimentos, traços latentes que não podem ser medidos diretamente, as provas costumam ser corrigidas considerando unicamente o escore obtido pelo sujeito avaliado, sem contemplar informações importantes relacionadas aos itens das mesmas. O presente trabalho teve como objetivos: (i) utilizar a modelagem baseada na teoria da resposta ao item unidimensional - TRI e multidimensional - TRIM para construir escalas do conhecimento para a prova da FUVEST e (ii) classificar os candidatos aos seis cursos de graduação oferecidos pela Escola Superior de Agricultura "Luiz de Queiroz", unidade da Universidade de São Paulo, com base na escala construída. A hipótese embutida no corpo do trabalho admitiu que o uso da TRIM classifica de forma diferente os candidatos que os atuais métodos utilizados pela FUVEST. Foram utilizados os padrões de respostas dos 2326 candidatos submetidos à prova, para que uma análise unidimensional fosse realizada, sob o enfoque da TRI, gerando uma escala de proficiências. Quatro traços latentes foram diagnosticados no processo avaliativo, por meio da modelagem multidimensional da TRIM, gerando uma escala das quatro dimensões. Uma proposta para classificação dos candidatos é apresentada, baseada na média das proficiências individuais ponderada pelas cargas fatoriais diagnosticadas pela modelagem. Análise comparativa entre os critérios de classificação utilizados pela FUVEST e pela TRIM foi realizada, identificando discordância entre os mesmos. 
O trabalho apresenta propostas de interpretação pedagógica para as escalas unidimensional e multidimensional e indica a TRIM como critério complementar para classificação dos candidatos, valorizando informações individuais dos itens e, portanto, utilizando uma avaliação classificatória mais abrangente. / The use of exams composed of items in evaluation processes for classification is a historical legacy of the 16th and 17th centuries, still in use today both in formal education and in selection processes such as university entrance examinations. Designed to measure knowledge, a latent trait that cannot be measured directly, such exams are usually scored considering only the total score obtained by the examinee, without taking into account important information related to the individual items. This study aimed to: (i) use unidimensional and multidimensional item response theory (IRT and MIRT, respectively) modeling to build knowledge scales for the FUVEST/2012 entrance examination, and (ii) classify candidates for the six undergraduate courses offered by the "Luiz de Queiroz" College of Agriculture, a unit of the University of São Paulo, based on the constructed scale. The hypothesis embedded in the study was that MIRT ranks candidates differently from the methods currently used by FUVEST. The response patterns of the 2,326 candidates who took the exam were used to perform a unidimensional analysis under the IRT approach, generating a proficiency scale. Four latent traits were identified in the evaluation process by means of MIRT modeling, generating a scale for the four dimensions. A proposal for ranking the candidates is presented, based on the average of the individual proficiencies weighted by the factor loadings obtained from the modeling. A comparative analysis of the classification criteria used by FUVEST and by MIRT was performed, identifying discrepancies between them. 
This work presents proposals for pedagogical interpretation of the unidimensional and multidimensional scales and indicates MIRT as a complementary criterion for ranking candidates, making use of item-level information and thus providing a more comprehensive classificatory assessment.
109

Influence of Item Response Theory and Type of Judge on a Standard Set Using the Iterative Angoff Standard Setting Method

Hamberlin, Melanie Kidd 08 1900 (has links)
The purpose of this investigation was to determine the influence of item response theory and of different types of judges on a standard. The iterative Angoff standard setting method was employed by all judges to determine a cut-off score for a public school district-wide criterion-referenced test. The analysis of variance of the effect of judge type and standard setting method on the central tendency of the standard revealed an ordinal interaction between judge type and method: without any knowledge of p-values, one judge group set an unrealistic standard. A significant disordinal interaction was found for the effect of judge type and standard setting method on the variance of the standard. A positive covariance was detected between judges' minimum pass level estimates and empirical item information. With both p-values and b-values, judge groups had mean minimum pass levels that were positively correlated (ranging from .77 to .86), regardless of the type of information given to the judges. No differences in correlations were detected between judge types or between methods. The generalizability coefficients and phi indices for the 12 judges included in any method or judge type were acceptable (ranging from .77 to .99), and the generalizability coefficient and phi index for all 24 judges were quite high (.99 and .96, respectively).
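In an Angoff procedure like the one studied, each judge estimates, item by item, the probability that a minimally competent examinee would answer correctly; a common way to form the cut score is the mean over judges of their summed estimates. A minimal sketch with invented ratings:

```python
def angoff_cut_score(ratings):
    """ratings[j][i]: judge j's minimum pass level for item i, i.e. the
    estimated probability that a minimally competent examinee answers
    item i correctly. Returns the mean over judges of their summed
    item estimates, on the raw-score scale of the test."""
    per_judge_totals = [sum(judge) for judge in ratings]
    return sum(per_judge_totals) / len(per_judge_totals)

# Two hypothetical judges rating a four-item test.
ratings = [
    [0.6, 0.5, 0.8, 0.7],  # judge 1: total 2.6
    [0.5, 0.4, 0.9, 0.6],  # judge 2: total 2.4
]
```

In the iterative variant, judges would revisit and revise these estimates over rounds, often after seeing information such as item p-values.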
110

Informatikos pagrindų konceptualizavimas naudojant uždavinius / Conceptualisation of informatics fundamentals through tasks

Daukšaitė, Gabrielė 01 July 2014 (has links)
Magistro darbe tyrinėjama, kaip Lietuvos ir kai kurių užsienio valstybių bendrojo lavinimo mokyklose yra mokoma informatikos, aiškinamasi, koks požiūris į šią mokomąja discipliną, kurie veiksniai tai įtakoja. Tyrimui pasirinktas įdomesnis kelias – naudojamasi informatikos ir kompiuterinio lavinimosi varžybomis „Bebras“, kurios vyksta daugiau kaip dešimtyje valstybių. Palyginti 2008–2010 metais Lietuvoje vykusių „Bebro“ varžybų užduočių rinkiniai pagal įvairius informatikos konceptus. Pasinaudojus 2010 metais Lietuvos „Bebro“ varžybose dalyvavusių mokinių rezultatų duomenimis bei pritaikius atitinkamus matematinius užduočių vertinimo modelius, buvo įvertinta užduočių rinkinio informacinė funkcija, kuri leidžia parinkti tinkamiausias užduotis atitinkamam mokinių žinių lygiui. Mokinių informatikos žinių lygis neatsiejamas nuo informatikos pagrindų, kurie formuojasi laikui bėgant, kai mokinys gauna tinkamą informaciją ne tik per informatikos ar informacinių technologijų pamokas, bet ir kai mokytojai informacines ir komunikacines priemones taiko per kitų dalykų pamokas. Darbe apskaičiuoti užduočių sunkumo koeficientai, kurie palyginti su užduočių sunkumo lygiais, kuriuos priskyrė uždavinių sudarytojai ar vertintojai. Taip pat nustatyti užduočių skiriamosios gebos indeksai, kurie nustato, kiek gerai užduotis atskiria geresnius mokinių darbus nuo blogesnių tikrinamo dalyko atžvilgiu. Tyrimo rezultatai svarbūs tiek mokytojams, kurie turi įtakos mokinių informatikos pagrindų... [toliau žr. visą tekstą] / This master's thesis reviews how computer science is taught in the compulsory schools of Lithuania and several other countries, and examines attitudes toward the discipline and the factors that influence them. Data from the "Beaver" informatics contest, which is organized in more than ten countries, were chosen as a more attractive way to carry out the study. Comparisons of the task sets of the Lithuanian "Beaver" competitions of 2008-2010 according to informatics concepts are presented. 
Using the results of pupils who participated in the 2010 Lithuanian "Beaver" competition, the information function of the task set was assessed; the information function allows the most suitable tasks to be chosen for a given level of pupil ability. Pupils' level of computer science knowledge is inseparable from the fundamentals of informatics, which form over time as pupils receive appropriate information not only in computer science or information technology lessons but also when teachers apply information and communication technologies in other subjects. Task difficulty parameters are calculated and compared with the difficulty levels assigned by the task authors and evaluators, and task discrimination indices, which describe how well an item differentiates between examinees with abilities below the item location and those with abilities above it, are determined. The results of this study are important for teachers, who influence the formation of pupils' informatics fundamentals, as well as for the experts who create the competition tasks, because the right and purposeful introduction to computer... [to full text]
