171

Developing and validating a school-based screening tool of Fundamental Movement Skills (FUNMOVES) using Rasch analysis

Eddy, Lucy, Preston, N., Mon-Williams, M., Bingham, Daniel, Atkinson, J.M.C., Ellingham-Khan, M., Otteslev, A., Hill, L.J.B. 22 February 2023 (has links)
Yes / A large proportion of children are not able to perform age-appropriate fundamental movement skills (FMS). It is therefore important to assess FMS so that children needing additional support can be identified in a timely fashion. There is great potential for universal screening of FMS in schools, but research has established that current assessment tools are not fit for purpose. The aim was to develop and validate the psychometric properties of an FMS assessment tool designed specifically to meet the demands of universal screening in schools. A working group consisting of academics from developmental psychology, public health and behavioural epidemiology developed an assessment tool (FUNMOVES) based on theory and prior evidence. Over three studies, 814 children aged 4 to 11 years were assessed in school using FUNMOVES. Rasch analysis was used to evaluate structural validity, and modifications were made to FUNMOVES activities after each study based on the Rasch results and implementation fidelity. The initial Rasch analysis found numerous psychometric problems, including multidimensionality, disordered thresholds, local dependency, and misfitting items. Study 2 showed a unidimensional measure with acceptable internal consistency and no local dependency, but one that did not fit the Rasch model: performance on a jumping task was misfitting, and there were disordered thresholds for the jumping, hopping and balance tasks. Study 3, once the jumping and hopping scoring had been modified, revealed a unidimensional assessment tool with good fit to the Rasch model and no further issues. The finalised version of FUNMOVES (after three iterations) meets standards for accurate measurement, is free, and can assess a whole class in under an hour using resources available in schools. FUNMOVES therefore has the potential to allow schools to screen FMS efficiently so that targeted support can be provided and disability barriers removed. / ESRC White Rose Doctoral Training Partnership Pathway Award (ES/P000745/1). ActEarly: a City Collaboratory approach to early promotion of good health and wellbeing, funded by the Medical Research Council (grant reference MR/S037527/). National Institute for Health Research Yorkshire and Humber ARC (reference: NIHR20016).
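For orientation, the dichotomous Rasch model against which item fit is judged in the abstract above can be sketched in standard notation (assumed here, not quoted from the paper) as

\[
P(X_{ni} = 1 \mid \theta_n, \delta_i) \;=\; \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
\]

where \(\theta_n\) is the child's ability and \(\delta_i\) the item difficulty. Polytomous activities such as those in FUNMOVES additionally carry threshold parameters between adjacent score categories; "disordered thresholds" refers to these estimated thresholds not increasing across the categories, which is what motivated the revisions to the jumping and hopping scoring.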
172

Development and validation of the vision-related dizziness questionnaire

Armstrong, Deborah, Alderson, Alison J., Davey, Christopher J., Elliott, David B. 29 May 2018 (has links)
Yes / Purpose: To develop and validate the first patient-reported outcome measure (PROM) to quantify vision-related dizziness. Dizziness is a common, multifactorial syndrome that reduces quality of life and is a major risk factor for falls, but the role of vision is not well understood. Methods: Potential domains and items were identified through a literature review and discussions with experts and patients to form a pilot PROM, which was completed by 335 patients with dizziness. Rasch analysis was used to determine which items had good enough psychometric properties to include in a final PROM, to check unidimensionality and differential item functioning, and to convert ordinal questionnaire data into continuous interval data. Validation of the final 25-item instrument was determined by its convergent validity, patient- and item-separation reliability, and unidimensionality using data from 223 patients, plus test–retest repeatability from 79 patients. Results: 120 items were originally identified and subsequently reduced to 46 to form a pilot PROM. Rasch analysis was then used to reduce the number of items to 25, producing the vision-related dizziness questionnaire, VRD-25. Two subscales, VRD-12-frequency and VRD-13-severity, were shown to be unidimensional with good psychometric properties. Convergent validity was shown by moderately good correlations with the Dizziness Handicap Inventory (r = 0.75) and good test–retest repeatability, with intra-class correlation coefficients of 0.88. Conclusion: VRD-25 is the only PROM developed to date to assess vision-related dizziness. It was developed using Rasch analysis and provides a PROM for this under-researched area and for clinical trials of interventions to reduce vision-related dizziness. / College of Optometrists (UK) research studentship.
173

A Rasch Rating Scale Analysis of the Brief Symptom Inventory

Roberts, Richard L. (Richard Lee) 08 1900 (has links)
This study presents a preliminary Rasch rating scale analysis of the Brief Symptom Inventory, examining its reliability and validity and drawing on the information provided by the latent trait psychometric model.
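For context, the rating scale model named in the title can be sketched as follows (standard notation assumed here, not taken from the thesis itself). For an item i scored in categories 0, 1, ..., m with a set of thresholds \(\tau_k\) common to all items,

\[
P(X_{ni} = x) \;=\; \frac{\exp\!\big(x(\theta_n - \delta_i) - \sum_{k=1}^{x}\tau_k\big)}{\sum_{j=0}^{m}\exp\!\big(j(\theta_n - \delta_i) - \sum_{k=1}^{j}\tau_k\big)}, \qquad x = 0, 1, \dots, m,
\]

where \(\theta_n\) is the person's trait level, \(\delta_i\) the item location, and the empty sum for x = 0 is taken as zero.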
174

The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments

Mokilane, Paul Moloantoa 05 1900 (has links)
This study empirically compares Classical Test Theory (CTT) and the Partial Credit Model (PCM) of Rasch, focusing on the invariance of item parameters. The invariance concept, which is a consequence of the principle of specific objectivity, was tested in both CTT and the PCM using the results of learners who wrote the National Senior Certificate (NSC) Mathematics examinations in 2010. The difficulty levels of the test items were estimated from independent samples of learners, and the same samples used to calibrate the item difficulties under the PCM were also used to calibrate them under CTT. The estimates were obtained using RUMM2030 for the PCM and SAS for CTT; both are statistical software packages. Analysis of variance (ANOVA) was used to compare the four different design groups of test takers, and where the ANOVA showed a significant difference between the group means, Tukey's groupings were used to establish where the difference came from. The findings were that the item difficulty estimates based on the CTT framework were not invariant across the different independent sample groups; overall, the CTT framework was unable to produce invariant item difficulty estimates. The PCM estimates were very stable: for most items there was no significant difference between the means of at least three design groups, and the group that deviated from the rest did not deviate by much. The item parameters of the group representative of the population (proportional allocation) and of the group in which the same number of learners (50) was taken from each performance category did not differ significantly for any item except item 6.6 in examination question paper 2. It is apparent that, for the item parameters to be invariant of the group of test takers in the PCM, the group must be heterogeneous and each performance category must be large enough for proper calibration of the item parameters. The highest values of the estimated item parameters in CTT were consistently found in the sample dominated by learners highly proficient in Mathematics ("bad"), and the lowest values in the design group dominated by less proficient learners; this phenomenon was not apparent in the Rasch model. / Mathematical Sciences / M.Sc. (Statistics)
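As a minimal illustration of the invariance question examined in this study — not the study's own data or code; the sample sizes, ability distributions and item difficulties below are assumptions — the following Python sketch simulates dichotomous responses under a Rasch-type model and shows how the CTT difficulty index (proportion correct per item) shifts between a more proficient and a less proficient design group, which is the sample dependence the study tests with ANOVA.

import numpy as np

rng = np.random.default_rng(0)

n_items = 8
difficulties = np.linspace(-2.0, 2.0, n_items)   # hypothetical item difficulties (logits)
able_group = rng.normal(1.0, 1.0, size=500)      # design group dominated by proficient learners
weak_group = rng.normal(-1.0, 1.0, size=500)     # design group dominated by less proficient learners

def simulate_scores(abilities, difficulties, rng):
    """Simulate dichotomous responses under a Rasch-type model P = 1 / (1 + exp(-(theta - b)))."""
    logits = abilities[:, None] - difficulties[None, :]
    p_correct = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(p_correct.shape) < p_correct).astype(int)

scores_able = simulate_scores(able_group, difficulties, rng)
scores_weak = simulate_scores(weak_group, difficulties, rng)

# CTT difficulty index: proportion correct per item. It shifts with the sample,
# which is the lack of invariance tested across design groups with ANOVA; the
# Rasch/PCM calibration (done in RUMM2030 in the study) is the invariant counterpart.
print("CTT p-values, proficient group:", scores_able.mean(axis=0).round(2))
print("CTT p-values, less proficient group:", scores_weak.mean(axis=0).round(2))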
175

Transposição da Teoria da Resposta ao Item: uma abordagem pedagógica / Transposition of Item Response Theory: a pedagogical approach

Silva, Eder Alencar 23 June 2017 (has links)
This work aims to present Item Response Theory (IRT), through a pedagogical approach, to basic-education teachers, who identified this need in research conducted by the author. Bringing part of the theoretical knowledge underlying the theory to teachers — especially the construction of the curve giving the probability of a correct response to an item — supports the understanding, analysis and monitoring of the educational assessment process. The material presents the main definitions and concepts of large-scale external assessment and provides the background needed to understand the assumptions made when the methodology is applied. The text is structured to present, didactically, the stages of implementing an assessment, from item construction to the calculation and dissemination of results. The focus is on constructing the one-parameter IRT model (item difficulty), also known as the Rasch model, which simplifies and facilitates understanding of the methodology. The model used in large-scale external assessments (the three-parameter model) is then introduced from the considerations made while building the one-parameter model. It is hoped that this understanding will help teachers explore students' skills and competences throughout the school years.
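For reference, the three-parameter logistic model mentioned above can be written (standard IRT notation assumed here, not taken from the dissertation: \(\theta\) is the examinee's proficiency, \(b_i\) the item difficulty, \(a_i\) the discrimination and \(c_i\) the pseudo-guessing parameter) as

\[
P(X_i = 1 \mid \theta) \;=\; c_i + (1 - c_i)\,\frac{\exp\!\big(a_i(\theta - b_i)\big)}{1 + \exp\!\big(a_i(\theta - b_i)\big)},
\]

and the one-parameter (Rasch) curve on which the text concentrates is the special case \(a_i = 1\), \(c_i = 0\), leaving the item difficulty \(b_i\) as the only item parameter.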
177

Medida de habilidade em programação funcional via modelagem de Rasch com validação dicotômica / Measuring ability in functional programming via Rasch modelling with dichotomous validation

Goulart, Reane Franco 01 July 2011 (has links)
Changes in the teaching and learning process can either help or fail to help students' learning. This work set out to show that current teaching methods do not always meet, efficiently, the need to improve students' skills. To do so, it relied on experiments with students of a Programming Languages course, chosen because its exercises admit free-form answers — the code can be written in many different ways — while the assessment itself is dichotomous. The teacher's didactic procedures, the methodology used in class, the programming language, and the time taken to complete the proposed exercise were the factors considered. In one experimental group, Robert Mager's theory was applied in order to compare the students' learning; in that theory, instructional objectives state what information students will receive and whether they have understood it well enough to use it after the course ends. The conclusion is that the students' ability and performance improved, that is, there was a gain in knowledge that can be measured and displayed graphically using the Rasch model. / Doutor em Ciências
179

Measuring peer victimization and school leadership : A study of definitions, measurement methods and associations with psychosomatic health / Att mäta mobbning och skolledarskap : en studie om definitioner, mätmetoder och samband med psykosomatisk hälsa

Hellström, Lisa January 2015 (has links)
The aim of this thesis is to explore methods for assessing peer victimization and pedagogical leadership in school. The thesis includes four studies. Studies I and II are based on web-based questionnaires among 2,568 students in grades 7, 8 and 9. Study III is based on a questionnaire (n=128) and four focus group interviews (n=21) among students in grades 7 and 9. Study IV is based on a web-based questionnaire including 344 teachers. The results from Study I showed that, among students who experienced peer victimization, 13% were captured by a bullying measure, 44% by a measure of repeated peer aggression, and 43% by both measures, i.e. the two measures captured partly different pupils. Study II showed that the two measures captured the same proportion of adolescents with psychosomatic problems and showed no significant differences in mean values on the Psychosomatic Problems (PSP) scale. Study III showed that, besides the traditional criteria, the adolescents' definition of bullying also included a criterion based on the health consequences of bullying: a single but hurtful or harmful incident could also be considered bullying, irrespective of whether the traditional criteria were fulfilled. The Rasch analysis in Study IV indicated two sub-dimensions of the Pedagogical and Social Climate (PESOC-PLP) scale: direct pedagogical leadership and indirect pedagogical leadership. Satisfactory psychometric properties indicated that the PESOC-PLP scale could be used to measure the pedagogical leadership of the principal. This thesis highlights problems with how bullying and school leadership are currently defined and measured. By strengthening the understanding of measurement methods for peer victimization and school leadership, the results of this thesis are intended to contribute to providing a safe and positive school experience for children and adolescents and to serve as a valuable tool to combat peer victimization. / Back-cover text: The negative consequences of peer victimization for children and adolescents, such as worsening academic achievement and mental ill health, are major public health concerns that have been subjected to extensive research. However, there are long-standing concerns about how to define, measure, and estimate prevalence rates of peer victimization and successful school leadership. The aim of this thesis is to study methods for assessing peer victimization and pedagogical leadership in school. The results show that excluding forms of peer victimization other than bullying has serious implications for the identification of victims and may underestimate the full impact of peer victimization on children. Further, the validation of the Pedagogical and Social Climate (PESOC-PLP) scale is a step towards ensuring valid assessments of pedagogical school leadership. By strengthening the understanding of measurement methods for peer victimization and school leadership, the results of this thesis are intended to contribute to providing a safe and positive school experience for children and adolescents and to serve as a valuable tool to combat peer victimization.
180

AN EXPLORATION OF THE USE OF DATA, ANALYSIS AND RESEARCH AMONG COLLEGE ADMISSION PROFESSIONALS IN THE CONTEXT OF DATA-DRIVEN DECISION MAKING

Schroeder, Kimberly Ann Chaffer 01 January 2012 (has links)
Increasing demands for accountability from both the public and the government have put increasing pressure on higher education professionals to use data to support their choices. There is considerable speculation that professionals at all levels of education lack the knowledge to implement data-driven decision making; however, empirical studies of whether professionals at four-year postsecondary institutions are using data to guide programmatic and policy decisions are lacking. The purpose of this exploratory study was to examine the knowledge and habits of undergraduate admission professionals at four-year colleges and universities regarding their use of data in decision making. A survey instrument was disseminated, and the data collected provided empirical information that serves as the basis for a discussion of what specific knowledge admission professionals at four-year institutions possess and how they use data in their decision making. The instrument was designed specifically for this study; therefore, before the research questions were addressed, Rasch analysis was used to evaluate the validity and reliability of the survey instrument. The data were then used to determine that undergraduate admission professionals perceive themselves as using data in their decision making. The results also indicated that admission professionals feel confident in their ability to interpret and use data in their decision making.
