331

A Study of Students' Perceptions of Blended Learning Environments at a State-Supported Postsecondary Institution

Shaw, Joanna G. 05 1900
The purpose of this study was to conduct exploratory research on students' perceptions of blended learning environments at a state-supported postsecondary institution. Specifically, the study investigated students' overall perceptions of blended learning environments, their reasons for choosing a blended course, and whether those perceptions differed across generations. An electronic survey was distributed to students enrolled in blended learning courses at the end of the spring 2009 term.
332

Can a computer adaptive assessment system determine, better than traditional methods, whether students know mathematics skills?

Whorton, Skyler 19 April 2013
Schools use commercial systems specifically for mathematics benchmarking and longitudinal assessment. However, these systems are expensive, and their results often fail to indicate a clear path for teachers to differentiate instruction based on students' individual strengths and weaknesses in specific skills. ASSISTments is a web-based intelligent tutoring system used by educators to drive real-time, formative assessment in their classrooms. The software is used primarily by mathematics teachers to deliver homework, classwork, and exams to their students. We have developed a computer adaptive test called PLACEments as an extension of ASSISTments to let teachers perform individual student assessment and, by extension, school-wide benchmarking. PLACEments uses a form of graph-based knowledge representation by which the exam results identify the specific mathematics skills each student lacks. The system additionally provides differentiated practice determined by the students' performance on the adaptive test. In this project, we describe the design and implementation of PLACEments as a skill assessment method and evaluate it in comparison with a fixed-item benchmark.
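The graph-based assessment the abstract describes can be illustrated with a small sketch: walk a prerequisite graph of skills top-down, crediting the prerequisites of any skill the student demonstrates and drilling into the prerequisites of any skill the student misses. This is a hypothetical illustration only; the skill names, graph, and traversal policy below are invented, not PLACEments' actual content or algorithm.

```python
from collections import deque

# Hypothetical prerequisite graph: skill -> list of prerequisite skills.
PREREQS = {
    "two_step_equations": ["one_step_equations"],
    "one_step_equations": ["integer_arithmetic"],
    "integer_arithmetic": [],
}

def adaptive_assess(answers):
    """Walk the skill graph top-down. A correct answer credits the skill
    and everything beneath it; a wrong answer queues its prerequisites.
    `answers` maps skill -> bool (correct/incorrect)."""
    to_test = deque(["two_step_equations"])
    known, missing = set(), set()
    while to_test:
        skill = to_test.popleft()
        if skill in known or skill in missing:
            continue
        if answers[skill]:
            # Credit the skill and its whole prerequisite subtree.
            stack = [skill]
            while stack:
                s = stack.pop()
                if s not in known:
                    known.add(s)
                    stack.extend(PREREQS[s])
        else:
            missing.add(skill)
            to_test.extend(PREREQS[skill])
    return known, missing

known, missing = adaptive_assess({
    "two_step_equations": False,
    "one_step_equations": True,
    "integer_arithmetic": True,
})
print(sorted(missing))  # ['two_step_equations']
```

The payoff of the graph traversal is exactly what the abstract claims: the result is not a single score but the set of specific skills the student lacks, which can then drive differentiated practice.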
333

The development of the Numeracy Apprehension Scale for children aged 4-7 years : qualitative exploration of associated factors and quantitative testing

Petronzi, Dominic January 2016
Previous psychological literature has shown that mathematics anxiety in older populations is associated with many factors, including an adverse effect on task performance. However, the origins of mathematics anxiety have, until recently, received limited attention. It is now accepted that this anxiety is rooted in the early educational years, but research has not explored the associated factors in the first formal years of schooling. Based on previous focus groups with children aged 4-7 years, ‘numeracy apprehension’ is proposed in this body of work as the foundation phase of negative emotions and experiences from which mathematics anxiety can develop. Building on this research, the first study used 2 interviews and 5 focus groups to obtain insight from parents (n=7), teachers (n=9) and mathematics experts (n=2), exploring how children experience numeracy and adults' observations of children's attitudes and responses. Thematic and content analysis uncovered a range of factors that characterised children's numeracy experiences, including: stigma and peer comparisons; the difficulty of numeracy and persistent failure; a low sense of ability; feelings of inadequacy; peer evaluation; transference of teacher anxieties; the right-or-wrong nature of numeracy; parental influences; dependence on peers; avoidance; and children's awareness of a hierarchy based on numeracy performance. Key themes reflected the earlier focus group findings with children aged 4-7 years. This contributed to an item pool for study 2, which produced a first iteration of the Numeracy Apprehension Scale (NAS) describing day-to-day numeracy lesson situations. This 44-item measure was administered to 307 children aged 4-7 years across 4 schools in the U.K.
Exploratory factor analysis led to a 26-item iteration of the NAS with a 2-factor structure of Prospective Numeracy Task Apprehension and On-line Number Apprehension, which related to, for example, observation and evaluation anxiety, worry and teacher anxiety. The results suggested that mathematics anxiety may stem from the initial development of numeracy apprehension, built on consistent negative experiences throughout an educational career. The 26-item iteration of the NAS was further validated in study 3 with 163 children aged 4-7 years across 2 schools in the U.K. The construct validity of the scale was tested by comparing scale scores against performance on a numeracy task to determine whether a relationship between the two was evident. Exploratory factor analysis was again conducted and resulted in the current 19-item iteration of the NAS, which related to a single factor of On-line Number Apprehension. This factor concerned the experience of an entire numeracy lesson, from first walking in to completing a task, and was associated with, for example, explaining an answer to the teacher, making mistakes and getting work wrong. A significant negative correlation was observed between NAS and numeracy performance scores, suggesting that apprehensive children demonstrate a performance deficit early in education and that the NAS has the potential to be a reliable assessment of children's numeracy apprehension. This empirical work reinforces the view that the early years of education are the origin of mathematics anxiety, in the form of numeracy apprehension.
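The construct-validity check in study 3 rests on correlating NAS scores with numeracy task scores. A minimal sketch of that computation, using invented toy scores rather than the thesis data:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: higher apprehension tending to go with lower task scores.
nas_scores = [19, 25, 31, 40, 47, 52, 60, 68]
task_scores = [9, 10, 8, 7, 6, 6, 4, 3]
r = pearson_r(nas_scores, task_scores)
print(round(r, 2))  # strongly negative, as the abstract reports
```

A negative r of this kind is what supports the claim that more apprehensive children show a performance deficit; the study would additionally test the coefficient's significance.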
334

Aplicação e avaliação das propriedades psicométricas do Indice Eurohis-Qol 8-item em uma amostra brasileira / Application and evaluation of the psychometric properties of the EUROHIS-QOL 8-item index in a Brazilian sample

Pires, Ana Caroline de Toledo January 2016
In the 1970s, quality of life (QOL) began to be considered a health outcome. Despite the growing importance of this assessment across areas of medicine, no instruments had been developed from a cross-cultural perspective for international use. In this context, the WHOQOL group of the WHO developed QOL assessment measures. To meet the need for shorter instruments that take less time to complete, the EUROHIS-QOL 8-item index was developed from WHOQOL-BREF items. Objectives: To test the psychometric properties of the EUROHIS-QOL 8-item index in a Brazilian sample. Methods: The sample consisted of 325 individuals, divided into two groups: 151 patients from the Hospital de Clínicas de Porto Alegre, RS, and 174 healthy controls. Internal consistency was assessed using Cronbach's alpha. Discriminant validity was assessed by comparing patients with healthy controls, and depressed with non-depressed participants. Convergent validity was assessed through correlations of the EUROHIS-QOL 8-item with validated and recognized QOL measures, the SF-36 and the WHOQOL-BREF. Factor structure was assessed using structural equation modeling (SEM), and unidimensionality was assessed using the Rasch model. 
Results: Internal consistency was good (Cronbach's alpha = 0.81). The index discriminated well between patients and healthy controls (mean = 3.32, SD = 0.70 vs. mean = 3.77, SD = 0.63; t = 6.12, p < 0.001) and between depressed and non-depressed groups (mean = 3.14, SD = 0.69 vs. mean = 3.72, SD = 0.61; t = 7.25, p < 0.001). The instrument showed good convergent validity, with significant correlations (p < 0.001) between the EUROHIS-QOL 8-item and all WHOQOL-BREF domains (Overall QOL r = 0.47; General Health r = 0.54; Physical r = 0.69; Psychological r = 0.62; Social Relationships r = 0.55; Environment r = 0.55) and between the EUROHIS-QOL 8-item and the SF-36 domains (Overall QOL r = 0.36; Physical Functioning r = 0.49; Role Physical r = 0.45; Bodily Pain r = 0.43; General Health r = 0.52; Vitality r = 0.21; Social Functioning r = 0.45; Role Emotional r = 0.38; Mental Health r = 0.17), except for the social domain (p = 0.38). 
In the Rasch analysis, overall model fit was adequate at the first assessment (item-person interaction residual fit: M = 0.01, SD = 1.51; person residual fit: M = -0.38, SD = 1.19; item-trait interaction: total item χ² = 69.60, p = 0.00; Person Separation Index = 0.82); that is, the residuals were acceptable and no items needed to be excluded. The EUROHIS-QOL 8-item showed good fit to the data in the confirmatory factor analysis (χ² = 18.46, df = 15; CFI = 0.99; RMSEA = 0.03; GFI = 0.99; RMR = 0.03; p = 0.24). Conclusion: The EUROHIS-QOL 8-item, validated in European samples, showed adequate psychometric properties in this study, proving to be a reliable QOL measure for use with Brazilian samples.
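The internal-consistency figure reported in the abstract (Cronbach's alpha = 0.81) comes from a standard formula over item variances and total-score variance. A minimal sketch with invented toy ratings, not the study's data:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy example: 3 items answered by 5 respondents on a 1-5 scale.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Values above roughly 0.7-0.8 are conventionally read as acceptable-to-good internal consistency, which is how the abstract interprets its 0.81.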
335

Collaborative Filtering för att välja spelnivåer / Collaborative Filtering for choosing game levels

Dahlberg, Fredrik, Söderqvist, Mathias January 2013
More and more games are being opened up to user-generated content, which often results in more material than a single player can make use of, and each individual player wants the content that suits his or her particular taste. The study was carried out using design science research as its method, and with its help an artifact was created. Using the developed artifact, a platform game that is easy to both understand and play, a dataset could be collected from a range of players: after each completed level, users explicitly rated the level on a scale from 1 to 5. By introducing collaborative filtering and comparing a user's previous ratings with those of other users, a prediction of future ratings can be made. By comparing different collaborative filtering algorithms, the most suitable one could be identified and subsequently used. The results show that collaborative filtering produces more precise estimates of future ratings than using a level's mean rating, leading to the conclusion that collaborative filtering can provide game experiences tailored to an individual user and thus enhance their play experience. / Program: Systemarkitekturutbildningen
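The rating-prediction step can be sketched as user-based collaborative filtering: score the target level with a similarity-weighted average of other users' ratings. The users, ratings, and choice of cosine similarity below are invented for illustration; the thesis compared several algorithms rather than fixing one.

```python
import math

# Toy ratings: user -> {level: rating on a 1-5 scale}.
ratings = {
    "ann": {"L1": 5, "L2": 4, "L3": 1},
    "bob": {"L1": 4, "L2": 5, "L3": 2, "L4": 5},
    "eve": {"L1": 1, "L2": 2, "L3": 5, "L4": 1},
}

def cosine_sim(a, b):
    """Cosine similarity over the levels both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[l] * b[l] for l in common)
    na = math.sqrt(sum(a[l] ** 2 for l in common))
    nb = math.sqrt(sum(b[l] ** 2 for l in common))
    return dot / (na * nb)

def predict(user, level):
    """Similarity-weighted average of other users' ratings for `level`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or level not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[level]
        den += s
    return num / den if den else None

pred = predict("ann", "L4")
print(round(pred, 2))
```

Because ann's history resembles bob's more than eve's, the prediction for L4 lands closer to bob's rating of 5 than eve's rating of 1, which is the personalization the mean rating cannot provide.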
336

On Rank-invariant Methods for Ordinal Data

Yang, Yishen January 2017
Data from rating scale assessments have rank-invariant properties only, which means that the data represent an ordering but lack standardized magnitude, inter-categorical distances, and linearity. Even though the judgments are often coded with natural numbers, they are not truly metric. The aim of this thesis is to further develop the nonparametric rank-based Svensson methods for paired ordinal data, which are based on the rank-invariant properties only. The thesis consists of five papers. In Paper I, the asymptotic properties of the measure of systematic disagreement in paired ordinal data, the Relative Position (RP), and of the difference in RP between groups were studied. Based on the findings of asymptotic normality, two tests for analyses of change within and between groups were proposed. In Paper II, the asymptotic properties of rank-based measures, e.g. Svensson's measures of systematic disagreement and of additional individual variability, were discussed, and a numerical method for approximation was suggested. In Paper III, the asymptotic properties of the measures for paired ordinal data discussed in Paper II were verified by simulations. Furthermore, the Spearman rank-order correlation coefficient (rs) and Svensson's augmented rank-order agreement coefficient (ra) were compared. By demonstrating how and why they differ, it is emphasized that they measure different things. In Paper IV, the test proposed in Paper I for comparing systematic changes in paired ordinal data between two groups was compared with other nonparametric tests for group changes under different approaches to categorising changes. The simulations reveal that the proposed test works better for small and unbalanced samples. Paper V demonstrates that rank-invariant approaches can also be used in the analysis of ordinal data from multi-item scales, which is an appealing and appropriate alternative to calculating sum scores.
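Svensson's measure of systematic disagreement, the Relative Position (RP), can be estimated empirically from paired ordinal data. The sketch below assumes the common all-pairs estimator of RP = P(Y > X) − P(Y < X) and uses invented ratings:

```python
def relative_position(x, y):
    """Empirical Relative Position for paired ordinal data:
    RP = P(Y > X) - P(Y < X), estimated over all n*n cross-pairs.
    RP lies in [-1, 1]; 0 means no systematic shift in position."""
    n = len(x)
    gt = sum(1 for xi in x for yj in y if yj > xi)
    lt = sum(1 for xi in x for yj in y if yj < xi)
    return (gt - lt) / (n * n)

# Toy paired ratings on a 5-category scale (e.g. before/after judgments).
before = [1, 2, 2, 3, 3, 4]
after = [2, 3, 3, 3, 4, 5]
rp = relative_position(before, after)
print(round(rp, 2))
```

A clearly positive RP, as here, indicates a systematic shift toward higher categories; note that only the ordering of the categories is used, consistent with the rank-invariant premise of the thesis.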
337

Effect of Violating Unidimensional Item Response Theory Vertical Scaling Assumptions on Developmental Score Scales

Topczewski, Anna Marie 01 July 2013
Developmental score scales represent the performance of students along a continuum, where students move higher along that continuum as they learn more. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method to create developmental score scales. Research has shown that UIRT vertical scaling methods can be inconsistent in estimating grade-to-grade growth, within-grade variability, and separation of grade distributions (effect size) of developmental score scales. In particular, the finding of scale shrinkage (decreasing within-grade score variability as grade level increases) has led to concerns about and criticism of IRT vertical scales. The causes of scale shrinkage have yet to be fully understood; real test data and simulation studies have been unable to provide complete answers as to why IRT vertical scaling inconsistencies occur. Violations of assumptions are a commonly cited potential cause of the inconsistent results. For this reason, this dissertation is an extensive investigation into how violations of the three assumptions of UIRT vertical scaling - local item independence, unidimensionality, and similar reliability of grade-level tests - affect estimated developmental score scales. Simulated tests were developed that purposefully violated a UIRT vertical scaling assumption, with three sets of simulated tests each targeting a single assumption. First, simulated tests were created with increasing, decreasing, low, medium, and high local item dependence. Second, multidimensional simulated tests were created by varying the correlation between dimensions. Third, simulated tests with dissimilar reliability were created by varying the item parameter characteristics of the grade-level tests. Multiple versions of twelve simulated tests were used to investigate UIRT vertical scaling assumption violations.
The simulated tests were calibrated under the UIRT model so that an assumption of UIRT vertical scaling was purposefully violated. Each simulated test version was replicated for 1000 random examinee samples to assess the bias and standard error of estimated grade-to-grade growth, within-grade variability, and separation of grade distributions (effect size) of the estimated developmental score scales. The results suggest that when UIRT vertical scaling assumptions are violated, the resulting estimated developmental score scales contain standard error and bias. For this study, the magnitude of standard error was similar across all simulated tests regardless of the assumption violation. However, bias fluctuated as a result of different types and magnitudes of UIRT vertical scaling assumption violations. More local item dependence resulted in more bias in grade-to-grade growth and separation of grade distributions, and local item dependence produced developmental score scales that displayed scale expansion. Multidimensionality resulted in more bias in grade-to-grade growth and separation of grade distributions when the correlation between dimensions was smaller, and it likewise produced scale expansion. Dissimilar reliability of grade-level tests resulted in more grade-to-grade growth bias and minimal separation-of-grade-distributions bias, and it produced scale expansion or scale shrinkage depending on the item characteristics of the test. Limitations of this study and future research are discussed.
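Simulation studies of this kind start from generating item responses under a unidimensional IRT model in which local independence holds; violations are then introduced by design. A minimal sketch of that baseline step under an assumed 2PL model, with invented item parameters and grade distributions:

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate(thetas, items, rng):
    """Simulate a 0/1 response matrix under local independence:
    given theta, item responses are drawn independently."""
    return [[int(rng.random() < p_2pl(t, a, b)) for a, b in items]
            for t in thetas]

rng = random.Random(0)
# Toy setup: the higher grade has a higher mean ability.
grade3 = [rng.gauss(0.0, 1.0) for _ in range(2000)]
grade4 = [rng.gauss(0.5, 1.0) for _ in range(2000)]
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.5)]  # (a, b) pairs

mean3 = sum(map(sum, simulate(grade3, items, rng))) / len(grade3)
mean4 = sum(map(sum, simulate(grade4, items, rng))) / len(grade4)
print(mean3 < mean4)  # higher grade -> higher expected raw score
```

Local item dependence would be introduced by letting one item's response depend on another's (not shown); the dissertation's question is how such deliberate violations distort the growth, variability, and separation statistics recovered from data like these.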
338

Simple structure MIRT equating for multidimensional tests

Kim, Stella Yun 01 May 2018
Equating is a statistical process used to achieve score comparability so that scores from different test forms can be used interchangeably. One of the most widely used equating procedures is unidimensional item response theory (UIRT) equating, which requires a set of assumptions about the data structure. In particular, the essence of UIRT rests on the unidimensionality assumption, which requires that a test measure only a single ability. However, this assumption is unlikely to be fulfilled for many real datasets, such as mixed-format tests or tests composed of several content subdomains, and failure to satisfy it threatens the accuracy of the estimated equating relationships. The main purpose of this dissertation was to contribute to the literature on multidimensional item response theory (MIRT) equating by developing a theoretical and conceptual framework for true-score equating using a simple-structure MIRT model (SS-MIRT). SS-MIRT has several advantages over more complex MIRT models, including improved estimation efficiency and straightforward interpretability. In this dissertation, the performance of the SS-MIRT true-score equating procedure (SMT) was examined and evaluated through four studies using different data types: (1) real data, (2) simulated data, (3) pseudo-form data, and (4) intact single-form data with identity equating. Besides SMT, four competitors were included in the analyses in order to assess the relative benefits of SMT over the other procedures: (a) equipercentile equating with presmoothing, (b) UIRT true-score equating, (c) UIRT observed-score equating, and (d) SS-MIRT observed-score equating. In general, the proposed SMT procedure behaved similarly to the existing procedures, and SMT showed more accurate equating results than traditional UIRT equating.
Better performance of SMT over UIRT true-score equating was consistently observed across the three studies that employed different criterion relationships with different datasets, which strongly supports the benefit of a multidimensional approach to equating with multidimensional data.
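Equipercentile equating, one of the comparison procedures named above, maps a Form X score to the Form Y score with the same percentile rank. A coarse, unsmoothed sketch with invented score distributions (real applications add presmoothing and interpolation):

```python
def percentile_rank(score, scores):
    """Proportion of examinees strictly below, plus half of those tied."""
    below = sum(1 for s in scores if s < score)
    tied = sum(1 for s in scores if s == score)
    return (below + 0.5 * tied) / len(scores)

def equipercentile_equate(x_score, x_scores, y_scores):
    """Map a Form X score to the Form Y score whose percentile rank
    is closest to the Form X score's percentile rank."""
    pr = percentile_rank(x_score, x_scores)
    return min(set(y_scores),
               key=lambda y: abs(percentile_rank(y, y_scores) - pr))

# Toy data: Form Y is slightly harder, so Y scores run about 2 points lower.
form_x = [10, 12, 13, 15, 15, 17, 18, 20, 22, 24]
form_y = [8, 10, 11, 13, 13, 15, 16, 18, 20, 22]
eq = equipercentile_equate(15, form_x, form_y)
print(eq)
```

Here a 15 on Form X maps to a lower Form Y score, reflecting that the same percentile rank sits lower on the harder form; this is the kind of criterion relationship against which the IRT-based procedures are compared.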
339

Improving the Transition Readiness Assessment Questionnaire (TRAQ) using Item Response Theory

Wood, David L., Johnson, Kiana R., McBee, Matthew 01 January 2017
Background: Measuring the acquisition of self-management and health care utilization skills is part of evidence-based transition practice. The Transition Readiness Assessment Questionnaire (TRAQ) is a validated 20-question, 5-factor instrument with a 5-point Likert response set based on a stages-of-change framework. Objective: To improve the performance of the TRAQ and allow more precise measurement across the full range of transition readiness skills (from precontemplation to initiation to mastery). Design/Methods: Using data from 506 previously completed TRAQs collected from several clinical practices, we used Mplus v7.4 to fit a graded response model (GRM), examining item discrimination and difficulty. New questions were written and added across all domains to increase the difficulty and discrimination of the overall scale. To evaluate the performance of the new items and the resulting factor structure of the revised scale, we fielded a new version of the TRAQ (with a total of 30 items) using an online anonymous survey of first-year college students (in process). Results: We eliminated the five least discriminating TRAQ items with minimal impact on the conditional test information. After item elimination (k = 15), the factor structure of the instrument was maintained with good quality, χ²(86) = 365.447, CFI = 0.977, RMSEA = 0.079, WRMR = 1.017. We also found that a majority of items could reliably discriminate only across lower levels of transition readiness (precontemplation to initiation) but could not discriminate at higher levels (action and mastery). Therefore, we wrote 15 additional items intended to have higher difficulty. For the new 30-item TRAQ, confirmatory factor analysis, internal reliability, and IRT results will be reported from a large sample of college students. Conclusion(s): Using IRT and factor analyses, we eliminated 5 of 20 TRAQ items that were poorly discriminating.
We found that many of the TRAQ items could discriminate among respondents in the early stages of transition readiness, but not among those in the later stages. To obtain a more robust measure of transition readiness, we added more difficult items and are evaluating the revised scale's psychometric properties.
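The graded response model used in this analysis treats each item's ordered Likert categories through cumulative 2PL curves at successive thresholds; category probabilities are differences of adjacent curves. A minimal sketch with invented item parameters (not TRAQ estimates):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Samejima's graded response model: P(response >= k) is a 2PL
    curve at threshold b_k; category probabilities are the differences
    of adjacent cumulative probabilities."""
    def p_star(b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical 5-category Likert item; thresholds spread over the trait.
probs = grm_category_probs(theta=0.5, a=1.5, thresholds=[-2, -1, 0, 1])
print(all(p > 0 for p in probs), round(sum(probs), 6))
```

In this framework, an item "discriminates at higher levels of readiness" only if some of its thresholds sit high on the trait; writing harder items amounts to adding items whose thresholds fall in that upper range.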
340

An Analysis of Item Bias in the WISC-R with Kainaiwa Native Canadian Children

Pace, Deborah Faith 01 May 1995
The present study examined the responses of 332 Kainai students ranging in age from 6 to 16 years to the Information, Arithmetic, and Picture Completion subtests of the Wechsler Intelligence Scale for Children-Revised (WISC-R) in order to determine the validity of these subtests as a measure of their intelligence. Two indices of validity were assessed: (a) subtest unidimensionality, and (b) order of item difficulty. With regard to the assumption of unidimensionality, examination of the data indicated low item-factor loadings on the Information, Arithmetic, and Picture Completion subtests. Examination of difficulty parameters revealed a nonlinear item difficulty order on all three subtests. These results support the conclusion of previous research that the WISC-R does not adequately assess the intelligence of Native children. Possible bases for the invalidity of the WISC-R for this population are discussed and recommendations for future research are presented.
