31

Art Education in Finland and the United States: A Qualitative Inquiry into Teacher Perceptions

Knight, Lauren E 12 August 2014 (has links)
The purpose of this study was to gain insights into the educational system in Finland, where art seems to be valued, and in the United States, where it seems to struggle. I first studied how policies that promote a business-like ideology and standardized testing in schools have impacted art education in the United States. Then I investigated Finland’s educational system, which does not rely on standardized testing to monitor student learning and teachers. During my research I noticed that Finland uses a non-competitive approach to education, which I assumed was connected to the Folk School movement that originated in Denmark and spread throughout Europe. Based on this information, I anticipated that art education was valued more in Finland than in the United States. I also anticipated that Finland’s educational success had a connection to its non-competitive system and its inclusion of the arts. In order to explore this idea, I investigated Finland’s approach to art education by interviewing Finnish professionals in the art education field.
32

Grafos de avaliação : um modelo conceitual para avaliação escolar apoiada por computador / Evaluation graphs : a conceptual model to help the assessment of students

Mizusaki, Lucas Eishi Pimentel January 2016 (has links)
Through the use of new methodologies and new tools, or simply by their presence in classrooms, Information and Communication Technologies are profoundly changing educational practices. This work examines the interaction between learning theories and the different computational tools used in education. It points out a methodological incompatibility between the student assessment models available in current Learning Management Systems and in computational ontologies, on the one hand, and cognitivist teaching methodologies on the other, and proposes a new computational assessment model to represent cognitive and behavioral aspects of students. Called evaluation graphs, it is a model grounded in group decision support systems, developed with a consensus-oriented methodology in collaboration with the Projeto Amora at the Colégio de Aplicação of UFRGS. This work is expected to serve as a basis for building computational assessment tools suited to these methodologies.
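The abstract gives no implementation detail of the evaluation-graph model itself, so the following is only a minimal Python sketch of what a graph-based record of cognitive and behavioral observations could look like; the class names, fields, and the naive consensus rule are illustrative assumptions, not the model developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationNode:
    """A single observation of a student (hypothetical structure)."""
    node_id: str
    aspect: str        # e.g. "cognitive" or "behavioral"
    description: str
    score: float       # evaluator's rating on an agreed scale

@dataclass
class EvaluationGraph:
    """Directed graph linking observations made by different evaluators."""
    nodes: dict = field(default_factory=dict)   # node_id -> EvaluationNode
    edges: list = field(default_factory=list)   # (from_id, to_id, weight)

    def add_node(self, node: EvaluationNode) -> None:
        self.nodes[node.node_id] = node

    def link(self, from_id: str, to_id: str, weight: float = 1.0) -> None:
        """Connect two observations, e.g. to record that one supports another."""
        self.edges.append((from_id, to_id, weight))

    def consensus_score(self, aspect: str) -> float:
        """Naive consensus: mean score of all observations for a given aspect."""
        scores = [n.score for n in self.nodes.values() if n.aspect == aspect]
        return sum(scores) / len(scores) if scores else 0.0

# Usage: two evaluators record cognitive observations for one student.
g = EvaluationGraph()
g.add_node(EvaluationNode("obs1", "cognitive", "applies concept X", 0.8))
g.add_node(EvaluationNode("obs2", "cognitive", "explains reasoning", 0.6))
g.link("obs1", "obs2")
print(g.consensus_score("cognitive"))  # 0.7
```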
34

Policy Evidence by Design: How International Large-Scale Assessments Influence Repetition Rates

Cardoso, Manuel Enrique January 2022 (has links)
Policy Evidence by Design: International Large-Scale Assessments and Grade Repetition. Links between international large-scale assessment (ILSA) methodologies, international organization (IO) ideologies, and education policies are not well understood. Framed by statistical constructivism, this article describes two interrelated phenomena. First, OECD/PISA and UNESCO/TERCE documents show how IOs’ doctrines about the value of education, based on either Human Capital Theory or Human Rights, shape the design of the ILSAs they support. Second, quantitative analyses for four Latin American countries show that differently designed ILSAs disagree on the effectiveness of a specific policy, namely grade retention: PISA’s achievement gap between repeaters and nonrepeaters doubles TERCE’s. This matters and warrants further research: divergent empirical results could potentially incentivize different education policies, reinforce IOs’ initial policy biases, and provide perverse incentives for countries to modulate retention rates or join an ILSA on spurious motivations. In summary, ILSA designs, shaped by IOs’ educational doctrines, yield different data, potentially inspiring divergent global policy directives and national decisions.

When ILSAs met policy: Evolving discourses on grade repetition. This study explores phenomena of ordinalization and scientization of policy discourse, focusing on the case of grade retention in publications by OECD’s PISA and UNESCO’s ERCE (2007-2017), from a sociology of quantification perspective. While prior research shows these ILSAs yield divergent data regarding grade retention’s effectiveness, this study shows similarities in their critical discourse on grade repetition’s effectiveness. Genre analysis finds similarities in how both ILSAs structure their discourse on grade repetition and use references solely to critique it, presenting a partial view of the scholarly landscape. However, horizontal comparisons also find differences across ILSAs in the use of ordinalization (e.g., rankings) in charts, as well as differences in the extent to which their policy discourse embraces scientization. The ILSAs converge in singling out grade repetition as the policy most strongly associated with low performance; this should be interpreted in the context of one key similarity in their design.

Policymaking to the test? How ILSAs influence repetition rates. Do international large-scale assessments influence education policy? How? Through scripts, lessons, or incentives? For some, they all produce similar outcomes. For others, different assessment data, shaped by different designs and mediated by international organizations’ policy directives, prompt different policy decisions. For some, participation in these assessments may be linked to lower repetition rates, as per the policy scripts hypothesis inspired by world society theory (WST). For others, assessments’ comparison strategies (age vs. grade) influence repetition in participating countries, according to policy lessons or incentives hypotheses, respectively inspired by educational effectiveness research (EER) and the sociology of quantification, particularly the notion of retroaction. Fixed-effects panel regression models of eighteen Latin American countries (1992-2017) show that participation in assessments is associated with changing repetition rates in primary and secondary education, while controlling for other factors. The findings show statistically significant differences between some assessment types. The conclusions spur new questions, delineating a future agenda.
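The third study rests on fixed-effects panel regression over country-year data. The Python sketch below shows what a two-way fixed-effects specification of this kind can look like; the panel is synthetic, and the variable names, participation indicator, and effect sizes are invented for illustration rather than taken from the dissertation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic country-year panel (hypothetical data, not the study's dataset).
rng = np.random.default_rng(0)
countries = [f"country_{i}" for i in range(18)]
years = list(range(1992, 2018))
rows = []
for c in countries:
    base = rng.uniform(5, 15)                    # country-specific repetition level
    for y in years:
        ilsa = int(rng.random() < 0.4)           # did the country sit an ILSA that year?
        rep = base - 0.8 * ilsa + 0.05 * (y - 1992) + rng.normal(0, 1)
        rows.append({"country": c, "year": y,
                     "ilsa_participation": ilsa, "repetition_rate": rep})
df = pd.DataFrame(rows)

# Two-way fixed effects via dummy variables: country and year intercepts
# absorb stable country differences and common year shocks.
model = smf.ols("repetition_rate ~ ilsa_participation + C(country) + C(year)",
                data=df).fit()
print(model.params["ilsa_participation"])  # within-country association estimate
```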
35

Anxiety as a Mediating Variable to Learning Outcomes in a Human Patient Simulation Experience: A Mixed Methods Study

Beischel, Kelly 01 October 2010 (has links)
No description available.
36

Organizational Determinants Of Information Quality In Local Education Agencies

Crandall, Angela M. 12 September 2008 (has links)
No description available.
37

A conceptual model for the management of the implementation of a continuous assessment plan at a university of technology / Jan Jacob Antonie Christoffel Smit

Smit, Jan Jacob Antonie Christoffel January 2008 (has links)
In South Africa today, the challenge is to redress past inequalities and to transform the higher education system. This transformation of the higher education system is necessary in order to serve a new social order. The introduction of outcomes-based education and training requires a new approach to education, including the process of assessment. An outcomes-based approach to education and training focuses on continuous assessment through the use of a range of assessment methods. The Ministry of Education tasked the National Department of Education to embark on a review of their academic programmes. This review has been in response to the need to register programmes on the National Qualifications Framework, and has also been part of an attempt to improve the quality of qualifications. In most learning organisations, assessment and learning have always been closely related: assessment has not simply been seen as the end point in learning but as an important component in the design of the learning process itself, and this view will be severely tested by the movement towards an outcomes model for education and training. The primary aim of the study was to develop a conceptual model for the management of the implementation of a continuous assessment (CASS) plan in a university of technology by means of a literature study and an empirical investigation. Currently, information regarding the conceptualisation of this topic is inadequate and vague. If the nature of the complexities involved in the management and implementation of CASS at universities of technology is known, a conceptual model can be developed for the effective management of the implementation thereof. The implementation of an integrated model of assessment requires the creation of an enabling environment in which the model can be implemented. This study has found that this is not true for many universities of technology, as:
• programme design still rests on subjects that are not aimed at outcomes-based models;
• administrative systems are not designed to accommodate the recording of continuous assessments;
• students, lecturers and other stakeholders have not undergone the necessary training regarding the change in paradigm from content-based to outcomes-based education; and
• policy regarding modularisation and continuous assessment has not yet been defined and implemented.
The study presents a usable model for the management of the implementation of continuous assessment at universities of technology. The study is based on a balanced opinion, as the experiences of both lecturers and students were investigated by means of structured questionnaires. The findings were verified by means of a focus group interview with administrative staff involved with continuous assessment. The model that was developed is a usable model, as it was subjected to a number of verification tests. / Thesis (Ph.D. (Teaching and Learning))--North-West University, Vaal Triangle Campus, 2008.
38

Validity, reliability and fairness of item measurements attained by a comprehensive computer-assisted assessment tool

van der Merwe, Preller Josefus January 2006 (has links)
Thesis (M.Tech. (Information Technology, Faculty of Applied and Computer Sciences))--Vaal University of Technology, 2006 / The sole purpose of a test is to make a measurement. Assessment is very much a process of measurement, whether the outcome is used for baseline, diagnostic, formative or summative purposes. When a measurement is taken, in whatever form, a score is obtained. The score forms the important part of assessment, because it determines the outcome of the assessment, the decisions to be made regarding the student’s progress, curriculum changes, and the evaluation of a course as a whole. Although a score is obtained from a test, its analysis is frequently neglected. The use of computers in education is not a new concept: the first applications date back to when computers were first used for psychological testing. It then became clear that computers could be applied to more fields in education, especially testing. In the early days real progress was slow, since computers were expensive and were only used in large companies. However, the scenario has changed with the widespread availability of personal computers, which has enabled educators to focus on the appropriate role of computerisation in the development, administration, scoring and interpretation of tests. The main objective of this study is to show the major advantage of using computers as a comprehensive assessment tool and to demonstrate the ability to construct and ‘bank’ test items to subsequently produce a standardised test. An added advantage was the computer’s ability to administer tests to students and manage student progress records. The research findings indicate that a Comprehensive Computer-Assisted Assessment Tool (CCAT) has the potential to contribute to the enhancement of assessment and that it can enable educators to prepare valid, reliable and fair test items, a task that is more difficult and time-consuming without technology.
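Item-level validity and reliability analysis of the kind a comprehensive computer-assisted assessment tool would automate typically rests on classical test theory statistics. The Python sketch below computes item difficulty, corrected item discrimination, and Cronbach’s alpha on a synthetic response matrix; it illustrates the general kind of analysis involved under stated assumptions and is not code from the CCAT described in the thesis.

```python
import numpy as np

# Synthetic 0/1 response matrix: rows = students, columns = test items.
# Responses are generated from a latent ability so the items hang together.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, (200, 1))          # latent ability per student
item_location = np.linspace(-1, 1, 10)        # easy to hard items
prob = 1 / (1 + np.exp(-(ability - item_location)))
responses = (rng.random((200, 10)) < prob).astype(float)

total = responses.sum(axis=1)

# Item difficulty: proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: correlation between an item and the total of the
# remaining items (corrected point-biserial).
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

# Cronbach's alpha: internal-consistency estimate of test reliability.
k = responses.shape[1]
item_var = responses.var(axis=0, ddof=1).sum()
total_var = total.var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)

print(difficulty.round(2), discrimination.round(2), round(alpha, 3))
```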
39

An investigation of listening and reading proficiency in English of grade 5 and grade 6 students in Chinese-speaking cities

Staniute, Laura January 2018 (has links)
University of Macau / Faculty of Arts and Humanities. / Department of English
40

Comparing Two Individually Administered Reading Assessments for Predicting Outcomes on SAGE Reading

Stevens, Meighan Noelle 01 March 2017 (has links)
Accountability for student learning outcomes is of importance to parents and to school and district administrators, especially since the passage of the No Child Left Behind Act in 2001. The requirement for high-stakes testing to measure progress has fostered interest in ways to monitor student preparedness during the school year. This study used 2014 and 2015 test data from 154 students at one elementary school to measure the correlation between the individually administered Kaufman Test of Educational Achievement (KTEA) Brief Reading and DIBELS Next reading assessments and outcomes on the high-stakes Utah SAGE test. This correlational study used Pearson correlation coefficients to determine redundancy across the tests, and used multiple regression to assess how well scores on the KTEA and DIBELS Next tests predict students' subsequent scores on the SAGE test. Results indicate that DIBELS Next was a strong predictor of SAGE outcomes while KTEA Brief results were moderate predictors.
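The analysis pairs Pearson correlations with multiple regression to predict SAGE scores from the two screeners. The Python sketch below reproduces that general workflow on synthetic scores; the distributions, relationships between measures, and resulting coefficients are invented for illustration and are not the study’s data.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

# Synthetic scores for illustration only (the study's data are not reproduced here).
rng = np.random.default_rng(2)
n = 154
dibels = rng.normal(300, 40, n)                          # DIBELS Next composite
ktea = 0.5 * dibels + rng.normal(0, 30, n)               # KTEA Brief Reading
sage = 0.7 * dibels + 0.2 * ktea + rng.normal(0, 25, n)  # SAGE outcome

# Pearson correlations between each screener and the SAGE outcome.
r_dibels, p_dibels = stats.pearsonr(dibels, sage)
r_ktea, p_ktea = stats.pearsonr(ktea, sage)

# Multiple regression: predict SAGE from both screeners jointly.
X = np.column_stack([dibels, ktea])
model = LinearRegression().fit(X, sage)
r_squared = model.score(X, sage)

print(f"r(DIBELS, SAGE) = {r_dibels:.2f}, r(KTEA, SAGE) = {r_ktea:.2f}")
print(f"Multiple regression R^2 = {r_squared:.2f}")
```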
