11 |
Validity and Reliability of Peer Assessment Rating Index Scores of Digital and Plaster Models. Andrews, Curtis Kyo-shin 24 June 2008 (has links)
No description available.
12 |
Undergraduate business and management students' experiences of being involved in assessment. Tai, Chunming January 2012 (has links)
This study aimed to explore university undergraduates' experiences of student involvement in assessment (SIA). Based on Biggs' 3P model of student learning, it focused on students' experiences prior to, during and after SIA in three Business and Management modules. Applying this framework, different practices of involving students in assessment (peer assessment, self-assessment or self-designed assessment) were studied from the perspectives of the students concerned. Unlike other studies, which normally test to what extent the designed outcomes of SIA have been met, the goal of this research was to reveal an inside picture of how students were coping with those SIA tasks and with their learning. This picture was outlined from students' perceptions of SIA, the main factors that might influence students' engagement with SIA, and students' reflections on SIA practice in the particular module. The study adopted a mixed-methods, sequential exploratory design, employing the ETLA (Environment of Teaching, Learning and Assessment) questionnaire and follow-up semi-structured interviews. There were in total 251 valid questionnaire responses from students and 18 valid student interviews. The data were collected from three undergraduate Business and Management degree modules in which different strategies were used to involve students in assessment. The three innovative modules were all from Scottish universities in which assessment practices were being re-engineered by involving students in assessment. Two of the modules had participated in the REAP (Re-engineering Assessment Practice) project. However, the modules differed from one another in the way in which they involved students in assessment and in the level or extent of student involvement that was entailed. The reporting and analysis of the findings took three main forms. First, the module contexts, including the teaching, learning and assessment environment and student learning approaches and satisfaction in the particular module, were compared and analysed using the questionnaire data. The results showed a strong association between the elements of the teaching and learning environment and student learning approaches. They also indicated that the quality of teaching, feedback and learning support played significant roles in the quality of student learning. Secondly, an analysis of the interview data was undertaken to examine why and how students would learn differently in different module contexts with different SIA practices, and how students were coping with their learning in the SIA tasks concerned. In addressing these questions, students' previous experience of SIA and knowledge about SIA, peers' influence, teachers' support and training for SIA, interaction between and among students and teachers, the clarity of the module objectives and requirements, and learning resources were found to be the major factors that might influence students' engagement in SIA. Additionally, the salient learning benefits and challenges of SIA as perceived by students were explored. Thirdly, based on the preceding findings, the analysis of each module aimed to further consider in what ways the three modules differed from each other with respect to SIA practices, and how students responded in the three different module contexts in terms of their engagement with SIA.
These three forms of analysis made it possible to gain a rich understanding of students' experiences of SIA that could also feed into a consideration of what kind of support students might need in order to engage better with SIA and to be better prepared for life-long learning.
13 |
Att veta vägen till mål : Formativ bedömning i religionsundervisning på mellanstadiet [Knowing the way to the goal: formative assessment in religious education at middle school]. Hautala, Susanna January 2016 (has links)
The purpose of this study is to examine what kinds of formative assessment are used in religious education at middle school in Sweden. A further aim is to examine whether any problems can be observed with formative assessment. The questions addressed are: What kinds of formative assessment occur in religious education at the middle school level in a Swedish school? What problems can be observed with formative assessment in religious education at the middle school level in a Swedish school? To answer these questions, six lessons in religious education were observed in total; three teachers each held two lessons, which were observed. Earlier studies have shown that formative assessment was used by teachers to give feedback to improve pupils' learning. In some cases, feedback was also given that did not contribute to the improvement of pupils' learning. Earlier studies also conclude that formative assessment helped teachers design future lessons based on pupils' prior knowledge. Another conclusion highlights that when teachers made the goals of the lesson visible to pupils, this resulted in increased motivation and understanding of the purpose of the lesson. This study concludes that formative assessment was used by all three teachers, but in different ways. One conclusion is that when the goals of the lesson were not visible to the pupils, the motivation of some pupils was affected. Another conclusion is that teachers used formative assessment to modify lesson plans to match pupils' current knowledge. Formative assessment was also used to encourage pupils to help each other through peer assessment. To improve pupils' learning, teachers also used feedback as a strategy.
14 |
Evaluating the effectiveness of live peer assessment as a vehicle for the development of higher order practice in computer science education. Bennett, Steve January 2017 (has links)
This thesis concerns a longitudinal study of the practice of Live Peer Assessment on two university courses in Computer Science. By Live Peer Assessment I mean a practice of whole-class collective marking, using electronic devices, of student artefacts demonstrated in a class or lecture theatre, with instantaneous aggregated results displayed on screen immediately after each grading decision. This is radically different from historical peer assessment in universities, which has primarily been an asynchronous process of marking students' work by small subsets of the cohort (e.g. one student artefact is marked by fewer than three fellow students). Live Peer Assessment takes place in public, is marked by (as far as practically possible) the whole cohort, and results are instantaneous. This study observes this practice, first on a level 4 course in E-Media Design, where students' main assignment is a multimedia CV (or resume), and secondly on a level 7 course in Multimedia Specification Design and Production, where students produce a multimedia information artefact in both prototype and final versions. In both cases, students learned about these assignments by reviewing work done by previous students in Live Peer Evaluation events, where they were asked to collectively and publicly mark those works according to the same rubrics that the tutors would be using. In the level 4 course, this was used to help students get a better understanding of the marking criteria. In the level 7 course, this goal was also pursued, but the practice was additionally used for the peer marking of students' own work. Among the major findings of this study are:
• In the level 4 course, student attainment in the final assessment improved on average by 13% over 4 iterations of the course, with a very marked increase among students in the lower percentiles.
• The effectiveness of Live Peer Assessment in improving student work comes from:
  o Raising the profile of the marking rubric
  o Establishing a repertoire of example work
  o Modelling the 'noticing' of salient features (of quality or defect), enabling students to self-monitor more effectively
• In the major accepted measure of peer-assessment reliability (correlation between student-awarded marks and tutor-awarded marks), Live Peer Assessment is superior to traditional peer assessment. That is to say, students mark more like tutors when using Live Peer Assessment.
• In the second major measure (effect size), which calculates whether students are more strict or generous than tutors (where the ideal would be no difference), Live Peer Assessment is broadly comparable with traditional peer assessment, but this is susceptible to the conditions under which it takes place.
• The greater alignment of student and tutor marks comes from the training sessions, but also from the public nature of the marking, where individuals can compare their marking practice with that of the rest of the class on a criterion-by-criterion basis.
• New measures proposed in this thesis to gauge the health of peer assessment events comprise: Krippendorff's alpha, Magin's reciprocity matrix, the median pairwise tutor-student marks correlation, and the skewness and kurtosis of the distribution of pairwise tutor-student marking correlations.
• Recommendations for practice comprise that:
  o Summative peer assessment should not take place under conditions of anonymity, but very light conditions of marking competence should be enforced on student markers (e.g. > 0.2 correlation between individual student marking and that of tutors).
  o Rubrics can be more suggestive and colloquial under conditions of Live Peer Assessment, because the marking criteria can be instantiated in specific examples of student attainment; the criteria may therefore be less legalistically drafted, because a more holistic understanding of quality can be communicated.
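The measures proposed above for gauging the health of a peer assessment event lend themselves to a brief computational illustration. The sketch below is not from the thesis: the function name, data layout and example marks are all hypothetical, and only the median pairwise tutor-student correlation, the skewness and kurtosis of the correlation distribution, and the suggested > 0.2 competence threshold are taken from the summary above (Krippendorff's alpha and Magin's reciprocity matrix would need additional tooling and are omitted).

```python
# Illustrative sketch, not part of the thesis: some proposed "health" measures
# for a peer assessment event, given a matrix of marks whose rows are student
# markers and whose columns are assessed artefacts, plus the tutor's marks.
import numpy as np
from scipy.stats import pearsonr, skew, kurtosis

def peer_assessment_health(student_marks: np.ndarray, tutor_marks: np.ndarray) -> dict:
    """student_marks: shape (n_markers, n_artefacts); tutor_marks: shape (n_artefacts,)."""
    # Correlation between each student marker's marks and the tutor's marks.
    correlations = np.array([pearsonr(row, tutor_marks)[0] for row in student_marks])
    return {
        # Median pairwise tutor-student correlation (one of the proposed measures).
        "median_correlation": float(np.median(correlations)),
        # Shape of the distribution of those correlations.
        "skewness": float(skew(correlations)),
        "kurtosis": float(kurtosis(correlations)),
        # Markers at or below the suggested > 0.2 competence threshold.
        "markers_below_threshold": int(np.sum(correlations <= 0.2)),
    }

# Hypothetical example: 5 student markers grading 8 artefacts on a 0-100 scale.
rng = np.random.default_rng(0)
tutor = rng.integers(40, 95, size=8).astype(float)
students = np.clip(tutor + rng.normal(0, 10, size=(5, 8)), 0, 100)
print(peer_assessment_health(students, tutor))
```

Computing the full distribution of per-marker correlations, rather than a single cohort-level figure, is what makes the skewness and kurtosis diagnostics possible.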
15 |
Developing skills to explain scientific concepts during initial teacher education : the role of peer assessment. Cabello Gonzalez, Valeria Magally January 2013 (has links)
Initial teacher education is an area of weakness within the Chilean education system, yet it is highlighted as a crucial aspect of educational success. Success in educational improvement depends mainly on the teachers (because they enact a reform by putting it into practice), and teacher thinking is likely to influence teacher decision-making. How teacher conceptions and practice change, and how to facilitate this change, was the focus of this study. It explored to what extent peer assessment could facilitate change in pre-service science teachers' conceptions and practices regarding conceptual explanations in science teaching. In a quasi-experimental design, a ten-session peer assessment intervention was carried out with thirty-seven pre-service science teachers in three Chilean universities, each with an experimental and a control group. The intervention sought to develop changes in teachers' conceptions about the quality of explanations and in their skill of explaining scientific concepts. Teachers' thoughts were obtained through a peer assessment questionnaire, feedback sessions, focus groups and interviews. The quality of their explanations was measured at pre-test, post-test and follow-up in their eventual first job via video-recorded microteaching episodes using observational analysis. Inter-rater reliability was calculated on 5% of all qualitative data, and all the videos were rated by two researchers in a blind process. Qualitative analysis indicated how teachers transformed their conceptions about the quality of explanations from general pedagogical knowledge into pedagogical content knowledge. A quantitative instrument was created to evaluate student teachers' explanations in practice; its reliability enables the assessment of the skill of explaining based on ten elements (Cronbach's alpha = .77). Results showed that pre-service teachers significantly improved their explanations of scientific concepts in some practical aspects, although not all of these were transferred into real teaching contexts. The changes in student teachers' conceptions and practice were analysed to indicate how the process occurred, to what extent peer assessment played a role in it, and which elements facilitated or hindered the transfer of the skill of explaining into real teaching. These results indicated that peer assessment can play a noteworthy role in teacher education for developing skills. There are implications for policy and practice in this study, not only for teacher education but also for in-service teacher professional development, and not only for Chile but also for other countries.
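As a hedged aside on the reliability figure quoted above (Cronbach's alpha = .77 for the ten-element instrument), the following sketch shows how that statistic is conventionally computed from an item-response matrix; the function and the example ratings are invented for illustration and do not come from the study.

```python
# Illustrative sketch: Cronbach's alpha for an instrument with k items.
# Rows are rated observations (here, teaching episodes), columns are items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: shape (n_observations, n_items); returns the internal-consistency estimate."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 20 rated microteaching episodes, 10 elements scored 1-4.
rng = np.random.default_rng(1)
base = rng.integers(1, 5, size=(20, 1))                        # shared quality level per episode
ratings = np.clip(base + rng.integers(-1, 2, size=(20, 10)), 1, 4).astype(float)
print(round(cronbach_alpha(ratings), 2))
```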
16 |
Själv- och kamratbedömning : En undersökning av lärares och elevers uppfattningar kring själv- och kamratbedömning. / Self- and peer-assessment : a study of teachers' and students' perceptions of self- and peer-assessment. Leijon, Nathalie, Spindelberger, Theresa January 2013 (has links)
No description available.
17 |
Assessing Scientific Literacy as Participation in Civic Practices : Affordances and constraints for developing a practice for authentic classroom assessment of argumentation, source critique and decision-making. Anker-Hansen, Jens January 2015 (has links)
This thesis takes as its point of departure a view of scientific literacy as situated in participation in civic practices. From such a view, it becomes problematic to assess scientific literacy through decontextualised test items dealing only with single aspects of participation in contexts concerned with science. Due to the complexity of transferring knowledge, it is problematic to assume that people who can explain scientific theories will automatically apply those theories in life, or that such knowledge will influence those people's behaviour. A common way to include more fully the complexity of using science in different practices is to focus participation around an issue and study how students use multiple sources to reflect critically and ethically on that issue. However, participation is situated in practices and thus becomes something specific within those practices. For instance, shopping for groceries for the family goes beyond reflecting critically and ethically on health and the environment, since it involves considering the family economy and the personal tastes of the family members. I have consequently chosen to focus my studies on how to assess scientific literacy as participation in civic practices. The thesis describes a praxis development research study in which I, in cooperation with teachers, designed assessment interventions in lower secondary science classrooms. In the research study I use the theories of Community of Practice and Expansive Learning to study affordances and constraints for assessing communication, source critique and decision-making in the science classroom. The affordances and constraints for students' participation in assessments are studied by using a socio-political debate as an assessment tool. The affordances and constraints for communicating assessment are studied through peer assessments of experimental design. The affordances and constraints for teachers to expand their assessment repertoire are studied through assessment moderation meetings. Finally, the affordances and constraints for designing authentic assessments of scientific literacy are studied through a review of how different research studies use authenticity in science education. The studies show that tensions emerged between the purposes of practices outside the classroom and practices inside the classroom, and that students negotiated these tensions when participating in the assessments. Discussion groups influenced students' decisions about how to use feedback. Feedback that was not used to amend the designs was still used to discuss what should count as quality in experiments. Teachers used the moderation meetings to refine their assessments and teaching. However, conflicting views of scientific literacy as either propositional or procedural knowledge were challenging to overcome. Different publications in science education research emphasised personal or cultural aspects of authenticity. These different uses of authenticity have implications for authentic assessment, regarding the affordances and constraints for how to reify quality from external practices and through students' engagement in practices. The results of the studies point to the gains of focusing assessment on how students negotiate participation in different civic practices. However, this approach puts different demands on assessment design than do assessments in which students' participation is compared with predefined ideals for performance.
At the time of the doctoral defense, the following papers were unpublished and had the following status: Paper 1: Accepted. Paper 2: Submitted. Paper 3: Submitted. Paper 4: Manuscript.
18 |
A CASE STUDY OF PEER ASSESSMENT IN A MOOC-BASED COMPOSITION COURSE: STUDENTS’ PERCEPTIONS, PEERS’ GRADING SCORES VERSUS INSTRUCTORS’ GRADING SCORES, AND PEERS’ COMMENTARY VERSUS INSTRUCTORS’ COMMENTARY. Vu, Lan Thi 01 May 2017 (has links)
Although the use of peer assessment in MOOCs is common, there has been little empirical research about peer assessment in MOOCs, especially composition MOOCs. This study aimed to address issues in peer assessment in a MOOC-based composition course, in particular student perceptions, peer-grading scores versus instructor-grading scores, and peer commentary versus instructor commentary. The findings provided evidence that peer assessment was well received by the majority of student participants, both in their role as peer evaluators of other students' papers and as students being evaluated by their peers. However, many student participants also expressed negative feelings about certain aspects of peer assessment, for example peers' lack of qualifications, peers' negative and critical comments, and the unfairness of peer grading. Statistical analysis of grades given by student peers and instructors revealed consistency among grades given by peers but low consistency between grades given by peers and those given by instructors, with peer grades tending to be higher than those assigned by instructors. In addition, analysis of peer and instructor commentary revealed that peers' commentary differed from instructors' on specific categories of writing issues (idea development, organization, or sentence level). For instance, on average peers focused a greater percentage of their comments (70%) on sentence-level issues than did instructors (64.7%), though both groups devoted more comments to sentence-level issues than to the two other issue categories. Peers' commentary also differed from instructors' in the approaches their comments took to communicating the writing issue (through explanation, question, or correction). For example, in commenting on sentence-level errors, on average 85% of peers' comments included a correction, compared to 96% of instructors' comments. In every comment category (idea development, organization, sentence level), peers used a lower percentage of explanation, at least 10% lower, than did instructors. Overall, the findings and conclusions of the study are limited by (1) the small size of the composition MOOC studied and the small sample of graded papers used for the analysis, (2) the lack of research and scarcity of document archives on the issues the study discussed, (3) the lack of examination of factors (i.e. level of education, cultural background, and English language proficiency) that might affect student participants' perceptions of peer assessment, and (4) the lack of analysis of head notes, end notes, and length of comments. However, the study has made certain contributions to the existing literature, especially on student perceptions of peer assessment in the composition MOOC in this study. Analysis of the grades given by peers and instructors provides evidence-based information about whether online peer assessment should be used in MOOCs, especially composition MOOCs, and about what factors might affect the applicability and consistency of peer grading in MOOCs. In addition, analysis of the data provides insights into the types of comments students in a composition MOOC made as compared to those instructors made. The findings of the study as a whole can inform the design of future research on peer assessment in composition MOOCs and indicate questions that designers of peer assessment training and practice in such MOOCs could find helpful to consider.
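As an illustration of the kind of cross-tabulation such a commentary analysis involves, the sketch below shows how hand-coded comments could be aggregated into the per-group percentages reported above; the data frame, its column names and the example comments are invented, not drawn from the study.

```python
# Illustrative sketch, not from the study: aggregating coded comments into the
# per-group percentages by writing-issue category and by commenting approach.
import pandas as pd

comments = pd.DataFrame({
    "author":   ["peer", "peer", "peer", "instructor", "instructor", "peer"],
    "category": ["sentence-level", "organization", "sentence-level",
                 "sentence-level", "idea development", "sentence-level"],
    "approach": ["correction", "question", "correction",
                 "explanation", "explanation", "correction"],
})

# Share of each writing-issue category within each author group (rows sum to 1).
category_share = pd.crosstab(comments["author"], comments["category"], normalize="index")

# Within sentence-level comments only, how often each group used each approach.
sentence_level = comments[comments["category"] == "sentence-level"]
approach_share = pd.crosstab(sentence_level["author"], sentence_level["approach"], normalize="index")

print(category_share.round(2))
print(approach_share.round(2))
```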
19 |
”Man utvecklar sitt arbete när man har ett bollplank” : Elva gymnasieelevers erfarenheter av kamratrespons i svenskämnet / ”You improve your text when you have a sounding board” : Eleven upper secondary students’ experiences of peer response in the Swedish classroom. Stroka, Maria January 2018 (has links)
”You improve your text when you have a sounding board” is a qualitative study whose aim is to examine what experiences eleven upper secondary students have of peer response, how they view acting as givers and receivers of response in the subject of Swedish, and what expectations the students have of what peer response should look like. Semi-structured group interviews form the basis of the study, and earlier research is presented and discussed together with the interview results. The results show that the students have positive experiences of peer response, but that these differ depending on whether they are acting as givers or receivers of response. Their experiences carry a strong emotional component: they find it demanding and difficult to act as response givers, because they are afraid that the response may be perceived as criticism. At the same time, the results show that they regard peer response as a method that develops their learning, and the students expect to receive response that contains reflective questions and elaborated suggestions about what they can improve.
20 |
How can peer assessment be used in ways which enhance the quality of younger children's learning in primary schools? Boon, Stuart Ian January 2016 (has links)
Peer assessment (PA) actively engages peers in the formative assessment and evaluation of work produced by a peer. This thesis explores how social processes, such as classroom talk, influence the quality of children's learning in more interactive contexts of PA. This focus is needed because children often find PA challenging: they may not have the interpersonal skills to collaborate effectively, leading them to use talk ineffectively as a tool for learning. This research was interventionist, and children in the year three and four classes I taught received Thinking Together lessons as a strategy to enhance the quality of their talk in contexts of peer assessment. Methods used to examine the impact of the talk intervention, and to gain greater insight into the role that the social context plays in peer assessment, included transcribed digital audio recordings, open-ended observations, semi-structured interviews, mind maps and children's work. Qualitative data were analysed using thematic coding analysis, whilst data in transcripts were quantitatively analysed to calculate the frequency of words and phrases associated with exploratory talk before and after the intervention. Findings suggest that children's characteristics influence the way they communicate in contexts of PA, and some of the most challenging learners seemed to benefit most from the talk intervention in terms of its influence on their ability to collaborate, hypothesise and reason throughout the peer assessment tasks. The findings also draw attention to previously under-researched PA social processes, such as discussion, negotiation and peer questioning, that lead to outcomes for learners such as self-assessment. The main conclusions drawn are that more interactive kinds of peer assessment might be viewed as a differentiated and discursive practice in which teachers consider the various needs of learners, based on their individual characteristics, and provide appropriate support so they are able to collaborate and use language for mediating effective PA practice.