511

Existing assessment induction programmes and assessment literacy as co-determinants for developing an assessment induction programme for Midrand Graduate Institute / Maria Johanna Pienaar

Pienaar, Maria Johanna January 2014 (has links)
Many lecturers at South African Higher Education Institutions (HEIs) are not necessarily equipped for the challenges imposed on them. Some academic staff join HEIs as subject-specific experts from industry and the corporate world and do not necessarily have education qualifications or experience in lecturing and assessing students. This research was prompted by the researcher’s observations that newly appointed academic staff at Midrand Graduate Institute (MGI) are not formally inducted into their primary duties as lecturers, encompassing general classroom practices related to teaching, learning and assessment. Academic staff at MGI have also reported specific concerns about their preparedness to utilize assessment effectively. As a result, there appeared to be a need to gather information which could inform the development of an assessment induction programme for MGI. By conducting a literature study and an empirical study, existing assessment induction programmes and assessment literacy as co-determinants for developing an assessment induction programme for MGI were investigated. The literature study focused on the theoretical foundations of induction programmes, assessment and assessment literacy. For the empirical part of the study a mixed-method, multiphase design was applied. By means of a document analysis, the nature and scope of existing assessment induction programmes at purposively selected South African HEIs were examined. The quality of assessment literacy of academic staff at MGI was determined through questionnaires and interviews. A total of 101 academic staff, representing various post levels, participated in the research. The key findings of the empirical study revealed that existing assessment induction programmes at South African HEIs are offered at times when academic staff are available and that the duration of such programmes differs significantly from institution to institution. Both new and experienced staff are expected to attend the programmes, and although each programme appears unique, they all share common content. In all cases, Staff Development Units are responsible for facilitating the assessment induction programmes. With regard to the assessment literacy of academic staff at MGI, it was determined that their assessment literacy is not commensurate with the levels at which they lecture. This was revealed through the challenges they experienced when they were required to explain the assessment process, order the levels of Bloom’s taxonomy and match assessment concepts with appropriate explanations. It was further found that the respondents regarded induction programmes which are specifically aimed at academic elements such as lecturing responsibilities, classroom management and assessment as essential for their personal development. From the research findings the researcher developed a set of guidelines which are proposed for developing an assessment induction programme for MGI. / PhD (Learning and Teaching), North-West University, Vaal Triangle Campus, 2014
513

Evaluation of the effectiveness and predictive validity of English language assessment in two colleges of applied sciences in Oman

Al Hajri, Fatma Said Mohammed January 2013 (has links)
This thesis investigates the effectiveness of English language assessment in the Foundation Programme (FP) and its predictive validity for academic achievement in the First Year (FY) at two Colleges of Applied Sciences (CAS) in Oman. The objectives of this study are threefold: (1) Identify how well the FP assessment has met its stated and unstated objectives and evaluate its intended and unintended outcomes using impact evaluation approaches. (2) Study the predictive validity of FP assessment and analyse the linguistic needs of FY academic courses and assessment. (3) Investigate how FP assessment and its impact are perceived by students and teachers. The research design was influenced by Messick's (1989; 1994; 1996) unitary concept of validity, by Norris's (2006; 2008; 2009) views on validity evaluation and by Owen's (2007) ideas on impact evaluation. The study was conducted in two phases using five different methods: questionnaires, focus groups, interviews, document analysis and a correlational study. In the first phase, 184 students completed a questionnaire and 106 of these participated in 12 focus groups, whilst 27 teachers completed a different questionnaire and 19 of these were interviewed. The aim of this phase was to explore the perceptions of the students and teachers on the FP assessment instruments in terms of their validity and reliability, structure, and political and social impact. The findings indicated a generally positive perception of the instruments, though more so for the Academic English Skills course (AES) than the General English Skills course (GES). There were also calls for increasing the quantity and quality of the assessment instruments. The political impact of the English language FP assessment was strongly felt by the participants. In the second phase, 176 students completed a questionnaire and 83 of them participated in 15 focus groups; 29 teachers completed a different questionnaire and of these 23 teachers were interviewed. The main focus was on students' and teachers' perceptions of FP assessment, and how language accuracy should be considered in marking academic written courses. One finding was that most students in FY tended to face difficulties not only in English but also in what could be called 'study skills'; some of these were attributed to the leniency of FP assessment exit criteria. Throughout the two phases, 118 documents on FP assessment at CAS were thematically analysed. The objective was to understand the official procedures prescribed for writing and using assessment instruments in FP and compare them against actual test papers and classroom practices. The findings revealed the use of norm-referenced assessment instead of criterion-referenced assessment, incompatibility between what was assessed and what was taught, inconsistency in using assessment criteria, and unhelpful verbatim replication of national assessment standards. The predictive validity studies generally found a low overall correlation between students' scores in English language assessment instruments and their scores in academic courses. The findings of this study are in line with most but not all previous studies. The strength of predictive validity was dependent on a number of variables, especially the students' specializations and their self-evaluations of their own English language levels. Some recommendations are offered for the reform of entry requirements in Omani higher education.
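The predictive-validity component described above rests on correlating students' Foundation Programme scores with their later academic results. The sketch below is a minimal, hypothetical illustration of that kind of analysis, not the author's actual procedure; the score lists, variable names and the choice of Pearson's r are assumptions for demonstration only.

```python
# Illustrative sketch only: correlating Foundation Programme (FP) English scores
# with First Year (FY) academic results to gauge predictive validity.
# The score lists below are invented; the thesis's actual dataset and
# statistical procedures may differ.
from statistics import correlation  # Pearson's r (Python 3.10+)

fp_scores = [62, 71, 55, 80, 68, 59, 74, 66]   # hypothetical FP exit scores
fy_scores = [58, 65, 60, 72, 70, 52, 69, 61]   # hypothetical FY course averages

r = correlation(fp_scores, fy_scores)
print(f"Pearson r between FP and FY scores: {r:.2f}")
# A value near zero would echo the study's finding of low overall predictive validity.
```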
514

Skrivbedömning och validitet : Fallstudier av skrivbedömning i svenskundervisning på gymnasiet / Validity and classroom-based writing assessment : Case studies of writing instruction and assessment in upper secondary school

Skar, Gustaf January 2013 (has links)
This doctoral dissertation reports on results from three explorative case studies of teacher assessment practice within upper secondary school writing instruction. In Sweden, almost all responsibility for constructing, administering, and scoring assessments lies with the individual teacher. Unfortunately, little is known about classroom-based writing assessment and even less is known about the validity of such assessment. The aim of this dissertation is to build validity arguments based on classroom assessment practice concerning achievement tests in upper secondary school. Three research questions were formulated in relation to this aim: (1) To what extent can interpretation of scores be argued for? (2) To what extent can students be said to have had equal opportunities to learn what is later assessed? (3) To what extent can suggested and observed usage of scores be argued for, given the relationship between instruction and assessment? The data for the studies consists of audio-recorded observations, student texts, teacher comments, and scoring rubrics, and was gathered within writing units in three upper secondary schools. Altogether the observations comprise 17 lessons (or 19.6 hours). Data was also collected in interviews with three teachers and their students. The data on instructed and assessed writing was analyzed using conceptual tools related to a theoretical model of writing, the so-called Writing Wheel. The validity argument was built using Bachman’s (2005) Assessment Use Argument (AUA) model. On an aggregated level, the results indicate threats to the validity of interpretation of scores, to the validity of usage of scores, and threats associated with inequitable assessment. The first type of threat stems, for example, from scoring rubrics that are not aligned with the assessment tasks at hand and from a low degree of standardization in the administration of the assessment tasks. The second type of threat is related to this; for example, low standardization led to incomparable student marks. While some students could benefit from contacts with able peers (and/or parents), others could not. The third type relates to opportunities to learn what is later assessed, which were not fully evident in some cases. Finally, the results also indicate that the building of an AUA can serve as a syllabus-design tool for practitioners as well as a design tool in intervention studies. The closing chapter of the dissertation presents a number of hypotheses based on the case study findings. Concluding remarks suggest how these could be tested.
515

The peer assessment process: A case study of university students receiving peer feedback

Katsoulas, Eleni 04 January 2013 (has links)
The ability to receive regular peer feedback on learning should, in theory, be valuable to learners. A formative view will be presented in this study, in which information is collected and used as feedback for student learning. This differs from summative practices where the purpose is to make judgments about the extent to which learning has taken place. This case study takes place in a first-year master’s Occupational Therapy (OT) course where the focus is on the development of communication skills. These skills are developed through interviewing and assessment strategies. This case focuses on the feedback received by students from their peers based on the clinical interviews that were conducted. Peers in this study are members of the same learning team who have been divided into these groups for the purpose of learning together. Students in this course receive both written and oral peer feedback during peer assessment exercises. This feedback is formally reflected on by students as self-assessment. Although both peer and self-assessments are used for formative purposes in this course, the primary focus of this study is on peer assessment. Six participants were recruited for this study. The data for this inquiry consisted of transcripts from six semi-structured interviews and a focus group, as well as written artifacts from the course. The data analysis revealed three core themes related to both the peer assessment process and peer feedback. Motivation for Learning and Awareness of Growth or Development were identified as two key themes relating to student learning. The third theme identified was Factors that Impacted the Learning Experience, which had to do with how students felt about having engaged in the peer assessment process. A unique finding regarding the latter theme centered on the time required to take on the roles inherent in peer assessment activities. Students offered insights into the relationship between stress and motivation for learning when taking on peer assessment responsibilities. This study contributes to our understanding of the meaning and consequences of implementing peer assessment in the communication module of the OT course. Insights into the implications of this study for higher education in relation to peer assessment are also explored. / Thesis (Master, Education) -- Queen's University, 2012-12-29 00:11:03.187
516

Validating the Canadian Academic English Language Assessment for diagnostic purposes from three perspectives: scoring, teaching, and learning

Doe, Christine 30 April 2013 (has links)
Large-scale assessments are increasingly being used for more than one purpose, such as admissions, placement, and diagnostic decision-making, with each additional use requiring validation regardless of previous studies investigating other purposes. Despite this increased multiplicity of test use, there is limited validation research on adding diagnostic purposes (with the intention of directly benefiting teaching and learning) to existing large-scale assessments designed for high-stakes decision-making. A challenge with validating diagnostic purposes is to adequately balance investigations into the score interpretations and the intended beneficial consequences for teachers and students. The Assessment Use Argument (AUA) makes explicit these internal and consequential validity questions through a two-stage validation argument (Bachman & Palmer, 2010). This research adopted the AUA to examine the appropriateness of the Canadian Academic English Language (CAEL) Assessment for diagnostic purposes by forming a validity argument, which asked to what extent the CAEL essay met the new diagnostic scoring challenges from the rater perspective, and a utilization argument, which centered on teachers' and students’ uses of the diagnostic information obtained from the assessment. This study employed three research phases at an English for Academic Purposes (EAP) program in one Canadian university. Data collection strategies included interview and verbal protocol data from two raters (Phase 1), interview and classroom observation data from one EAP course instructor (Phase 2), and interview and open-ended survey data from 47 English Language Learners (Phase 3). A multifaceted perception of CAEL for diagnostic purposes was observed: raters noted the greatest diagnostic potential at higher score levels, and teacher and student perceptions were largely influenced by previous diagnostic assessment experiences. This research emphasized the necessity of including multiple perspectives across contexts to form a deeper understanding of the inferences and decisions made from diagnostic results. / Thesis (Ph.D, Education) -- Queen's University, 2013-04-29 09:40:22.649
517

Relatively idiosyncratic : exploring variations in assessors' performance judgements within medical education

Yeates, Peter January 2013 (has links)
Background: Whilst direct-observation, workplace-based (or performance) assessments sit at the conceptual epitome of assessment within medical education, their overall utility is limited by high inter-assessor score variability. We conceptualised this issue as one of problematic judgements by assessors. Existing literature and evidence about judgements within performance appraisal and impression formation, as well as the small evolving literature on raters’ cognition within medical education, provided the theoretical context for studying assessors’ judgement processes. Methods and Results: In this thesis we present three studies. The first study adopted an exploratory approach to studying assessors’ judgements in direct-observation performance assessments, by asking assessors to describe their thoughts whilst assessing standard videoed performances by junior doctors. Comments and follow-up interviews were analysed qualitatively using grounded theory principles. Results showed that assessors attributed different levels of salience to different aspects of performances, understood criteria differently (often comparing performance against other trainees) and expressed their judgements in unique narrative language. Consequently assessors’ judgements were comparatively idiosyncratic, or unique. The two subsequent follow-up studies used internet-based experimental designs to further investigate the comparative judgements demonstrated in study 1. In study 2, participants were primed with either good or poor performances prior to watching intermediate (borderline) performances. In study 3 a similar design was employed, but participants watched identical performances in either increasing or decreasing levels of proficiency. Collectively, the results of these two studies showed that recent experiences influenced assessors’ judgements, repeatedly showing a contrast effect (performances were scored unduly differently from earlier performances). These effects were greater than participants’ consistent tendency to be either lenient or stringent and occurred at multiple levels of performance. The effect appeared to be robust despite our attempting to reduce participants’ reliance on the immediate context. Moreover, assessors appeared to lack insight into the effect on their judgements. Discussion: Collectively, these results indicate that variation in assessors’ scores can be substantially explained by idiosyncrasy in cognitive representations of the judgement task and by susceptibility to contrast effects through comparative judgements. Moreover, assessors appear to be incapable of judging in absolute terms, instead judging normatively. These findings have important implications for theory and practice and suggest numerous further lines of research.
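A contrast effect of the kind reported above would show up as the same borderline performances being scored lower after assessors had seen good exemplars than after poor ones. The following sketch is a hypothetical illustration of comparing such group means; the scores are invented and the thesis's actual analyses were more elaborate.

```python
# Hypothetical illustration of a contrast effect: identical borderline
# performances receive lower scores when rated after good exemplars than
# when rated after poor ones. All scores are invented for demonstration.
from statistics import mean

scores_after_good = [3.1, 3.4, 2.9, 3.2, 3.0]  # borderline videos rated after good performances
scores_after_poor = [4.2, 4.0, 4.4, 3.9, 4.1]  # the same videos rated after poor performances

gap = mean(scores_after_poor) - mean(scores_after_good)
print(f"Mean after poor: {mean(scores_after_poor):.2f}, "
      f"mean after good: {mean(scores_after_good):.2f}, contrast gap: {gap:.2f}")
# A positive gap is consistent with judging relative to recent experience
# rather than against an absolute standard.
```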
518

An investigation into teachers' perceptions of classroom-based assessment of English as a foreign language in Korean primary education

Shim, Kyu Nam January 2008 (has links)
This study aims to investigate Korean teachers’ beliefs and their practice with respect to classroom-based English language assessment; thus it examines the teachers’ current working principles of assessment and their practices. This study also sets out to uncover, and to gain an in-depth understanding of, further issues which emerged from the dissonance between the teachers’ beliefs and their practice. Following a discussion of the English teaching and assessment context, the first part of the study examines mainstream theories of language testing or assessment; it then considers how closely classroom-based assessment in Korean primary schools conforms to these theoretical principles. The second part of the study presents a small-scale research project. Four stages in teachers’ classroom-based assessment were examined: planning, implementation, monitoring, and recording and dissemination. A questionnaire was developed reflecting these stages; its findings were analyzed statistically and qualitatively. Further qualitative data was also collected and analyzed through interviews with volunteer participants. This is based on an analysis of teachers’ firsthand experience and their opinions of the assessment of English as a foreign language. The results of the study revealed that generally the teachers hold and exercise their own firm beliefs regarding classroom-based assessment, and have a good knowledge of assessment or testing principles; thus they carry out their assessment using appropriate procedures, taking into account the context of English teaching and assessment in which they operate. However, there were a number of issues which emerged from their assessment beliefs and their practice. It became clear that they did not put some of their principles into practice; a number of important factors, which are normally outside the teachers’ control, were found to be responsible for this; these include overcrowded classrooms, heavy teaching loads, the central bureaucracy of the education system which controls primary education, and a shortage of funding for foreign language teaching. Teachers were also affected by the rather complex relationships with other teachers, head teachers, and even the parents of the students. However, it is evident that the teachers are constantly developing their skills and knowledge regarding assessment in order to address any possible challenges or tasks given to them. In addition, certain areas needing further investigation were identified. Based on the literature review and the findings of the research, tentative implications and recommendations for the development of classroom-based language assessment are discussed.
519

Multiple-respondent anecdotal assessments for behavior disorders: An analysis of interrater agreement and correspondence with treatment outcomes.

Wolf, Roxanne 05 1900 (has links)
The current study was designed to further evaluate the usefulness of anecdotal assessments. The goal of this study was to evaluate the overall agreement between multiple respondents on the primary function of aberrant behavior using the Motivation Assessment Scale (MAS) and the Functional Analysis Screening Tool (FAST) and, if agreement was obtained, to assess the effectiveness of treatment based on the outcome of the assessments. Results showed that anecdotal assessments were able to identify the general type of contingency maintaining two participants' problem behavior. However, for one participant the assessments did not correctly identify the specific form of reinforcement (attention or tangible items) that maintained the aberrant behavior.
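One simple way to summarise agreement between multiple respondents on the primary maintaining function is the proportion of respondent pairs nominating the same function. The sketch below is a hypothetical illustration of that idea only; the respondent labels and function categories are invented and are not taken from the study.

```python
# Hypothetical sketch: pairwise agreement among respondents on the primary
# function identified by an anecdotal assessment such as the MAS or FAST.
# The respondent data is invented for illustration.
from itertools import combinations

primary_functions = {
    "teacher":   "attention",
    "parent":    "attention",
    "therapist": "tangible",
}

pairs = list(combinations(primary_functions.values(), 2))
agreements = sum(1 for a, b in pairs if a == b)
print(f"Pairwise agreement: {agreements}/{len(pairs)} = {agreements / len(pairs):.0%}")
# Here one of three pairs agrees (33%); in the study, agreement across
# respondents was assessed before treatments based on the identified
# function were evaluated.
```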
520

Formativ bedömning : En kvalitativ studie om hur fem lågstadielärare reflekterar kring formativ bedömning och formativt arbetssätt i sin undervisning / Formative assessment : A qualitative study of how five primary school teachers reflect on formative assessment and formative approaches in their teaching

Renlund, Julia January 2016 (has links)
In this thesis I intend to write about how five primary school teachers reflect on formative assessment and how the formative assessment of students can take place in practice according to the teachers. My questions are: How do the five teachers perceive formative assessment as a pedagogical approach? How do the teachers assess students’ knowledge? In what way do the teachers give feedback to students? The work is based on a qualitative interview method in which five elementary school teachers were able to reflect on their classroom practice with a focus on formative assessment. The theory I have chosen to work from is the so-called five key strategies of formative assessment. These strategies involve clarifying, communicating and creating understanding of the learning objectives and criteria for progress; achieving effective classroom discussions, activities and learning data that show that learning has taken place; providing feedback that moves learning forward; enabling students to become learning resources for one another; and enabling students to own their own learning. The results of the interviews showed that a formative approach was present in classrooms in a more implicit than explicit sense. The formative approach was a fairly new phenomenon and had not yet been adopted to any great extent at the interviewed teachers’ schools. Assessment materials were used to clarify the learning outcomes for students. Methods and activities in the classrooms took place on a smaller scale in the form of discussions between students. Self-assessment was regarded by most teachers as something they could not imagine, or as too difficult for the students because of the children's young age. The results also showed that there is a great need to make time for evaluation and to gain knowledge of articulate, reflective subject issues to provide students with
