About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Writing Values: Between Composition and The Disciplines

Gooch, Jocelyn Joann 05 October 2006
No description available.
32

Affective Possibilities for Rhetoric and Writing: How We Might Self-Assess Potentiality in Composition

Schaffer, Martha Wilson 16 April 2014
No description available.
33

Reading Beyond The Folder: Classroom Portfolio Assessment As A Literacy Event

Greve, Curt Michael 27 July 2016
No description available.
34

“She is such a B!” – “Really? How can you tell?”: A qualitative study into inter-rater reliability in grading EFL writing in a Swedish upper-secondary school

Mård Grinde, Josefin January 2019
This project investigates the extent to which EFL teachers’ assessment practices differ when assessing two students’ written texts in a Swedish upper-secondary school. It also seeks to understand the factors influencing inter-rater reliability in the teachers’ assessment and marking process. The results show inconsistencies in the summative grades given by the raters, including in what the raters deem important in the rubric, even though the actual assessment process was very similar across raters. A content analysis of the factors the raters perceived as influential showed that peer assessment, assessment training, context, and time were important to them. Emerging themes indicate that the interpretation of rubrics, which should matter most in assessment, causes inconsistencies in summative marking even when raters use the same rubrics, criteria, and instructions. The results suggest a need for peer assessment as a tool in the assessment and marking of students’ texts to ensure inter-rater reliability, which would mean that more time needs to be allocated to grading.
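The abstract above does not name a particular agreement statistic; as an illustrative sketch of how inter-rater reliability on summative grades can be quantified, the following Python function computes Cohen's kappa, a chance-corrected agreement measure for two raters (the grade labels and data are invented for the example):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' nominal grades."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of texts on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's grade frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[g] * freq_b[g] for g in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical grades from two raters for the same six student texts.
grades_1 = ["A", "B", "B", "C", "E", "B"]
grades_2 = ["A", "B", "C", "C", "E", "B"]
print(round(cohens_kappa(grades_1, grades_2), 3))  # prints 0.769
```

A kappa near 1 indicates near-perfect agreement, while values around 0 mean the raters agree no more often than chance would predict.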
35

Students and Faculty Indivisible: Crafting a Higher Education Culture of Flourishing

Camfield, Eileen K. 01 January 2015
This dissertation comprises three separate articles addressing related issues central to the culture and future of higher education. The questions that animate the investigations are: In what ways is writing self-efficacy forged in the learning relationships between student and instructor? In what ways, if any, do traditional assessment practices impact student development? In what ways, if any, does institutional culture shape faculty identity, and what is gained or lost in the process? These queries stem from concerns about possible disconnects between visions of higher education's potential and actual practices in the classroom. The dissertation uses grounded theory to explore the deep nature of student learning needs as articulated by the students themselves, seeks alignment between pedagogical and assessment protocols that foster writing expertise, and uses social reproduction theory and intersectionality to reveal the foundations of faculty identity development that can work across student development needs. Specific recommendations for meaningful reform are identified with an eye toward cultivating a culture of collegiality and mutual trust where learning relationships can flourish.
36

Assisting Novice Raters in Addressing the In-Between Scores When Rating Writing

Greer, Brittney 16 June 2013
In the research on rating ESL writing assessments, borderline writing samples are mentioned, but a solution has yet to be offered. Borderline samples are writing samples that do not fit neatly into one level of the rubric but instead show characteristics of multiple levels. The aim of this thesis is to provide an improved training module for an Intensive English Program by exposing new raters to borderline samples and to the rating rationale of experienced raters. The purpose of this training is to increase the confidence, consistency, and accuracy of novice raters when rating borderline samples of writing. The training consists of a workbook with a rubric and instructions for use, benchmark examples of writing, borderline examples of writing with comments from experienced raters defending the established scores, and a variety of writing samples for practice. The selection of the benchmark and borderline examples was informed by the fit statistic from existing datasets that had been analyzed with many-facet Rasch measurement. Eight experienced raters provided rubric-based rationale explaining why each borderline sample received its established score and why it could be considered at a different level. To assess the effectiveness of the training workbook, it was piloted by 10 novice raters who rated a series of essays and responded to a survey. Survey results showed that rater confidence increased following the training, but that raters needed more time with the training materials to use them properly. The statistical analyses showed no significant changes, which could be due to limitations of the data collection. Further research on the effectiveness of this training workbook is needed, as is broader discussion in the field of the prevalent issue of rating borderline samples of writing.
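The thesis selected its borderline samples using fit statistics from many-facet Rasch measurement; a full Rasch analysis is beyond a short sketch, but as a much simpler stand-in, score spread across raters can flag candidate borderline samples. The score data and cutoff below are invented for illustration:

```python
from statistics import mean, pstdev

def flag_borderline(scores_by_sample, spread_cutoff=0.45):
    """Flag samples whose scores disagree across raters: a high spread
    suggests the writing mixes traits of adjacent rubric levels."""
    report = {}
    for sample_id, scores in scores_by_sample.items():
        spread = pstdev(scores)
        report[sample_id] = {
            "mean": round(mean(scores), 2),
            "spread": round(spread, 2),
            "borderline": spread > spread_cutoff,
        }
    return report

# Hypothetical 1-6 holistic scores from four raters per essay.
ratings = {
    "essay_01": [4, 4, 4, 4],  # full agreement: a clean benchmark
    "essay_02": [3, 4, 3, 4],  # split across two levels: borderline
    "essay_03": [5, 5, 4, 5],  # mild disagreement
}
report = flag_borderline(ratings)
print([s for s, r in report.items() if r["borderline"]])  # ['essay_02']
```

Unlike a Rasch fit statistic, raw spread does not account for rater severity or essay difficulty, but it captures the same intuition: samples that split experienced raters are the ones worth annotating for novice training.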
37

Exploring Uses of Automated Essay Scoring for ESL: Bridging the Gap between Research and Practice

Tesh, Geneva Marie
Manually grading essays and providing comprehensive feedback pose significant challenges for writing instructors, requiring subjective assessments of various writing elements. Automated essay scoring (AES) systems have emerged as a potential solution, offering improved grading consistency and time efficiency, along with insightful analytics. However, the use of AES in English as a Second Language (ESL) instruction remains rare. This dissertation explores the implementation of AES in ESL education to enhance teaching and learning. It presents a study in which ESL teachers learned to use LightSide, a free and open text-mining tool, to enhance writing instruction. The study involved observations, interviews, and a workshop in which the teachers built their own AES models with LightSide, and it addressed questions about teacher interest in using AES, the challenges teachers faced, and the workshop's influence on teachers' perceptions of AES. By exploring the use of AES in ESL education, this research provides insights to inform the integration of technology and enhance the teaching and learning of writing skills for English language learners.
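The abstract does not describe LightSide's internals, and its actual feature extraction and learners are far richer; as a generic illustration of the feature-based shape of an automated essay scorer, the sketch below scores a new essay by averaging the human scores of its nearest neighbours in a tiny, invented training set (all essays, features, and scores are hypothetical):

```python
import math

def surface_features(text):
    """Crude surface features: length, vocabulary size, word length."""
    words = text.lower().split()
    return {
        "length": len(words),
        "types": len(set(words)),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

def feature_distance(f, g):
    """Euclidean distance between two feature dicts with the same keys."""
    return math.sqrt(sum((f[k] - g[k]) ** 2 for k in f))

def predict_score(essay, training, k=2):
    """Score an essay as the mean score of its k nearest training essays."""
    f = surface_features(essay)
    nearest = sorted(
        training, key=lambda t: feature_distance(f, surface_features(t[0]))
    )[:k]
    return sum(score for _, score in nearest) / k

# Invented (essay text, human score on a 1-5 scale) training pairs.
training = [
    ("good essay with varied vocabulary and many different words here", 5),
    ("short bad essay", 2),
    ("another longer essay with some varied words in it today", 4),
]
print(predict_score("an essay with varied words", training))  # prints 3.0
```

A real AES system such as LightSide learns weights over thousands of lexical features rather than a handful of surface statistics; the point here is only the pipeline shape: text to features to model to score.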
38

Dynamic Criteria Mapping: A Study of the Rhetorical Values of Placement Evaluators

Stalions, Eric Wesley 20 June 2007
No description available.
39

Rhetorical Concerns in a Set of Ninth Grade Compositions, Optimal Revisions and Non-optimal Revisions

Sewell, Judith A. 30 April 2009
No description available.
40

The Effect of Elaborative Interrogation on the Synthesis of Ideas from Multiple Sources of Information

Farooq, Omer 02 May 2018
No description available.
