1. Scaling Undergraduate Scientific Writing via Prominent Feature Analysis
Gallo, Katarzyna Zaruska, 14 December 2018
Prominent Feature Analysis (PFA) is a reliable and valid writing assessment tool, derived from the writing it is used to assess. PFA, used to assess on-demand expository essays in Grades 3-12, uncovers positive and negative characteristics of a writing sample. To extend PFA to a new academic level and genre, I assessed the scientific writing of 208 undergraduates, identifying 35 linguistic and 20 scientific prominent features. An essay could earn up to 28 positive marks (24 linguistic and four scientific) and up to 27 negative marks (11 linguistic and 16 scientific). The number of prominent features per paper ranged from 3 to 25 (M = 12.45, SD = 3.88). The highest counts of positive and negative prominent features observed were 17 (M = 4.11, SD = 3.96) and 16 (M = 8.34, SD = 3.25), respectively. Rasch analysis revealed a good data-model fit, with an item separation of 5.81 (reliability = .97). Estimated item difficulties spanned more than 10 logits; common errors were easier to avoid than “good writing” characteristics were to exhibit. Significant correlations among linguistic features, but not between linguistic and scientific features, suggest that general writing proficiency does not assure excellence in novices’ scientific writing. Ten linguistic features showed significant moderate-to-strong intercorrelations with one another and appeared to represent writing proficiency. Student GPA correlated significantly with raw prominent feature scores (r = .37, p < .01) and negatively with the sum of negative linguistic features (r = -.40, p < .01), supporting the scale’s validity and suggesting that stronger students are better at avoiding common writing errors than less able learners. PFA scores also correlated significantly and positively with composite ACT scores. To investigate PFA’s ability to track change in writing over time, I compared two sets of prominent feature scores from 25 students. Compared with their earlier essays, students’ later (longer) essays exhibited significantly more positive and more negative features. Prominent feature scores did not correlate significantly between the two sets. This suggests that while PFA is a valid and appropriate tool for the analysis of undergraduate scientific writing, it was not suitable for tracking change in writing ability in this small sample.
2. "It's a matter of individual taste, I guess": secondary school English teachers' and students' conceptualisations of quality in writing
Lines, Helen Elizabeth, January 2014
This thesis presents an investigation into secondary school English teachers’ and students’ conceptualisations of good writing, and how they might use their understandings of quality in writing for the purpose of improving writing. By focusing on the views and classroom practices of twelve-year-old students and their teachers, the research aims to advance understanding of teachers’ and students’ conceptual thinking about writing quality, and the underlying constructs. The research utilises data from an ESRC-funded project titled Grammar for Writing?: The Impact of Contextualised Grammar Teaching on Pupils’ Writing and Pupils’ Metalinguistic Understanding (grant number RES-062-23-0775). This data was gathered from thirty-one teachers and their Year 8 students over three terms. Lesson observations took place once each term, and were followed by interviews with each project teacher and one teacher-chosen student from each class. Interview questions relating to beliefs about good writing were included in the project schedules and were inductively analysed to discern themes in participants’ responses. Interviews with students took the form of ‘writing conversations’ during which students commented on samples of their own and their peers’ writing. A small-scale follow-up study with three Year 8 classes in one secondary school was used to confirm initial findings and to provide additional data on students’ beliefs about good writing. The research found that teachers’ conceptualisations of writing quality were internally consistent but that variation between teachers was marked. Teachers not only valued different qualities in writing but experienced different degrees of conflict and ambiguity when relating their personal construct of quality to the official, public construct, as embodied in national assessment criteria. The findings support earlier views of teacher judgement as richly textured and complex, drawing on different available indexes, including idiosyncratic conceptualisations of writing quality. Whilst students’ criteria for good writing echoed their teachers’ criteria to some extent, there was also evidence of students drawing on their own conceptualisations of quality, especially in relation to the intended impact of writing on the reader. Many students expressed a strong awareness of writing for an audience and clearly valued writing as a social practice. They especially valued peer judgement of their writing. However, students’ strategies for improving writing were often difficult to articulate, formulaic and generalised, or circumscribed by limited linguistic subject knowledge. The study is significant in offering an insight into teachers’ and students’ conceptualisations of writing quality and how these might be brought into play in the writing classroom. The findings may have particular resonance since they are reported at a time of radical change to assessment policy and practice in secondary schools in England.