1

Examining the Role of Linguistic Flexibility in the Text Production Process

January 2017
abstract: A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts because they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to find support in the writing literature: although narrative elements may sometimes be associated with high-quality writing, the majority of research finds that higher-quality writing is associated with decreased text narrativity and lower scores on measures of readability in general. One potential explanation for this conflicting evidence lies in the situational influence of text elements on writing quality. In other words, the frequency of specific linguistic or rhetorical text elements alone may not be consistently indicative of essay quality. Rather, these effects may be largely driven by individual differences in students' ability to leverage the benefits of these elements in appropriate contexts. This dissertation presents the hypothesis that writing proficiency is associated with an individual's flexible use of text properties, rather than simply the consistent use of a particular set of properties. Across three experiments, this dissertation relies on a combination of natural language processing and dynamic methodologies to examine the role of linguistic flexibility in the text production process. Overall, the studies included in this dissertation provide important insights into the role of flexibility in writing skill and develop a strong foundation on which to conduct future research and educational interventions. / Dissertation/Thesis / Doctoral Dissertation Psychology 2017
2

An Analysis of the Relationship Between 4 Automated Writing Evaluation Software and the Outcomes in the Writing Program Administrator’s “WPA Outcomes for First Year Composition”

January 2017
abstract: My study examined Automated Writing Evaluation (AWE) tools and their role within writing instruction. This examination was framed as a comparison of four AWE tools against the different outcomes in the Writing Program Administrators' "Outcomes Statement for First Year Composition" (the OS). I also reviewed studies that identify feedback as an effective tool within composition instruction, as well as literature related to the growth of AWE and the two different ways these programs are being utilized: to provide scoring and to generate feedback. My research focused on the feedback-generating component of AWE tools and their role in helping students meet the outcomes outlined in the OS. To complete this analysis, I coded the OS, using its outcomes as a reliable indicator of the academic community's perspectives on First Year Composition (FYC). This coding was applied to text associated with two different kinds of feedback-related AWE tools. Two of the tools used in this study, Writerkey and Eli Review, facilitated human feedback using analytical properties, while the other two, WriteLab and PEG Writing Scholar, generated automated feedback. I also reviewed instructional documents associated with each AWE tool and used the coding to compare the features described in each text with the different outcomes in the OS. The most frequently occurring codes from the feedback were related to Rhetorical Knowledge and other outcomes associated with revision, while the most common codes from the instructional documents were associated with feedback and collaboration. My research also revealed that none of these AWE tools was capable of addressing certain outcomes, mostly those related to activities outside the actual process of composing, such as the act of reading and the various writing mediums. / Dissertation/Thesis / Masters Thesis Composition 2017
3

The Android English Teacher: Writing Education in the Age of Automation

Daniel C Ernst (9155498) 23 July 2020
<p>In an era of widespread automation—from grocery store self-checkout machines to self-driving cars—it is not outrageous to wonder: can teachers be automated? And more specifically, can automated computer teachers instruct students how to write? Automated computer programs have long been used in summative writing evaluation efforts, such as scoring standardized essay exams, ranking placement essays, or facilitating programmatic outcomes assessments. However, new claims about automated writing evaluation's (AWE) formative educational potential mark a significant shift. My project questions the effectiveness of using AWE technology for formative educational efforts such as improving and teaching writing. Taken seriously, these efforts portend a future embrace of semi- or even fully automated writing classes, an unprecedented development in writing pedagogy.</p><p>Supported by a summer-long grant from the Purdue Research Foundation, I conducted a small-<i>n </i>quasi-experiment to test claims by online college tutoring site Chegg.com that its EasyBib Plus AWE tool can improve both writing and writers. The experiment involved four college English instructors reading pairs of essays, each pair comprising one AWE-treated and one untreated version of the same essay. Using a comparative judgment model, a rubric-free method of writing assessment based on Thurstone's law, raters read each pair and designated one essay "better." Across four raters and 160 essays, I found that AWE-treated essays were designated better only 30% of the time (95% confidence interval: 20-40%), a statistically significant difference from the null hypothesis of 50%. The results suggest that Chegg's EasyBib Plus tool offers no discernible improvement to student writing, and potentially even worsens it.</p><p>Finally, I analyze Chegg's recent partnership with the Purdue Writing Lab and Online Writing Lab (OWL). 
The Purdue-Chegg partnership offers a useful test case for anticipating the effects of higher education’s embrace of automated educational technology going forward. Drawing on the history of writing assessment and the results of the experiment, I argue against using AWE for formative writing instruction. In an era of growing automation, I maintain that a human-centered pedagogy remains one of the most durable, important, effective, and transformative ingredients of a quality education.</p>
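The statistical claim in the abstract above can be sketched as follows. This is an illustrative reconstruction, assuming the 30% win rate corresponds to roughly 48 of 160 pairwise judgments (the exact count is not stated here), using an exact two-sided binomial test against the 50% null and a normal-approximation confidence interval:

```python
from math import comb, sqrt

n = 160   # essay pairs judged
k = 48    # assumed: pairs where the AWE-treated essay was judged "better" (~30%)
p_hat = k / n

# Exact two-sided binomial test against the null p = 0.5.
# Because the null distribution is symmetric, the two-sided
# p-value is twice the lower-tail probability P(X <= k).
lower_tail = sum(comb(n, i) * 0.5**n for i in range(k + 1))
p_value = min(1.0, 2 * lower_tail)

# Normal-approximation (Wald) 95% confidence interval for the proportion.
se = sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

print(f"observed rate: {p_hat:.0%}")
print(f"approx. 95% CI: {ci[0]:.1%} to {ci[1]:.1%}")
print(f"two-sided p-value: {p_value:.2g}")
```

A Wald interval on 48/160 comes out near 23-37%; the 20-40% interval reported in the abstract presumably reflects a different interval method or an adjustment for rater effects, so this sketch shows only the general shape of the analysis.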
