11

Rubric Rating with MFRM vs. Randomly Distributed Comparative Judgment: A Comparison of Two Approaches to Second-Language Writing Assessment

Sims, Maureen Estelle 01 April 2018
The purpose of this study is to explore a potentially more practical approach to direct writing assessment using computer algorithms. Traditional rubric rating (RR) is a common yet highly resource-intensive evaluation practice when performed reliably. This study compared the traditional rubric model of ESL writing assessment and many-facet Rasch modeling (MFRM) to comparative judgment (CJ), the new approach, which shows promising results in terms of reliability and validity. We employed two groups of raters, novice and experienced, and used essays that had been previously double-rated, analyzed with MFRM, and selected with fit statistics. We compared the results of the novice and experienced groups against the initial ratings using raw scores, MFRM, and a modern form of CJ: randomly distributed comparative judgment (RDCJ). Results showed that the CJ approach, though not appropriate for all contexts, can be valid and as reliable as RR while requiring less time to generate procedures, train and norm raters, and rate the essays. Additionally, the CJ approach is more easily transferable to novel assessment tasks while still providing context-specific scores. Results from this study will not only inform future studies but can help guide ESL programs in determining which rating model best suits their specific needs.
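
The abstract does not spell out the scaling algorithm behind CJ, so the following is a minimal sketch assuming the Bradley-Terry formulation commonly used in CJ systems, in which pairwise judgments are converted into latent quality scores on a logit-like scale. The function name and the toy win matrix are illustrative, not taken from the study.

```python
import numpy as np

def bradley_terry(wins, n_iter=500, tol=1e-9):
    """Estimate latent essay quality from pairwise judgments.

    wins[i, j] = number of times essay i was preferred over essay j.
    Uses Hunter's (2004) MM updates for the Bradley-Terry model.
    """
    n = wins.shape[0]
    p = np.ones(n)                       # current quality estimates
    pair_counts = wins + wins.T          # comparisons per pair
    total_wins = wins.sum(axis=1)
    for _ in range(n_iter):
        denom = (pair_counts / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = total_wins / denom
        p_new /= p_new.sum()             # fix the arbitrary scale
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return np.log(p)                     # log scale, analogous to a Rasch logit

# Toy example: 4 essays, each pair judged several times.
wins = np.array([[0, 3, 4, 5],
                 [1, 0, 3, 4],
                 [0, 1, 0, 3],
                 [0, 0, 1, 0]])
print(bradley_terry(wins))
```

Because the output lives on a log scale, the resulting scores can be compared across rater groups in roughly the way the study compares RDCJ results against MFRM measures.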
12

Mapping the Relationships among the Cognitive Complexity of Independent Writing Tasks, L2 Writing Quality, and Complexity, Accuracy and Fluency of L2 Writing

Yang, Weiwei 12 August 2014
Drawing upon the writing literature and the task-based language teaching literature, the study examined two cognitive complexity dimensions of L2 writing tasks: rhetorical task, varying in reasoning demand, and topic familiarity, varying in the amount of direct knowledge of topics. Four rhetorical tasks were studied: narrative, expository, expo-argumentative, and argumentative tasks. Three topic familiarity tasks were investigated: personal-familiar, impersonal-familiar, and impersonal-less familiar tasks. Specifically, the study looked into the effects of these two cognitive complexity dimensions on L2 writing quality scores, their effects on complexity, accuracy, and fluency (CAF) of L2 production, and the predictive power of the CAF features on L2 writing scores for each task. Three hundred and seventy-five Chinese university EFL students participated in the study, and each student wrote on one of the six writing tasks used to study the cognitive complexity dimensions. The essays were rated by trained raters using a holistic scale. Thirteen CAF measures were used, all automated through computer tools. One-way ANOVA tests revealed that neither rhetorical task nor topic familiarity had an effect on the L2 writing scores. One-way MANOVA tests showed that neither rhetorical task nor topic familiarity had an effect on accuracy and fluency of the L2 writing, but that the argumentative essays were significantly more complex in global syntactic complexity features than the essays on the other rhetorical tasks, and that the essays on the less familiar topic were significantly less complex in lexical features than the essays on the more familiar topics. All-possible-subsets regression analyses revealed that the CAF features explained approximately half of the variance in the writing scores across the tasks and that writing fluency was the most important CAF predictor for five tasks; lexical sophistication, however, was the most important CAF predictor for the argumentative task. The regression analyses further showed that the best regression models for the narrative task were distinct from those for the expository and argumentative types of tasks, and that the best models for the personal-familiar task were distinct from those for the impersonal tasks.
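
To make the all-possible-subsets procedure concrete, here is a minimal sketch: it exhaustively fits OLS models over every predictor subset and keeps the best subset of each size. The predictor names and simulated data are hypothetical stand-ins for the study's thirteen automated CAF measures, and adjusted R-squared is assumed as the selection criterion.

```python
import itertools
import numpy as np

def all_subsets_regression(X, y, names):
    """Fit OLS on every subset of predictors; return the best subset
    of each size by adjusted R-squared."""
    n, m = X.shape
    ss_tot = ((y - y.mean()) ** 2).sum()
    best = {}
    for k in range(1, m + 1):
        for cols in itertools.combinations(range(m), k):
            Xk = np.column_stack([np.ones(n), X[:, cols]])   # add intercept
            beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
            resid = y - Xk @ beta
            r2 = 1 - (resid @ resid) / ss_tot
            adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
            if k not in best or adj_r2 > best[k][0]:
                best[k] = (adj_r2, [names[c] for c in cols])
    return best

# Hypothetical CAF predictors of holistic writing scores.
rng = np.random.default_rng(0)
names = ["fluency", "lexical_soph", "clause_len", "error_rate"]
X = rng.normal(size=(375, 4))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=375)
for k, (adj, subset) in sorted(all_subsets_regression(X, y, names).items()):
    print(k, round(adj, 3), subset)
```

With thirteen predictors this is 2^13 - 1 = 8,191 fits, which is why the all-possible-subsets approach is feasible here where it would not be for much larger feature sets.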
13

Exploring secondary writing teachers’ metacognition: an avenue to professional development

Martin, Joy Alison January 1900
Doctor of Philosophy / Curriculum and Instruction / Lotta Larson / Writing teachers teach students to read, write, and think through text. They draw upon their own comprehension to determine if, when, and how to intervene in directing students toward deeper, more thoughtfully written texts by encouraging them to monitor and regulate their thoughts—to be metacognitive. Writing itself has been called "applied metacognition," for it is essentially the production of thought (Hacker, Keener, & Kircher, 2009, p. 154). Yet little is known about the metacognitive practices and behaviors of those who teach writing. The purpose of this instrumental, collective case study was to explore and describe writing teachers' metacognition as they took part in two range-finding events in a midwestern school district. Participants were tasked with reading and scoring student essays and providing narrative feedback to fuel training efforts for future scorers of the district's writing assessments. Each range-finding event constituted a case with fourteen participants. Three administrative facilitators and four retired English teachers participated in both events, along with seven different practicing teachers per case. The study concluded that, indeed, participants perceived and regulated their thinking in numerous ways while reading and responding to student essays. With Flavell's (1979) theoretical model of metacognition as a framework for data analysis, 28 distinct content codes emerged in the data: 1) twelve codes under metacognitive knowledge of person, task, and strategy; 2) seven codes under metacognitive experiences; 3) six codes under metacognitive goals (tasks); and 4) three codes under metacognitive actions (strategies). In addition, three dichotomous themes emerged across the cases, indicating transformational distinctions in teachers' thinking: 1) teaching writing and scoring writing, 2) confusion and clarity, and 3) frustrations and fruits. The study highlighted the potential of improving teachers' meta-thinking about teaching and assessing writing through dialectic conversations with other professionals. Its findings and conclusions encourage teacher educators, practicing teachers, and school district administrators to seek opportunities for cultivating teachers' awareness, monitoring, and regulation of their thoughts about content, instruction, and selves to better serve their students.
14

Testing the Test: Expanding the Dialogue on Workplace Writing Assessment

Tanner, Lindsay Elizabeth 01 December 2017
This project is a case study of writing assessment practices in a particular workplace called "High Hits," a local search engine optimization (SEO) company. The writing tests given to new hires serve a purpose parallel to academic placement exams, in that both are high-stress, high-risk situations that aim to evaluate writer ability rather than the quality of the completed task (Haswell 242, Elbow 83, Moss 110). However, while academic assessment measures ability with the aim of improving students' learning, workplace assessment is driven by market forces and is seen in terms of return on investment. This case study used qualitative and quantitative measures to examine the writing tests of employees; this examination was followed by an analysis of a random sample of the copywriters' subsequent writing tasks to determine whether the methods the company used to assess the writing tests adequately predicted the copywriters' writing ability.
15

Investigating the development of syntactic complexity in L2 Chinese writing

Pan, Xiaofei 01 May 2018
This study investigates the development of second language (L2) Chinese learners' writing through 1) subjective ratings of essay quality, 2) a battery of objective measures representing general syntactic complexity as well as specific syntactic features, and 3) the sources of verb phrase complexity used by learners at different institutional levels. The study first compares the subjective ratings of essays written by learners across four institutional levels and then uses a cumulative link model (CLM) to examine the contribution of the objective measures of linguistic features to the essay ratings. It further identifies a number of sources that learners use to construct complex verb phrases, an important contributor to the essay ratings, and compares their frequency of use across institutional levels. The purpose of the study is to better understand L2 Chinese learners' syntactic development in writing from multi-dimensional perspectives and to identify the most crucial elements determining the quality of writing. The study recruited 105 L2 Chinese college learners to write a narrative essay and an argumentative essay according to prompts. Each writing sample was rated by two independent raters according to the holistic ACTFL Proficiency Guidelines, as well as an analytic rubric adapted for this study from the ESL Composition Profile. The derivation of syntactic complexity measures was based on the rank scales of lexicogrammar in Systemic Functional Linguistics (Halliday & Matthiessen, 2014), involving 12 features at the levels of clause complex, clause, and verb phrase, some of which represent constructions unique to Chinese. A series of statistical tests, including Kruskal-Wallis tests, Dunn's tests, Spearman's correlation tests, and CLMs, were performed to answer the research questions. The findings show that 1) learners' overall writing quality as measured by holistic and analytic ratings does not differ significantly across the first several academic years; 2) higher-level learners are more heterogeneous in writing ability than lower-level learners; 3) phrasal complexity contributes more to essay quality than clausal complexity; 4) the syntactic complexity features that learners develop fastest hardly overlap with those that contribute most to the essay ratings; 5) complex verb phrases come from 10 different sources, and the composition of complex verb phrases remains stable across the groups; and 6) essay type makes a significant difference in holistic and analytic ratings, in the use of syntactic complexity features, and in their contribution to the essay ratings. From a pedagogical view, the study suggests that instruction should focus more on complexity at the phrasal level, especially nominalization and complex verb phrases, which play a more important role in determining writing quality. Some current instructional foci may not necessarily lead to better quality or higher proficiency in Chinese writing.
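
A cumulative link model treats the ordinal rating categories as thresholds on a latent quality scale. The sketch below, a minimal illustration rather than the study's analysis, fits one with statsmodels' OrderedModel on simulated data; the two predictor names are hypothetical stand-ins for the study's twelve SFL-based features.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated stand-in data: 105 essays with ordinal ratings 1-5 and two
# hypothetical complexity predictors.
rng = np.random.default_rng(42)
n = 105
vp_complexity = rng.normal(size=n)       # verb-phrase complexity, standardized
clause_density = rng.normal(size=n)      # clausal complexity, standardized
latent = 1.2 * vp_complexity + 0.3 * clause_density + rng.logistic(size=n)
rating = pd.Series(pd.cut(latent, bins=[-np.inf, -2, -0.5, 0.5, 2, np.inf],
                          labels=[1, 2, 3, 4, 5]))   # ordered categorical

X = pd.DataFrame({"vp_complexity": vp_complexity,
                  "clause_density": clause_density})
res = OrderedModel(rating, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())  # slopes estimate each feature's contribution to ratings
```

The fitted slopes play the role the abstract describes: they quantify how much each syntactic feature moves an essay up the rating scale, which is how phrasal versus clausal contributions can be compared.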
16

A comparability study on differences between scores of handwritten and typed responses on a large-scale writing assessment

Rankin, Angelica Desiree 01 July 2015
As the use of technology for personal, professional, and learning purposes increases, more and more assessments are transitioning from a traditional paper-based testing format to a computer-based one. During this transition, some assessments are offered in both paper and computer formats in order to accommodate examinees and testing center capabilities. Scores on the paper-based test are often intended to be directly comparable to the computer-based scores, but such claims of comparability are often unsupported by research specific to that assessment. Not only should the scores be examined for differences, but the thought processes used by raters while scoring those assessments should also be studied to better understand why raters might score response modes differently. Previous comparability literature can be informative, but more contemporary, test-specific research is needed to fully support the direct comparability of scores. The goal of this thesis was to form a more complete understanding of why analytic scores on a writing assessment might differ, if at all, between handwritten and typed responses. A representative sample of responses to the writing composition portion of a large-scale high school equivalency assessment was used. Six trained raters analytically scored approximately six hundred examinee responses each. Half of those responses were typed, and the other half were the transcribed handwritten duplicates. Multiple methods were used to examine why differences between response modes might exist: a MANOVA framework was applied to examine score differences between response modes, and systematic analyses of think-alouds and interviews were used to explore differences in rater cognition. The results of these analyses indicated that response mode was of no practical significance, meaning that domain scores were not notably dependent on whether a response was presented as typed or handwritten. Raters, on the other hand, had a more substantial effect on scores. Comments from the think-alouds and interviews suggest that, while scores were not affected by response mode, raters tended to consider certain aspects of typed responses differently than handwritten responses. For example, raters treated typographical errors differently from other conventional errors when scoring typed responses, but not while scoring the handwritten duplicates. Raters also indicated that they preferred scoring typed responses over handwritten ones, but felt they could overcome their personal preferences and score both response modes similarly. Empirical investigation of the comparability of scores, combined with analysis of raters' thought processes, helped provide a more evidence-based answer to the question of why scores might differ between response modes. Such information could be useful for test developers when deciding which mode options to offer and how best to train raters to score such assessments. The design of this study could itself be useful to testing organizations and future research endeavors, as a guide for exploring score differences and the human-based reasons behind them.
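
To show the shape of such an analysis, here is a minimal MANOVA sketch using statsmodels, with response mode and rater as factors and the analytic domain scores as the multivariate outcome. The domain names, rater labels, and simulated scores are all hypothetical, not the assessment's actual rubric or data.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated stand-in data: one row per scored response, three analytic
# domain scores (hypothetical names), and the two factors of interest.
rng = np.random.default_rng(7)
n = 600
df = pd.DataFrame({
    "mode": rng.choice(["typed", "handwritten"], size=n),
    "rater": rng.choice([f"R{i}" for i in range(1, 7)], size=n),
})
base = rng.normal(3.0, 0.6, size=n)      # shared essay-quality signal
for domain in ["ideas", "organization", "conventions"]:
    df[domain] = np.clip(base + rng.normal(0.0, 0.4, size=n), 1, 5)

mv = MANOVA.from_formula(
    "ideas + organization + conventions ~ mode + rater", data=df)
print(mv.mv_test())  # Wilks' lambda etc. for the mode and rater effects
```

A non-significant mode effect alongside a substantial rater effect in output like this would mirror the study's finding that response mode was of no practical significance while raters were.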
17

An Argument-based Validity Inquiry into the Empirically-derived Descriptor-based Diagnostic (EDD) Assessment in ESL Academic Writing

Kim, Youn-Hee 13 August 2010
This study built and supported arguments for the use of diagnostic assessment in English as a second language (ESL) academic writing. In the two-phase study, a new diagnostic assessment scheme, called the Empirically-derived Descriptor-based Diagnostic (EDD) checklist, was developed and validated for use in small-scale classroom assessment. The checklist assesses ESL academic writing ability using empirically-derived evaluation criteria and estimates skill parameters in a way that overcomes the problems associated with the number of items in diagnostic models. Interpretations of and uses for the EDD checklist were validated using five assumptions: (a) that the empirically-derived diagnostic descriptors that make up the EDD checklist are relevant to the construct of ESL academic writing; (b) that the scores derived from the EDD checklist are generalizable across different teachers and essay prompts; (c) that performance on the EDD checklist is related to performance on other measures of ESL academic writing; (d) that the EDD checklist provides a useful diagnostic skill profile for ESL academic writing; and (e) that the EDD checklist helps teachers make appropriate diagnostic decisions and has the potential to positively impact teaching and learning ESL academic writing. Using a mixed-methods research design, four ESL writing experts created the EDD checklist from 35 descriptors of ESL academic writing. These descriptors had been elicited from nine ESL teachers’ think-aloud verbal protocols, in which they provided diagnostic feedback on ESL essays. Ten ESL teachers utilized the checklist to assess 480 ESL essays and were interviewed about its usefulness. Content reviews from ESL writing experts and statistical dimensionality analyses determined that the underlying structure of the EDD checklist consists of five distinct writing skills: content fulfillment, organizational effectiveness, grammatical knowledge, vocabulary use, and mechanics. The Reduced Reparameterized Unified Model (Hartz, Roussos, & Stout, 2002) then demonstrated the diagnostic quality of the checklist and produced fine-grained writing skill profiles for individual students. Overall teacher evaluation further justified the validity claims for the use of the checklist. The pedagogical implications of the use of diagnostic assessment in ESL academic writing were discussed, as were the contributions that it would make to the theory and practice of second language writing instruction and assessment.
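
The abstract names the Reduced Reparameterized Unified Model but does not restate it. As background, a standard statement of its item response function from the diagnostic-modeling literature (my paraphrase, not quoted from the thesis) is:

```latex
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = \pi_j^{*} \prod_{k=1}^{K} \bigl(r_{jk}^{*}\bigr)^{\,q_{jk}(1 - \alpha_{ik})}
```

Here \(\pi_j^{*}\) is the probability that a student who has mastered every skill required by descriptor \(j\) satisfies it, \(q_{jk}\) indicates whether descriptor \(j\) requires skill \(k\), \(\alpha_{ik}\) indicates whether student \(i\) has mastered skill \(k\), and \(0 < r_{jk}^{*} < 1\) is the penalty applied for each required but unmastered skill. Estimating the mastery profiles \(\boldsymbol{\alpha}_i\) over the five writing skills is what yields the fine-grained skill profiles the study reports.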
19

Exploring Teachers' Writing Assessment Literacy in Multilingual First-Year Composition: A Qualitative Study on e-Portfolios

January 2018
This project investigated second language writing teachers' writing assessment literacy by looking at teachers' practices with electronic writing portfolios (e-WPs), as well as the sources that shape L2 writing teachers' knowledge of e-WPs in the context of multilingual First-Year Composition (FYC) classrooms. Drawing on Borg's (2003) theory of teacher cognition and Crusan, Plakans, and Gebril's (2016) definition of assessment literacy, I define L2 teachers' writing assessment literacy as teachers' knowledge, beliefs, and practices of a particular assessment tool, as affected by institutional factors. While teachers are the main practitioners who help students create e-WPs (Hilzensauer & Buchberger, 2009), studies on how teachers actually incorporate e-WPs in classes, and on what sources may influence teachers' knowledge of e-WPs, are scant. To fill this gap, I analyzed data from sixteen teachers' semi-structured interviews. Course syllabi were also collected to triangulate the interview data. The interview results indicated that 37.5% of the teachers use departmental e-WPs with the goal of guiding students throughout their writing process; 43.7% do not actively use e-WPs and have students upload their writing projects only to meet the writing program's requirement at the end of the semester; and the remaining 18.7% use an alternative platform, other than the departmental e-WP platform, throughout the semester. Sources influencing teachers' e-WP knowledge included teachers' educational and work experience, technical difficulties in the e-WP platform, writing program policies, and student reactions. The analysis of the course syllabi confirmed the interview results. Based on the findings, I argue that, situated in the context of classroom assessment, institutional factors plus teachers' insufficient knowledge of e-WPs limit the way teachers communicate with students, whose reactions cause teachers to resist e-WPs. Conversely, teachers' sufficient knowledge of e-WPs enables them to balance the pressure from institutional factors, generating positive reactions from students. Students' positive reactions encourage teachers to accept the departmental e-WPs or to use similar alternative e-WP platforms. Pedagogical implications, limitations of the study, and suggestions for future research are reported to conclude the dissertation. / Dissertation/Thesis / Doctoral Dissertation Linguistics and Applied Linguistics 2018
20

Hur tänker du? : En kvalitativ studie av sambedömning vid nationella prov i svenska [What are you thinking? : A qualitative study of collaborative assessment of the national tests in Swedish]

Odelius, Jenny January 2017
No description available.
