11 |
Cultures of writing: The state of transfer at state comprehensive universities. Derek R. Sherman (10947219), 04 August 2021.
The Elon Research Seminar, Critical Transitions: Writing and the Question of Transfer, was an attempt by a coalition of rhetoric and composition scholars to codify writing transfer knowledge for teaching and research purposes. Although the seminar was an important leap in transfer research, many behind-the-scenes decisions that shape writing transfer, often those made outside the writing program, go unnoticed, yet they play a pivotal role in how writing programs encourage and reproduce writing transfer in the classroom. This dissertation study, inspired by a Fall 2018 pilot study on writing across the curriculum programs and their role in writing transfer, illustrates how an institution's context systems (e.g., macrosystem, mesosystem, and microsystem) and writing programs' processes (curriculum components, assessment, and administrative structure and budget) affect one another. Using Bronfenbrenner and Morris's (2006) bioecological model, I show how writing programs and their context systems interact to reproduce writing transfer practices. Through ten interviews with writing program administrators at state comprehensive universities, I delineate specific actions that each writing program could take to encourage writing transfer, and I develop a list of the roles and responsibilities a university's context systems play in advocating for writing transfer practices. The results show that research beyond the writing classroom and its students is necessary to understand how writing transfer opportunities arise in university cultures of writing.
|
12 |
1500 Students and Only a Single Cluster? A Multimethod Clustering Analysis of Assessment Data from a Large, Structured Engineering Course. Taylor Williams (13956285), 17 October 2022.
Clustering, a prevalent class of machine learning (ML) algorithms used in data mining and pattern finding, has increasingly helped engineering education researchers and educators see and understand assessment patterns at scale. However, a challenge remains in making ML-enabled educational inferences that are useful and reliable for research or instruction, especially when those inferences influence pedagogical decisions or student outcomes. ML offers an opportunity to better personalize learners' experiences using those inferences, even within large engineering classrooms. However, neglecting to verify the trustworthiness of ML-derived inferences can have wide-ranging negative impacts on the lives of learners.
This study investigated what student clusters exist within the standard operational data of a large first-year engineering course (>1500 students) focused on computational thinking skills for engineering design. The clustering data set included approximately 500,000 assessment data points collected using a consistent five-scale criterion-based grading framework. Two clustering techniques, N-TARP profiling and K-means clustering, were applied to the criterion-based assessment data to identify sets of student clusters. N-TARP profiling is an expansion of the N-TARP binary clustering method and is well suited to this course's assessment data because of the large and potentially high-dimensional nature of the data set. K-means clustering is one of the oldest and most widely used clustering methods in educational research, making it a good candidate for comparison. After clusters were found, their interpretability and trustworthiness were assessed. The following research questions structured the study: RQ1 – What student clusters do N-TARP profiling and K-means clustering identify when applied to structured assessment data from a large engineering course? RQ2 – What are the characteristics of an average student in each cluster, and how well does the average student in each cluster represent the students of that cluster? RQ3 – What are the strengths and limitations of using N-TARP and K-means clustering techniques with large, highly structured engineering course assessment data?
Although both K-means clustering and N-TARP profiling identified potential student clusters, neither method's clusters were verifiable or replicable. These dubious results suggest that a better interpretation is that all student performance data from this course exist in a single homogeneous cluster. The study further demonstrated the utility and precision of N-TARP's warning, given by its W value, that the clustering results within this educational data set were not trustworthy. Providing such a warning is rare among the thousands of available clustering methods; most (including K-means) will return clusters regardless of whether meaningful structure exists. When a clustering algorithm identifies false clusters that lack meaningful separation or differences, incorrect or harmful educational inferences can result.
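The abstract's central caution, that most clustering algorithms return clusters whether or not real structure exists, can be illustrated with a short hypothetical sketch in Python. The data shape, number of clusters, and checks below are assumptions made for illustration; this is not the dissertation's N-TARP implementation, and its W-value diagnostic has no standard scikit-learn equivalent, so generic separation and stability checks stand in for it.

```python
# Illustrative sketch (not the dissertation's code): cluster a students-by-criteria
# assessment matrix with K-means, then check whether the clusters deserve any trust.
# The matrix shape and number of clusters are placeholder assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 40))  # placeholder: 1500 students x 40 graded criteria

k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# Check 1: separation. A silhouette score near zero means the "clusters" barely differ.
print("silhouette:", silhouette_score(X, labels))

# Check 2: stability. Fit K-means on two disjoint halves of the students, label the
# full data set with both models, and compare the partitions. Low agreement suggests
# the clusters are artifacts of the algorithm rather than structure in the data.
idx = rng.permutation(len(X))
half_a, half_b = idx[: len(X) // 2], idx[len(X) // 2 :]
km_a = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X[half_a])
km_b = KMeans(n_clusters=k, n_init=10, random_state=2).fit(X[half_b])
print("stability (ARI):", adjusted_rand_score(km_a.predict(X), km_b.predict(X)))
```

Low values on either check are the kind of warning the abstract argues most clustering methods, unlike N-TARP, do not provide on their own.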
|
13 |
Capturing L2 Oral Proficiency with CAF Measures as Predictors of the ACTFL OPI Rating. Mayu Miyamoto (6634307), 14 May 2019.
Despite an emphasis on oral communication in most foreign language classrooms, the resource-intensive nature (i.e., time and manpower) of speaking tests hinders regular oral assessment. A possible solution is the development of a (semi-)automated scoring system. When used in conjunction with human raters, the consistency of computers can complement human raters' comprehensive judgments and increase scoring efficiency (e.g., Enright & Quinlan, 2010). In search of objective and quantifiable variables that are strongly correlated with overall oral proficiency, a number of studies have reported that some utterance fluency variables (e.g., speech rate and mean length of run) might be strong predictors of L2 learners' speaking ability (e.g., Ginther et al., 2010; Hirotani et al., 2017). However, these findings are difficult to generalize due to small sample sizes, narrow ranges of proficiency levels, and/or a lack of data from languages other than English. The current study analyzed spontaneous speech samples collected from 170 Japanese learners at a wide range of proficiency levels determined by a well-established speaking test, the American Council on the Teaching of Foreign Languages' (ACTFL) Oral Proficiency Interview (OPI). Prior to analysis, 48 Complexity, Accuracy, Fluency (CAF) measures (with a focus on fluency variables) were calculated from the speech samples. First, the study examined the relationships among the CAF measures and learner oral proficiency as assessed by the ACTFL OPI. Then, using an empirically based approach, the feasibility of using a composite measure to predict L2 oral proficiency was investigated. The results revealed that Speech Speed and Complexity variables demonstrated strong correlations with the OPI levels, and moderately strong correlations were found for the variables in the following categories: Speech Quantity, Pause, Pause Location (i.e., silent pause ratio within AS-unit), Dysfluency (i.e., repeat ratio), and Accuracy. A series of multiple regression analyses then revealed that a combination of five CAF measures (effective articulation rate, silent pause ratio, repeat ratio, syntactic complexity, and error-free AS-unit ratio) could explain 72.3% of the variance in the OPI levels. This regression model includes variables that correspond to Skehan's (2009) proposed three categories of fluency (speed, breakdown, and repair) as well as variables that represent CAF, supporting the literature (e.g., Larsen-Freeman, 1978; Skehan, 1996).
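The predictive model the abstract describes, five CAF measures explaining roughly 72% of the variance in OPI levels, can be sketched as an ordinary least squares regression. The file and column names below are hypothetical, OPI levels are treated as numeric for simplicity, and the study's exact modeling decisions are not reproduced.

```python
# Illustrative sketch (not the study's analysis): regress OPI rating on the five
# CAF measures named in the abstract. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("caf_measures.csv")  # hypothetical file, one row per learner

predictors = [
    "effective_articulation_rate",
    "silent_pause_ratio",
    "repeat_ratio",
    "syntactic_complexity",
    "error_free_as_unit_ratio",
]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["opi_level"], X).fit()

print(model.summary())               # coefficient for each CAF measure
print("R-squared:", model.rsquared)  # the abstract reports about 72% variance explained
```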
|
14 |
A systems analysis of selection for tertiary education: Queensland as a case study. Maxwell, Graham Samuel, date unknown.
No description available.
|
17 |
The Technical Qualities of the Elicited Imitation Subsection of the Assessment of College English, International (ACE-In). Xiaorui Li (9025040), 25 June 2020.
The present study investigated the technical qualities of the elicited imitation (EI) items used by the Assessment of College English – International (ACE-In), a locally developed English language proficiency test used in the undergraduate English for academic purposes program at Purdue University. EI is a controversial language assessment tool that has been used and examined for decades. The simplicity of the test format and the ease of rating place EI in an advantageous position to be widely implemented in language assessment; on the other hand, EI has received a series of critiques, primarily questioning its validity. To offer insight into the quality of the EI subsection of the ACE-In and to guide continued test development and revision, the present study examined the measurement qualities of the items by analyzing the pre- and post-test performance of 100 examinees on EI. The analyses consisted of an item analysis reporting item difficulty, item discrimination, and total-score reliability; an examination of pre-post changes in performance using a matched-pairs t-test and item instructional sensitivity; and an analysis of the correlation patterns between EI scores and TOEFL iBT total and subsection scores.

The results of the item analysis indicated that the current EI task was slightly easy for the intended population, but the test items functioned satisfactorily in separating examinees of higher proficiency from those of lower proficiency. The EI task was also found to have high internal consistency across forms. As for pre-post changes, a significant pairwise difference was found between pre- and post-test performance after a semester of instruction; however, over half of the items were relatively insensitive to instruction. The last stage of the analysis indicated that while EI scores had a significant positive correlation with TOEFL iBT total scores and speaking subsection scores, EI scores were negatively correlated with TOEFL iBT reading subsection scores.

The findings provide evidence in favor of using EI as a measure of L2 proficiency, especially as a viable alternative to free-response items. EI is also argued to provide additional information about examinees' real-time language processing ability that standardized language tests are not intended to measure. Although the EI task used by the ACE-In is generally suitable for the targeted population and testing purposes, it can be further improved if test developers increase the number of difficult items and control the content and structure of the sentence stimuli.

Examining the technical qualities of test items is fundamental but insufficient for building a validity argument for the test. The present EI test would benefit from validation studies that go beyond item analysis. Future research focused on improving item instructional sensitivity is also recommended.
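The item-analysis steps the abstract lists (item difficulty, item discrimination, total-score reliability, and a matched-pairs t-test on pre/post performance) follow standard classical test theory formulas. The sketch below is a generic illustration with randomly generated placeholder scores, not the ACE-In data; items are scored 0/1 here for simplicity even though EI responses are typically rated on a scale.

```python
# Illustrative sketch (not the study's code) of a classical item analysis:
# difficulty, discrimination, Cronbach's alpha, and a matched-pairs t-test.
# All scores below are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(0, 2, size=(100, 30))   # 100 examinees x 30 dichotomously scored items
post = rng.integers(0, 2, size=(100, 30))

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = pre.mean(axis=0)

# Item discrimination: point-biserial correlation of each item with the total score.
total = pre.sum(axis=1)
discrimination = np.array(
    [stats.pointbiserialr(pre[:, i], total)[0] for i in range(pre.shape[1])]
)

# Total-score reliability: Cronbach's alpha.
k = pre.shape[1]
alpha = k / (k - 1) * (1 - pre.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# Pre/post change: matched-pairs t-test on total scores after a semester of instruction.
t, p = stats.ttest_rel(post.sum(axis=1), pre.sum(axis=1))

print(f"mean difficulty={difficulty.mean():.2f}, mean discrimination={discrimination.mean():.2f}")
print(f"alpha={alpha:.2f}, paired t={t:.2f}, p={p:.3f}")
```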
|
18 |
A Variability Analysis of Grading Open-Ended Tasks with Rubrics Across Many Graders. Nathan M. Hicks (9183533), 30 July 2020.
Grades serve as one of the primary indicators of student learning, directing subsequent actions for students, instructors, and administrators alike. Therefore, grade validity (the extent to which grades communicate a meaningful and credible representation of what they purport to measure) is of utmost importance. However, a grade cannot be valid if one cannot trust that it will consistently and reliably take the same value regardless of who makes the measurement or when it is made. Unfortunately, such reliability becomes increasingly challenging to achieve with larger class sizes, especially when multiple evaluators are used, as is often the case in mandatory introductory courses at large universities. Reliability suffers further when evaluating open-ended tasks, which are prevalent in authentic, high-quality engineering coursework.

This study explores grading reliability in the context of a large, multi-section engineering course. Recognizing the number of people involved and the plethora of activities that affect grading outcomes, the study adopts a systems approach to conduct a human reliability analysis using the Functional Resonance Analysis Method. Through this method, a collection of data sources, including course materials and observational interviews with undergraduate teaching assistant graders, is synthesized to produce a general model of how actions vary and affect subsequent actions within the system under study. Using a course assignment and student responses, the model shows how differences in contextual variables affect expected actions within the system. The model is then applied to each of the observational interviews with undergraduate teaching assistants to demonstrate how these actions occur in practice and to compare graders with one another and with expected behaviors. These results are further related to the agreement in system outcomes, the grades assigned by each grader, to guide analysis of how actions within the system affect its outcome.

The results of this study connect and elaborate upon previous models of grader cognition by analyzing the phenomenon in engineering, a previously unexplored context. The model presented can be generalized and adapted to smaller systems with fewer actors to understand sources of variability and potential threats to outcome reliability. The analysis of observed outcome instantiations guides a set of recommendations for minimizing grading variability.
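The grade agreement the study relates its model to can be quantified with standard inter-rater agreement statistics. The sketch below is a generic illustration of that idea with hypothetical rubric scores; it is not part of the dissertation's FRAM-based analysis.

```python
# Illustrative sketch: agreement between two graders' rubric scores for the same
# submissions. Scores are hypothetical; this is a generic reliability check, not
# the dissertation's Functional Resonance Analysis Method.
from sklearn.metrics import cohen_kappa_score

grader_a = [4, 3, 5, 2, 4, 4, 3, 5, 1, 4]  # rubric level assigned by grader A
grader_b = [4, 2, 5, 2, 3, 4, 3, 4, 1, 4]  # rubric level assigned by grader B for the same work

exact_agreement = sum(a == b for a, b in zip(grader_a, grader_b)) / len(grader_a)
# Quadratic-weighted kappa credits near-misses on an ordinal rubric scale.
kappa = cohen_kappa_score(grader_a, grader_b, weights="quadratic")
print(f"exact agreement={exact_agreement:.2f}, weighted kappa={kappa:.2f}")
```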
|
19 |
AN ENHANCED LEARNING ENVIRONMENT FOR MECHANICAL ENGINEERING TECHNOLOGY STUDENTS: AN ENERGY TRANSFORMATION. Cole M. Maynard (6622457), 14 May 2019.
The desire to produce a learning environment that promotes student motivation, collaboration, and higher-order thinking is common within today's higher education system. Such learning environments can also address the challenges Mechanical Engineering Technology (MET) students face when entering the workforce. Through the vertical and horizontal integration of courses, this research presents how a scaffolded learning environment with a centralized theme of energy can increase students' motivation and conceptual retention. The integration of courses allows students to systematically transfer their competency with concepts between energy-based courses through experiential learning. The goal of this work is to develop a competency-based learning model in which students earn a professionally recognizable credential. The credential is earned by demonstrating mastery of industry-desired skills at a level that goes above and beyond the stock curriculum. The result is a more continuous curriculum that enhances multidisciplinary problem solving while better preparing MET students for the workforce.
|
20 |
The Impact of Participation in a Service-learning Program on University Students' Motivation for Learning Japanese. Nagi Fujie (5930621), 15 May 2019.
Service-learning is an organized volunteer activity in which learners serve the community while utilizing and enhancing their own skills, thus benefiting both the learners and the community. Studies have shown that students gain various benefits from participating in service-learning, especially in their academic skills and civic growth through continued reflection (Eyler, Giles, & Braxton, 1997; Eyler & Giles, 1999; Billig, 2000; Grassi et al., 2004; Steinberg, Bringle, & Williams, 2010), and participation often increases their motivation to learn the related subject (Steinberg et al., 2010). Service-learning has been implemented in foreign language courses in the United States, especially in Spanish (Barreneche & Ramos-Flores, 2013). However, the service-learning literature on Japanese as a foreign language is limited.

The researcher founded a service-learning program conducted in the Japanese language, in which university students enrolled in intermediate- or higher-level Japanese courses help Japanese children with their schoolwork as volunteer tutors. The researcher conducted a qualitative case study of four of the student-tutors to examine the program's potential to maintain and enhance the student-tutors' various motivations for learning Japanese. The Volunteer Functions Inventory (VFI) (Clary, Snyder, & Ridge, 1998), which describes the six most commonly found functions, or varying motivations, for participating in a volunteer activity, was used as the analysis scheme. The student-tutors indicated five of the six VFI functions, showing a connection between their service-learning experience and their personal growth. They built strong connections with the Japanese community and maintained their motivation to improve their Japanese skills in order to better help the children. It is hoped that the present research contributes an example of Japanese service-learning in the U.S.
|