1. Investigating Differences in Formative Critiquing between Instructors and Students in Graphic Design

Liwei Zhang (6635930), 15 May 2019
Critique is an essential skill that professional designers use to communicate the success and failure of a design to others. For graphic design educators, including critique in their pedagogical approaches enables students to improve both their design capability and their critique skills. Adaptive Comparative Judgment (ACJ) is an innovative approach to assessment in which students and instructors compare two designs at a time and choose the better of the two. The purpose of this study was to investigate the differences between instructors’ and students’ critiquing practices. The data was collected through think-aloud protocol methods while both groups critiqued the same design projects.

The results indicate that it took students longer to finish the same number of critiques as those completed by instructors. Students spent more time describing their personal feelings, evaluating each individual design, and searching for the right phrases to express their thoughts on a design precisely. Instructors, with more teaching experience, completed the critiques more quickly and justified their decisions more succinctly, using terminology efficiently and relying on their instincts.
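The comparative-judgment idea the abstract describes — judges repeatedly pick the better of two designs, and a pairwise model converts the wins into a quality scale — can be sketched with the Bradley-Terry model, a close relative of the scaling used in ACJ. This is a minimal illustration, not the thesis's actual ACJ implementation; the function name and data are hypothetical.

```python
def fit_bradley_terry(items, judgments, iters=200):
    """Estimate a quality scale from pairwise judgments using the
    Bradley-Terry model (minorization-maximization updates).
    `judgments` is a list of (winner, loser) tuples."""
    strength = {i: 1.0 for i in items}
    wins = {i: 0 for i in items}
    for winner, _ in judgments:
        wins[winner] += 1
    for _ in range(iters):
        new = {}
        for i in items:
            denom = 0.0
            for winner, loser in judgments:
                if i == winner:
                    denom += 1.0 / (strength[i] + strength[loser])
                elif i == loser:
                    denom += 1.0 / (strength[i] + strength[winner])
            new[i] = wins[i] / denom if denom else strength[i]
        # rescale so strengths keep a fixed average (the scale is relative)
        total = sum(new.values())
        strength = {i: len(items) * v / total for i, v in new.items()}
    return strength
```

Items that win more of their head-to-head comparisons end up with higher strengths; an adaptive scheme like ACJ additionally chooses which pair to show next based on the current estimates.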
2. THE RELATIONSHIP BETWEEN COLLEGE STUDENT CRITIQUE ABILITY AND DESIGN ABILITY

Cameron Moon (8097956), 11 December 2019
While industry looks to graphic design education for the next top designers who have the knowledge and skills to be successful in the field (Bridges, King, Brown, & Luedeman, 2013), graphic design instructors often have limited time to teach students the knowledge and skills they need to become successful designers (Landa, 2010; Kennedy et al., 2012; Liu & Tourtellott, 2011). Most university-level graphic design courses, the traditional preparation pathway for future designers, focus on improving students’ design ability through hands-on projects that teach students how to use graphic design technology (Motley, 2017). In addition to hands-on graphic design experiences, many classrooms also use peer critique, allowing students to critique and give feedback to peers while identifying the positive aspects of a design and suggesting improvements (Motley, 2017). Students tend to improve their designs when a classroom implements critique, including self and peer assessment, into the curriculum (Wanner & Palmer, 2018). However, little is known about the relationship, if any exists, between a student’s ability to design and a student’s ability to critique. Therefore, this study will investigate the correlation between student critique and student design abilities with the intent of improving graphic design educational practices. Understanding this correlation may help those involved with graphic design education better prepare students for future employment by assisting instructors in using their limited teaching time most effectively. Specifically, a relationship between graphic design critique and graphic design skill may suggest that the limited time available for teaching should emphasize improving critique skills with the goal of also improving graphic design abilities. If no relationship between critique and design abilities exists, this may suggest that limited time should not be spent engaging students in critique and that other forms of teaching should be emphasized.
3. Rubric Rating with MFRM vs. Randomly Distributed Comparative Judgment: A Comparison of Two Approaches to Second-Language Writing Assessment

Sims, Maureen Estelle, 01 April 2018
The purpose of this study is to explore a potentially more practical approach to direct writing assessment using computer algorithms. Traditional rubric rating (RR) is a common yet highly resource-intensive evaluation practice when performed reliably. This study compared the traditional rubric model of ESL writing assessment and many-facet Rasch modeling (MFRM) to comparative judgment (CJ), the new approach, which shows promising results in terms of reliability and validity. We employed two groups of raters, novice and experienced, and used essays that had been previously double-rated, analyzed with MFRM, and selected with fit statistics. We compared the results of the novice and experienced groups against the initial ratings using raw scores, MFRM, and a modern form of CJ: randomly distributed comparative judgment (RDCJ). Results showed that the CJ approach, though not appropriate for all contexts, can be valid and as reliable as RR while requiring less time to generate procedures, train and norm raters, and rate the essays. Additionally, the CJ approach is more easily transferable to novel assessment tasks while still providing context-specific scores. Results from this study will not only inform future studies but can help guide ESL programs in determining which rating model best suits their specific needs.
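The abstract does not spell out how RDCJ distributes its comparisons, but the "randomly distributed" idea can be sketched as shuffling the essay pool each round and pairing neighbours, so that every essay receives the same number of judgments. This is a hypothetical illustration of that pairing step, not the thesis's procedure.

```python
import random

def random_pairings(essays, rounds, seed=42):
    """Build randomly distributed comparison pairs: each round shuffles
    the essay pool and pairs adjacent entries, so with an even-sized pool
    every essay is compared exactly once per round."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(rounds):
        order = list(essays)
        rng.shuffle(order)
        pairs.extend(zip(order[::2], order[1::2]))
    return pairs
```

Unlike adaptive CJ, this scheme fixes the comparison plan up front, which makes the rater workload predictable.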
4. How Item Response Theory can solve problems of ipsative data

Brown, Anna, 25 October 2010
Multidimensional forced-choice questionnaires can reduce the impact of numerous response biases typically associated with Likert scales. However, if scored with traditional methodology, these instruments produce ipsative data, which has well-known psychometric problems, such as a constrained total test score and negative average scale inter-correlations. Ipsative scores distort scale relationships and reliability estimates, and make interpretation of scores problematic. This research demonstrates how Item Response Theory (IRT) modeling may be applied to overcome these problems. A multidimensional IRT model for forced-choice questionnaires is introduced, which is suitable for use with any forced-choice instrument composed of items fitting the dominance response model, with any number of measured traits and any block size (i.e., pairs, triplets, quads, etc.). The proposed model is based on Thurstone's framework for comparative data. Thurstonian IRT models are normal ogive models with structured factor loadings, structured uniquenesses, and structured local dependencies. These models can be straightforwardly estimated using the structural equation modeling (SEM) software Mplus. Simulation studies show how the latent traits are recovered from the comparative binary data under different conditions. The Thurstonian IRT model is also tested with real participants in both research and occupational assessment settings. It is concluded that when the recommended design guidelines are met, scores estimated from forced-choice questionnaires with the proposed methodology reproduce the latent traits well.
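The two problems the abstract attributes to traditional scoring are easy to demonstrate numerically. In the hypothetical simulation below, each forced-choice block presents one item per trait and the respondent's ranking awards 2, 1, 0 points; summing rank points per trait is the traditional (ipsative) scoring. The total is identical for every respondent, and the scale inter-correlations average about -1/(k-1), i.e. -0.5 for three traits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ipsative scoring of a forced-choice questionnaire:
# 10 blocks, 3 traits per block, rank points 2/1/0 awarded per block.
n_people, n_blocks, n_traits = 500, 10, 3
scores = np.zeros((n_people, n_traits))
for _ in range(n_blocks):
    # each respondent assigns a random permutation of (0, 1, 2) points
    scores += np.argsort(rng.random((n_people, n_traits)), axis=1)

# Problem 1: constrained total score — everyone sums to n_blocks * (0+1+2)
assert np.all(scores.sum(axis=1) == 30)

# Problem 2: negative average scale inter-correlation
corr = np.corrcoef(scores.T)
mean_off_diag = corr[~np.eye(n_traits, dtype=bool)].mean()
print(mean_off_diag)  # close to -0.5 for three ipsative scales
```

Because the totals carry no information and the scales are forced to correlate negatively, comparisons between respondents are distorted — which is exactly what the Thurstonian IRT reformulation is designed to avoid.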
