A User-Centered Design Approach to Evaluating the Usability of Automated Essay Scoring Systems

In recent years, rapid advancements in computer science, including the increased capabilities of machine learning models such as Large Language Models (LLMs) and the accessibility of large datasets, have facilitated the widespread adoption of AI technologies such as ChatGPT, underscoring the need to design and evaluate these technologies with ethical consideration for their impact on students and teachers. Specifically, the rise of Automated Essay Scoring (AES) platforms has made it possible to provide real-time feedback and grades for student essays. Despite the increasing development and use of AES platforms, limited research has specifically focused on AI explainability and algorithm transparency and their influence on the usability of these platforms. To address this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, with a primary focus on exploring the experiences of students and graders. The study aimed to examine the usability aspects related to explainability and transparency and their implications for computer science education. Participants took part in surveys, semi-structured interviews, and a focus group. The findings reveal important considerations for evaluating AES systems, including the clarity of feedback and explanations, the impact and actionability of feedback and explanations, user understanding of the system, trust in AI, major issues and user concerns, system strengths, the user interface, and areas for improvement. These proposed key considerations can help guide the development of effective essay feedback and grading tools that prioritize explainability and transparency to improve usability in computer science education.
/ Master of Science / In recent years, rapid advancements in computer science have facilitated the widespread adoption of AI technology across various educational applications, highlighting the need to design and evaluate these technologies with ethical consideration for their impact on students and teachers. Nowadays, Automated Essay Scoring (AES) platforms can instantly provide feedback and grades for student essays. AES platforms are computer programs that use artificial intelligence to automatically assess and score essays written by students. However, little research has examined how these platforms work and how understandable they are for users. Specifically, AI explainability refers to the ability of AES platforms to provide clear and coherent explanations of how they arrive at their assessments. Algorithm transparency, on the other hand, refers to the degree to which the inner workings of these AI algorithms are open and understandable to users. To fill this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, aiming to understand the experiences of students and graders. We wanted to explore how clear and transparent the platform's feedback and explanations were. Participants shared their thoughts through surveys, interviews, and a focus group. The study uncovered important factors to consider when evaluating AES systems. These factors include the clarity of the feedback and explanations provided by the platform, the impact and actionability of the feedback, how well users understand the system, their level of trust in AI, the main issues and concerns they have, the strengths of the system, the effectiveness of the user interface, and areas that need improvement. By considering these findings, developers can create better essay feedback and grading tools that are easier to understand and use.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/116313
Date: 21 September 2023
Creators: Hall, Erin Elizabeth
Contributors: Computer Science and Applications, Seyam, Mohammed Saad Mohamed Elmahdy, Dunlap, Daniel Ray, Yao, Danfeng
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/