1

A User-Centered Design Approach to Evaluating the Usability of Automated Essay Scoring Systems

Hall, Erin Elizabeth 21 September 2023 (has links)
In recent years, rapid advancements in computer science, including the increased capabilities of machine learning models like Large Language Models (LLMs) and the accessibility of large datasets, have facilitated the widespread adoption of AI technology such as ChatGPT, underscoring the need to design and evaluate these technologies with ethical considerations for their impact on students and teachers. Specifically, the rise of Automated Essay Scoring (AES) platforms has made it possible to provide real-time feedback and grades for student essays. Despite the increasing development and use of AES platforms, limited research has specifically focused on AI explainability and algorithm transparency and their influence on the usability of these platforms. To address this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, with a primary focus on the experiences of students and graders. The study explored the usability aspects related to explainability and transparency and their implications for computer science education. Participants took part in surveys, semi-structured interviews, and a focus group. The findings reveal important considerations for evaluating AES systems: the clarity of feedback and explanations, the impact and actionability of feedback and explanations, user understanding of the system, trust in AI, major issues and user concerns, system strengths, the user interface, and areas for improvement. These key considerations can help guide the development of effective essay feedback and grading tools that prioritize explainability and transparency to improve usability in computer science education.

/ Master of Science /

In recent years, rapid advancements in computer science have facilitated the widespread adoption of AI technology across various educational applications, highlighting the need to design and evaluate these technologies with ethical considerations for their impact on students and teachers. There are now Automated Essay Scoring (AES) platforms that can instantly provide feedback and grades for student essays. AES platforms are computer programs that use artificial intelligence to automatically assess and score essays written by students. However, little research has examined how these platforms work and how understandable they are for users. Specifically, AI explainability refers to the ability of AES platforms to provide clear and coherent explanations of how they arrive at their assessments. Algorithm transparency, on the other hand, refers to the degree to which the inner workings of these AI algorithms are open and understandable to users. To fill this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, aiming to understand the experiences of students and graders. We wanted to explore how clear and transparent the platform's feedback and explanations were. Participants shared their thoughts through surveys, interviews, and a focus group. The study uncovered important factors to consider when evaluating AES systems: the clarity of the feedback and explanations provided by the platform, the impact and actionability of the feedback, how well users understand the system, their level of trust in AI, the main issues and concerns they have, the strengths of the system, the effectiveness of the user interface, and areas that need improvement. By considering these findings, developers can create better essay feedback and grading tools that are easier to understand and use.
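The criterion-level explainability this study evaluates can be pictured as a scorer that pairs every score with a plain-language rationale. The sketch below is a minimal illustration in Python; the rubric criteria, targets, and scoring heuristics are invented for the example and do not come from the platform studied in the thesis.

```python
# Illustrative sketch only: a toy rubric-based essay scorer that pairs each
# criterion score with a human-readable rationale, mimicking the kind of
# explainable feedback the study evaluates. All criteria, targets, and
# weights here are invented; none come from the platform in the thesis.
from dataclasses import dataclass


@dataclass
class CriterionResult:
    name: str
    score: float       # 0.0-1.0 for this criterion
    explanation: str   # rationale shown to the student


def score_essay(essay: str) -> list[CriterionResult]:
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    unique_ratio = len({w.lower() for w in words}) / max(len(words), 1)

    return [
        CriterionResult(
            "length",
            round(min(len(words) / 300, 1.0), 2),  # toy target: ~300 words
            f"The essay has {len(words)} words; the toy target is 300.",
        ),
        CriterionResult(
            "sentence structure",
            round(min(avg_sentence_len / 20, 1.0), 2),
            f"Average sentence length is {avg_sentence_len:.1f} words.",
        ),
        CriterionResult(
            "vocabulary variety",
            round(min(unique_ratio * 2, 1.0), 2),
            f"{unique_ratio:.0%} of the words are unique.",
        ),
    ]


if __name__ == "__main__":
    essay_text = "Automated feedback should explain itself to the student. " * 20
    for result in score_essay(essay_text):
        print(f"{result.name}: {result.score} -- {result.explanation}")
```

The point of the sketch is the return type: every score travels with the evidence behind it, which is the transparency property the study's participants were asked to evaluate.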
2

Exploring interactive features in auto-generated articles through article visualization

Abdel-Rehim, Ali January 2019 (has links)
News articles generated by artificial intelligence rather than by human reporters are referred to as automated journalism. This thesis explores how to create a trustworthy representation of news articles that are generated mainly by algorithmic decisions. The hypothesis takes both the background context (characteristics of the underlying system design) and the foreground context (millennials' news consumption behaviour) into consideration in order to propose an optimal approach to the trustworthy representation of auto-generated articles. A theory of algorithmic transparency in the news media was investigated to reveal information about the system's selection processes. Glanceability principles and heuristic principles were applied to the proposed design solutions (interactive features). The outcomes show that newsreaders respond positively to a system that encourages them to fact-check articles. The outcomes also contribute to an understanding of how newsreaders consume auto-generated news.
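One way to picture the kind of transparency feature described here is provenance metadata attached to each generated sentence, rendered as glanceable footnotes that invite fact-checking. The sketch below is a hypothetical illustration; the sources, confidence values, and output format are invented, not taken from the thesis.

```python
# Illustrative sketch only: attaching provenance metadata to each sentence of
# an auto-generated article and rendering glanceable footnotes that invite
# fact-checking. Sources, confidence values, and the output format are
# invented for illustration, not taken from the thesis.
from dataclasses import dataclass


@dataclass
class GeneratedSentence:
    text: str
    data_source: str   # where the underlying claim came from
    confidence: float  # the generation system's self-reported confidence


def render_with_transparency(sentences: list[GeneratedSentence]) -> str:
    """Render sentences with numbered provenance footnotes; low-confidence
    sentences get an extra [verify] flag to prompt fact-checking."""
    body = []
    notes = []
    for i, s in enumerate(sentences, start=1):
        flag = "" if s.confidence >= 0.9 else " [verify]"
        body.append(f"{s.text} [{i}]{flag}")
        notes.append(f"[{i}] source: {s.data_source} (confidence {s.confidence:.0%})")
    return " ".join(body) + "\n\n" + "\n".join(notes)


if __name__ == "__main__":
    article = [
        GeneratedSentence("Home prices rose 4.2% last quarter.",
                          "regional sales registry", 0.95),
        GeneratedSentence("Analysts expect the trend to continue.",
                          "model extrapolation", 0.60),
    ]
    print(render_with_transparency(article))
```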
3

[en] A CRITICAL VIEW ON THE INTERPRETABILITY OF MACHINE LEARNING MODELS / [pt] UMA VISÃO CRÍTICA SOBRE A INTERPRETABILIDADE DE MODELOS DE APRENDIZADO DE MÁQUINA

JORGE LUIZ CATALDO FALBO SANTO 29 July 2019 (has links)
[en] As machine learning models penetrate critical areas like medicine, the criminal justice system, and financial markets, their opacity, which hampers humans' ability to interpret most of them, has become a problem to be solved. In this work, we present a new taxonomy for classifying any method, approach, or strategy for dealing with the problem of the interpretability of machine learning models. The proposed taxonomy fills a gap in current taxonomy frameworks regarding the subjective perception that different interpreters have of the same model. To evaluate the proposed taxonomy, we classified the contributions of relevant scientific articles in the area.
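A taxonomy of this kind can be thought of as a small data model whose dimensions include who the explanation targets, which is the subjective axis the abstract highlights. The sketch below is a hypothetical illustration with invented dimensions and example entries; it does not reproduce the taxonomy actually proposed in the thesis.

```python
# Illustrative sketch only: a minimal data model for classifying
# interpretability methods along taxonomy dimensions, including an axis for
# who the explanation targets. The dimensions and example entries are
# invented stand-ins, not the taxonomy proposed in the thesis.
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    GLOBAL = "explains the model as a whole"
    LOCAL = "explains individual predictions"


class Stage(Enum):
    INTRINSIC = "interpretable by design"
    POST_HOC = "explanation produced after training"


@dataclass
class MethodClassification:
    method: str
    scope: Scope
    stage: Stage
    interpreter: str  # who the explanation is for -- the subjective axis


examples = [
    MethodClassification("decision tree", Scope.GLOBAL, Stage.INTRINSIC, "domain expert"),
    MethodClassification("LIME", Scope.LOCAL, Stage.POST_HOC, "end user"),
]

for e in examples:
    print(f"{e.method}: scope={e.scope.name}, stage={e.stage.name}, for {e.interpreter}")
```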
