
A Framework for Assessing and Designing Human Annotation Practices in Human-AI Teaming

This thesis examines how people accomplish annotation tasks (i.e., labelling data based on its content) while working with an artificial intelligence (AI) system. When people and AI systems work together to accomplish a task, this is referred to as human-AI teaming. The study reports the results of an interview and observation study of 15 volunteers from the Washington, DC area as they annotated Twitter messages (tweets) about the COVID-19 pandemic. During the interviews, researchers observed the volunteers as they annotated tweets, noting any needs, frustrations, or confusion that the volunteers expressed about the task itself or about working with the AI. This research provides the following contributions: 1) an examination of annotation work in a human-AI teaming context; 2) the HATA (human-AI teaming annotation) framework, which identifies five key factors that affect the way people annotate while working with AI systems: background, task interpretation, training, fatigue, and the annotation system; 3) a set of questions to guide users of the HATA framework as they create or assess their own human-AI annotation teams; 4) design recommendations that give future researchers, designers, and developers guidance on creating a better environment for annotators to work with AI; and 5) implications of the HATA framework when it is put into practice.

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10137
Date: 15 June 2021
Creators: Stevens, Suzanne Ashley
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: https://lib.byu.edu/about/copyright/
