A Performance Survey of Text-Based Sentiment Analysis Methods for Automating Usability Evaluations

Usability testing, or user experience (UX) testing, is increasingly recognized as an important part of the user interface design process. However, evaluating usability tests can be expensive in terms of time and resources and can lack consistency between human evaluators. This makes automation an appealing expansion or alternative to conventional usability techniques.
Early usability automation focused on evaluating human behavior through quantitative metrics, but the explosion of opinion mining and sentiment analysis applications in recent decades has opened exciting new possibilities for usability evaluation methods.
This paper presents a survey of how useful modern, open-source sentiment analyzers are for extracting and correctly identifying moments of semantic significance in recorded mock usability evaluations. Although none of the text-based sentiment analyzers tested could identify such moments as accurately as human evaluators overall, one analyzer did identify positive moments conveyed through audio-only cues as well as human evaluators did. Further research into tuning current sentiment analyzers for usability evaluations, and into multimodal rather than text-only analysis, could yield valuable tools for usability evaluations when used in conjunction with human evaluators.

Identifier: oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-3905
Date: 01 June 2021
Creators: Van Damme, Kelsi
Publisher: DigitalCommons@CalPoly
Source Sets: California Polytechnic State University
Detected Language: English
Type: text
Format: application/pdf
Source: Master's Theses