Case Study: Detecting End-User Problems. Performance of Heuristic Evaluation Compared to Think Aloud.

There are currently multiple usability evaluation methods in use; some involve users, whereas others do not. It is commonly assumed that the usability problems found via user-involved methods are real problems that end-users may face when using the product in real situations. In this study, a usability evaluation method that does not involve users, Heuristic Evaluation, was assessed by comparing its results to those of a user-involved method, Think Aloud. Heuristic Evaluation's performance in detecting usability problems of a system was evaluated in two cases: taking all of the usability problems of the interface into account, and taking only the serious usability problems into account. A system called Nytt, provided by Nytt Ab, was used as the evaluated system. The values of thoroughness, validity and effectiveness were calculated based on the numbers of actual usability problems found via Heuristic Evaluation, actual problems missed by Heuristic Evaluation, and false problems found via Heuristic Evaluation. The findings of this study suggest that Heuristic Evaluation does not detect all of the problems that Think Aloud does, but instead reports many additional findings as problems. Moreover, the performance of Heuristic Evaluation is not drastically affected by whether only serious usability problems or usability problems in general are considered.
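The abstract does not give the formulas used, but thoroughness, validity and effectiveness are standard metrics in the usability evaluation literature (e.g. Hartson et al.), computed from the same three counts the study mentions. A minimal sketch, assuming the conventional definitions (thoroughness = share of real problems found, validity = share of reported problems that are real, effectiveness = their product):

```python
def thoroughness(found_real: int, missed_real: int) -> float:
    """Fraction of real usability problems the method detected."""
    return found_real / (found_real + missed_real)

def validity(found_real: int, false_found: int) -> float:
    """Fraction of the method's reported problems that are real."""
    return found_real / (found_real + false_found)

def effectiveness(found_real: int, missed_real: int, false_found: int) -> float:
    """Combined measure: thoroughness multiplied by validity."""
    return thoroughness(found_real, missed_real) * validity(found_real, false_found)

# Illustrative counts only -- not the thesis's actual data.
print(thoroughness(6, 4))        # real problems found vs. missed
print(validity(6, 6))            # real vs. false problems reported
print(effectiveness(6, 4, 6))
```

The exact counts and any variant weightings used in the thesis would come from its full text; the numbers above are placeholders for illustration.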

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-447073
Date: January 2021
Creators: Sorvari, Sonja
Publisher: Uppsala universitet, Institutionen för informatik och media
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess