1. Usability and Reliability of the User Action Framework: A Theoretical Foundation for Usability Engineering Activities

Sridharan, Sriram. 18 December 2001.
Various methods exist for performing usability evaluations, but there is no systematic framework for guiding and structuring assessment and reporting activities (Andre et al., 2000). Researchers at Virginia Tech have developed a theoretical foundation called the User Action Framework (UAF), an adaptation and extension of Norman's action model (1986). The main objective of developing the UAF was to provide usability practitioners with a reliable and structured tool set for usability engineering support activities such as classifying and reporting usability problems. In practice, the tool set has a web-based interface, with the UAF serving as the underlying foundation. To be an effective classification and reporting tool, the UAF must be both usable and reliable. This work addressed two research activities to help determine the usability and reliability of the UAF. First, we conducted a formative evaluation of the UAF Explorer, a component of the UAF, and of its content. This led to a redesign effort to fix the problems identified and to provide an interface that supports a more efficient and satisfying user experience. Second, we conducted a reliability study to determine whether the UAF showed significantly better-than-chance agreement when usability practitioners classified a given set of usability problem descriptions according to the structure of the UAF. The UAF showed higher agreement scores than previous work using the tool. / Master of Science
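
The reliability criterion here, agreement significantly better than chance, is typically quantified with a chance-corrected statistic such as Cohen's kappa. The abstract does not say which statistic the study used, so the following Python sketch is only an illustration of the general idea: two raters assign hypothetical UAF-style category labels to the same eight problem descriptions, and kappa corrects their raw agreement for the agreement expected by chance.

    from collections import Counter

    def cohens_kappa(ratings_a, ratings_b):
        """Chance-corrected agreement between two raters over the same items."""
        assert len(ratings_a) == len(ratings_b)
        n = len(ratings_a)
        # Observed agreement: fraction of items both raters labeled identically.
        p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        # Expected agreement under chance, from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical category labels assigned by two practitioners to 8 problems.
    rater1 = ["planning", "translation", "action", "assessment",
              "planning", "translation", "translation", "action"]
    rater2 = ["planning", "translation", "action", "planning",
              "planning", "assessment", "translation", "action"]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")

A kappa near 0 indicates chance-level agreement; values approaching 1 indicate strong agreement beyond chance.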
2. Usability Problem Description and the Evaluator Effect in Usability Testing

Capra, Miranda Galadriel. 05 April 2006.
Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in the problems they find and in their severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully measure evaluation effectiveness. The goals of this research were to develop UPD guidelines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content. Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are (briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the reader sympathize with the user. A fourth study compared usability reports collected from 44 evaluators, both practitioners and graduate students, watching the same 10-minute UT session recording. Three judges measured problem detection for each evaluator and graded the reports on adherence to 6 of the UPD guidelines. There was support for the existence of an evaluator effect, even when watching pre-recorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%), and reliability of severity judgments (57% for severe ratings). Practitioners received higher grades averaged across the 6 guidelines than students did, suggesting that the guidelines may be useful for grading reports. The guideline grades were not correlated with thoroughness, suggesting that they complement measures of problem detection. A simulation of evaluators working in groups found that adding a second evaluator increased the number of severe problems found by 34%. The simulation also found that the thoroughness of individual evaluators would have been overestimated if the study had included a small number of evaluators. The final recommendations are to use multiple evaluators in UT, and to assess both problem detection and description when measuring evaluation effectiveness. / Ph. D.
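
The group simulation can be reconstructed in outline: pool the problem sets of randomly sampled evaluators and measure the group's thoroughness, i.e. the fraction of known problems that the union of the group detects. The Python sketch below is a hypothetical reconstruction under stated assumptions, not the dissertation's code; the detection data are randomly generated stand-ins calibrated to roughly the 22% individual thoroughness reported above.

    import random

    def group_thoroughness(detections, group_size, n_problems, trials=10000, rng=None):
        """Mean fraction of known problems found by the union of a random group.

        detections: list of sets, one per evaluator, of problem ids found.
        """
        rng = rng or random.Random(0)
        total = 0.0
        for _ in range(trials):
            group = rng.sample(detections, group_size)
            found = set().union(*group)  # pool the group's detected problems
            total += len(found) / n_problems
        return total / trials

    # Hypothetical data: 44 evaluators, 15 known problems, each detected with p=0.25.
    rng = random.Random(1)
    N_PROBLEMS = 15
    evaluators = [{p for p in range(N_PROBLEMS) if rng.random() < 0.25}
                  for _ in range(44)]

    solo = group_thoroughness(evaluators, 1, N_PROBLEMS)
    pair = group_thoroughness(evaluators, 2, N_PROBLEMS)
    print(f"one evaluator: {solo:.0%}, two evaluators: {pair:.0%}")

Averaging over many randomly drawn groups, as above, also illustrates why a study with only a handful of evaluators could overestimate individual thoroughness: a few unusually thorough evaluators can dominate a small sample.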
