1. Vizability: Visualizing Usability Evaluation Data Based on the User Action Framework
Catanzaro, Christopher David, 08 July 2005
Organizations have recognized usability engineering as a necessary step in the development process to ensure the success of a product. As in any competitive setting, areas for improvement are sought out and welcomed. In usability engineering, substantial time, money, equipment, and other resources are spent gathering usability data to identify and resolve usability problems and thereby improve the product. The usability data gained from this expenditure are often applied only to the development effort at hand and are not reused across projects or across different development groups within the organization. Moreover, the data are often recorded at a level of detail that ties them to that specific development effort. If, however, usability data can be abstracted from the specific development effort and analyzed in relation to the process that produced and identified them, the data can be applied across multiple development efforts. The User Action Framework (UAF) is a hierarchical framework of usability concepts that ensures consistency through completeness and precision. By its nature, the UAF classifies usability problems at a high level of abstraction, which allows usability engineers to apply the knowledge gained not only to the current development effort but across multiple development efforts. This author presents a mechanism and a process that allow usability engineers to find insights in their usability data and to identify both strengths and weaknesses in their process. In turn, usability practitioners and companies can increase their return on investment by extending the usefulness of usability data over multiple development efforts.
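To make the cross-effort idea concrete, here is a minimal Python sketch of how a UAF-style hierarchical classification might be aggregated across projects. The category paths, field names, and counting logic are illustrative assumptions, not the actual UAF taxonomy or the Vizability tool the thesis presents.

# Hypothetical sketch: hierarchical classification of usability problems
# aggregated across projects. Category paths are illustrative placeholders,
# not the actual UAF taxonomy.
from collections import Counter
from dataclasses import dataclass

@dataclass
class UsabilityProblem:
    project: str              # development effort that found the problem
    description: str
    path: tuple[str, ...]     # classification path, root to leaf

def aggregate_by_category(problems: list[UsabilityProblem]) -> Counter:
    """Count problems at every level of the hierarchy, across all projects."""
    counts: Counter = Counter()
    for p in problems:
        # Credit each ancestor node so patterns surface at any abstraction level.
        for depth in range(1, len(p.path) + 1):
            counts[p.path[:depth]] += 1
    return counts

problems = [
    UsabilityProblem("web-store", "button label unclear",
                     ("Planning", "Translation", "Wording")),
    UsabilityProblem("kiosk", "no confirmation after submit",
                     ("Assessment", "Feedback", "Existence")),
    UsabilityProblem("web-store", "icon meaning ambiguous",
                     ("Planning", "Translation", "Iconography")),
]

for path, n in sorted(aggregate_by_category(problems).items()):
    print(" > ".join(path), "->", n)

Counting at every ancestor node is one way the same records could reveal process-level patterns above any single project, which is the property the abstract attributes to the UAF's high-level classification.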
/ Master of Science
2. The User-Reported Critical Incident Method for Remote Usability Evaluation
Castillo, Jose Carlos, 29 January 1999
Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users are directly observed by trained evaluators. However, as the network itself and remote work settings have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for laboratory evaluation, and the users' work context is difficult or impossible to reproduce in a laboratory setting. These barriers led to extending the concept of usability evaluation beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings. The overarching goal of this work is to develop and evaluate a cost-effective remote usability evaluation method for real-world applications used by real users doing real tasks in real work environments. This thesis reports the development of such a method, and the results of a study to:
• investigate the feasibility and effectiveness of involving users in identifying and reporting critical incidents during usage
• investigate the feasibility and effectiveness of transforming remotely gathered critical incidents into usability problem descriptions (a sketch of this transformation follows the list)
• gain insight into various parameters associated with the method.
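As an illustration of the data flow behind the method, the following minimal Python sketch shows the kind of structured record a user-reported critical incident tool might collect, and a naive transformation into an evaluator-facing usability problem description. The schema, field names, and wording are assumptions for illustration, not the thesis's actual reporting tool.

# Hypothetical sketch of a user-reported critical incident record and a
# naive transformation into a usability problem description. Field names
# are assumptions, not the thesis's actual schema.
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    task: str          # what the user was trying to do
    expected: str      # what the user expected to happen
    observed: str      # what actually happened
    severity: int      # user-rated severity, e.g. 1 (minor) to 5 (blocking)
    context: str       # application screen or state when the incident occurred

def to_problem_description(incident: CriticalIncident) -> str:
    """Rewrite a raw incident report as an evaluator-facing problem statement."""
    return (f"[severity {incident.severity}] While attempting to "
            f"{incident.task} on {incident.context}, the user expected "
            f"{incident.expected} but observed {incident.observed}.")

report = CriticalIncident(
    task="save a draft message",
    expected="a confirmation that the draft was stored",
    observed="the editor closed with no feedback",
    severity=4,
    context="the compose screen",
)
print(to_problem_description(report))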
/ Master of Science