1

Remote Usability Evaluation Tool

Kodiyalam, Narayanan Gopalakrishnan, 27 June 2003
Interactive system developers spend most of their time and resources on user interface evaluation in traditional usability laboratories. As the network itself and the remote work setting have become part of usage patterns, evaluators have only limited access to representative users for user interface evaluation, and reproducing the user's work context in a laboratory setting is difficult. These problems have led to the concept of remote usability evaluation, which takes interface evaluation of an application beyond the laboratory setting. The main aim of this thesis work is to develop a tool that can record problems faced by remote users in the form of text and video. The text report and the video, a capture of the sequence of the user's actions while encountering the problem, help evaluators prepare usability problem descriptions. This thesis reports the development of the remote usability evaluation method and the usability evaluations performed to enhance the features offered by the tool. / Master of Science
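The abstract describes a tool that pairs a user's textual problem report with a video of the actions leading up to the problem. The following is a minimal, illustrative sketch of what such a remote report record could look like; the field names and JSON packaging are assumptions for illustration, not details taken from the thesis.

# Illustrative sketch only: one plausible shape for a remote usability report
# (text plus a pointer to a video clip). Field names are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class RemoteUsabilityReport:
    """One user-submitted problem report: text plus a recorded action sequence."""
    reporter_id: str       # anonymous identifier for the remote user
    task: str              # what the user was trying to do
    problem_text: str      # the user's own description of the problem
    video_clip_path: str   # recording of the actions preceding the problem
    reported_at: str       # ISO-8601 timestamp of the report


def build_report(reporter_id: str, task: str, problem_text: str,
                 video_clip_path: str) -> str:
    """Package a report as JSON, ready to send to the evaluators' collection point."""
    report = RemoteUsabilityReport(
        reporter_id=reporter_id,
        task=task,
        problem_text=problem_text,
        video_clip_path=video_clip_path,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(report), indent=2)


if __name__ == "__main__":
    print(build_report("user-042", "export project as PDF",
                       "Export dialog froze after choosing a folder",
                       "clips/user-042-session.avi"))

A structured record like this would give evaluators both the user's wording and a pointer to the action sequence, which is the raw material the abstract says is used to prepare usability problem descriptions.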
2

Comparative Study of Synchronous Remote and Traditional In-Lab Usability Evaluation Methods

Selvaraj, Prakaash V., 28 May 2004
Traditional in-lab usability evaluation has been used as the 'standard' method for evaluating and improving the usability of software user interfaces (Andre, Williges, & Hartson, 2000). However, traditional in-lab evaluation has drawbacks, such as limited availability of representative end users, the high cost of testing, and the lack of a true representation of a user's actual work environment. To address these issues, various alternative and less expensive usability evaluation methods (UEMs) have been developed over the past decade. One such UEM is remote usability evaluation. Remote evaluation is a relatively new area and lacks empirical data supporting the approach; this study addressed that need. The overall purpose of this study was to determine the differences in effectiveness between two evaluation types, the synchronous remote evaluation method (SREM) and the traditional in-lab approach, in collecting usability data. The study also compared the effectiveness of the two methods by user type: usability-novice users and usability-experienced users. Finally, the hypothesis that users in general would prefer the remote reporting approach to the traditional in-lab approach was tested. Results indicated that, in general, the synchronous remote approach is at least as effective as the traditional in-lab approach in collecting usability data across all user types. However, when user type was taken into account, there was a significant difference between the two approaches in the high-severity negative critical incident data collected for the novice user group: the traditional approach collected significantly more high-severity negative critical incident data than the remote approach. Additionally, results indicate that users tend to be more willing to participate again in the approach in which they previously participated. Recommendations for usability evaluators conducting the SREM approach and areas for future research are identified in the study. / Master of Science
3

The User-Reported Critical Incident Method for Remote Usability Evaluation

Castillo, Jose Carlos, 29 January 1999
Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users are directly observed by trained evaluators. However, as the network itself and the remote work setting have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for usability evaluation in the laboratory, and the users' work context is difficult or impossible to reproduce in a laboratory setting. These barriers have led to extending usability evaluation beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings. The over-arching goal of this work is to develop and evaluate a cost-effective remote usability evaluation method for real-world applications used by real users doing real tasks in real work environments. This thesis reports the development of such a method and the results of a study to:
• investigate the feasibility and effectiveness of involving users in identifying and reporting critical incidents in usage
• investigate the feasibility and effectiveness of transforming remotely gathered critical incidents into usability problem descriptions
• gain insight into various parameters associated with the method. / Master of Science
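The second bullet above concerns turning remotely gathered critical incident reports into usability problem descriptions. The sketch below shows one plausible grouping step an evaluator's tooling might use; the record fields and the choice of grouping key (the interface component named in the report) are hypothetical illustrations, not the thesis's actual procedure.

# Illustrative sketch only: grouping user-reported critical incidents by the
# interface component they mention, as a starting point for writing usability
# problem descriptions. Fields and grouping key are assumptions.
from collections import defaultdict
from typing import Dict, List, NamedTuple


class CriticalIncident(NamedTuple):
    component: str    # interface component the user associated with the incident
    severity: int     # user-rated severity, e.g. 1 (low) to 4 (high)
    description: str  # the user's own words


def group_into_problem_descriptions(
        incidents: List[CriticalIncident]) -> Dict[str, dict]:
    """Cluster incidents by component and summarize each cluster."""
    clusters: Dict[str, List[CriticalIncident]] = defaultdict(list)
    for incident in incidents:
        clusters[incident.component].append(incident)

    problems = {}
    for component, items in clusters.items():
        problems[component] = {
            "incident_count": len(items),
            "max_severity": max(i.severity for i in items),
            "descriptions": [i.description for i in items],
        }
    return problems


if __name__ == "__main__":
    sample = [
        CriticalIncident("search box", 3, "No results even for exact titles"),
        CriticalIncident("search box", 4, "Query cleared itself after an error"),
        CriticalIncident("export dialog", 2, "Unclear which format is the default"),
    ]
    for component, summary in group_into_problem_descriptions(sample).items():
        print(component, summary)

An evaluator would still review each cluster by hand; the point of the sketch is only that reports collected remotely can be organized before being rewritten as usability problem descriptions.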
