  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

An Assessment of the Usability Quality Attribute in Open Source Software

Yelleswarapu, Mahesh Chandra January 2010 (has links)
Usability is one of the important quality attributes. Open source software products are well known for their efficiency and effectiveness. Lack of usability in OSS (Open Source Software) products will result in poor usage of the product. In OSS development there is no usability team, and one could therefore expect that the usability would be low for these products. In order to find out if this was really the case, we performed a usability evaluation using a questionnaire for four OSS products. The questionnaire was based on a review of existing literature and was presented to 17 people who work with open source products. This evaluation showed that the overall usability was above average for all four products. It seems, however, that the lack of a usability team has made the OSS products less easy to use for inexperienced users. Based on the responses to the questionnaire and a literature review, a set of guidelines and hints for increasing the usability of OSS products was defined.
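The scoring approach described above (questionnaire responses aggregated per product and compared against an "average" level) can be sketched as follows. The 1-5 Likert scale, the midpoint of 3 as "average", and the sample ratings are illustrative assumptions, not details taken from the thesis:

```python
# Hypothetical sketch: aggregate Likert-style questionnaire responses
# per product. Assumes a 1-5 scale where 3 is the "average" midpoint;
# scale and data are illustrative, not taken from the thesis.

def mean_score(responses):
    """Mean of all ratings one respondent gave for one product."""
    return sum(responses) / len(responses)

def product_usability(all_responses):
    """Average the per-respondent means for a single product."""
    per_respondent = [mean_score(r) for r in all_responses]
    return sum(per_respondent) / len(per_respondent)

# Two made-up respondents rating one OSS product on four questions.
ratings = [[4, 3, 5, 4], [3, 4, 4, 3]]
score = product_usability(ratings)      # 3.75 for this sample data
above_average = score > 3.0             # 3.0 = assumed scale midpoint
```

With real data, the same per-respondent averaging would be repeated for each of the four products before comparing them against the midpoint.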
12

Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems

Andre, Terence Scott 17 April 2000 (has links)
The need for cost-effective usability evaluation has led to the development of methodologies to support the usability practitioner in finding usability problems during formative evaluation. Even though various methods exist for performing usability evaluation, practitioners seldom have the information needed to decide which method is appropriate for their specific purpose. In addition, most methods do not have an integrated relationship with a theoretical foundation for applying the method in a reliable and efficient manner. Practitioners often have to apply their own judgment and techniques, leading to inconsistencies in how the method is applied in the field. Usability practitioners need validated information to determine if a given usability evaluation method is effective and why it should be used instead of some other method. Such a desire motivates the need for formal, empirical comparison studies to evaluate and compare usability evaluation methods. In reality, the current data for comparing usability evaluation methods suffers from a lack of consistent measures, standards, and criteria for identifying effective methods. The work described here addresses three important research activities. First, the User Action Framework was developed to help organize usability concepts and issues into a knowledge base that supports usability methods and tools. From the User Action Framework, a mapping was made to the Usability Problem Inspector, a tool to help practitioners conduct a highly focused inspection of an interface design. Second, the reliability of the User Action Framework was evaluated to determine if usability practitioners could use the framework in a consistent manner when classifying a set of usability problems.
Third, a comprehensive comparison study was conducted to determine if the Usability Problem Inspector, based on the User Action Framework, could produce results just as effective as two other inspection methods (i.e., the heuristic evaluation and the cognitive walkthrough). The comparison study used a new comparison approach with standards, measures, and criteria to prove the effectiveness of methods. Results from the User Action Framework reliability study showed higher agreement scores at all classification levels than was found in previous work with a similar classification tool. In addition, agreement using the User Action Framework was stronger than the results obtained from the same experts using the heuristic evaluation. From the inspection method comparison study, results showed the Usability Problem Inspector to be more effective than the heuristic evaluation and consistent with effectiveness scores from the cognitive walkthrough. / Ph. D.
13

Usability and Reliability of the User Action Framework: A Theoretical Foundation for Usability Engineering Activities

Sridharan, Sriram 18 December 2001 (has links)
Various methods exist for performing usability evaluations, but there is no systematic framework for guiding and structuring assessment and reporting activities (Andre et al., 2000). Researchers at Virginia Tech have developed a theoretical foundation called the User Action Framework (UAF), which is an adaptation and extension of Norman's action model (1986). The main objective of developing the User Action Framework was to provide usability practitioners with a reliable and structured tool set for usability engineering support activities like classifying and reporting usability problems. In practice, the tool set has a web-based interface, with the User Action Framework serving as an underlying foundation. To be an effective classification and reporting tool, the UAF should be usable and reliable. This work addressed two important research activities to help determine the usability and reliability of the User Action Framework. First, we conducted a formative evaluation of the UAF Explorer, a component of the UAF, and its content. This evaluation identified usability problems and led to a re-design effort to fix them and to provide an interface that resulted in a more efficient and satisfying user experience. Another purpose of this research was to conduct a reliability study to determine if the User Action Framework showed significantly better than chance agreement when usability practitioners classified a given set of usability problem descriptions according to the structure of the UAF. The User Action Framework showed higher agreement scores compared to previous work using the tool. / Master of Science
14

The LibX Edition Builder

Gaat, Tilottama 07 January 2009 (has links)
LibX is a browser plugin that allows users to access library resources directly from their browser. Many libraries that wished to adopt LibX needed to customize a version of LibX for their own institution, but most librarians did not possess the knowledge of XML, running scripts, and the underlying implementation of LibX required to create customized, functional LibX versions. Therefore, we have developed a web-based tool called the LibX Edition Builder that empowers librarians to create their own customized LibX versions (editions) effortlessly. The Edition Builder provides rich interactivity to its users by exploiting the ZK AJAX framework, whose components we adapted. It also provides automatic detection of relevant library resources based on several heuristics we developed, which reduces the time and effort required to configure these resources. We have used sound software engineering techniques such as agile development principles, code generation techniques, and the model-view-controller design paradigm to maximize the maintainability of the Edition Builder, which enables us to easily incorporate changing functional requirements. The LibX Edition Builder is currently used by over 800 registered users, who have created over 400 editions. We have carried out a custom log-based usability evaluation that examined the interactions of our users over a 5-month period. This evaluation has shown that the Edition Builder can dramatically reduce the time needed to customize LibX editions and is being increasingly adopted by the library community. / Master of Science
15

Usability Problem Diagnosis tool: Development and Evaluation

Mahajan, Reenal R. 15 July 2003 (has links)
Usability evaluation typically uncovers several usability problems, yet the non-UE developer is often not part of the evaluation, both because a developer's presence might deter participants from reporting all the errors and because conducting usability evaluation is the usability engineer's responsibility. The evaluator therefore needs to create unambiguous usability problem reports that will help the developer fix the usability problems. This research involves the development and evaluation of the Usability Problem Diagnosis tool, which supports problem diagnosis through analysis and storage in a common database shared between the evaluation and development teams. This tool uses the User Action Framework as an underlying knowledge base to support problem diagnosis. / Master of Science
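A problem report of the kind such a shared database might hold can be modeled roughly as below. All field names, the severity scale, and the UAF category string are hypothetical illustrations, not the tool's actual schema:

```python
# Hypothetical sketch of a usability problem report stored in a database
# shared between evaluators and developers; fields, the 1-4 severity
# scale, and the UAF category string are illustrative only.
from dataclasses import dataclass

@dataclass
class UsabilityProblemReport:
    problem_id: int
    description: str    # unambiguous description aimed at the developer
    uaf_category: str   # diagnosis node from the underlying knowledge base
    severity: int       # e.g. 1 (cosmetic) .. 4 (critical), assumed scale
    suggested_fix: str = ""

reports = [
    UsabilityProblemReport(1, "Save button label is unclear",
                           "Planning/Translation", 2,
                           "Rename button to 'Save draft'"),
]
# Developers could then triage, e.g. pulling out the critical reports:
critical = [r for r in reports if r.severity >= 3]
```

The point of such a structure is that the diagnosis (the UAF category) travels with the report, so the developer sees not just the symptom but its classification.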
16

Investigating the Effectiveness of Applying the Critical Incident Technique to Remote Usability Evaluation

Thompson, Jennifer Anne 06 January 2000 (has links)
Remote usability evaluation is a usability evaluation method (UEM) in which the experimenter, performing observation and analysis, is separated in space and/or time from the user. There are several approaches by which to implement remote evaluation, limited only by the availability of supporting technology. One such implementation is RECITE (the REmote Critical Incident TEchnique), an adaptation of the user-reported critical incident technique developed by Castillo (1997). This technique requires that trained users, working in their normal work environment, identify and report critical incidents: interactions with a system feature that prove to be particularly easy or difficult, leading to extremely good or extremely poor performance. Critical incident reports are submitted through an on-line reporting tool to the experimenter, who is responsible for compiling them into a list of usability problems. Support for this approach to remote evaluation has been reported (Hartson, H.R., Castillo, J.C., Kelso, J., and Neale, W.C., 1996; Castillo, 1997). The purpose of this study was to quantitatively assess the effectiveness of RECITE with respect to traditional, laboratory-based applications of the critical incident technique. A 3x2x5 mixed-factor experimental design was used to compare the frequency and severity ratings of critical incidents reported by remote versus laboratory-based users. Frequency was measured by the number of critical incident reports submitted, and severity was rated along four dimensions: task frequency, impact on task performance, impact on satisfaction, and error severity. This study also compared critical incident data reported by trained users versus by usability experts observing end-users. Finally, changes in critical incident data reported over time were evaluated. In total, 365 critical incident reports were submitted, containing 117 unique usability problems and 50 usability success descriptions.
Critical incidents were classified using the Usability Problem Inspector (UPI). A higher number of web-based critical incidents occurred during Planning than expected. The distribution of voice-based critical incidents differed among participant groups: users reported a greater than expected number of Planning incidents, while experts reported fewer than expected Assessment incidents. The usability experts' performance was not correlated, requiring that separate analyses be conducted for each expert's data set. Support for the effectiveness of applying critical incidents to remote usability evaluation was demonstrated, with all research hypotheses at least partially supported. Usability experts gave significantly different ratings of impact on task performance than did user reporters. Comparing remote with laboratory-based users failed to reveal differences in all but one measure: laboratory-based users reported more positive critical incidents for the voice interface than did remote users. In general, the number of negative critical incidents decreased over time; a similar result did not apply to the number of positive critical incidents. It was concluded that RECITE is an effective means of capturing problem-oriented data over time. Recommendations are made for its use as a formative evaluation method applied during the latter stages of product development (i.e., when a high-fidelity prototype is available), and opportunities for future research are identified. / Master of Science
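A critical incident report rated along the four severity dimensions named in the abstract might be modeled like this. The rating ranges and the way the dimensions are combined into one score are assumptions for illustration, not the study's actual instrument:

```python
# Hypothetical sketch of a user-reported critical incident rated along
# the four severity dimensions the study names. The 1-5 ranges and the
# mean-based combination rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    feature: str
    positive: bool          # extremely good vs. extremely poor episode
    task_frequency: int     # how often the affected task occurs (1-5)
    task_performance: int   # impact on task performance (1-5)
    satisfaction: int       # impact on satisfaction (1-5)
    error_severity: int     # severity of any resulting error (1-5)

    def severity_score(self):
        """Assumed combination rule: mean of the four dimensions."""
        dims = (self.task_frequency, self.task_performance,
                self.satisfaction, self.error_severity)
        return sum(dims) / len(dims)

# A made-up negative incident submitted via the reporting tool.
incident = CriticalIncident("voice menu", positive=False,
                            task_frequency=5, task_performance=4,
                            satisfaction=4, error_severity=3)
```

The experimenter's compilation step would then group such reports by feature and compare remote versus laboratory-based distributions, as the study does.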
17

A Usability Problem Inspection Tool: Development and Formative Evaluation

Colaso, Vikrant 20 June 2003 (has links)
Usability inspection methods for user interaction designs have gained importance as an alternative to traditional laboratory-based testing methods because of their cost-effectiveness. However, methods like the heuristic evaluation are ad hoc, lacking a theoretical foundation, while other, more formal approaches like the cognitive walkthrough are tedious to perform and operate at a high level, making it difficult to sub-classify problems. This research involves the development and formative evaluation of the Usability Problem Inspection tool: a cost-effective, structured, flexible usability inspection tool that uses the User Action Framework as an underlying knowledge base. This tool offers focused inspections guided by a particular task or a combination of tasks. It is also possible to limit the scope of an inspection by applying filters or abstracting away lower-level details. / Master of Science
18

A Tale of Two Sites: An Explorative Study of the Design and Evaluation of Social Network Sites

Ahuja, Sameer 21 August 2009 (has links)
Social Network Sites allow individuals to construct a public or semi-public profile within a bounded system, articulate a list of other users with whom they share a connection, and view and traverse their list of connections and those made by others within the system. Such sites are generally centered around a particular activity, such as maintaining social relationships or uploading user-created content. Increasingly, niche domains such as education, healthcare, and software development have been exploring the creation of social network sites centered around the activities of the domain. This has led to an increasing focus on the processes involved in designing and evaluating these sites. We argue that the design and evaluation of social network sites require a specialized focus on the social utility of the features on the site. We have created two social network sites for niche communities: Colloki, a conversation platform designed for members of local communities; and CATspace, a social repository of Computer Science assignments designed for use by CS instructors and students. In this thesis, we describe the motivation, design, and implementation of these two sites. We provide a formative evaluation of the two sites, in which we evaluate usability and study the perceived social affordances of individual features across the two sites. Finally, we discuss future work towards building a framework for evaluating the social utility of Social Network Sites at a formative stage. / Master of Science
19

Blind-Specific Methods for the User-Centred Design of Multimodal Applications

Miao, Mei 18 November 2014 (has links) (PDF)
Multimodal applications offer blind users new ways to compensate, through other sensory channels, for the deficits caused by the loss of sight. User-centred design is the most reliable way to make interactive systems usable, and in this process users are mainly involved in two activities: the analysis of usage requirements and evaluation. With respect to these two activities, this thesis examines existing usability methods and develops new ones in order to support the user-centred design of multimodal applications for blind users.

For the requirements analysis, a procedure was developed that specifically accounts for the characteristics of blind users and of multimodal applications. Two steps of this procedure that are specific to this context, the elicitation of mental models and the selection of modalities, were examined in more depth. For the elicitation of mental models, two methods, teaching-back and retrospective think-aloud, were studied with blind users; both the design of the teaching-back method and the comparison of the two methods were of interest. For the modality selection, the focus was on analysing the multimodal behaviour of blind users: four input modalities (speech, touchscreen gestures, touchscreen keyboard, and touchscreen Braille) and their combinations were examined across eight task types while users operated a mobile multimodal navigation application.

Regarding usability evaluation methods, attention was first directed at scoring and eliciting blind users' mental maps, since these play an important role in the development of navigation systems. Two scoring methods for mental maps, addressing survey knowledge and route knowledge, were developed; both allow mental maps to be rated quantitatively against purpose-built criteria such as the number of elements and the properties of streets. For eliciting mental maps, two methods, reconstruction with magnetic strips and verbal description, were examined with blind participants with respect to different aspects. In two further studies, tactile paper prototyping and computer-based prototyping were compared for the early development phases, and laboratory testing and synchronous remote testing for the later phases, each with blind users. In both studies, the effectiveness of the evaluation and the insights and experiences of the participants and the test facilitator served as comparison criteria.
20

Automated Field Usability Evaluation Using Generated Task Trees

Harms, Patrick 17 December 2015 (has links)
Every product has a degree of usability. This includes software, web sites, and apps on mobile devices and televisions. In today's competitive market, usability can be a decisive factor in a product's success. This holds especially for software, since alternatives are usually quickly and easily available. Every software development effort should therefore define usability as one of its goals. To reach this goal, usability engineering continuously measures and improves a product's usability during its development and use. A range of methods exists for detecting and resolving usability problems in all project phases, but most of them can only be applied manually and are therefore costly to use.

This thesis describes a fully automated method for evaluating the usability of software. The method is user-oriented and can be applied in field studies. It first records users' actions on a software's user interface in detail. From these recordings, it computes a model of the user interface as well as so-called task trees, which model how the software is used. These two models form the basis for the subsequent detection of 14 so-called usability smells: patterns of unexpected user behaviour that indicate a problem with the software's usability. The result of the method is a detailed description of where the smells occur in the task trees and in the recorded user actions, relating the user's tasks, the corresponding problems, and the responsible elements of the graphical user interface to one another.

The method is validated on two web sites and one desktop application. First, the representativeness of the generated task trees for the users' behaviour is checked. Then usability smells are detected, and the results are analysed manually and compared with the results of established usability engineering methods. Among other things, this yields conditions that must be satisfied for the detection of usability smells.

The three case studies, and the thesis as a whole, show that the presented method is able to detect a wide variety of usability problems fully automatically. They also show that the method's results contain enough detail to describe a found problem precisely and to offer starting points for solving it. Moreover, the method can complement other usability evaluation methods and can easily be applied at large scale.
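The detection step of such an approach can be illustrated with a minimal sketch: scan a recorded stream of user actions for a suspicious pattern. The event format and this particular "repeated identical action" heuristic are illustrative assumptions, not necessarily one of the approach's 14 smells:

```python
# Hypothetical sketch of one smell detector over a recorded action log.
# Events are simplified to the id of the GUI element acted upon; the
# threshold and the pattern itself are illustrative assumptions.

def detect_repeated_actions(events, threshold=3):
    """Flag GUI elements the user acted on `threshold` or more times in
    a row, which may hint at missing feedback from the interface."""
    findings = []
    run_target, run_len = None, 0
    for target in events:
        if target == run_target:
            run_len += 1
        else:
            run_target, run_len = target, 1
        if run_len == threshold:        # report each element only once
            findings.append(target)
    return findings

# A made-up recording: the user clicks "save" three times in a row.
log = ["btn_save", "btn_save", "btn_save", "field_name", "btn_ok"]
smells = detect_repeated_actions(log)   # -> ["btn_save"]
```

A full implementation in the spirit of the thesis would run such detectors over the generated task trees rather than the raw event stream, so that a finding can be tied back to the user's task as well as the GUI element.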
