31 |
Erstellung und Evaluation von Prototypen in der Softwareentwicklung am Beispiel eines mobilen Zeiterfassungssystems [Creation and Evaluation of Prototypes in Software Development, Using a Mobile Time-Tracking System as an Example]
Winkler, Janin, 05 July 2023 (has links)
This thesis deals with the practice of prototyping and how it can be integrated into software development. This is illustrated by a practical example in which prototypes for a mobile, project-based time-tracking system are created and subsequently evaluated. The thesis begins by situating the term "prototype" and describing how prototypes can be categorized. Usability engineering process models, for example Jakob Nielsen's, are reviewed, and the role of prototyping within these processes is explained; this forms the basis for the practical part of the thesis. The theoretical part further examines the reasons for prototyping as well as suitable evaluation methods.
In the practical part, prototypes for the mobile time-tracking system are created on the basis of a competitor analysis, with the prototypes covering different user journeys. The subsequent evaluation, consisting of a cognitive walkthrough and a comparative, user-based usability test, determines which design improvements need to be made and which user journey users prefer. An outlook shows in which respects the mobile time tracking could be improved and extended, and how further insights into its usability could be gained.
List of Abbreviations
List of Figures
List of Tables
Glossary
1 Introduction
1.1 Relevance of the Topic
1.2 Aim of the Thesis
1.3 Methodological Approach and Structure of the Thesis
2 Introduction to the Prototyping Process
2.1 Situating the Term "Prototype"
2.2 Prototyping as Part of Usability Engineering
2.2.1 The "Usability Engineering" Process Model by Sarodnick and Brau
2.2.2 Nielsen's Usability Engineering Lifecycle
2.3 Categorization of Prototypes
2.3.1 By Functional Scope and Depth
2.3.2 By Fidelity of Representation
2.4 Reasons for Prototyping
3 Evaluation of Prototypes
3.1 Reasons for Evaluating Prototypes
3.2 Suitable Evaluation Methods
3.2.1 Testing with Users
3.2.2 Inspection-Based Evaluation
4 Approach and Methodology
4.1 Specification of the Problem Statement
4.2 Requirements of the Company
4.3 Detailed Description of the Study Methodology
5 Creating Prototypes for a Mobile Time-Tracking System
5.1 Purpose of the Prototypes in the NewTimePLUS Project
5.2 Competitor Analysis and Derivation of Requirements
5.3 Explanation of the Prototype Designs
5.3.1 Design of User Journey 1
5.3.2 Design of User Journey 2
5.3.3 Design of User Journey 3
5.3.4 Design of the Search Function
6 Evaluation of the Prototypes
6.1 Description of the Evaluation Procedure
6.1.1 Cognitive Walkthrough
6.1.1.1 Preparation Phase
6.1.1.2 Analysis Phase
6.1.2 Usability Test with the System Usability Scale
6.2 Presentation of the Results
6.2.1 Results of the Cognitive Walkthrough
6.2.2 Results of the Usability Test
7 Discussion of the Results
7.1 Assessment of the Results
7.2 Conclusions
8 Conclusion of the Study
8.1 Summary of the Results
8.2 Outlook
Bibliography
Declaration of Independent Work
List of Appendices
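Section 6.1.2 pairs the usability test with the System Usability Scale (SUS). For readers unfamiliar with the instrument, the sketch below shows the standard Brooke scoring of a ten-item SUS questionnaire; the thesis's exact analysis procedure is not reproduced here, and the sample responses are invented.

```python
def sus_score(responses):
    """Score ten 1-5 Likert responses with standard SUS scoring:
    odd items contribute (response - 1), even items (5 - response),
    and the sum is scaled by 2.5 onto a 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i == 0 is item 1, an odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Invented responses from one participant for one prototype
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Scoring each participant this way for each prototype variant yields the comparable 0-100 figures a comparative usability test needs.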
|
32 |
Implementing Usability Testing Of Technical Documents At Any Company And On Any Budget
Collins, Meghan, 01 January 2010 (has links)
In my thesis I discuss the cost effectiveness of usability testing of technical documents and how a company of any size, with any size of budget, can implement usability testing. Usability is achieved when the people who use products or technical documents can do so quickly and easily to accomplish their own tasks. Usability testing is best defined as the process of studying users to determine a documentation project's effectiveness for its intended audience. Users are tired of dealing with confusing and unintuitive technical documentation that forces them either to call customer service for help on simple issues or to throw out the product in favor of one that is more usable or provides better technical documentation. That is why all technical communicators should include usability testing as part of the technical documentation production cycle. To help technical communicators understand the importance of usability testing, I discuss its cost effectiveness and share ways that companies with large budgets and companies with small budgets can begin incorporating usability testing. I then provide information on all the steps necessary for technical communicators to implement usability testing of technical documentation at their company. Options are presented for everything from bare-minimum usability testing on a shoestring budget, with pencils, note pads, and only a handful of users, to full-scale usability testing in large laboratories with the latest equipment and a wide variety of users. The research provides examples from real companies, advice from experienced technical communicators and usability experts, and research demonstrating how many resources are truly required to benefit from usability testing. By showing technical communicators that usability testing is cost effective and that there are many options for implementing it no matter how large or small their budget, I hope to empower technical communicators to start including usability testing as part of the documentation production cycle at their companies.
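The claim that "only a handful of users" can deliver most of the benefit is commonly backed by Nielsen and Landauer's problem-discovery model, found(n) = 1 − (1 − p)^n. The sketch below illustrates it; the figure p = 0.31 is the average per-user discovery rate Nielsen reports, not a number from this thesis.

```python
def proportion_found(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems found by n test users under the
    Nielsen-Landauer model, where p is the probability that a single user
    exposes any given problem: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")
# With p = 0.31, five users already expose roughly 84% of the problems,
# which is why shoestring tests with a handful of users remain worthwhile.
```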
|
33 |
Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems
Andre, Terence Scott, 17 April 2000 (has links)
The need for cost-effective usability evaluation has led to the development of methodologies to support the usability practitioner in finding usability problems during formative evaluation. Even though various methods exist for performing usability evaluation, practitioners seldom have the information needed to decide which method is appropriate for their specific purpose. In addition, most methods do not have an integrated relationship with a theoretical foundation for applying the method in a reliable and efficient manner. Practitioners often have to apply their own judgment and techniques, leading to inconsistencies in how the method is applied in the field. Usability practitioners need validated information to determine if a given usability evaluation method is effective and why it should be used instead of some other method. Such a desire motivates the need for formal, empirical comparison studies to evaluate and compare usability evaluation methods. In reality, the current data for comparing usability evaluation methods suffers from a lack of consistent measures, standards, and criteria for identifying effective methods.
The work described here addresses three important research activities. First, the User Action Framework was developed to help organize usability concepts and issues into a knowledge base that supports usability methods and tools. From the User Action Framework, a mapping was made to the Usability Problem Inspector, a tool to help practitioners conduct a highly focused inspection of an interface design. Second, the reliability of the User Action Framework was evaluated to determine if usability practitioners could use the framework in a consistent manner when classifying a set of usability problems. Third, a comprehensive comparison study was conducted to determine if the Usability Problem Inspector, based on the User Action Framework, could produce results just as effective as two other inspection methods (i.e., the heuristic evaluation and the cognitive walkthrough). The comparison study used a new comparison approach with standards, measures, and criteria for determining the effectiveness of methods. Results from the User Action Framework reliability study showed higher agreement scores at all classification levels than were found in previous work with a similar classification tool. In addition, agreement using the User Action Framework was stronger than the results obtained from the same experts using the heuristic evaluation. From the inspection method comparison study, results showed the Usability Problem Inspector to be more effective than the heuristic evaluation and consistent with the effectiveness scores of the cognitive walkthrough. / Ph. D.
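The "agreement scores" from the reliability study quantify how consistently different practitioners classify the same usability problems. As an illustration of how such agreement can be computed, Cohen's kappa is a common choice (not necessarily the statistic used in this work); the classifications below are invented:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of ten problems into UAF-style stages
a = ["planning", "translation", "assessment", "planning", "translation",
     "assessment", "planning", "translation", "planning", "assessment"]
b = ["planning", "translation", "assessment", "translation", "translation",
     "assessment", "planning", "planning", "planning", "assessment"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> kappa = 0.70
```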
|
34 |
Investigating the Effectiveness of Applying the Critical Incident Technique to Remote Usability Evaluation
Thompson, Jennifer Anne, 06 January 2000 (has links)
Remote usability evaluation is a usability evaluation method (UEM) where the experimenter, performing observation and analysis, is separated in space and/or time from the user. There are several approaches by which to implement remote evaluation, limited only by the availability of supporting technology. One such implementation is RECITE (the REmote Critical Incident TEchnique), an adaptation of the user-reported critical incident technique developed by Castillo (1997). This technique requires that trained users, working in their normal work environment, identify and report critical incidents: interactions with a system feature that prove to be particularly easy or difficult, leading to extremely good or extremely poor performance. Critical incident reports are submitted via an on-line reporting tool to the experimenter, who is responsible for compiling them into a list of usability problems. Support for this approach to remote evaluation has been reported (Hartson, H.R., Castillo, J.C., Kelso, J., and Neale, W.C., 1996; Castillo, 1997).
The purpose of this study was to quantitatively assess the effectiveness of RECITE with respect to traditional, laboratory-based applications of the critical incident technique. A 3×2×5 mixed-factor experimental design was used to compare the frequency and severity ratings of critical incidents reported by remote versus laboratory-based users. Frequency was measured by the number of critical incident reports submitted, and severity was rated along four dimensions: task frequency, impact on task performance, impact on satisfaction, and error severity. This study also compared critical incident data reported by trained users with that reported by usability experts observing end-users. Finally, changes in critical incident data reported over time were evaluated.
In total, 365 critical incident reports were submitted, containing 117 unique usability problems and 50 usability success descriptions. Critical incidents were classified using the Usability Problem Inspector (UPI). A higher number of web-based critical incidents occurred during Planning than expected. The distribution of voice-based critical incidents differed among participant groups: users reported a greater than expected number of Planning incidents while experts reported fewer than expected Assessment incidents. Usability expert performance was not correlated, requiring that separate analyses be conducted for each expert data set.
Support for the effectiveness of applying the critical incident technique to remote usability evaluation was demonstrated, with all research hypotheses at least partially supported. Usability experts gave significantly different ratings of impact on task performance than did user reporters. Comparing remote users with laboratory-based users revealed a difference on only one measure: laboratory-based users reported more positive critical incidents for the voice interface than did remote users. In general, the number of negative critical incidents decreased over time; the same did not hold for the number of positive critical incidents.
It was concluded that RECITE is an effective means of capturing problem-oriented data over time. Recommendations are made for its use as a formative evaluation method applied during the latter stages of product development (i.e., when a high-fidelity prototype is available), and opportunities for future research are identified. / Master of Science
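To make the reporting structure concrete, here is a hypothetical sketch of a critical incident record carrying the four severity dimensions rated in the study; the field names and scales are invented for illustration and are not taken from RECITE's actual reporting tool.

```python
from dataclasses import dataclass

@dataclass
class CriticalIncidentReport:
    """One user-reported critical incident with the study's four
    severity dimensions (illustrative 1-5 scales)."""
    description: str
    positive: bool              # extremely good vs. extremely poor interaction
    task_frequency: int         # how often the affected task is performed
    impact_on_performance: int  # effect on task performance
    impact_on_satisfaction: int # effect on user satisfaction
    error_severity: int         # severity of any resulting error

report = CriticalIncidentReport(
    description="Search results vanished after pressing the back button",
    positive=False,
    task_frequency=4,
    impact_on_performance=5,
    impact_on_satisfaction=4,
    error_severity=3,
)
reports = [report]
negatives = [r for r in reports if not r.positive]  # e.g. to track trends over time
```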
|
35 |
Developing and Evaluating the (LUCID/Star)* Usability Engineering Process Model
Helms, James W., 14 May 2001 (has links)
In recent years, interactive systems developers have increasingly included usability engineering and interaction design as an integral part of software development. With recognition of the importance of usability come attempts to structure this new aspect of system design, leading to a variety of processes and methodologies. Unfortunately, these processes have often lacked flexibility, completeness and breadth of coverage, customizability, and tool support. This thesis presents the development of a process model, which we call (LUCID/Star)*, that addresses and overcomes these shortcomings of existing methodologies, together with an evaluation of its application in a real-world development environment. To demonstrate this, we have used a combination of empirical and analytical evidence.
The (LUCID/Star)* process model for usability engineering grew out of the examination, adaptation, and extension of several existing usability and software methodologies. The methods that most strongly influenced the creation of (LUCID/Star)* were the LUCID framework of interaction design, the Star life cycle of usability engineering, and the Waterfall and Spiral models of software engineering. Unlike most of these models, (LUCID/Star)* treats development as a sequence of cycles, each of which produces a product evolution; we have found this to be a more effective analogy for the interaction development process, since a sequence of cycles is more modular and makes it easier to focus on each cycle separately. Working with Optim Systems, Inc. in Falls Church, VA, we instantiated the process model and introduced it as the process for developing a web-based device management system. (LUCID/Star)* performed remarkably well in the Optim case, overcoming tight budget and schedule constraints to produce an excellent prototype of the system. / Master of Science
|
36 |
Enhancing usability using automated security interface adaptation (ASIA)
Zaaba, Zarul Fitri, January 2014 (has links)
Many users are now significantly dependent upon computer applications. Whilst many aspects are used very successfully, one area in which usability difficulties continue to be encountered is security. This can become particularly acute in situations where users are required to interact and make decisions, and a key context here is typically when they need to respond to security warnings. The current implementation of security warnings can often be considered an attempt to offer a one-size-fits-all solution, yet many implementations still lack the ability to provide meaningful and effective warnings. As such, this research focuses upon achieving a better understanding of the elements that aid end-users in comprehending warnings, the difficulties with current approaches, and the resulting requirements for improving the design and implementation of such security dialogues. In the early stage of the research, a survey was undertaken to investigate perceptions of security dialogues in practice, with a specific focus upon security warnings issued within web browsers. This provided empirical evidence of end-users' experiences and revealed notable difficulties in their understanding and interpretation of the security interactions. Building upon this, the follow-up research investigated understanding of application-level security warnings in wider contexts, looking firstly at users' interpretation of what constitutes a security warning and then at their level of comprehension when related warnings occurred. These results confirmed the need to improve the dialogues so that end-users are able to act appropriately, and consequently motivated the design and prototype implementation of a novel architecture to improve security warnings, titled Automated Security Interface Adaptation (ASIA). The ASIA approach aims to improve security warnings by tailoring the interaction more closely to individual user needs. By automatically adapting the presentation to match each user's understanding and preferences, security warnings can be modified in ways that enable users to better comprehend them, and thus make more informed security decisions and choices. A comparison of the ASIA-adapted interfaces with standard versions of the warnings revealed that the modified versions were better understood. As such, the ASIA approach has significant potential to assist (and thereby protect) the end-user community in their future interactions with security.
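As a rough illustration of the adaptation idea, the sketch below chooses a warning presentation from a user profile; the profile fields, rules, and wording are invented for this example and do not reproduce ASIA's actual architecture.

```python
def adapt_warning(base_warning: str, profile: dict) -> str:
    """Tailor a security warning to a (hypothetical) user profile:
    experts get the technical detail, novices a plain-language
    explanation with a recommended safe action."""
    if profile.get("expertise") == "expert":
        return f"{base_warning} (certificate chain could not be verified)"
    return ("This site could not prove its identity, so someone may be able "
            "to see or change what you send. Recommended action: go back.")

print(adapt_warning("SSL certificate validation failed",
                    {"expertise": "novice"}))
```

The point of the adaptation is that the same underlying event yields different dialogues, each matched to what its reader can understand and act on.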
|
37 |
Shipping usability : How to evaluate a graphical user interface with little or no access to end users
Samuelsson, Annelie, January 2010 (has links)
<p>Interaction design is about designing interactive things so that they become usable. An interaction designer’s goal is therefore to design things not only right but also to design the right things, this is called usability. In this thesis the aim is to examine how to best evaluate a user interface that is in the final design phase and that has not involved the end user in its development at all up to this stage. This thesis examined the graphical user interface of GACship III, a system used to accurately record, approve and request payment for all services/charges incurred during port/off-port calls. Three inspection methods and three test methods were investigated. This was done to determine which ones that is appropriate to use during an evaluation with little or no access to end users since this is one of the problem that GAC is facing and since this study only had access to two end users. The system, GACship III, is in the final development phase and so far the development has been made without involving the end users. A checklist for usability evaluations was developed through studying four renowned design principles in the form of Maeda’s, Raskin’s, Nielsen’s and Norman’s view of usability. The results showed that a heuristic evaluation identifies more usability problems than a digital questionnaire. Probably because the heuristic evaluation gave room for more reflections and comments and therefore turned out to be a more in depth evaluation technique. The digital questionnaire proved to be a weaker method under these conditions, but all in all, the two methods complemented each other. The results also indicated a number of usability problems in GACship III, which implied that the system is not fully efficient. The graphical user interface contained for example a severe mode error together with an unreliable drop down menu. The system consisted of parts where the usability was considered satisfactory. However, those findings will not be discussed in this thesis. In order to improve the systems usability GAC is encouraged to rectify the discrepancies. The result of the study is in addition a usability checklist that can be used during further and future graphical user interface development at GAC.</p><p><strong>Keywords: </strong>Usability, evaluation, interface, checklist, shipping. </p>
|
38 |
Reducing the risks of telehealthcare expansion through the automation of efficiency evaluation
Alexandru, Cristina Adriana, January 2015 (has links)
Several European countries, including the UK, are investing in large-scale telehealthcare pilots to thoroughly evaluate the benefits of telehealthcare. Due to the high level of risk associated with such projects, it becomes desirable to be able to predict the success of telehealthcare systems in potential deployments, in order to inform investment and help save resources. An important factor for the success of any telehealthcare deployment is usability, as it helps to achieve the benefits of the technology through increased productivity, decreased error rates, and better acceptance. In particular, efficiency, one of the characteristics of usability, should be seen as a central measure for success, as the timely care of a high number of patients is one of the important claims of telehealthcare. Despite the recognized importance of usability, it is treated as secondary in the design of telehealthcare systems, and the resulting problems are difficult to predict due to the heterogeneity of deployment contexts. This thesis proposes the automation of usability evaluation through the use of modelling and simulation techniques. It describes a generic methodology which can guide a modeller in reusing models for predicting characteristics of usability within different deployment sites. It also describes a modelling approach, to be used together with the methodology, that runs in parallel a user model, inspired by a cognitive architecture, and a system model, represented as a basic labelled transition system. The approach simulates a user working with a telehealthcare system, within her environment, to predict the efficiency of the system and the work process surrounding it. The modeller can experiment with different inputs to the models in terms of user profile, workload, ways of working, and system design, to model different potential (real or hypothetical) deployments and obtain efficiency predictions for each. A comparison of the predictions helps analyse the effects on efficiency of changes in deployments. The work is presented as an experimental investigation, but emphasises the great potential of modelling and simulation for helping to inform investment, reduce costs, mitigate risks, and suggest changes that would be necessary for improving the usability, and therefore the success, of telehealthcare deployments. My vision is that, if used commercially, the approaches presented in this thesis could help reduce the risks of scaling up telehealthcare deployments.
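A toy version of the parallel simulation is sketched below: the system is a small labelled transition system, the "user" is reduced to per-action think-and-act times, and the output is a predicted efficiency figure. States, actions, and timings are all invented; the thesis's models are far richer.

```python
# System model: a basic labelled transition system, state -> {action: next state}
lts = {
    "patient_list": {"open_record": "record"},
    "record":       {"enter_vitals": "vitals_form", "back": "patient_list"},
    "vitals_form":  {"submit": "patient_list"},
}

# Stand-in user model: seconds of thinking plus acting per action, crudely
# replacing a cognitive-architecture-style user (values are hypothetical)
action_time = {"open_record": 2.0, "enter_vitals": 1.5, "submit": 8.0, "back": 1.0}

def simulate(workflow, start="patient_list"):
    """Step a simulated user through the LTS, accumulating predicted time."""
    state, elapsed = start, 0.0
    for action in workflow:
        assert action in lts[state], f"'{action}' unavailable in '{state}'"
        elapsed += action_time[action]
        state = lts[state][action]
    return elapsed

per_patient = simulate(["open_record", "enter_vitals", "submit"])
print(f"{per_patient:.1f}s per patient -> "
      f"{30 * per_patient / 60:.0f} min for a 30-patient workload")
```

Varying the timings (user profile), the workflow (ways of working), or the transition system (system design) yields the per-deployment predictions that the methodology then compares.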
|
39 |
HCI factors affecting the mobile internet uptake in Jordan
Omar, Firas Y., January 2012 (has links)
The aim of this research is to highlight the factors and barriers that render mobile phone users in Jordan averse to using their handsets as an internet platform. Three studies were conducted to achieve this aim, using both quantitative and qualitative approaches throughout. Data was collected from participants using questionnaires, open-ended questions and sketching techniques. Firstly, mobile internet usage in Jordan was explored in its wider sense. On the basis of these results, the second study compared PC and mobile internet use; this comparison revealed a preference for PC internet over mobile internet. The study covered aspects such as usability, familiarity, achievement and satisfaction in dealing with both mobile and stationary internet tools. The third study was divided into two parts. The first part required participants to design, using a sketching technique, a mobile application for handling a critical issue (car violations), to establish whether internet users in Jordan could perform tasks on a mobile platform that they currently perform on stationary internet tools. The second part was an evaluation of this prototype application. The results revealed that participants found the application very easy and useful, and they added that they would benefit from using such applications in their lives. An issue of security and trust was observed in relation to the payment facility provided in the application: participants were cautious and declined to use any "untrusted" method of payment. In addition to lacking trust in e-commerce, participants lacked trust and confidence in online payment methods, and stated that they would not recommend the payment option to anyone. Finally, the outcome of the study showed that the application is a novel idea in Jordan and very easy to handle and use. Participants commented that it was easy to interact with the mobile application to complete different tasks. The key benefit of the application for participants lies in saving time by avoiding long queues at the Traffic Department.
|
40 |
En utvärdering av programmet Voddlers användbarhet / An usability evaluation of the program Voddler
Sharifpour, Omid; Conradsson, Christian, January 2009 (has links)
<p>The purpose of this essey is through empirical methods investigate usability factors on Video On Demand applications for the Internet. More specificly we will focus on a application called Voddler. The purpose is to identify usability problems that exist in Voddler, and present the reader with suggestions on possible solutions. This could be used as guidelines to how to design for usability in this kind of system. We will use an online survey to investigate Voddler usability and use this data as a basis for our analysis. We will also conduct a expert evaluation of the system. The data collected from the survey will be compared to the expert evalutation and different theories behind usability. We will come to the conclusion that through a usability perspective Voddler has designed the software as an interactive Video On Demand service meanwhile the target audience want the application to function more like an normal computer program. This causes a conflict between the two that has to be resolved if one wants to optimize usability this kind of software.</p>
|