1 |
An exploration of the challenges facing CEOs of privatised utilities and their response to those challenges in terms of actions and leadership style
Davies, Jonathan January 2003 (has links)
No description available.
|
2 |
Determining the Effectiveness of the Usability Problem Inspector: A Theory-Based Model and Tool for Finding Usability Problems
Andre, Terence Scott 17 April 2000 (has links)
The need for cost-effective usability evaluation has led to the development of methodologies to support the usability practitioner in finding usability problems during formative evaluation. Even though various methods exist for performing usability evaluation, practitioners seldom have the information needed to decide which method is appropriate for their specific purpose. In addition, most methods do not have an integrated relationship with a theoretical foundation for applying the method in a reliable and efficient manner. Practitioners often have to apply their own judgment and techniques, leading to inconsistencies in how the method is applied in the field. Usability practitioners need validated information to determine if a given usability evaluation method is effective and why it should be used instead of some other method. Such a desire motivates the need for formal, empirical comparison studies to evaluate and compare usability evaluation methods. In reality, the current data for comparing usability evaluation methods suffers from a lack of consistent measures, standards, and criteria for identifying effective methods.
The work described here addresses three important research activities. First, the User Action Framework was developed to help organize usability concepts and issues into a knowledge base that supports usability methods and tools. From the User Action Framework, a mapping was made to the Usability Problem Inspector, a tool to help practitioners conduct a highly focused inspection of an interface design. Second, the reliability of the User Action Framework was evaluated to determine if usability practitioners could use the framework in a consistent manner when classifying a set of usability problems. Third, a comprehensive comparison study was conducted to determine if the Usability Problem Inspector, based on the User Action Framework, could produce results as effective as those of two other inspection methods (i.e., the heuristic evaluation and the cognitive walkthrough). The comparison study used a new comparison approach with standards, measures, and criteria for assessing the effectiveness of methods. Results from the User Action Framework reliability study showed higher agreement scores at all classification levels than were found in previous work with a similar classification tool. In addition, agreement using the User Action Framework was stronger than the results obtained from the same experts using the heuristic evaluation. From the inspection method comparison study, results showed the Usability Problem Inspector to be more effective than the heuristic evaluation and consistent with the effectiveness scores from the cognitive walkthrough. / Ph. D.
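For illustration, thoroughness, validity, and effectiveness scores of the kind used in such UEM comparison studies are often defined as simple set ratios over the problems a method reports and the problems confirmed as real. The following minimal sketch shows one common formulation; the problem sets and numbers are hypothetical and are not taken from this study.

    # Hedged sketch: one common way of scoring a usability evaluation method
    # against a reference set of "real" problems (e.g., confirmed by user testing).
    # The problem identifiers below are made up for illustration.
    def uem_scores(found, real):
        found, real = set(found), set(real)
        hits = found & real
        thoroughness = len(hits) / len(real) if real else 0.0   # share of real problems found
        validity = len(hits) / len(found) if found else 0.0     # share of findings that are real
        effectiveness = thoroughness * validity                 # combined figure of merit
        return thoroughness, validity, effectiveness

    real_problems = {"P1", "P2", "P3", "P4", "P5"}
    reported_by_inspection = {"P1", "P2", "P4", "P9"}            # P9 is a false alarm
    print(uem_scores(reported_by_inspection, real_problems))     # (0.6, 0.75, 0.45)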
|
3 |
Investigating the Effectiveness of Applying the Critical Incident Technique to Remote Usability Evaluation
Thompson, Jennifer Anne 06 January 2000 (has links)
Remote usability evaluation is a usability evaluation method (UEM) in which the experimenter, performing observation and analysis, is separated in space and/or time from the user. There are several approaches by which to implement remote evaluation, limited only by the availability of supporting technology. One such implementation is RECITE (the REmote Critical Incident TEchnique), an adaptation of the user-reported critical incident technique developed by Castillo (1997). This technique requires that trained users, working in their normal work environment, identify and report critical incidents. Critical incidents are interactions with a system feature that prove to be particularly easy or difficult, leading to extremely good or extremely poor performance. Critical incident reports are submitted through an on-line reporting tool to the experimenter, who is responsible for compiling them into a list of usability problems. Support for this approach to remote evaluation has been reported (Hartson, H.R., Castillo, J.C., Kelso, J., and Neale, W.C., 1996; Castillo, 1997).
The purpose of this study was to quantitatively assess the effectiveness of RECITE with respect to traditional, laboratory-based applications of the critical incident technique. A 3x2x5 mixed-factor experimental design was used to compare the frequency and severity ratings of critical incidents reported by remote versus laboratory-based users. Frequency was measured as the number of critical incident reports submitted, and severity was rated along four dimensions: task frequency, impact on task performance, impact on satisfaction, and error severity. This study also compared critical incident data reported by trained users with data reported by usability experts observing end-users. Finally, changes in critical incident data reported over time were evaluated.
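As a purely illustrative aid, a critical incident report carrying the four severity dimensions named above could be represented as a simple record like the sketch below; the field names and rating scales are assumptions for illustration and are not the actual RECITE reporting form.

    # Hypothetical record for a critical incident report with the four severity
    # dimensions named in the abstract; field names and 1-5 scales are assumed.
    from dataclasses import dataclass

    @dataclass
    class CriticalIncidentReport:
        reporter: str                 # "user" or "expert"
        interface: str                # e.g. "web" or "voice"
        description: str              # what happened, in the reporter's words
        positive: bool                # True for a critical success, False for a problem
        task_frequency: int           # 1 (rare task) .. 5 (very frequent task)
        impact_on_performance: int    # 1 (negligible) .. 5 (task not completable)
        impact_on_satisfaction: int   # 1 (negligible) .. 5 (severe frustration)
        error_severity: int           # 1 (cosmetic) .. 5 (critical error)

    example = CriticalIncidentReport(
        reporter="user", interface="web", positive=False,
        description="Could not find the submit button after completing the form.",
        task_frequency=4, impact_on_performance=3,
        impact_on_satisfaction=4, error_severity=2,
    )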
In total, 365 critical incident reports were submitted, containing 117 unique usability problems and 50 usability success descriptions. Critical incidents were classified using the Usability Problem Inspector (UPI). A higher number of web-based critical incidents occurred during Planning than expected. The distribution of voice-based critical incidents differed among participant groups: users reported a greater than expected number of Planning incidents, while experts reported fewer than expected Assessment incidents. Performance across the usability experts was not correlated, requiring that separate analyses be conducted for each expert's data set.
Support for the effectiveness of applying the critical incident technique to remote usability evaluation was demonstrated, with all research hypotheses at least partially supported. Usability experts gave significantly different ratings of impact on task performance than did user reporters. Comparing remote with laboratory-based users revealed a difference in only one measure: laboratory-based users reported more positive critical incidents for the voice interface than did remote users. In general, the number of negative critical incidents decreased over time; a similar trend did not apply to the number of positive critical incidents.
It was concluded that RECITE is an effective means of capturing problem-oriented data over time. Recommendations are made for its use as a formative evaluation method applied during the latter stages of product development (i.e., when a high-fidelity prototype is available). Opportunities for future research are identified. / Master of Science
|
4 |
Integrating Usability Evaluation in an Agile Development Process
Neveryd, Malin January 2014 (has links)
Medius is a software company that provides IT solutions for streamlining and automating business processes. The purpose of this thesis was to investigate the possibility of integrating usability evaluation into the development process of Medius's software MediusFlow: how such integration could be done and which usability evaluation methods could be used. To provide a well-founded suggestion, a prestudy was first conducted in order to gain a good overview of Medius as a company as well as of the MediusFlow development process. With the prestudy as a basis, the main study was conducted, in which four usability evaluation methods were chosen: Cognitive Walkthrough, Coaching Method, Consistency Inspection and Question-Asking Protocol. These methods were carried out, evaluated and analyzed. Based on the studies and the literature, a suggestion regarding the integration of usability evaluations was made. The result of this case study is presented as a process chart in which the phases of Medius's software development process are matched with suitable usability evaluation methods. The relevant phases and their suggested methods are:
Preparation phase - Cognitive Walkthrough and Coaching Method in combination with Thinking-Aloud and Interviews
Hardening phase - Coaching Method in combination with Thinking-Aloud and Interviews, as well as Consistency Inspection
Maintenance - Field observation
This result is part of the overall work towards a more user-centered design of the software.
|
5 |
Validating the User-Centered Hybrid Assessment Tool (User-CHAT): a comparative usability evaluation
Elgin, Peter D. January 1900 (has links)
Doctor of Philosophy / Department of Psychology / John J. Uhlarik / Usability practitioners need effective usability assessment techniques in order to facilitate the development of usable consumer products. Many usability evaluation methods have been promoted as the ideal. Few, however, fulfill expectations concerning effectiveness. Additionally, the lack of empirical data forces usability practitioners to rely on personal judgments and/or anecdotal statements when deciding which usability method best suits their needs. The present study therefore had two principal objectives: (1) to validate a hybrid usability technique that identifies important usability problems and ignores inconsequential ones, and (2) to provide empirical performance data for several usability protocols on a variety of contemporary comparative metrics. The User-Centered Hybrid Assessment Tool (User-CHAT) was developed to maximize efficient diagnosis of usability issues from a behaviorally based perspective while minimizing the time and resource limitations typically associated with usability assessment environments. Several characteristics of user-testing, the heuristic evaluation, and the cognitive walkthrough were combined to create the User-CHAT. Prior research has demonstrated that the User-CHAT supports an evaluation within 3-4 hours, can be used by individuals with limited human factors/usability background, and requires little training to be used competently, even for complex systems. A state-of-the-art suite of avionics displays and a series of benchmark tasks provided the context in which the User-CHAT's performance was measured relative to its parent usability methods. Two techniques generated comparison lists of usability problems: user-testing data, and various inclusion criteria for usability problems identified by the User-CHAT, heuristic evaluation, and cognitive walkthrough. Overall, the results demonstrated that the User-CHAT attained higher effectiveness scores than the heuristic evaluation and cognitive walkthrough, suggesting that it helped evaluators identify many usability problems that actually impact users (higher thoroughness) while attenuating time and effort spent on issues that were not important (higher validity). Furthermore, the User-CHAT had the greatest proportion of usability problems rated as serious, i.e., usability issues that hinder performance and compromise safety. The User-CHAT's performance suggests that it is an appropriate usability technique to implement in the product development lifecycle. Limitations and future research directions are discussed.
|
6 |
Usability Problem Description and the Evaluator Effect in Usability Testing
Capra, Miranda Galadriel 05 April 2006 (has links)
Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in problems found and problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully measure evaluation effectiveness. The goals of this research were to develop UPD guidelines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content.
Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are (briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the reader sympathize with the user. A fourth study compared usability reports collected from 44 evaluators, both practitioners and graduate students, watching the same 10-minute UT session recording. Three judges measured problem detection for each evaluator and graded the reports for following 6 of the UPD guidelines.
There was support for the existence of an evaluator effect, even when watching pre-recorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%), and reliability of severity judgments (57% for severe ratings). Practitioners received higher grades averaged across the 6 guidelines than students did, suggesting that the guidelines may be useful for grading reports. The grades for the guidelines were not correlated with thoroughness, suggesting that the guideline grades complement measures of problem detection.
A simulation of evaluators working in groups found a 34% increase in severe problems detected when a second evaluator was added. The simulation also found that the thoroughness of individual evaluators would have been overestimated if the study had included a small number of evaluators. The final recommendations are to use multiple evaluators in UT, and to assess both problem detection and description when measuring evaluation effectiveness. / Ph. D.
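To make the group-simulation idea concrete, the sketch below shows one way such a resampling analysis could be set up, assuming a binary evaluator-by-problem detection matrix; the matrix is fabricated for illustration, and this is not the study's actual simulation code or data.

    # Hedged sketch of a "groups of evaluators" resampling simulation: estimate the
    # average number of distinct problems detected by a random group of k evaluators.
    # The detection matrix is fabricated; it is not the study's data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_evaluators, n_problems = 44, 20
    detections = rng.random((n_evaluators, n_problems)) < 0.3   # rows: evaluators, cols: problems

    def mean_group_detection(detections, k, n_samples=2000):
        totals = []
        for _ in range(n_samples):
            group = rng.choice(len(detections), size=k, replace=False)
            found_by_group = detections[group].any(axis=0)       # union of the group's findings
            totals.append(found_by_group.sum())
        return float(np.mean(totals))

    one = mean_group_detection(detections, k=1)
    two = mean_group_detection(detections, k=2)
    print(f"1 evaluator: {one:.1f} problems on average; 2 evaluators: {two:.1f} "
          f"(+{100 * (two - one) / one:.0f}%)")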
|
7 |
Avaliação de usabilidade de técnicas de visualização de informações multidimensionais / Usability evaluation of multidimensional visualization techniques
Valiati, Eliane Regina de Almeida January 2008 (has links)
Multidimensional information visualization techniques have the potential to support the visual analysis and exploration of large datasets by providing visual representations and interaction techniques that allow users to interact with the data through their graphical representation. In this context, several techniques have been developed, most of them reported without a broad and deep evaluation of either their efficiency or their utility in supporting users' tasks. Only relatively recently have works been published addressing the many issues related to the evaluation of visualization systems and applications as a means of promoting their efficient and effective use. In spite of these works, the usability evaluation of visualization systems' graphical interfaces remains a research challenge because of the significant differences between these interfaces and those of other systems. There is therefore a need for a systematic approach to such evaluations, including the definition of which usability evaluation methods and techniques are best suited to this kind of interface. This thesis reports our investigation of viable solutions for the development of a systematic approach to the usability evaluation of multidimensional information visualizations. We conducted several case studies and experiments with users and achieved the following contributions: 1) a taxonomy of visualization tasks related to the use of interactive visualization techniques for the exploration and analysis of multidimensional datasets, and 2) the adaptation of usability evaluation techniques with the goal of making them more effective in the context of multidimensional information visualizations.
|
8 |
Performance Evaluation of Two Different Usability Evaluation Methods in the Context of Collaborative Writing Systems
Bakhtyar, Shoaib, Afridi, Qaisar Zaman January 2010 (has links)
In today's world of rapid technological development, one cannot deny the importance of collaborative writing systems. Among the many advantages of a collaborative writing system, the major one is that it allows its end users to work in collaboration with each other without having to meet physically. In the past, various studies have been carried out on the usability evaluation of collaborative writing systems using the think-aloud protocol method; however, no study has compared different usability evaluation methods in the context of collaborative writing systems. In this thesis the authors sought to find the limitations and capabilities of the think-aloud protocol and co-discovery learning methods in the context of a collaborative writing system called ZOHO, as well as to conduct the usability evaluation of ZOHO using the think-aloud protocol and co-discovery learning methods. The authors found various usability errors in ZOHO. The authors also observed the two usability evaluation methods as they were used for the usability evaluation of ZOHO and found that both methods have their own benefits and drawbacks. While the co-discovery learning method was fast, it was expensive in terms of human resources. The think-aloud protocol method, on the other hand, was slow to perform but used fewer human resources. Both usability methods found almost the same usability errors. / In this thesis the primary objective was to determine the limitations and capabilities of the think-aloud protocol and co-discovery learning methods in the context of ZOHO, a collaborative writing system. The secondary objective was to conduct the usability evaluation of ZOHO and to find out what makes ZOHO ineffective, inefficient and unsatisfactory. The authors carried out usability tests on ZOHO using the think-aloud protocol and co-discovery learning methods. After analysis of the test results, the effectiveness, efficiency and satisfaction level of ZOHO are reported in sections 7.2.1, 7.2.2 and 7.2.3, while the usability problems that make ZOHO ineffective, inefficient and unsatisfactory are discussed in section 7.2.4 of this thesis. Apart from the usability of ZOHO, the authors were also able to identify strong and weak points of the think-aloud protocol and co-discovery learning methods when used for the usability evaluation of a collaborative writing system. They found that think-aloud protocol testing is better if the evaluator is cost-conscious or is looking for detailed usability problems and does not care about the time taken by the test. However, if the evaluator cares about test time and cares less about the cost in terms of participants required for the test, then co-discovery is the better method for testing a collaborative writing system.
|