811 |
Investigating user experience and user engagement for design / Hart, Jennefer / January 2015
Understanding the interactive experience of using digital technologies is a complex process. Traditional methods of evaluating interactive technologies originate from usability, which focuses on ease of use, ease of learning and performance. User Experience (UX) emerged from the recognition that usability alone does not account for the more subjective emotional responses experienced when interacting with a product. Although the term UX has become widely accepted within the area of Human-Computer Interaction (HCI), its definition still remains unclear, making it difficult to evaluate and design for. This thesis adopts a hybrid perspective by bridging the division between the reductionist and holistic approaches to UX research. A multi-methods approach that combines the strengths of both quantitative (objective) and qualitative (subjective) methods provides deeper insights into users' judgement processes for interactive products. Various theories have been proposed to understand UX, yet no consensual UX theory or model has emerged. The importance of aesthetics in influencing decisions about a product's quality gained much attention in early UX research, with conflicting results sparking a surge of research into understanding the complexities of user quality judgement. Past UX research has focused on the multi-constructs of pragmatics, hedonics and aesthetics, and how these may influence user judgement, which can vary depending on the context, task and user background. However, little attention has been given to the impact of interactive design features upon UX. Findings from this thesis clearly show that interactivity is an important element within UX in both short- and long-term usage. This thesis expands the existing process model of user quality judgement through a series of three studies to reveal the importance of interactivity, and how initial perception and judgement of a product's quality can change over time. The first two studies identify the importance of interactivity in positively influencing UX. Both studies revealed that affective and hedonic ratings increased as a result of interaction, demonstrating the powerful effect of interaction, and showed clear differences for websites that contained enhanced interactive features, despite the presence of usability problems. Further exploration using cluster analysis revealed three sub-groups that categorised users not only by their interactive style preferences, but also by their predispositions towards technology. This perspective of user sub-group analysis is a contribution to the field which bridges population-level quantitative analysis with qualitative findings that focus on individual ethnographic interpretations of experience. Considerable UX research has focused on short-term evaluations based on users' first impressions pre- and post-interaction, with few studies capturing long-term usage. The third study reports on an ecological longitudinal investigation into how UX changes over time and long-term product use. A group of novice iPad users was tracked over six months, revealing that despite poor usability, hedonic ratings remained high, yet over time usefulness and utility became the dominating factors affecting UX and product adoption. The influence of both device and app revealed that although users found the device more pleasurable, it was the variety of apps contained on the device that facilitated positive UX.
The overall findings from this research provided valuable methodological insights and aided the creation of a set of practical UX heuristics that can inform both future research and design practice.
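As a hedged illustration of the sub-group analysis this abstract describes, a minimal sketch follows, assuming per-participant pragmatic/hedonic/aesthetic ratings and scikit-learn's k-means; the rating columns, data values and clustering method are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch: clustering participants by questionnaire ratings
# to surface sub-groups, in the spirit of the analysis described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one participant: [pragmatic, hedonic, aesthetic] mean ratings
# (placeholder values, not thesis data).
ratings = np.array([
    [5.1, 6.2, 5.8],
    [2.3, 5.9, 6.1],
    [4.8, 2.1, 3.0],
    [5.5, 6.0, 5.5],
    [2.0, 6.3, 5.9],
    [4.5, 2.5, 2.8],
])

# Standardise so no single construct dominates the distance metric.
features = StandardScaler().fit_transform(ratings)

# Three clusters, matching the three sub-groups reported in the studies.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(model.labels_)  # cluster assignment per participant
```

In practice, cluster assignments like these would then be cross-referenced with qualitative data to interpret each sub-group, as the abstract's bridging of quantitative and ethnographic analysis suggests.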
|
812 |
Designing and implementing a virtual reality interaction framework / Rorke, Michael / January 2000
Virtual Reality offers the possibility for humans to interact in a more natural way with the computer and its applications. Currently, Virtual Reality is used mainly in the field of visualisation, where 3D graphics allow users to more easily view complex sets of data or structures. The field of interaction in Virtual Reality has been largely neglected, due mainly to problems with input devices and equipment costs. Recent research has aimed to overcome these interaction problems, thereby creating a usable interaction platform for Virtual Reality. This thesis presents a background to the field of interaction in Virtual Reality. It goes on to propose a generic framework for the implementation of common interaction techniques into a homogeneous application development environment. This framework adds a new layer to the standard Virtual Reality toolkit: the interaction abstraction layer, or interactor layer. This separation is in line with current HCI practices. The interactor layer is further divided into specific sections: input component, interaction component, system component, intermediaries, entities and widgets. Each of these performs a specific function, with clearly defined interfaces between the different components to promote easy object-oriented implementation of the framework. The validity of the framework is shown in comparison with accepted taxonomies in the area of Virtual Reality interaction, demonstrating that the framework covers all the relevant factors involved in the field. Furthermore, the thesis describes an implementation of this framework. The implementation was completed using the Rhodes University CoRgi Virtual Reality toolkit. Several postgraduate students in the Rhodes University Computer Science Department utilised the framework implementation to develop a set of case studies. These case studies demonstrate the practical use of the framework to create useful Virtual Reality applications, as well as the generic nature of the framework and its extensibility to handle new interaction techniques. Finally, the generic nature of the framework is further demonstrated by moving it from the standard CoRgi Virtual Reality toolkit to a distributed version of this toolkit. The distributed implementation of the framework utilises the Common Object Request Broker Architecture (CORBA) to implement the distribution of the objects in the system. Using this distributed implementation, we are able to ascertain that CORBA is useful in the field of distributed real-time Virtual Reality, even taking into account the extra overhead introduced by the additional abstraction layer. We conclude from this thesis that it is important to abstract the interaction layer from the other layers of a Virtual Reality toolkit in order to provide a consistent interface to developers. We have shown that our framework is implementable and useful in the field, making it easier for developers to include interaction in their Virtual Reality applications. Our framework is able to handle all the current aspects of interaction in Virtual Reality, while being general enough to implement future interaction techniques. The framework is also applicable to different Virtual Reality toolkits and development platforms, making it ideal for developing general, cross-platform interactive Virtual Reality applications.
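As a rough sketch of the component separation the interactor layer enforces, the outline below models the input component, interaction component, intermediary and entity roles; all class and method names are invented for illustration, and the actual CoRgi toolkit (a C++ system) differs in detail.

```python
# Illustrative sketch only: the interactor-layer split described above,
# with invented names. Not the CoRgi implementation.
from abc import ABC, abstractmethod

class InputComponent(ABC):
    """Wraps a physical device and emits normalised input events."""
    @abstractmethod
    def poll(self) -> dict: ...

class InteractionComponent(ABC):
    """Encodes one interaction technique, independent of any device."""
    @abstractmethod
    def update(self, event: dict, entities: list) -> None: ...

class Entity:
    """A selectable or manipulable object in the virtual world."""
    def __init__(self, name: str):
        self.name = name
        self.selected = False

class Intermediary:
    """Routes events from input components to interaction components,
    keeping device handling and technique logic decoupled."""
    def __init__(self, source: InputComponent,
                 technique: InteractionComponent):
        self.source = source
        self.technique = technique

    def tick(self, entities: list) -> None:
        self.technique.update(self.source.poll(), entities)
```

The point of the split is that a new device only requires a new `InputComponent`, and a new technique only a new `InteractionComponent`, without touching the rest of the application.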
|
813 |
Virtual identities : authoring interactive stories in virtual environments / Greeff, Marde / 15 February 2006
Please read the abstract in the section 00front of this document / Dissertation (MSc (Computer Science))--University of Pretoria, 2006. / Computer Science / unrestricted
|
814 |
A device-free locator using computer vision techniques / Van den Bergh, Frans / 20 November 2006
Device-free locators allow the user to interact with a system without the burden of being physically in contact with some input device or being connected to the system with cables. This thesis presents a device-free locator that uses computer vision techniques to recognize and track the user's hand. The system described herein uses a video camera to capture live video images of the user, which are segmented and processed to extract features that can be used to locate the user's hand within the image. Two types of features, namely moment-based invariants and Fourier descriptors, are compared experimentally. An important property of both these techniques is that they allow the recognition of hand-shapes regardless of affine transformations, e.g. rotation within the plane or scale changes. A neural network is used to classify the extracted features as belonging to one of several hand signals, which can be used in the locator system as 'button clicks' or mode indicators. The Siltrack system described herein illustrates that the above techniques can be implemented in real time on standard hardware. / Dissertation (MSc (Computer Science))--University of Pretoria, 2007. / Computer Science / unrestricted
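A minimal sketch of the moment-based route follows, assuming OpenCV for Hu moment extraction and scikit-learn's MLPClassifier as a stand-in for the thesis's neural network; the log scaling and classifier shape are assumptions, not Siltrack's actual parameters.

```python
# Sketch of moment-based hand-shape features feeding a classifier.
# Requires OpenCV and scikit-learn; parameters are illustrative.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def hand_features(binary_mask: np.ndarray) -> np.ndarray:
    """Extract the 7 Hu moment invariants from a segmented hand
    silhouette. Hu moments are invariant to translation, scale and
    in-plane rotation, matching the property discussed above."""
    moments = cv2.moments(binary_mask, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale compression is customary, since raw Hu values span
    # many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Given labelled silhouette masks (assumed available), a small network
# maps features to hand-signal classes ('click', 'mode', ...):
#   X = np.stack([hand_features(m) for m in masks])
#   clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
```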
|
815 |
Using catadioptrics for multidimensional interaction in computer graphics / Lane, James Robert Timothy / 23 November 2005
This thesis introduces the use of catadioptrics for multidimensional interaction in the approach called Reflections. In computer graphics there is a need for multidimensional interaction that is not restricted by cabling connected to the input device. The use of a camera and computer vision offers a solution to the cabling problem. Unfortunately, it introduces an equally challenging problem: a single camera alone cannot accurately calculate depth and is therefore not suitable for multidimensional interaction. This thesis presents a solution to this problem, called Reflections. Reflections makes use of only a single camera and one or more mirrors to accurately calculate 3D, 5D and 6D information in real time. Two applications in which this approach is used for natural, non-intrusive and multidimensional interaction are the Virtual Drums Project and Ndebele painting in virtual reality. The interaction in these applications, and in particular the Virtual Drums, is appropriate and intuitive, e.g. the user plays the drums with a real drumstick. Several computer vision algorithms used in the implementation of the Virtual Drums Project are described in this thesis. / Dissertation (MSc (Computer Science))--University of Pretoria, 2005. / Computer Science / unrestricted
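The underlying geometry can be sketched as follows: a planar mirror acts as a second, virtual camera whose centre is the reflection of the real camera centre across the mirror plane, so depth falls out of ordinary two-ray triangulation. The plane and ray values below are illustrative placeholders, not the thesis's calibration.

```python
# Minimal sketch of the mirror-as-virtual-camera idea described above.
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n unit length)."""
    return p - 2.0 * (np.dot(n, p) + d) * n

def reflect_dir(v, n):
    """Reflect direction v across a plane with unit normal n."""
    return v - 2.0 * np.dot(n, v) * n

def triangulate(c1, v1, c2, v2):
    """Closest point between rays c1+t*v1 and c2+s*v2 (midpoint method)."""
    w0 = c1 - c2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w0, v2 @ w0
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((c1 + t * v1) + (c2 + s * v2))

cam = np.array([0.0, 0.0, 0.0])            # real camera centre
n, dist = np.array([0.0, 0.0, 1.0]), -2.0  # mirror plane z = 2
direct_ray = np.array([0.5, 0.0, 1.0])     # ray to the hand, direct view
mirror_ray = np.array([0.5, 0.0, 3.0])     # ray to the hand's mirror image

virtual_cam = reflect_point(cam, n, dist)  # virtual camera at (0, 0, 4)
virtual_ray = reflect_dir(mirror_ray, n)
print(triangulate(cam, direct_ray, virtual_cam, virtual_ray))  # ~[0.5 0. 1.]
```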
|
816 |
Participant experience studies of interactive artworks : an investigation of laboratory-based methods used to study Echology / Deutscher, Meghan Catherine / 05 1900
We investigate the use of laboratory-based methodology for studying participant experience of interactive artworks. The investigation is motivated by two goals: to inform the HCI practitioner of the role of participant experience studies in artwork from the perspective of the artist, and to inform the artist of how laboratory-based methodology can contribute to the refinement of their techniques and aesthetics. In this thesis, three main purposes for participant experience studies in the artist's process are derived from the roles of artist, art object and participants in an interactive artwork. Common characteristics of participant experience studies are reviewed, with three cases unique in their use of more formal methodologies examined in detail. This thesis builds on the foundation set forth by these three cases in an investigation of orientation media: media such as text, images or video designed by the artist to convey supplemental information to participants and thus selectively influence their understanding of different elements in an interactive artwork. Orientation media in the form of instruction cards are used in a study of the interactive sound and video installation piece, Echology. The orientation media are successful in revealing elements of the artwork that, with or without explicit instructions, still cause confusion among participants. A general review of the study methodology is also provided. This includes observations of changes in participant behaviour due to their roles as subjects in a study, and the implications these changes have for using formal methodologies to study participant experience. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
|
817 |
The selection and evaluation of a sensory technology for interaction in a warehouse environment / Zadeh, Seyed Amirsaleh Saleh; Greyling, Jean / January 2016
In recent years, Human-Computer Interaction (HCI) has become a significant part of modern life, as it has improved human performance in the completion of daily tasks using computerised systems. The increase in the variety of bio-sensing and wearable technologies on the market has propelled designers towards designing more efficient, effective and fully natural User Interfaces (UI), such as the Brain-Computer Interface (BCI) and the Muscle-Computer Interface (MCI). BCI and MCI have been used for various purposes, such as controlling wheelchairs, piloting drones, providing alphanumeric inputs into a system and improving sports performance. Various challenges are experienced by workers in a warehouse environment. Because they often have to carry objects (referred to as hands-full), it is difficult for them to interact with traditional devices. Noise undeniably exists in some industrial environments and is known as a major factor that causes communication problems. This has reduced the popularity of using verbal interfaces with computer applications, such as Warehouse Management Systems. Another factor that affects the performance of workers is action slips caused by a lack of concentration during, for example, routine picking activities. These can have a negative impact on job performance and lead a worker to execute a task incorrectly in a warehouse environment. This research project investigated the current challenges workers experience in a warehouse environment and the technologies utilised in this environment. The latest automation and identification systems and technologies are identified and discussed, specifically those which have addressed known problems. Sensory technologies were identified that enable interaction between a human and a computerised warehouse environment. Biological and natural behaviours of humans which are applicable in the interaction with a computerised environment were described and discussed. The interactive behaviours included vision, hearing, speech production and physiological movement, while other natural human behaviours such as paying attention, action slips and the action of counting items were also investigated. A number of modern sensory technologies, devices and techniques for HCI were identified with the aim of selecting and evaluating an appropriate sensory technology for MCI. MCI technologies enable a computer system to recognise hand and other gestures of a user, creating a means of direct interaction between a user and a computer, as they are able to detect specific features extracted from a specific biological or physiological activity. Thereafter, Machine Learning (ML) is applied in order to train a computer system to detect these features and convert them to a computer interface. An application of biomedical signals (bio-signals) in HCI using a MYO Armband for MCI is presented. An MCI prototype (MCIp) was developed and implemented to allow a user to provide input to an HCI in both hands-free and hands-full situations. The MCIp was designed and developed to recognise the hand-finger gestures of a person when both hands are free or when holding an object, such as a cardboard box. The MCIp applies an Artificial Neural Network (ANN) to classify features extracted from the surface electromyography signals acquired by the MYO Armband around the forearm muscles. Employing the ANN, the MCIp achieved a gesture classification accuracy of 34.87% in the hands-free situation.
The MCIp furthermore enabled users to provide numeric inputs to the system hands-full, with an accuracy of 59.7% after a training session of only 10 seconds per gesture. The results were obtained using eight participants. Similar experimentation with the MYO Armband had not been found in the literature at the time of submission of this document. Based on this novel experimentation, the main contribution of this research study is the suggestion that the MYO Armband, as a commercially available muscle-sensing device, has potential as an MCI for recognising finger gestures both hands-free and hands-full. An accurate MCI can increase the efficiency and effectiveness of an HCI tool when applied to different applications in a warehouse, where noise and hands-full activities pose a challenge. Future work to improve its accuracy is proposed.
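A hedged sketch of the classification step: mean-absolute-value features per sEMG channel feeding a small neural network, with scikit-learn's MLPClassifier standing in for the thesis's ANN. The window size, feature choice and stand-in data are assumptions for illustration only, not the MCIp's actual pipeline.

```python
# Hedged sketch of sEMG gesture classification as described above.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_CHANNELS = 8  # the MYO Armband exposes 8 sEMG channels

def mav_features(window: np.ndarray) -> np.ndarray:
    """Mean absolute value per channel, a common sEMG feature.
    window has shape (samples, N_CHANNELS)."""
    return np.abs(window).mean(axis=0)

# Assemble a training set from short labelled recordings, one label per
# hand-finger gesture. Random stand-in data replaces real recordings here.
rng = np.random.default_rng(0)
windows = rng.normal(size=(120, 50, N_CHANNELS))  # stand-in recordings
labels = rng.integers(0, 4, size=120)             # 4 placeholder gestures

X = np.stack([mav_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X, labels)

# At run time, each incoming window is classified to produce one input.
print(clf.predict(X[:5]))
```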
|
818 |
The development and evaluation of gaze selection techniques / Van Tonder, Martin Stephen / January 2009
Eye gaze interaction enables users to interact with computers using their eyes. A wide variety of eye gaze interaction techniques have been developed to support this type of interaction. Gaze selection techniques, a class of eye gaze interaction techniques which support target selection, are the subject of this research. Researchers developing these techniques face a number of challenges. The most significant is the limited accuracy of eye tracking equipment (due to the properties of the human eye). The design of gaze selection techniques is dominated by this constraint. Despite decades of research, existing techniques are still significantly less accurate than the mouse. A recently developed technique, EyePoint, represents the state of the art in gaze selection techniques. EyePoint combines gaze input with keyboard input. Evaluation results for this technique are encouraging, but accuracy is still a concern. Early trigger errors, resulting from users triggering a selection before looking at the intended target, were found to be the most commonly occurring errors for this technique. The primary goal of this research was to improve the usability of gaze selection techniques. To achieve this goal, novel gaze selection techniques were developed by combining elements of existing techniques in novel ways. Seven novel gaze selection techniques were developed, and three of these were selected for evaluation. A software framework was developed for implementing and evaluating gaze selection techniques, and was used to implement the techniques developed during this research. Implementing and evaluating all of the techniques using a common framework ensured consistency when comparing them. The three novel techniques evaluated were named TargetPoint, StaggerPoint and ScanPoint, and were evaluated against EyePoint and the mouse using the framework. TargetPoint combines motor space expansion with a visual feedback highlight, whereas the StaggerPoint and ScanPoint designs explore novel approaches to target selection disambiguation. A usability evaluation of the three novel techniques alongside EyePoint and the mouse revealed some interesting trends. TargetPoint was found to be more usable and accurate than EyePoint, and proved more popular with test participants. One aspect of TargetPoint which proved particularly popular was the visual feedback highlight, a feature found to be a more effective method of combating early trigger errors than existing approaches. StaggerPoint was more efficient than EyePoint, but was less effective and satisfying. ScanPoint was the least popular technique. The benefits of providing a visual feedback highlight, and test participants' positive views thereof, contradict views expressed in existing research regarding the usability of visual feedback. These results have implications for the design of future gaze selection techniques. A set of design principles was developed for designing new gaze selection techniques; designers of gaze selection techniques can benefit from these principles by applying them to their techniques.
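As a hedged sketch of the selection logic a TargetPoint-style technique relies on (names, expansion factor and coordinates below are invented, not the thesis's implementation): expand each target's effective motor-space radius, snap the noisy gaze sample to the nearest qualifying target, highlight it as visual feedback, and commit only on an explicit trigger, which is what gives the user a chance to catch an early trigger.

```python
# Illustrative sketch of motor-space expansion with a feedback highlight.
from dataclasses import dataclass
import math

@dataclass
class Target:
    x: float
    y: float
    radius: float  # visual radius in pixels

EXPANSION = 2.0  # motor-space expansion factor (assumed value)

def snap_to_target(gaze_x, gaze_y, targets):
    """Return the target whose expanded radius contains the gaze point
    and whose centre is nearest, or None. The returned target would be
    drawn with a feedback highlight before any selection is committed."""
    best, best_d = None, float("inf")
    for t in targets:
        d = math.hypot(gaze_x - t.x, gaze_y - t.y)
        if d <= t.radius * EXPANSION and d < best_d:
            best, best_d = t, d
    return best

def on_trigger(gaze_x, gaze_y, targets):
    """Commit the selection only when the user presses the trigger key,
    so the highlight can reveal an impending early-trigger error."""
    return snap_to_target(gaze_x, gaze_y, targets)

targets = [Target(100, 100, 20), Target(300, 120, 20)]
print(on_trigger(108, 95, targets))  # snaps to the first target
```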
|
819 |
Explanations in hybrid expert systems / Scott, Lawrence Gill / January 1990
This thesis addresses the problem of providing explanations for expert systems implemented in a shell that supports a hybrid knowledge representation architecture. Hybrid representations combine rules and frames and are the predominant architecture in intermediate and high-end commercial expert system shells. The main point of the thesis is that frames can be endowed with explanation capabilities on a par with rules. The point is illustrated by a partial specification for an expert system shell and sample explanations which could be generated by an expert system coded to that specification.
As background information, the thesis introduces expert systems and the standard knowledge representation schemes that support them: rule-only schemes, and hybrid schemes that combine rules with frames. Explanations for expert systems are introduced in the context of rules, since rules are the only representation for which explanations are supported, either in commercial tools or in the preponderance of research.
The problem addressed by the thesis, how to produce explanations for hybrid architectures, is analyzed in two dimensions. Research was surveyed in three areas for guiding principles toward solving the problem: frame logic, metalevel architectures, and reflective architectures. With the few principles that were discovered in hand, the problem is then analyzed into a small number of subproblems, mainly concerning high-level architectural decisions.
The solution proposed to the problem is described in two ways. First, a partial specification for expert system shell functionality is offered, which describes object structures and then behaviors at three points in time: object compilation time, execution time, and explanation generation time. The second component of the description is a set of extended examples which illustrate explanation generation in a hypothetical expert system. The solution adopts principles of reflective architectures, storing metainformation for explanations in metaobjects which are distinct from the object-level objects they explain. The most novel contribution of the solution is a scheme for relating all the ways that objects' slot values may be computed to the goal tree construct introduced by the seminal Mycin expert system.
The final chapter explores potential problems with the solution and the possibility of producing better explanations for hybrid expert system shell architectures. / Science, Faculty of / Computer Science, Department of / Graduate
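A minimal sketch of the reflective idea, with all names invented for illustration (the thesis's shell specification differs in detail): each frame slot computation records a metaobject in a goal tree, distinct from the object-level frame, so a "how" question can be answered for frames as well as rules, Mycin-style.

```python
# Hedged sketch: metaobjects recording slot computations in a goal tree.
class GoalNode:
    """Metaobject: records how one slot value was computed."""
    def __init__(self, goal, method, children=None):
        self.goal, self.method = goal, method
        self.children = children or []

    def explain(self, depth=0):
        # Walk the goal tree to answer a 'how' question.
        print("  " * depth + f"{self.goal} <- {self.method}")
        for child in self.children:
            child.explain(depth + 1)

class Frame:
    """Object-level frame; slot computation goes through compute_slot so
    the explanation bookkeeping stays separate from domain knowledge."""
    def __init__(self, name):
        self.name, self.slots, self.meta = name, {}, {}

    def compute_slot(self, slot, value, method, subgoals=()):
        self.slots[slot] = value
        self.meta[slot] = GoalNode(f"{self.name}.{slot}", method,
                                   [f.meta[s] for f, s in subgoals])
        return value

# A rule's conclusion and a sensor-derived slot share one goal tree.
engine = Frame("engine")
engine.compute_slot("temperature", 240, "sensor reading")
car = Frame("car")
car.compute_slot("status", "overheating", "rule R7",
                 subgoals=[(engine, "temperature")])
car.meta["status"].explain()
# car.status <- rule R7
#   engine.temperature <- sensor reading
```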
|
820 |
The effectiveness of three dimensional interaction / Boritz, James / 05 1900
Most interaction with computers today takes place in a two dimensional environment. Even when using three dimensional graphics applications, input is often still restricted to two dimensions. Many believe that the use of three dimensional input devices will alleviate this restriction and allow for a much more natural human-machine dialog.

This thesis seeks to establish how factors dealing with visual feedback and task structure affect the ability to perform interactive tasks in a three dimensional virtual environment. The factors investigated were stereoscopic vision, motion parallax, stimulus arrangement and stimulus complexity. Four tasks were studied: point location, docking, line tracing and curve tracing. All the tasks used a six degree of freedom input device to control a pointer in a three dimensional virtual environment.

Four experiments corresponding to the four tasks were conducted to investigate these factors. Among other things, the results showed the following. Stereoscopic vision provided a strong benefit to positioning-based tasks, but this benefit was weakened in the case of tracing tasks. Motion parallax via head-tracking often had no effect upon task performance, and where an effect was found it was often detrimental. The position of stimuli influenced performance across all of the tasks. The orientation of stimuli influenced performance in the task in which it was varied. / Science, Faculty of / Computer Science, Department of / Graduate
|