161 |
Design for an interconnected world : home lighting as an immersive interactive system / Vollmer, Florian, 05 1900
No description available.
|
162 |
The use of images and descriptive words for the development of an image database for product designers / Wu, Chun Ting, January 2005
This research aims to understand the role images currently play within the design process, in order to develop a classification of image types and reference keywords for an electronic image database for professional use in product design. Images play an important role in the design process, both in defining the context for designs and in informing the creation of individual designs. They are also used to communicate with clients, to understand consumers and related environments, to express the themes of a project, and to search for inspiration or functional solutions. Designers usually have their own collections of images; however, for each project they still spend a significant amount of time searching for images, either within their own collections or among new sources. This study is based on the assumption that there is a structure that can show the relationship between an image and the information it conveys, and that this structure can be used to develop the database. A product-image database will enable designers to consult images more easily and will also facilitate the communication of visual ideas among designers, or between designers and their clients, augmenting its potential value in the professional design process. The value of an image may also be enhanced by its linguistic associations: descriptions and keywords which identify and interpret its content. Through a series of interviews and workshops, and a review of relevant issues such as design methods, linguistic theory and the psychology of perception, a prototype database system was developed. It is based on three information divisions: SPECIFICATION, CHARACTERISTIC, and EMOTION, which together model the information an image conveys. The prototype was tested and evaluated by groups of students and professional designers.
The results showed that users understood the concept and workings of the database and appreciated its value. They also indicated that the CHARACTERISTIC division was the most valuable, as it allows users to record images through their recollection of feelings.
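The three-division record structure described in the abstract can be sketched as follows. This is an illustrative reconstruction only: the field names, keyword vocabularies, and search function are assumptions, not taken from the actual prototype.

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    """One image indexed under the three information divisions."""
    path: str
    specification: dict = field(default_factory=dict)   # factual data, e.g. product type, material
    characteristic: list = field(default_factory=list)  # descriptive keywords, e.g. "organic", "minimal"
    emotion: list = field(default_factory=list)         # affective keywords, e.g. "calm", "playful"

def search(records, division, keyword):
    """Return records whose given division mentions the keyword."""
    hits = []
    for r in records:
        values = getattr(r, division)
        terms = values.values() if isinstance(values, dict) else values
        if any(keyword.lower() in str(t).lower() for t in terms):
            hits.append(r)
    return hits
```

A designer could then retrieve images by recalled feeling (the EMOTION or CHARACTERISTIC division) rather than by factual metadata alone, which is the behaviour the evaluation found most valuable.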
|
163 |
Design and Realization of the Gesture-Interaction System Based on Kinect / Xu, Jie, January 2014
In the past 20 years humans have mostly used a mouse to interact with computers. However, with the rapidly growing use of computers, a need for alternative means of interaction has emerged. With the advent of Kinect, a brand-new way of human-computer interaction has been introduced. It allows the use of gestures, the most natural body language, to communicate with computers, freeing us from traditional constraints and providing an intuitive way of executing operations. This thesis presents the design and implementation of a program that lets people interact with a computer without a mouse, supported by a Kinect device (an XNA Game framework application built with the Microsoft Kinect SDK v1.7). For dynamic gesture recognition, two approaches are considered: Hidden Markov Models (HMM) and Dynamic Time Warping (DTW); the choice of DTW is motivated by experimental analysis. A dynamic-gesture-recognition program based on DTW is developed to let the computer recognize gestures customized by users, and the experiments show that DTW performs well. For further development, the XNA Game 4.0 framework is used to integrate Kinect body tracking with DTW gesture recognition. Finally, a functional test of the interaction system is conducted. In addition to summarizing the results, the thesis discusses what can be improved in the future.
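The DTW matching the abstract describes can be sketched as below. This is a generic textbook DTW with nearest-template classification, not the thesis's implementation; the sample distance and template names are illustrative (a real Kinect version would compare skeleton joint positions per frame).

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping distance between sequences a and b.
    The sequences may differ in length; dist compares two samples
    (for Kinect skeletons, replace with a joint-position metric)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(gesture, templates):
    """Nearest-template classification: match an observed gesture
    against gesture templates recorded by the user."""
    return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))
```

Because the warping path absorbs differences in speed, the same gesture performed slower or faster still matches its template, which is what makes DTW attractive for user-customized gestures.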
|
164 |
Improving understanding of website privacy policies, 2004 August 1900
Machine-readable privacy policies have been developed to help reduce user effort in understanding how websites will use personally identifiable information (PII). The goal of these policies is to enable the user to make informed decisions about the disclosure of personal information in web-based transactions. However, these privacy policies are complex, requiring that a user agent evaluate conformance between the user’s privacy preferences and the site’s privacy policy, and indicate this conformance information to the user. The problem addressed in this thesis is that even with machine-readable policies and current user agents, it is still difficult for users to determine the cause and origin of a conflict between privacy preferences and privacy policies. The problem arises partly because current standards operate at the page level: they do not allow a fine-grained treatment of conformance down to the level of a specific field in a web form. In this thesis the Platform for Privacy Preferences (P3P) is extended to enable field-level comparisons, field-specific conformance displays, and faster access to additional field-specific conformance information. An evaluation of a prototype agent based on these extensions showed that they allow users to more easily understand how the website privacy policy relates to the user’s privacy preferences, and where conformance conflicts occur.
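The field-level conformance checking described above can be sketched as follows. The form fields, data categories, purposes, and preference rules here are invented for illustration; they stand in for P3P policy statements and user preferences, and are not the thesis's actual extension syntax.

```python
# Hypothetical site policy: per form field, the P3P-style data
# category and the purpose for which the field will be used.
POLICY = {
    "email":    {"category": "online-contact", "purpose": "marketing"},
    "zip_code": {"category": "demographic",    "purpose": "statistics"},
}

# Hypothetical user preferences: purposes the user refuses,
# keyed by data category.
PREFERENCES = {
    "online-contact": {"marketing"},
    "demographic": set(),
}

def field_conformance(policy, prefs):
    """Compare policy and preferences per form field, so the agent
    can flag the exact field where a conflict originates rather
    than reporting one page-level result."""
    report = {}
    for fld, use in policy.items():
        blocked = prefs.get(use["category"], set())
        if use["purpose"] in blocked:
            report[fld] = ("conflict", use["purpose"])
        else:
            report[fld] = ("ok", None)
    return report
```

A per-field report like this is what lets the user agent highlight the offending input box and explain the cause of the conflict, rather than only flagging the page as non-conformant.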
|
165 |
Supporting information retrieval system users by making suggestions and visualising results / Morgan, Jeffrey James, January 2000
No description available.
|
166 |
An investigation of temporal and spatial limitations of haptic devices / Wall, Steven A., January 2000
No description available.
|
167 |
Towards the development of a model of user engagement with packaged software / Finnerty, Cecilia, January 2001
No description available.
|
168 |
The nature of engagement and its role in hypermedia evaluation and design / Jacques, Richard David, January 1996
No description available.
|
169 |
Multi-modal usability evaluation / Hyde, Joanne Krysia, January 2001
Research into the usability of multi-modal systems has tended to be device-led, with a resulting lack of theory about multi-modal interaction and how it might differ from more conventional interaction. This is compounded by confusion across the disciplines of the HCI community over the precise definition of modality, over how modalities can be effectively classified, and over their usability properties. There is a consequent lack of appropriate methodologies and notations to model such interactions and assess the usability implications of these interfaces. The role of expertise and craft skill in using HCI techniques is also poorly understood.

This thesis proposes a new definition of modality, identifies issues of importance to multi-modal usability, and culminates in a new methodology to support the identification of such usability issues. It additionally explores the role of expertise and craft skill in using usability modelling techniques. By analysing the problems inherent in current definitions and approaches, as well as issues relevant to cognitive science, a clear understanding is obtained both of the requirements for a suitable definition of modality and of the salient usability issues. A novel definition of modality is proposed, based on three elements: sense, information form and temporal nature. An associated taxonomy categorises modalities in the sensory dimension as visual, acoustic or haptic; in the information-form dimension as lexical, symbolic or concrete; and in the temporal dimension as discrete, continuous or dynamic. This yields a twenty-seven-cell taxonomy, each cell representing one taxon, that is, one particular type of modality.

This is a faceted classification system: a modality is named after the intersection of its categories, the category names combining into a compound modality name. The issues surrounding modality are refined into the concepts of modality types, properties and clashes. Modalities are identified as belonging to either the system or the user and as expressive or receptive in type; various properties are described based on granularity and redundancy, and five types of clash are identified. Problems in modelling multi-modal interaction are examined by means of a motivating case study based on part of an interface for a robotic arm. The effectiveness of five modelling techniques (STN, CW, CPM-GOMS, PUM and Z) in representing multi-modal issues is assessed. From this, and using the collated definition, taxonomy and theory, a new methodology, Evaluating Multi-modal Usability (EMU), is developed and applied to the robotic-arm case study to assess its application and coverage. Both the definition and EMU are used by students in a case study to test their effectiveness and to examine the leverage such an approach may give. The results show that modalities can be successfully identified within an interactive context and that usability issues can be described. Empirical video data of the robotic arm in use confirms the issues identified by the previous analyses and reveals new issues.

A rational re-analysis of the six approaches (STN, CW, CPM-GOMS, PUM, Z and EMU) is then conducted to distinguish issues identified through craft skill, based on general HCI expertise and familiarity with the problem, from issues identified by the core of each method. This gives a realistic understanding of the validity of the claims made by each method, of how else issues might be identified, and of the consequent implications. Craft skill is found to have a wider role than anticipated, and the importance of expertise in using such approaches is emphasised. From the case study and the re-analyses, implications for EMU are examined and suggestions made for future refinement.

The main contributions of this thesis are the new definition, taxonomy and theory, which significantly advance the theoretical understanding of multi-modal usability and help resolve existing confusion in this area. The new methodology, EMU, is a useful technique for examining interfaces for multi-modal usability issues, although some refinement is required. Finally, the importance of craft skill in the identification of usability issues has been explicitly explored, with implications for future work on usability modelling and the training of practitioners in such techniques.
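The three-dimensional, twenty-seven-cell taxonomy and its compound naming convention can be encoded directly. This sketch only reflects the facet values named in the abstract; the hyphenated naming format and the validation function are illustrative assumptions.

```python
from itertools import product

# The three facets of the proposed modality definition.
SENSES   = ("visual", "acoustic", "haptic")     # sensory dimension
FORMS    = ("lexical", "symbolic", "concrete")  # information-form dimension
TEMPORAL = ("discrete", "continuous", "dynamic")  # temporal dimension

# The full faceted classification: 3 x 3 x 3 = 27 taxa.
TAXONOMY = {"-".join(t) for t in product(SENSES, FORMS, TEMPORAL)}

def classify_modality(sense, form, temporal):
    """Build the compound modality name from the intersection of the
    three categories, rejecting values outside the taxonomy."""
    name = f"{sense}-{form}-{temporal}"
    if name not in TAXONOMY:
        raise ValueError(f"not a valid modality: {name}")
    return name
```

For example, a spoken word would fall under an acoustic-lexical taxon, while a steady warning light would be visual-concrete-continuous; the compound name makes the facet choices explicit.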
|
170 |
Musical vibrotactile feedback / Birnbaum, David M., January 2007
This thesis discusses the prospect of integrating vibrotactile feedback into digital musical instruments. A holistic approach is taken, considering the role of new instruments in electronic music, as well as the concept of touch in culture and experience. Research about the human biological systems that enable vibrotactile perception is reviewed, with a special focus on its relevance to music. Out of this review, an approach to vibration synthesis is developed that integrates the current understanding of human vibrotactile perception. An account of musical vibrotactile interaction design is presented, which includes the implementation of a vibrotactile feedback synthesizer and the construction of two hardware prototypes that display musical vibration.
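A perception-informed vibration synthesis of the kind the abstract describes might be sketched as below. The 40–1000 Hz band limit and the sensitivity peak near 250 Hz reflect commonly cited vibrotactile-perception findings; the weighting curve and renderer here are crude illustrations, not the thesis's synthesizer.

```python
import math

def vibrotactile_gain(freq_hz):
    """Rough perceptual weighting: strongest near 250 Hz (the commonly
    cited peak of vibrotactile sensitivity), falling off per octave of
    distance, and zero outside the roughly 40-1000 Hz tactile band."""
    if not 40.0 <= freq_hz <= 1000.0:
        return 0.0
    return 1.0 / (1.0 + abs(math.log2(freq_hz / 250.0)))

def render(freq_hz, duration_s=0.1, rate=8000):
    """Render a note's fundamental as a perceptually weighted sine,
    suitable for driving a vibrotactile actuator."""
    g = vibrotactile_gain(freq_hz)
    n = int(duration_s * rate)
    return [g * math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]
```

Weighting the signal this way compensates for the skin's uneven frequency sensitivity, so musical vibrations of different pitch feel comparably strong at the fingertip.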
|