  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Moderators Of Trust And Reliance Across Multiple Decision Aids

Ross, Jennifer 01 January 2008
The present work examines whether users' trust in, and reliance on, automation were affected by manipulations of the users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. When users rely heavily on automation, however, they often fail to monitor the system effectively (i.e., they lose situation awareness - a form of misuse). Conversely, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would affect trust and reliance levels in a better companion aid, but that this relationship depends on the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance. That is, when operators worked with two agents of mixed reliability, their perception of how reliable each aid was, and the degree to which they relied on it, was affected by the reliability of its companion aid.
Additionally, the magnitude and direction of this bias in trust and reliance were contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent an operator believed they were working with significantly impacted their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
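As a rough illustration of the mixed-reliability manipulation described above, two automated aids with different reliability levels can be simulated; the reliability values (0.90 and 0.60) and trial count below are invented for the sketch and are not the study's actual parameters.

```python
import random

def simulate_aid(reliability, n_trials, rng):
    """Simulate an automated aid whose recommendation is correct on each
    trial with probability `reliability`; return the observed accuracy."""
    correct = sum(rng.random() < reliability for _ in range(n_trials))
    return correct / n_trials

rng = random.Random(0)
# A mixed-reliability pair: one high-reliability aid and one poor one,
# as in the dual-aid environment the study manipulates
high_aid = simulate_aid(0.90, 1000, rng)
low_aid = simulate_aid(0.60, 1000, rng)
```

An operator whose trust is well calibrated would come to rely on the two aids roughly in proportion to these observed accuracies; the biasing effect reported above is a deviation from that calibration.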

Problems associated with the process of educational software design

Boland, Robert John, n/a January 1985
The problems associated with the process of educational software design are complex and need to be considered from a number of different perspectives. In this study a number of factors are identified as contributing to the difficulties generally experienced by software designers. It is suggested, however, that the factor which underlies all others is ineffective or inefficient communication. As the design of educational software systems is a complex, multidisciplinary process, the communication of primary interest is that between experts from different disciplines. To help focus on such problems and processes, most discussion is in terms of two representative experts: a Teacher or Educator, and a Computer Programmer or Systems Analyst. In the first chapter the complexity of the task of categorising and evaluating information about educational software is discussed. A need is recognised for some form of conceptual construct which would allow direction and progress in software design to be determined. The concept of a continuum between the "Computer as Tool" and the "Computer as Tutor" is introduced as a logical basis for such a construct. In this and several other chapters the focus is on the design of Intelligent Educational Software, without intending to imply that this is its only useful or desirable form. If, however, the design of Intelligent Educational Software is better understood, the design of less complex forms of software should become much easier, and the teaching of Educational Software Design as a topic for formal study will become possible. The second chapter addresses the problem of interpersonal communication between experts in different disciplines who have no common technical language. The design of educational software is made more difficult by the fact that teachers find it difficult to describe "what they do" when they teach. The concept of a language of accommodation is introduced and discussed.
The general problem of software acquisition, design management, and evaluation is addressed in Chapter Three. The interaction between the roles of Educator and Systems Analyst is considered in relation to the types of software available today. It is suggested that collaborative design between experts from different fields can be described and analysed as a set of complex learning behaviours. The process of design is recognised as a learning process which, if better understood, can be improved and taught. Chapter Four considers the problem of human/machine interaction. An operational model, or designer's checklist, to aid in the design of a student/machine software interface is discussed on the assumption that the student, the computer, and the software interface can be considered as three independent but interacting systems. By way of illustration, a model is developed which could be used to design software for use in adult education. Chapter Five is in two parts, each dealing with essentially the same concept - the transmission of knowledge about the process of educational software design. Two major strategies are considered. Firstly, the concept of a Microfactor is introduced as a way in which practitioners in the field of educational software design might communicate about solutions to certain problems. The chapter then proposes and discusses a unit of study for teachers on the topic of Educational Software Design, in which practitioners communicate with beginners. The main focus of this unit, to be called "Educational Software Design", is on (1) the need for problem-solving skills in educational software design; (2) the need for communication skills to facilitate collaboration between experts; and (3) the need for a schema which will assist in the structuring of knowledge about educational software design. It is modelled on an existing unit in a BA(TAFE/ADULT) course which has been running for several years.
A detailed description of this prototype unit and its design is given in Appendices A and B. To conclude the study, Chapter Six considers some of the possible attitudinal barriers which can severely restrict the use of educational software. Even the most expertly designed software will be of no benefit if it is not used.

Integrated Framework Design for Intelligent Human Machine Interaction

Abou Saleh, Jamil January 2008 (has links)
Human-computer interaction, sometimes referred to as man-machine interaction, is a concept that emerged alongside computers, or more generally machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often only technically successful; most of the time users find them confusing to use, so such systems are never used efficiently. Building sophisticated machines and robots is therefore not the only challenge; more effort should be put into making these machines simpler for all kinds of users and generic enough to accommodate different types of environments. From this arises the need to design intelligent human-computer interaction modules. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized and coordinated cooperative work that a set of robots can perform. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the different building blocks of the intelligent manufacturing cell that communicates with the framework via the most intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction.

Design and evaluation of new interaction techniques in the context of interactive television / New gestural interaction techniques for interactive television

Vo, Dong-Bach 24 September 2013 (has links)
Television has never stopped growing in popularity and offering new services to viewers. These increasingly interactive services make viewers more engaged in television activities. Unlike the use of a computer, viewers interact on a distant screen, with a remote control and applications, from a sofa ill-suited to the use of a keyboard and mouse. The remote control and the current interaction techniques associated with it struggle to meet viewers' expectations. To address this problem, the work of this thesis explores the possibilities offered by the gestural modality for designing new interaction techniques for interactive television, taking its context of use into account.
More specifically, we first present the specific context of television usage. We then propose a literature review of research seeking to improve the remote control, and finally focus on gestural interaction. To guide the design of new techniques, we introduce a taxonomy that attempts to unify gestural interaction constrained by a surface and hands-free gestural interaction, whether instrumented or not.
We then propose and evaluate various gestural interaction techniques along two lines of research: instrumented gestural interaction techniques, which improve the expressiveness of the traditional remote control, and hands-free gestural interaction, exploring the possibility of performing gestures on the surface of the belly to control the television set.

Individual Preferences In The Use Of Automation

Thropp, Jennifer 01 January 2006
As system automation increases and evolves, the intervention of the supervising operator becomes ever less frequent but ever more crucial. In the adaptive automation approach, control of tasks dynamically shifts between humans and machines; it is an alternative to traditional static allocation, in which task control is assigned during system design and subsequently remains unchanged during operations. It is proposed that adaptive allocation should adjust to individual operators' characteristics in order to improve performance, avoid errors, and enhance safety. The roles of three individual-difference variables relevant to adaptive automation are described: attentional control, desirability of control, and trait anxiety. It was hypothesized that these traits contribute to performance on target detection tasks at different levels of difficulty, as well as to preferences for different levels of automation. The operators' level of attentional control was inversely proportional to their automation level preferences, although few objective performance changes were observed. The effects of sensory modality were also assessed, and auditory signal detection was superior to visual signal detection. As a result, the following implications are proposed: operators generally preferred either low or high automation while neglecting the intermediate level; preferences and needs for automation may not be congruent; and there may be a conservative response bias associated with high attentional control, notably in the auditory modality.
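The "conservative response bias" noted above is a construct from signal detection theory: sensitivity (d') and response criterion (c) are computed from hit and false-alarm rates, with c > 0 indicating a conservative tendency to withhold "signal" responses. A minimal sketch, using made-up rates rather than the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates.
    c > 0 indicates a conservative bias (a tendency to say 'no signal')."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical operator: detects 70% of targets with 10% false alarms
d_prime, c = sdt_measures(0.70, 0.10)  # c comes out positive: conservative
```

Rates of 0 or 1 would need a correction (e.g. the log-linear rule) before the inverse-normal transform, since z is undefined at the extremes.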

Assistive Navigation Technology for Visually Impaired Individuals

Norouzi Kandalan, Roya 08 1900
Sight is essential in our daily tasks. Compensatory senses have been used for centuries by visually impaired individuals to navigate independently, and technology can minimize some of their challenges. Assistive navigation technologies facilitate pathfinding and tracing in indoor scenarios, and different modules can be added to warn not only about obstacles on the ground but also about hanging objects. In this work, we explore new methods to assist visually impaired individuals in navigating independently in an indoor scenario. We employed a location estimation algorithm based on the fingerprinting method to estimate the initial location of the user, and mitigated the estimation error with a particle filter. The shortest path was computed with the A* algorithm. To provide the user with an accident-free experience, we employed an obstacle avoidance algorithm capable of warning users about potential hazards. Finally, to provide an effective means of communication with the user, we employed text-to-speech and speech recognition algorithms. The main contribution of this work is to glue these modules together efficiently and affordably.
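The shortest-path step mentioned above can be sketched with a standard A* implementation on an occupancy grid; the toy floor plan, start, and goal below are illustrative assumptions, not the thesis's actual indoor map.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free cell, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tie-breaker so the heap never compares node/parent tuples
    open_heap = [(h(start), 0, next(tie), start, None)]
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:          # already expanded via a cheaper path
            continue
        came_from[node] = parent
        if node == goal:               # walk parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, next(tie), nxt, node))
    return None  # goal unreachable

# Toy floor plan: a wall (row of 1s) forces a detour around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In a full system of the kind described, the grid would come from the building map, the start cell from the fingerprinting/particle-filter location estimate, and the resulting path would feed the speech interface.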

The computational face for facial emotion analysis: Computer based emotion analysis from the face

Al-dahoud, Ahmad January 2018
Facial expressions are considered to be the most revealing way of understanding the human psychological state during face-to-face communication. It is believed that a more natural interaction between humans and machines can be achieved through a detailed understanding of the different facial expressions which imitate the manner in which humans communicate with each other. In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We examine a deeper aspect of facial expressions by trying to identify gender and human identity - which can be considered a form of emotional biometric - using only the dynamic characteristics of the smile expression. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (real) and non-Duchenne (posed) smiles, and identify that the expressions in the eyes contain features that discriminate between Duchenne and non-Duchenne smiles. Our results indicate that facial expressions can be identified through facial movement analysis models, with an accuracy rate of 86% for classifying the six universal facial expressions and 94% for classifying the 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, obtaining an 86% classification rate. Likewise, we present a framework to study the possibility of using the smile as a biometric, showing that the human smile is unique and stable. / Al-Zaytoonah University

The impact of voice on trust attributions

Torre, Ilaria January 2017
Trust and speech are both essential aspects of human interaction. On the one hand, trust is necessary for vocal communication to be meaningful. On the other hand, humans have developed ways to infer someone's trustworthiness from their voice, as well as to signal their own. Yet research on trustworthiness attributions to speakers is scarce and contradictory, and very often uses explicit data, which do not predict actual trusting behaviour. Measuring behaviour, however, is essential for an accurate representation of trust. This thesis contains five experiments examining the influence of various voice characteristics — including accent, prosody, emotional expression and naturalness — on trusting behaviours towards virtual players and robots. The experiments use the "investment game" — a method derived from game theory that makes it possible to measure implicit trustworthiness attributions over time — as their main methodology. Results show that standard accents, high pitch, slow articulation rate and smiling voice generally increase trusting behaviours towards a virtual agent, and that a synthetic voice generally elicits higher trustworthiness judgments towards a robot. The findings also suggest that different voice characteristics influence trusting behaviours with different temporal dynamics. Furthermore, the actual behaviour of the various speaking agents was modified to be more or less trustworthy, and results show that people's trusting behaviours develop over time accordingly. People also reinforce their trust towards speakers they deem particularly trustworthy when those speakers are indeed trustworthy, but punish them when they are not. This suggests that people's trusting behaviours might also be influenced by the congruency of their first impressions with the actual experience of the speaker's trustworthiness — a "congruency effect".
This has important implications in the context of Human-Machine Interaction, for example for assessing users' reactions to speaking machines that might not always function properly. Taken together, the results suggest that voice influences trusting behaviour, that first impressions of a speaker's trustworthiness based on vocal cues might not be indicative of future trusting behaviours, and that trust should be measured dynamically.
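The "investment game" used as the main methodology above is a standard paradigm from behavioural game theory: the investor's transfer is multiplied (classically tripled) and the trustee chooses how much to return, with the amount sent serving as the behavioural measure of trust. A minimal sketch of one round, with illustrative amounts and return policy:

```python
def investment_round(endowment, sent, return_fraction, multiplier=3):
    """One round of the investment game: the investor sends `sent` out of
    `endowment`; the transfer is multiplied (classically tripled) and the
    trustee returns a fraction of what was received. The amount sent is
    the behavioural measure of trust."""
    assert 0 <= sent <= endowment
    received = sent * multiplier
    returned = return_fraction * received
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# A trustworthy trustee returning half of a tripled 5-unit investment
inv, tru = investment_round(endowment=10, sent=5, return_fraction=0.5)
# inv = 10 - 5 + 7.5 = 12.5 ; tru = 15 - 7.5 = 7.5
```

Tracking `sent` across repeated rounds against the same speaking agent is what yields the implicit, dynamic trust measure described above.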

Interface Design In an Automobile Glass Cockpit Environment

Spendel, Michael, Strömberg, Markus January 2007
Today’s automobile cockpit is filled with different buttons and screen-based displays giving input and relaying information in a complex human-machine system. Following in the footsteps of the flight industry of the early 1970s, this thesis work focused on creating a complete glass cockpit concept for the automobile.
Our automobile glass cockpit consists of three displays. The first is a touch-screen-based centre console with an interface that we took part in creating during the spring of 2006. Parallel to this master thesis, a head-up display was installed by a group of students, and we had the opportunity to give input regarding the design of its graphical interface. The third display, an LCD, replaces the main instruments displaying speed, RPM, fuel level, engine temperature, etc. The focus of this project was to create a dynamic, mode-based interface replacing today’s static main instruments, together with ideas on an extended allocation of functions to the area on and around the steering wheel.
After a thorough theoretical study, a large number of ideas were put to the test and incorporated in concept sketches. Paper sketches ranging from detailed features to all-embracing concepts, combined with interviews and brainstorming sessions, converged into a number of computer sketches made in image-processing software. The computer sketches were easily displayed in the cockpit environment and instantly evaluated; some parts were discarded and some incorporated into new, modified ideas leading to a final concept solution.
After the design part was concluded, the new graphical interface was given functionality with the help of programming software. As with the computer sketches, the functionality of the interface could be quickly evaluated and modified. With the help of a custom-made application, our interface could be integrated with the simulator software and fully implemented in the automobile cockpit at the university’s simulator facilities.
Using a custom-made scenario, the interface underwent a minor, informal evaluation. A number of potential users were invited to the VR laboratory and introduced to the new concept. After driving a pre-determined route and familiarizing themselves with the interface, their thoughts on screen-based solutions in general and on the interface itself were gathered. In addition, we ourselves performed an evaluation of the interface based on the theoretical study.
