1

Moderators Of Trust And Reliance Across Multiple Decision Aids

Ross, Jennifer 01 January 2008 (has links)
The present work examines whether users' trust in and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. When users rely heavily on automation, they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes that a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would affect trust and reliance levels in a more reliable companion aid, but that this relationship depends on the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance: when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of this bias in trust and reliance was contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent operators believed they were working with significantly affected their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or equal abilities and to errors by human or robotic teammates. The overall goal of this research was to develop a better understanding of how these factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
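As a rough illustration of the reliance measure described above (a sketch only, not the author's materials or analysis), the snippet below computes an operator's rate of agreement with each aid's recommendations before and after a salient error; the trial data and column names are invented.

```python
# Minimal sketch: agreement (reliance) rates with two aids, split by whether
# a salient automation error has already occurred. Data and column names are
# hypothetical, for illustration only.
import pandas as pd

trials = pd.DataFrame({
    "aid":             ["A", "A", "A", "A", "B", "B", "B", "B"],
    "after_error":     [False, False, True, True, False, False, True, True],
    "agreed_with_aid": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = operator followed the aid's recommendation
})

# Mean agreement per aid, before vs. after the salient error.
reliance = trials.groupby(["aid", "after_error"])["agreed_with_aid"].mean()
print(reliance)
```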
2

Problems associated with the process of educational software design

Boland, Robert John, n/a January 1985 (has links)
The problems associated with the process of educational software design are complex and need to be considered from a number of different perspectives. In this study a number of factors are identified as contributing to difficulties generally experienced by software designers. It is suggested, however, that the factor which underlies all others is ineffective or inefficient communication. As the design of educational software systems is a complex, multidisciplinary process, the communication of primary interest is that between experts from different disciplines. To help focus on such problems and processes, most discussion is in terms of two representative experts: a Teacher or Educator, and a Computer Programmer or Systems Analyst. In the first chapter the complexity of the task of categorising and evaluating information about educational software is discussed. A need is recognised for some form of conceptual construct which would allow direction and progress in software design to be determined. The concept of a continuum between the "Computer as Tool" and the "Computer as Tutor" is introduced as a logical basis for such a construct. In this and several other chapters the focus is on the design of Intelligent Educational Software, while not intending to imply that this is its only useful or desirable form. If, however, the design of Intelligent Educational Software is better understood, the design of less complex forms of software should become much easier, and teaching Educational Software Design as a topic for formal study will become possible. The second chapter addresses the problem of interpersonal communication between experts in different disciplines who have no common technical language. The design of educational software is made more difficult by the fact that teachers find it difficult to describe "what they do" when they teach. The concept of a language of accommodation is introduced and discussed. The general problem of software acquisition, design management, and evaluation is addressed in Chapter Three. The interaction between the roles of Educator and Systems Analyst is considered in relation to the types of software available today. It is suggested that collaborative design between experts from different fields can be described and analysed as a set of complex learning behaviours. The process of design is recognised as a learning process which, if better understood, can be improved and taught. Chapter Four considers the problem of human/machine interaction. An operational model, or designer's check list, to aid in the design of a Student/Machine software interface is discussed, on the assumption that the student, the computer, and the software interface can be considered as three independent but interacting systems. By way of illustration, a model is developed which could be used to design software for use in adult education. Chapter Five is in two parts, each part dealing with essentially the same concept - the transmission of knowledge about the process of educational software design. Two major strategies are considered. Firstly, the concept of a Microfactor is introduced as a way in which practitioners in the field of educational software design might communicate about solutions to certain problems. The chapter then proposes and discusses a unit of study for teachers on the topic of Educational Software Design, in which practitioners communicate with beginners.
The main focus of this unit, to be called "Educational Software Design", is on (1) the need for problem-solving skills in educational software design; (2) the need for communication skills to facilitate collaboration between experts; and (3) the need for a schema which will assist in the structuring of knowledge about educational software design. It is modelled on an existing unit in a BA(TAFE/ADULT) course which has been running for several years. A detailed description of this prototype unit and its design is given in Appendices A and B. To conclude the study, Chapter Six considers some of the possible attitudinal barriers which can severely restrict the use of educational software. Even the most expertly designed software will be of no benefit if it is not used.
3

Integrated Framework Design for Intelligent Human Machine Interaction

Abou Saleh, Jamil January 2008 (has links)
Human-computer interaction, sometimes referred to as Man-Machine Interaction, is a concept that emerged alongside computers, or more generally machines. The methods by which humans interact with computers have come a long way, and new designs and technologies appear every day. However, computer systems and complex machines are often only technically successful; much of the time users find them confusing to use, and so such systems are never used efficiently. Building sophisticated machines and robots is therefore not the only concern; more effort should be put into making these machines simpler for all kinds of users and generic enough to accommodate different types of environments. This is where the design of intelligent human-computer interaction modules comes in. In this work, we aim to implement a generic framework (referred to as the CIMF framework) that allows the user to control the synchronized, coordinated, cooperative work that a set of robots can perform. Three robots are involved so far: two manipulators and one mobile robot. The framework should be generic enough to be hardware independent and to allow the easy integration of new entities and modules. We also aim to implement the different building blocks of the intelligent manufacturing cell that communicates with the framework via intelligent and advanced human-computer interaction techniques. Three techniques are addressed: interface-, audio-, and visual-based interaction.
4

HMI Solution between a manual operator and a pump drive based on Smartphones

Santosh, Golla January 2014 (has links)
With the development of modern technology, mobile communications are changing people's lives and making their day-to-day activities easier. The aim of this thesis work is to address a modern technological solution, based on smart phones, that simplifies and acts as an HMI between a pump drive and an operator. Xylem provides a wide range of pump control units offering several advanced features, including condition monitoring, cleaning sequences, flow calculation, energy optimization, sump cleaning, and so on. Smart Run is a pump control unit installed at wastewater pump stations whose parameters can be monitored and configured physically using a keypad or remotely using an extension communication gateway, which is a costly solution for installation and maintenance. A simple working prototype HMI solution based on smart phones is therefore of interest, to see how a smart phone can relay information between a pump control unit and an operator in the vicinity of the pump. For this approach, a thorough study has been done on different types of smart phones, their trends, and possible wireless communication solutions between the operator's smart phone and the pump. An interactive design process with a focus on usability and data representation in a smart phone application was used to support operators' needs and provide a cost-effective solution. The results showed that this approach has many benefits, including serving as a cost-effective HMI solution, data monitoring, better alarm monitoring with additional information, an enhanced display compared with Smart Run's OLED display, multilingual support, easier support services, and usefulness as a receiver unit for the dewatering pump hardware developed in parallel with this thesis. This thesis work was carried out at Xylem Water Solutions AB [1], Stockholm, Sweden, in collaboration with the Department of Electronics Design at Mittuniversitetet [2], Sundsvall. This report can be used as groundwork for future development of smart phone applications for Xylem products. [1] http://www.xyleminc.com [2] http://www.miun.se
5

Conception et évaluation de nouvelles techniques d'interaction dans le contexte de la télévision interactive / New gestural interaction techniques for interactive television

Vo, Dong-Bach 24 September 2013 (has links)
Television has continued to grow in popularity and to evolve by offering new services. These increasingly interactive services make viewers more engaged in the television activity. Unlike computer users, viewers interact with a distant screen, using a remote control and applications, from a sofa that is poorly suited to a keyboard and mouse. The remote control and the current interaction techniques associated with it struggle to meet viewers' expectations. To address this problem, this thesis explores the possibilities offered by the gestural modality to design new interaction techniques for interactive television, taking its context of use into account.
We first present the specific context of television use. We then propose a design space characterizing the literature that seeks to improve the remote control, and finally focus on gestural interaction. To guide the design of new techniques, we introduce a taxonomy that attempts to unify gestural interaction that is surface-constrained or hands-free, and instrumented or not.
We then designed and evaluated various gestural interaction techniques along two lines of research: instrumented gestural interaction techniques that improve the expressiveness of the traditional remote control, and hands-free gestural interaction techniques that explore the possibility of performing gestures on the surface of the belly to control the television set.
6

Individual Preferences In The Use Of Automation

Thropp, Jennifer 01 January 2006 (has links)
As system automation increases and evolves, the intervention of the supervising operator becomes ever less frequent but ever more crucial. In the adaptive automation approach, control of tasks shifts dynamically between humans and machines, an alternative to traditional static allocation in which task control is assigned during system design and subsequently remains unchanged during operations. It is proposed that adaptive allocation should adjust to individual operators' characteristics in order to improve performance, avoid errors, and enhance safety. The roles of three individual-difference variables relevant to adaptive automation are described: attentional control, desirability of control, and trait anxiety. It was hypothesized that these traits contribute to the level of performance on target detection tasks at different levels of difficulty, as well as to preferences for different levels of automation. Operators' level of attentional control was inversely proportional to their automation level preferences, although few objective performance changes were observed. The effects of sensory modality were also assessed, and auditory signal detection was superior to visual signal detection. As a result, the following implications are proposed: operators generally preferred either low or high automation while neglecting the intermediate level; preferences for and needs for automation may not be congruent; and there may be a conservative response bias associated with high attentional control, notably in the auditory modality.
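To make the signal detection analysis mentioned above concrete, here is a minimal sketch (with invented hit and false-alarm rates, not the study's data) of how sensitivity (d') and response criterion (c) are commonly computed; a positive criterion corresponds to the kind of conservative response bias reported.

```python
# Minimal sketch: sensitivity (d') and criterion (c) from hit/false-alarm rates.
# Example rates are invented; a positive c indicates a conservative bias.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    z_hit = norm.ppf(hit_rate)   # z-transform of hit rate
    z_fa = norm.ppf(fa_rate)     # z-transform of false-alarm rate
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical auditory vs. visual detection performance.
print(dprime_and_criterion(hit_rate=0.90, fa_rate=0.10))  # auditory
print(dprime_and_criterion(hit_rate=0.75, fa_rate=0.20))  # visual
```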
7

Assistive Navigation Technology for Visually Impaired Individuals

Norouzi Kandalan, Roya 08 1900 (has links)
Sight is essential in our daily tasks. Visually impaired individuals have for centuries used compensatory senses to navigate independently, and technology can help minimize some of the remaining challenges. Assistive navigation technologies facilitate pathfinding and path tracing in indoor scenarios, and additional modules can warn not only about obstacles on the ground but also about hanging objects. In this work, we explore new methods to assist visually impaired individuals in navigating independently in an indoor scenario. We employ a location estimation algorithm based on the fingerprinting method to estimate the initial location of the user and mitigate the estimation error with a particle filter. The shortest path is calculated with an A* algorithm. To provide the user with an accident-free experience, we employ an obstacle avoidance algorithm capable of warning users about potential hazards. Finally, to provide an effective means of communication with the user, we employ text-to-speech and speech recognition algorithms. The main contribution of this work is to glue these modules together efficiently and affordably.
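As a rough illustration of the route-planning step described above (a sketch only, not the thesis's implementation), the following code runs A* over a small, hypothetical occupancy grid using a Manhattan-distance heuristic.

```python
# Minimal A* sketch on a 4-connected occupancy grid (0 = free, 1 = obstacle).
# Grid, start, and goal are hypothetical; Manhattan distance is the heuristic.
import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, cost so far, node, path)
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, start=(0, 0), goal=(2, 0)))
```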
8

The computational face for facial emotion analysis: Computer based emotion analysis from the face

Al-dahoud, Ahmad January 2018 (has links)
Facial expressions are considered the most revealing way of understanding the human psychological state during face-to-face communication. It is believed that a more natural interaction between humans and machines can be achieved through a detailed understanding of the different facial expressions that mirror the way humans communicate with each other. In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We examine a deeper aspect of facial expressions whereby we try to identify gender and human identity - which can be considered a form of emotional biometric - using only the dynamic characteristics of smile expressions. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (real) and non-Duchenne (posed) smiles, and identify that the expressions in the eyes contain features that discriminate between Duchenne and non-Duchenne smiles. Our results indicate that facial expressions can be identified through facial movement analysis models, with an accuracy rate of 86% for classifying the six universal facial expressions and 94% for classifying the 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, obtaining an 86% classification rate. Likewise, we present a framework to study the possibility of using the smile as a biometric, showing that the human smile is unique and stable. / Al-Zaytoonah University
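To illustrate the kind of classification step described above, here is a minimal, hypothetical sketch that feeds dynamic smile features to an off-the-shelf classifier; the feature names, synthetic data, and choice of a support-vector machine are assumptions for illustration, not the thesis's actual models.

```python
# Minimal sketch: classifying gender from dynamic smile features.
# Features, labels, and the SVM choice are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row: e.g. [smile onset speed, apex duration, lip-corner displacement, eye-region motion]
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0/1 gender labels (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```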
9

Human Interaction with Autonomous machines: Visual Communication to Encourage Trust

Norstedt, Emil, Sahlberg, Timmy January 2020 (has links)
Ongoing development is happening within the construction industry: machines are being transformed from being operated by human drivers to being autonomous. This project was a collaboration with Volvo Construction Equipment (Volvo CE) and their new autonomous wheel loader. The autonomous machine is supposed to operate in the same environment as people; therefore, a well-developed safety system is required to eliminate accidents. The purpose has been to develop a system that increases safety for workers and encourages trust in the autonomous machine. The system is based on visual communication to achieve trust between the machine and the people around it. An iterative process, with a focus on prototyping, testing, and analysing, has been used to accomplish a successful result. By creating models with a variety of functions, a better understanding has been developed of how to design a human-machine interface that encourages trust. The iterative process resulted in a concept in which the machine communicates through eyes. Eye contact is an essential factor for creating trust in unfamiliar and exposed situations. The solution mediates different expressions by changing the colour and shape of the eyes to create awareness and to inform people moving around in the vicinity of the machine. Specific information can be conveyed in various situations by adapting the colour and shape of the eyes. With this way of communicating, trust in the autonomous machine can be encouraged, thereby increasing safety.
