  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The effects of five discrete variables on human performance in a telephone information system /

Cary, Michele Marie. January 1993 (has links)
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1993. / Vita. Abstract. Includes bibliographical references (leaves 67-70). Also available via the Internet.
82

Haptic feedback of manipulator kinematic conditioning for teleoperation /

Maneewarn, Thavida. January 2000 (has links)
Thesis (Ph. D.)--University of Washington, 2000. / Vita. Includes bibliographical references (leaves 153-162).
83

Modeling human performance in a telecommunications network /

Nagy, Gabriella, January 1900 (has links)
Thesis (M.A.)--Carleton University, 2001. / Includes bibliographical references (p. 54-57). Also available in electronic format on the Internet.
84

Collaborative Communication Interruption Management System (C-CIMS): Modeling Interruption Timings via Prosodic and Topic Modelling for Human-Machine Teams

Peters, Nia S. 01 December 2017 (has links)
Human-machine teaming aims to meld human cognitive strengths and the unique capabilities of smart machines to create intelligent teams adaptive to rapidly changing circumstances. One major contributor to the problem of human-machine teaming is a lack of communication skills on the part of the machine. This research focuses on a machine's interruption timings, or when a machine should share and communicate information with human teammates within human-machine teaming interactions. Previous work addresses interruption timings from the perspective of single-human, multitasking and multiple-human, single-task interactions. The primary aim of this dissertation is to augment this area by approaching the same problem from the perspective of a multiple-human, multitasking interaction. The proposed machine is the Collaborative Communication Interruption Management System (C-CIMS), which is tasked with leveraging speech information from a human-human task and making inferences on when to interrupt with information related to an orthogonal human-machine task. This study and previous literature both suggest monitoring task boundaries and engagement as candidate moments of interruptibility within multiple-human, multitasking interactions. The goal then becomes designing an intermediate step between human teammate communication and points of interruptibility within these interactions. The proposed intermediate step is the mapping of low-level speech information, such as prosodic and lexical information, onto higher constructs indicative of interruptibility. C-CIMS is composed of a Task Boundary Prosody Model, a Task Boundary Topic Model, and finally a Task Engagement Topic Model. Each of these components is evaluated separately in terms of how it performs within two different simulated human-machine teaming scenarios, its speed-versus-accuracy tradeoffs, and the other limitations of each module.
Overall, the Task Boundary Prosody Model is tractable within a real-time system because of the low latency of processing prosodic information, but it is less accurate at predicting task boundaries, even within human-machine interactions with simple dialogue. Conversely, the Task Boundary and Task Engagement Topic Models do well at inferring task boundaries and engagement, respectively, but are intractable in a real-time system because of the bottleneck of producing automatic speech recognition transcriptions to make interruption decisions. The overall contribution of this work is a novel approach to predicting interruptibility within human-machine teams by modeling higher constructs indicative of interruptibility using low-level speech information.
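The abstract does not specify the Task Boundary Prosody Model's internals. As a rough illustration of the general idea it describes — mapping low-level prosodic features (here, pause duration, pitch slope, and energy) onto a binary task-boundary decision — a minimal sketch on synthetic data might look like the following. The feature choices, distributions, and logistic-regression classifier are assumptions for illustration, not the dissertation's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic prosodic features per utterance-final frame:
# [pause duration (s), pitch slope (Hz/s), RMS energy].
# Task boundaries (label 1) are assumed to show longer pauses and falling pitch.
n = 200
boundary = rng.random(n) < 0.5
pause = np.where(boundary, rng.normal(0.8, 0.2, n), rng.normal(0.2, 0.1, n))
pitch_slope = np.where(boundary, rng.normal(-30, 10, n), rng.normal(5, 10, n))
energy = rng.normal(0.5, 0.1, n)
X = np.column_stack([pause, pitch_slope, energy])
y = boundary.astype(float)

# Standardize features, then fit a logistic-regression boundary detector
# by plain gradient descent on the cross-entropy loss.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted boundary probability
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == boundary).mean()
```

Because prosodic features like these are cheap to compute frame-by-frame, a detector of this shape runs with low latency — consistent with the tractability/accuracy tradeoff the abstract reports.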
85

Extraction and analysis of facial features: application to driver hypovigilance detection

Alioua, Nawal 28 March 2015 (has links)
Studying facial features has attracted increasing attention in both the academic and industrial communities. These features convey nonverbal information that plays a key role in human communication, and they are very useful for enabling human-machine interaction. The automatic study of facial features is therefore an important task for various applications, including human-machine interfaces, behavioral science, clinical practice, and driver state monitoring. In this thesis, we focus on monitoring the driver's state through analysis of facial features. This problem attracts universal interest because of the growing number of road accidents, a large share of which is caused by a deterioration in driver vigilance, known as hypovigilance. Three hypovigilance states can be distinguished.
The first and most critical is drowsiness, which manifests as an inability to stay awake and is characterized by microsleep intervals of 2 to 6 seconds. The second is fatigue, defined by the increasing difficulty of carrying a task through to completion and characterized by an increased number of yawns. The third is inattention, which occurs when attention is diverted from the driving activity and is characterized by holding the head pose in a non-frontal direction. The aim of this thesis is to propose facial-feature-based approaches for detecting driver hypovigilance. First, we proposed an approach for detecting drowsiness by identifying microsleep intervals through eye-state analysis. Second, we introduced an approach for identifying fatigue by detecting yawning through mouth analysis. Since no public hypovigilance database is available, we acquired and annotated our own database, representing different subjects simulating hypovigilance states under real lighting conditions, to evaluate the performance of these two approaches. Third, we developed two new head pose estimators that both detect driver inattention and determine the driver's state even when the facial features (eyes and mouth) cannot be analyzed because of non-frontal head positions. We evaluated these two estimators on the public Pointing'04 database. We then acquired and annotated a database representing variation in the driver's head pose to validate our estimators in a driving environment.
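Given the definition above (microsleeps as eye closures of at least 2 seconds), the detection step reduces to finding sufficiently long runs of closed-eye frames in a per-frame eye-state sequence. A minimal sketch, assuming a hypothetical per-frame eye-state classifier has already produced closed/open labels:

```python
def microsleep_intervals(eye_closed, fps, min_dur=2.0):
    """Return (start_frame, end_frame) pairs where the eyes stay closed
    for at least min_dur seconds. eye_closed is a per-frame bool sequence."""
    intervals = []
    start = None
    for i, closed in enumerate(eye_closed):
        if closed and start is None:
            start = i  # a closure run begins
        elif not closed and start is not None:
            if (i - start) / fps >= min_dur:
                intervals.append((start, i))
            start = None
    # handle a closure run that lasts until the end of the sequence
    if start is not None and (len(eye_closed) - start) / fps >= min_dur:
        intervals.append((start, len(eye_closed)))
    return intervals

# At 10 fps: a 2.5 s closure is flagged as a microsleep, a 0.5 s blink is not.
frames = [False] * 10 + [True] * 25 + [False] * 5 + [True] * 5 + [False] * 5
print(microsleep_intervals(frames, fps=10))  # → [(10, 35)]
```

The same run-length logic extends to fatigue detection by substituting mouth-open labels and a yawn-length threshold.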
86

Deriving Motor Unit-based Control Signals for Multi-Degree-of-Freedom Neural Interfaces

Twardowski, Michael D. 14 May 2020 (has links)
Beginning with the introduction of electrically powered prostheses more than 65 years ago, surface electromyographic (sEMG) signals recorded from residual muscles in amputated limbs have served as the primary source of upper-limb myoelectric prosthetic control. The majority of these devices use one or more neural interfaces to translate the sEMG signal amplitude into voltage control signals that drive the mechanical components of a prosthesis. In so doing, users are able to directly control the speed and direction of prosthetic actuation by varying the level of muscle activation and the associated sEMG signal amplitude. Yet in spite of decades of development, myoelectric prostheses are prone to highly variable functional control, leading to prosthetic abandonment rates of 23-35% among upper-limb amputees. Efforts to improve prosthetic control in recent years have led to the development and commercialization of neural interfaces that employ pattern recognition of sEMG signals recorded from multiple locations on a residual limb to map different intended movements. But while these advanced algorithms have made significant strides, there remains substantial need for further improvement to increase the reliability of pattern recognition control amid the variability of muscle co-activation intensities. In an effort to enrich the control signals that form the basis for myoelectric control, I have been developing advanced algorithms as part of a next-generation neural interface research and development effort, referred to as Motor Unit Drive (MU Drive), that is able to non-invasively extract the firings of individual motor units (MUs) from sEMG signals in real time and translate the firings into smooth, biomechanically informed control signals.
These measurements of motor unit firing rates and recruitment naturally provide high levels of motor control information from the peripheral nervous system for intact limbs, and therefore hold great promise for restoring function for amputees. The goal of my doctoral work was to develop advanced algorithms for the MU Drive neural interface system that leverage MU features to provide intuitive control of multiple degrees of freedom. To achieve this goal, I targeted three research aims: 1) derive real-time MU-based control signals from motor unit firings, 2) evaluate the feasibility of motor unit action potential (MUAP) based discrimination of muscle intent, and 3) design and evaluate MUAP-based classification of motions of the arm and hand.
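The abstract does not give the MU Drive algorithms themselves, but the core step it describes — turning discrete motor unit firings into a smooth control signal — can be sketched as a causal firing-rate estimate: the spike train convolved with an exponential kernel. The kernel shape and time constant here are assumptions for illustration, not the system's actual smoothing method.

```python
import math

def firing_rate_signal(spike_times, duration, dt=0.01, tau=0.2):
    """Smooth a discrete motor-unit spike train into a firing-rate
    estimate (spikes/s) using a causal exponential kernel with time
    constant tau seconds, sampled every dt seconds."""
    n = int(round(duration / dt))
    spike_bins = {int(round(t / dt)) for t in spike_times}
    decay = math.exp(-dt / tau)
    rate, r = [], 0.0
    for i in range(n):
        r *= decay                # kernel decays between firings
        if i in spike_bins:
            r += 1.0 / tau        # unit-area kernel: each spike adds area 1
        rate.append(r)
    return rate

# A motor unit firing steadily at 10 Hz: the estimate settles near 10 spikes/s,
# giving a smooth signal suitable for driving a proportional control channel.
rate = firing_rate_signal([k * 0.1 for k in range(50)], duration=5.0)
steady = sum(rate[-100:]) / 100
```

The causal kernel matters for this use: the estimate at time t depends only on past firings, so it can run in real time, at the cost of a lag on the order of tau.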
87

Design and evaluation of new gestural interaction techniques for interactive television

Vo, Dong-Bach 24 September 2013 (has links)
Television has continually grown in popularity and evolved by offering new services. These increasingly interactive services make viewers more engaged in television activities. Unlike the use of a computer, viewers interact with a distant screen and its applications using a remote control, from a sofa poorly suited to a keyboard and mouse. The remote control and the interaction techniques currently associated with it struggle to meet viewers' expectations. To address this problem, this thesis explores the possibilities offered by the gestural modality for designing new interaction techniques for interactive television, taking its context of use into account.
We first present the specific context of television viewing. We then propose a design space characterizing work in the literature that seeks to improve the remote control, before focusing on gestural interaction. To guide the design of new techniques, we introduce a taxonomy that attempts to unify surface-constrained and hands-free gestural interaction, whether instrumented or not.
We then designed and evaluated several gestural interaction techniques along two lines of research: instrumented gestural interaction techniques that improve the expressiveness of the traditional remote control, and hands-free gestural interaction, exploring the possibility of performing gestures on the surface of the belly to control the television.
88

A user interface builder/manager for knowledge craft /

Sedighian, Kamran January 1987 (has links)
No description available.
89

Man the machine : a history of a metaphor from Leonardo da Vinci to H. G. Wells

Tombs, George, 1956- January 2002 (has links)
No description available.
90

Individual Preferences In The Use Of Automation

Thropp, Jennifer 01 January 2006 (has links)
As system automation increases and evolves, the intervention of the supervising operator becomes ever less frequent but ever more crucial. The adaptive automation approach is one in which control of tasks dynamically shifts between humans and machines, an alternative to traditional static allocation in which task control is assigned during system design and subsequently remains unchanged during operations. It is proposed that adaptive allocation should adjust to the individual operator's characteristics in order to improve performance, avoid errors, and enhance safety. The roles of three individual-difference variables relevant to adaptive automation are described: attentional control, desirability of control, and trait anxiety. It was hypothesized that these traits contribute to the level of performance on target detection tasks at different levels of difficulty, as well as to preferences for different levels of automation. The operators' level of attentional control was inversely proportional to automation level preferences, although few objective performance changes were observed. The effects of sensory modality were also assessed, and auditory signal detection was superior to visual signal detection. As a result, the following implications have been proposed: operators generally preferred either low or high automation while neglecting the intermediate level; preferences and needs for automation may not be congruent; and there may be a conservative response bias associated with high attentional control, notably in the auditory modality.
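The response-bias finding above comes from signal detection analysis. As a reminder of the standard mechanics (not this dissertation's code), sensitivity d' and criterion c can be computed from hit and false-alarm counts like this:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory: sensitivity d' and criterion c from raw
    detection counts. A log-linear correction (add 0.5 to each count cell)
    avoids infinite z-scores when a rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2  # c > 0: conservative bias
    return d_prime, criterion

# Symmetric performance (80% hits, 20% false alarms): sensitive but unbiased.
dp, c = sdt_measures(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

A conservative observer (fewer yeses overall) shifts both rates down, which raises c above zero while leaving d' a relatively pure measure of detectability.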
