1 |
Extraction et analyse des caractéristiques faciales : application à l'hypovigilance chez le conducteur / Extraction and analysis of facial features: application to driver hypovigilance detection. Alioua, Nawal, 28 March 2015
Studying facial features has attracted increasing attention in both the academic and industrial communities. Indeed, these features convey nonverbal information that plays a key role in human communication, and they are very useful for enabling human-machine interaction. The automatic study of facial features is therefore an important task for various applications, including robotics, human-machine interfaces, behavioral science, clinical practice, and driver state monitoring.
In this thesis, we focus on monitoring the driver's state through analysis of his or her facial features. This problem attracts universal interest because of the growing number of road accidents, a large share of which is caused by a deterioration in the driver's vigilance, known as hypovigilance. Three hypovigilance states can be distinguished. The first and most critical is drowsiness, which manifests as an inability to stay awake and is characterized by microsleep intervals of 2 to 6 seconds. The second is fatigue, defined as the increasing difficulty of carrying a task through to completion and characterized by a rising number of yawns. The third is inattention, which occurs when attention is diverted from the driving activity and is characterized by holding the head in a non-frontal pose. The aim of this thesis is to propose approaches based on facial features for detecting driver hypovigilance. The first approach detects drowsiness by identifying microsleep intervals through eye-state analysis. The second identifies fatigue by detecting yawns through mouth analysis. Since no public hypovigilance database is available, we acquired and annotated our own database, in which different subjects simulate hypovigilance states under real lighting conditions, to evaluate the performance of these two approaches. We then developed two new head pose estimators that detect driver inattention and determine the driver's state even when the facial features (eyes and mouth) cannot be analyzed because of non-frontal head positions. We evaluated these two estimators on the public Pointing'04 database. Finally, we acquired and annotated a database of driver head pose variations to validate our estimators in a driving environment.
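As a concrete illustration of the drowsiness criterion above, the sketch below flags closed-eye runs lasting 2-6 seconds as microsleep episodes. It is a hypothetical minimal implementation, assuming an upstream per-frame eye-state classifier already exists; it does not reproduce the thesis' actual eye-analysis method.

```python
# Minimal sketch: flagging microsleep intervals from per-frame eye states.
# Assumes a per-frame classifier labels the eyes open/closed; the 2-6 s
# window follows the definition of microsleep given above.

def detect_microsleeps(eye_closed, fps, min_s=2.0, max_s=6.0):
    """Return (start, end) frame-index pairs of microsleep episodes."""
    episodes, run_start = [], None
    for i, closed in enumerate(list(eye_closed) + [False]):  # sentinel ends last run
        if closed and run_start is None:
            run_start = i                       # a closure run begins
        elif not closed and run_start is not None:
            duration = (i - run_start) / fps    # run length in seconds
            if min_s <= duration <= max_s:
                episodes.append((run_start, i))
            run_start = None
    return episodes

# Example: at 30 fps, a 75-frame closure (2.5 s) counts as a microsleep.
states = [False] * 30 + [True] * 75 + [False] * 30
print(detect_microsleeps(states, fps=30))       # -> [(30, 105)]
```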
|
2 |
Deriving Motor Unit-based Control Signals for Multi-Degree-of-Freedom Neural Interfaces. Twardowski, Michael D., 14 May 2020
Beginning with the introduction of electrically powered prostheses more than 65 years ago, surface electromyographic (sEMG) signals recorded from residual muscles in amputated limbs have served as the primary source of upper-limb myoelectric prosthetic control. The majority of these devices use one or more neural interfaces to translate the sEMG signal amplitude into voltage control signals that drive the mechanical components of a prosthesis. In so doing, users are able to directly control the speed and direction of prosthetic actuation by varying the level of muscle activation and the associated sEMG signal amplitude. Consequently, in spite of decades of development, myoelectric prostheses are prone to highly variable functional control, leading to a relatively high incidence of prosthetic abandonment, reported among 23-35% of upper-limb amputees. Efforts to improve prosthetic control in recent years have led to the development and commercialization of neural interfaces that employ pattern recognition of sEMG signals recorded from multiple locations on a residual limb to map different intended movements. But while these advanced algorithms have made considerable strides, there remains a substantial need to improve the reliability of pattern recognition control solutions amid the variability of muscle co-activation intensities. In an effort to enrich the control signals that form the basis for myoelectric control, I have been developing advanced algorithms as part of a next-generation neural interface research and development effort, referred to as Motor Unit Drive (MU Drive), that non-invasively extracts the firings of individual motor units (MUs) from sEMG signals in real time and translates the firings into smooth, biomechanically informed control signals. These measurements of motor unit firing rates and recruitment naturally provide high levels of motor control information from the peripheral nervous system for intact limbs and therefore hold great promise for restoring function for amputees. The goal of my doctoral work was to develop advanced algorithms for the MU Drive neural interface system that leverage MU features to provide intuitive control of multiple degrees of freedom. To achieve this goal, I targeted three research aims: 1) derive real-time MU-based control signals from motor unit firings; 2) evaluate the feasibility of motor unit action potential (MUAP)-based discrimination of muscle intent; and 3) design and evaluate MUAP-based classification of arm and hand motions.
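For context on the conventional amplitude-based control the abstract describes, the sketch below shows a classic proportional scheme: a moving RMS envelope of the sEMG signal, normalized to a calibrated maximum, scales actuation speed. It is a hypothetical illustration of standard amplitude control, not the MU Drive decomposition algorithm, and all parameters are placeholders.

```python
import numpy as np

def rms_envelope(semg, win=200):
    """Moving RMS of the sEMG signal over a `win`-sample window."""
    sq = np.asarray(semg, dtype=float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(sq, kernel, mode="same"))

def proportional_command(semg, mvc_amplitude, deadband=0.05):
    """Map normalized sEMG amplitude to a 0..1 actuation-speed command."""
    level = rms_envelope(semg) / mvc_amplitude   # normalize to calibration max
    level = np.clip(level, 0.0, 1.0)
    level[level < deadband] = 0.0                # suppress resting noise
    return level

# Example: a burst of muscle activity yields a graded speed command.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.2, 2000)
signal[800:1200] += rng.normal(0, 1.0, 400)      # simulated contraction
cmd = proportional_command(signal, mvc_amplitude=0.8)
print(round(cmd.max(), 2))
```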
|
3 |
Informations vibrotactiles pour l'aide à la navigation et la gestion des contacts avec l'environnement / Vibrotactile information for approach regulation and making contacts. Mandil, Cynthia, 26 October 2017
This doctoral research investigates the transmission of vibrotactile information as a navigation aid, in particular for improving the regulation of approach phases and the management of contacts with the environment. One of the main challenges in this research area is understanding how to convey information, sometimes complex, through a sensory modality that is not naturally used to process it. This work therefore aimed to demonstrate that tactile information can substitute for vision and to specify the characteristics of the vibrotactile stimulation that influence access to approach information. The studies supporting this thesis were carried out with an experimental setup coupling a virtual environment with a tactile display consisting of several vibrotactile actuators placed on the surface of the skin. The first two experimental chapters were based on time-to-contact (TTC) estimation tasks, a paradigm classically used to study the visual processes involved in regulating approach situations.
The first experimental chapter (experiments 1, 2, and 3) was a preliminary study which showed, notably, that TTC judgments were more precise when the tactile display conveyed information about the approach distance rather than about angular size. The results of the second experimental chapter (experiments 4 and 5) showed that the tactile modality allows TTC to be estimated, although less precisely than the visual modality. However, when vision is occluded, transmitting tactile information during the occlusion period improves judgment accuracy. The final experimental chapter (experiments 6 and 7) examined the influence of vibrotactile information on the regulation of a ground approach in a simulated helicopter landing. Both experiments showed that tactile information produced a significant reduction in ground-contact velocity when the visual environment was degraded, and that this reduction depended on the informational variable conveyed by the display. Finally, the results of this research are discussed in light of fundamental theories of perception and action. Overall, they show how approach information can be perceived through the tactile modality, which can thus substitute for vision when vision is degraded.
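For reference, the distance-based and optical informational variables contrasted in these experiments correspond to two standard formulations of time-to-contact. The following is a sketch of the usual definitions under a constant-closing-velocity assumption; the notation is chosen here for illustration.

```latex
% Time-to-contact under constant closing velocity v toward a surface at
% distance d; \theta is the target's optical angular size and
% \dot{\theta} its rate of expansion (the classic "tau" variable).
\[
  \mathrm{TTC} = \frac{d}{v}
  \quad \text{(distance-based)}, \qquad
  \mathrm{TTC} \approx \tau = \frac{\theta}{\dot{\theta}}
  \quad \text{(optical)}.
\]
```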
|
4 |
The Effects of Automated Vehicle System-Certainty on Drivers' Trust and Behavior. Micah Wilson Wilson George (19159099), 18 July 2024
As automated vehicle (AV) systems become increasingly intelligent, understanding the complex interplay between drivers' trust in these systems and their resulting behavior is paramount for the successful integration of autonomous technologies into the transportation landscape. Currently, the effects of displaying AV system-certainty information, concerning the system's ability to navigate around obstacles, on drivers' trust, decision-making, and behavior are underexplored. This thesis seeks to address this research gap and evaluate a set of dynamic and continuous human-machine interfaces (HMIs) that present self-assessed system-certainty information to drivers of AVs. A simulated driving study was conducted wherein participants were exposed to four different linear and curvilinear AV system-certainty patterns as their AV approached a construction zone. The certainty patterns represented the vehicle's confidence in safely avoiding the construction. Using this information, drivers needed to decide whether or not to take over from the vehicle. The AV's reliability and system-certainty were not directly proportional to one another. During the study, drivers' trust, workload, takeover decisions and performance, eye-movement behavior, and heart-rate measures were captured to build a comprehensive understanding of the factors influencing drivers' interactions with automated vehicles. Overall, participants took over in 41.3% of the drives. Results suggest that the communication of different system-certainty trends had a significant effect on drivers' takeover response times and gaze behavior, but did not affect their trust in the system or their workload. Ultimately, the results of this work can be used to inform the design of in-vehicle interfaces in future autonomous vehicles, aiming to enhance safety and driver acceptance. By elucidating the intricate relationship between drivers' trust and behavior, this study provides valuable insights for both researchers and developers, contributing to the ongoing discourse on the human factors associated with the integration of autonomous technologies into the transportation ecosystem.
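To picture the stimulus manipulation, the sketch below generates hypothetical linear and curvilinear certainty-over-time profiles of the kind the display conditions varied. The shapes, ranges, and decay constant are illustrative placeholders, not the profiles used in the study.

```python
import math

# Hypothetical system-certainty profiles over an approach to an obstacle:
# certainty starts high and decays toward the construction zone, either
# linearly or along a curve. All values below are illustrative only.

def linear_certainty(t, t_end=10.0, start=0.95, end=0.40):
    """Certainty falls at a constant rate until t_end."""
    return start + (end - start) * min(t / t_end, 1.0)

def curvilinear_certainty(t, t_end=10.0, start=0.95, end=0.40, k=3.0):
    """Certainty drops quickly at first, then plateaus near `end`."""
    frac = min(t / t_end, 1.0)
    return end + (start - end) * math.exp(-k * frac)

for t in (0, 5, 10):
    print(t, round(linear_certainty(t), 2), round(curvilinear_certainty(t), 2))
```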
|
5 |
Assessing Alternate Approaches for Conveying Automated Vehicle Intentions. Basantis, Alexis Rae, 30 October 2019
Objectives: Research suggests that the general public's lack of faith in highly automated vehicles (HAVs) stems from a lack of system transparency while in motion (e.g., the user not being informed of the vehicle's roadway perception or its anticipated responses in certain situations). This problem is particularly prevalent in public transit or ridesharing applications, where HAVs are expected to debut and where the user has minimal training on, and control over, the vehicle. To improve user trust and perceptions of comfort and safety, this study developed more detailed and tailored human-machine interfaces (HMIs) aimed at relaying the automated vehicle's intended actions (i.e., "intentions") and its perception of the driving environment to the user.
Methods: This project developed HMI systems, with a focus on visual and auditory displays, and implemented them in an HAV developed at the Virginia Tech Transportation Institute (VTTI). Volunteer participants were invited to the Smart Roads at VTTI to experience these systems in real-world driving scenarios, especially ones typically found in rideshare or public transit operations. Participant responses and opinions about the HMIs and their perceived levels of comfort, safety, trust, and situational awareness were captured via paper-based surveys administered during experimentation.
Results: A considerable link was found between HMI modality and users' reported levels of comfort, safety, trust, and situational awareness during experimentation. In addition, several key behavioral factors made users more or less likely to feel comfortable in the HAV.
Conclusions: Moving forward, it will be necessary for HAVs to provide ample feedback to users in an effort to increase system transparency and understanding. Feedback should consistently and accurately represent the driving landscape and clearly communicate vehicle states to users. / Master of Science / One of the greatest barriers to the entry of highly automated vehicles (HAVs) into the market is the lack of user trust in the vehicle. Research has shown that this lack of faith in the system primarily stems from a lack of system transparency while in motion (e.g., the user not being told how the car will react in a certain situation) and from not having an effective way to control the vehicle in the event of a system failure. This problem is particularly prevalent in public transit or ridesharing applications, where HAVs are expected to first appear and where the user has less training on, and control over, the vehicle. To improve user trust and perceptions of comfort and safety, this study developed human-machine interface (HMI) systems, focusing on visual and auditory displays, to better relay automated vehicle "intentions" and the perceived driving environment to the user. These HMI systems were then implemented in an HAV developed at the Virginia Tech Transportation Institute (VTTI) and tested with volunteer participants on the Smart Roads.
|
6 |
A Review of Anthropomorphic Robotic Hand Technology and Data Glove Based Control. Powell, Stephen Arthur, 27 September 2016
For over 30 years, the development and control of anthropomorphic robotic hands has been a highly popular sub-discipline of robotics research. Because the human hand is an extremely sophisticated system, in both its mechanical and sensory abilities, engineers have been fascinated with replicating these abilities in artificial systems. The applications of robotic hands typically fall under the categories of standalone testbed platforms (mostly for research on manipulation), prosthetics, and robotic end effectors for larger systems. The teleoperation of robotic hands is another application with significant potential, where users control a manipulator in real time to accomplish diverse tasks. In controlling a device that seeks to emulate the function of the human hand, it is natural to choose a human-machine interface (HMI) that allows for the most intuitive control. Data gloves are the ideal HMI for this need, allowing a robotic hand to accurately mimic the human operator's natural movements. In this paper we present a combined review of the critical design aspects of data gloves and robotic hands. In the literature, many of the proposed designs in both of these areas, robotic hands and data gloves, are cost-prohibitive, which limits their implementation for their intended tasks. After reviewing the literature, new designs of robotic hand and data glove technology are also presented, introducing low-cost solutions that can serve as accessible platforms for researchers, students, and engineers to further the development of teleoperative applications. / Master of Science
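To make the glove-to-hand mapping concrete, the sketch below shows one common low-cost approach: normalizing flex-sensor ADC readings against a per-user calibration pose and mapping them linearly to finger-joint servo angles. The sensor ranges and servo limits are hypothetical placeholders, not values from the designs reviewed here.

```python
# Minimal sketch: mapping data-glove flex sensors to robotic-hand joint
# angles. ADC ranges and servo limits are hypothetical calibration values.

CALIBRATION = {            # per-finger (adc_open, adc_closed) from calibration
    "thumb":  (210, 860),
    "index":  (180, 900),
    "middle": (190, 880),
}
SERVO_RANGE_DEG = (0.0, 90.0)   # fully open .. fully flexed

def glove_to_servo(adc_readings):
    """Linearly map raw flex-sensor readings to servo angles in degrees."""
    lo_deg, hi_deg = SERVO_RANGE_DEG
    angles = {}
    for finger, raw in adc_readings.items():
        adc_open, adc_closed = CALIBRATION[finger]
        t = (raw - adc_open) / (adc_closed - adc_open)   # 0 = open, 1 = closed
        t = min(max(t, 0.0), 1.0)                        # clamp sensor noise
        angles[finger] = lo_deg + t * (hi_deg - lo_deg)
    return angles

print(glove_to_servo({"thumb": 540, "index": 180, "middle": 900}))
```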
|
7 |
Designing HMI and SCADA Laboratory Work for Engineering Students. Barteck, Julianne, 01 May 2025
Human-machine interfaces are, essentially, the user interfaces used to monitor an industrial facility's machines and systems, which makes them a common industry tool in engineering. At East Tennessee State University, while human-machine interfaces are briefly covered in lecture and graduate students have set up some basic hands-on work, no formal lab work on the topic exists for students. This thesis explores the literature surrounding the design of labs for a STEM or engineering-specific university class and applies the recommendations and methods within that literature to design two laboratory guides on human-machine interfaces. These labs are intended to be implemented in either (or both of) the ENTC 3350 Industrial Electronics class and the ENTC 4517/5517 Automation & Robotics class at East Tennessee State University. This work also details the collection of student feedback and the refinement of the labs according to those results, and it notes roadblocks to the creation of a lab on the topic of supervisory control and data acquisition systems. Finally, this thesis provides several recommendations for future adjustments to the class materials, future labs, and education on supervisory control and data acquisition systems.
|
8 |
Conception d’interfaces adaptatives basée sur l’ingénierie dirigée par les modèles pour le support à la coordination / Model-driven adaptive interface design for coordination support. Altenburger, Thomas, 12 December 2013
Nowadays, we live in a world of interactions, surrounded by electronic devices that tend to make those interactions more complex. Moreover, users are now mobile and evolve in ever-changing environments. For collaboration, these conditions can inhibit productivity. This thesis proposes methods for designing user interfaces able to take context into account through the use of adaptive user interfaces. The main contribution is a model-driven reference framework for the design and execution of adaptive user interfaces supporting coordination mechanisms (i.e., workflow and group awareness). The proposal has two facets: a methodological framework to assist in the design of user interfaces supporting coordination, consisting essentially of business-requirements modeling methods applied within an iterative process; and a technological framework that relies on model-driven engineering to enable the generation and execution of adaptive user interfaces, built on a widget-oriented architecture that supports group awareness in order to promote coordination.
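As a toy illustration of the context-driven adaptation idea described above, the sketch below selects a group-awareness widget variant from a small context model using declarative rules. The context attributes and widget names are hypothetical, not taken from the thesis' framework.

```python
# Toy sketch of context-driven widget adaptation: a context model is
# matched against ordered rules to pick a concrete widget variant.
# All attribute and widget names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Context:
    device: str        # e.g. "desktop", "phone"
    bandwidth: str     # e.g. "high", "low"

# Ordered adaptation rules: the first matching predicate wins.
RULES = [
    (lambda c: c.device == "phone" and c.bandwidth == "low", "TextAwarenessTicker"),
    (lambda c: c.device == "phone",                          "CompactAwarenessWidget"),
    (lambda c: True,                                         "FullAwarenessDashboard"),
]

def select_widget(ctx: Context) -> str:
    """Return the group-awareness widget variant suited to the context."""
    return next(widget for matches, widget in RULES if matches(ctx))

print(select_widget(Context(device="phone", bandwidth="low")))
```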
|
9 |
A Learning-based Control Architecture for Socially Assistive Robots Providing Cognitive Interventions. Chan, Jeanie, 05 December 2011
Due to the world’s rapidly growing elderly population, dementia is becoming increasingly prevalent. This poses considerable health, social, and economic concerns as it impacts individuals, families and healthcare systems. Current research has shown that cognitive interventions may slow the decline of or improve brain functioning in older adults. This research investigates the use of intelligent socially assistive robots to engage individuals in person-centered cognitively stimulating activities. Specifically, in this thesis, a novel learning-based control architecture is developed to enable socially assistive robots to act as social motivators during an activity. A hierarchical reinforcement learning approach is used in the architecture so that the robot can learn appropriate assistive behaviours based on activity structure and personalize an interaction based on the individual’s behaviour and user state. Experiments show that the control architecture is effective in determining the robot’s optimal assistive behaviours for a memory game interaction and a meal assistance scenario.
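The abstract's hierarchical reinforcement learning approach can be pictured, at the level of choosing an assistive behaviour for the current user state, with a standard tabular Q-learning update. The states, actions, rewards, and parameters below are hypothetical stand-ins, not the thesis' actual formulation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for choosing an assistive behaviour
# given an estimated user state. States/actions/rewards are hypothetical.

STATES  = ["engaged", "confused", "distracted"]
ACTIONS = ["encourage", "give_hint", "celebrate", "wait"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # (state, action) -> estimated value

def choose(state):
    if random.random() < EPSILON:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative step: a hint helps a confused user re-engage.
s, a = "confused", choose("confused")
update(s, a, reward=1.0, next_state="engaged")
print(a, Q[(s, a)])
```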
|