  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Model Development for Autonomous Short-Term Adaptation of Cobots' Motion Speed to Human Work Behavior in Human-Robot Collaboration Assembly Stations

Jeremy Amadeus Deniz Askin (11625070) 26 July 2022
Manufacturing flexibility and human-centered design are promising approaches to meet the demand for individualized products. Human-robot assembly cells still lack flexibility and adaptability (VDI, 2017), relying on static control architectures (Bessler et al., 2020). Autonomous adaptation to human operators over short time horizons increases the willingness to work with cobots. Moreover, monotonous, static assembly work does not accommodate the human way of working. Human-Robot Collaboration (HRC) workstations therefore require an adaptation mechanism that accommodates varying work behavior arising from the operator's mental and physical condition (Weiss et al., 2021). The thesis presents the development of a cyber-physical HRC assembly station.

Moreover, the thesis includes an experimental study investigating the influence of a cobot's speed on human work behavior. The Cyber-Physical System (CPS) integrates the experiment's findings with an event-based software architecture and a semantic knowledge representation. The work thereby focuses on demonstrating the feasibility of the CPS and the semantic model, which allow the self-adaptation of the system. Finally, the conclusion identifies the need for further research on human work behavior detection and fuzzy decision models, which could improve self-adaptation in human-centered assembly systems.
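The thesis does not publish its control code; as a loose illustration of the short-term speed adaptation it describes, the sketch below adjusts a cobot's speed override from the operator's recent assembly cycle times. All names, the averaging window, and the clamping range are assumptions for illustration, not the thesis's actual model.

```python
from collections import deque


class SpeedAdapter:
    """Adapt a cobot's speed override to the operator's observed cycle times."""

    def __init__(self, nominal_cycle_s: float, window: int = 5,
                 min_override: float = 0.3, max_override: float = 1.0):
        self.nominal = nominal_cycle_s
        self.cycles = deque(maxlen=window)   # recent human cycle durations
        self.min_override = min_override
        self.max_override = max_override

    def observe_cycle(self, duration_s: float) -> None:
        """Record the duration of the operator's latest work cycle."""
        self.cycles.append(duration_s)

    def speed_override(self) -> float:
        """Speed factor for the cobot, clamped to a safety-certified range."""
        if not self.cycles:
            return self.max_override  # no observations yet: run at nominal
        avg = sum(self.cycles) / len(self.cycles)
        # Faster operator (shorter cycles) -> higher override, and vice versa.
        raw = self.nominal / avg
        return max(self.min_override, min(self.max_override, raw))
```

An event-based architecture like the one described would feed `observe_cycle` from assembly-station events and read `speed_override` before each robot motion.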
242

Haptic-Enabled Robotic Arms to Achieve Handshakes in the Metaverse

Mohd Faisal, 26 September 2022
Humans are social by nature, and physical distancing due to COVID has converted many of our daily interactions into virtual ones. Among the negative consequences is the loss of an element essential to human well-being: physical touch. With more interactions shifting towards the digital world of the metaverse, we want to provide individuals with the means to include physical touch in their interactions. We explore the prospect of Digital Twin technology to reduce this impact on humans. We define the concept of a Robo Twin and explain its role in mediating human interactions. We also survey research related to the Digital Twin's physical representation, focusing on under-actuated robotic arms. In this thesis, we first present findings from the literature to support researchers' decisions in adopting designs and implementations of Digital Twin robotic arms, and to inform future research on current challenges and gaps. Subsequently, we design and implement two right-handed under-actuated Digital Twin robotic arms that mediate the physical interaction between two individuals, allowing them to perform a handshake while physically distanced. This experiment served as a proof of concept for our proposed idea of the Robo Twin. The findings are promising: our evaluation shows that participants are highly interested in using the system to share a handshake with loved ones when physically separated. With the Robo Twin Arm system, we also find a correlation between handshake characteristics and the gender and/or personality traits of the participants, based on the quantitative handshake data collected during the experiment. The work is thus a step towards the design and development of under-actuated Digital Twin robotic arms and ways to enhance the overall user experience with such a system.
243

Perceived Safety Aspects when Collaborating with Robots in the Manufacturing Industry : Applying an HTO Methodology

Eklund, Jonas, Hallengren, Ida January 2024
As Industry 4.0 continues to evolve, human-robot collaboration (HRC) has become more common in industry. This study explored perceived safety in HRC within manufacturing, focusing on assembly processes at Volvo. The goal was to promote perceived safety among operators by applying the Human-Technology-Organization (HTO) perspective, including Safety-I, -II, and -III. A framework was developed to illustrate the aim in relation to the theory and the approach taken in the study. The Volvo case RITA, a collaborative robot designed to assist with kitting, served as the use case. Interviews were conducted with organizational representatives and assembly line operators, complemented by a questionnaire. Since RITA was not yet operational, a video of the case was used extensively throughout the study. Operator interviews centered on gathering insights on perceived safety, drawing on the above safety perspectives. The resulting recommendations emphasized comprehensive operator training and early involvement in new development processes. Traffic rules were devised for different collaboration scenarios, and the importance of clear workspaces was underscored to maintain system efficiency. These recommendations were later validated by an organizational representative from Volvo. Lastly, the study emphasizes that while technical safety solutions are necessary, they are not sufficient without a strong safety culture that encourages openness and collaboration. By considering the technical, organizational, and human aspects of safety, this study contributes to a deeper understanding of the dynamics in HRC and lays a foundation for safe and efficient manufacturing processes.
244

Human-Robot Interaction with Pose Estimation and Dual-Arm Manipulation Using Artificial Intelligence

Ren, Hailin 16 April 2020
This dissertation focuses on applying artificial intelligence techniques to human-robot interaction, involving human pose estimation and dual-arm robotic manipulation. The motivating application behind this work is autonomous victim extraction in disaster scenarios using a conceptual design of a Semi-Autonomous Victim Extraction Robot (SAVER). SAVER is equipped with an advanced sensing system, two powerful robotic manipulators, and a head and neck stabilization system to achieve autonomous, safe, and effective victim extraction, thereby reducing the potential risk to field medical providers. This dissertation formulates the autonomous victim extraction process using a dual-arm robotic manipulation system for human-robot interaction. Following the general process of Human-Robot Interaction (HRI), which includes perception, control, and decision-making, this research applies machine learning techniques to human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation, respectively. For human pose estimation, an efficient parallel ensemble-based neural network is developed to provide real-time pose estimation on 2D RGB images. A 13-limb, 14-joint skeleton model is used in this perception network, and each ensemble is designed to detect a specific limb. The parallel structure offers two main benefits: (1) the parallel ensemble architecture and multiple Graphics Processing Units (GPUs) make distributed computation possible, and (2) each ensemble can be deployed independently, making processing more efficient when only some limbs need to be detected for a task. Precise robotic manipulator modeling simplifies controller design and improves trajectory-following performance. Traditional system modeling relies on first principles, simplifying assumptions, and prior knowledge.
Any imperfection in these can lead to an analytical model that differs from the real system. Machine learning techniques have been applied in this field to pursue faster computation and more accurate estimation. However, these techniques always need a large dataset, while obtaining data from the real system can be costly in both time and maintenance. In this research, a series of Generative Adversarial Networks (GANs) is proposed to efficiently identify the inverse kinematics and inverse dynamics of robotic manipulators. One four-Degree-of-Freedom (DOF) and one six-DOF robotic manipulator are used with datasets of different sizes to evaluate the performance of the proposed GANs. The methods can also be adapted to other systems whose datasets are too limited for general machine learning techniques. In dual-arm robotic manipulation, basic behaviors such as reaching, pushing objects, and picking objects up are learned using Reinforcement Learning. A Teacher-Student advising framework is proposed to learn a single neural network that controls dual-arm robotic manipulators using prior knowledge from controlling a single manipulator. Simulation and experimental results demonstrate the efficiency of the proposed framework compared to learning from scratch. Another concern in robotic manipulation is safety constraints. A variable-reward hierarchical reinforcement learning framework is proposed to handle sparse rewards and constrained tasks. A task of picking up two objects and placing them at target positions, while keeping them at a fixed distance within a threshold, is used to evaluate the performance of the proposed method. Comparisons to other state-of-the-art methods are also presented. Finally, all three proposed components are integrated into a single system.
Experimental evaluation with a full-size manikin was performed to validate the concept of applying artificial intelligence techniques to autonomous victim extraction using a dual-arm robotic manipulation system. / Doctor of Philosophy / Using mobile robots for autonomous victim extraction in disaster scenarios reduces the potential risk to field medical providers. This dissertation focuses on applying artificial intelligence techniques to this human-robot interaction task, involving pose estimation and dual-arm manipulation for victim extraction. The work is based on a design of a Semi-Autonomous Victim Extraction Robot (SAVER), equipped with an advanced sensing system, two powerful robotic manipulators, and a head and neck stabilization system attached to an embedded declining stretcher to achieve autonomous, safe, and effective victim extraction. The research in this dissertation therefore addresses human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation for human pose adjustment. To accurately estimate human pose in real time, the dissertation proposes a neural network that can take advantage of multiple Graphics Processing Units (GPUs). Considering the cost of data collection, the dissertation proposes novel machine learning techniques to obtain the inverse dynamic and inverse kinematic models of the robotic manipulators using limited collected data. Applying safety constraints is another requirement when robots interact with humans. The dissertation proposes reinforcement learning techniques to efficiently train a dual-arm manipulation system not only to perform basic behaviors, such as reaching, pushing objects, and picking up and placing objects, but also to respect safety constraints while performing tasks. Finally, the three components are integrated into a complete system.
Experimental validation and results are discussed at the end of the dissertation.
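The dissertation's GAN-based identification of inverse kinematics is not reproduced here; as a drastically simplified stand-in for the same data-driven idea, the sketch below generates (joint angles, pose) pairs from the forward kinematics of a planar 2-link arm and answers inverse-kinematics queries by nearest-neighbor lookup in that dataset. The link lengths, sample count, and lookup rule are illustrative assumptions.

```python
import math
import random


def fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y


def build_dataset(n=2000, seed=0):
    """Sample random joint configurations and record their end-effector poses."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        t1 = rng.uniform(-math.pi, math.pi)
        t2 = rng.uniform(-math.pi, math.pi)
        data.append(((t1, t2), fk(t1, t2)))
    return data


def ik_nearest(target, data):
    """Data-driven IK: return the joint angles whose pose is closest to the target."""
    return min(data,
               key=lambda s: (s[1][0] - target[0]) ** 2
                           + (s[1][1] - target[1]) ** 2)[0]
```

A learned model (such as the GANs in the dissertation) replaces the lookup with a generalizing function, which matters precisely when the dataset is small.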
245

AR-Supported Supervision of Conditional Autonomous Robots: Considerations for Pedicle Screw Placement in the Future

Schreiter, Josefine, Schott, Danny, Schwenderling, Lovis, Hansen, Christian, Heinrich, Florian, Joeres, Fabian 16 May 2024
Robotic assistance is applied in orthopedic interventions for pedicle screw placement (PSP). While current robots do not act autonomously, they are expected to gain higher autonomy under surgeon supervision in the mid-term. Augmented reality (AR) is a promising means to support this supervision and to enable human–robot interaction (HRI). To outline a futuristic scenario for robotic PSP, the current workflow was analyzed through a literature review and expert discussion. On this basis, a hypothetical workflow for the intervention was developed, including an analysis of the necessary information exchange between human and robot. A video see-through AR prototype was designed and implemented. A robotic arm with an orthopedic drill mock-up simulated the robotic assistance. The AR prototype included a user interface to enable HRI. The interface provides data that facilitate understanding of the robot's "intentions", e.g., patient-specific CT images, the current workflow phase, or the next planned robot motion. Two-dimensional and three-dimensional visualizations illustrated patient-specific medical data and the drilling process. The findings contribute a valuable approach to addressing future clinical needs and highlight the importance of AR support for HRI.
246

GENTLE/A : adaptive robotic assistance for upper-limb rehabilitation

Gudipati, Radhika January 2014
With growing rehabilitation needs, advanced devices that can assist therapists are in high demand. The primary requirement for such rehabilitative devices is to reduce therapist monitoring time. If the training device can autonomously adapt to the performance of the user, rehabilitation becomes partly self-manageable. The main goal of our research is therefore to investigate how to make a rehabilitation system more adaptable. The strategy we followed to augment the adaptability of the GENTLE/A robotic system was to (i) identify parameters that indicate the contributions of the user and the robot during a human-robot interaction session and (ii) use these parameters as performance indicators to adapt the system. Three main studies were conducted with healthy participants during the course of this PhD. The first study identified that the difference between the position coordinates recorded by the robot and the reference trajectory coordinates indicates the leading/lagging status of the user with respect to the robot. Using this lead-lag model we proposed two strategies to enhance the adaptability of the system. The first adaptability strategy tuned the performance time to suit the user's requirements (second study). The second tuned the task difficulty level based on the user's leading or lagging status (third study). In summary, the research undertaken during this PhD successfully enhanced the adaptability of the GENTLE/A system. The adaptability strategies evaluated were designed to suit various stages of recovery. Apart from potential use in remote assessment of patients, the work presented in this thesis is applicable in many areas of human-robot interaction research where a robot and a human are involved in physical interaction.
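The abstract describes inferring lead/lag from the gap between the recorded and reference trajectories. A minimal sketch of such an indicator, and of difficulty tuning driven by it, might look as follows; the projection rule, dead band, and level-update policy are assumptions for illustration, not GENTLE/A's published algorithm.

```python
def lead_lag(actual, reference, ref_velocity):
    """Signed lead/lag indicator: project the position error onto the direction
    of motion along the reference trajectory. Positive -> the user leads the
    robot; negative -> the user lags behind it."""
    err = [a - r for a, r in zip(actual, reference)]
    speed = sum(v * v for v in ref_velocity) ** 0.5
    if speed == 0.0:
        return 0.0  # reference is stationary: no meaningful lead/lag
    return sum(e * v for e, v in zip(err, ref_velocity)) / speed


def adapt_difficulty(level, indicator, dead_band=0.05):
    """Raise task difficulty when the user consistently leads, lower it when
    lagging, and leave it unchanged inside a small dead band."""
    if indicator > dead_band:
        return level + 1
    if indicator < -dead_band:
        return max(1, level - 1)
    return level
```

In practice the indicator would be filtered over a session before the difficulty level is changed, so that momentary fluctuations do not flip the adaptation.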
247

A differential-based parallel force/velocity actuation concept : theory and experiments

Rabindran, Dinesh, 1978- 05 February 2010
Robots are now moving from conventional confined habitats such as factory floors into human environments, where they assist and physically interact with people. The requirement for inherent mechanical safety is overarching in such human-robot interaction systems. We propose a dual actuator called the Parallel Force/Velocity Actuator (PFVA), which combines a Force Actuator (FA) (low-velocity input) and a Velocity Actuator (VA) (high-velocity input) using a differential gear train. In this arrangement, mechanical safety can be achieved by limiting the torque on the FA, making it a backdrivable input. In addition, the kinematic redundancy in the drive can be used to control output velocity while satisfying secondary operational objectives. Our research focused on three areas: (i) scalable parametric design of the PFVA, (ii) analytical modeling of the PFVA and experimental testing on a single-joint prototype, and (iii) generalized model formulation for PFVA-driven serial robot manipulators. In our analysis, the ratio of velocity ratios between the FA and the VA, called the relative scale factor, emerged as a purely geometric and dominant design parameter. Based on a dimensionless parametric design of PFVAs using power flow and load distribution between the inputs, a prototype was designed and built using commercial off-the-shelf components. Using controlled experiments, two performance-limiting phenomena in our prototype, friction and dynamic coupling between the two inputs, were identified. Two other experiments characterized the operational performance of the actuator in velocity mode and in what we call 'torque-limited' mode (i.e., when the FA input can be backdriven). Our theoretical and experimental results showed that the PFVA can be mechanically safe in both slow collisions and impacts, owing to the backdrivability of the FA.
We also show that its kinematic redundancy can be effectively utilized to mitigate low-velocity friction and backlash in geared mechanisms. The system-level implications of our actuator-level analytical and experimental work were studied using a generalized dynamic modeling framework based on kinematic influence coefficients. Based on this dynamic model, three design case studies for a PFVA-driven serial planar 3R manipulator were presented. The major contributions of this research include (i) mathematical models and physical understanding for over six fundamental design and operational parameters of the PFVA, from which approximately ten design and five operational guidelines were laid out, (ii) analytical and experimental proof of concept for the mechanical safety feature of the PFVA and the effective utilization of its kinematic redundancy, (iii) an experimental methodology to characterize the dynamic coupling between the inputs in a differential-summing mechanism, and (iv) a generalized dynamic model formulation for PFVA-driven serial robot manipulators with emphasis on the distribution of output loads between the FA and VA input sets.
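In a differential-summing drive of this kind, the output speed is a weighted sum of the two input speeds. The sketch below shows the generic speed-summing form only; the exact velocity ratios depend on the particular gear-train topology, and these function names and the illustrative use of the relative scale factor are assumptions, not the dissertation's notation.

```python
def pfva_output_speed(w_fa: float, w_va: float, r_fa: float, r_va: float) -> float:
    """Output speed of a differential-summing dual actuator: each input
    contributes through its own velocity ratio, and the two add at the output."""
    return w_fa / r_fa + w_va / r_va


def relative_scale_factor(r_fa: float, r_va: float) -> float:
    """Ratio of the two velocity ratios -- the dominant, purely geometric
    design parameter identified in the dissertation."""
    return r_va / r_fa


# If the FA input stalls or is backdriven for safety, the VA alone still
# drives the output -- the kinematic redundancy the PFVA exploits.
```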
248

Apprendre à un robot à reconnaître des objets visuels nouveaux et à les associer à des mots nouveaux : le rôle de l’interface / Teaching a robot to recognize new visual objects and to associate them with new words: the role of the interface

Rouanet, Pierre 04 April 2012
This thesis examines the role of the interface in human-robot interaction for learning. It studies how a well-conceived interface can help non-expert users guide the social learning of a robot, notably by facilitating situations of joint attention. We study how the interface can make the interaction more robust and more intuitive, and can also lead humans to provide the good learning examples that improve the performance of the system as a whole. We examine this question in the context of personal robotics, where social learning can play a key role in a robot's discovery of, and adaptation to, its immediate environment. We chose to study the role of the interface in a particular instance of social learning: the combined learning of visual objects and new words by a robot interacting with a non-expert human. This challenge is an important lever for the development of personal robotics, language acquisition by robots, and natural communication between a human and a robot. We particularly studied interaction challenges such as pointing and joint attention.

Chapter 1 describes our application context: personal robotics. Chapter 2 describes the problems linked to the development of social robots and to interactions with people. Chapter 3 presents the question of the interface in the acquisition of a robot's first words of language. The user-centered approach followed throughout this thesis is described in Chapter 4. The following chapters present the contributions of the thesis. Chapter 5 shows how interfaces based on mediator objects can guide a personal robot in a cluttered everyday environment. Chapter 6 presents a complete system combining human-robot interfaces, visual perception algorithms, and learning mechanisms, in order to study the impact of the interfaces, and more specifically of different forms of feedback about what the robot perceives, on the quality of the collected learning examples of visual objects. A large-scale user study of these interfaces, designed as a robotic game to reproduce realistic conditions of use outside the laboratory, is described in Chapter 7. Chapter 8 presents an extension of the system allowing semi-automatic collection of learning examples of visual objects. Chapter 9 studies the combined acquisition of new vocal words associated with visual objects; we show that the interface can both improve the performance of the speech recognition system and let the user directly categorize the learning examples through simple and transparent interactions. Finally, the limits and possible extensions of these contributions are discussed in Chapter 10.
249

Synchronisation et coordination interpersonnelle dans l'interaction Homme-robot / Synchrony and Interpersonal coordination in Human Robot interaction

Hasnain, Syed Khursheed 10 July 2014
As robots move closer to our social and daily lives, issues of agency and social behavior become more important. However, despite noticeable advances in Human Robot Interaction (HRI), the developed technologies have two major drawbacks: (i) HRI is highly demanding, and (ii) humans have to adapt their way of thinking to the potential and limitations of the robot. HRI thereby induces a significant cognitive load, which calls into question the acceptability of future robots. Consequently, we address the question of understanding and mastering the development of pleasant yet efficient human-robot interactions that increase the self-esteem, engagement (or pleasure), and efficacy of the human when interacting with the machine. In this race for more user-friendly HRI systems (robotic companions, intelligent objects, etc.), working on technical features (the design of appearance and superficial traits of behavior) can contribute partial solutions for punctual or short-term interactions. For instance, a major focus of interest has been the expressiveness and appearance of robots and avatars. Yet these approaches have neglected the importance of understanding the dynamics of interactions.

In our opinion, intuitive communication refers to the ability of the robot to detect the crucial signals of the interaction and use them to adapt its dynamics to the other's behavior. This central issue depends heavily on the robot's capability to sense the human world and interact with it in a way that emulates human-human interactions. In early communication among humans, synchrony was found to be a fundamental mechanism relying on very low-level sensory-motor networks, inducing the synchronization of inter-individual neural populations from sensory flows (vision, audition, or touch). Synchrony is caused by the interaction but also sustains the interaction itself in a circular way, as promoted by the enaction approach. Consequently, to become a partner in a working-together scenario, the machine can obtain a minimal level of autonomy and adaptation by predicting the rhythmic structure of the interaction, building reinforcement signals to adapt the robot's behavior so that it can maintain the interest of the human in longer-term interactions. More precisely, aiming for more "intuitive" and "natural" HRI, we took advantage of recent discoveries in low-level human interactions and studied unintentional synchronizations during rhythmic human-robot interactions. We argue that exploiting the natural stability and adaptability of unintentional synchronization and rhythmic activities in human-human interactions can solve several of the acceptability problems of HRI and allows rethinking the current approaches to designing such interactions.
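Predicting the rhythmic structure of an interaction, as described above, starts with estimating how two periodic movement signals align in time. A minimal sketch: find the delay between two signals from the peak of their cross-correlation. The signal model and bounded lag search are illustrative assumptions, not the thesis's actual pipeline.

```python
import math


def best_lag(a, b, max_lag):
    """Estimated delay of signal b relative to signal a (in samples): the lag
    maximising the cross-correlation sum(a[i] * b[i + lag])."""
    def xcorr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a)) if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=xcorr)


# Two rhythmic movement signals, the second delayed by 5 samples.
n, period, delay = 200, 40, 5
a = [math.sin(2 * math.pi * t / period) for t in range(n)]
b = [math.sin(2 * math.pi * (t - delay) / period) for t in range(n)]
```

A lag estimate that stays small and stable over time would indicate unintentional synchronization and could serve as the reinforcement signal the thesis describes.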
250

Analyse acoustique de la voix émotionnelle de locuteurs lors d’une interaction humain-robot / Acoustic analysis of speakers emotional voices during a human-robot interaction

Tahon, Marie 15 November 2012
This thesis studies the emotional voice in a human-robot interaction context. In a realistic interaction, we define at least four major types of variability: the environment (room, microphone); the speaker, with his or her physical characteristics (gender, age, voice type) and personality; the speaker's emotional states; and finally the type of interaction (game, emergency, or everyday-life situation). From audio signals collected under different conditions, we sought, using acoustic features, to jointly characterize a speaker and his or her emotional state while taking these variabilities into account. Determining which features are essential and which to avoid is a complex challenge, since it requires working across a large number of variabilities and therefore having rich and varied corpora available. The main results concern both the collection and annotation of realistic emotional corpora with varied speakers (children, adults, elderly people) in several environments, and the robustness of acoustic features across the four variabilities. Two interesting results follow from this acoustic analysis: the audio characterization of a corpus, and the establishment of a "black list" of highly variable features. Emotions are only part of the paralinguistic cues carried by the audio signal; personality and stress in the voice were also studied. We also implemented a module for automatic emotion recognition and speaker characterization, which was tested during realistic human-robot interactions. An ethical reflection on this work was also conducted.
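A "black list" of unstable features, as described above, can be sketched by measuring how much each acoustic feature varies across recording conditions. The coefficient-of-variation criterion and threshold below are illustrative assumptions, not the thesis's actual procedure.

```python
from statistics import mean, pstdev


def blacklist_features(measurements, threshold=0.5):
    """Flag acoustic features whose value varies too much across recording
    conditions (coefficient of variation above the threshold) as unreliable.

    measurements maps a feature name to its values measured under the
    different conditions (rooms, microphones, speakers, ...)."""
    flagged = []
    for name, values in measurements.items():
        m = mean(values)
        # Coefficient of variation: spread relative to the mean level.
        cv = pstdev(values) / abs(m) if m else float("inf")
        if cv > threshold:
            flagged.append(name)
    return flagged
```

Features surviving the filter are the ones robust enough to feed an emotion recognition module that must operate across rooms, microphones, and speakers.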
