261

Topic change in robot-moderated group discussions : Investigating machine learning approaches for topic change in robot-moderated discussions using non-verbal features

Hadjiantonis, Georgios January 2024
Moderating group discussions among humans can often be challenging and requires certain skills, particularly in deciding when to ask participants to elaborate or when to change the current topic of the discussion. Recent research on human-robot interaction in groups has demonstrated the positive impact of robot behavior on the quality and effectiveness of the interaction, and the ability of robots to shape group dynamics and promote social behavior. In light of this, there is potential to use social robots as discussion moderators to facilitate engaging and productive discussions among humans. Previous work on topic management in conversational agents was predominantly based on human engagement and topic personalization, with the agent having an active, central role in the conversation. This thesis focuses exclusively on the moderation of group discussions; instead of moderating the topic based on estimated human engagement, it builds on previous research on non-verbal cues related to discussion topic structure and turn-taking to determine, in a content-free manner, whether participants intend to continue discussing the current topic. The thesis investigates the suitability of machine learning models, and the contribution of different audiovisual non-verbal features, for predicting appropriate topic changes. For this purpose, we utilized pre-recorded interactions between a robot moderator and human participants, which we annotated and from which we extracted acoustic and body-language-related features. We provide an analysis of the performance of sequential and non-sequential machine learning approaches using different sets of features, as well as a comparison with rule-based heuristics. The results indicate promising performance in distinguishing cases in which a topic change was inappropriate from cases in which one could or should occur, outperforming rule-based approaches and demonstrating the feasibility of using machine learning models for topic moderation. Regarding the type of model, the results suggest no distinct advantage of sequential over non-sequential approaches, indicating the effectiveness of simpler non-sequential models. Acoustic features exhibited comparable and, in some cases, improved overall performance and robustness compared to using only body-language-related features or a combination of both types. In summary, this thesis provides a foundation for future research on robot-mediated topic moderation in groups using non-verbal cues, presenting opportunities to further improve social robots with topic moderation capabilities.
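The classification setup described above lends itself to standard non-sequential models. As a rough illustration only (a minimal sketch with synthetic data and hypothetical feature names, not the thesis's actual pipeline):

```python
# Sketch: a non-sequential classifier over non-verbal features, with
# hypothetical feature names and synthetic data standing in for the
# thesis's annotated corpus of robot-moderated discussions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-segment features: e.g. pitch variance, speech rate,
# pause duration (acoustic); gaze aversion, posture shift (body language).
n_samples, n_features = 500, 5
X = rng.normal(size=(n_samples, n_features))
# Hypothetical labels: 0 = change inappropriate, 1 = could change, 2 = should change.
y = rng.integers(0, 3, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```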
262

Design and Modeling of Variable Stiffness Mechanisms for Collaborative Robots and Flexible Grasping

Jiaming Fu (18437502) 27 April 2024
<p dir="ltr">To ensure safety, traditional industrial robots must operate within cages to separate them from human workers. This requirement has led to the rapid development of collaborative robots (cobots) designed to work closely to humans. However, existing cobots often prioritize <a href="" target="_blank">performance </a>aspects, such as precision, speed, and payload capacity, or prioritize safety, leading to a challenging balance between them. To address this issue, this dissertation introduces innovative concepts and methodologies for variable stiffness mechanisms. These mechanisms are applied to create easily fabricated cobot components to allow for controllable trade-offs between safety and performance in human-robot collaboration intrinsically. Additionally, the end-effectors developed based on these mechanisms enable the flexible and adaptive gripping of objects, enhancing the utility and efficiency of cobots in various applications.</p><p dir="ltr">This article-based dissertation comprises five peer-reviewed articles. The first essay introduces a reconfigurable variable stiffness parallel-guided beam (VSPB), whose stiffness can be adjusted discretely. An accurate stiffness model is also established, capable of leveraging a simple and reliable mechanical structure to achieve broad stiffness variation. The second essay discusses several discrete variable stiffness actuators (DVSAs) suitable for robotic joints. These DVSAs offer high stiffness ratios, rapid shifting speeds, low energy consumption, and compact structures compared to most existing variable stiffness actuators. The third essay introduces a discrete variable stiffness link (DVSL), applied to the robotic arm of a collaborative robot. Comprising three serially connected VSPBs, it offers eight different stiffness modes to accommodate diverse application scenarios, representing the first DVSL in the world. The fourth essay presents a variable stiffness gripper (VSG) with two fingers, each capable of continuous stiffness adjustment. The VSG is a low-cost, customizable universal robotic hand capable of successfully grasping objects of different types, shapes, weights, fragility, and hardness. The fifth essay introduces another robotic hand, the world's first discrete variable stiffness gripper (DVSG). It features four different stiffness modes for discrete stiffness adjustment in various gripper positions by on or off the ribs. Therefore, unlike the VSG, the DVSG focuses more on adaptability to object shapes during grasping.</p><p dir="ltr">These research achievements have the potential to facilitate the construction and popularize of next-generation collaborative robots, thereby enhancing productivity in industry and possibly leading to the integration of personal robotic assistants into countless households.</p>
263

Affective Workload Allocation System For Multi-human Multi-robot Teams

Wonse Jo (13119627) 17 May 2024
Human multi-robot systems constitute a relatively new area of research that focuses on the interaction and collaboration between humans and multiple robots. Well-designed systems can enable a team of humans and robots to work together effectively on complex and sophisticated tasks such as exploration, monitoring, and search and rescue operations. This dissertation introduces an affective workload allocation system capable of adaptively allocating workload in real-time while considering the conditions and work performance of human operators in multi-human multi-robot teams. The proposed system is composed of three parts, illustrated here with a surveillance scenario involving multiple human operators and a multi-robot system. The first part is a framework for an adaptive multi-human multi-robot system that allows real-time measurement and communication between heterogeneous sensors and multi-robot systems. The second part is an algorithm for real-time monitoring of humans' affective states, using machine learning techniques to estimate the affective state from multimodal data consisting of physiological and behavioral signals. The third part is a deep reinforcement learning-based workload allocation algorithm.

For the first part, we developed a robot operating system (ROS)-based affective monitoring framework that enables communication among multiple wearable biosensors, behavioral monitoring devices, and multi-robot systems, using the real-time communication features of ROS. We validated the sub-interfaces of the framework by connecting them to a robot simulation, and we used the framework to create a dataset of visual and physiological data categorized by cognitive load level. The targeted cognitive load was induced by a closed-circuit television (CCTV) monitoring task in the surveillance scenario with multi-robot systems. For the second part, we developed a deep learning-based affective prediction algorithm that estimates cognitive states from the physiological and behavioral data captured by the wearable biosensors and behavior-monitoring devices. For the third part, we developed a deep reinforcement learning-based workload allocation algorithm that allocates optimal workloads based on a human operator's performance. The algorithm takes an operator's cognitive load, measured objectively and subjectively, as input and considers the operator's task performance model, which we developed from the empirical findings of extensive user experiments.

We validated the proposed system through within-subjects experiments on a generalized surveillance scenario involving multiple humans and multiple robots in a team. The multi-human multi-robot surveillance environment included the affective monitoring framework and the affective prediction algorithm to read sensor data and predict human cognitive load in real-time, respectively. We investigated optimal methods for affective workload allocation by comparing the proposed method with other allocation strategies used in the user experiments, demonstrating the effectiveness and performance of the proposed system. Moreover, we found that both subjective and objective measurement of an operator's cognitive load, and a process for seeking consent before workload transitions, must be included in the workload allocation system to improve the team performance of multi-human multi-robot teams.
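As a rough illustration of the allocation step, here is a hypothetical threshold-based policy of the kind such a system might be compared against (not the dissertation's learned policy); it folds in the consent step the findings recommend:

```python
# Sketch (hypothetical, not the dissertation's implementation): reallocate
# monitoring tasks among operators when predicted cognitive load is high,
# asking for consent before each transition as the study recommends.
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    predicted_load: float          # e.g. from an affective prediction model, in [0, 1]
    tasks: list = field(default_factory=list)

def reallocate(operators, overload=0.8, consent=lambda op: True):
    """Move one task from each overloaded operator to the least-loaded one."""
    ops = sorted(operators, key=lambda o: o.predicted_load)
    least_loaded = ops[0]
    for op in ops[1:]:
        if op.predicted_load >= overload and op.tasks and consent(op):
            least_loaded.tasks.append(op.tasks.pop())

team = [Operator("A", 0.9, ["cam3", "cam7"]), Operator("B", 0.3, ["cam1"])]
reallocate(team)
print([(o.name, o.tasks) for o in team])  # cam7 moves from A to B
```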
264

Who Knows Best? Self- versus Friend Robot Customisation with ChatGPT : A study investigating self- and friend-customisation of socially assistive robots acting as a health coach.

Göransson, Marcus January 2024
When using socially assistive robots (SARs), it is important that their personality is personalised to suit the user. This work investigated how the customisation of the personality of a SAR health coach is perceived when done, via ChatGPT, by users themselves or by their friends. The research question in this study is therefore: How is personalised dialogue for a social robot perceived when generated via ChatGPT by users and by their friends? The study uses a mixed-methods approach in which participants tested both their own and their friend's personalised version, and the qualitative data was analysed using thematic analysis. Sixteen participants were recruited. The results showed that it does not matter who customises the SAR: neither version was more persuasive than the other, and when customising the personality, participants drew on what they or their friend preferred. However, it is important to remember that the individual's preference matters.
265

Human Factors Involved in Explainability of Autonomous Driving : Master's Thesis

Arisoni, Abriansyah January 2023
Autonomous cars (ACs) have become more common in recent years. Despite the rapid development of the driving capabilities of ACs, researchers still need to improve the overall experience of AC passengers and boost their willingness to adopt the technology. When riding in an AC, passengers need good situation awareness in order to feel comfortable and to place higher trust in the system. One option for improving situation awareness is to give passengers an explanation of the situation. This study investigates how the situational risk of specific driving scenarios, and the availability of visual environment information to passengers, affect the type of explanation AC passengers need. The study was conducted through a series of scenario tests presented to online study participants and focused on human interaction with level 4 and 5 ACs. Its primary goal is to further the understanding of human-AC interaction and thus improve the human experience of riding in an AC. The results show that the availability of visual information affects the type of explanation passengers need. When no visual information is available, passengers are more satisfied with explanations of the cause of the AC's actions (causal explanations). When visual information is available, passengers are more satisfied with explanations of the intentions behind the AC's actions (intentional explanations). The results also show that, although no significant differences in trust were found between the groups, participants showed slightly higher trust in an AC that provided causal explanations in situations without visual information available. This study contributes to a better understanding of the type of explanation AC passengers need at various degrees of situational risk and visual information availability. By leveraging this, we can create a better experience for AC passengers and eventually boost the adoption of ACs on the road.
266

A study on Cobot investment in the manufacturing industry

Audo, Sandra January 2019
Collaborative robots are of growing interest to companies in the manufacturing industries, but they are still quite new in today's market. One issue is that no established implementation process for collaborative robots exists today, and no guide has been defined for the skills and actors required. The aim of this project was to examine what an implementation process for collaborative robots in manufacturing companies could look like, focusing on charting the steps of the integration process of a collaborative robot and identifying the actors and skills needed for successful cobot integration. The thesis had the following research questions: Research Question 1 – How is an integration process for implementing a cobot represented in manufacturing companies? Research Question 2 – What particular skills and actors are required when implementing a cobot in manufacturing companies? To answer the research questions, the author conducted several interviews with different companies. The interview questions were constructed mainly to answer the RQs, but also to build an understanding of what a cobot is, what it requires, and how it compares to a traditional industrial robot. The thesis resulted in an implementation process with several steps for introducing a cobot, together with an account of the skills and actors needed. To separate these aspects, the respondents were categorized into three roles: developer, integrator, and user. All three roles were vital, providing an understanding from different perspectives. Keywords: Collaborative robot, cobot, Human-robot interaction, Human-Robot Collaboration, Development strategies, Automation, Industry 4.0.
267

Can You Read My Mind? : A Participatory Design Study of How a Humanoid Robot Can Communicate Its Intent and Awareness

Thunberg, Sofia January 2019
Communication between humans and interactive robots will benefit if people have a clear mental model of the robots' intent and awareness. The aim of this thesis was to investigate how human-robot interaction is affected by manipulating social cues on the robot. The research questions were: How do social cues affect mental models of the Pepper robot, and how can a participatory design method be used to investigate how the Pepper robot could communicate intent and awareness? The hypothesis for the second question was that nonverbal cues would be preferred over verbal cues. An existing standard platform, SoftBank's Pepper, was used, together with state-of-the-art tasks from the RoboCup@Home challenge. The rule book and observations from the 2018 competition were thematically coded, and the themes yielded eight scenarios. A participatory design method called PICTIVE was used in a design study in which five student participants went through three phases (label, sketch, and interview) to create a design for how the robot should communicate intent and awareness. PICTIVE proved a suitable way to elicit many design ideas, although not all scenarios were optimal for the task. The design study confirmed the use of mediating physical attributes to alter the mental model of a humanoid robot in order to reach common ground. It did not confirm the hypothesis that nonverbal cues would be preferred over verbal cues, though it did show that verbal cues alone would not be enough. This, however, needs to be further tested in live interactions.
268

Mobile robot architecture based on navigation function with human interaction

Grassi Júnior, Valdir 19 May 2006
There are some applications in mobile robotics that require human user interaction in addition to the autonomous navigation control of the robot. In these applications, under a semi-autonomous control mode, the human user can locally modify the autonomously pre-planned robot trajectory by sending continuous commands to the robot. In this case, independently of the user's commands, the intelligent control system must continuously avoid collisions, modifying the user's commands if necessary. This approach creates a safe navigation system that can be used in robotic wheelchairs and manned robotic vehicles, where human safety must be guaranteed. A control system with these characteristics should be based on a suitable mobile robot architecture. This architecture must integrate the human user's commands with the autonomous control layer of the system, which is responsible for avoiding static and dynamic obstacles and for driving the robot to its navigation goal. In this work we propose a hybrid (deliberative/reactive) mobile robot architecture with human interaction. The architecture was developed mainly for navigation tasks and allows the robot to be operated at different levels of autonomy: the user can share control of the robot with the system while the system ensures the safety of the user and the robot. In this architecture, a navigation function is used to represent the robot's navigation plan. We propose a method for combining the deliberative behavior responsible for executing the navigation plan with reactive behaviors defined for navigation and with the continuous inputs of the human user. The intelligent control system defined by the proposed architecture was implemented in a robotic wheelchair, and we present experimental results of the chair operating in different autonomy modes.
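The combination the abstract describes can be pictured with a simple potential-field sketch, assuming point dynamics and a single obstacle (not the thesis's actual controller; a navigation function proper additionally guarantees freedom from local minima, which this toy potential does not):

```python
# Sketch of shared control over a potential field (illustrative only). The
# deliberative command descends the potential toward the goal; the user's
# command is blended in; the result is scaled to zero at the obstacle boundary.
import numpy as np

goal = np.array([5.0, 5.0])
obstacle, obstacle_radius = np.array([2.5, 2.5]), 1.0

def grad_phi(p):
    """Gradient of an attractive (0.5*||p-goal||^2) plus repulsive (1/d) potential."""
    d = np.linalg.norm(p - obstacle)
    return (p - goal) - (p - obstacle) / d**3

def shared_control(p, u_user, alpha=0.6):
    u_plan = -grad_phi(p)                        # deliberative: follow the plan
    u = alpha * u_plan + (1.0 - alpha) * u_user  # blend in the user's command
    clearance = np.linalg.norm(p - obstacle) - obstacle_radius
    return u * np.clip(clearance, 0.0, 1.0)      # reactive: veto unsafe motion

p, u_user = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # user pushes right
for _ in range(50):
    p = p + 0.1 * shared_control(p, u_user)
print(np.round(p, 2))  # approaches the goal, biased by the user, avoiding the obstacle
```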
270

Simultaneous learning of a new task and the interpretation of a human's social signals in robotics / Learning from unlabeled interaction frames

Grizou, Jonathan 24 October 2014
This thesis is concerned with a logical problem whose theoretical and practical stakes are manifold. Put simply, it can be presented as follows: imagine you are in a maze and know every route leading to each of its exit doors. Behind one of these doors lies a treasure, but you are allowed to open only one door. An old man living in the maze knows the right exit and offers to help you identify it by indicating the direction to take at each intersection. Unfortunately, this man does not speak your language, and the words he uses for "right" and "left" are unknown to you. Is it possible to find the treasure while also working out the association between the old man's words and their meanings? Although seemingly abstract, this problem relates to concrete questions in human-machine interaction. Replace the old man with a user who wishes to guide a robot toward a specific exit of the maze. The robot does not know in advance which exit is the right one, but it knows where each door is and how to reach it. Now suppose the robot does not understand the human's language a priori; indeed, it is very difficult to build a robot able to understand perfectly every language, accent, and preference of every person. The robot must then learn the association between the user's words and their meanings while carrying out the task the human indicates (i.e., finding the right door). Another way to describe this problem is in terms of self-calibration: solving it amounts to creating interfaces that require no calibration phase, since the machine could adapt, automatically and during the interaction, to different people who do not speak the same language or do not use the same words to say the same thing. It also means that other interaction modalities (for example gestures, facial expressions, or brain waves) could easily be considered. In this thesis, we present a solution to this problem. We apply our algorithms to two typical examples of human-robot and brain-computer interaction: a task of organizing a series of objects according to the preferences of a user who guides the robot by voice, and a navigation task on a grid guided by the user's brain signals. The latter experiments were conducted with real users. Our results demonstrate experimentally that our approach is functional and allows practical use of an interface without prior calibration. / This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which include the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothesized task. By building a set of hypothetical interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be feedback (correct/incorrect) or guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task, at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals to learn new tasks faster. We further introduce a planning strategy that exploits uncertainty about the task and the signals' meanings to allow more efficient learning sessions. We present a study in which several real human subjects successfully control a virtual device using their brain signals and without relying on a calibration phase; our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals. Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the channels of communication they can use are constrained, forcing them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow analyzing how a communication protocol is progressively established through the interplay and history of individual actions.
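The hypothesis-testing idea at the heart of the abstract can be sketched in a few lines (synthetic one-dimensional signals and a toy task space; hypothetical, not the thesis's actual estimator): each candidate task induces a labeling of the observed teaching signals, and the task whose labeling makes the signals most coherent best explains the interaction history.

```python
# Sketch of task identification from unlabeled feedback signals (illustrative
# only). Each candidate task labels the observed signals as correct/incorrect
# feedback; the task whose induced labeling separates the signals most
# cleanly best explains the history of interaction.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_steps = 4, 60
true_task = 2

# Simulated history: at each step the robot acts, and the teacher emits a
# 1-D signal whose distribution depends on whether the action was correct
# for the (unknown) true task. Signal means: correct ~ +1, incorrect ~ -1.
actions = rng.integers(0, n_tasks, size=n_steps)
signals = rng.normal(np.where(actions == true_task, 1.0, -1.0), 0.5)

def consistency(task):
    """How well do the signals separate under this task's labeling?"""
    labels = actions == task  # hypothesized "correct" label per step
    pos, neg = signals[labels], signals[~labels]
    if len(pos) == 0 or len(neg) == 0:
        return -np.inf
    return pos.mean() - neg.mean()  # larger gap = more coherent labeling

scores = [consistency(t) for t in range(n_tasks)]
print("estimated task:", int(np.argmax(scores)), "scores:", np.round(scores, 2))
```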
