221

Human-like Crawling for Humanoid Robots : Gait Evaluation on the NAO robot

Aspernäs, Andreas January 2018 (has links)
Human-robot interaction (HRI) is the study of how we as humans interact and communicate with robots, and one of its subfields works on how we can improve the collaboration between humans and robots. We need robots that are more user-friendly and easier to understand, and a key aspect of this is human-like movement and behavior. This project targets a specific set of motions called locomotion and tests them on the humanoid NAO robot. A human-like crawling gait was developed for the NAO robot and compared to the built-in walking gait through three kinds of experiments: the first to compare the speed of the two gaits, the second to estimate their stability, and the third to examine how long they can operate, by measuring the power consumption and the temperatures in the joints. The results showed that the robot was significantly slower when crawling than when walking, and that when still the robot was more stable standing than on all fours. The power consumption remained essentially the same, but the crawling gait ended up having a shorter operational time due to a higher temperature increase in the joints. While the crawling gait has the benefit of a lower profile than the walking gait, and could therefore pass more easily under low-hanging obstacles, it has major issues that need to be addressed before it becomes a viable solution. These are therefore important factors to consider when developing gaits and designing robots, and they motivate further research to solve these problems.
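To make the third experiment concrete, the sketch below (hypothetical, not from the thesis) shows how mean electrical power and a temperature-limited operating time could be estimated from logged joint currents, voltages and temperatures; the log format, sampling period and 75 °C shutdown limit are all assumptions.

```python
import numpy as np

# Illustrative endurance comparison from logged joint data. Assumed
# (hypothetical) inputs: per-joint current (A), voltage (V) and temperature
# (deg C), sampled at a fixed interval during each gait trial.

def mean_power(currents, voltages):
    """Mean total electrical power (W) over a trial: sum of V*I across joints."""
    return float(np.mean(np.sum(currents * voltages, axis=1)))

def time_to_limit(temps, dt, limit=75.0):
    """Estimate remaining operating time (s) by linearly extrapolating the
    hottest joint's temperature rise to an assumed shutdown limit."""
    hottest = temps[:, np.argmax(temps[-1])]          # joint that ends hottest
    rate = (hottest[-1] - hottest[0]) / (dt * (len(hottest) - 1))  # deg C / s
    return np.inf if rate <= 0 else (limit - hottest[-1]) / rate

# Example with synthetic data: 100 samples, 10 joints, 0.1 s sample period.
rng = np.random.default_rng(0)
I = rng.uniform(0.2, 0.6, (100, 10))
V = np.full((100, 10), 12.0)
T = 30 + np.cumsum(rng.uniform(0.0, 0.05, (100, 10)), axis=0)
print(mean_power(I, V), time_to_limit(T, dt=0.1))
```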
222

Analyse audio-visuelle dans le cadre des interactions humaines avec les robots / Audio-Visual Analysis In the Framework of Humans Interacting with Robots

Gebru, Israel Dejene 13 April 2018 (has links)
In recent years, there has been a growing interest in human-robot interaction (HRI), with the aim of enabling robots to interact and communicate naturally with humans. Natural interaction implies that robots not only need to understand speech and non-verbal communication cues such as body gesture, gaze, or facial expressions, but also need to understand the dynamics of the social interplay, e.g., find people in the environment, distinguish between different people, track them through physical space, parse their actions and activity, estimate their engagement, identify who is speaking, who speaks to whom, etc. All of this requires robots to have multimodal perception skills so that they can meaningfully detect and integrate information from their multiple sensory channels. In this thesis, we focus on the robot's audio-visual sensory inputs, consisting of (multiple) microphones and video cameras. Among the different addressable perception tasks, we explore three: (P1) multiple-speaker localization, (P2) multiple-person location tracking, and (P3) speaker diarization. The majority of existing work in signal processing and computer vision addresses these problems using audio signals alone or visual information only. In this thesis, however, we address them via fusion of the audio and visual information gathered by two microphones and one video camera. Our goal is to exploit the complementary nature of the audio and visual modalities in the hope of attaining significant improvements in robustness and performance over systems that use a single modality. Moreover, the three problems are addressed in challenging HRI scenarios, e.g., a robot engaged in a multi-party interaction with a varying number of participants, who may speak at the same time and may move around the scene and turn their heads/faces towards the other participants rather than facing the robot.
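As a toy illustration of such audio-visual fusion (not the probabilistic models developed in the thesis), the sketch below combines an audio direction-of-arrival likelihood with a visual face-detection likelihood over a grid of candidate azimuths, assuming the two modalities are conditionally independent given the speaker's direction; the observation values and uncertainties are invented.

```python
import numpy as np

# Minimal late-fusion sketch: fuse per-modality likelihoods over candidate
# speaker directions by taking their product, then normalize.

azimuths = np.linspace(-90, 90, 181)               # candidate directions (deg)

def gaussian_likelihood(grid, center, sigma):
    return np.exp(-0.5 * ((grid - center) / sigma) ** 2)

# Hypothetical single-frame observations: the audio cue suggests ~20 deg with
# high uncertainty; a face detector fires near 25 deg with low uncertainty.
p_audio  = gaussian_likelihood(azimuths, center=20.0, sigma=15.0)
p_visual = gaussian_likelihood(azimuths, center=25.0, sigma=5.0)

posterior = p_audio * p_visual                     # fuse the two modalities
posterior /= posterior.sum()                       # normalize over the grid

print("fused speaker azimuth estimate: %.1f deg" % azimuths[np.argmax(posterior)])
```

The visual cue, being sharper, dominates the fused estimate, while the audio cue still discounts directions the face detector alone cannot rule out; this complementarity is what fusion is meant to exploit.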
223

Human Robot Interaction for Autonomous Systems in Industrial Environments

Chadalavada, Ravi Teja January 2016 (has links)
The upcoming generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGVs), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information about a service robot's intentions to the humans co-populating its environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication is developed on a robotic forklift by equipping it with an LED projector that projects proxemic information onto the shared floor space, visualizing the robot's internal state and intents. The robot's ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. A Likert scale-based evaluation, which also includes comparisons to human-human intention communication, was performed. The results show that adding even simple information, such as the trajectory and the space the robot will occupy in the near future, effectively improves the human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in industry.
224

Integração de sistemas cognitivo e robótico por meio de uma ontologia para modelar a percepção do ambiente / Integration of cognitive and robotic systems through an ontology to model the perception of the environment

Helio Azevedo 01 August 2018 (has links)
The use of robots in modern society is a reality. From their beginnings, restricted to manufacturing operations such as painting and welding, to the start of their use in homes, only a few decades have passed. Social robotics is a research area that aims to develop models so that direct interaction between robots and humans occurs naturally. One of the factors holding back the rapid evolution of social robotics is the difficulty of integrating cognitive and robotic systems, mainly due to the volume and complexity of the information produced by a chaotic world full of sensory data. In addition, the existence of multiple robot configurations, with different architectures and interfaces, makes it difficult to verify and repeat the experiments performed by different research groups. This thesis contributes to the evolution of social robotics by defining an architecture, called the Cognitive Model Development Environment (CMDE), which simplifies the connection between cognitive and robotic systems. This connection is formalized with an ontology, called OntPercept, which models the perception of the environment from the sensory information captured by the sensors present in the robotic agent. In recent years, several ontologies have been proposed for robotic applications, but they are not generic enough to fully address the needs of robotics and automation. The formalization offered by OntPercept facilitates the development, reproduction and comparison of experiments associated with social robotics. Validation of the proposed system is supported by the Robot House Simulator (RHS), which provides an environment where the robotic agent and a human character can interact socially with increasing levels of cognitive processing. The CMDE proposal allows any cognitive system to be used; in particular, the experiment designed to validate this research uses Soar as the cognitive architecture. Together, the CMDE architecture, the OntPercept ontology and the RHS simulator, all freely available on GitHub, establish a complete environment for developing experiments involving cognitive systems aimed at social robotics.
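As a loose illustration of the kind of symbolic perception record such an ontology formalizes (the actual OntPercept classes and properties live in the project's GitHub repository; everything below is a hypothetical stand-in), a perception event can be represented as a subject-relation-object fact with sensor provenance and a timestamp:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: a minimal record of a perception event of the
# kind an ontology such as OntPercept formalizes; not the actual schema.

@dataclass
class PerceptionEvent:
    sensor: str                      # which sensor produced the observation
    subject: str                     # perceived entity, e.g. "Person_1"
    relation: str                    # spatial/semantic relation, e.g. "isNear"
    object: str                      # reference entity, e.g. "Table_1"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# A cognitive system (e.g. Soar) would consume a stream of such symbolic facts:
event = PerceptionEvent("rgb_camera", "Person_1", "isNear", "Table_1")
print(f"{event.subject} {event.relation} {event.object} @ {event.timestamp}")
```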
225

How Humans Adapt to a Robot Recipient : An Interaction Analysis Perspective on Human-Robot Interaction

Pelikan, Hannah January 2015 (has links)
This thesis investigates human-robot interaction using an Interaction Analysis methodology. Asking how humans manage the interaction with a robot, the study focuses on how humans adapt to the robot's limited conversational and interactional capabilities. As Conversation Analytic research suggests that humans always adjust their actions to a specific recipient, the author expected to find this also in interaction with an artificial communicative partner. For this purpose, a conventional robot was programmed to play a charade game with human participants. The humans' interaction with the robot was filmed and analysed within an interaction-analytic framework. The study suggests that humans adapt their recipient design as their assumptions about the conversational partner change. Starting off with different conversational expectations, participants adapt turn design (word selection, turn size, loudness and prosody) first and turn-taking in a second step. Adaptation to the robot is deployed as a means of accomplishing a successful interaction. This detailed study of the human perspective in the interaction can yield conclusions about how robots could be improved to facilitate interaction. As humans adjust to the robot's interactional limitations with varying speed and ease, the limitations to which adaptation is most difficult should be addressed first.
226

Human-Telepresence Robot Proxemics Interaction : An ethnographic approach to non-verbal communication / 인간-텔레프레즌스 로봇 프로세믹스 상호작용 : 비언어적 의사소통에 대한 에스노그라피적 접근

Bang, GiHoon January 2018 (has links)
This research aims to find the distinct and crucial factors needed to design a better robot by exploring the meaning of movement. The researcher conducted six weeks of iterative work to collect data via an ethnographic method, examined the interactions between a telepresence robot and human beings in an authentic environment through the collected data, and analyzed it on the basis of proxemics theory. The research observed that the robot was granted social space when it approached the participants with pauses between movements. Furthermore, the research introduces the notion of the proxemics pivot: the part of the robot that people treat as a reference point when they adjust the proximity between the robot and themselves. The proxemics pivot was regarded as "a face" and attributed social properties; the other parts of the robot did not receive the same consideration.
227

Inferring intentions through state representations in cooperative human-robot environments / Déduction d’intentions au travers de la représentation d’états au sein des milieux coopératifs entre homme et robot

Schlenoff, Craig 30 June 2014 (has links)
Humans and robots working safely and seamlessly together in a cooperative environment is one of the future goals of the robotics community. When humans and robots can work together in the same space, a whole class of tasks becomes amenable to automation, ranging from collaborative assembly to parts and material handling to delivery. Proposed standards exist for collaborative human-robot safety, but they focus on limiting the approach distances and contact forces between the human and the robot. These standards describe reactive processes based only on current sensor readings; they do not consider future states or task-relevant information. A key enabler for human-robot safety in cooperative environments is intention recognition, in which the robot attempts to understand the intention of an agent (the human) by recognizing some or all of the agent's actions, to help predict the agent's future actions. We present an approach to inferring the intention of an agent in the environment via the recognition and representation of state information. This approach differs from many ontology-based intention-recognition approaches in the literature, which primarily focus on activity (as opposed to state) recognition and then use a form of abduction to provide explanations for observations. We infer detailed state relationships from observations using Region Connection Calculus 8 (RCC-8) and then infer the overall state relationships that are true at a given time. Once a sequence of state relationships has been determined, we use a Bayesian approach to associate those states with likely overall intentions and to determine the next possible action (and associated state) that is likely to occur. We compare the output of the Intention Recognition Algorithm to the results of an experiment in which human subjects attempted to recognize the same intentions in a manufacturing kitting domain. The results show that the Intention Recognition Algorithm, in almost every case, performed as well as, if not better than, a human performing the same activity.
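A minimal sketch of the Bayesian step described above might look as follows; the candidate intentions, the RCC-8 observations (PO = partially overlapping, EC = externally connected) and the likelihood values are hypothetical placeholders, not the thesis's actual observation model.

```python
# Maintain a posterior over candidate intentions and update it as symbolic
# state relations (RCC-8 predicates between objects) are observed.

prior = {"assemble_kit": 0.5, "restock_parts": 0.3, "inspect_parts": 0.2}

# Hand-specified observation model: P(observed relation | intention).
likelihood = {
    "PO(hand, part_bin)": {"assemble_kit": 0.7, "restock_parts": 0.6, "inspect_parts": 0.2},
    "EC(part, kit_tray)": {"assemble_kit": 0.8, "restock_parts": 0.1, "inspect_parts": 0.3},
}

def update(posterior, relation):
    """One Bayes update: posterior(i) proportional to P(relation | i) * posterior(i)."""
    unnorm = {i: likelihood[relation][i] * p for i, p in posterior.items()}
    z = sum(unnorm.values())
    return {i: v / z for i, v in unnorm.items()}

posterior = prior
for obs in ["PO(hand, part_bin)", "EC(part, kit_tray)"]:
    posterior = update(posterior, obs)
print(max(posterior, key=posterior.get), posterior)  # most likely intention
```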
228

A HUB-CI MODEL FOR NETWORKED TELEROBOTICS IN COLLABORATIVE MONITORING OF AGRICULTURAL GREENHOUSES

Ashwin Sasidharan Nair (6589922) 15 May 2019 (has links)
Networked telerobots are operated by humans through remote interactions and have found applications in unstructured environments such as outer space, underwater, telesurgery and manufacturing. In precision agricultural robotics, target monitoring, recognition and detection is a complex task requiring expertise, and is hence performed more efficiently by collaborative human-robot systems. A HUB is an online portal, a platform to create and share scientific and advanced computing tools. HUB-CI is a similar tool, developed by the PRISM Center at Purdue University, that enables cyber-augmented collaborative interactions over cyber-supported complex systems. Unlike previous HUBs, HUB-CI enables both physical and virtual collaboration between several groups of human users and the relevant cyber-physical agents. This research, sponsored in part by the Binational Agricultural Research and Development Fund (BARD), implements the HUB-CI model to improve the Collaborative Intelligence (CI) of an agricultural telerobotic system for early detection of anomalies in pepper plants grown in greenhouses. The specific CI tools developed for this purpose are: (1) spectral image segmentation for detecting and mapping anomalies in growing pepper plants; (2) workflow/task administration protocols for managing and coordinating the interactions between the software, hardware and human agents engaged in monitoring and detection, so that they reliably lead to precise, responsive mitigation. These CI tools aim to minimize the interaction conflicts and errors that may impede detection effectiveness and thereby reduce crop quality. Simulated experiments show that planned and optimized collaborative interactions with HUB-CI (as opposed to ad-hoc interactions) yield significantly fewer errors and better detection, improving system efficiency by 210% to 255%. The anomaly detection method was tested on the available spectral image data in terms of the number of anomalous pixels for healthy and stressed plants, yielding statistically significant differences between the plant-health classifications in ANOVA tests (P-value = 0). The approach thus improves system productivity by leveraging collaboration- and learning-based tools for the precise monitoring of healthy pepper-plant growth in greenhouses.
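For the reported statistic, a one-way ANOVA on anomalous-pixel counts grouped by plant-health class can be run as below; the counts are synthetic placeholders, not the thesis's data, and the class names are assumptions.

```python
from scipy import stats

# One-way ANOVA across plant-health classes, where each observation is the
# number of anomalous pixels segmented in one plant image (synthetic values).
healthy       = [12, 8, 15, 10, 9, 11]
mild_stress   = [48, 55, 41, 60, 52, 47]
severe_stress = [130, 142, 118, 155, 137, 126]

f_stat, p_value = stats.f_oneway(healthy, mild_stress, severe_stress)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # p near 0 -> classes differ
```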
229

Towards Automated Suturing of Soft Tissue: Automating Suturing Hand-off Task for da Vinci Research Kit Arm using Reinforcement Learning

Varier, Vignesh Manoj 14 May 2020 (has links)
Successful applications of Reinforcement Learning (RL) in robotics have proliferated since DeepMind and OpenAI demonstrated the ability of RL techniques to develop intelligent robotic systems that learn to perform complex tasks. Ever since robots began to be used in surgical procedures, researchers have been trying to bring some degree of autonomy into the operating room. Surgical robotic systems such as the da Vinci currently give surgeons direct control. To relieve the stress and burden on the surgeon using the da Vinci robot, semi-automating or automating surgical tasks such as suturing can be beneficial. This work presents an RL-based approach to automating the needle hand-off task, putting forward two approaches based on the type of environment: a discrete-space and a continuous-space approach. To capture an individual suturing style, user data was collected with the da Vinci Research Kit and used to generate a sparse reward function, from which an optimal policy was derived using Q-learning in a discretized environment. Further, an RL framework for the da Vinci Research Kit was developed using a real-time dynamics simulator, the Asynchronous Multi-Body Framework (AMBF). A model was trained and evaluated on reaching the desired goal using model-free RL techniques while accounting for the dynamics of the robot, to help mitigate the difficulty of transferring trained models to real-world robots. The developed RL framework should therefore enable the RL community to train surgical robots using state-of-the-art RL techniques and transfer the results to real-world robots with minimal effort. Based on the results obtained, the viability of applying RL techniques to develop a supervised level of autonomy for performing surgical tasks is discussed. In summary, this work focuses on using RL to automate the suture hand-off task as a step towards solving the greater problem of automating suturing.
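As an illustration of the discrete-space idea, the sketch below runs tabular Q-learning with a sparse reward on a toy 1-D "reach the hand-off pose" task; the environment, reward placement and hyperparameters are hypothetical stand-ins, not the thesis's actual setup derived from user data.

```python
import numpy as np

# Tabular Q-learning on a toy discretized task: move along a line of states
# until the hand-off pose is reached; reward is sparse (only at the goal).

n_states, n_actions = 10, 2          # positions on a line; actions: left/right
goal = n_states - 1                  # hand-off pose at the far end
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def greedy(s):
    best = np.flatnonzero(Q[s] == Q[s].max())
    return int(rng.choice(best))     # break ties randomly

for episode in range(500):
    s = 0
    for step in range(200):          # cap episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else greedy(s)
        s_next = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else 0.0      # sparse reward at the goal only
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

print(np.argmax(Q, axis=1))          # greedy action per visited state (1 = right)
```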
230

Supporting the Implementation of Industrial Robots in Collaborative Assembly Applications / Stödja implementeringen av industrirobotar i samarbetande monteringsapplikationer

Andersson, Staffan January 2021 (has links)
Until recently, few technologies have been available to increase flexibility in manufacturers' assembly applications, but the introduction of industrial robots in collaborative assembly applications provides such opportunities. Specifically, these collaborative assembly applications present an opportunity to combine, in a fenceless environment, the flexibility of the human with the accuracy, repeatability and strength of the robot, while utilizing less floor space and allowing portable applications. However, despite the benefits of industrial robots in collaborative assembly applications, there are significant gaps in the literature preventing their implementation. Against this background, the objective of this work is to support the implementation of industrial robots in collaborative assembly applications. To fulfill this objective, the work included two empirical studies: first, an interview study mapped the attributes of industrial robots in collaborative assembly applications; second, a multiple-case study mapped the critical challenges and enabling activities when implementing these collaborative assembly applications. The studies were also combined with literature reviews aiming to fill the theoretical gaps. The work provides an implementation process with enabling activities that can mitigate critical challenges when implementing industrial robots in collaborative assembly applications. The implementation process identifies enabling activities in the first three phases: pre-study, collaborative assembly application design, and assembly installation. These enabling activities are mapped to the 7M dimensions to show clearly how they can support the implementation of industrial robots in collaborative assembly applications. The implementation process contributes to filling the identified gaps in the literature and provides practitioners with activities that managers could consider when implementing collaborative robots in collaborative assembly applications. Finally, this work suggests that future research could aim to validate the implementation process in a case study or further investigate the last two phases of the process.
