261 |
Reconhecimento visual de gestos para imitação e correção de movimentos em fisioterapia guiada por robô / Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy
Ricardo Fibe Gambirasio, 16 November 2015 (has links)
This dissertation develops a robotic system to guide patients through physiotherapy sessions. 
The proposed system uses the humanoid robot NAO to analyse patients' movements and to guide, correct, and motivate them during a session. First, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; second, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes the patient makes during the exercise. The correct exercise is captured by a Kinect sensor and divided into a sequence of spatio-temporal states using k-means clustering. These states compose a finite state machine that verifies whether the patient's movements are correct. The transition from one state to the next corresponds to a partial movement of the learned exercise. If the patient executes a partial movement incorrectly, the system suggests a correction and returns to the same state, asking the patient to try again. The system was tested with multiple patients undergoing physiotherapeutic treatment for motor impairments and achieved high precision and recall across all partial movements. The emotional impact of treatment on patients was also measured, via questionnaires administered before and after treatment and via software that recognizes emotions from video taken during treatment, showing a positive effect that could help motivate physiotherapy patients and improve their recovery.
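The pipeline described above (cluster the demonstrated poses into states, then verify the patient against the resulting finite state machine) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the pose features, the deterministic initialization, and the cluster count are assumptions made for the sketch.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(pts):
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

def kmeans(points, k, iters=25):
    # Deterministic init: pick k poses spread evenly along the demonstration.
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, centroids[j]))].append(p)
        centroids = [mean(g) if g else centroids[i] for i, g in enumerate(groups)]
    return centroids

def state_sequence(trajectory, centroids):
    # Map each pose to its nearest cluster and collapse repeats into FSM states.
    seq = []
    for p in trajectory:
        s = min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
        if not seq or seq[-1] != s:
            seq.append(s)
    return seq

def exercise_ok(reference, patient_traj, centroids):
    # Advance through the reference FSM only when the patient reproduces the
    # expected partial movement; unmatched poses leave the state unchanged.
    i = 0
    for p in patient_traj:
        s = min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
        if i < len(reference) and s == reference[i]:
            i += 1
    return i == len(reference)
```

In the thesis the states are clusters of full-body Kinect skeleton data; here each pose is reduced to a 2D point for brevity.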
|
262 |
Arquitetura de controle inteligente para interação humano-robô / Control architecture for human-robot interaction
Silas Franco dos Reis Alves, 01 April 2016 (has links)
Assuming that robots will coexist with humans in the near future, the need for Intelligent Control Architectures suited to Human-Robot Interaction is evident. This research therefore developed a behavioral Control Architecture Organization whose main purpose is to allow intuitive interaction between robot and people, thereby fostering collaboration between them. To this end, a synthetic emotional module, based on the Circumplex theory of emotion, promotes the adaptation of the robot's behaviors, implemented using Motor Schema theory, and the communication of its internal state. 
This Organization supported the adoption of the Control Architecture in an assistive application, consisting of a case study of assistive social robots as an auxiliary tool for special education. The experiments demonstrated that the developed control architecture meets the requirements of the application, which were defined according to the opinions of the consulted experts. In summary, this thesis contributes the design of a control architecture that can act upon a subjective evaluation based on cognitive beliefs about emotions, the development of a low-cost mobile robot, and a case study in special education.
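A toy sketch of such a synthetic emotional module: a point on the circumplex (valence x arousal) plane that drifts with appraised events and scales the weights of motor-schema behaviors. The quadrant labels, the update rule, and the gain mapping are illustrative assumptions, not the mechanisms from the thesis.

```python
class CircumplexEmotion:
    """Synthetic emotional state on the circumplex plane (valence x arousal)."""
    def __init__(self, valence=0.0, arousal=0.0):
        self.valence, self.arousal = valence, arousal

    def appraise(self, d_valence, d_arousal, rate=0.5):
        # Move the state toward an appraised event, clipped to [-1, 1].
        clip = lambda v: max(-1.0, min(1.0, v))
        self.valence = clip(self.valence + rate * d_valence)
        self.arousal = clip(self.arousal + rate * d_arousal)

    def quadrant(self):
        # Coarse label, useful for communicating the internal state.
        if self.arousal >= 0:
            return "excited" if self.valence >= 0 else "distressed"
        return "content" if self.valence >= 0 else "depressed"

def schema_gains(emotion, base_approach=1.0, base_avoid=1.0):
    # Adapt motor-schema weights: positive valence favours approaching the
    # person, negative valence favours avoidance; arousal scales overall speed.
    speed = 0.5 + 0.5 * (emotion.arousal + 1) / 2      # in [0.5, 1.0]
    approach = base_approach * max(0.0, emotion.valence)
    avoid = base_avoid * max(0.0, -emotion.valence)
    return {"approach": approach * speed, "avoid": avoid * speed, "speed": speed}
```

A pleasant, stimulating event thus simultaneously shifts the reported emotion and biases the behavior mixture toward engagement.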
|
263 |
Programmation d'un robot par des non-experts / End-user Robot Programming in Cobotic Environments
Liang, Ying Siu, 12 June 2019 (has links)
This research continues work carried out during the author's M2R (research master's) on programming by demonstration applied to cobotics in industrial settings, at the crossroads of several fields (human-robot interaction, automated planning, machine learning). The aim is to go beyond those initial results and to find a generic framework for programming cobots (collaborative robots) in industry. In the cobotic approach, a human operator, as a domain expert directly involved in carrying out tasks on the line, teaches the robot new tasks and uses the robot as an "agile" assistant. In this context, the thesis proposes an end-user programming approach, i.e. one simple enough that programming the industrial robot Baxter requires no robotics expertise. / The increasing presence of robots in industry has not gone unnoticed. Cobots (collaborative robots) are revolutionising industry by allowing robots to work in close collaboration with humans. Large industrial players have incorporated them into their production lines, but smaller companies hesitate due to high initial costs and a lack of programming expertise. In this thesis we introduce a framework that combines two disciplines, Programming by Demonstration and Automated Planning, to allow users without programming knowledge to program a robot. The user constructs the robot's knowledge base by teaching it new actions by demonstration, and associates their semantic meaning to enable the robot to reason about them. The robot adopts a goal-oriented behaviour by using automated planning techniques, where users teach action models expressed in a symbolic planning language. In this thesis we present preliminary work on user experiments using a Baxter Research Robot to evaluate our approach. We conducted qualitative user experiments 
to evaluate the user's understanding of the symbolic planning language and the usability of the framework's programming process. We showed that users with little to no programming experience can adopt the symbolic planning language and use the framework. We further present our work on a Programming by Demonstration system used for organisation tasks. The system includes a goal inference model to accelerate the programming process by predicting the user's intended product configuration.
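The idea of teaching symbolic action models that an automated planner can chain is sketched below with a minimal STRIPS-style representation and a breadth-first planner. The predicate and action names are invented for illustration; the thesis's planning language and the taught Baxter actions are richer.

```python
from collections import deque

class Action:
    """A demonstration-taught action as a STRIPS-style operator."""
    def __init__(self, name, preconditions, add_effects, del_effects):
        self.name = name
        self.pre = frozenset(preconditions)
        self.add = frozenset(add_effects)
        self.dele = frozenset(del_effects)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.dele) | self.add

def plan(initial, goal, actions):
    """Breadth-first search for an action sequence reaching all goal facts."""
    goal = frozenset(goal)
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None  # goal unreachable with the taught actions
```

Once the user has demonstrated, say, a pick action and a place action and attached their semantics, the planner can compose them toward a goal the user never demonstrated directly.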
|
264 |
Indoor Navigation for Mobile Robots : Control and Representations
Althaus, Philipp, January 2003 (has links)
This thesis deals with various aspects of indoor navigation for mobile robots. For a system that moves around in a household or office environment, two major problems must be tackled. First, an appropriate control scheme has to be designed in order to navigate the platform. Second, the form of representations of the environment must be chosen.

Behaviour based approaches have become the dominant methodologies for designing control schemes for robot navigation. One of them is the dynamical systems approach, which is based on the mathematical theory of nonlinear dynamics. It provides a sound theoretical framework for both behaviour design and behaviour coordination. In the work presented in this thesis, the approach has been used for the first time to construct a navigation system for realistic tasks in large-scale real-world environments. In particular, the coordination scheme was exploited in order to combine continuous sensory signals and discrete events for decision making processes. In addition, this coordination framework assures a continuous control signal at all times and permits the robot to deal with unexpected events.

In order to act in the real world, the control system makes use of representations of the environment. On the one hand, local geometrical representations parameterise the behaviours. On the other hand, context information and a predefined world model enable the coordination scheme to switch between subtasks. These representations constitute symbols, on the basis of which the system makes decisions. These symbols must be anchored in the real world, requiring the capability of relating to sensory data. A general framework for these anchoring processes in hybrid deliberative architectures is proposed. A distinction of anchoring on two different levels of abstraction reduces the complexity of the problem significantly.

A topological map was chosen as a world model. Through the advanced behaviour coordination system and a proper choice of representations, the complexity of this map can be kept at a minimum. This allows the development of simple algorithms for automatic map acquisition. When the robot is guided through the environment, it creates such a map of the area online. The resulting map is precise enough for subsequent use in navigation.

In addition, initial studies on navigation in human-robot interaction tasks are presented. These kinds of tasks pose different constraints on a robotic system than, for example, delivery missions. It is shown that the methods developed in this thesis can easily be applied to interactive navigation. Results show a personal robot maintaining formations with a group of persons during social interaction.

Keywords: mobile robots, robot navigation, indoor navigation, behaviour based robotics, hybrid deliberative systems, dynamical systems approach, topological maps, symbol anchoring, autonomous mapping, human-robot interaction
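The dynamical systems approach mentioned above can be illustrated with the classic heading dynamics: an attractor at the target direction plus a range-limited repeller for each obstacle direction, integrated to produce a continuous control signal. The parameter values are made up for this sketch; the thesis's behaviours and their coordination are far more elaborate.

```python
import math

def heading_rate(phi, psi_target, obstacles, lam=1.0):
    # Attractor at the target direction...
    rate = -lam * math.sin(phi - psi_target)
    # ...plus a Gaussian-windowed repeller centred on each obstacle direction.
    # obstacles: list of (psi_obs, strength, angular_width).
    for psi_obs, strength, width in obstacles:
        delta = phi - psi_obs
        rate += strength * delta * math.exp(-delta ** 2 / (2 * width ** 2))
    return rate

def simulate(phi0, psi_target, obstacles, dt=0.05, steps=400):
    # Euler-integrate the heading dynamics; the heading relaxes into a
    # stable fixed point that trades off target attraction and repulsion.
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, psi_target, obstacles)
    return phi
```

With no obstacles the heading settles on the target direction; an obstacle in the target direction shifts the stable fixed point sideways, steering the robot around it without any discrete re-planning.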
|
265 |
Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems
Peschel, Joshua Michael, August 2012 (has links)
This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS) and was informed by human-robot interaction (HRI) and technology findings, resulting in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS. Findings from the HRI literature suggested a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist). The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders instead of viewing the same interface as the Pilot in mUAS. 
Formative observations made during this research suggested: i) establishing common ground in mUAS is both verbal and visual, ii) type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot, and iii) a separate Pilot role is necessary regardless of preferred coordination type in mUAS. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.
|
267 |
Methodology for creating human-centered robots : design and system integration of a compliant mobile base
Wong, Pius Duc-min, 30 July 2012 (has links)
Robots have growing potential to enter the daily lives of people at home, at work, and in cities, for a variety of service, care, and entertainment tasks. However, several challenges currently prevent widespread production and use of such human-centered robots. The goal of this thesis was first to help overcome one of these broad challenges: the lack of basic safety in human-robot physical interactions. Whole-body compliant control algorithms had been previously simulated that could allow safer movement of complex robots, such as humanoids, but no such robots had yet been documented to actually implement these algorithms. Therefore a wheeled humanoid robot "Dreamer" was developed to implement the algorithms and explore additional concepts in human-safe robotics. The lower mobile base part of Dreamer, dubbed "Trikey," is the focus of this work. Trikey was iteratively developed, undergoing cycles of concept generation, design, modeling, fabrication, integration, testing, and refinement. Test results showed that Trikey and Dreamer safely performed movements under whole-body compliant control, which is a novel achievement. Dreamer will be a platform for future research and education in new human-friendly traits and behaviors. Finally, this thesis attempts to address a second broad challenge to advancing the field: the lack of standard design methodology for human-centered robots. Based on the experience of building Trikey and Dreamer, a set of consistent design guidelines and metrics for the field are suggested. They account for the complex nature of such systems, which must address safety, performance, user-friendliness, and the capability for intelligent behavior.
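The notion of compliant control behind this work can be illustrated, in a highly simplified one-dimensional form, by a virtual spring-damper (impedance) law: a push from a person makes the robot yield instead of rigidly holding position. The mass, stiffness, and damping values are arbitrary; Dreamer's whole-body compliant controller operates on the full multi-joint dynamics.

```python
def impedance_force(x, xd, x_des=0.0, k=10.0, d=8.0):
    # Virtual spring-damper: F = K * (x_des - x) - D * xdot
    return k * (x_des - x) - d * xd

def simulate_push(f_ext, k=10.0, d=8.0, m=1.0, dt=0.01, steps=2000):
    # A unit mass under the impedance law plus a constant external push,
    # integrated with semi-implicit Euler.
    x = xd = 0.0
    for _ in range(steps):
        a = (impedance_force(x, xd, k=k, d=d) + f_ext) / m
        xd += a * dt
        x += xd * dt
    return x, xd
```

The mass settles at f_ext / K: a stronger push produces a proportionally larger but bounded deflection, which is the compliant yielding that makes physical contact safe.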
|
268 |
A Formal Approach to Social Learning: Exploring Language Acquisition Through Imitation
Cederborg, Thomas, 10 December 2013 (has links) (PDF)
The topic of this thesis is learning through social interaction, consisting of experiments that focus on word acquisition through imitation and a formalism aimed at providing stronger theoretical foundations. The formalism is designed to encompass essentially any situation where a learner tries to figure out what a teacher wants it to do through interaction or observation. It groups learners that interpret a broad range of information sources under the same theoretical framework: a teacher's demonstration, its eye gaze during a reproduction attempt, and a teacher's speech comment are all treated as the same type of information source. Each can tell the imitator what the demonstrator wants it to do, and each needs to be interpreted in some way. By including them all under the same framework, the formalism can describe any agent that is trying to figure out what a human wants it to do. This allows us to see parallels between existing research, and it makes new avenues of research visible. The concept of informed preferences is introduced to deal with cases such as "the teacher would like the learner to perform an action, but if it knew the consequences of that action, it would prefer another action" or "the teacher is very happy with the end result after the learner has cleaned the apartment, but if it knew that the cleaning produced a lot of noise that disturbed the neighbors, it would not like the cleaning strategy". The success of a learner is judged according to the informed teacher's opinion of what would be best for the uninformed version. A series of simplified setups is also introduced, showing how a toy-world setup can be reduced to a crisply defined inference problem with a mathematically defined success criterion (any learner-architecture and setup pair has a numerical success value). An example experiment is presented where a learner concurrently estimates the task and what the evaluative comments of a teacher mean. 
This experiment shows how the idea of learning to interpret information sources can be used in practice. The first of the learning-from-demonstration experiments investigates a learner, specifically an imitator, that can learn an unknown number of tasks from unlabeled demonstrations. The imitator has access to a set of demonstrations, but it must infer the number of tasks and determine which demonstration belongs to which task (no symbols or labels are attached to the demonstrations). The demonstrator is attempting to teach the imitator a rule where the task to perform depends on the 2D position of an object. The object's 2D position is set at a random location within four different, well-separated rectangles, each rectangle indicating that a specific task should be performed. Three different coordinate systems were available, and each task was defined in one of them (for example, "move the hand to the object and then draw a circle around it"). To deal with this setup, a local version of Gaussian Mixture Regression (GMR) was used, called Incremental Local Online Gaussian Mixture Regression (ILO-GMR): a small, fixed number of Gaussians is fitted to local data to inform the policy, and then new local points are gathered. Three other experiments extend the types of contexts to include the actions of another human, making the investigation of language learning possible (a word is learnt by imitating how the demonstrator responds to someone uttering the word). The robot is presented with a setup containing two humans: a demonstrator (who performs hand movements) and an interactant (who might perform some form of communicative act). The interactant's behavior is treated as part of the context, and the demonstrator's behavior is assumed to be an appropriate response to this extended context. 
Two experiments explore the simultaneous learning of linguistic and non-linguistic tasks (one demonstration could show the appropriate response to an interactant speech utterance and another the appropriate response to an object position). The imitator is not given access to any symbolic information about what word or hand sign was spoken, and must infer how many words were spoken, how many times linguistic information was present, and which demonstrations were responses to which word. Another experiment explores more advanced types of linguistic conventions and demonstrator actions (simple word-order grammar in the interactant's communicative acts, and the imitation of internal cognitive operations performed by the demonstrator as a response). Since a single general imitation learning mechanism can deal with the acquisition of all the different types of tasks, it opens up the possibility that there is no need for a separate language acquisition system. Being able to learn a language is certainly very useful when growing up in a linguistic community, but this selection pressure cannot be used to explain how the linguistic community arose in the first place. It is argued that a general imitation learning mechanism is both useful in the absence of language and will result in language given certain conditions, such as shared intentionality and the ability to infer the intentions and mental states of others (all of which can be useful to develop in the absence of language). It is further argued that the general tendency to adopt normative rules is a central ingredient for language (not sufficient, and not necessary when adopting an already established language, but certainly very conducive to a community establishing linguistic conventions).
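Gaussian Mixture Regression, the building block behind ILO-GMR, conditions a joint Gaussian mixture over (context, action) pairs on the observed context. A minimal one-input/one-output version is sketched below; the component parameters are hand-picked for illustration, whereas ILO-GMR additionally fits the mixture locally and incrementally.

```python
import math

def gmr_predict(x, components):
    """E[y | x] under a joint Gaussian mixture over (x, y).

    components: list of (weight, mean_x, mean_y, var_x, cov_xy).
    Each component contributes its local linear model
        mean_y + (cov_xy / var_x) * (x - mean_x)
    weighted by its responsibility for x.
    """
    resp = []
    for w, mx, my, vx, cxy in components:
        # Responsibility: component weight times the marginal density of x.
        resp.append(w * math.exp(-(x - mx) ** 2 / (2 * vx)) / math.sqrt(2 * math.pi * vx))
    total = sum(resp) or 1e-12
    y = 0.0
    for r, (w, mx, my, vx, cxy) in zip(resp, components):
        y += (r / total) * (my + (cxy / vx) * (x - mx))
    return y
```

In the thesis the context includes the object position and the interactant's utterance, and the output is a motor trajectory rather than a scalar.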
|
269 |
Modeling of operator action for intelligent control of haptic human-robot interfaces
Gallagher, William John, 13 January 2014 (has links)
Control of systems requiring direct physical human-robot interaction (pHRI) requires special consideration of the motion, dynamics, and control of both the human and the robot. Humans actively change their dynamic characteristics during motion, and robots should be designed with this in mind. Both the case of humans trying to control haptic robots using physical contact and the case of using wearable robots that must work with human muscles are pHRI systems.
Force feedback haptic devices require physical contact between the operator and the machine, which creates a coupled system. This human contact creates a situation in which the stiffness of the system changes based on how the operator modulates the stiffness of their arm. The natural human tendency is to increase arm stiffness to attempt to stabilize motion. However, this increases the overall stiffness of the system, making it more difficult to control and reducing stability. Instability poses a threat of injury or load damage for large assistive haptic devices with heavy loads. Controllers do not typically account for this, as operator stiffness is often not directly measurable. The common solution of significantly increasing controller damping has the disadvantage of slowing the device and decreasing operator efficiency. By expanding the information available to the controller, it can be designed to adjust the robot's motion based on how the operator is interacting with it and to allow faster movement in low-stiffness situations. This research explored the utility of a system that can estimate operator arm stiffness and compensate accordingly. By measuring muscle activity, a model of the human arm was used to estimate the stiffness level of the operator and then adjust the gains of an impedance-based controller to stabilize the device. This achieved the goal of reducing oscillations and increasing device performance, as demonstrated through a series of user trials with the device. Through the design of this system, the effectiveness of a variety of operator models was analyzed and several different controllers were explored. The final device has the potential to increase operator performance and reduce fatigue, which in industrial settings could translate into better efficiency and higher productivity.
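The gain-scheduling idea can be sketched as follows: estimate arm stiffness from antagonist-muscle co-contraction and raise the impedance controller's damping as the operator stiffens, keeping damping low when the arm is relaxed so the device stays fast. The linear EMG-to-stiffness map and the gain constants below are placeholder assumptions, not the operator model developed in the thesis.

```python
def estimate_stiffness(emg_flexor, emg_extensor, k_min=50.0, k_max=500.0):
    # Co-contraction of antagonist muscles raises arm stiffness; here a
    # linear map from normalised EMG in [0, 1] stands in for the arm model.
    cocontraction = min(emg_flexor, emg_extensor)
    return k_min + (k_max - k_min) * cocontraction

def controller_damping(k_operator, b_base=10.0, b_scale=0.05):
    # Schedule controller damping on the estimated operator stiffness:
    # a stiffer arm gets more damping to keep the coupled system stable.
    return b_base + b_scale * k_operator
```

A relaxed operator thus gets a lightly damped, responsive device, while a tensed-up operator automatically gets the extra damping needed to suppress oscillation.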
Similarly, wearable robots must consider human muscle activity. Wearable robots, often called exoskeleton robots, are used for a variety of tasks, including force amplification, rehabilitation, and medical diagnosis. Force amplification exoskeletons operate much like haptic assist devices, and could leverage the same adaptive control system. The latter two types, however, are designed with the purpose of modulating human muscles, in which case the wearer's muscles must adapt to the way the robot moves, the reverse of the robot adapting to how the human moves. In this case, the robot controller must apply a force to the arm to cause the arm muscles to adapt and generate a specific muscle activity pattern. This related problem is explored and a muscle control algorithm is designed that allows a wearable robot to induce a specified muscle pattern in the wearer's arm.
The two problems, one in which the robot must adapt to the human's motion and one in which the robot must induce the human to adapt its motion, are related critical problems that must be solved to enable simple and natural physical human-robot interaction.
|
270 |
Collective dynamics and control of a fleet of heterogeneous marine vehicles
Wang, Chuanfeng, 13 January 2014
Cooperative control enables sensor data from multiple autonomous underwater vehicles (AUVs) to be combined so that the fleet can exhibit smarter behaviors than a single AUV. In addition, in some situations a human-driven underwater vehicle (HUV) and a group of AUVs need to collaborate and perform formation behaviors. However, the collective dynamics of a fleet of heterogeneous underwater vehicles are more complex than the non-trivial dynamics of a single vehicle, which makes analyzing the fleet's formation behaviors challenging. The research addressed in this dissertation investigates the collective dynamics and control of a fleet of heterogeneous underwater vehicles, including multi-AUV systems and systems comprised of an HUV and a group of AUVs (human-AUV systems). This investigation requires a mathematical motion model of an underwater vehicle. The dissertation reviews a six-degree-of-freedom (6DOF) motion model of a single AUV and proposes a method for identifying all parameters in the model based on computational fluid dynamics (CFD) calculations. Using this method, we build a 6DOF model of the EcoMapper and validate it through field experiments. Based upon a generic 6DOF AUV model, we study the collective dynamics of a multi-AUV system and develop a method of decomposing the collective dynamics. After this decomposition, we propose a method of achieving orientation control for each AUV and formation control for the multi-AUV system. We extend these results and propose a cooperative control law for a human-AUV system, so that an HUV and a group of AUVs form a desired formation while moving along a desired trajectory as a team. 
For the post-mission stage, we present a method of analyzing AUV survey data and apply it to AUV measurements collected from our field experiments carried out in Grand Isle, Louisiana in 2011, where AUVs were used to survey a lagoon, acquire bathymetric data, and measure the concentration of residual crude oil in the lagoon's water after the BP Deepwater Horizon oil spill in the Gulf of Mexico in 2010.
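A minimal sketch of consensus-based formation keeping of the kind used for such fleets: each vehicle steers to reduce the disagreement between its neighbours' offset-corrected positions and its own. First-order kinematics, all-to-all communication, and the gain are simplifying assumptions; the dissertation works with full 6DOF vehicle dynamics.

```python
def formation_step(positions, offsets, adjacency, gain=0.5, dt=0.1):
    # One step of consensus-based formation control in the plane.
    # positions[i]: current (x, y) of vehicle i; offsets[i]: its desired
    # slot in the formation; adjacency[i]: indices of its neighbours.
    new = []
    for i, (x, y) in enumerate(positions):
        ux = uy = 0.0
        for j in adjacency[i]:
            # Disagreement between offset-corrected positions drives vehicle i.
            ux += (positions[j][0] - offsets[j][0]) - (x - offsets[i][0])
            uy += (positions[j][1] - offsets[j][1]) - (y - offsets[i][1])
        new.append((x + dt * gain * ux, y + dt * gain * uy))
    return new
```

Iterating this step drives every pairwise relative position toward the corresponding offset difference, so the fleet converges to the desired shape without any vehicle knowing the absolute target positions.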
|