  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Human Intention Recognition Based Assisted Telerobotic Grasping of Objects in an Unstructured Environment

Khokar, Karan Hariharan 01 January 2013 (has links)
In this dissertation work, a methodology is proposed to enable a robot to identify an object to be grasped and its intended grasp configuration while a human is teleoperating a robot towards the desired object. Based on the detected object and grasp configuration, the human is assisted in the teleoperation task. The environment is unstructured and consists of a number of objects, each with various possible grasp configurations. The identification of the object and the grasp configuration is carried out in real time, by recognizing the intention of the human motion. Simultaneously, the human user is assisted to preshape over the desired grasp configuration. This is done by scaling the components of the remote arm end-effector motion that lead to the desired grasp configuration and simultaneously attenuating the components that are in perpendicular directions. The complete process occurs while manipulating the master device and without having to interact with another interface. Intention recognition from motion is carried out by using Hidden Markov Model (HMM) theory. First, the objects are classified based on their shapes. Then, the grasp configurations are preselected for each object class. The selection of grasp configurations is based on the human knowledge of robust grasps for the various shapes. Next, an HMM for each object class is trained by having a skilled teleoperator perform repeated preshape trials over each grasp configuration of the object class in consideration. The grasp configurations are modeled as the states of each HMM whereas the projections of translation and orientation vectors, over each reference vector, are modeled as observations. The reference vectors are the ideal translation and rotation trajectories that lead the remote arm end-effector towards a grasp configuration. During an actual grasping task performed by a novice or a skilled user, the trained model is used to detect their intention. 
The output probability of the HMM associated with each object in the environment is computed as the user teleoperates towards the desired object. The object associated with the HMM with the highest output probability is taken as the desired object. The most likely Viterbi state sequence of the selected HMM gives the desired grasp configuration. Since an HMM is associated with every object, objects can be shuffled around, added or removed from the environment without the need to retrain the models. In other words, the HMM for each object class needs to be trained only once by a skilled teleoperator. The intention recognition algorithm was validated by having novice users, as well as the skilled teleoperator, grasp objects with different grasp configurations from a dishwasher rack. Each object had various possible grasp configurations. The proposed algorithm was able to successfully detect the operator's intention and identify the object and the grasp configuration of interest. This methodology of grasping was also compared with an unassisted mode and a maximum-projection mode. In the unassisted mode, the operator teleoperated the arm without any assistance or intention recognition. In the maximum-projection mode, the maximum projection of the motion vectors was used to determine the intended object and the grasp configuration of interest. Six healthy individuals and one wheelchair-bound individual each executed twelve pick-and-place trials in the intention-based assisted mode and the unassisted mode. In these trials, they picked up utensils from the dishwasher and laid them on a table located next to it. The relative positions and orientations of the utensils were changed at the end of every third trial. It was observed that the subjects were able to pick and place the objects 51% faster and with fewer movements using the proposed method compared to the unassisted method. 
They found it much easier to execute the task using the proposed method and experienced lower mental and overall workloads. Two able-bodied subjects also executed three preshape trials over three objects in the intention-based assisted and maximum-projection modes. For one of the subjects, the objects were shuffled at the end of the six trials and she was asked to carry out three more preshape trials in the two modes. This time, however, the subject was made to change her intention as she was about to preshape to the grasp configurations. It was observed that intention recognition was consistently accurate throughout the trajectory in the intention-based assisted method, except at a few points. In the maximum-projection method, however, intention recognition was consistently inaccurate and fluctuated. This often caused the subject to be assisted in the wrong directions and led to extreme frustration. The intention-based assisted method was faster and required fewer hand movements. The accuracy of the intention-based method did not change when the objects were shuffled. It was also shown that the model for intention recognition can be trained by a skilled teleoperator and then used by a novice user to efficiently execute a grasping task in teleoperation.
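The HMM machinery described in this abstract — ranking objects by the output probability of their HMMs and reading the grasp configuration off the most likely Viterbi state sequence — can be sketched in a few lines. This is a minimal illustration with invented probabilities and a two-symbol observation alphabet, not the dissertation's trained models.

```python
import math

# Hypothetical 2-state HMM for one object class: the states are two
# candidate grasp configurations, and each observation is a coarse
# discretization of which reference trajectory the end-effector motion
# currently projects onto best. All probabilities here are invented.
states = ["grasp_A", "grasp_B"]
start_p = {"grasp_A": 0.5, "grasp_B": 0.5}
trans_p = {"grasp_A": {"grasp_A": 0.9, "grasp_B": 0.1},
           "grasp_B": {"grasp_A": 0.1, "grasp_B": 0.9}}
emit_p = {"grasp_A": {"toward_A": 0.8, "toward_B": 0.2},
          "grasp_B": {"toward_A": 0.3, "toward_B": 0.7}}

def sequence_log_likelihood(obs):
    """Forward algorithm: log P(obs | model), used to rank the objects."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return math.log(sum(alpha.values()))

def viterbi(obs):
    """Most likely state (grasp configuration) sequence for the motion."""
    v = {s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_v, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[p] + math.log(trans_p[p][s]))
            new_v[s] = v[prev] + math.log(trans_p[prev][s] * emit_p[s][o])
            new_path[s] = path[prev] + [s]
        v, path = new_v, new_path
    return path[max(states, key=lambda s: v[s])]
```

With one such model per object, the object whose model yields the highest `sequence_log_likelihood` would be taken as the intended one, and `viterbi` on that model would name the intended grasp configuration.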
102

Contrôle d'humanoïdes pour réaliser des tâches haptiques en coopération avec un opérateur humain / Control of humanoid robots to perform haptic tasks in cooperation with a human operator

Evrard, Paul 07 December 2009 (has links) (PDF)
(abstract in English only)
103

Apprentissage du modèle d'action pour une interaction socio-communicative des hommes-robots / Action Model Learning for Socio-Communicative Human Robot Interaction

Arora, Ankuj 08 December 2017 (has links)
Driven by the objective of rendering robots socio-communicative, there has been heightened interest in techniques to endow robots with social skills and "commonsense" to make them acceptable. This social intelligence or "commonsense" of the robot is what eventually determines its social acceptability in the long run. Commonsense, however, is not that common. Robots can, thus, only learn to be acceptable with experience. However, teaching a humanoid the subtleties of a social interaction is not straightforward. Even a standard dialogue exchange integrates the widest possible panel of signs which intervene in the communication and are difficult to codify (synchronization between the expression of the body, the face, the tone of the voice, etc.). In such a scenario, learning the behavioral model of the robot is a promising approach. This learning can be performed with the help of AI techniques. This study tries to solve the problem of learning robot behavioral models in the Automated Planning and Scheduling (APS) paradigm of AI. 
In the domain of Automated Planning and Scheduling (APS), intelligent agents require an action model (blueprints of actions whose interleaved executions effectuate transitions of the system state) in order to plan and solve real-world problems. During the course of this thesis, we introduce two new learning systems which facilitate the learning of action models, and extend the scope of these new systems to learn robot behavioral models. These techniques can be classified into the categories of non-optimal and optimal. Non-optimal techniques are more classical in the domain, have been worked on for years, and are symbolic in nature. However, they have their share of quirks, resulting in a less-than-desired learning rate. The optimal techniques are pivoted on recent advances in deep learning, in particular the Long Short-Term Memory (LSTM) family of recurrent neural networks. These techniques are more cutting-edge and produce higher learning rates as well. This study brings into the limelight these two aforementioned techniques, which are tested on AI benchmarks to evaluate their prowess. They are then applied to HRI traces to estimate the quality of the learnt robot behavioral model. This serves a long-term objective of introducing behavioral autonomy in robots, so that they can communicate autonomously with humans without the need for "wizard" intervention.
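As a toy illustration of the classical, symbolic flavor of action-model learning mentioned above, the sketch below induces a STRIPS-like model from observed state transitions: preconditions as the intersection of the states in which an action was observed, and add/delete effects from state differences. The trace format and predicate names are invented, and this is neither of the thesis's two systems.

```python
from collections import defaultdict

def learn_action_model(traces):
    """Induce a STRIPS-like action model from observed transitions.

    traces: iterable of (state_before, action, state_after) triples,
    where states are frozensets of ground predicates (strings).
    """
    pre = {}                  # action -> candidate preconditions
    add = defaultdict(set)    # action -> add effects seen so far
    dele = defaultdict(set)   # action -> delete effects seen so far
    for before, action, after in traces:
        # Preconditions: predicates true in *every* observed pre-state.
        pre[action] = before if action not in pre else pre[action] & before
        add[action] |= after - before     # became true
        dele[action] |= before - after    # became false
    return {a: {"pre": set(pre[a]), "add": add[a], "del": dele[a]} for a in pre}
```

More observed traces shrink the precondition sets and enrich the effect sets, which is one reason such symbolic learners need many traces to converge.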
104

Ambiente de treinamento por teleoperação para novos usuários de cadeiras de rodas motorizadas baseado em múltiplos métodos de condução / A teleoperation training environment for new users of motorized wheelchairs based on multiple driving methods

92-99394-9353 10 August 2018 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Currently, diverse training environments help new users of electric powered wheelchairs (EPW) learn how to drive, become acquainted with, and improve their abilities with these assistive devices. Several authors are developing such environments, and most of them use virtually simulated wheelchairs. Despite the similarities between virtual and real wheelchairs, it is easier to drive the real device, because representing the wheelchair's physical behavior is still a problem for virtual simulated environments. Concerning the driving methods, most environments are based on a joystick, which does not give users the opportunity to test, practice, and become acquainted with new technologies, such as driving through eye movements. This work implements and tests a more realistic approach for a training environment dedicated to new users of EPW. The proposed system is based on a real EPW controlled by teleoperation, and it is flexible enough to support multiple driving methods. 
An architecture that allows a user to send command messages to control a real EPW through the Internet was implemented to validate the system. The implemented driving methods were a conventional joystick, an eye-tracker, and a generic human-machine interface. For the system's evaluation, scenarios were created for each implemented driving method, as well as a long-distance teleoperation scenario. The experimental results suggest that new users can practice safely using a real EPW through the Internet, even in a situation with a communication delay of 130.2 ms (average). Furthermore, the proposed system showed potential to serve new EPW users with different types of disabilities and to be a low-cost approach that could be applied in developing countries.
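One detail such an Internet teleoperation architecture must handle is stale commands under network delay. As a sketch under invented assumptions (the message format, field names, and staleness threshold are hypothetical, not the dissertation's protocol, and synchronized clocks are assumed), each command can carry a timestamp so the wheelchair side can measure delay and discard commands that arrive too late:

```python
import json
import time

def make_command(v, omega):
    """Wrap a wheelchair velocity command (linear v, angular omega)
    with a send timestamp so the receiver can estimate one-way delay."""
    return json.dumps({"v": v, "omega": omega, "t_sent": time.time()})

def unpack_command(msg, max_delay=0.5):
    """Decode a command; reject it if it is too stale to apply safely.

    Returns ((v, omega), delay), or (None, delay) for a stale command,
    in which case the wheelchair should stop rather than obey it.
    """
    cmd = json.loads(msg)
    delay = time.time() - cmd["t_sent"]
    if delay > max_delay:
        return None, delay
    return (cmd["v"], cmd["omega"]), delay
```

With the ~130 ms average delay reported above, commands would pass a 0.5 s staleness check comfortably, while a network stall would make the wheelchair stop instead of executing old motion commands.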
105

Vision-based Driver Assistance Systems for Teleoperation of On-Road Vehicles : Compensating for Impaired Visual Perception Capabilities Due to Degraded Video Quality / Visuella förarhjälpmedel för fjärrstyrning av fordon

Matts, Tobias, Sterner, Anton January 2020 (has links)
Autonomous vehicles are going to be a part of the future transport of goods and people, but to make them usable in the unpredictable situations presented in real traffic, there is a need for backup systems for manual vehicle control. Teleoperation, where a driver controls the vehicle remotely, has been proposed as such a backup system. This technique is highly dependent on a stable wireless connection with large bandwidth to transmit high-resolution video from the vehicle to the driver station. A reduction in network bandwidth, resulting in a reduced level of detail in the video stream, could lead to a higher risk of driver error. This thesis is a two-part investigation. The first part examines whether lower resolution and increased lossy compression of video at the operator station affect driver performance and safety of operation during teleoperation. The second part covers the implementation of two vision-based driver assistance systems: one which detects and highlights vehicles and pedestrians in front of the vehicle, and one which detects and highlights lane markings. A driving test was performed on an asphalt track with white markings for track boundaries, with different levels of video quality presented to the driver. Reducing video quality had a negative effect on lap time and increased the number of times the track boundary was crossed. The test was performed with a small group of drivers, so the results can only be interpreted as an indication that video quality can negatively affect driver performance. The driver assistance system for detection and marking of pedestrians was tested by showing a test group pre-recorded video shot in traffic and having them react when they saw a pedestrian about to cross the road. The results of a one-way analysis of variance show that video quality significantly affects reaction times, with p = 0.02181 at significance level α = 0.05. 
A two-way analysis of variance was also conducted, accounting for video quality, the use of a driver assistance system marking pedestrians, and the interaction between these two. The results suggest that marking pedestrians in very low quality video does help reduce reaction times, but they are not significant at significance level α = 0.05.
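The reaction-time comparison above rests on a one-way analysis of variance. As a sketch of what that test computes, the F statistic can be derived from per-group samples as follows (any numbers used with it here would be illustrative, not the thesis's measurements):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample lists."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares; degrees of freedom k - 1.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares; degrees of freedom n - k.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A larger F means more of the reaction-time variance lies between video-quality conditions than within them; the p-value then follows from the F distribution with (k − 1, n − k) degrees of freedom.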
106

Situational Awareness Monitoring for Humans-In-The-Loop of Telepresence Robotic Systems

Kanyok, Nathan J. 21 November 2019 (has links)
No description available.
107

Remote Control Operation of Autonomous Cars Over Cellular Network Using PlayStation Controller

Hemlin, Karl, Persson, Frida January 2019 (has links)
A big challenge in the development of autonomous vehicles is how to handle complex situations. If an autonomous vehicle ends up in a situation where it cannot make a decision on its own, it stops, unable to continue driving. For these situations, human intervention is required. By making it possible to control the car remotely, there is no need for an actual human in the car. Instead, a human operator can remotely control one or several cars from a distance. The purpose of this project was to identify such complex situations, evaluate remote control options and implement one of these controllers to drive the SVEA cars in the Smart Mobility Lab. After evaluation of the possible remote control options, the PlayStation controller was chosen as the simplest and most intuitive steering option. The controller was successfully implemented, first in simulation and then on the SVEA cars in the Smart Mobility Lab. A test track was designed to measure the performance of the implemented controller, and user-friendliness was measured through a survey. It was concluded that a majority of the participants would not feel comfortable steering a real car using the PlayStation controller. However, a more extensive evaluation would be required to draw any major conclusions.
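The core of such a gamepad-to-vehicle interface is mapping analog-stick axes to steering and speed commands. The sketch below is a generic illustration with a drift-suppressing deadzone; the axis convention, limits, and parameter values are hypothetical, not the actual SVEA interface.

```python
def stick_to_command(x, y, deadzone=0.08, max_steer_deg=35.0, max_speed=1.5):
    """Map normalized stick axes in [-1, 1] to a (steering, speed) command.

    A small deadzone suppresses stick drift near center; outside it, the
    response is rescaled so commands still span the full output range.
    """
    def shape(v):
        if abs(v) < deadzone:
            return 0.0
        sign = 1.0 if v > 0 else -1.0
        return sign * (abs(v) - deadzone) / (1.0 - deadzone)

    return shape(x) * max_steer_deg, shape(y) * max_speed
```

Rescaling after the deadzone keeps the mapping continuous: the command ramps up from zero at the deadzone edge instead of jumping, which matters for fine steering corrections during teleoperation.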
108

Identifying users based on their VR behavioral patterns

Ritola, Nicklas January 2022 (has links)
As Virtual Reality (VR) becomes increasingly popular and affordable, and is applied in fields other than entertainment, such as education and industrial use, there is also a growing risk related to its integrity and security. VR equipment tracks user biometric data as a means to interact with the VR environment, which creates sets of biometric data that could be used to identify users. Such biometric templates are potentially harmful if stolen by a malicious third party. This thesis investigates whether user identification is possible within a set of participants (N=10) through a study using their movement and eye biometric data gathered within VR sessions, where they perform a teleoperation task designed to simulate a real-world use case. By performing 3 data collection sessions for each participant and using the gathered data to train 4 classification models, we show that a high level of accuracy can be attained using simple machine learning approaches, achieving a peak accuracy of 89.26% with a dataset designed to challenge our models. We further analyze the accuracy results from the trained models and discuss the identification power of different data types, which highlights how the characteristics of the task performed affect the usefulness of data types.
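The identification pipeline described — per-user biometric templates matched against a new session's feature vector — can be illustrated with a deliberately simple classifier. This nearest-centroid sketch is not one of the four models trained in the thesis, and the feature layout and user names are invented.

```python
def nearest_centroid_identify(templates, sample):
    """Identify a user by comparing a feature vector against per-user
    centroids of previously recorded sessions (squared Euclidean distance).

    templates: dict mapping user -> list of equal-length feature vectors.
    sample: one feature vector from a new session.
    """
    def centroid(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    centroids = {user: centroid(sessions) for user, sessions in templates.items()}
    return min(centroids, key=lambda u: dist2(centroids[u], sample))
```

Even this trivial matcher conveys the privacy concern the thesis raises: stored movement or gaze templates are enough to re-identify a person from fresh VR data.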
109

Mapping Strategies of Distance Information Based on Continuous Vibrotactile Amplitude and Frequency Variation

Johannes Friedrich Rueschen (14238116) 09 December 2022 (has links)
Our study investigates how different mapping strategies of distance information affect performance in an object exploration task with a teleoperated virtual robot. The task was to find an object inside a backpack using a simulated robotic gripper. A virtual proximity sensor tracked the distance between the tip of the gripper and the object. The distance was conveyed as a vibration pattern on the user's index finger, and this was the only information provided to guide the user towards the object. The goal was to locate the hidden object by moving the tip of the gripper as quickly and as closely towards the object as possible without touching it. We implemented three different mapping strategies that utilized continuous frequency and amplitude variations of sinusoidal vibrations to encode distance. The present study provides empirical evidence that the mapping strategy can affect accuracy when approaching an object. We found that linear feedback sensations help users sense the rate of approach, whereas non-linear feedback can provide cues that enable more accurate approximation of the absolute distance. We found that experienced participants could selectively attend to and integrate frequency and intensity cues when both modalities were changed simultaneously. Inexperienced participants were not able to make this distinction, found it difficult to interpret such a signal, and preferred one-dimensional changes.
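The contrast between linear and non-linear distance-to-vibration mappings can be sketched as follows. The frequency range and the exponent are invented for illustration, not the study's actual parameters.

```python
def linear_map(d, d_max, f_min=50.0, f_max=250.0):
    """Linear mapping: vibration frequency rises linearly as the
    gripper tip closes in on the object (d = sensed distance)."""
    closeness = max(0.0, min(1.0, 1.0 - d / d_max))
    return f_min + closeness * (f_max - f_min)

def nonlinear_map(d, d_max, f_min=50.0, f_max=250.0, gamma=3.0):
    """Non-linear mapping: most of the frequency range is spent close
    to the object, giving finer cues for absolute distance at short
    range at the cost of a flatter response far away."""
    closeness = max(0.0, min(1.0, 1.0 - d / d_max))
    return f_min + (closeness ** gamma) * (f_max - f_min)
```

Under the linear map, equal distance changes produce equal frequency changes everywhere, which matches the finding that linear feedback conveys rate of approach; the non-linear map compresses the far field and expands the near field, matching the finding that it supports more accurate absolute-distance estimation close to the object.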
110

Teleoperation and the influence of driving feedback on drivers’ behaviour and experience

Zhao, Lin January 2023 (has links)
Automated vehicles (AVs) have been developing at a rapid pace over the past few years. However, many difficulties remain before full Level-5 autonomy is achieved. This means that AVs still require human operators to intervene or assist, for example by taking over control of the AV or selecting its route. Teleoperation can therefore be seen as a subsystem of AVs that can remotely control and supervise a vehicle when needed. However, teleoperated driving conditions differ considerably from real-life driving, so remote drivers may experience different driving feedback, and their driving behaviour and performance can be affected as a result. The following three studies were conducted to investigate these points. First, a seamless comparative study was carried out between teleoperated and real-life driving. Driving behaviour and performance were compared in two scenarios: slalom and lane following. Significant differences in driving behaviour and performance between the two were found: the lane following deviation during teleoperated driving was much greater than in real-life driving, and remote drivers were more likely to drive slower and make more steering corrections in lane following manoeuvres. Second, three types of steering force feedback (SFF) modes were compared separately in both teleoperated and real-life driving to investigate the effect of SFF on the driving experience. The three SFF modes are Physical model-based steering force Feedback (PsF), Modular model-based steering force Feedback (MsF), and No steering force Feedback (NsF). The difference between PsF and MsF is that the main forces come from different sources, namely the estimated tyre force and the steering motor current, respectively. As expected, the experimental results indicate that NsF significantly reduces the driving experience in both driving conditions. 
In addition, remote drivers were found to require reduced steering feedback force and returnability. Finally, the influence of motion-cueing, sound, and vibration feedback on driving behaviour and experience was studied in a virtual teleoperation platform based on the IPG CarMaker environment. A prototype of a teleoperated driving station (TDS) with motion-cueing, sound, and vibration feedback was first developed to study human factors in teleoperated driving. Then, a low-speed disturbance scenario and a high-speed dynamic scenario were used separately to investigate how these factors affect driving. Experimental results indicate that sound and vibration feedback can be an important factor in speed control by providing remote drivers with a sense of speed. In the low-speed disturbance scenario, motion-cueing feedback can help with road surface perception and improve the driving experience. However, it did not significantly improve driving performance in the high-speed dynamic scenario. The research conducted reveals how driving behaviour may change in teleoperated driving and how different driving feedback influences it. These results could provide guidance for improving teleoperated driving in future research and serve as a guide for policymaking related to teleoperation.
