71

Haptic Teleoperation for Robotic-Assisted Surgery

Albakri, Abdulrahman 16 December 2015 (has links)
This thesis investigates the major factors affecting teleoperation transparency in a medical context. A wide state-of-the-art survey is carried out and a new point of view for classifying the haptic teleoperation literature is proposed, in order to identify the decisive factors behind a transparent teleoperation. The roles of three such factors are then analysed. The first is the applied control architecture: the performance of 3-channel teleoperation is analysed and guidelines for selecting a suitable control architecture for medical applications are proposed, with validation through simulations. The second is the effect of motion disturbances in the manipulated environment on telepresence: a new model of such a moving environment is proposed, and its applicability is shown through an interaction-port passivity analysis. The third is the accuracy of the interaction model and its effect on the transparency of interaction-control-based haptic teleoperation; this analysis is carried out theoretically and experimentally through the design and implementation of a Hunt-Crossley model in an AOB interaction-control haptic teleoperation scheme. The results are discussed and future perspectives are proposed.
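For reference, the Hunt-Crossley model named above is a standard nonlinear contact model; one common form (written with generic symbols, not the thesis's own notation) is

```latex
f(t) = k\,x(t)^{n} + \lambda\,x(t)^{n}\,\dot{x}(t), \qquad x(t) \ge 0,
```

where x is the penetration depth, k a stiffness constant, n an exponent capturing the nonlinearity of soft material, and λ a damping-like parameter. Because the dissipative term vanishes with the penetration, the contact force is continuous at impact, unlike a linear Kelvin-Voigt model, which is one reason the model is popular for tissue interaction.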
72

Teleoperation of MRI-Compatible Robots with Hybrid Actuation and Haptic Feedback

Shang, Weijian 28 January 2015 (has links)
Image-guided surgery (IGS), which has been developing rapidly, benefits significantly from the superior accuracy of robots and from magnetic resonance imaging (MRI), an excellent soft-tissue imaging modality. Teleoperation is especially desirable in MRI because of the highly constrained space inside the closed-bore scanner and the lack of haptic feedback in fully autonomous robotic systems; it also keeps the human in the loop, which significantly enhances safety. This dissertation describes the development of teleoperation approaches and their implementation on an example system for MRI, with details of the key components. It first describes the general teleoperation architecture with modular software and hardware components, and introduces the MRI-compatible robot controller, the drive technology, and the robot navigation and control software. As a crucial step in determining the robot location inside the MRI, two methods of registration and tracking are discussed. The first uses the existing Z-shaped fiducial frame design but with a newly developed multi-image registration method that achieves higher accuracy with a smaller fiducial frame. The second is a new fiducial design with a cylindrical frame that is especially suitable for registering and tracking needles; alongside it, a single-image algorithm is developed that is both more accurate and faster. In addition, a performance-enhanced fiducial frame incorporating self-resonant coils is studied. A surgical master-slave teleoperation system for percutaneous interventional procedures under continuous MRI guidance is presented. The slave robot is a piezoelectrically actuated needle-insertion robot with an integrated fiber-optic force sensor. The master robot is a pneumatically driven haptic device that not only controls the position of the slave robot but also renders the forces associated with needle-placement interventions to the surgeon. The mechanical design, kinematics, force sensing and feedback technologies of both the master and slave robots are discussed. Force and position tracking results of the master-slave pair are presented to validate the tracking performance of the integrated system, MRI compatibility is evaluated extensively, and teleoperated needle steering is demonstrated under live MR imaging. Finally, a control system for a clinical-grade MRI-compatible parallel 4-DOF surgical manipulator for minimally invasive in-bore percutaneous prostate interventions through the patient's perineum is discussed. The manipulator uses four sliders actuated by piezoelectric motors with incremental rotary encoders, which are compatible with the MRI environment. Two generations of optical limit switches are designed to provide better safety features for real clinical use, and the performance of both generations is tested. The MRI-guided accuracy and MRI compatibility of the whole robotic system are also evaluated. Two clinical prostate biopsy cases have been conducted with this assistive robot.
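Fiducial-based registration of the kind described above ultimately reduces to estimating the rigid transform between fiducial positions known in the robot frame and their detections in MR images. The sketch below shows only that generic point-registration step, solved in closed form with the SVD (Kabsch) method; it is not the multi-image or single-image algorithm developed in the dissertation, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def rigid_registration(p_robot, p_image):
    """Least-squares rigid transform mapping robot-frame fiducial points
    onto their image-frame detections (Kabsch/SVD method).

    p_robot, p_image: (N, 3) arrays of corresponding points, N >= 3.
    Returns rotation R (3x3) and translation t (3,) such that
    p_image ~ R @ p_robot + t.
    """
    c_r = p_robot.mean(axis=0)                 # centroids
    c_i = p_image.mean(axis=0)
    H = (p_robot - c_r).T @ (p_image - c_i)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_i - R @ c_r
    return R, t
```

With three or more non-collinear fiducials this yields the least-squares rotation and translation; the dissertation's contribution lies in how the fiducial poses are extracted from one or several MR images in the first place.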
73

Network-based Haptic Systems with Time-Delays

Liacu, Bogdan Cristian 20 November 2012 (has links)
During the last decades, virtual environments have become very popular and are widely used in many domains, for example prototyping, training on different devices, and assistance in completing difficult tasks. The interaction with the virtual reality, as well as the force feedback, is provided by haptic interfaces. Such systems are generally affected by communication and processing time-delays, resulting in a deterioration of performance. In this thesis, a complete study of the existing methods, together with theoretical tools and new solutions, is proposed for the haptic framework. First, a comparative study, based on experimental results obtained on a 1-dof haptic system, highlights the advantages and drawbacks of the most common control algorithms ported from teleoperation to haptics. Next, the theoretical tools needed to analyse the stability of delayed systems in different situations are examined, taking into account the physical limitations of the experimental platforms considered. Besides the standard case of constant time-delays, uncertainties are also considered and modelled by different types of distributions (uniform, normal, and gamma distribution with gap). Then, to overcome the drawbacks of time-delays, two new approaches are proposed. First, Smith predictor-based control is revisited and a solution specific to haptic systems is developed: the main idea is to introduce the environmental forces into the Smith predictor by using the additional information available from the virtual reality about the distances between the controlled virtual object and the other objects in the scene. Second, to overcome the loss of performance induced by using a fixed controller gain for all situations (free or constrained motion), a gain-scheduled Proportional-Derivative control strategy is proposed, in which the gains depend on the distance to a possible collision. Both approaches are validated experimentally on a 3-dof haptic platform under scenarios of increasing complexity, from simple situations (free and constrained motion, contacts with moving objects) to more complex ones (a virtual box with fixed or moving walls).
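As a rough illustration of the second approach, a Proportional-Derivative controller whose gains are scheduled on the distance to a possible collision can blend a "free motion" gain set with a "near contact" gain set. The blending law, parameter names and numerical values below are assumptions for illustration, not those identified in the thesis.

```python
import numpy as np

def scheduled_gains(dist_to_collision, d_free=0.05,
                    kp_free=50.0, kd_free=0.5,
                    kp_contact=400.0, kd_contact=4.0):
    """Blend PD gains between a 'free motion' set and a 'near contact' set
    based on the distance (m) to the closest virtual obstacle."""
    a = np.clip(dist_to_collision / d_free, 0.0, 1.0)   # 1 = far, 0 = touching
    kp = a * kp_free + (1.0 - a) * kp_contact
    kd = a * kd_free + (1.0 - a) * kd_contact
    return kp, kd

def pd_force(x_des, x, v, dist_to_collision):
    """Feedback force rendered to the haptic device for the current sample."""
    kp, kd = scheduled_gains(dist_to_collision)
    return kp * (x_des - x) - kd * v
```

The distance information comes for free from the virtual scene, which is the same observation that motivates injecting environmental forces into the Smith predictor in the first approach.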
74

Design and Implementation of a Hard Real-Time Telerobotic Control System Using Sensor-Based Assist Functions

Veras-Jorge, Eduardo J 21 November 2008 (has links)
This dissertation presents a novel concept for a hard real-time telerobotic control system using sensor-based assist functions that combine an autonomous control mode, force- and motion-based virtual fixtures, and scaled teleoperation. The system has been implemented as a PC-based multithreaded, real-time controller with a haptic user interface and a 6-DoF slave manipulator. A telerobotic system allows a human to control a manipulator remotely, with human control combined with computer control. A telerobotic control system with sensor-based assistance capabilities enables the user to make high-level decisions, such as target object selection, while the system generates the trajectories and virtual constraints used for autonomous motion or scaled teleoperation. The design and realization of a telerobotic system capable of sensing and manipulating objects with haptic feedback, either real or virtual, require sensor-based assist functions within an efficient real-time control scheme. This dissertation addresses the problem of integrating sensory information and calculating the sensor-based assist functions (SAFs) in hard real time using PC-based resources. The SAF calculations are based on information from a laser range finder, with additional visual feedback from a camera, and on haptic measurements, providing motion assistance and scaling during the approach to a target and while following a desired path. This research compares the performance of the autonomous control mode, force- and motion-based virtual fixtures, and scaled teleoperation. The results show that a versatile PC-based real-time telerobotic platform adaptable to a wide range of users and tasks is achievable; a key aspect is real-time operation and performance with a multithreaded software architecture. The platform can be used for applications in areas such as rehabilitation engineering and clinical research, surgery, defense, and assistive technology.
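One simple way to picture the "motion assistance and scaling during the approach to a target" is a scaled-teleoperation mapping whose scale factor drops once the range sensor reports that the tool is close to the target. The sketch below is only that illustrative idea under assumed parameters; the dissertation's actual SAFs also generate trajectories and virtual-fixture constraints along the approach path.

```python
def assisted_velocity(master_vel, range_to_target,
                      coarse_scale=1.0, fine_scale=0.2, fine_zone=0.10):
    """Scale master velocity commands down near the target (units: m, m/s).
    Parameter values are illustrative assumptions, not the dissertation's."""
    scale = fine_scale if range_to_target < fine_zone else coarse_scale
    return [scale * v for v in master_vel]
```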
75

Design and Evaluation of 3D Multimodal Virtual Environments for Visually Impaired People

Huang, Ying Ying January 2010 (has links)
Spatial information presented visually is not easily accessible to visually impaired users. Current technologies, such as screen readers, cannot intuitively convey spatial layout or structure. This lack of overview is an obstacle for a visually impaired user, both when using the computer individually and when collaborating with other users. With the development of haptic and audio technologies, it is possible to give visually impaired users access to three-dimensional (3D) Virtual Reality (VR) environments through the senses of touch and hearing. The work presented in this thesis comprises investigations of haptic and audio interaction for visually impaired computer users in two stages. The first stage of my research focused on collaboration between sighted and blindfolded computer users in a shared virtual environment. One aspect I considered is how different modalities affect one's awareness of the other's actions, as well as of one's own actions, during the work process. The second aspect I investigated is common ground, i.e. how visually impaired people obtain a common understanding of the elements of their workspace through different modalities. A third aspect I looked at was how different modalities affect perceived social presence, i.e. the ability to perceive the other person's intentions and emotions. Finally, I attempted to understand how human behavior and efficiency in task performance are affected when different modalities are used in collaborative situations. The second stage of my research focused on how visually impaired people access a 3D multimodal virtual environment individually. I conducted two studies based on two different haptic and audio prototypes, concerned with understanding the effect of haptic-audio modalities on navigation and interface design. One prototype I created was a haptic and audio game, a labyrinth; the other is a virtual simulation environment based on the real-world Kulturhuset in Stockholm. One aspect I investigated in this individual interaction is how users can access the spatial layout through a multimodal virtual environment. The second aspect is usability: how the haptic and audio cues help visually impaired people understand the spatial layout. The third aspect concerns navigation and cognitive mapping in a multimodal virtual environment. This thesis contributes to the field of human-computer interaction for the visually impaired with a set of studies of multimodal interactive systems, and brings new perspectives to enhancing the understanding of real environments for visually impaired users through a haptic and audio virtual computer environment.
76

Mobile manipulation in unstructured environments with haptic sensing and compliant joints

Jain, Advait 22 August 2012 (has links)
We make two main contributions in this thesis. First, we present our approach to robot manipulation, which emphasizes the benefits of making contact with the world across all the surfaces of a manipulator with whole-arm tactile sensing and compliant actuation at the joints. In contrast, many current approaches to mobile manipulation assume most contact is a failure of the system, restrict contact to well-modeled end effectors, and use stiff, precise control to avoid contact. We develop a controller that enables robots with whole-arm tactile sensing and compliant joints to reach locations in high clutter while regulating contact forces. We assume that low contact forces are benign, and our controller does not place any penalty on contact forces below a threshold. Our controller only requires haptic sensing, handles multiple contacts across the surface of the manipulator, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one and a linear quasi-static mechanical model that it constructs at each time step. We show that our controller enables both real and simulated robots to reach goal locations in high clutter with low contact forces; while doing so, the robots bend, compress, slide, and pivot around objects. To enable experiments on real robots, we also developed an inexpensive, flexible, and stretchable tactile sensor and covered large surfaces of two robot arms with these sensors. With an informal experiment, we show that our controller and sensor have the potential to enable robots to manipulate in close proximity to, and in contact with, humans while keeping contact forces low. Second, we present an approach to giving robots common sense about everyday forces in the form of probabilistic, data-driven, object-centric models of haptic interactions. These models can be shared by different robots for improved manipulation performance. We use pulling open doors, an important task for service robots, as an example to demonstrate our approach. Specifically, we capture and model the statistics of forces while pulling open doors and drawers. Using a portable custom force and motion capture system, we create a database of forces as human operators pull open doors and drawers in six homes and one office. We then build data-driven models of the expected forces while opening a mechanism, given knowledge of either its class (e.g., refrigerator) or the mechanism's identity (e.g., a particular cabinet in Advait's kitchen). We demonstrate that these models can enable robots to detect anomalous conditions, such as a locked door or collisions between the door and the environment, faster and with lower excess force applied to the door than methods that do not use a database of forces.
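The controller described above is model predictive control with a horizon of one over a linear quasi-static model rebuilt at each time step. The sketch below captures only the flavor of that idea in a much-simplified form: take a small step toward the goal and scale it back whenever the linear contact model predicts a force above the threshold (forces below the threshold are left unpenalized). The real controller solves an optimization with multiple objectives; everything here, including the names and the scaling heuristic, is an assumption for illustration.

```python
import numpy as np

def one_step_controller(q, x_goal, fk, J_ee, contacts, step=0.01, f_max=5.0):
    """One-step ('horizon one') quasi-static step selection, sketched.

    q        : current joint configuration (n,)
    x_goal   : desired end-effector position (3,)
    fk       : forward kinematics, fk(q) -> end-effector position (3,)
    J_ee     : end-effector Jacobian at q, shape (3, n)
    contacts : list of (f, J_c, k) -- measured contact force magnitude,
               contact Jacobian row (n,), and estimated local stiffness
    Returns a joint-space step dq.
    """
    # Nominal step: pull the end effector a bounded distance toward the goal.
    err = np.clip(x_goal - fk(q), -step, step)
    dq = np.linalg.lstsq(J_ee, err, rcond=None)[0]

    # Predict each contact force after the step with the linear quasi-static
    # model f' = f + k * (J_c @ dq); shrink the step if any force would
    # exceed the threshold. Forces below f_max are not penalized at all.
    scale = 1.0
    for f, J_c, k in contacts:
        f_pred = f + k * float(J_c @ dq)
        if f_pred > f_max and f_pred > f:
            scale = min(scale, max(0.0, (f_max - f) / (f_pred - f)))
    return scale * dq
```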
77

Design and Development of a Framework to Bridge the Gap Between Real and Virtual

Hossain, SK Alamgir 01 November 2011 (has links)
Several researchers have successfully developed realistic models of real-world objects and phenomena and have simulated them in the virtual world. In this thesis, we propose the opposite: instantiating virtual world events in the real world. An interactive 3D virtual environment provides a useful, realistic 3D world that resembles the objects and phenomena of the real world, but it has limited capability to communicate with the physical environment. We argue that new and intuitive 3D user interfaces, such as 3D virtual environment interfaces, may provide an alternative medium for communicating with the real environment. We propose a 3D virtual world-based add-on architecture that achieves synchronized virtual-real communication. In this framework, we explored the possibilities of integrating haptic and real-world object interactions with Linden Lab's multiuser online 3D virtual world, Second Life. We enhanced the open source Second Life viewer client in order to facilitate communication between the real and virtual worlds. Moreover, we analyzed the suitability of such an approach in terms of user perception, intuition and other common parameters. Our experiments suggest that the proposed approach not only provides a more intuitive mode of communication, but is also appealing and useful to the user. Potential applications of the proposed approach include remote child-care, communication between distant lovers, stress recovery, and home automation.
78

Haptic Image Exploration

Lareau, David 12 January 2012 (has links)
The haptic exploration of 2-D images is a challenging problem in computer haptics. Research on the topic has primarily focused on the exploration of maps and curves. This thesis describes the design and implementation of a system for the haptic exploration of photographs. The system builds on research directions in assistive technology, computer haptics, and image segmentation. An object-level segmentation hierarchy is generated from the source photograph and rendered haptically as a contour image at multiple levels of detail. A tool for authoring object-level hierarchies was developed, along with an innovative type of user interaction based on region selection for accurate and efficient image segmentation. An objective benchmark comparing the new method with other interactive image segmentation algorithms shows that our region-selection interaction is a viable alternative to marker-based interaction. The hierarchy authoring tool, combined with precise algorithms for image segmentation, can build contour images of the quality necessary for images to be understood by touch with our system. The system was evaluated in a user study of 24 sighted participants divided into different groups. The first part of the study had participants explore images using haptics and answer questions about them; the second part asked them to identify images visually after haptic exploration. Results show that using a segmentation hierarchy supporting multiple levels of detail of the same image is beneficial to haptic exploration. As the system gains maturity, our goal is to make it available to blind users.
79

A Method of Measuring Force/Torque at the Tool/Tissue Interface in Endoscopy

Bakirtzian, Armen 14 December 2010 (has links)
The adoption of Minimally Invasive Surgery (MIS) and Robot-Assisted MIS has resulted in the distortion of the haptic cues surgeons rely on. The application of excessive force during port creation has led to increased surgical access trauma. This study aims to quantify the forces experienced during port creation with a blunt-ended Threaded Visual Cannula (TVC) in an effort to improve patient safety, provide a quantitative platform for surgeon training, and offer a gateway toward the eventual automation of this problematic aspect of MIS. A method of determining the torque encountered during port creation was established. It was found that the magnitude of torque required to cannulate different materials was unique and was dictated by the friction observed at the tool/tissue interface. Furthermore, detecting instantaneous changes in torque arising from the transition between two different media was not found to be possible with the current design of the TVC.
