11

Indoor navigation of mobile robots based on visual memory and image-based visual servoing

Bista, Suman Raj 20 December 2016 (has links)
This thesis presents a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information, without using any 3D knowledge. The environment is represented by a set of overlapping reference images, selected automatically during a prior learning phase; these reference images define the path to follow during navigation. Switching between reference images during navigation is done by comparing the currently acquired image with nearby reference images. Based on the current image and the two succeeding key images, the rotational velocity of a mobile robot is computed under an IBVS control law. First, we used the entire image as a feature, exploiting the mutual information between the reference images and the current view. Then, we used line segments for indoor navigation, showing that line segments are better features in structured indoor environments. Finally, we combined line segments with point features to extend the method to a wide range of indoor scenarios with smooth motion. Real-time navigation with a Pioneer 3DX equipped with an on-board perspective camera has been performed in indoor environments. The obtained results confirm the viability of our approach and verify that accurate mapping and localization are not mandatory for useful indoor navigation.
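As a rough illustration of the switching rule described in this abstract, the snippet below scores the current view against the active and next reference images with a histogram-based mutual-information measure and advances the key image once the next one matches better. It is a minimal sketch assuming grayscale NumPy arrays; `mutual_information` and `switch_key_image` are illustrative names, not code from the thesis.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two grayscale images."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over img_a intensities
    py = pxy.sum(axis=0, keepdims=True)   # marginal over img_b intensities
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def switch_key_image(current, key_images, idx):
    """Advance to the next reference image once it explains the current
    view better than the active one."""
    if idx + 1 < len(key_images) and \
       mutual_information(current, key_images[idx + 1]) > \
       mutual_information(current, key_images[idx]):
        idx += 1
    return idx
```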
12

Image-based visual servoing of a quadrotor using model predictive control

Sheng, Huaiyuan 19 December 2019 (has links)
With numerous distinct advantages, quadrotors have found a wide range of applications, such as structural inspection, traffic control, search and rescue, and agricultural surveillance. To better serve applications in cluttered environments, quadrotors are further equipped with vision sensors to enhance their state sensing and environment perception capabilities. Moreover, visual information can also be used to guide the motion control of the quadrotor; this is referred to as visual servoing of the quadrotor. In this thesis, we identify the challenging problems arising in visual servoing of the quadrotor and propose effective control strategies to address them. The control objective considered in this thesis is to regulate the relative pose of the quadrotor to a ground target using a limited number of sensors, e.g., a monocular camera and an inertial measurement unit. The camera is attached underneath the center of the quadrotor, facing down. The ground target is a planar object consisting of multiple points. The image features are selected as image moments defined in a "virtual image plane". These image features yield image kinematics independent of the tilt motion of the quadrotor, which enables the separation of the high-level visual servoing controller design from the low-level attitude tracking control. A high-gain observer-based model predictive control (MPC) scheme is proposed in this thesis to address the image-based visual servoing of the quadrotor. The high-gain observer is designed to estimate the linear velocity of the quadrotor, which is part of the system states: due to the limited number of sensors on board, the linear velocity is not directly measurable. The high-gain observer provides estimates of the linear velocity and delivers them to the model predictive controller. The model predictive controller, in turn, generates the desired thrust force and yaw rate to regulate the pose of the quadrotor relative to the ground target. By using the MPC controller, the tilt motion of the quadrotor can be effectively bounded so that the ground target is kept in the field of view of the camera. This requirement is referred to as the visibility constraint, and its satisfaction is a prerequisite for visual servoing of the quadrotor. Simulation and experimental studies are performed to verify the effectiveness of the proposed control strategies. Moreover, image processing algorithms are developed to extract the image features from the captured images, as required by the experimental implementation. / Graduate / 2020-12-11
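Since the linear velocity is reconstructed rather than measured, a one-axis sketch of the high-gain observer idea may help. This is not the thesis implementation: it assumes a double-integrator model with measured position (e.g., recovered from the image features), and the gains `a1`, `a2` and the small parameter `eps` are illustrative.

```python
import numpy as np

def high_gain_observer_step(x_hat, y_meas, u, dt, eps=0.05, a1=2.0, a2=1.0):
    """One Euler step of a high-gain observer for a double integrator.

    x_hat = [p_hat, v_hat] holds the estimated position and velocity,
    y_meas is the measured position, u the known acceleration input.
    Decreasing eps speeds up estimate convergence but amplifies noise.
    """
    p_hat, v_hat = x_hat
    e = y_meas - p_hat                    # output estimation error
    p_dot = v_hat + (a1 / eps) * e        # high-gain correction terms
    v_dot = u + (a2 / eps ** 2) * e
    return np.array([p_hat + dt * p_dot, v_hat + dt * v_dot])
```

The velocity estimate `v_hat` would then be fed to the MPC in place of the missing measurement.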
13

Modified System Design and Implementation of an Intelligent Assistive Robotic Manipulator

Paperno, Nicholas 01 January 2015 (has links)
This thesis presents three improvements to the current UCF MANUS system. The first improvement modifies the existing fine-motion controller into a PI controller that has been optimized to prevent the object from leaving the view of the cameras used for visual servoing. This is achieved by adding a weight matrix to the proportional part of the controller, constrained by an artificial ROI: when the feature points approach the boundaries of the ROI, optimized controller weights are calculated using quadratic programming and added to the nominal proportional-gain portion of the controller. The second improvement is a compensatory gross-motion method designed to ensure that the desired object can be identified. If the object cannot be identified after the initial gross motion, the end-effector is moved to one of three different locations around the object until the object is identified or all possible positions have been checked. This framework combines the Kanade-Lucas-Tomasi local tracking method with the Ferns global detector/tracker to create a method that utilizes the strengths of both systems to overcome their inherent weaknesses. The last improvement is a particle-filter-based tracking algorithm that makes the visual-servoing stage of fine motion more robust. This method performs better than the current global detector/tracker, allowing the tracker to successfully follow the object in complex environments with non-ideal conditions.
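The ROI-constrained weighting can be pictured with a much simpler stand-in for the quadratic program: per-feature weights that grow as a point nears the artificial ROI border and are then added to the nominal proportional gain. This is a hypothetical sketch of the idea, not the controller from the thesis.

```python
import numpy as np

def roi_weights(points, roi, margin=40.0):
    """Per-feature weights rising from 0 to 1 as a point approaches
    the artificial ROI border.

    points: (N, 2) array of pixel coordinates; roi = (xmin, ymin, xmax, ymax).
    Features deeper than `margin` pixels inside the ROI get weight 0.
    """
    xmin, ymin, xmax, ymax = roi
    d = np.minimum.reduce([points[:, 0] - xmin, xmax - points[:, 0],
                           points[:, 1] - ymin, ymax - points[:, 1]])
    return np.clip(1.0 - d / margin, 0.0, 1.0)

# Illustrative use: inflate the nominal gain for endangered features so the
# controller pushes them back toward the image center.
# lam_eff = lam_nominal * (1.0 + roi_weights(feature_pts, (50, 50, 590, 430)))
```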
14

Teleoperation of an Industrial Robot Using Resolved Motion Rate Control with Visual Servoing

Karadogan, Ernur 12 October 2005 (has links)
No description available.
15

Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments.

Mebarki, Rafik 25 March 2010 (has links) (PDF)
This dissertation presents a new 2D ultrasound-based visual servoing method. The main goal is to automatically guide a robotized 2D ultrasound probe held by a medical robot in order to reach a desired cross-section ultrasound image of an object of interest. The method controls both the in-plane and out-of-plane motions of the 2D ultrasound probe. It makes direct use of the 2D ultrasound image in the visual servo scheme, where the feedback visual features are combinations of image moments. To build the servo scheme, we develop the analytical form of the interaction matrix that relates the time variation of the image moments to the probe velocity. That modeling is theoretically verified on simple shapes such as spherical and cylindrical objects. In order to automatically position the 2D ultrasound probe with respect to an observed object, we propose six relevant independent visual features to control the 6 degrees of freedom of the robotic system. The system is thereby endowed with the capability of automatically interacting with objects without any prior information about their shape, 3D parameters, or 3D location. To do so, we develop on-line estimation methods that identify the parameters involved in the visual servo scheme. We conducted simulation trials on simulated volumetric objects and experimental trials on both objects and soft tissues immersed in a water-filled tank. Successful results have been obtained, which show the validity of the developed methods and their robustness to different errors and perturbations, especially those inherent to the ultrasound modality. Keywords: Medical robotics, visual servoing, 2D ultrasound imaging, kinematics modeling, model-free servoing.
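The flavor of a moment-based servo scheme can be shown in a few lines: extract moment features from a segmented cross-section and apply the classical IBVS law v = -λ L⁺ (s − s*). The thesis derives the analytical interaction matrix for ultrasound moments; in this sketch L is assumed to be given, and only three features (area and centroid) stand in for the six used in the dissertation.

```python
import numpy as np

def moment_features(mask):
    """Area and centroid of a binary cross-section image (assumed non-empty)."""
    ys, xs = np.nonzero(mask)
    return np.array([float(len(xs)), xs.mean(), ys.mean()])  # [m00, xg, yg]

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical image-based visual servoing law: v = -lam * pinv(L) @ (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```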
16

Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System

Bdiwi, Mohamad 12 August 2014 (has links) (PDF)
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Such tasks therefore require combining different kinds of sensors in order to obtain full information about the work environment. However, from the point of view of control, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems. As a result, the scientific literature offers numerous control algorithms and different structures for vision/force robot control, e.g. shared and traded control. The open issues in integrating vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be used reliably?
• How to define the most appropriate vision/force control structure?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used throughout a specified task. Moreover, if the task is modified or changed, it becomes much more complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor usually serves as a position estimator) or merely for obstacle avoidance. Accordingly, much of the useful sensor information that would help the robot perform the task autonomously goes unused. In our opinion, this lack of a principled way to define the most appropriate vision/force robot control, together with the weak utilization of the information the sensors could provide, imposes important limits that prevent the robot from being versatile, autonomous, dependable and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability and user-friendliness in areas of robotics that require vision/force integration. More concretely:
1. Autonomy: in terms of an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best structure of vision/force control depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should depend on its own sensors more than on reprogramming and human intervention; in other words, the robot system should use all the information the vision and force sensors can provide, not only about the target object but also for feature extraction of the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration that is suitable even for an inexperienced user.
If the previous properties are achieved, the proposed robot system can (a minimal blending sketch follows this list):
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Decide automatically the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to unforeseen events.
• Benefit from all the advantages of the different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention and reprogramming during task execution.
• Facilitate task description and entry of a priori knowledge for the user, even if he or she is inexperienced.
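As a minimal sketch of what choosing a vision/force structure can mean in code, a diagonal selection matrix can blend the two feedback loops per Cartesian axis. This is an assumption for illustration, not the decision system developed in the thesis.

```python
import numpy as np

def hybrid_vision_force_command(sel, v_vision, f_meas, f_des, kf=0.001):
    """Blend vision and force control with a diagonal selection matrix.

    sel: length-6 vector with 1 on axes under vision/position control and
    0 on axes under force control -- an automatic decision system would
    set it per task phase. Returns a 6-DOF Cartesian velocity command.
    """
    S = np.diag(sel)
    v_force = kf * (np.asarray(f_des) - np.asarray(f_meas))  # proportional force law
    return S @ np.asarray(v_vision) + (np.eye(6) - S) @ v_force

# Example: contact task with the z-axis force controlled, all others by vision.
# cmd = hybrid_vision_force_command([1, 1, 0, 1, 1, 1], v_vis, f_now, f_ref)
```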
17

Image Processing Based Control of Mobile Robotics

January 2016 (has links)
Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses various control objectives for ground vehicles. There are two main objectives: the first is the use of visual information to control a Differential-Drive Thunder Tumbler (DDTT) mobile robot, and the second is the solution of a minimum-time optimal control problem for the robot around a racetrack. One method for the first objective is the Position-Based Visual Servoing (PBVS) approach, in which a camera looks at a target and the position of the target with respect to the camera is estimated; once this is done, the robot can drive toward a desired position (x_ref, z_ref). Another method is Image-Based Visual Servoing (IBVS), in which the pixel coordinates (u, v) of markers/dots placed on an object are driven toward the desired pixel coordinates (u_ref, v_ref) of the corresponding markers; by doing this, the mobile robot approaches a desired pose (x_ref, z_ref, theta_ref). For the second objective, camera-based and noncamera-based (v, theta) cruise-control systems are used in the solution of the minimum-time problem. The minimum-time problem is set up using optimal control theory and then solved by a direct method that discretizes the states and controls of the system; the resulting problem is modeled in AMPL and submitted to the nonlinear optimization solver KNITRO. Simulation and experimental results are presented. The DDTT vehicle used within this thesis has the following components: (1) magnetic wheel encoders/IMU for inner-loop speed control and outer-loop directional control, (2) an Arduino Uno microcontroller board for encoder-based inner-loop speed control and encoder-IMU-based outer-loop cruise-directional control, (3) an Arduino motor shield for inner-loop speed control, (4) a Raspberry Pi II computer board for outer-loop vision-based cruise-position-directional control, (5) a Raspberry Pi 5MP camera for outer-loop cruise-position-directional control. The hardware demonstrations shown in this thesis are: (1) PBVS without pan camera, (2) PBVS with pan camera, (3) IBVS with 1 marker/dot, (4) IBVS with 2 markers, (5) IBVS with 3 markers, and (6) camera-based and (7) noncamera-based (v, theta) cruise-control systems for the minimum-time problem. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
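The discretize-then-optimize step can be illustrated on a toy problem. The sketch below transcribes a minimum-time problem for a 1D double integrator into a nonlinear program, the same direct-method idea as in the thesis, but solved with SciPy instead of AMPL/KNITRO; all names and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def min_time_1d(p0=0.0, pf=1.0, u_max=1.0, N=30):
    """Direct transcription of a minimum-time problem for a double integrator.

    Decision vector z = [T, u_0, ..., u_{N-1}]; states are rolled out by
    Euler integration and the terminal condition is an equality constraint.
    """
    def terminal_state(z):
        T, u = z[0], z[1:]
        dt = T / N
        p, v = p0, 0.0
        for uk in u:                      # forward Euler rollout
            p, v = p + dt * v, v + dt * uk
        return np.array([p - pf, v])      # must reach pf with zero velocity

    cons = [{'type': 'eq', 'fun': terminal_state}]
    bounds = [(1e-2, 10.0)] + [(-u_max, u_max)] * N   # T > 0, |u| <= u_max
    z0 = np.concatenate([[2.0], np.zeros(N)])
    res = minimize(lambda z: z[0], z0, bounds=bounds, constraints=cons)
    return res.x[0], res.x[1:]            # minimal time and control sequence
```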
18

Visual servoing system of a pan-tilt camera using region template tracking

Kikuchi, Davi Yoshinobu 19 April 2007 (has links)
A pan-tilt camera can move around two rotational axes (pan and tilt), allowing its lens to be pointed at any point in space. A possible application of such a camera is to keep it pointed at a moving target through appropriate angular pan-tilt positioning. This work presents a visual servoing technique that first uses the images captured by the camera to determine the target position, and then calculates the rotations needed to keep the target at the image center, in a real-time, closed-loop system. The visual tracking technique developed is based on template-region matching and uses the sum of squared differences (SSD) as the similarity criterion. An extension based on the incremental estimation principle is added to this technique, and the algorithm is then modified again using a multiresolution estimation method. Experimental results allow a performance comparison of the three configurations. The system is modeled through the optical flow principle, and two controllers are presented to close the loop: a proportional-integral (PI) controller and a proportional controller with estimation of external disturbances by a Kalman filter (LQG). Both are designed using a linear quadratic criterion, and their performances are also analyzed comparatively.
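The core of the template tracker can be sketched as brute-force SSD matching over a search window around the previous target position; the thesis then refines this with incremental and multiresolution estimation, which are omitted here. Function and parameter names are illustrative.

```python
import numpy as np

def ssd_track(frame, template, prev_xy, search=16):
    """Locate `template` in `frame` by minimizing the sum of squared
    differences (SSD) over a window around the previous position."""
    h, w = template.shape
    px, py = prev_xy
    best, best_xy = np.inf, prev_xy
    for y in range(max(0, py - search), min(frame.shape[0] - h, py + search) + 1):
        for x in range(max(0, px - search), min(frame.shape[1] - w, px + search) + 1):
            patch = frame[y:y + h, x:x + w].astype(float)
            score = np.sum((patch - template) ** 2)
            if score < best:
                best, best_xy = score, (x, y)
    return best_xy   # top-left corner of the best SSD match
```

The pan-tilt controller would then turn the offset between `best_xy` and the image center into angular commands.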
19

Nonlinear control and visual servoing of autonomous robots

Dib, Alaa 21 October 2011 (has links)
This thesis focuses on the problem of moving and localizing an autonomous mobile robot in its local environment. The first part of the manuscript concerns the two basic motion tasks, namely stabilization and trajectory tracking. Two control strategies are discussed: integral sliding mode, and the method known as "Immersion and Invariance" for nonlinear control. The second part focuses on both 2D and 3D visual servoing techniques. Image moments were chosen as visual features because they have a more geometric and intuitive meaning than other features and are less sensitive to image noise and other measurement errors. A new approach to image-based visual servoing is proposed here, based on generating trajectories directly on the image plane (computing the image-feature values corresponding to a given Cartesian trajectory). This approach extends the well-known robustness and stability of 2D visual servoing, because the initial and desired camera locations remain close along the trajectory. The trajectories obtained also guarantee that the target stays in the camera's field of view and that the corresponding robot motion is physically feasible. Experimental tests have been conducted, and satisfactory results have been obtained for both the motion-control and visual servoing strategies. Although developed and tested in the specific context of a unicycle-type robot, this work is generic enough to be applied to other types of vehicles.
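For the integral sliding-mode part, a scalar sketch conveys the mechanism; the thesis develops the full unicycle formulation, so the law below is only a hypothetical single-error illustration with a boundary layer replacing the discontinuous sign function.

```python
import numpy as np

def integral_sliding_mode_step(e, e_int, dt, k=1.0, lam=2.0, phi=0.05):
    """One step of an integral sliding-mode law for a scalar tracking error.

    Sliding surface s = e + lam * integral(e); control u = -k * sat(s / phi),
    where the saturation limits chattering near the surface.
    """
    e_int = e_int + dt * e
    s = e + lam * e_int
    u = -k * np.clip(s / phi, -1.0, 1.0)
    return u, e_int
```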
20

A robotic control framework for quantitative ultrasound elastography

Patlan-Rosales, Pedro Alfonso 26 January 2018 (has links)
This thesis concerns the development of a robotic control framework for quantitative ultrasound elastography. Ultrasound elastography is a technique that reveals the elastic parameters of a tissue, which are commonly related to certain pathologies. The thesis proposes three novel robotic approaches to assist examiners with elastography. The first approach deals with the control of a robot actuating an ultrasound probe to perform the palpation motion required for ultrasound elastography; the elasticity of the tissue is used to design a servo control law that keeps a stiff tissue of interest in the field of view of the ultrasound probe, while the orientation of the probe is controlled by a human user to explore other tissue during elastography. The second approach exploits deformable registration of ultrasound images to estimate the tissue elasticity and to help automatically compensate, by ultrasound visual servoing, for a motion introduced into the tissue. The third approach offers a methodology for feeling the elasticity of the tissue by moving a virtual probe in the ultrasound image with a haptic device while the robot performs the palpation motion. Experimental results of the three robotic approaches on tissue-mimicking phantoms demonstrate the effectiveness of the proposed methods and open interesting perspectives for robot-assisted ultrasound elastography.
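Quasi-static elastography of the kind the palpation motion enables is often computed from windowed cross-correlation of pre- and post-compression echo lines; the sketch below shows that generic pipeline (illustrative only, not the estimator used in the thesis).

```python
import numpy as np

def axial_displacement_and_strain(pre, post, win=64, step=32):
    """Estimate axial displacement between pre- and post-compression RF lines
    by windowed cross-correlation; the gradient of the displacement profile
    approximates the axial strain."""
    shifts = []
    for start in range(0, len(pre) - win, step):
        a = pre[start:start + win] - pre[start:start + win].mean()
        b = post[start:start + win] - post[start:start + win].mean()
        xc = np.correlate(b, a, mode='full')       # zero lag at index win - 1
        shifts.append(np.argmax(xc) - (win - 1))
    disp = np.array(shifts, dtype=float)
    strain = np.gradient(disp) / step              # per-sample strain estimate
    return disp, strain
```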
