61

Desenvolvimento de esquema de controle com realimentação visual para um robô manipulador / Development of a visual feedback control scheme for a robot manipulator

Soares, Allan Aminadab André Freire 22 March 2005 (has links)
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the 3D Cartesian position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the manipulator tool, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is recovered from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system under lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose 3D Cartesian coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
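The core of this resolved-rate idea, mapping a gained task-space error through the Jacobian pseudoinverse into joint increments, can be sketched as follows. This is a minimal illustration, not the author's implementation: a planar 2-link arm stands in for the 5-DoF manipulator, and the link lengths, gain, and iteration count are arbitrary.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (end-effector x, y)."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of the end-effector position w.r.t. the joints."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def step(q, target, gain=0.5):
    """One control step: gained pose error -> joint increments via pinv(J)."""
    e = target - fk(q)                                   # task-space error
    dq = np.linalg.pinv(jacobian(q)) @ (gain * e)        # joint increments
    return q + dq

q = np.array([0.3, 0.8])          # initial joint angles (rad)
target = np.array([1.2, 0.7])     # desired end-effector position
for _ in range(100):
    q = step(q, target)           # iterate until the tool reaches the pose
```

Each iteration shrinks the remaining error by roughly the gain factor, so the end-effector converges to the target as long as the arm stays away from singular configurations.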
62

Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée / Fluorescence-augmented navigation for robot-assisted laparoscopic surgery

Agustinos, Anthony 06 April 2016 (has links)
Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical trauma. While this surgery is very beneficial for the patient, it is a difficult intervention in which the complexity of the surgical gesture is increased compared with conventional surgery. This complexity is partly due to the manipulation of the surgical instruments and the visualization of the surgical scene (notably the restricted field of view of a conventional endoscope). The surgeon's decision-making could be improved by identifying critical areas, or areas of interest, that are not visible in the surgical scene. My research aims to combine robotics, computer vision, and fluorescence to address these difficulties: fluorescence imaging provides additional visual information to help the surgeon determine the areas to operate on or to avoid (for example, visualization of the cystic duct during a cholecystectomy), while robotics provides accuracy and efficiency of the surgeon's gesture as well as a "more intuitive" visualization and tracking of the surgical scene. The combination of these two technologies helps guide and secure the surgical gesture. The first part of this work consisted in extracting visual information in both imaging modalities (laparoscopy/fluorescence). Methods for real-time 2D/3D localization of surgical instruments in the laparoscopic image and of anatomical targets in the fluorescence image were designed and developed. The second part consisted in exploiting this bimodal visual information to derive control laws for robotic endoscope and instrument holders. Visual servoing controls of a robotic endoscope holder, tracking one or more instruments in the laparoscopic image or a target of interest in the fluorescence image, were implemented. In order to control a robotic instrument holder from the visual information provided by the imaging system, a calibration method based on the 3D localization of the surgical instruments was also developed. This multimodal environment was evaluated quantitatively on a test bench and then on anatomical specimens. Ultimately, this work could be integrated into lightweight, non-rigidly linked robotic architectures using comanipulation robots with more elaborate controls such as force feedback. Such an "augmentation" of the surgeon's visualization and action capabilities could help optimize patient care.
63

Suivi des structures offshore par commande référencée vision et multi-capteurs / Offshore structure following by means of sensor servoing and sensor fusion

Krupiński, Szymon 10 July 2014 (has links)
This thesis deals with a control system for an autonomous underwater vehicle (AUV) given two consecutive tasks: following a linear object, and stabilization with respect to a planar target, using an on-board camera. The proposed solution takes advantage of the cascaded nature of the system and divides it into a velocity pilot control and two visual servoing schemes. The servoing controllers generate the reference velocity on the basis of visual information: line following is based on the binormalized Plücker coordinates of the parallel lines corresponding to the pipe contours detected in the image, while stabilization relies on the planar homography matrix relating the observed object features to the image of the same object at the desired pose. The pilot, constructed on the full 6-DoF nonlinear model of the AUV, ensures that the vehicle's linear and angular velocities converge to their respective setpoints. Both image servoing schemes rest on minimal assumptions and knowledge of the environment. Validation is provided by a high-fidelity 6-DoF dynamics simulation coupled with a challenging 3D visual environment, which generates images for automatic processing and visual servoing. A custom simulator was built, consisting of a Simulink model for the dynamics simulation and the MORSE robot and sensor simulator, bound together by ROS message-passing libraries. The OpenCV library is used for real-time image processing, and methods of visual data filtering are described. The experimental data thus generated confirm the desired properties of the control scheme presented earlier.
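As background for servoing schemes like the ones above, the classical image-based law computes a camera velocity v = -λ L⁺ e from an interaction matrix L and a feature error e. A minimal sketch with generic point features follows; it is illustrative only, not the thesis's Plücker-line or homography formulation, and the feature coordinates and depths are made up.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x*x),  y],
        [   0, -1/Z, y/Z, 1 + y*y,       -x*y, -x]])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity v = -lam * pinv(L) @ e for stacked point features."""
    e = (features - desired).ravel()                  # image feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    v = -lam * np.linalg.pinv(L) @ e                  # 6-DoF camera twist
    return v, L, e

feats = np.array([[0.10, 0.20], [-0.20, 0.10], [0.15, -0.10]])
des   = np.array([[0.00, 0.15], [-0.15, 0.05], [0.10, -0.05]])
depths = [1.0, 1.2, 0.9]
v, L, e = ibvs_velocity(feats, des, depths, lam=0.5)
```

Three non-collinear points yield a 6×6 interaction matrix, so the commanded velocity enforces the first-order decay ė = L v = -λ e of the image error.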
64

Robust micro/nano-positioning by visual servoing / Micro et nano-positionnement robuste par l'asservissement visuel

Cui, Le 26 January 2016 (has links)
With the development of nanotechnology, it became possible to design and assemble nano-objects. For robust and reliable automation, handling and manipulation tasks at the nanoscale have been increasingly required over the last decade. Vision is one of the most indispensable ways to observe the world at the microscale and nanoscale, and vision-based control is an efficient solution to control problems in robotics. In this thesis, we address the issue of micro- and nano-positioning by visual servoing in a Scanning Electron Microscope (SEM). As fundamental background, the SEM image formation process and SEM vision geometry models are studied first, and a nonlinear optimization process for SEM calibration is presented, considering both the perspective and parallel projection models. In this study, it is found that at high magnification it is difficult to observe depth information from the variation of the pixel position of the sample in the SEM image. To address the fact that motion along the depth direction is not observable in a SEM, the image defocus information is used as a visual feature to control motion along that axis. A hybrid visual servoing scheme is proposed for a 6-DoF micro-positioning task using both image defocus information and image photometric information; it is validated using a parallel robot inside a SEM. Based on a similar idea, a closed-loop control scheme for the SEM autofocusing task is introduced and validated by experiments. Finally, in order to achieve visual guidance in a SEM, a template-based visual tracking and 3D pose estimation framework is proposed. This method is robust to the defocus blur caused by motion along the depth direction, since the defocus level is modeled within the visual tracking framework.
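Using defocus as a feature, as above, requires a scalar focus score that falls as blur increases. A common proxy (not necessarily the one used in the thesis) is the variance of the image Laplacian; the sketch below compares a sharp synthetic checkerboard against a box-blurred copy standing in for a defocused SEM frame.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian: higher means sharper focus."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def box_blur(img):
    """3x3 mean filter (valid region only), a crude defocus model."""
    h, w = img.shape
    return sum(img[i:i + h - 2, j:j + w - 2]
               for i in range(3) for j in range(3)) / 9.0

y, x = np.indices((32, 32))
sharp = ((x // 4 + y // 4) % 2).astype(float)   # checkerboard test image
blurred = box_blur(sharp)                       # simulated defocused image
```

An autofocus loop would drive the stage or the working distance to maximize this score.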
65

Vizuální zpětnovazební řízení pro humanoidního robota / Visual servoing for humanoid robot

Nedvědický, Pavel January 2020 (has links)
This thesis deals with the construction of an inexpensive robotic manipulator intended for exhibitions and educational purposes; the project is the teamwork of two students. A robotic arm with four degrees of freedom was developed, and control and power electronics were installed for the whole robot. On the software side, the aim was to develop software that controls the robot through visual feedback obtained by processing images from a 3D camera. Lastly, a graphical user interface for controlling the robot's movement is presented.
66

Vision-Based Human Directed Robot Guidance

Arthur, Richard B. 11 October 2004 (has links) (PDF)
This paper describes methods to track a user-defined point in a robot's camera view as it drives forward. This tracking allows the robot to keep itself directed at that point while driving, so that it can reach the user-defined point. I develop and present two new multi-scale algorithms for tracking arbitrary points between two frames of video, as well as through a video sequence. The multi-scale algorithms do not use the traditional pyramid image; instead they use a data structure called an integral image (also known as a summed-area table). The first algorithm uses edge detection to track the movement of the tracking point between frames of video; the second uses a modified version of the Moravec operator for the same purpose. Both algorithms can track the user-specified point very quickly: implemented on a conventional desktop, tracking proceeds at a rate of at least 20 frames per second.
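The integral image mentioned above makes the sum of any rectangular region an O(1) lookup via inclusion-exclusion, which is what makes the multi-scale tracking fast. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero top row and left column."""
    img = np.asarray(img)
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1): four lookups, inclusion-exclusion."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(30).reshape(5, 6)   # toy "image"
ii = integral_image(img)
```

Because every window sum costs the same four lookups regardless of window size, coarse and fine scales can be evaluated at identical cost, unlike a pyramid.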
67

MULTI-RATE VISUAL FEEDBACK ROBOT CONTROL

Solanes Galbis, Juan Ernesto 24 November 2015 (has links)
[EN] This thesis deals with two characteristic problems in visual feedback robot control: 1) sensor latency; 2) providing suitable trajectories for the robot and for the measurement in the image. All the approaches presented in this work are analyzed and implemented on a 6-DoF industrial robot manipulator and/or a wheeled robot. Regarding the sensor latency problem, this thesis proposes the use of dual-rate high-order holds within the control loop of robots. The main contributions in this respect are:
- Dual-rate high-order holds based on primitive functions for robot control (Chapter 3): analysis of system performance with and without this non-conventional multi-rate control technique. As a consequence of using dual-rate holds, this work also obtains and validates multi-rate controllers, especially dual-rate PIDs.
- Asynchronous dual-rate high-order holds based on primitive functions with time-delay compensation (Chapter 3): a generalization of asynchronous dual-rate high-order holds that incorporates a compensation component for the input-signal time delay, thereby improving the inter-sample estimations computed by the hold. The properties of these holds are analyzed and compared with the estimations obtained by the equivalent holds without compensation, and the holds are implemented and validated within the control loop of a 6-DoF industrial robot manipulator.
- Multi-rate nonlinear high-order holds (Chapter 4): a generalization of dual-rate high-order holds to nonlinear estimation models, which incorporate information about the plant to be controlled and about the controller(s) and sensor(s) used, obtained through machine learning techniques. A methodology independent of the particular learning technique is described, validated here using artificial neural networks. Finally, the properties of these new holds are analyzed and compared with their primitive-function counterparts, and they are implemented and validated within the control loops of an industrial robot manipulator and a wheeled robot.
With respect to the problem of providing suitable trajectories for the robot and for the measurement in the image, this thesis presents the novel reference-features filtering control strategy and its generalization from a multi-rate point of view. The main contributions in this regard are:
- Reference-features filtering control strategy (Chapter 5): a new control strategy that significantly enlarges the set of reachable tasks in robot visual feedback control. The main idea is to use the optimal trajectories proposed by a nonlinear EKF predictor-smoother (ERTS), based on the Rauch-Tung-Striebel (RTS) algorithm, as new feature references for an underlying visual feedback controller. The implementation algorithm is described and validated on an industrial robot manipulator.
- Dual-rate reference-features filtering control strategy (Chapter 5): a generalization of the reference-features filtering approach from a multi-rate point of view, with a dual Kalman-smoother step based on the ratio of the sensor and controller frequencies, reducing the computational cost of the former algorithm while also addressing the sensor latency problem. The implementation algorithms, together with their analysis, are described. / Solanes Galbis, JE. (2015). MULTI-RATE VISUAL FEEDBACK ROBOT CONTROL [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57951
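The predictor-smoother in the record above is an extended-Kalman variant; the underlying machinery is a forward Kalman filter followed by a Rauch-Tung-Striebel backward pass. A scalar linear sketch follows, with illustrative noise values and a random-walk model that are not the thesis's; it only shows the filter/smoother structure.

```python
def kalman_rts(z, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter + RTS backward smoothing pass."""
    n = len(z)
    xf, pf, xp, pp = [0.0]*n, [0.0]*n, [0.0]*n, [0.0]*n
    x, p = x0, p0
    for k in range(n):                       # forward filter
        xpk, ppk = x, p + q                  # predict (identity dynamics)
        K = ppk / (ppk + r)                  # Kalman gain
        x = xpk + K * (z[k] - xpk)           # measurement update
        p = (1 - K) * ppk
        xf[k], pf[k], xp[k], pp[k] = x, p, xpk, ppk
    xs = xf[:]                               # RTS backward pass
    for k in range(n - 2, -1, -1):
        C = pf[k] / pp[k + 1]                # smoother gain
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    return xs

z = [1.1, 0.9, 1.2, 0.8, 1.05, 0.95, 1.15, 0.85]  # noisy measurements of 1.0
xs = kalman_rts(z)
```

Because the backward pass conditions every estimate on all measurements, the smoothed trajectory is less noisy than the raw measurements, which is what makes it usable as a feature reference.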
68

[en] A ROBUST VISUAL SERVOING APPROACH FOR ROBOTIC FRUIT HARVESTING / [pt] UMA ABORDAGEM DE SERVOVISÃO ROBUSTA PARA COLHEITA ROBÓTICA DE FRUTAS

JUAN DAVID GAMBA CAMACHO 05 February 2019 (has links)
In this work, we present different eye-in-hand visual servoing control schemes applied to a robotic harvesting task of soft fruits in the presence of parametric uncertainties in the system models. The first scheme combines position-based visual servoing (PBVS) and image-based visual servoing (IBVS) to perform, respectively, an approach phase toward the fruit followed by a fine tuning of the end-effector for harvesting. The second scheme uses a hybrid visual servoing (HVS) approach to fulfill the complete harvesting task, designing a suitable control law that combines error vectors defined in both the image and operational spaces. For detecting the fruits, an algorithm based on the combination of the OHTA color space and Otsu's thresholding method is used for fast recognition of mature fruits in complex scenarios. In addition, a more accurate detection method employs a pre-trained deep encoder-decoder network based on a minimized SegNet version for fast and cheap inference during task execution. Object localization is accomplished by an image triangulation technique, which combines the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms, or the Oriented FAST and Rotated BRIEF (ORB) and Brute-Force Matcher (BF-Matcher) algorithms, to extract fruit image features and match each one to its corresponding feature point in the other view of the stereo camera. However, since these algorithms are computationally expensive for the task requirements, a faster estimation method uses the fruit centroid and a homogeneous transformation to find the matching points. Finally, a vision-based sliding mode control (SMC) scheme and a switching monitoring function are employed to cope with uncertainties in the calibration parameters of the camera-robot system.
In this context, it is possible to guarantee the asymptotic stability and convergence of the image feature error, even if the misalignment angle, around the z-axis, between the camera and end-effector frames is uncertain. 3D computer simulations and preliminary experimental results, obtained with a Mitsubishi RV-2AJ robot arm carrying out a simple strawberry-picking task, are included to illustrate the performance and effectiveness of the proposed control schemes.
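The Otsu thresholding step used in the fruit-detection phase above can be sketched as follows. This is a minimal, generic implementation of Otsu's method (maximizing between-class variance over the intensity histogram); the function name and the synthetic test image are illustrative and not taken from the thesis:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold (0-255) that maximizes the
    between-class variance of the image's gray-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)  # sum of all intensities
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]            # pixels in the "background" class
        if w0 == 0:
            continue
        w1 = total - w0          # pixels in the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In the detection pipeline described by the abstract, such a threshold would be applied to an OHTA color-space channel rather than plain grayscale, but the histogram computation is identical.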
69

Assisted visual servoing by means of structured light

Pagès Marco, Jordi 25 November 2005 (has links)
This thesis addresses the combination of visual servoing and structured light. Classic visual servoing assumes that visual features can be extracted from the images; however, uniform or non-textured objects, or objects for which extracting features is too complex or too time-consuming, cannot be taken into account. This thesis proposes the use of structured light patterns to provide suitable visual features independently of the object's appearance. Firstly, a comprehensive survey of coded structured light patterns is presented; then, a new pattern improving on the existing ones is proposed. The remainder of the thesis is devoted to positioning an eye-in-hand robot with respect to objects by using features provided by the light patterns. Two configurations are tested: in the first, an off-board video projector is used, while in the second, an onboard structured light emitter is exploited.
The techniques proposed in the thesis are supported by theoretical analysis and validated by experimental results.
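Positioning an eye-in-hand robot from image features, whether those features come from texture or from a projected light pattern, is classically done with the IBVS law v = -λ L⁺ (s - s*). The sketch below uses the standard interaction matrix for normalized image points; the depth value, gain, and feature layout are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    relating feature velocity to the 6-DOF camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_camera_velocity(points, points_star, Z, lam=0.5):
    """Classic IBVS law v = -lambda * L^+ (s - s*); the features s could be
    the image coordinates of projected structured-light spots."""
    L = np.vstack([point_interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points, float) - np.asarray(points_star, float)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

With at least three non-collinear points the stacked interaction matrix is generically full rank, so the commanded velocity drives the feature error toward zero.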
70

Estudo de uma técnica para o tratamento de dead-times em operações de rastreamento de objetos por servovisão

Saqui, Diego 22 May 2014 (has links)
Made available in DSpace on 2016-06-02T19:06:15Z (GMT). No. of bitstreams: 1 6235.pdf: 6898238 bytes, checksum: 058a3b75f03de2058255b7fa7db30dac (MD5) Previous issue date: 2014-05-22 / Financiadora de Estudos e Projetos / Visual servoing is a technique that uses computer vision to acquire visual information (from a camera) and a closed-loop control system to control robots. One typical application of visual servoing is tracking objects on conveyors in industrial environments. Compared with other types of sensors, visual servoing has the advantage of obtaining a large amount of information from the environment and offering greater flexibility in operations. A disadvantage is the delays, known as dead-times or time-delays, that can occur while processing visual information in computer vision tasks or in other control-system tasks that demand large processing capacity. Dead-times in visual servoing applied to industrial operations, such as tracking objects on conveyors, are critical and can negatively affect production capacity in manufacturing environments. Several methodologies for this problem can be found in the literature, many of them based on the Kalman filter. In this work, a methodology based on the Kalman filter formulation, previously studied for predicting the future pose of objects with linear motion, was selected.
This methodology was studied in detail, tested, and analyzed through simulations for other motions and some applications. Three types of experiments were carried out: one for different types of motion and two others applied to different types of signals in the velocity control system. The results for the object's motion showed that the technique is able to estimate the future pose of objects with linear motion and smooth curves, but it is inefficient for drastic changes in motion. With respect to the signal to be filtered in the velocity control, the methodology proved applicable (under the stated motion conditions) only for estimating the pose of the object after the occurrence of dead-times caused by computer vision; this estimate is subsequently used to compute the future error of the object relative to the robotic manipulator, which in turn is used to compute the robot's velocity. Attempting to apply the methodology directly to the error used to compute the velocity applied to the robot did not produce good results. Given these results, the methodology can be applied to tracking objects with linear motion and smooth curves, as in the case of objects transported by conveyors in industrial environments.
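The dead-time compensation idea described in this abstract, predicting the object's future pose with a Kalman filter so the controller can act on where the object will be rather than where it was seen, can be sketched with a constant-velocity model. The state layout, noise levels, and step counts below are illustrative assumptions, not the thesis's actual tuning:

```python
import numpy as np

class ConstantVelocityKF:
    """1-D linear Kalman filter with a constant-velocity model, used to
    extrapolate the object pose d steps ahead to compensate a known
    dead-time (a sketch; Q and R values are illustrative)."""

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # only position is measured
        self.Q = q * np.eye(2)                      # process noise
        self.R = np.array([[r]])                    # measurement noise
        self.x = np.zeros((2, 1))                   # [position, velocity]
        self.P = np.eye(2)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the position measurement z.
        y = np.array([[z]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

    def predict_ahead(self, d):
        """Extrapolate the filtered position d steps into the future."""
        return float((np.linalg.matrix_power(self.F, d) @ self.x)[0, 0])
```

For an object moving at constant velocity the model matches the motion exactly, which is consistent with the abstract's finding that the approach works for linear motion and smooth curves but degrades under drastic motion changes.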
