  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Shape and Pose Recovery of Novel Objects Using Three Images from a Monocular Camera in an Eye-In-Hand Configuration

Colbert, Steven C. 06 April 2010 (has links)
Knowing the shape and pose of objects of interest is critical information when planning robotic grasping and manipulation maneuvers. The ability to recover this information from objects for which the system has no prior knowledge is a valuable behavior for an autonomous or semi-autonomous robot. This work develops and presents an algorithm for the shape and pose recovery of unknown objects using no a priori information. Using a monocular camera in an eye-in-hand configuration, three images of the object of interest are captured from three disparate viewing directions. Machine vision techniques are employed to process these images into silhouettes. The silhouettes are used to generate an approximation of the surface of the object in the form of a three-dimensional point cloud. The accuracy of this approximation is improved by fitting an eleven-parameter geometric shape to the points such that the fitted shape ignores disturbances from noise and perspective projection effects. The parametrized shape represents the model of the unknown object and can be used for planning robot grasping maneuvers or other object classification tasks. This work is implemented and tested in simulation and hardware. A simulator is developed to test the algorithm for various three-dimensional shapes and any possible imaging positions. Several shapes and viewing configurations are tested, and the accuracy of the recoveries is reported and analyzed. After thorough testing in simulation, the algorithm is implemented on a six-axis industrial manipulator and tested on a range of real-world objects, both geometric and amorphous. It is shown that the hardware implementation performs exceedingly well, approaching the accuracy of the simulator despite the additional sources of error and uncertainty present.
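The noise-tolerant shape fitting described above can be illustrated in miniature. The sketch below fits a four-parameter sphere (center and radius) to a noisy point cloud by linear least squares; it is a hedged stand-in for the thesis's eleven-parameter shape, whose exact parametrization is not given here, and `fit_sphere` and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: returns (center, radius).

    Linearization: |p|^2 = 2 c.p + (r^2 - |c|^2), so the unknowns
    [cx, cy, cz, r^2 - |c|^2] solve a linear system.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius

# Noisy points sampled on a sphere of radius 2 centered at (1, -1, 0.5)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 0.5]) + 2.0 * dirs + 0.01 * rng.normal(size=(500, 3))
c, r = fit_sphere(pts)
```

The linearization trick (moving the quadratic term into the unknowns) makes the fit a single `lstsq` call; richer shape families such as the thesis's generally need iterative nonlinear least squares.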
102

Multi-camera uncalibrated visual servoing

Marshall, Matthew Q. 20 September 2013 (has links)
Uncalibrated visual servoing (VS) can improve robot performance without needing camera and robot parameters. Multiple cameras improve uncalibrated VS precision, but no prior work simultaneously uses more than two cameras. The first data for uncalibrated VS simultaneously using more than two cameras are presented. VS performance is also compared for two different camera models, a high-cost camera and a low-cost camera, the differences being image noise magnitude and focal length. A Kalman-filter-based control law for uncalibrated VS is introduced and shown to be stable under the assumptions that robot joint-level servo control can reach commanded joint offsets and that the servoing path passes through at least one full-column-rank robot configuration. Adaptive filtering by a covariance matching technique is applied to achieve automatic camera weighting, prioritizing the best available data. A decentralized sensor fusion architecture is used to assure continuous servoing under camera occlusion. The decentralized adaptive Kalman filter (DAKF) control law is compared to a classical method, Gauss-Newton, via simulation and experimentation. Numerical results show that DAKF can improve average tracking error for moving targets and convergence time to static targets. DAKF reduces system sensitivity to noise and poor camera placement, yielding smaller outliers than Gauss-Newton. The DAKF system improves visual servoing performance, simplicity, and reliability.
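The DAKF control law itself is not reproduced in the abstract, but the core idea behind Kalman-filter-based uncalibrated VS, treating the unknown image Jacobian as a state estimated online from joint and image increments, can be sketched. Everything below (the helper name, noise levels, and the random-walk model) is an illustrative assumption, not the thesis's formulation.

```python
import numpy as np

def kf_jacobian_update(x, P, dq, ds, R, Q):
    """One Kalman step estimating the stacked image Jacobian x = vec(J).

    Measurement model: ds = H x with H = kron(I, dq^T), i.e. each image
    increment constrains one row of J along the joint increment dq.
    """
    m = len(ds)
    H = np.kron(np.eye(m), dq[None, :])
    P = P + Q                          # random-walk prediction step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (ds - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Recover a constant 2x2 Jacobian from noisy motion samples
rng = np.random.default_rng(1)
J_true = np.array([[1.5, -0.3], [0.2, 0.8]])
x, P = np.zeros(4), np.eye(4)
R, Q = 1e-4 * np.eye(2), 1e-6 * np.eye(4)
for _ in range(200):
    dq = 0.05 * rng.normal(size=2)
    ds = J_true @ dq + 1e-3 * rng.normal(size=2)
    x, P = kf_jacobian_update(x, P, dq, ds, R, Q)
J_est = x.reshape(2, 2)
```

The estimated Jacobian is then what a VS control law inverts to map image error to joint commands; the thesis additionally adapts `R` online by covariance matching to weight multiple cameras.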
103

Emulering av en produktioncell med Visionguidning : Virtuell idrifttagning / Emulation of a production cell with vision guidance : Virtual commissioning

Einevik, Johan, Kurri, John January 2017 (has links)
Using a virtual copy of a production cell, programming and functional testing of panels can be performed at an early stage of development. A virtual copy also simplifies debugging and reduces commissioning costs. The aim of the project is to investigate to what extent the emulation model can replace the real cell in a factory acceptance test for the supplier. Also investigated is to what extent real CAD models can be used in the emulation and what requirements the models must meet. The project faced several challenges, and one that arose during its course was that the safety systems could not be emulated. This was solved by bypassing all safety circuits in the PLC program. An important part of emulation is communication between the different software components of the system. In this project it proved advantageous to distribute the software across three computers to balance the workload of the programs used in the emulation. Using an emulation model instead of a real production cell is still at the research stage, but the project identified many applications that could change commissioning in the future.
104

Navigation for automatic guided vehicles using omnidirectional optical sensing

Kotze, Benjamin Johannes January 2013 (has links)
Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013 / Automatic Guided Vehicles (AGVs) are being used more frequently in manufacturing environments. These AGVs are navigated in many different ways, using multiple types of sensors to detect the environment: distance, obstacles, and a set route. Different algorithms or methods then use this environmental information to navigate and control the AGV. One aim of the research was to develop a platform that could easily be reconfigured for alternative route applications using vision. In this research such environment sensors were replaced and/or minimised by a single omnidirectional webcam stream using a custom-developed mirror and Perspex tube setup. The area of interest in each frame was extracted, saving computational resources and time. Using image processing, the vehicle was navigated along a predetermined route. Different edge detection and segmentation methods were investigated on this vision signal for route and sign navigation. Prewitt edge detection was eventually implemented, with Hough transforms used for border detection and Kalman filtering to minimise border-detection noise and keep the vehicle on the navigated route. Reconfigurability was added to the route layout through coloured signs incorporated in the navigation process. The result was the control of a number of AGVs, each on its own designated colour-signed route, which the operator could reconfigure with no programming alteration or intervention. The YCbCr colour space was used to detect specific control signs for alternative colour route navigation. The result was used to generate commands controlling the AGV through serial commands sent over a laptop's Universal Serial Bus (USB) port, with a PIC microcontroller interface board driving the motors by means of pulse-width modulation (PWM). A complete MATLAB® software development platform was used, implementing written M-files, Simulink® models, masked function blocks and .mat files for sourcing the workspace variables and generating executable files. This continuous development system lends itself to speedy evaluation and implementation of image processing options on the AGV. All the work done in the thesis was validated by simulations using actual data and by physical experimentation.
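Of the techniques named above, Prewitt edge detection is simple enough to sketch directly. The hand-rolled correlation below (a hypothetical `prewitt_edges`, not the thesis's MATLAB implementation) computes the Prewitt gradient magnitude and is exercised on a synthetic vertical step edge.

```python
import numpy as np

def prewitt_edges(img):
    """Prewitt gradient magnitude of a 2-D grayscale float image.

    Correlates the image with the 3x3 Prewitt kernels and returns
    the gradient magnitude on the (h-2) x (w-2) valid region.
    """
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A vertical step edge: the response concentrates on the boundary columns
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag = prewitt_edges(img)
```

In the thesis's pipeline the resulting edge map would feed the Hough transform for border detection; here the step produces a clean two-column ridge of responses.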
105

Uncalibrated robotic visual servo tracking for large residual problems

Munnae, Jomkwun 17 November 2010 (has links)
In visually guided control of a robot, a large residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods, which are effective for small residual problems. The solution used in this study switches between a full quasi-Newton method for the large residual case and a quasi-Gauss-Newton method for the small residual case. Visual servoing that handles large residual problems while tracking a moving target has not previously appeared in the literature. For large residual problems, various Hessian approximations are introduced, including an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method has the advantage of fast convergence, the quasi-Gauss-Newton step is used once the iteration is sufficiently near the desired solution. A switching algorithm combines a full quasi-Newton method and a quasi-Gauss-Newton method; switching occurs when the image error norm falls below a heuristically selected switching criterion. An adaptive forgetting factor called the dynamic adaptive forgetting factor (DAFF) is presented. The DAFF method is a heuristic scheme that determines the forgetting factor value based on the image error norm. Compared to other existing adaptive forgetting factor schemes, the DAFF method yields the best performance in both convergence time and RMS error. Simulation results verify the validity of the proposed switching algorithms with the DAFF method for large residual problems. The switching MBFGS algorithm with the DAFF method significantly improves tracking performance in the presence of noise. This work is the first successfully developed model-independent, vision-guided control for large residual problems with the capability to stably track a moving target with a robot.
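The switching idea can be sketched as a single step-selection rule: use the Gauss-Newton Hessian approximation J^T J when the image error is small, and retain an extra BFGS-style approximation B of the residual curvature term when it is large. The function and threshold below are illustrative assumptions; the thesis's MBFGS/DFN-BFGS updates of B are not reproduced.

```python
import numpy as np

def servo_step(J, e, B=None, threshold=0.5):
    """Choose the update direction based on the image-error norm.

    Small residual: Gauss-Newton step  dq = -(J^T J)^{-1} J^T e.
    Large residual: full quasi-Newton step that keeps a BFGS-style
    approximation B of the second-order residual term in the Hessian.
    """
    JTJ = J.T @ J
    if np.linalg.norm(e) < threshold or B is None:
        H = JTJ            # Gauss-Newton: drop the residual term
    else:
        H = JTJ + B        # full quasi-Newton: keep the residual term
    return -np.linalg.solve(H, J.T @ e)

J = np.array([[1.0, 0.0], [0.0, 2.0]])
B = 0.5 * np.eye(2)
step_gn = servo_step(J, np.array([0.1, 0.1]), B)   # small error branch
step_qn = servo_step(J, np.array([2.0, 2.0]), B)   # large error branch
```

Near the solution the residual term vanishes, which is why dropping B there recovers the fast local convergence of Gauss-Newton that the abstract mentions.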
106

Obstacle detection using a monocular camera

Goroshin, Rostislav 19 May 2008 (has links)
The objective of this thesis is to develop a general obstacle segmentation algorithm for use on board a ground-based unmanned vehicle (GUV). The algorithm processes video data captured by a single monocular camera mounted on the GUV. We make the assumption that the GUV moves on a locally planar surface representing the ground plane. We start by deriving the equations of the expected motion field (observed by the camera) induced by the motion of the robot on the ground plane. Given an initial view of a presumably static scene, this motion field is used to generate a predicted view of the same scene after a known camera displacement. This predicted image is compared to the actual image taken at the new camera location by means of an optical flow calculation. Because the planar assumption is used to generate the predicted image, portions of the image that mismatch the prediction correspond to salient feature points on objects lying above or below the ground plane; we consider these objects obstacles for the GUV. We assume that these salient feature points (called seed pixels) capture the color statistics of the obstacle and use them to initialize a Bayesian region-growing routine to generate a full obstacle segmentation. Alignment of the seed pixels with the obstacle is not guaranteed due to the aperture problem; however, successful segmentations were obtained for natural scenes. The algorithm was tested offline using video captured by a camera mounted on a GUV.
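A toy version of the predict-and-compare step: shift the previous frame by the flow the planar ground would induce (here simplified to a uniform column shift rather than the full homography-induced motion field the thesis derives) and flag pixels that disagree with the prediction. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def obstacle_mask(prev, curr, ground_shift, tol=0.1):
    """Flag pixels whose motion disagrees with the ground-plane prediction.

    Predicts the current frame by shifting the previous one with the
    flow a planar ground would induce (simplified to a uniform column
    shift), then thresholds the difference image.
    """
    pred = np.roll(prev, ground_shift, axis=1)
    return np.abs(curr - pred) > tol

# Ground texture moves 2 px; an "obstacle" patch moves 4 px instead.
rng = np.random.default_rng(2)
prev = rng.uniform(size=(20, 40))
curr = np.roll(prev, 2, axis=1)                              # planar motion...
curr[5:10, 10:20] = np.roll(prev, 4, axis=1)[5:10, 10:20]    # ...except obstacle
mask = obstacle_mask(prev, curr, ground_shift=2)
```

Pixels flagged inside the obstacle patch play the role of the abstract's seed pixels; in the full algorithm their color statistics would initialize the Bayesian region-growing stage.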
107

Stochastically optimized monocular vision-based navigation and guidance

Watanabe, Yoko. January 2007 (has links)
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
108

Uma Nova Abordagem para Identificação e Reconhecimento de Marcos Naturais Utilizando Sensores RGB-D / A New Approach for Identification and Recognition of Natural Landmarks Using RGB-D Sensors

Castro, André Luiz Figueiredo de 17 February 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / With the advance of research on mobile robot localization algorithms, the need for natural landmark identification and recognition has increased. Detecting natural landmarks is a challenging task because their appearance can differ in shape and design, and they are influenced by the environment's illumination. As an example, a typical 2D object recognition algorithm may not be able to handle the large optical variety of doors and staircases in long corridors. In another direction, recent improvements in low-cost 3D sensors (of the RGB-D type) enable robots to perceive the environment as a 3D spatial structure. Using this new technology, an algorithm for natural landmark identification and recognition based on images acquired from an RGB-D camera is proposed. During the identification phase, the first step in working with landmarks, the algorithm exploits basic structural knowledge about the landmarks by extracting their edges and creating a cloud of edge points. Next, in the recognition phase, the edges are fed to a proposed on-the-fly unsupervised recognition algorithm to demonstrate the effectiveness of the approach in recognizing doors and staircases. Two recognition methods have been proposed, and results show that their overall accuracy exceeds 96%. Future work proposes combining the two methods for better recognition results, including new objects such as drinking fountains and waste bins, and comparing this modified approach with approaches that require training, such as k-nearest neighbors, Bayes and neural networks.
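The identification phase (extracting edges and back-projecting them into an edge point cloud) can be sketched with standard pinhole back-projection. Depth-gradient thresholding stands in for the thesis's edge extraction, and the helper name and all parameters are illustrative assumptions.

```python
import numpy as np

def edge_point_cloud(depth, fx, fy, cx, cy, grad_thresh):
    """Back-project depth-edge pixels into a 3-D edge point cloud.

    Edges are taken where the depth gradient is large (a simple proxy
    for structural edges); each edge pixel (u, v) with depth Z maps to
    X = (u - cx) Z / fx,  Y = (v - cy) Z / fy.
    """
    gy, gx = np.gradient(depth)
    edges = np.hypot(gx, gy) > grad_thresh
    v, u = np.nonzero(edges)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# A doorway-like depth step from 3 m to 1 m yields edge points only
# along the step boundary.
depth = np.full((40, 40), 3.0)
depth[:, 20:] = 1.0
cloud = edge_point_cloud(depth, fx=500, fy=500, cx=20, cy=20, grad_thresh=0.5)
```

The resulting cloud of edge points is the kind of sparse 3-D structure the recognition phase would then classify as a door, staircase, or other landmark.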
109

Uma aplicação de navegação robótica autônoma através de visão computacional estéreo / An application of autonomous robotic navigation using stereo computer vision

Diaz Espinosa, Carlos Andrés 16 August 2018 (has links)
Advisor: Paulo Roberto Gardel Kurka / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Abstract: The present work describes a technique for autonomous navigation using stereoscopic camera images to estimate the movement of a robot in an unknown environment. A one-dimensional image point correlation method is developed for the identification of homologous points in two images of a scene. Edge or contour segmentation methods are used to extract the principal characteristics of the images. A depth map is built, via a triangulation process, for the image points with greatest similarity among the visible objects in the scene. Finally, the two-dimensional movement of the robot is estimated through epipolar relations between two or more correlated points in pairs of images. Experiments in virtual environments and practical robot tests verify the viability and robustness of the methods for robotic navigation applications. / Master's in Mechanical Engineering (Solid Mechanics and Mechanical Design)
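The triangulation step admits a compact sketch for the rectified-stereo special case, where depth follows directly from disparity as Z = f B / d. This is a hedged simplification: the thesis works with general epipolar relations rather than assuming rectified images, and the function and numbers below are illustrative.

```python
import numpy as np

def triangulate_depth(disparity, focal_px, baseline_m):
    """Depth map from a rectified stereo pair: Z = f * B / d.

    Pixels with zero (or negative) disparity have no finite depth
    and are mapped to infinity.
    """
    disparity = np.asarray(disparity, float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# 700 px focal length and a 12 cm baseline: a 21 px disparity is 4 m away.
d = np.array([[21.0, 42.0], [0.0, 84.0]])
Z = triangulate_depth(d, focal_px=700.0, baseline_m=0.12)
```

The inverse relationship between disparity and depth is why stereo depth resolution degrades quadratically with distance, a practical limit for the navigation experiments described.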
110

'Theta'-FAMs : memórias associativas fuzzy baseadas em funções-'theta' / 'Theta'-FAMs : fuzzy associative memories based on 'theta'-functions

Esmi, Estevão, 1982- 25 August 2018 (has links)
Advisor: Peter Sussner / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica / Abstract: Most fuzzy associative memories (FAMs) in the literature correspond to neural networks with a single layer of weights that distributively stores the information about the associations to be memorized. The main applications of these types of associative memory are found in fuzzy rule-based systems. In contrast, this thesis introduces the class of T-fuzzy associative memories (T-FAMs), which represent fuzzy neural networks with two layers. Particular cases of T-FAMs, called (dual) S-FAMs and E-FAMs, are based on fuzzy subsethood and equivalence measures. Theoretical results are provided concerning the storage and error-correction capabilities of T-FAMs. Furthermore, a general training algorithm for T-FAMs is introduced that is guaranteed to converge in a finite number of iterations, along with an alternative training algorithm for a certain type of E-FAM that not only adjusts the parameters of the corresponding network but also automatically determines its topology. The classification rates produced by T-FAMs are compared with those of some well-known classifiers on several benchmark classification problems available on the internet. Finally, the T-FAM approach is successfully applied to a problem of vision-based self-localization in mobile robotics. / Doctorate in Applied Mathematics
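S-FAMs are built on fuzzy subsethood measures. As one classical instance (Kosko's measure, which may differ from the exact definition used in the thesis), the degree to which fuzzy set A is contained in B can be computed as follows; the helper name and example sets are illustrative.

```python
import numpy as np

def subsethood(a, b):
    """Kosko's fuzzy subsethood degree of A in B:
    S(A, B) = sum_i min(a_i, b_i) / sum_i a_i.
    """
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return np.minimum(a, b).sum() / a.sum()

a = np.array([0.2, 0.5, 0.8])
b = np.array([0.4, 0.5, 0.6])
s = subsethood(a, b)   # numerator 0.2 + 0.5 + 0.6 = 1.3, denominator 1.5
```

In an S-FAM-style classifier, such a measure scores how well an input fuzzy set matches each stored pattern, with the hidden layer computing one subsethood value per association.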
