  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Stereo vision based obstacle avoidance in indoor environments

Chiu, Tekkie Tak-Kei, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2009 (has links)
This thesis presents an indoor obstacle avoidance system for a car-like mobile robot. The system consists of stereo vision, map building, and path planning. Stereo vision is performed on stereo image pairs to create a geometric map of the environment, using a fast sparse stereo approach. Because different areas of the image have different optimal disparity ranges, a multi-pass method that combines results at different disparity ranges is proposed. To reduce computational complexity, matching is limited to areas that are likely to generate useful data. The stereo vision system therefore outputs a more complete disparity map. Map building converts the disparity map into map coordinates using triangulation and generates a list of obstacles. Occupancy grids are built to support a hierarchical, fast collision detection method, which is used by the path planner. A steering-set path planner calculates a path that can be directly used by a car-like mobile robot. An adaptive approach using occupancy grid information is proposed to improve efficiency: with a non-fixed steering set, the path planner spends less computation time in areas away from obstacles. The path planner populates a discrete tree to generate a smooth path, and two tree population methods were trialled. The methods were implemented and tested on a real car-like mobile robot.
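The disparity-to-map step described above follows standard stereo triangulation. Below is a minimal Python/NumPy sketch of that idea, not taken from the thesis — the calibration values, grid resolution and grid layout are all hypothetical:

```python
import numpy as np

def disparity_to_map(disparity, f_px, baseline_m, cx, cy,
                     cell_size=0.1, grid_shape=(200, 200)):
    """Convert a disparity map to 3D points and mark a 2D occupancy grid.

    f_px, baseline_m, cx, cy are hypothetical calibration values
    (focal length in pixels, baseline in metres, principal point).
    """
    rows, cols = disparity.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    valid = disparity > 0                          # ignore unmatched pixels

    # Standard triangulation: Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f
    Z = f_px * baseline_m / disparity[valid]
    X = (u[valid] - cx) * Z / f_px
    Y = (v[valid] - cy) * Z / f_px

    # Project the points onto an X-Z (ground-plane) occupancy grid
    grid = np.zeros(grid_shape, dtype=bool)
    gx = (X / cell_size + grid_shape[1] // 2).astype(int)
    gz = (Z / cell_size).astype(int)
    inside = (gx >= 0) & (gx < grid_shape[1]) & (gz >= 0) & (gz < grid_shape[0])
    grid[gz[inside], gx[inside]] = True            # cells occupied by obstacles
    return np.stack([X, Y, Z], axis=1), grid
```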
12

Stereo imaging and obstacle detection methods for vehicle guidance

Zhao, Jun, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2008 (has links)
With modern computing power, developing intelligent vehicles is fast becoming a reality. An intelligent vehicle is a vehicle equipped with sensors and computing hardware that allow it to perceive the world around it and to decide on appropriate action. Vision cameras are a good choice for sensing the environment. One key task of the camera in an intelligent vehicle is to detect and localise obstacles, as a prerequisite for path planning. Stereo vision based obstacle detection is used in this research. It does not analyse the semantic meaning of image features but directly measures the 3-D coordinates of image pixels, and is thus suitable for obstacle detection in an unknown environment. In this research, a novel correlation-based stereo vision method is developed that greatly improves accuracy while maintaining real-time performance. Since a vision system provides a large amount of data, extracting refined information can be complex. In obstacle detection, the task is to distinguish obstacle pixels from ground pixels in the disparity image. The V-disparity image approach is used in this research to detect the ground plane; however, this approach relies heavily on sufficient road features. A correlation method is therefore developed to locate the ground plane in the disparity image even without significant road features. Moreover, traditional V-disparity images have difficulty detecting non-flat ground, which limits their applications. This research also develops a method to detect non-flat ground using V-disparity images, greatly widening their applicability.
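The V-disparity representation mentioned above can be illustrated with a short sketch. This shows only the generic construction (a per-row histogram of disparities, in which a flat road appears as a slanted line and obstacles as near-vertical segments), not the correlation-based ground-plane method the thesis develops; the disparity range is a hypothetical parameter:

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build a V-disparity image: one disparity histogram per image row."""
    rows = disparity.shape[0]
    v_disp = np.zeros((rows, max_disp), dtype=np.int32)
    for r in range(rows):
        d = disparity[r]
        d = d[(d > 0) & (d < max_disp)].astype(int)   # keep valid disparities
        np.add.at(v_disp[r], d, 1)                    # histogram for this row
    return v_disp
```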
13

Intelligent Stereo Video Monitoring System for Paramedic Helmet

Liu, Yang January 2017 (has links)
During the first-aid process, when patients are threatened by poor medical conditions, ambulance paramedics are required to administer emergency treatment based on instructions provided by a remote emergency doctor through voice communication. However, voice communication is limited in how much detailed information about the patient it can convey. This thesis presents a framework for a stereoscopic and intelligent telemedicine system that provides 3D live video communication between paramedics and emergency doctors. The proposed system captures 3D video from a headset carried by the paramedics, transmits the video through wireless live streaming, and displays the video with a 3D effect for emergency doctors in the hospital. The video can be analyzed to extract information about the patient through embedded algorithms such as face detection. In this thesis, the hardware, the functional mechanism and the face detection algorithm are introduced separately. The hardware of the system consists of a paramedic headset, a server box and a 3D PC, which are used to capture 3D video, transmit video through live streaming and display video with a stereo effect, respectively. The functional mechanism includes two subsystems, one for pushing the stereo video to multiple live streams and one for displaying the 3D video from the live stream. In order to detect patient information from the video, a multi-task face detection algorithm based on deep learning is applied to analyze the stereo video. We improved the face detection neural network by utilizing 1 × 1 convolutional layers and retrained the network with transfer learning to achieve better and faster performance. The system achieved good and stable performance in network delay (0.0489 ms) and objective video quality evaluations. The face detection algorithm achieved notable accuracy (91.78% on the FDDB dataset) and efficiency (19.71 ms/frame).
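As a rough illustration of the 1 × 1 convolution idea mentioned above, the following PyTorch sketch shows a slim detection head that reduces channel count before producing face/box outputs. The channel sizes and the two output branches are hypothetical and not taken from the thesis:

```python
import torch
import torch.nn as nn

class SlimDetectionHead(nn.Module):
    """Illustrative head using 1x1 convolutions to cut computation."""
    def __init__(self, in_channels=128):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, 32, kernel_size=1)  # 1x1 bottleneck
        self.score = nn.Conv2d(32, 2, kernel_size=1)   # face / non-face logits
        self.bbox = nn.Conv2d(32, 4, kernel_size=1)    # bounding-box offsets

    def forward(self, feature_map):
        x = torch.relu(self.reduce(feature_map))
        return self.score(x), self.bbox(x)
```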
14

Aplikace stereovize a počítačového vidění / Computer vision and stereo vision

Bubák, Martin January 2014 (has links)
This thesis describes the use of the software tool Computer Vision System Toolbox to create computer vision applications. The work begins with background research on image acquisition and image representation using colour models, followed by a description of epipolar geometry and of the Computer Vision System Toolbox itself. The next section deals with the setup of the Basler cameras used and the processing of the captured images. A description of how to create object detection applications follows, after which applications for creating depth maps are introduced.
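The thesis works in MATLAB's Computer Vision System Toolbox; as a loose analogue only, the following Python/OpenCV sketch computes a disparity map and a depth map from a rectified stereo pair. The file names, matcher settings and calibration values are hypothetical:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # must be a multiple of 16
    blockSize=7,
)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Depth follows from Z = f * B / d (focal length and baseline hypothetical)
f_px, baseline_m = 700.0, 0.12
depth = np.zeros_like(disparity)
np.divide(f_px * baseline_m, disparity, out=depth, where=disparity > 0)
```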
15

A Computer Vision Tool For Use in Horticultural Research

Thoreson, Marcus Alexander 13 February 2017 (has links)
With growing concerns about global food supply and the environmental impacts of modern agriculture, demand for horticultural research is increasing. While plant genetics research has seen increased throughput from recent technological advances, plant phenotypic research throughput has lagged behind. Improvements in open-source image-processing software and image-capture hardware have created an opportunity to develop competitively priced, faster data-acquisition tools. Such tools could collect plant phenotype measurements on a much larger scale without sacrificing data quality. This paper demonstrates the feasibility of creating such a tool. The resulting design used stereo vision and image-processing routines from the OpenCV project to measure a representative collection of observable plant traits such as leaflet length and plant height. After the stereo camera was assembled and calibrated, visual and stereo images of potato plant canopies and tubers (potatoes) were collected. By processing the visual data, the meaningful regions of the image (the canopy, the leaflets, and the tubers) were identified. The same regions in the stereo images were used to determine plant physical geometry, from which the desired plant measurements were extracted. Using this approach, the tool had an average accuracy of 0.15 inches for distance measurements. Additionally, the tool detected vegetation, tubers, and leaves with average Dice indices of 0.98, 0.84, and 0.75 respectively. To compare the tool's utility with that of traditional implements, a study was conducted on a population of 27 potato plants belonging to 9 separate genotypes. Both the newly developed and the traditional measurement techniques were used to collect measurements of a variety of the plants' characteristics. A multiple linear regression of the plant characteristics on the plants' genetic data showed that the measurements collected by hand were generally better correlated with genetic characteristics than those collected using the developed tool; the average adjusted coefficient of determination for hand measurements was 0.77, while that for the tool measurements was 0.66. Though the aggregation of this platform's results is unsatisfactory, this work demonstrates that such an alternative to traditional data-collection tools is certainly attainable. / Master of Science
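The Dice indices reported above compare a predicted segmentation mask with a reference mask. A minimal sketch of that metric, assuming boolean masks of equal shape (not the thesis's own evaluation code):

```python
import numpy as np

def dice_index(predicted_mask, reference_mask):
    """Dice similarity between two binary segmentation masks."""
    predicted_mask = predicted_mask.astype(bool)
    reference_mask = reference_mask.astype(bool)
    intersection = np.logical_and(predicted_mask, reference_mask).sum()
    total = predicted_mask.sum() + reference_mask.sum()
    if total == 0:
        return 1.0          # both masks empty: treat as a perfect match
    return 2.0 * intersection / total
```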
16

Semi-Dense Stereo Reconstruction from Aerial Imagery for Improved Obstacle Detection

Donnelly, James Joseph 22 November 2019 (has links)
Visual perception has been a significant subject of robotics research for decades, but progress has accelerated in recent years as both the technology and the community have become more prepared to take on new challenges with autonomous systems. In this thesis, a framework for 3D reconstruction using a stereo camera for the purpose of obstacle detection and mapping is presented. In this application, a UAV works collaboratively with a UGV to provide high-level information about the environment using a downward-facing stereo camera. The approach uses frame-to-frame SURF feature matching to detect candidate points within the camera image. These feature points are projected into a sparse cloud of 3D points using stereophotogrammetry, and ICP registration estimates the rigid transformation between frames. The RTK-GPS-constrained pose estimate from the UAV is fused with the feature-matched estimate to align the reconstruction and eliminate drift. The reconstruction was tested on both simulated and real data. The results indicate that this approach improves frame-to-frame registration and produces a well-aligned reconstruction for a single pass compared to using the raw UAV position estimate alone. However, multi-pass registration errors occur on the order of about 0.6 meters between parallel passes, with approximately 2 degrees of local rotation error, when compared to a reconstruction produced with Agisoft Metashape. The proposed system performed at an average frame rate of about 1.3 Hz compared to Agisoft at 0.03 Hz. Overall, the system improved obstacle registration and can perform online within existing ROS frameworks. / Master of Science / Visual perception has been a significant subject of robotics research for decades, but progress has accelerated in recent years as both the technology and the community have become more prepared to take on new challenges with autonomous systems. In this thesis, a framework for 3D reconstruction using cameras for the purpose of obstacle detection and mapping is presented. In this application, a UAV works collaboratively with a UGV to provide high-level information about the environment using a downward-facing stereo camera. The approach uses features extracted from camera images to detect candidate points to be aligned. These feature points are projected into a sparse cloud of 3D points using stereo triangulation techniques. The 3D points are aligned using an iterative solver to estimate the translation and rotation between frames. The RTK (Real Time Kinematic) GPS-constrained position and orientation estimate from the UAV is combined with the feature-matched estimate to align the reconstruction and eliminate accumulated errors. The reconstruction was tested on both simulated and real data. The results indicate that this approach improves frame-to-frame registration and produces a well-aligned reconstruction for a single pass compared to using the raw UAV position estimate alone. However, multi-pass registration errors occur on the order of about 0.6 meters between parallel passes that overlap, with approximately 2 degrees of local rotation error, when compared to a reconstruction produced with the commercial product Agisoft. The proposed system performed at an average frame rate of about 1.3 Hz compared to Agisoft at 0.03 Hz. Overall, the system improved obstacle registration and can perform online within existing Robot Operating System frameworks.
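A minimal sketch of the sparse stereo step described above, with ORB standing in for SURF (SURF requires the opencv-contrib build) and hypothetical projection matrices from the stereo calibration; the subsequent ICP registration and RTK-GPS fusion are not covered here:

```python
import cv2
import numpy as np

def sparse_stereo_points(img_left, img_right, P_left, P_right, max_matches=500):
    """Match features across a stereo pair and triangulate a sparse 3D cloud."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r),
                     key=lambda m: m.distance)[:max_matches]

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T  # 2 x N
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T  # 2 x N

    # Triangulate to homogeneous 4 x N points, then normalise to 3D
    pts_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
    return (pts_h[:3] / pts_h[3]).T                               # N x 3 cloud
```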
17

Capacités audiovisuelles en robot humanoïde NAO / Audio-visual capabilities in humanoid robot NAO

Sanchez-Riera, Jordi 14 June 2013 (has links)
In this thesis we plan to investigate the complementarity of auditory and visual sensory data for building a high-level interpretation of a scene. The audiovisual (AV) input received by the robot is a function of both the external environment and the robot's actual location, which is closely related to its actions. Current research in AV scene analysis has tended to focus on fixed perceivers. However, psychophysical evidence suggests that humans use small head and body movements in order to optimize the location of their ears with respect to the source. Similarly, by walking or turning, the robot may be able to improve the incoming visual data. For example, in binocular perception, it is desirable to reduce the viewing distance to an object of interest. This allows the 3D structure of the object to be analyzed at a higher depth resolution.
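The last point can be made concrete with the usual stereo depth-error relation: since Z = fB/d, a fixed disparity error grows quadratically with distance, so reducing the viewing distance sharply improves depth resolution. A small illustrative calculation with hypothetical rig parameters (not values from the thesis):

```python
# For a stereo rig, Z = f*B/d, so a disparity error delta_d gives a depth
# error of roughly delta_Z ~ Z**2 * delta_d / (f * B).
f_px, baseline_m, delta_d = 600.0, 0.10, 0.5   # pixels, metres, pixels

for Z in (1.0, 3.0):                            # viewing distances in metres
    delta_z = Z**2 * delta_d / (f_px * baseline_m)
    print(f"at {Z:.1f} m the depth uncertainty is about {delta_z*100:.1f} cm")
```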
18

Uma contribuição ao desenvolvimento de sistemas baseados em visão estéreo para o auxílio a navegação de robôs móveis e veículos inteligentes / A contribution to the development of stereo-vision-based systems to aid the navigation of mobile robots and intelligent vehicles

Fernandes, Leandro Carlos 04 December 2014 (has links)
This thesis presents a contribution to the development of computer systems, based mainly on computer vision, used to aid the navigation of mobile robots and intelligent vehicles. Initially, we propose a computer system architecture for intelligent vehicles intended to support both the driver, assisting in the driving task, and autonomous control, providing greater safety and autonomy for vehicle traffic in urban areas, on highways and even in rural areas. This architecture has been refined and validated on the CaRINA I and CaRINA II platforms (Carro Robótico Inteligente para Navegação Autônoma), which were also developed and researched as part of this thesis and allowed practical experimentation with the proposed concepts. In the context of intelligent and autonomous vehicles, sensors that provide 3D perception of the environment play a very important role, enabling obstacle avoidance and autonomous navigation, and lower-cost sensors have been sought in order to make commercial applications viable. 
Stereo cameras are devices that meet both requirements (cost and 3D perception), and they are the focus of the new automatic calibration method proposed in this thesis. The proposed method estimates the extrinsic parameters of a stereo camera system through an evolutionary process that considers only the consistency and quality of some elements of the scene in the resulting depth map. This is an original form of calibration that allows a user without deep knowledge of stereo vision to adjust the camera system for new configurations and needs. The system was tested with real images and obtained very promising results compared to traditional stereo camera calibration methods, which rely on an iterative parameter estimation process based on presenting a checkerboard pattern. The method is also a promising approach for fusing data from cameras and other sensors, allowing the transformation matrices (the system's extrinsic parameters) to be adjusted so that data from the different sensors are represented and grouped in a single reference frame.
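A toy sketch of the evolutionary idea described above: candidate extrinsic parameters are perturbed and scored by the quality of the depth map they produce. The parameter encoding, mutation scheme and fitness callback are placeholders, not the thesis's implementation:

```python
import numpy as np

def calibrate_extrinsics(score_depth_map, initial_params,
                         generations=50, population=20, sigma=0.01, rng=None):
    """Toy evolutionary search over stereo extrinsics (3 rotation + 3 translation).

    score_depth_map(params) is a user-supplied callback that rectifies the
    image pair with the candidate extrinsics, computes a depth map and
    returns a quality score (higher is better), e.g. the fraction of
    consistent disparities on selected scene elements.
    """
    rng = rng or np.random.default_rng(0)
    best = np.asarray(initial_params, dtype=float)
    best_score = score_depth_map(best)
    for _ in range(generations):
        # Mutate the current best estimate and keep the fittest candidate
        candidates = best + rng.normal(0.0, sigma, size=(population, best.size))
        scores = np.array([score_depth_map(c) for c in candidates])
        if scores.max() > best_score:
            best, best_score = candidates[scores.argmax()], scores.max()
    return best, best_score
```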
20

Obstacle detection using stereo vision for unmanned ground vehicles

Olsson, Martin January 2009 (has links)
No description available.
