171

Um sistema de visão para navegação robusta de uma plataforma robótica semi-autônoma / A vision system for robust navigation of a semi-autonomous robotic platform

Bezerra, João Paulo de Araújo 19 May 2006 (has links)
Made available in DSpace on 2014-12-17T14:55:03Z (GMT). No. of bitstreams: 1 JoaoPAB.pdf: 1121359 bytes, checksum: 0140c2cdd16358b4d1f4ee69b79c5b3c (MD5) Previous issue date: 2006-05-19 / Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, the robot must be given the ability to navigate through its environment safely, that is, without colliding with objects. This requires strategies that make obstacle detection possible. In this work, we address the problem by proposing a system that collects sensory information and estimates the likelihood of obstacles occurring in the mobile robot's path. Stereo cameras positioned parallel to each other in a structure coupled to the robot are employed as the main sensory device, making it possible to generate a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This allows the high-level decision processes to perform obstacle avoidance in real time. The system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case).
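The core of such a pipeline — computing a disparity map from a parallel stereo pair, with the images downsampled first to cut execution time — can be sketched with OpenCV's block-matching stereo. This is a minimal illustration in the spirit of the abstract, not the thesis's implementation; the file names and parameter values are assumptions.

```python
import cv2

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Data reduction: halve the resolution before matching, in the spirit of the
# data-reduction strategy the abstract describes.
left_small = cv2.pyrDown(left)
right_small = cv2.pyrDown(right)

# Block-matching stereo; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_small, right_small)  # fixed-point, scaled by 16

# Convert to float disparities; large values correspond to nearby obstacles.
disparity = disparity.astype("float32") / 16.0
```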
172

Um método para determinação da profundidade combinando visão estéreo e autocalibração para aplicação em robótica móvel / A method for depth determination combining stereo vision and self-calibration for mobile robotics applications

Sousa Segundo, José Sávio Alves de 30 April 2007 (has links)
Made available in DSpace on 2014-12-17T14:55:09Z (GMT). No. of bitstreams: 1 JoseSASS.pdf: 1375081 bytes, checksum: 1561bdbc1ba8feb7671abf9ebca84641 (MD5) Previous issue date: 2007-04-30 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work proposes a method to determine the depth of objects in a scene using a combination of stereo vision and self-calibration techniques. By determining the relative distance between visualized objects and a robot equipped with a stereo head, it is possible to navigate in unknown environments. Stereo vision techniques supply a depth measure from the combination of two or more images of the same scene. Obtaining depth estimates of the objects in the scene requires a reconstruction of the scene geometry, which in turn requires the relationship between three-dimensional world coordinates and two-dimensional image coordinates. Once the cameras' intrinsic parameters are known, the two coordinate systems can be related. These parameters are usually obtained through geometric camera calibration, generally performed by correlating image features of a calibration pattern of known dimensions. Camera self-calibration allows the intrinsic parameters to be obtained without a known calibration pattern, so they can be computed and updated while the robot moves through an unknown environment. This work presents a self-calibration method based on three-dimensional polar coordinates for representing image features; the representation is determined by relating image features to the horizontal and vertical opening angles of the cameras. Using these polar coordinates, the scene geometry can be reconstructed precisely. Combining the proposed techniques yields an estimate of the depth of the objects in the scene, allowing an autonomous mobile robot to navigate in an unknown environment.
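For a rectified parallel stereo pair, the basic depth-from-disparity relation underlying both of the preceding theses is Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A small sketch with illustrative parameter values (not taken from either thesis):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (metres) from disparity (pixels) for a rectified pair.

    Z = f * B / d; invalid (non-positive) disparities map to infinity.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Assumed values: 700 px focal length, 12 cm baseline.
print(depth_from_disparity([35.0, 7.0], focal_px=700.0, baseline_m=0.12))
# -> [ 2.4 12. ] metres
```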
173

Evaluating Vivado High-Level Synthesis on OpenCV Functions for the Zynq-7000 FPGA

Johansson, Henrik January 2015 (has links)
Increasingly complex and intricate Computer Vision algorithms combined with higher-resolution image streams put ever bigger demands on processing power. CPU clock frequencies are now pushing the limits of possible speeds, so CPUs have instead started growing in number of cores. The performance of most Computer Vision algorithms responds well to parallel solutions: dividing an algorithm over 4-8 CPU cores can give a good speed-up, but using chips with Programmable Logic (PL) such as FPGAs can give even more. An interesting recent addition to the FPGA family is a System on Chip (SoC) that combines a CPU and an FPGA on one chip, such as the Zynq-7000 series from Xilinx. This tight integration between the Programmable Logic and the Processing System (PS) opens the door to designs where C programs use the programmable logic to accelerate selected parts of an algorithm while still behaving like C programs. On that subject, Xilinx has introduced a new High-Level Synthesis Tool (HLST) called Vivado HLS, which can accelerate C code by synthesizing it to Hardware Description Language (HDL) code. This potentially bridges two otherwise very separate worlds: the ever-popular OpenCV library and FPGAs. This thesis focuses on evaluating Vivado HLS from Xilinx, primarily with image processing in mind, for potential use on GIMME-2: a system with a Zynq-7020 SoC and two high-resolution image sensors, tailored for stereo vision.
174

Recalage hétérogène pour la reconstruction 3D de scènes sous-marines / Heterogeneous Registration for 3D Reconstruction of Underwater Scene

Mahiddine, Amine 30 June 2015 (has links)
The survey and 3D reconstruction of underwater scenes are becoming ever more indispensable given our growing interest in studying the seabed. Most existing work in this area is based on acoustic sensors, with images often serving only as illustration. The objective of this thesis is to develop techniques for fusing heterogeneous data from a photogrammetric system and an acoustic system. The work presented here is organized in three parts. The first is devoted to processing the 2D data to improve the colors of underwater images and thereby increase the repeatability of the feature descriptors at each 2D point. We then propose a mosaic-based system for visualizing the scene in 2D. In the second part, a 3D reconstruction method from an unordered set of several images is proposed; the computed 3D data are merged with the data from the acoustic system in order to reconstruct the underwater site. In the last part of the thesis, we propose an original 3D registration method that is distinguished by the nature of the descriptor extracted at each point. The proposed descriptor is invariant to isometric transformations (rotation, translation) and copes with the multi-resolution problem. We validate the approach with a study on synthetic and real data, where we show the limits of the registration methods existing in the literature. Finally, we propose an application of our method to 3D object recognition.
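For context, the rigid (isometric) alignment step that such descriptor matching feeds into has a classical closed-form solution once point correspondences are known: the Kabsch/Procrustes method via SVD. This is a generic textbook sketch, not the thesis's registration method.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i].

    P, Q: (N, 3) arrays of corresponding 3D points (Kabsch algorithm).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t
```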
175

Měření rychlosti vozidel pomocí stereo kamery / Vehicle Speed Measurement Using Stereo Camera Pair

Najman, Pavel January 2021 (has links)
This thesis seeks to answer the question of whether it is currently possible to measure vehicle speed autonomously with a stereoscopic measurement method, with a mean error within 1 km/h, a maximum error within 3 km/h, and a standard deviation within 1 km/h. These error ranges are based on the requirements of the OIML, whose recommendations form the basis of the metrological legislation of many countries. To answer the question, a hypothesis is formulated and then tested. A method that uses a stereo camera to measure vehicle speed is proposed and evaluated experimentally. The experimental results show that the proposed method outperforms existing methods: the mean measurement error is approximately 0.05 km/h, the standard deviation of the error is below 0.20 km/h, and the maximum absolute error is below 0.75 km/h. These results lie within the required ranges and thus confirm the tested hypothesis.
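The essence of stereoscopic speed measurement is triangulating the vehicle's 3D position in two frames and dividing the displacement by the frame interval. A toy numpy sketch under assumed values (the thesis's actual pipeline, with tracking and error modelling, is far more involved):

```python
import numpy as np

def speed_kmh(p1, p2, dt_s):
    """Speed in km/h from two stereo-triangulated 3D positions (metres)
    observed dt_s seconds apart."""
    return np.linalg.norm(np.asarray(p2) - np.asarray(p1)) / dt_s * 3.6

# Assumed example: the vehicle moves 1.0 m between frames 40 ms apart.
print(speed_kmh([4.0, 1.2, 30.0], [4.0, 1.2, 29.0], dt_s=0.04))  # -> 90.0 km/h
```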
176

Sledování řidiče / Driver monitoring

Pieger, Matúš January 2021 (has links)
This master’s thesis deals with the design of systems for collecting data that describe the driver’s behaviour in a car. The data are used to detect risky behaviour that the driver may commit due to inattention caused by the use of either lower or higher levels of driving automation. The thesis first describes existing safety systems, especially those relating to the driver. It then covers the design of the necessary measuring scenes and the implementation of new systems based on processing input images obtained from an Intel RealSense D415 stereo camera. Every system is tested in a real vehicle environment. Finally, the thesis evaluates the detection reliability of the created algorithms and considers their shortcomings and possible improvements.
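Reading synchronized depth and colour frames from a D415 with Intel's pyrealsense2 bindings typically looks like the sketch below; the stream settings and the centre-pixel query are assumptions for illustration, not the thesis's capture setup.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Assumed stream settings for a D415: 640x480 depth and colour at 30 fps.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance (metres) to whatever is at the image centre, e.g. the driver.
    print(depth.get_distance(320, 240))
finally:
    pipeline.stop()
```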
177

Design of a Novel Wearable Ultrasound Vest for Autonomous Monitoring of the Heart Using Machine Learning

Goodman, Garrett G. January 2020 (has links)
No description available.
178

BINOCULAR DEPTH PERCEPTION, PROBABILITY, FUZZY LOGIC, AND CONTINUOUS QUANTIFICATION OF UNIQUENESS

Val, Petran 02 February 2018 (has links)
No description available.
179

The Stixel World

Pfeiffer, David 31 August 2012 (has links)
The Stixel World is a novel and versatile medium-level representation that efficiently bridges the gap between pixel-based processing and high-level vision. Modern stereo matching schemes can obtain a depth measurement for almost every pixel of an image in real time, allowing the application of new and powerful algorithms. However, this also results in a large amount of measurement data that has to be processed and evaluated. In vision-based driver assistance, these algorithms are executed on highly integrated low-power processing units that leave no room for computationally intense algorithms. At the same time, the growing number of independently executed vision tasks calls for new concepts to manage the resulting system complexity. These challenges are tackled by introducing a pre-processing step that extracts all required information in advance. Each Stixel approximates a part of an object along with its distance and height, segmenting the surroundings into free space and objects. The Stixel World is computed in a single unified optimization scheme, making strong use of physically motivated a priori knowledge about our man-made three-dimensional environment. Relying on dynamic programming guarantees that the globally optimal segmentation is extracted for the entire scenario. Kalman filtering techniques are used to precisely estimate the motion state of all tracked objects. Particular emphasis is put on a thorough performance evaluation, following several comparative strategies that include LIDAR, RADAR, and IMU reference sensors, manually created ground-truth data, and real-world tests.
Altogether, the Stixel World is ideally suited to serve as the basic building block for today's increasingly complex vision systems. It is an extremely compact abstraction of the actual world that gives access to the most essential information about the current scenario. Thanks to this thesis, the efficiency of subsequently executed vision algorithms and applications has improved significantly.
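A heavily simplified, threshold-based, per-column sketch of the Stixel idea follows; the thesis itself uses a global dynamic-programming optimization, which this toy version does not attempt, and the ground model and tolerance here are assumptions.

```python
def column_stixel(disp_col, ground_disp, tol=1.0):
    """Toy per-column Stixel extraction from one disparity-image column.

    disp_col[0] is the top image row; ground_disp gives the expected
    ground-plane disparity per row.  Returns (base_row, top_row,
    object_disparity), or None if the whole column is free space.
    """
    base = None
    for v in range(len(disp_col) - 1, -1, -1):       # scan bottom-up
        if abs(disp_col[v] - ground_disp[v]) > tol:  # left the ground plane
            base = v                                  # free space ends here
            break
    if base is None:
        return None
    d_obj = disp_col[base]                            # obstacle disparity
    top = base
    while top > 0 and abs(disp_col[top - 1] - d_obj) <= tol:
        top -= 1                                      # same object continues up
    return base, top, d_obj

# Assumed toy column: an obstacle at disparity 8 standing on a ground ramp.
col    = [0.0, 0.0, 8.0, 8.0, 8.0, 8.0, 14.0, 16.0]
ground = [0.0, 0.0, 2.0, 3.0, 4.0, 5.0, 14.0, 16.0]  # hypothetical ground model
print(column_stixel(col, ground))  # -> (5, 2, 8.0): obstacle spans rows 2..5
```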
180

Vision-based navigation and mapping for flight in GPS-denied environments

Wu, Allen David 15 November 2010 (has links)
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while continuing to map out new features. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to the application of vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely without drift during a GPS outage.
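The front end of such a pipeline, detecting Harris corners as candidate landmarks, can be sketched with OpenCV as below; the threshold and window sizes are assumptions, and the thesis's statistical Z-test correspondence and EKF update are not reproduced here.

```python
import cv2
import numpy as np

# Load a camera frame (file name is a placeholder).
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Harris corner response over 2x2 neighbourhoods with a 3x3 Sobel aperture.
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

# Keep the strongest responses as candidate landmark measurements.
ys, xs = np.where(response > 0.01 * response.max())
features = np.stack([xs, ys], axis=1)  # (N, 2) pixel coordinates
print(f"{len(features)} candidate feature points")
```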
