131

The design and implementation of vision-based autonomous rotorcraft landing

De Jager, Andries Matthys 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: This thesis presents the design and implementation of all the subsystems required to perform precision autonomous helicopter landings within a low-cost framework. To obtain high-accuracy state estimates during the landing phase a vision-based approach, with a downwards-facing camera on the helicopter and a known landing target, was used. An efficient monocular-view pose estimation algorithm was developed to determine the helicopter's relative position and attitude during the landing phase. This algorithm was analysed and compared to existing algorithms in terms of sensitivity, robustness and runtime. An augmented kinematic state estimator was developed to combine measurements from low-cost GPS and inertial measurement units with the high accuracy measurements from the camera system. High-level guidance algorithms, capable of performing waypoint navigation and autonomous landings, were developed. A visual position and attitude measurement (VPAM) node was designed and built to perform the pose estimation and execute the associated algorithms. To increase the node's throughput, a compression scheme is used between the image sensor and the processor to reduce the amount of data that needs to be processed. This reduces processing requirements and allows the entire system to remain on-board with no reliance on radio links. The functionality of the VPAM node was confirmed through a number of practical tests. The node is able to provide measurements of sufficient accuracy for the subsequent systems in the autonomous landing system. The functionality of the full system was confirmed in a software environment, as well as through testing using a visually augmented hardware-in-the-loop environment. / AFRIKAANSE OPSOMMING: Hierdie tesis beskryf die ontwikkeling van die substelsels wat vir akkurate outonome helikopter landings benodig word. 
'n Onderliggende doel was om al die ontwikkeling binne 'n lae-koste raamwerk te voltooi. Hoë-akkuraatheid toestande word benodig om akkurate landings te verseker. Hierdie metings is verkry deur middel van 'n optiese stelsel, bestaande uit 'n kamera gemonteer op die helikopter en 'n bekende landingsteiken, te ontwikkel. 'n Doeltreffende mono-visie posisie-en-oriëntasie algoritme is ontwikkel om die helikopter se posisie en oriëntasie, relatief tot die landingsteiken, te bepaal. Hierdie algoritme is deeglik ondersoek en vergelyk met bestaande algoritmes in terme van sensitiwiteit, robuustheid en uitvoertyd. 'n Optimale kinematiese toestandswaarnemer, wat metings van GPS en inersiële sensore kombineer met die metings van die optiese stelsel, is ontwikkel en deur simulasie bevestig. Hoë-vlak leidingsalgoritmes is ontwikkel wat die helikopter in staat stel om punt-tot-punt navigasie en die landingsprosedure uit te voer. 'n Visuele posisie-en-oriëntasie meetnodus is ontwikkel om die mono-visie posisie-en-oriëntasie algoritmes uit te voer. Om die deurset te verhoog is 'n saampersingsalgoritme gebruik wat die hoeveelheid data wat verwerk moet word, verminder. Dit het die benodigde verwerkingskrag verminder, wat verseker het dat alle verwerking op aanboordstelsels kan geskied. Die meetnodus en mono-visie algoritmes is deur middel van praktiese toetse bevestig en is in staat om metings van voldoende akkuraatheid aan die outonome landingstelsel te verskaf. Die werking van die volledige stelsel is, deur simulasies in 'n sagteware- en hardeware-in-die-lus omgewing, bevestig.
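The thesis's own VPAM algorithm is not reproduced in the abstract; as an illustration of the underlying idea only, a pinhole-camera model relates a landing target of known physical size to the helicopter's relative position. All function names and numeric values below are hypothetical, not taken from the thesis:

```python
def target_range(focal_px: float, target_width_m: float, observed_width_px: float) -> float:
    """Pinhole model: range to the target from its apparent size in the image."""
    return focal_px * target_width_m / observed_width_px

def lateral_offset(focal_px: float, range_m: float, pixel_offset: float) -> float:
    """Lateral displacement of the target centre from the camera's optical axis."""
    return range_m * pixel_offset / focal_px

# A 1 m wide target imaged 400 px wide by a camera with an 800 px focal length is 2 m away.
print(target_range(800.0, 1.0, 400.0))   # -> 2.0
```

The actual VPAM node estimates full 6-DOF pose from multiple target features, but this similar-triangles relation is the geometric core of any such monocular measurement.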
132

Melhorando a estimação de pose com o RANSAC preemptivo generalizado e múltiplos geradores de hipóteses

Gomes Neto, Severino Paulo 27 February 2014 (has links)
Made available in DSpace on 2014-12-17T15:47:04Z (GMT). No. of bitstreams: 1 SeverinoPGN_TESE.pdf: 2322839 bytes, checksum: eda5c48fde7c920680bcb8d8be8d5d21 (MD5) Previous issue date: 2014-02-27 / Camera motion estimation is one of the fundamental problems in Computer Vision, and it may be solved by several methods. Preemptive RANSAC is one of them; despite its robustness and speed, it lacks flexibility with respect to the requirements of the applications and hardware platforms that use it. In this work, we propose an improvement to the structure of Preemptive RANSAC in order to overcome such limitations and make it feasible to execute on devices with heterogeneous resources (especially low-budget systems) under tighter time and accuracy constraints. We derived from Preemptive RANSAC a function called BRUMA, which is able to generalize several preemption schemes, allowing previously fixed parameters (block size and elimination factor) to be changed according to the application's constraints. We also propose the Generalized Preemptive RANSAC method, which additionally makes it possible to set the maximum number of hypotheses an algorithm may generate. The experiments performed show the superiority of our method in the expected scenarios. Moreover, additional experiments show that multi-method hypothesis generation achieved results that are more robust to the variability in the set of evaluated motion directions. / A estimação de pose/movimento de câmera constitui um dos problemas fundamentais na visão computacional e pode ser resolvido por vários métodos. Dentre estes métodos se destaca o Preemptive RANSAC (RANSAC Preemptivo), que apesar da robustez e velocidade apresenta problemas de falta de flexibilidade em relação a requerimentos das aplicações e plataformas computacionais utilizadas. 
Neste trabalho, propomos um aperfeiçoamento da estrutura do Preemptive RANSAC para superar esta limitação e viabilizar sua execução em dispositivos com recursos variados (enfatizando os de poucas capacidades) atendendo a requisitos de tempo e precisão diversos. Derivamos do Preemptive RANSAC uma função a que chamamos BRUMA, que é capaz de generalizar vários esquemas de preempção e que permite que parâmetros anteriormente fixos (tamanho de bloco e fator de eliminação) sejam configurados de acordo com as restrições da aplicação. Propomos o método Generalized Preemptive RANSAC (RANSAC Preemptivo Generalizado) que permite ainda alterar a quantidade máxima de hipóteses a gerar. Os experimentos demonstraram superioridade de nossa proposta nos cenários esperados. Além disso, experimentos adicionais demonstram que a geração de hipóteses multimétodos produz resultados mais robustos em relação à variabilidade nos tipos de movimento executados.
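BRUMA itself is defined only in the thesis. As a sketch of what it generalizes, the classic preemptive-RANSAC schedule (due to Nistér) reduces the set of surviving hypotheses as observations are scored; below, the block size and elimination factor are exposed as parameters, which the abstract says were previously fixed. This is a simplified illustration, not the authors' code:

```python
def surviving_hypotheses(m0: int, block_size: int, elimination_factor: float, i: int) -> int:
    """Preemption schedule: number of hypotheses still alive after scoring observation i.

    With elimination_factor=0.5 this reduces to the classic halving schedule;
    other values yield different preemption schemes from the same formula.
    """
    return max(1, int(m0 * elimination_factor ** (i // block_size)))

# Starting from 500 hypotheses, halving every block of 100 observations:
print(surviving_hypotheses(500, 100, 0.5, 250))  # -> 125
```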
133

[en] A FACE RECOGNITION SYSTEM FOR VIDEO SEQUENCES BASED ON A MULTITHREAD IMPLEMENTATION OF TLD / [pt] UM SISTEMA DE RECONHECIMENTO FACIAL EM VÍDEO BASEADO EM UMA IMPLEMENTAÇÃO MULTITHREAD DO ALGORITMO TLD

CIZENANDO MORELLO BONFA 04 October 2018 (has links)
[pt] A identificação facial em vídeo é uma aplicação de grande interesse na comunidade cientifica e na indústria de segurança, impulsionando a busca por técnicas mais robustas e eficientes. Atualmente, no âmbito de reconhecimento facial, as técnicas de identificação frontal são as com melhor taxa de acerto quando comparadas com outras técnicas não frontais. Esse trabalho tem como objetivo principal buscar métodos de avaliar imagens em vídeo em busca de pessoas (rostos), avaliando se a qualidade da imagem está dentro de uma faixa aceitável que permita um algoritmo de reconhecimento facial frontal identificar os indivíduos. Propõem-se maneiras de diminuir a carga de processamento para permitir a avaliação do máximo número de indivíduos numa imagem sem afetar o desempenho em tempo real. Isso é feito através de uma análise da maior parte das técnicas utilizadas nos últimos anos e do estado da arte, compilando toda a informação para ser aplicada em um projeto que utiliza os pontos fortes de cada uma e compense suas deficiências. O resultado é uma plataforma multithread. Para avaliação do desempenho foram realizados testes de carga computacional com o uso de um vídeo público disponibilizado na AVSS (Advanced Video and Signal based Surveillance). Os resultados mostram que a arquitetura promove um melhor uso dos recursos computacionais, permitindo um uso de uma gama maior de algoritmos em cada segmento que compõe a arquitetura, podendo ser selecionados segundo critérios de qualidade da imagem e ambiente onde o vídeo é capturado. / [en] Face recognition in video is an application of great interest in the scientific community and in the surveillance industry, boosting the search for efficient and robust techniques. Nowadays, in the facial recognition field, frontal identification techniques are those with the best hit ratio when compared with other, non-frontal techniques. 
The main objective of this work is to find methods for evaluating video images in search of people (faces), assessing whether the image quality falls within an acceptable range that allows a frontal facial recognition algorithm to identify the individuals. Ways are proposed to decrease the processing load so that the maximum number of individuals in an image can be assessed without affecting real-time performance. This is achieved through an analysis of most of the techniques used in recent years and of the state of the art, compiling this information into a design that uses the strengths of each technique and compensates for its shortcomings. The outcome is a multithread platform. Performance was evaluated through computational load tests using a public video available from AVSS (Advanced Video and Signal based Surveillance). The results show that the architecture makes better use of the computational resources, allowing a wider range of algorithms in every segment of the architecture, selected according to image-quality and capture-environment criteria.
134

Vision-based trailer pose estimation for articulated vehicles

de Saxe, Christopher Charles January 2017 (has links)
Articulated Heavy Goods Vehicles (HGVs) are more efficient than conventional rigid lorries, but exhibit reduced low-speed manoeuvrability and high-speed stability. Technologies such as autonomous reversing and path-following trailer steering can mitigate this, but practical limitations of the available sensing technologies restrict their commercialisation potential. This dissertation describes the development of practical vision-based articulation angle and trailer off-tracking sensing for HGVs. Chapter 1 provides a background and literature review, covering important vehicle technologies, existing commercial and experimental sensors for articulation angle and off-tracking measurement, and relevant vision-based technologies. This is followed by an introduction to pertinent computer vision theory and terminology in Chapter 2. Chapter 3 describes the development and simulation-based assessment of an articulation angle sensing concept. It utilises a rear-facing camera mounted behind the truck or tractor, and one of two proposed image processing methods: template-matching and Parallel Tracking and Mapping (PTAM). The PTAM-based method was shown to be the more accurate and versatile method in full-scale vehicle tests. RMS measurement errors of 0.4-1.6° were observed in tests on a tractor semi-trailer (Chapter 4), and 0.8-2.4° in tests on a Nordic combination with two articulation points (Chapter 5). The system requires no truck-trailer communication links or artificial markers, and is compatible with multiple trailer shapes, but was found to have increasing errors at higher articulation angles. Chapter 6 describes the development and simulation-based assessment of a trailer off-tracking sensing concept, which utilises a trailer-mounted stereo camera pair and visual odometry. The concept was evaluated in full-scale tests on a tractor semi-trailer combination in which camera location and stereo baseline were varied, presented in Chapter 7. 
RMS measurement errors of 0.11-0.13 m were obtained in some tests, but a sensitivity to camera alignment was discovered in others which negatively affected results. A very stiff stereo camera mount with a sub-0.5 m baseline is suggested for future experiments. A summary of the main conclusions, a review of the objectives, and recommendations for future work are given in Chapter 8. Recommendations include further refinement of both sensors, an investigation into lighting sensitivity, and alternative applications of the sensors.
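The PTAM- and template-based estimators are specific to the dissertation; the quantity they both measure is simply the signed angle between the tractor and trailer headings, which can be sketched as follows (a trivial illustration with hypothetical yaw values, not the dissertation's code):

```python
import math

def articulation_angle(tractor_yaw_rad: float, trailer_yaw_rad: float) -> float:
    """Signed articulation angle between tractor and trailer, wrapped to (-pi, pi]."""
    d = trailer_yaw_rad - tractor_yaw_rad
    # atan2 of (sin, cos) wraps the difference into the principal range
    return math.atan2(math.sin(d), math.cos(d))
```

The wrapping matters for the Nordic combination with two articulation points, where summed joint angles can otherwise leave the principal range.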
135

Object detection and pose estimation of randomly organized objects for a robotic bin picking system

Skalski, Tomasz, Zaborowski, Witold January 2013 (has links)
Today modern industrial systems are almost fully automated. The high requirements regarding speed, flexibility, precision and reliability make them in some cases very difficult to create. One of the most actively researched solutions for carrying out many processes without human involvement is bin picking. Bin picking is a very complex process which integrates devices such as a robotic grasping arm, a vision system, collision-avoidance algorithms and many others. This paper describes the creation of the vision system, the most important part of the whole bin-picking system. The authors propose a model-based solution for estimating the position and orientation of the best pick-up candidate. In this method, a database created from a 3D CAD model is compared with the processed image from a 3D scanner. The paper describes in detail the database creation from the 3D STL model, the configuration of the Sick IVP 3D scanner, and the design of the comparison algorithm based on the autocorrelation function and morphological operators. The results show that the proposed solution is universal, time-efficient, robust and offers opportunities for further work.
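The authors' comparison algorithm combines autocorrelation with morphological operators and is not given in the abstract. As a hedged stand-in, normalized cross-correlation is the standard way to score how well a model-derived template matches scanner data; a 1-D toy version (not the authors' pipeline) looks like this:

```python
import math

def normalized_cross_correlation(a, b):
    """NCC of two equal-length signals; returns a similarity score in [-1, 1]."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# Identical shapes score 1.0 regardless of scale; reversed shapes score -1.0.
print(normalized_cross_correlation([1, 2, 3], [2, 4, 6]))  # -> 1.0
```

In a real bin-picking system the same scoring idea is applied over 2-D depth patches for every candidate pose in the database.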
136

Simultaneous real-time object recognition and pose estimation for artificial systems operating in dynamic environments

Van Wyk, Frans Pieter January 2013 (has links)
Recent advances in technology have increased awareness of the necessity for automated systems in people’s everyday lives. Artificial systems are more frequently being introduced into environments previously thought to be too perilous for humans to operate in. Some robots can be used to extract potentially hazardous materials from sites inaccessible to humans, while others are being developed to aid humans with laborious tasks. A crucial aspect of all artificial systems is the manner in which they interact with their immediate surroundings. Developing such a deceptively simple aspect has proven to be significantly challenging, as it not only entails the methods through which the system perceives its environment, but also its ability to perform critical tasks. These undertakings often involve the coordination of numerous subsystems, each performing its own complex duty. To complicate matters further, it is nowadays becoming increasingly important for these artificial systems to be able to perform their tasks in real-time. The task of object recognition is typically described as the process of retrieving the object in a database that is most similar to an unknown, or query, object. Pose estimation, on the other hand, involves estimating the position and orientation of an object in three-dimensional space, as seen from an observer’s viewpoint. These two tasks are regarded as vital to many computer vision techniques and regularly serve as input to more complex perception algorithms. An approach is presented which regards the object recognition and pose estimation procedures as mutually dependent. The core idea is that dissimilar objects might appear similar when observed from certain viewpoints. A feature-based conceptualisation, which makes use of a database, is implemented and used to perform simultaneous object recognition and pose estimation. 
The design incorporates data compression techniques, originally suggested by the image-processing community, to facilitate fast processing of large databases. System performance is quantified primarily on object recognition, pose estimation and execution time characteristics. These aspects are investigated under ideal conditions by exploiting three-dimensional models of relevant objects. The performance of the system is also analysed for practical scenarios by acquiring input data from a structured light implementation, which resembles that obtained from many commercial range scanners. Practical experiments indicate that the system was capable of performing simultaneous object recognition and pose estimation in approximately 230 ms once a novel object has been sensed. An average object recognition accuracy of approximately 73% was achieved. The pose estimation results were reasonable but prompted further research. The results are comparable to what has been achieved using other suggested approaches such as Viewpoint Feature Histograms and Spin Images. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
137

Calcul de pose dynamique avec les caméras CMOS utilisant une acquisition séquentielle / Dynamic pose estimation with CMOS cameras using sequential acquisition

Magerand, Ludovic 18 December 2014 (has links)
En informatique, la vision par ordinateur s’attache à extraire de l’information à partir de caméras. Les capteurs de celles-ci peuvent être produits avec la technologie CMOS que nous retrouvons dans les appareils mobiles en raison de son faible coût et d’un encombrement réduit. Cette technologie permet d’acquérir rapidement l’image en exposant les lignes de l’image de manière séquentielle. Cependant cette méthode produit des déformations dans l’image s’il existe un mouvement entre la caméra et la scène filmée. Cet effet est connu sous le nom de «Rolling Shutter» et de nombreuses méthodes ont tenté de corriger ces artefacts. Plutôt que de le corriger, des travaux antérieurs ont développé des méthodes pour extraire de l’information sur le mouvement à partir de cet effet. Ces méthodes reposent sur une extension de la modélisation géométrique classique des caméras pour prendre en compte l’acquisition séquentielle et le mouvement entre le capteur et la scène, considéré uniforme. À partir de cette modélisation, il est possible d’étendre le calcul de pose habituel (estimation de la position et de l’orientation de la scène par rapport au capteur) pour estimer aussi les paramètres du mouvement. Dans la continuité de cette démarche, nous présenterons une généralisation à des mouvements non-uniformes basée sur un lissage des dérivées des paramètres de mouvement. Ensuite nous présenterons une modélisation polynomiale du «Rolling Shutter» et une méthode d’optimisation globale pour l’estimation de ces paramètres. Correctement implémenté, cela permet de réaliser une mise en correspondance automatique entre le modèle tridimensionnel et l’image. Pour terminer nous comparerons ces différentes méthodes tant sur des données simulées que sur des données réelles et conclurons. / Computer Vision, a field of Computer Science, is about extracting information from cameras. 
Their sensors can be produced using the CMOS technology which is widely used in mobile devices due to its low cost and small size. This technology allows fast acquisition of an image by sequentially exposing the scan-lines. However, this method produces deformations in the image if there is motion between the camera and the filmed scene. This effect is known as Rolling Shutter, and various methods have tried to remove these artifacts. Instead of correcting it, previous works have shown methods to extract information on the motion from this effect. These methods rely on an extension of the usual geometric camera model that takes into account the sequential acquisition and the motion, assumed uniform, between the sensor and the scene. From this model, it is possible to extend the usual pose estimation (estimation of the position and orientation of the camera in the scene) to also estimate the motion parameters. Following on from this approach, we will present an extension to non-uniform motions based on a smoothing of the derivatives of the motion parameters. Afterwards, we will present a polynomial model of the Rolling Shutter and a global optimisation method to estimate the motion parameters. Properly implemented, this makes it possible to establish an automatic matching between the 3D model and the image. We will conclude with a comparison of all these methods using both simulated and real data.
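The thesis's polynomial model is not given in the abstract, but the core of any rolling-shutter model is that each scan-line has its own capture time, so a uniform image-plane velocity produces a row-dependent skew. A minimal sketch (readout numbers below are hypothetical, not from the thesis):

```python
def row_capture_time(row: int, frame_start_s: float, readout_per_row_s: float) -> float:
    """Scan-lines are exposed sequentially, one readout interval apart."""
    return frame_start_s + row * readout_per_row_s

def rolling_shutter_skew(row: int, velocity_px_per_s: float, readout_per_row_s: float) -> float:
    """Horizontal displacement accumulated by the time a given row is exposed,
    assuming uniform image-plane motion (the assumption the thesis relaxes)."""
    return velocity_px_per_s * row * readout_per_row_s
```

Inverting this relation — solving for the motion given the observed skew — is what turns the artifact into a measurement.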
138

Analyse des personnes dans les films stéréoscopiques / Person analysis in stereoscopic movies

Seguin, Guillaume 29 April 2016 (has links)
Les humains sont au coeur de nombreux problèmes de vision par ordinateur, tels que les systèmes de surveillance ou les voitures sans pilote. Ils sont également au centre de la plupart des contenus visuels, pouvant amener à des jeux de données très larges pour l’entraînement de modèles et d’algorithmes. Par ailleurs, si les données stéréoscopiques font l’objet d’études depuis longtemps, ce n’est que récemment que les films 3D sont devenus un succès commercial. Dans cette thèse, nous étudions comment exploiter les données additionnelles issues des films 3D pour les tâches d’analyse des personnes. Nous explorons tout d’abord comment extraire une notion de profondeur à partir des films stéréoscopiques, sous la forme de cartes de disparité. Nous évaluons ensuite à quel point les méthodes de détection de personne et d’estimation de posture peuvent bénéficier de ces informations supplémentaires. En s’appuyant sur la relative facilité de la tâche de détection de personne dans les films 3D, nous développons une méthode pour collecter automatiquement des exemples de personnes dans les films 3D afin d’entraîner un détecteur de personne pour les films non 3D. Nous nous concentrons ensuite sur la segmentation de plusieurs personnes dans les vidéos. Nous proposons tout d’abord une méthode pour segmenter plusieurs personnes dans les films 3D en combinant des informations dérivées des cartes de profondeur avec des informations dérivées d’estimations de posture. Nous formulons ce problème comme un problème d’étiquetage de graphe multi-étiquettes, et notre méthode intègre un modèle des occlusions pour produire une segmentation multi-instance par plan. Après avoir montré l’efficacité et les limitations de cette méthode, nous proposons un second modèle, qui ne repose lui que sur des détections de personne à travers la vidéo, et pas sur des estimations de posture. Nous formulons ce problème comme la minimisation d’un coût quadratique sous contraintes linéaires. 
Ces contraintes encodent les informations de localisation fournies par les détections de personne. Cette méthode ne nécessite pas d’information de posture ou des cartes de disparité, mais peut facilement intégrer ces signaux supplémentaires. Elle peut également être utilisée pour d’autres classes d’objets. Nous évaluons tous ces aspects et démontrons la performance de cette nouvelle méthode. / People are at the center of many computer vision tasks, such as surveillance systems or self-driving cars. They are also at the center of most visual contents, potentially providing very large datasets for training models and algorithms. While stereoscopic data has been studied for long, it is only recently that feature-length stereoscopic ("3D") movies became widely available. In this thesis, we study how we can exploit the additional information provided by 3D movies for person analysis. We first explore how to extract a notion of depth from stereo movies in the form of disparity maps. We then evaluate how person detection and human pose estimation methods perform on such data. Leveraging the relative ease of the person detection task in 3D movies, we develop a method to automatically harvest examples of persons in 3D movies and train a person detector for standard color movies. We then focus on the task of segmenting multiple people in videos. We first propose a method to segment multiple people in 3D videos by combining cues derived from pose estimates with ones derived from disparity maps. We formulate the segmentation problem as a multi-label Conditional Random Field problem, and our method integrates an occlusion model to produce a layered, multi-instance segmentation. After showing the effectiveness of this approach as well as its limitations, we propose a second model which only relies on tracks of person detections and not on pose estimates. 
We formulate our problem as a convex optimization one, with the minimization of a quadratic cost under linear equality or inequality constraints. These constraints weakly encode the localization information provided by person detections. This method does not explicitly require pose estimates or disparity maps but can integrate these additional cues. Our method can also be used for segmenting instances of other object classes from videos. We evaluate all these aspects and demonstrate the superior performance of this new method.
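The full segmentation cost couples many detection constraints; the essential step, minimizing a quadratic cost under a linear equality constraint, has a closed form via Lagrange multipliers. A toy instance (values hypothetical, unrelated to the thesis data):

```python
def min_quadratic_on_hyperplane(c, a, b):
    """argmin_x ||x - c||^2  subject to  a . x = b  (Lagrange closed form).

    Stationarity gives x = c + lam * a; substituting into the constraint
    yields lam = (b - a.c) / (a.a).
    """
    ac = sum(ai * ci for ai, ci in zip(a, c))
    aa = sum(ai * ai for ai in a)
    lam = (b - ac) / aa
    return [ci + lam * ai for ai, ci in zip(a, c)]

# Closest point to the origin on the line x + y = 2 is (1, 1).
print(min_quadratic_on_hyperplane([0.0, 0.0], [1.0, 1.0], 2.0))  # -> [1.0, 1.0]
```

With many constraints (and inequalities, as in the thesis) the same problem is handed to a generic convex QP solver rather than solved in closed form.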
139

Aplikace rozšířené reality: Měření rozměrů objektů / Application of Augmented Reality: Measurement of Object Dimensions

Karásek, Miroslav January 2019 (has links)
The goal of this diploma thesis is the design and implementation of an application for the automated measurement of objects in augmented reality. It focuses on automating the entire process, so that the user carries out the fewest possible manual actions. The proposed interface divides the measurement into several steps, in each of which it gives the user instructions for progressing to the next stage. The result is an Android application using ARCore technology. It is capable of determining the minimal bounding box of an object of general shape lying on a horizontal surface. The measurement error depends on ambient conditions and amounts to a few percent.
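The ARCore parts of the application cannot be shown here; the geometric step the abstract names, finding a minimal bounding box of points lying on a horizontal plane, can be sketched as a brute-force search over rectangle orientations in the plane (a simplified 2-D illustration, not the app's code):

```python
import math

def min_area_rect(points, steps=360):
    """Brute-force minimal-area bounding rectangle of 2-D points.

    Sweeps candidate orientations over [0, 90) degrees (rectangles repeat
    every quarter turn) and returns (area, angle_rad) of the best one.
    """
    best_area, best_angle = float("inf"), 0.0
    for k in range(steps):
        theta = (math.pi / 2) * k / steps
        c, s = math.cos(theta), math.sin(theta)
        # Project points onto the rotated axes and measure the extents
        us = [c * x + s * y for x, y in points]
        vs = [-s * x + c * y for x, y in points]
        area = (max(us) - min(us)) * (max(vs) - min(vs))
        if area < best_area:
            best_area, best_angle = area, theta
    return best_area, best_angle
```

An exact solution would use rotating calipers on the convex hull, but the discretized sweep is easy to verify and adequate at this resolution.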
140

Automatická kalibrace robotického ramene pomocí kamer/y / Automatic calibration of a robotic arm using cameras

Adámek, Daniel January 2019 (has links)
To replace a human in the task of testing touch-based embedded devices, a complex automated robotic system needs to be developed. One of the fundamental tasks is to calibrate this system automatically. In this thesis, I examined possible methods for the automatic spatial calibration of a robotic arm relative to the touch device using one or more cameras. I then presented a solution based on estimating the pose of a single camera using iterative methods such as Gauss-Newton or Levenberg-Marquardt. Finally, I evaluated the achieved accuracy and proposed a procedure for improving it.
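The thesis's camera model is not reproduced here; a minimal Gauss-Newton iteration for a single scalar parameter illustrates the update rule named in the abstract (a toy least-squares residual, not the actual calibration problem):

```python
def gauss_newton_scalar(residual, jacobian, x0, iters=10):
    """Gauss-Newton for one parameter: x <- x - (J^T r) / (J^T J)."""
    x = x0
    for _ in range(iters):
        r = residual(x)   # vector of residuals at the current estimate
        J = jacobian(x)   # vector of d(residual)/dx entries
        x = x - sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return x

# Fit the slope of y = 2 t from three noiseless samples; the linear problem
# converges in a single step.
ts, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
slope = gauss_newton_scalar(
    lambda x: [x * t - y for t, y in zip(ts, ys)],
    lambda x: ts,
    0.0,
)
print(slope)  # -> 2.0
```

Levenberg-Marquardt, also mentioned in the abstract, adds a damping term to the denominator so the step interpolates between Gauss-Newton and gradient descent.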
