11

Qualification et amélioration de la précision de systèmes de balayage laser mobiles par extraction d’arêtes / Edge-based accuracy assessment and improvement of mobile laser scanning systems

Poreba, Martyna 18 June 2014 (has links)
Au cours de ces dernières décennies, le développement de Systèmes Mobiles de Cartographie, soutenu par un progrès technologique important, est devenu plus apparent. Il a été stimulé par le besoin grandissant de collecte d'informations géographiques précises sur l'environnement. Nous considérons, au sein de cette thèse, des solutions pour l'acquisition des nuages de points mobiles de qualité topographique (précision centimétrique). Il s'agit, dans cette tâche, de mettre au point des méthodes de qualification des données, et d'en améliorer la qualité en agissant sur les erreurs systématiques par des techniques d'étalonnage et de recalage adéquates. Nous décrivons une démarche innovante d'évaluation de l'exactitude et/ou de la précision des relevés laser mobiles. Celle-ci repose sur l'extraction et la comparaison des entités linéaires de la scène urbaine. La distance moyenne calculée entre les segments appariés, étant la distance modifiée de Hausdorff, sert à noter les nuages par rapport à des références existantes. Pour l'extraction des arêtes, nous proposons une méthode de détection d'intersections entre segments plans retrouvés via un algorithme de RANSAC enrichi d'une analyse de composantes connexes. Nous envisageons également une démarche de correction de relevés laser mobiles à travers un recalage rigide fondé, lui aussi, sur les éléments linéaires. Enfin, nous étudions la pertinence des segments pour en déduire les paramètres de l'étalonnage extrinsèque du système mobile. Nous testons nos méthodes sur des données simulées et des données réelles acquises dans le cadre du projet TerraMobilita. / Over the past few decades, the development of land-based Mobile Mapping Systems (MMS), supported by significant technological progress, has become more prominent. It has been motivated by the growing need for precise geographic information about the environment. In this thesis, we consider approaches for the acquisition of mobile point clouds with topographic quality (centimeter-level accuracy). The aim is to develop techniques for data quality assessment and improvement. In particular, we eliminate the systematic errors inherent to MMS data using appropriate calibration and registration methods. We describe a novel approach to assess the accuracy and/or the precision of mobile laser point clouds. It is based on the extraction and comparison of line features detected within the urban scene. The computed average distance between corresponding pairs of line segments, taking advantage of a modified Hausdorff distance, is used to evaluate the MMS data with respect to a reference data set. For edge extraction, we propose a method which relies on the intersections between planes modelled via the RANSAC algorithm refined by an analysis of connected components. We also consider an approach to correct point clouds using a line-based rigid registration method. Finally, we study the use of line segments for estimating the boresight angles of a land-based mobile mapping system. We apply our methods to synthetic data and to real data acquired as part of the TerraMobilita project.
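To make the line-based quality metric described above concrete, the following Python sketch computes the modified Hausdorff distance between two matched edges, each sampled as a set of 3D points. The sampling density, function names, and example coordinates are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def sample_segment(p0, p1, n=50):
    # Sample n points uniformly along the 3D segment [p0, p1] (illustrative sampling step).
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def modified_hausdorff(A, B):
    # Modified Hausdorff distance (Dubuisson & Jain):
    # max of the two directed mean nearest-neighbour distances.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Example: score a mobile-scan edge against its reference counterpart (synthetic data).
mls_edge = sample_segment([0.00, 0.00, 0.0], [10.00, 0.00, 0.0])
ref_edge = sample_segment([0.02, 0.03, 0.0], [10.01, 0.02, 0.0])
print(f"modified Hausdorff distance: {modified_hausdorff(mls_edge, ref_edge):.3f} m")
```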
12

Boundary Representation Modeling from Point Clouds

Aronsson, Oskar, Nyman, Julia January 2020 (has links)
Inspections of bridges are today performed ocularly by an inspector at arm's length distance to evaluate damage and to assess the bridge's current condition. Ocular inspections often require specialized equipment to help the inspector reach all parts of the bridge. The current state of practice for bridge inspection is therefore considered time-consuming, costly, and a safety hazard for the inspector. The purpose of this thesis has been to develop a method for automated modeling of bridges from point cloud data, where the point clouds have been created through photogrammetry from a collection of images acquired with an Unmanned Aerial Vehicle (UAV). This thesis is an attempt to contribute to the long-term goal of making bridge inspections more efficient by using UAV technology. Several methods for the identification of structural components in point clouds have been evaluated. Based on this, a method has been developed to identify planar surfaces using the model-fitting method Random Sample Consensus (RANSAC). The developed method consists of a set of algorithms written in the programming language Python. The method utilizes intersection points between planes as well as the k-Nearest-Neighbor (k-NN) concept to identify the vertices of the structural elements. The method has been tested both on simulated point cloud data and on real bridges, where the images were acquired with a UAV. The results from the simulated point clouds showed that the vertices were modeled with a mean deviation of 0.13–0.34 mm compared to the true vertex coordinates. For a point cloud of a rectangular column, the algorithms identified all relevant surfaces and were able to reconstruct it with a deviation of less than 2 % in width and length. The method was also tested on two point clouds of real bridges. The algorithms were able to identify many of the relevant surfaces, but the complexity of the geometries resulted in inadequately reconstructed models. / Besiktning av broar utförs i dagsläget okulärt av en inspektör som på en armlängds avstånd bedömer skadetillståndet. Okulär besiktning kräver därmed ofta speciell utrustning för att inspektören ska kunna nå samtliga delar av bron. Detta resulterar i att det nuvarande tillvägagångssättet för brobesiktning beaktas som tidkrävande, kostsamt samt riskfyllt för inspektören. Syftet med denna uppsats var att utveckla en metod för att modellera broar på ett automatiserat sätt utifrån punktmolnsdata. Punktmolnen skapades genom fotogrammetri, utifrån en samling bilder tagna med en drönare. Uppsatsen har varit en insats för att bidra till det långsiktiga målet att effektivisera brobesiktning genom drönarteknik. Flera metoder för att identifiera konstruktionselement i punktmoln har undersökts. Baserat på detta har en metod utvecklats som identifierar plana ytor med regressionsmetoden Random Sample Consensus (RANSAC). Den utvecklade metoden består av en samling algoritmer skrivna i programmeringsspråket Python. Metoden grundar sig i att beräkna skärningspunkter mellan plan samt använder konceptet k-Nearest-Neighbor (k-NN) för att identifiera konstruktionselementens hörnpunkter. Metoden har testats på både simulerade punktmolnsdata och på punktmoln av fysiska broar, där bildinsamling har skett med hjälp av en drönare. Resultatet från de simulerade punktmolnen visade att hörnpunkterna kunde identifieras med en medelavvikelse på 0,13–0,34 mm jämfört med de faktiska hörnpunkterna. För ett punktmoln av en rektangulär pelare lyckades algoritmerna identifiera alla relevanta ytor och skapa en rekonstruerad modell med en avvikelse på mindre än 2 % med avseende på dess bredd och längd. Metoden testades även på två punktmoln av riktiga broar. Algoritmerna lyckades identifiera många av de relevanta ytorna, men geometriernas komplexitet resulterade i bristfälligt rekonstruerade modeller.
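As a rough illustration of the plane-intersection idea described above (not the authors' actual Python code), the sketch below fits planes with a basic RANSAC loop and recovers a vertex as the intersection of three planes. Thresholds, iteration counts, and names are assumptions.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=500, tol=0.01, rng=np.random.default_rng(0)):
    # Fit a plane n·x + d = 0 to an (N, 3) point cloud, keeping the hypothesis with most inliers.
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

def vertex_from_planes(planes):
    # A candidate vertex is the intersection of three planes: solve N x = -d as a 3x3 system.
    N = np.array([n for n, _ in planes])
    d = np.array([dd for _, dd in planes])
    return np.linalg.solve(N, -d)
```

In the thesis, candidate vertices are additionally checked against the point cloud using the k-NN concept; that filtering step is omitted from this sketch.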
13

Melhorando a estimação de pose com o RANSAC preemptivo generalizado e múltiplos geradores de hipóteses

Gomes Neto, Severino Paulo 27 February 2014 (has links)
Camera motion estimation is one of the fundamental problems in Computer Vision, and it may be solved by several methods. Preemptive RANSAC is one of them; in spite of its robustness and speed, it lacks flexibility with respect to the requirements of the applications and hardware platforms that use it. In this work, we propose an improvement to the structure of Preemptive RANSAC in order to overcome such limitations and make it feasible to execute on devices with heterogeneous resources (especially low-budget systems) under tighter time and accuracy constraints. We derive from Preemptive RANSAC a function called BRUMA, which is able to generalize several preemption schemes, allowing previously fixed parameters (block size and elimination factor) to be changed according to the application's constraints. We also propose the Generalized Preemptive RANSAC method, which additionally makes it possible to set the maximum number of hypotheses an algorithm may generate. The experiments performed show the superiority of our method in the expected scenarios. Moreover, additional experiments show that multi-method hypothesis generation achieves more robust results with respect to the variability of the evaluated motion directions. / A estimação de pose/movimento de câmera constitui um dos problemas fundamentais na visão computacional e pode ser resolvido por vários métodos. Dentre estes métodos se destaca o Preemptive RANSAC (RANSAC Preemptivo), que apesar da robustez e velocidade apresenta problemas de falta de flexibilidade em relação a requerimentos das aplicações e plataformas computacionais utilizadas. Neste trabalho, propomos um aperfeiçoamento da estrutura do Preemptive RANSAC para superar esta limitação e viabilizar sua execução em dispositivos com recursos variados (enfatizando os de poucas capacidades) atendendo a requisitos de tempo e precisão diversos. Derivamos do Preemptive RANSAC uma função a que chamamos BRUMA, que é capaz de generalizar vários esquemas de preempção e que permite que parâmetros anteriormente fixos (tamanho de bloco e fator de eliminação) sejam configurados de acordo com as restrições da aplicação. Propomos o método Generalized Preemptive RANSAC (RANSAC Preemptivo Generalizado) que permite ainda alterar a quantidade máxima de hipóteses a gerar. Os experimentos demonstraram superioridade de nossa proposta nos cenários esperados. Além disso, experimentos adicionais demonstram que a geração de hipóteses multimétodos produz resultados mais robustos em relação à variabilidade nos tipos de movimento executados.
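For readers unfamiliar with preemptive scoring, the sketch below shows the standard Nistér-style preemption schedule that BRUMA generalizes, with the block size and elimination factor exposed as parameters. This is an illustrative reconstruction under those assumptions, not the thesis implementation.

```python
import math

def preemption_schedule(n_hypotheses, block_size=100, elim_factor=2.0):
    # f(i): how many hypotheses survive after i observations have been scored.
    # Standard preemptive RANSAC halves the hypothesis set after each block (elim_factor = 2).
    def f(i):
        return max(1, math.floor(n_hypotheses * elim_factor ** -(i // block_size)))
    return f

f = preemption_schedule(500, block_size=100, elim_factor=2.0)
print([f(i) for i in range(0, 601, 100)])   # [500, 250, 125, 62, 31, 15, 7]
```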
14

LiDAR-Equipped Wireless Sensor Network for Speed Detection on Classification Yards / LiDAR-utrustat sensornätverk för hastighetsmätning på rangerbangårdar

Olsson, Isak, Lindgren, André January 2021 (has links)
Varje dag kopplas tusentals godsvagnar om på de olika rangerbangårdarna i Sverige. För att kunna automatiskt bromsa vagnarna tillräckligt mycket är det nödvändigt att veta deras hastigheter. En teknik som har blivit populär på sistone är Light Detection and Ranging (LiDAR) som använder ljus för att mäta avstånd till objekt. Den här rapporten diskuterar design- och implementationsprocessen av ett trådlöst sensornätverk bestående av en LiDAR-utrustad sensornod. Designprocessen gav en insikt i hur LiDAR-sensorer bör placeras för att täcka en så stor yta som möjligt. Sensornoden var programmerad att bestämma avståndet av objekt genom att använda Random Sample Consensus (RANSAC) för att ta bort outliers och sen linjär regression på de inliers som detekterats. Implementationen utvärderades genom att bygga ett litet spår med en låda som kunde glida fram och tillbaka över spåret. LiDAR-sensorn placerades med en vinkel vid sidan om spåret. Resultaten visade att implementationen både kunde detektera objekt på spåret och också hastigheten av objekten. En simulation gjordes också med hjälp av en 3D-modell av en tågvagn för att se hur väl algoritmen hanterade ojämna ytor. LiDAR-sensorn i simuleringen hade en strålavvikelse på 0°. 30% av de simulerade mätvärdena gjordes om till outliers för att replikera dåliga väderförhållanden. Resultaten visade att RANSAC effektivt kunde ta bort outliers men att de ojämna ytorna på tåget ledde till felaktiga hastighetsmätningar. En slutsats var att en sensor med en divergerande stråle möjligtvis skulle leda till bättre resultat. Framtida arbete inkluderar att utvärdera implementationen på en riktig bangård, hitta optimala parametrar för algoritmen samt evaluera algoritmer som kan filtrera data från ojämn geometri. / Every day, thousands of train wagons are coupled on the multiple classification yards in Sweden. To be able to automatically brake the wagons a sufficient amount, it is a necessity to determine the speed of the wagons. A technology that has been on the rise recently is Light Detection and Ranging (LiDAR) that emits light to determine the distance to objects. This report discusses the design and implementation of a wireless sensor network consisting of a LiDAR-equipped sensor node. The design process provided insight into how LiDAR sensors may be placed for maximum utilization. The sensor node was programmed to determine the speed of an object by first using Random Sample Consensus (RANSAC) for outlier removal and then linear regression on the inliers. The implementation was evaluated by building a small track with an object sliding over it and placing the sensor node at an angle to the side of the track. The results showed that the implementation could both detect objects on the track and also track the speed of the objects. A simulation was also made using a 3D model of a wagon to see how the algorithm performs on non-smooth surfaces. The simulated LiDAR sensor had a beam divergence of 0°. 30% of the simulated measurements were turned into outliers to replicate bad weather conditions. The results showed that RANSAC was efficient at removing the outliers but that the rough surface of the wagon resulted in some incorrect speed measurements. A conclusion was made that a sensor with some beam divergence could be beneficial. Future work includes testing the implementation in real-world scenarios, finding optimal parameters for the proposed algorithm, and evaluating algorithms that can filter rough geometry data.
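The speed-from-range idea above (RANSAC outlier removal followed by a linear fit on the inliers, with the slope giving the speed) can be sketched as follows. The thresholds, iteration count, and synthetic data are assumptions for illustration only.

```python
import numpy as np

def ransac_speed(t, d, n_iters=200, tol=0.05, rng=np.random.default_rng(1)):
    # Robustly fit d = v*t + d0 to range measurements and return the speed v (slope).
    best_inliers = np.zeros(len(t), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(t), 2, replace=False)
        if t[i] == t[j]:
            continue
        v = (d[j] - d[i]) / (t[j] - t[i])
        d0 = d[i] - v * t[i]
        inliers = np.abs(d - (v * t + d0)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    v, d0 = np.polyfit(t[best_inliers], d[best_inliers], 1)  # least-squares refit on inliers
    return v

# Synthetic example: object approaching at -2 m/s, with 30 % gross outliers added.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
d = 20.0 - 2.0 * t + rng.normal(0, 0.01, 100)
outliers = rng.random(100) < 0.3
d[outliers] += rng.uniform(-5, 5, outliers.sum())
print(f"estimated speed: {ransac_speed(t, d):.2f} m/s")
```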
15

Robust Registration of ToF and RGB-D Camera Point Clouds / Robust registrering av punktmoln från ToF och RGB-D kamera

Chen, Shuo January 2021 (has links)
This thesis presents a comparison of the M-estimator, BLAVE, and RANSAC methods for point cloud registration. The comparison is performed empirically by applying all the estimators to simulated data with added noise and gross errors, to ToF data, and to RGB-D data. The RANSAC method is the fastest and most robust estimator in the comparison. The 2D feature extraction methods Harris corner detector, SIFT, and SURF, and the 3D extraction method ISS, are compared on real-world scene data as well. The SIFT algorithm is shown to extract the most feature points, with the most accurate features, among all the extraction methods on the different data sets. In the end, the ICP algorithm is used to refine the registration result based on the estimate of the initial transform. / Denna avhandling presenterar en jämförelse av tre metoder för registrering av punktmoln: M-estimator, BLAVE och RANSAC. Jämförelsen utfördes empiriskt genom att använda alla metoder på simulerad data med brus och grova fel samt på ToF- och RGB-D-data. Tester visade att RANSAC-metoden är den snabbaste och mest robusta metoden. Vi har även jämfört tre metoder för extrahering av features från 2D-bilder: Harris hörndetektor, SIFT och SURF, samt en 3D-extraheringsmetod, ISS. Denna jämförelse utfördes med hjälp av verkliga data. SIFT-algoritmen har visat sig fungera bäst bland alla extraheringsmetoder: den har extraherat flest features med högst precision. I slutändan användes ICP-algoritmen för att förfina registreringsresultatet baserat på uppskattningen av initial transformering.
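One building block shared by the compared registration estimators is the closed-form rigid alignment of corresponding 3D points; a minimal SVD-based (Kabsch/Umeyama-style) sketch is given below. In a RANSAC registration loop this would be computed from minimal samples and scored by inlier count. The code is an illustrative sketch, not taken from the thesis.

```python
import numpy as np

def rigid_transform(P, Q):
    # Least-squares R, t with R @ P[i] + t ≈ Q[i], via the SVD of the cross-covariance matrix.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = cQ - R @ cP
    return R, t
```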
16

Visual navigation for mobile robots using the Bag-of-Words algorithm

Botterill, Tom January 2011 (has links)
Robust long-term positioning for autonomous mobile robots is essential for many applications. In many environments this task is challenging, as errors accumulate in the robot’s position estimate over time. The robot must also build a map so that these errors can be corrected when mapped regions are re-visited; this is known as Simultaneous Localisation and Mapping, or SLAM. Successful SLAM schemes have been demonstrated which accurately map tracks of tens of kilometres; however, these schemes rely on expensive sensors such as laser scanners and inertial measurement units. A more attractive, low-cost sensor is a digital camera, which captures images that can be used to recognise where the robot is, and to incrementally position the robot as it moves. SLAM using a single camera is challenging however, and many contemporary schemes suffer complete failure in dynamic or featureless environments, or during erratic camera motion. An additional problem, known as scale drift, is that cameras do not directly measure the scale of the environment, and errors in relative scale accumulate over time, introducing errors into the robot’s speed and position estimates. Key to a successful visual SLAM system is the ability to continue operation despite these difficulties, and to recover from positioning failure when it occurs. This thesis describes the development of such a scheme, which is known as BoWSLAM. BoWSLAM enables a robot to reliably navigate and map previously unknown environments, in real-time, using only a single camera. In order to position a camera in visually challenging environments, BoWSLAM combines contemporary visual SLAM techniques with four new components. Firstly, a new Bag-of-Words (BoW) scheme is developed, which allows a robot to recognise places it has visited previously, without any prior knowledge of its environment. This BoW scheme is also used to select the best set of frames to reconstruct positions from, and to find efficient wide-baseline correspondences between many pairs of frames. Secondly, BaySAC, a new outlier-robust relative pose estimation scheme based on the popular RANSAC framework, is developed. BaySAC allows the efficient computation of multiple position hypotheses for each frame. Thirdly, a graph-based representation of these position hypotheses is proposed, which enables the selection of only reliable position estimates in the presence of gross outliers. Fourthly, as the robot explores, objects in the world are recognised and measured. These measurements enable scale drift to be corrected. BoWSLAM is demonstrated mapping a 25-minute, 2.5 km trajectory through a challenging and dynamic outdoor environment in real-time, and without any other sensor input; considerably further than previous single-camera SLAM schemes.
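As a toy illustration of the bag-of-words place-recognition step (not BoWSLAM's actual scheme), the sketch below builds visual-word histograms and scores image pairs with TF-IDF-weighted cosine similarity. Vocabulary construction is assumed to have already quantised descriptors into word indices, and all names are illustrative.

```python
import numpy as np

def bow_histograms(word_ids_per_image, vocab_size):
    # One visual-word histogram per image; word_ids_per_image[i] holds quantised descriptor indices.
    H = np.zeros((len(word_ids_per_image), vocab_size))
    for i, words in enumerate(word_ids_per_image):
        np.add.at(H[i], words, 1.0)
    return H

def tfidf_similarity(H):
    # TF-IDF weighting followed by cosine similarity; high off-diagonal scores suggest revisited places.
    n_images = H.shape[0]
    df = (H > 0).sum(axis=0) + 1e-9
    W = H * np.log(n_images / df)
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return W @ W.T
```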
17

Couplage de données laser aéroporté et photogrammétriques pour l'analyse de scènes tridimensionnelles / Coupling airborne laser and photogrammetric data for the analysis of three-dimensional scenes

Bretar, Frédéric 06 1900 (has links) (PDF)
L'interprétation de scènes tridimensionnelles dans un contexte cartographique met en jeu de nombreuses techniques, des plus traditionnelles, comme la photogrammétrie à partir d'images aériennes, jusqu'aux plus récentes, comme l'altimétrie laser. Si la photogrammétrie intègre les traitements géométriques liés à l'orientation des images, ainsi que le calcul des altitudes associées à chaque pixel par des processus de corrélation, les systèmes laser aéroportés (capteurs actifs) intègrent un mécanisme de géoréférencement direct (couplage Inertie-GPS) des impulsions lumineuses émises, fournissant ainsi une représentation de la topographie sous la forme d'un nuage de points 3D. Les objectifs de cette thèse étaient dans un premier temps de valoriser de manière autonome les données 3D issues de la technologie laser, compte tenu de leur caractère novateur, puis d'étudier divers aspects de l'intégration de ces données à un savoir-faire photogrammétrique. Nous nous sommes donc intéressés dans une première partie à l'extraction automatique de points laser appartenant au sol (paysages urbains et ruraux). L'analyse de ces points sol nous a menés à la recherche d'une modélisation dense du terrain sous la forme d'une grille régulière d'altitude à travers la définition d'un modèle bayésien de régularisation de surface. Les points appartenant à la composante sursol, dans le cadre de l'étude du milieu urbain pour la modélisation du bâti, ont été analysés à la lumière d'un algorithme de recherche de primitives planes (facettes de toits) basé sur une modification de l'estimateur robuste RANSAC. La seconde partie concerne l'étude effective du couplage des techniques laser et photogrammétriques. Il s'agit d'utiliser conjointement la grande précision des mesures laser (<5 cm pour la composante altimétrique sous certaines conditions) avec d'une part la radiométrie issue des images aériennes, mais aussi avec les images d'altitudes correspondantes. La confrontation des géométries issues de ces deux systèmes fait apparaître des décalages tridimensionnels non linéaires possiblement liés aux dérives temporelles des mesures inertielles. Un algorithme de recalage adapté à la géométrie d'acquisition des bandes laser a donc été mis en place, assurant a posteriori une cohérence des géométries aussi bien 2D que 3D. À partir d'une mise en correspondance locale des surfaces, un mécanisme de corrections par fenêtres glissantes simule les dépendances temporelles des variations, sans imposer de modèle global de déformations. Enfin, nous avons exprimé la complémentarité des deux systèmes à travers l'extraction de facettes 3D de bâtiments par un mécanisme de segmentation hiérarchique d'images basé sur la définition d'une énergie d'agrégation de régions élémentaires dépendant à la fois des informations présentes dans l'image, des points laser et de la classification préalablement effectuée. / The interpretation of three-dimensional scenes in a cartographic context involves many techniques, from the most traditional, such as photogrammetry from aerial images, to the most recent, such as laser altimetry. While photogrammetry incorporates the geometric processing related to image orientation, as well as the computation of the elevation associated with each pixel through correlation processes, airborne laser systems (active sensors) incorporate a direct georeferencing mechanism (inertial/GPS coupling) of the emitted light pulses, thus providing a representation of the topography in the form of a 3D point cloud. The objectives of this thesis were, first, to exploit the 3D data produced by laser technology on its own, given its novelty, and then to study various aspects of integrating these data with photogrammetric know-how. In the first part, we therefore focused on the automatic extraction of laser points belonging to the ground (urban and rural landscapes). The analysis of these ground points led us to seek a dense terrain model in the form of a regular elevation grid, through the definition of a Bayesian surface regularization model. The points belonging to the above-ground component, in the context of studying urban environments for building modelling, were analysed using an algorithm that searches for planar primitives (roof facets), based on a modification of the robust RANSAC estimator. The second part concerns the actual study of coupling laser and photogrammetric techniques. The aim is to jointly use the high accuracy of laser measurements (<5 cm for the elevation component under certain conditions) with, on the one hand, the radiometry derived from aerial images, and on the other hand the corresponding elevation images. Comparing the geometries produced by the two systems reveals non-linear three-dimensional offsets, possibly related to temporal drifts in the inertial measurements. A registration algorithm adapted to the acquisition geometry of the laser strips was therefore developed, ensuring a posteriori consistency of both the 2D and 3D geometries. Starting from a local matching of the surfaces, a sliding-window correction mechanism simulates the temporal dependencies of the variations, without imposing a global deformation model. Finally, we demonstrated the complementarity of the two systems through the extraction of 3D building facets using a hierarchical image segmentation mechanism, based on the definition of an aggregation energy for elementary regions that depends simultaneously on the information present in the image, the laser points, and the previously computed classification.
18

Camera Based Navigation : Matching between Sensor reference and Video image

Olgemar, Markus January 2008 (has links)
an Internal Navigational System and a Global Navigational Satellite System (GNSS). In navigational warfare the GNSS can be jammed; therefore, a third navigation system is needed. The system that has been tried in this thesis is camera-based navigation, in which the position is determined from a video camera and a sensor reference. This thesis addresses the matching between the sensor reference and the video image.

Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation creates a correlation matrix. The other method uses point correspondences between the images to determine a homography between them, and obtains a position through the homography. The more point correspondences there are, the better the position determination will be.

The results have been quite good. The methods obtained the correct position when the Euler angles of the UAV were known. Normalized cross correlation was the best of the tested methods.
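A plain (unoptimised) normalized cross correlation between a sensor-reference patch and a video frame can be sketched as follows; in practice an FFT-based or library implementation would be used, and all names here are illustrative.

```python
import numpy as np

def ncc_map(image, template):
    # Dense zero-mean normalized cross correlation; the argmax gives the best-matching offset.
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            out[y, x] = (p * t).mean()
    return out

# Usage:
#   scores = ncc_map(frame, reference_patch)
#   row, col = np.unravel_index(scores.argmax(), scores.shape)
```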
19

Visual Servoing In Semi-Structured Outdoor Environments

Rosenquist, Calle, Evesson, Andreas January 2007 (has links)
The field of autonomous vehicle navigation and localization is a highly active research topic. The aim of this thesis is to evaluate the feasibility of outdoor visual navigation in a semi-structured environment. The goal is to develop a visual navigation system for an autonomous golf-ball collection vehicle operating on driving ranges.

The image feature extractors SIFT and PCA-SIFT were evaluated on an image database consisting of images acquired from 19 outdoor locations over a period of several weeks to cover different environmental conditions. The results from these tests show that SIFT-type feature extractors are able to find and match image features with high accuracy. The results also show that this can be improved further by combining a lower nearest-neighbour threshold with an outlier rejection method, allowing more matches and a higher ratio of correct matches. Outliers were found and rejected by fitting the data to a homography model with the RANSAC robust estimator algorithm.

A simulator was developed to evaluate the suggested system with respect to pixel noise from illumination changes, weather, and feature position accuracy, as well as the distance to features, path shapes, and the visual servoing target image (milestone) interval. The system was evaluated on a total of 3 paths, 40 test combinations, and 137 km driven. The results show that with the relatively simple visual servoing navigation system it is possible to use mono-vision as the sole sensor and navigate semi-structured outdoor environments such as driving ranges.
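The ratio-test-plus-RANSAC matching pipeline described above is commonly written with OpenCV roughly as follows. Note that SIFT availability depends on the OpenCV build, and the parameter values are assumptions rather than those used in the thesis.

```python
import cv2
import numpy as np

def match_and_filter(img1, img2, ratio=0.7, ransac_thresh=5.0):
    # SIFT features, Lowe's nearest-neighbour ratio test, then RANSAC homography fitting
    # to reject outlier correspondences (mask marks the surviving inliers).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H, int(mask.sum()), len(good)
```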
20

High-speed View Matching using Region Descriptors / Vymatchning i realtid med region-deskriptorer

Lind, Anders January 2010 (has links)
This thesis treats topics within the area of object recognition. A real-time view matching method has been developed to compute the transformation between two different images of the same scene. This method uses a color-based region detector called MSCR and affine transformations of these regions to create affine-invariant patches that are used as input to the SIFT algorithm. A parallel method to compute the SIFT descriptor has been created with relaxed constraints, so that the descriptor size and the number of histogram bins can be adjusted. Additionally, a matching step to deduce correspondences and a parallel RANSAC method have been created to estimate the transformation between the images from these descriptors. To achieve real-time performance, the implementation has been targeted to use the parallel nature of the GPU, with CUDA as the programming language. Focus has been put on the architecture of the GPU to find the best way to parallelize the different processing steps. CUDA has also been combined with OpenGL to be able to use the hardware-accelerated anisotropic sampling method for affine transformations of regions. Parts of the implementation can also be used individually, either from Matlab or by using the provided C++ library directly. The method was also evaluated in terms of accuracy and speed. It was shown that our algorithm has similar or better accuracy at finding correspondences than SIFT when the 3D geometry changes are large, but we get a slightly worse result on images with flat surfaces.
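The affine-invariant patch construction mentioned above can be illustrated by the whitening transform that maps an elliptical region (described by its centroid and second-moment matrix) to a canonical circular patch. The scale convention and names below are assumptions, and the real pipeline resamples the patch on the GPU rather than in NumPy.

```python
import numpy as np

def region_to_patch_affine(cov, centroid, patch_radius=20.0):
    # 2x3 affine matrix mapping region (ellipse) coordinates to canonical patch coordinates.
    w, V = np.linalg.eigh(cov)                      # ellipse axes from the second-moment matrix
    whiten = V @ np.diag(1.0 / np.sqrt(w)) @ V.T    # maps the ellipse to a unit circle
    A = patch_radius * whiten
    t = np.array([patch_radius, patch_radius]) - A @ np.asarray(centroid, float)
    return np.hstack([A, t[:, None]])               # e.g. cv2.warpAffine(img, M, (2*r, 2*r))
```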
