11 |
3D mapování vnitřního prostředí senzorem Microsoft Kinect / 3D indoor mapping using Microsoft Kinect
Pilch, Petr, January 2013 (has links)
This work is focused on creating 3D maps of indoor environments using the Microsoft Kinect sensor. The first part describes the Microsoft Kinect sensor, the methods for acquisition and processing of depth data, and their registration using different algorithms. The second part shows the application of these algorithms to map registration and presents the final 3D maps of the indoor environment.
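The entries in this listing repeatedly rely on the Iterative Closest Point algorithm to register depth data such as Kinect point clouds. As a point of reference, the following is a minimal point-to-point ICP sketch in Python (NumPy and SciPy assumed available); all function and variable names are illustrative and are not taken from any of the cited theses.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping corresponded points P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align 'source' (N,3) to 'target' (M,3) by iterating NN correspondence + Kabsch."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        dist, idx = tree.query(src)            # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:          # stop once the mean residual settles
            break
        prev_err = err
    return R_total, t_total
```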
12 |
Autonomous Mapping and Exploration of Dynamic Indoor Environments / Autonom kartläggning och utforskning av dynamiska inomhusmiljöer
Fåk, Joel; Wilkinson, Tomas, January 2013 (has links)
This thesis describes all the parts needed to build a complete system for autonomous indoor mapping in 3D. The robotic platform is a two-wheeled Segway operating in a planar environment. Together with wheel odometers, an inertial measurement unit (IMU), two Microsoft Kinects and a laptop, it forms the backbone of the system, which can be divided into three parts. The first is localization and mapping, fundamentally a SLAM (simultaneous localization and mapping) algorithm implemented using the registration technique Iterative Closest Point (ICP). Besides being in 3D, the map is also designed to handle dynamic scenes, something absent from the standard SLAM formulation. The second part is planning, which is twofold: path planning, finding a path from the current position to a destination, and target planning, determining where to go next given the current state of the map and the robot. The third part comprises the control and collision systems, which, while not a focus of this work, are necessary for a fully autonomous system. Contributions made by this thesis include: the 3D map framework Octomap is extended to handle the mapping of dynamic scenes; a new method for target planning, based on image processing, is presented; and a calibration procedure for the robot is derived that gives a full six-degree-of-freedom pose for each Kinect. Results show that the calibration procedure produces an accurate pose for each Kinect, which is crucial for a functioning system. The dynamic mapping is shown to outperform the standard occupancy grid in fundamental situations that arise when mapping dynamic scenes. Additionally, the results indicate that the target planning algorithm provides a fast and easy way to plan new target destinations. Finally, the entire system's autonomous mapping capabilities are evaluated, producing promising results, while also highlighting problems that limit performance, such as the inaccuracy and short range of the Kinects and noise added and reinforced by the multiple subsystems.
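The thesis extends Octomap for dynamic scenes; Octomap itself is a C++ library and its API is not reproduced here. Purely as an illustration of the underlying idea, the sketch below shows a per-voxel log-odds occupancy update in which free-space observations decay previously occupied cells, so that moving objects eventually disappear from the map. All names and parameter values are the editor's assumptions, not the thesis implementation.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (illustrative values)
L_MIN, L_MAX = -2.0, 3.5        # clamping bounds keep cells responsive to change

class DynamicOccupancyGrid:
    """Toy 3D log-odds grid; clamping lets dynamic objects be 'forgotten'."""
    def __init__(self, shape):
        self.logodds = np.zeros(shape)

    def update(self, occupied_cells, free_cells):
        # cells observed as occupied in this scan
        for c in occupied_cells:
            self.logodds[c] = min(self.logodds[c] + L_OCC, L_MAX)
        # cells traversed by sensor rays (observed free): decay towards free
        for c in free_cells:
            self.logodds[c] = max(self.logodds[c] + L_FREE, L_MIN)

    def occupancy(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))   # probability per cell
```

A cell occupied by a temporarily parked object is driven back below the occupancy threshold once enough rays pass through it after the object moves, which is the behaviour a map of a dynamic scene requires.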
13 |
Feature Extraction Based Iterative Closest Point Registration for Large Scale Aerial LiDAR Point Clouds
Graehling, Quinn R., January 2020 (has links)
No description available.
14 |
Point Cloud Registration using both Machine Learning and Non-learning Methods: with Data from a Photon-counting LIDAR Sensor
Boström, Maja, January 2023 (has links)
Point cloud registration with data measured by a photon-counting LIDAR sensor from a large distance (500 m - 1.5 km) is an expanding field. Data measured from far away is sparse and has low detail, which can make the registration process difficult, and registering this type of data is fairly unexplored. In recent years, machine learning for point cloud registration has been explored with promising results. This work compares the performance of the point cloud registration algorithm Iterative Closest Point with state-of-the-art algorithms on data from a photon-counting LIDAR sensor. The data was provided by the Swedish Defence Research Agency (FOI). The chosen state-of-the-art algorithms were the non-learning-based Fast Global Registration and the learning-based D3Feat and SpinNet. The results indicate that all state-of-the-art algorithms achieve a substantial increase in performance compared to the Iterative Closest Point method. All the state-of-the-art algorithms utilize their calculated features to obtain better correspondence points and can therefore achieve higher performance in point cloud registration. D3Feat performed point cloud registration with the highest accuracy of all the state-of-the-art algorithms and ICP.
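The abstract's observation that feature-based methods win by producing better correspondences can be made concrete with a small sketch: given per-point descriptors (from FPFH, D3Feat, SpinNet or similar), correspondences are found by nearest-neighbour search in descriptor space rather than in 3D space. The snippet below is an editor's illustration in plain NumPy/SciPy; it assumes descriptors are already computed and does not reproduce any of the named methods.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_by_descriptors(desc_src, desc_tgt, ratio=0.9):
    """Mutual nearest-neighbour matching in descriptor space with a ratio test."""
    tgt_tree = cKDTree(desc_tgt)
    d, fwd = tgt_tree.query(desc_src, k=2)         # two nearest target descriptors per source point
    good = d[:, 0] < ratio * d[:, 1]               # Lowe-style ratio test rejects ambiguous matches
    bwd = cKDTree(desc_src).query(desc_tgt, k=1)[1]
    src_idx = np.arange(len(desc_src))
    mutual = bwd[fwd[:, 0]] == src_idx             # keep only mutually consistent pairs
    keep = good & mutual
    return src_idx[keep], fwd[keep, 0]
```

The resulting index pairs can be fed to the same rigid-transform estimation shown in the ICP sketch earlier in this listing, or wrapped in a RANSAC loop for robustness against remaining outliers.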
15 |
Multi-view point cloud fusion for LiDAR based cooperative environment detection
Jähn, Benjamin; Lindner, Philipp; Wanielik, Gerd, 11 November 2015 (has links) (PDF)
A key component for automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view. In urban areas a lot of objects occlude important areas of interest. The information captured by another sensor from another perspective could resolve such occluded situations. Furthermore, the capabilities to detect and classify various objects in the surroundings can be improved by taking multiple views into account. In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g. satellite based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, a registration based approach is used in this work which aligns the captured environment data of two sensors from different positions. Thus their relative pose estimate obtained by traditional methods is improved and the data can be fused. To support this we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both, taking occlusion and shadowing effects into account. Afterwards a registration method, based on the iterative closest point (ICP) algorithm, is applied to that data in order to get an accurate alignment. The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two dimensional position and heading estimation is sufficient to initialize a successful 3-D registration process. Furthermore it is shown which initial spatial alignment is necessary to obtain suitable registration results.
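The finding that a 2D position and heading estimate suffices to initialize the 3D registration can be illustrated with a few lines of code: an (x, y, yaw) estimate is lifted to a 4x4 homogeneous transform and applied to one point cloud before ICP refinement. The snippet below is a hedged sketch, not code from the paper; names are illustrative.

```python
import numpy as np

def initial_guess_from_2d(x, y, yaw):
    """Lift a planar pose estimate (metres, radians) to a 4x4 SE(3) transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])   # rotation about the vertical axis only
    T[0, 3], T[1, 3] = x, y                   # planar offset; z assumed shared
    return T

def apply_transform(T, points):
    """Apply a homogeneous transform to an (N, 3) point cloud."""
    return points @ T[:3, :3].T + T[:3, 3]

# usage: cloud_b_init = apply_transform(initial_guess_from_2d(dx, dy, dyaw), cloud_b)
# followed by ICP between cloud_a and cloud_b_init for the fine 3D alignment
```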
16 |
Non-parametric workspace modelling for mobile robots using push broom lasers
Smith, Michael, January 2011 (has links)
This thesis is about the intelligent compression of large 3D point cloud datasets. The non-parametric method that we describe simultaneously generates a continuous representation of the workspace surfaces from discrete laser samples and decimates the dataset, retaining only locally salient samples. Our framework attains decimation factors in excess of two orders of magnitude without significant degradation in fidelity. The work presented here has a specific focus on gathering and processing laser measurements taken from a moving platform in outdoor workspaces. We introduce a somewhat unusual parameterisation of the problem and look to Gaussian Processes as the fundamental machinery in our processing pipeline. Our system compresses laser data in a fashion that is naturally sympathetic to the underlying structure and complexity of the workspace. In geometrically complex areas, compression is lower than in geometrically bland areas. We focus on this property in detail and it leads us well beyond a simple application of non-parametric techniques. Indeed, towards the end of the thesis we develop a non-stationary GP framework whereby our regression model adapts to the local workspace complexity. Throughout we construct our algorithms so that they may be efficiently implemented. In addition, we present a detailed analysis of the proposed system and investigate model parameters, metric errors and data compression rates. Finally, we note that this work is predicated on a substantial amount of robotics engineering which has allowed us to produce a high-quality, peer-reviewed dataset - the first of its kind.
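As an illustration of how a Gaussian Process regression model can drive decimation of laser data, the sketch below fits a GP to a thinned subset of samples and keeps only those original points the model cannot predict well, so geometrically complex regions retain more points than bland ones. It uses scikit-learn's GaussianProcessRegressor and is an editor's sketch of the general idea, not the thesis's non-stationary framework.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_decimate(xy, z, keep_every=20, err_thresh=0.05):
    """Keep a coarse subset plus any point the GP fails to predict within err_thresh.

    xy: (N, 2) planar coordinates, z: (N,) range/height values.
    Returns indices of retained samples.
    """
    anchor = np.arange(0, len(xy), keep_every)           # coarse, regularly thinned subset
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(xy[anchor], z[anchor])
    pred = gp.predict(xy)                                 # reconstruct the full sweep
    residual = np.abs(pred - z)
    salient = np.where(residual > err_thresh)[0]          # poorly explained = locally salient
    return np.union1d(anchor, salient)
```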
17 |
A ROBUST RGB-D SLAM SYSTEM FOR 3D ENVIRONMENT WITH PLANAR SURFACES
Su, Po-Chang, 01 January 2013 (links)
Simultaneous localization and mapping (SLAM) is the technique of constructing a 3D map of an unknown environment. With the increasing popularity of RGB-depth (RGB-D) sensors such as the Microsoft Kinect, there has been much research on capturing and reconstructing 3D environments using a movable RGB-D sensor. The key process behind these kinds of SLAM systems is the iterative closest point (ICP) algorithm, an iterative algorithm that can estimate the rigid movement of the camera based on the captured 3D point clouds. While ICP is a well-studied algorithm, it is problematic when used to scan large planar regions such as wall surfaces in a room: the lack of depth variation on planar surfaces makes the global alignment an ill-conditioned problem. In this thesis, we present a novel approach for registering 3D point clouds by combining both color and depth information. Instead of directly searching for point correspondences among 3D data, the proposed method first extracts features from the RGB images, and then back-projects the features to the 3D space to identify more reliable correspondences. These color correspondences form the initial input to the ICP procedure, which then proceeds to refine the alignment. Experimental results show that our proposed approach can achieve better accuracy than existing SLAM systems in reconstructing indoor environments with large planar surfaces.
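The core idea, extracting image features and back-projecting them into 3D to seed ICP, can be sketched using OpenCV ORB features and the pinhole camera model. Camera intrinsics (fx, fy, cx, cy) and aligned depth images are assumed given; this is an illustrative reconstruction of the general approach, not the thesis code.

```python
import numpy as np
import cv2

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth in metres to a 3D point."""
    z = depth[int(round(v)), int(round(u))]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def color_seeded_correspondences(rgb1, depth1, rgb2, depth2, fx, fy, cx, cy):
    """Match ORB features between two RGB frames and lift the matches to 3D pairs."""
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(rgb1, None)
    k2, d2 = orb.detectAndCompute(rgb2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    P, Q = [], []
    for m in matches:
        p = backproject(*k1[m.queryIdx].pt, depth1, fx, fy, cx, cy)
        q = backproject(*k2[m.trainIdx].pt, depth2, fx, fy, cx, cy)
        if p[2] > 0 and q[2] > 0:          # discard matches without valid depth
            P.append(p)
            Q.append(q)
    return np.array(P), np.array(Q)        # 3D correspondences used to initialise ICP
```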
18 |
Objektų Pozicijos ir Orientacijos Nustatymo Metodų Mobiliam Robotui Efektyvumo Tyrimas / Efficiency Analysis of Object Position and Orientation Detection Algorithms for Mobile Robot
Uktveris, Tomas, 18 August 2014 (has links)
This work presents a performance analysis of state-of-the-art computer vision algorithms for object detection and pose estimation on a mobile robot. An initial survey of the field showed that many algorithms for this problem exist, but a combined comparison of their efficiency was lacking. To fill this gap, a software and hardware platform was implemented and the methods most suitable for a robot system were evaluated. The analysis covers detection accuracy and runtime performance using simple and robust techniques. Object pose is estimated from Kinect depth data using the ICP algorithm. A comparison of two depth-sensing setups showed that the Kinect achieves the best runtime performance and is 2-5 times more accurate than a conventional stereo camera rig. Object detection experiments showed a maximum detection accuracy of about 90% and a processing speed of 15 frames per second on standard VGA 640x480 images. The ICP-based position and orientation estimation experiment showed an average absolute position error of about 3.4 cm and an orientation error of about 30 degrees, at a runtime speed of about 2 frames per second. Further optimization or reduction of the data volume is necessary to achieve better performance on a resource-limited mobile robot platform. The robot hardware system was also successfully implemented and tested for object position and orientation detection.
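The reported errors (about 3.4 cm in position and 30 degrees in orientation) correspond to standard pose-error metrics: the Euclidean distance between estimated and ground-truth translations, and the rotation angle of the relative rotation matrix. A minimal sketch of these two metrics follows; the function names are the editor's.

```python
import numpy as np

def translation_error(t_est, t_gt):
    """Absolute position error: Euclidean distance between translations (same units)."""
    return np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle of the relative rotation R_gt^T R_est, in degrees."""
    R_rel = R_gt.T @ R_est
    cos_angle = (np.trace(R_rel) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)   # guard against numerical drift
    return np.degrees(np.arccos(cos_angle))
```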
19 |
Transformer-Based Point Cloud Registration with a Photon-Counting LiDAR Sensor
Johansson, Josef, January 2024 (links)
Point cloud registration is an extensively studied field in computer vision, featuring a variety of existing methods, all aimed at the common objective of determining a transformation that aligns two point clouds. Methods like Iterative Closest Point (ICP) and Fast Global Registration (FGR) have been shown to work well for many years, but recent work has explored different learning-based approaches with promising results. This work compares the performance of two learning-based methods, GeoTransformer and RegFormer, against three baseline methods: ICP point-to-point, ICP point-to-plane, and FGR. The comparison was conducted on data provided by the Swedish Defence Research Agency (FOI), captured with a photon-counting LiDAR sensor. Findings suggest that while ICP point-to-point and ICP point-to-plane exhibit solid performance, GeoTransformer demonstrates the potential for superior outcomes. Additionally, RegFormer and FGR perform worse than the ICP variants and GeoTransformer.
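The distinction between the two ICP baselines lies in the error being minimized: point-to-point uses distances between matched points, while point-to-plane projects the residual onto the target surface normal and solves a small linearized least-squares problem per iteration. Below is a hedged sketch of one point-to-plane update under a small-angle approximation; it assumes correspondences and target normals are already available and is not drawn from the thesis.

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One linearized point-to-plane update.

    src, tgt: (N, 3) corresponded points; normals: (N, 3) unit normals at tgt.
    Minimizes sum(((R @ s + t - q) . n)^2) with R approximated by small angles.
    Returns a 4x4 incremental transform.
    """
    c = np.cross(src, normals)                  # rows s_i x n_i: rotation part of the Jacobian
    A = np.hstack([c, normals])                 # (N, 6) system matrix [cross | normal]
    b = np.einsum('ij,ij->i', normals, tgt - src)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # x = [rx, ry, rz, tx, ty, tz]

    rx, ry, rz = x[:3]
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])             # small-angle rotation approximation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, x[3:]
    return T
```

In a full implementation the rotation would be re-orthonormalized (e.g. via SVD) and the step applied iteratively with fresh correspondences.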
20 |
Robust Registration of Measured Point Set for Computer-Aided Inspection
Ravishankar, S, January 2013 (links) (PDF)
This thesis addresses the problem of registering one point set with respect to another. The problem arises in the context of using CMMs/scanners to inspect objects, especially those with freeform surfaces. The tolerance verification process requires the comparison of measured points with the nominal geometry, which entails placing the measured point set in the same reference frame as the nominal model. This is referred to as the registration or localization problem. In its most general form, the tolerance verification task involves registering multiple point sets, corresponding to a multi-step scan of an object, with respect to the nominal CAD model. The problem is addressed in three phases.
This thesis presents a novel approach to automated inspection by matching point sets based on the Iterative Closest Point (ICP) algorithm. The Modified ICP (MICP) algorithm presented in the thesis improves upon existing methods through the use of a localized region-based triangulation technique to obtain correspondences for all the inspection points, and achieves a dramatic reduction in computational effort. The use of point sets to represent the nominal surfaces and shapes enables handling of different systems and formats. Next, the thesis addresses the important problem of establishing registration between point sets in different reference frames when the initial relative pose between them is significantly large; a novel initial-pose-invariant methodology has been developed. Finally, the above approach is extended to the registration of multiview inspection data sets, based on acquiring the transformation information of each inspection view using the virtual gauging concept. The thesis describes implementations addressing each of these problems in the area of automated registration and verification, leading towards automatic inspection.
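The MICP idea of taking correspondences from a locally triangulated patch of the nominal surface, rather than from the nearest measured point, ultimately reduces to computing the closest point on a triangle for each inspection point. A self-contained sketch of that geometric primitive is given below (editor's illustration; the thesis's actual region-selection logic is not reproduced, and degenerate triangles are not handled).

```python
import numpy as np

def closest_point_on_segment(p, a, b):
    """Closest point to p on segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc: project to the plane, then fall back to edges."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    proj = p - np.dot(p - a, n) * n            # orthogonal projection onto the triangle plane

    # barycentric coordinates of the projection
    v0, v1, v2 = b - a, c - a, proj - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v >= 0.0 and w >= 0.0 and v + w <= 1.0:
        return proj                            # projection lies inside the triangle

    # otherwise the closest point lies on one of the edges
    candidates = [closest_point_on_segment(p, a, b),
                  closest_point_on_segment(p, b, c),
                  closest_point_on_segment(p, c, a)]
    return min(candidates, key=lambda q: np.linalg.norm(p - q))
```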