About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Improving the Quality of LiDAR Point Cloud Data for Greenhouse Crop Monitoring

Si, Gaoshoutong 09 August 2022 (has links)
No description available.
2

Reweighted Discriminative Optimization for least-squares problems with point cloud registration

Zhao, Y., Tang, W., Feng, J., Wan, Tao Ruan, Xi, L. 26 March 2022 (has links)
Optimization plays a pivotal role in computer graphics and vision. Learning-based optimization algorithms have emerged as a powerful technique for solving problems with robustness and accuracy because they learn gradients from data without calculating the Jacobian and Hessian matrices. The key aspect of these algorithms is the least-squares method, which formulates a general parametrized model of unconstrained optimization and drives a residual vector towards zero to approximate a solution. The method may suffer from undesirable local optima in many applications, especially in point cloud registration, where each element of the transformation vector has a different impact on registration. In this paper, a Reweighted Discriminative Optimization (RDO) method is proposed. By assigning different weights to components of the parameter vector, RDO explores the impact of each component and the asymmetrical contributions of the components to the fitting results. The weights of the parameter vector are adjusted at each iteration according to the characteristics of the mean square error of the fitting results over the parameter vector space. A theoretical analysis of the convergence of RDO is provided, and the benefits of RDO are demonstrated on 3D point cloud registration and multi-view stitching tasks. The experimental results show that RDO outperforms state-of-the-art registration methods in terms of accuracy and robustness to perturbations, and achieves further improvement over non-weighted learning-based optimization.
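The core idea, per-component weighting inside an iterative least-squares update, can be illustrated with a small sketch. This is a minimal toy example on a linear least-squares problem, not the authors' RDO algorithm or its learned update; the reweighting rule shown (weights proportional to each component's recent gradient contribution, normalized to mean one) is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: find x minimising ||A x - b||^2.
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=50)

x = np.zeros(3)        # current parameter estimate
w = np.ones(3)         # per-component weights, all equal at the start
step = 0.05

for it in range(200):
    residual = A @ x - b                 # residual vector
    grad = A.T @ residual / len(b)       # gradient of 0.5 * mean squared error
    x -= step * w * grad                 # weighted update: each component scaled by its own weight

    # Hypothetical reweighting rule (illustrative only): components that still
    # contribute a large gradient get a larger weight on the next iteration.
    contrib = np.abs(grad) + 1e-12
    w = contrib / contrib.mean()

print("estimate:", np.round(x, 3), "true:", x_true)
```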
3

Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models

Diskin, Yakov 03 June 2015 (has links)
No description available.
4

Point Cloud Registration using both Machine Learning and Non-learning Methods : with Data from a Photon-counting LIDAR Sensor

Boström, Maja January 2023 (has links)
Point cloud registration with data measured by a photon-counting LIDAR sensor from a large distance (500 m - 1.5 km) is an expanding field. Data measured from far away are sparse and low in detail, which can make the registration process difficult, and registering this type of data is fairly unexplored. In recent years, machine learning for point cloud registration has been explored with promising results. This work compares the performance of the point cloud registration algorithm Iterative Closest Point with state-of-the-art algorithms, using data from a photon-counting LIDAR sensor. The data was provided by the Swedish Defense Research Agency (FOI). The chosen state-of-the-art algorithms were the non-learning-based Fast Global Registration and the learning-based D3Feat and SpinNet. The results indicated that all state-of-the-art algorithms achieve a substantial increase in performance compared to the Iterative Closest Point method. All the state-of-the-art algorithms utilize their calculated features to obtain better correspondence points and can therefore achieve higher performance in point cloud registration. D3Feat performed point cloud registration with the highest accuracy of all the state-of-the-art algorithms and ICP.
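The common thread in the feature-based methods above is that per-point descriptors are matched to obtain correspondences, from which a rigid transform can be estimated in closed form. Below is a generic sketch of that idea using synthetic "ideal" descriptors, mutual nearest-neighbour matching, and the SVD (Kabsch) solution; it is not D3Feat, SpinNet, or Fast Global Registration, whose descriptors and optimisation differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def mutual_nn_matches(feat_src, feat_dst):
    """Pairs (i, j) where src[i] and dst[j] are each other's nearest feature."""
    ij = cKDTree(feat_dst).query(feat_src)[1]   # nearest dst feature for each src feature
    ji = cKDTree(feat_src).query(feat_dst)[1]   # nearest src feature for each dst feature
    return np.array([(i, j) for i, j in enumerate(ij) if ji[j] == i])

def rigid_from_correspondences(src, dst):
    """Closed-form (Kabsch) rotation R and translation t minimising ||R src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Synthetic example: pose-invariant descriptors identical up to noise,
# standing in for learned features.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(200, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.2, -0.1])
perm = rng.permutation(200)
dst = (src @ R_true.T + t_true)[perm]
feat = rng.normal(size=(200, 32))               # one descriptor per source point
feat_src = feat + 0.05 * rng.normal(size=feat.shape)
feat_dst = feat[perm] + 0.05 * rng.normal(size=feat.shape)

matches = mutual_nn_matches(feat_src, feat_dst)
R, t = rigid_from_correspondences(src[matches[:, 0]], dst[matches[:, 1]])
print("rotation error:", np.linalg.norm(R - R_true), "translation:", t)
```

Good descriptors let the pose be recovered in one shot, whereas ICP below must iterate from a reasonable initial alignment.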
5

Relative pose estimation of a plane on an airfield with automotive-class solid-state LiDAR sensors : Enhancing vehicular localization with point cloud registration

Casagrande, Marco January 2021 (has links)
Point cloud registration is a technique to align two sets of points with manifold applications across a range of industries. However, due to a lack of adequate sensing technology, this technique has seldom found applications in the automotive sector up to now. With the advent of solid-state Light Detection and Ranging (LiDAR) sensors that are easily integrable in series production vehicles as means to sense the surrounding environment, this technique can be functional to automate their operations. Maneuvering a vehicle in the proximity of a reference object is one such operation, which can only be performed by accurately estimating its position and orientation relative to the vehicle itself. This project deals with the design and the implementation of an algorithm to accurately locate an aircraft parked on an airfield apron in real time. This is achieved by registering the point cloud model of the plane to the measurement point cloud of the scene produced by the LiDAR sensors on board the vehicle. To this end, the Iterative Closest Point (ICP) algorithm is a well-established approach to register two sets of points without prior knowledge of the correspondences between pairs of points, which, however, is notoriously sensitive towards outliers and computationally expensive with large point clouds. In this work, different variants are presented that improve on the standard ICP algorithm, in terms of accuracy and runtime performance, by leveraging different data structures to index the reference model and outlier rejection strategies. The results show that the implemented algorithms can produce estimates of centimeter precision in milliseconds based only on partial observations of the aircraft, outperforming another established solution tested.
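For reference, a minimal point-to-point ICP iteration works as follows: pair each point with its nearest neighbour in the reference model (indexed here with a k-d tree), reject pairs beyond a distance threshold as outliers, and solve the remaining correspondences in closed form. This is a generic textbook sketch, not the specific variants implemented in the thesis; the threshold, iteration count, and stopping rule are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Closed-form rotation/translation aligning src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, max_pair_dist=0.25):
    """Point-to-point ICP: k-d tree pairing, distance-based outlier rejection,
    closed-form update. Returns the accumulated 4x4 transform."""
    tree = cKDTree(dst)                      # index the reference cloud once
    T = np.eye(4)
    cur = src.copy()
    for _ in range(iters):
        d, j = tree.query(cur)               # nearest neighbour in dst for every point
        keep = d < max_pair_dist             # simple outlier rejection
        if keep.sum() < 3:
            break
        R, t = best_rigid(cur[keep], dst[j[keep]])
        cur = cur @ R.T + t                  # apply the incremental update
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        if np.linalg.norm(t) < 1e-6 and np.allclose(R, np.eye(3), atol=1e-6):
            break                            # no further improvement: converged
    return T

# Toy usage: recover a small known motion between two copies of a random cloud.
rng = np.random.default_rng(2)
dst = rng.uniform(-1, 1, size=(2000, 3))
ang = np.deg2rad(3)
R0 = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
src = (dst - np.array([0.03, 0.01, 0.0])) @ R0   # so that dst = R0 @ src + t0
print(icp(src, dst))
```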
6

Point clouds in the application of Bin Picking

Anand, Abhijeet January 2023 (has links)
Automatic bin picking is a well-known problem in industrial automation and computer vision, where a robot picks an object from a bin and places it somewhere else. Research has been ongoing for many years to improve contemporary solutions. With camera technology advancing rapidly and fast computation resources available, solving this problem with deep learning has become of current interest to several researchers. This thesis intends to leverage current state-of-the-art deep learning based methods for 3D instance segmentation and point cloud registration and combine them to improve the bin picking solution, improving its performance and making it more robust. The problem of bin picking becomes complex when the bin contains identical objects with heavy occlusion. To solve this problem, 3D instance segmentation is performed with the Fast Point Cloud Clustering (FPCC) method to detect and locate the objects in the bin. Further, an extraction strategy is proposed to choose one predicted instance at a time. In the next step, a point cloud registration technique based on the PointNetLK method is implemented to estimate the pose of the selected object in the bin. The above implementation is trained, tested, and evaluated on synthetically generated datasets. The synthetic dataset also contains several noisy point clouds to imitate a real situation. Real data captured at the company SICK IVP is also tested with the implemented model. It is observed that the 3D instance segmentation can detect and locate the objects in the bin. In a noisy environment, the performance degrades as the noise level increases; however, the decrease in performance is not very significant. Point cloud registration is observed to register best with the full point cloud of the object, compared to a point cloud with missing points.
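The first two stages of such a pipeline, splitting the bin scan into instances and picking one of them, can be sketched as follows. DBSCAN clustering is used here only as a simple stand-in for the learned FPCC segmentation, and the "highest centroid first" rule is an illustrative assumption, not the extraction strategy proposed in the thesis; the selected instance would then be passed to a registration step (PointNetLK in the thesis) against the object model to estimate its pose.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pick_next_instance(cloud, eps=0.01, min_samples=30):
    """Split a bin scan into object instances and choose one to pick.

    DBSCAN stands in for FPCC; 'highest centroid first' is an illustrative
    heuristic that prefers the least-occluded (topmost) object."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(cloud).labels_
    instances = [cloud[labels == k] for k in set(labels) if k != -1]  # -1 = noise
    if not instances:
        return None
    return max(instances, key=lambda pts: pts[:, 2].mean())

# Toy bin: three separated blobs standing in for identical parts at different heights.
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0, 0.00], [0.1, 0.0, 0.02], [0.0, 0.1, 0.05]])
cloud = np.vstack([c + 0.005 * rng.normal(size=(300, 3)) for c in centers])
target = pick_next_instance(cloud)
print("selected instance centroid:", target.mean(axis=0).round(3))
```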
7

Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces

Macknojia, Rizwan 21 March 2013 (has links)
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one designed to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by linking multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
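The "calibrate every unit against a reference Kinect" scheme boils down to composing rigid transforms: once each unit i has an estimated pose T_ref←i, any of its measurements can be expressed in the reference frame and, with one more transform, in the robot base frame. A small sketch with homogeneous 4x4 matrices follows; the extrinsic values are made up for illustration and are not the calibration results from the thesis.

```python
import numpy as np

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Illustrative extrinsics: pose of Kinect 2 in the reference Kinect's frame,
# and the reference Kinect's pose in the robot base frame.
T_ref_from_k2 = np.eye(4)
T_ref_from_k2[:3, 3] = [1.5, 0.0, 0.0]                         # Kinect 2 sits 1.5 m to the side
T_base_from_ref = np.eye(4)
T_base_from_ref[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90 degrees about z
T_base_from_ref[:3, 3] = [0.2, 0.4, 1.0]

points_k2 = np.array([[0.0, 0.0, 2.0]])                        # a point 2 m in front of Kinect 2
in_ref = transform(points_k2, T_ref_from_k2)                   # into the reference Kinect frame
in_base = transform(points_k2, T_base_from_ref @ T_ref_from_k2)  # chained into the robot base frame
print(in_ref, in_base)
```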
10

3D mapování s využitím řídkých dat senzoru LiDAR / 3D Mapping from Sparse LiDAR Data

Veľas, Martin Unknown Date (has links)
This thesis deals with the design of new algorithms for processing sparse 3D data from LiDAR sensors, including the complete design of a backpack mobile mapping solution. This research was motivated by the need for such solutions in geodesy, mobile surveying, and construction. First, an iterative algorithm for reliable point cloud registration and odometry estimation from 3D LiDAR measurements is presented. The sparsity and size of these data are addressed by random sampling using Collar Line Segments (CLS). Evaluation on the standard KITTI dataset showed superior accuracy compared to the well-known Generalized ICP algorithm. Convolutional neural networks play an important role in the second odometry estimation method, which processes LiDAR data encoded into 2D matrices. The method is capable of online performance while preserving accuracy when only the translation parameters are required. This can be useful in situations where an online preview of the mapping is needed and the rotation parameters can be reliably provided by, for example, an IMU sensor. Based on the CLS algorithm, the backpack mobile mapping solution 4RECON was designed and implemented. Using a calibrated and synchronized pair of Velodyne LiDARs together with a dual-antenna GNSS/INS solution, a versatile system was developed that provides accurate 3D modelling of both small indoor and large open environments. Our evaluation showed that the requirements set for this system were met: relative accuracy within 5 cm and an average georeferencing error below 12 cm. The final pages contain the description and evaluation of another method based on convolutional neural networks, designed for ground segmentation in 3D LiDAR point clouds. This method outperformed the state of the art in this area and shows how semantic information can be embedded into 3D laser data.
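The input encoding used by the second odometry method, turning a sparse LiDAR scan into dense 2D matrices, is commonly done by projecting each point onto a spherical (azimuth x elevation) grid and storing its range. A generic sketch of such a projection is below; the grid resolution and vertical field of view are arbitrary choices for illustration, not the encoding used in the thesis.

```python
import numpy as np

def range_image(points, h=64, w=360, v_fov=(-25.0, 3.0)):
    """Project an (N, 3) LiDAR scan into an h x w matrix of ranges (metres).

    Azimuth maps to columns, elevation to rows; empty cells stay 0. The 64x360
    grid and the -25..3 degree vertical field of view are illustrative only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                   # -pi .. pi
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((elevation - v_fov[0]) / (v_fov[1] - v_fov[0]) * (h - 1)).round().astype(int)
    img = np.zeros((h, w))
    ok = (row >= 0) & (row < h)                                  # drop points outside the vertical FOV
    img[row[ok], col[ok]] = r[ok]                                # on collisions, one of the ranges is kept
    return img

# Toy scan: random points around the sensor.
rng = np.random.default_rng(4)
pts = rng.uniform([-20, -20, -2], [20, 20, 1], size=(5000, 3))
print(range_image(pts).shape)
```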
