231 |
Modélisation 3D automatique d'environnements : une approche éparse à partir d'images prises par une caméra catadioptrique / Automatic 3d modeling of environments : a sparse approach from images taken by a catadioptric camera
Yu, Shuda, 03 June 2013 (has links)
La modélisation 3d automatique d'un environnement à partir d'images est un sujet toujours d'actualité en vision par ordinateur. Ce problème se résout en général en trois temps : déplacer une caméra dans la scène pour prendre la séquence d'images, reconstruire la géométrie, et utiliser une méthode de stéréo dense pour obtenir une surface de la scène. La seconde étape met en correspondances des points d'intérêts dans les images puis estime simultanément les poses de la caméra et un nuage épars de points 3d de la scène correspondant aux points d'intérêts. La troisième étape utilise l'information sur l'ensemble des pixels pour reconstruire une surface de la scène, par exemple en estimant un nuage de points dense.Ici nous proposons de traiter le problème en calculant directement une surface à partir du nuage épars de points et de son information de visibilité fournis par l'estimation de la géométrie. Les avantages sont des faibles complexités en temps et en espace, ce qui est utile par exemple pour obtenir des modèles compacts de grands environnements comme une ville. Pour cela, nous présentons une méthode de reconstruction de surface du type sculpture dans une triangulation de Delaunay 3d des points reconstruits. L'information de visibilité est utilisée pour classer les tétraèdres en espace vide ou matière. Puis une surface est extraite de sorte à séparer au mieux ces tétraèdres à l'aide d'une méthode gloutonne et d'une minorité de points de Steiner. On impose sur la surface la contrainte de 2-variété pour permettre des traitements ultérieurs classiques tels que lissage, raffinement par optimisation de photo-consistance ... Cette méthode a ensuite été étendue au cas incrémental : à chaque nouvelle image clef sélectionnée dans une vidéo, de nouveaux points 3d et une nouvelle pose sont estimés, puis la surface est mise à jour. La complexité en temps est étudiée dans les deux cas (incrémental ou non). 
Dans les expériences, nous utilisons une caméra catadioptrique bas coût et obtenons des modèles 3d texturés pour des environnements complets incluant bâtiments, sol, végétation ... Un inconvénient de nos méthodes est que la reconstruction des éléments fins de la scène n'est pas correcte, par exemple les branches des arbres et les pylônes électriques. / The automatic 3D modeling of an environment using images is still an active topic in Computer Vision. Standard methods have three steps: moving a camera in the environment to take an image sequence, reconstructing the geometry of the environment, and applying a dense stereo method to obtain a surface model of the environment. In the second step, interest points are detected and matched in the images, then camera poses and a sparse cloud of 3D points corresponding to the interest points are simultaneously estimated. In the third step, all pixels of the images are used to reconstruct a surface of the environment, e.g. by estimating a dense cloud of 3D points. Here we propose to generate a surface directly from the sparse point cloud and its visibility information provided by the geometry reconstruction step. The advantages are low time and space complexities; this is useful e.g. for obtaining compact models of large and complete environments like a city. To do so, a surface reconstruction method that sculpts a 3D Delaunay triangulation of the reconstructed points is proposed. The visibility information is used to classify the tetrahedra as free space or matter. Then a surface is extracted with a greedy method and a minority of Steiner points. The 2-manifold constraint is enforced on the surface to allow standard surface post-processing such as denoising and refinement by photo-consistency optimization.
This method is also extended to the incremental case: each time a new key-frame is selected in the input video, new 3D points and a new camera pose are estimated, then the reconstructed surface is updated. We study the time complexity in both cases (incremental or not). In experiments, a low-cost catadioptric camera is used to generate textured 3D models of complete environments including buildings, ground and vegetation. A drawback of our methods is that thin scene components cannot be correctly reconstructed, e.g. tree branches and electricity pylons.
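The free-space/matter classification described above can be illustrated with a minimal sketch (Python with SciPy; this is not the thesis's algorithm, which also inserts Steiner points and enforces the 2-manifold constraint): every tetrahedron of the Delaunay triangulation crossed by a camera-to-point visibility ray is carved out as free space, and the remainder is kept as matter.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(200, 3))   # sparse reconstructed cloud
tri = Delaunay(points)                           # 3D Delaunay triangulation

camera = np.array([0.0, 0.0, 5.0])               # one camera center
free = np.zeros(tri.simplices.shape[0], dtype=bool)

# Each reconstructed point was seen from the camera, so the segment from
# the camera to the point crosses only empty space: mark every tetrahedron
# that segment passes through as free space.
for p in points[:50]:                            # a subset of visibility rays
    for t in np.linspace(0.0, 0.99, 25):         # stop just short of the point
        s = tri.find_simplex(camera + t * (p - camera))
        if s >= 0:                               # -1 means outside the hull
            free[s] = True

matter = ~free  # a surface is then extracted between free space and matter
```

In the thesis the ray/tetrahedron traversal is exact rather than sampled, and the final surface is the set of triangles separating the two tetrahedron classes.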
|
232 |
Hantering och modellering av laserskanningsdata i FME : Automatisering av modellering av tunnlar / Automation of modelling of tunnels
Lindqvist, Linus; Pantesjö, Jesper, January 2019 (has links)
Bygg- och anläggningsbranschens implementering av BIM har resulterat i ett ökat behov att digitaliserat relationsunderlag. Äldre relationshandlingar, som mestadels utgörs av pappersritningar, saknar digitala motsvarigheter vilket gör att insamlingar av ny information, från pappersritningar, kan bli aktuell. Terrester laserskanning (TLS) är en teknik som tillämpas för insamling av data i punktmolnsform och är en allt mer förekommande insamlingsmetod vid införskaffning av relationsunderlag. Modellering från tredimensionella punktmolnsdata är ofta komplicerad och på så vis införstått med manuellt arbete för att producera ett godtyckligt resultat. Syftet med examensarbetet var att undersöka möjligheten att skapa en CAD-modell av en tunnels ytskikt från ett punktmoln med hjälp av programvaran FME. Studieområdet är ett mindre tunnelsegment och den insamlade datamängden utgörs av tidigare framarbetat punktmoln. Punktmolnet är obearbetat och innehåller brus i form av avvikande punkter samt installations- och konstruktionsobjekt. Tidigare producerat relationsunderlag, i form av CAD-modell, tilldelades också för att möjliggöra en jämförelse mot de modeller som skapats i arbetet. FME tillhandahåller ett flertal verktyg för bearbetning av punktmoln och arbetet har omfattats av tester där de olika verktygen utvärderats. Det huvudsakliga fokuset har legat på verktyget PointCloudSurfaceBuilder, vars funktion är att rekonstruera punktmoln till en mesh. En metod för filtrering av punktmolnet utformades och utreddes också under arbetet. Flertalet försök utfördes för att testa vad som fungerade bäst och ett antal modeller av varierande kvalitet kunde skapas. Metoden Poisson i verktyget PointCloudSurfaceBuilder visade bäst resultat då den skapar en “vattentät” modell som följer punktmolnets rumsliga förhållande bättre än det tilldelade relationsunderlaget. För metoden Poisson var Maximum Depth den parameter som hade störst inverkan på resultatets kvalitet. 
För varje höjning med 1 i parametern Maximum Depth så ökade upplösningen kvadratiskt i varje dimension för x, y och z. De totala värdena för tidsåtgång, filstorlek och antal trianglar ökade även potentiellt med upplösningen. Värden över 9 blir svåra, om inte omöjliga, att hantera i CAD-miljöer på grund av för detaljerade data i förhållande studieområdets storlek. Därav rekommenderas 7 och 8 som parametervärden vid modellering i miljöer likartade med tunnelsegmentet. / The building and construction industry's implementation of BIM has resulted in an increased need to digitise as-built documentation. Older as-built documents, which mostly consist of paper drawings, lack digital counterparts, so collecting new information from the paper drawings can become necessary. Terrestrial laser scanning (TLS) is a technique applied to collect data in the form of point clouds, and it is an increasingly common method for obtaining as-built documentation. Modelling from three-dimensional point cloud data is usually complicated and therefore involves manual labour to produce an acceptable result. The purpose of this bachelor thesis was to investigate the possibility of creating a CAD model of a tunnel's surface from a point cloud using the software FME. The study area is a smaller tunnel segment, and the data set consists of a previously produced point cloud. The point cloud is unprocessed and contains noise in the form of deviant points as well as installation and construction objects. A previously produced as-built CAD model was also provided, to enable a comparison with the models created in this thesis. FME provides several tools for processing point clouds, and the work included tests in which the different tools were evaluated.
The primary focus of the work has been to evaluate the tool PointCloudSurfaceBuilder, whose function is to reconstruct point clouds into a mesh. A method for filtering noise from the point cloud was also designed and examined. Several tests were executed to determine which approach worked best, and models of varying quality were produced. The Poisson method in the PointCloudSurfaceBuilder transformer produced the best results, as it creates a "watertight" model that follows the point cloud's spatial conditions better than the provided as-built model does. For the Poisson method, the Maximum Depth parameter had the greatest impact on the quality of the result. For every increase of 1 in Maximum Depth, the resolution doubled in each of the x, y and z dimensions. The total values for processing time, file size and number of triangles grew correspondingly with the resolution. Values above 9 are hard, if not impossible, to handle in CAD environments, because the data becomes too detailed in relation to the size of the study area. Therefore, parameter values of 7 and 8 are recommended when modelling environments similar to the tunnel segment.
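The reported behaviour of Maximum Depth matches the octree depth parameter of Poisson surface reconstruction, on which FME's Poisson method is based: the reconstruction grid has 2^depth cells per axis, so each +1 in depth doubles the linear resolution, and triangle counts grow roughly fourfold. A small sketch of this arithmetic (the 4x triangle growth is an approximation, not a value from the thesis):

```python
def cells_per_axis(depth: int) -> int:
    # a Poisson reconstruction octree of depth d splits each axis into
    # 2**d cells, so +1 in Maximum Depth doubles the linear resolution
    return 2 ** depth

def approx_triangle_growth(d_from: int, d_to: int) -> int:
    # triangle count scales roughly with the number of grid cells the
    # surface crosses, i.e. about 4x per unit of depth (a 2D surface
    # embedded in a 3D grid)
    return 4 ** (d_to - d_from)

for d in (7, 8, 9):
    print(d, cells_per_axis(d))   # 128, 256 and 512 cells per axis
```

This is why the recommended values 7 and 8 trade detail against manageable file sizes, while 9 and above overwhelm CAD environments for a study area of this size.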
|
233 |
A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs
Orts-Escolano, Sergio, 21 January 2014 (has links)
The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposed the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we proposed the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of a new paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance and 3D reconstruction.
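For reference, the core of the GNG algorithm the abstract builds on can be sketched in plain NumPy (an illustrative CPU version with arbitrary parameter values; the thesis's contribution is the GPU-parallel variant, and a full GNG also removes isolated nodes, which is omitted here):

```python
import numpy as np

def gng_fit(data, max_nodes=50, lam=100, eps_b=0.05, eps_n=0.006,
            alpha=0.5, beta=0.0005, max_age=50, steps=8000, seed=0):
    """Minimal Growing Neural Gas: learns a graph of nodes approximating
    the distribution and topology of the input samples."""
    rng = np.random.default_rng(seed)
    nodes = [data[0].astype(float), data[1].astype(float)]
    error = [0.0, 0.0]
    edges = {}                                     # frozenset({i, j}) -> age
    for step in range(1, steps + 1):
        x = data[rng.integers(len(data))]
        dists = [float(np.sum((n - x) ** 2)) for n in nodes]
        s1, s2 = (int(i) for i in np.argsort(dists)[:2])
        error[s1] += dists[s1]
        nodes[s1] += eps_b * (x - nodes[s1])       # move the winner
        for e in list(edges):                      # winner's incident edges
            if s1 in e:
                (j,) = e - {s1}
                nodes[j] += eps_n * (x - nodes[j])  # move its neighbors
                edges[e] += 1                       # age the edge
                if edges[e] > max_age:
                    del edges[e]
        edges[frozenset((s1, s2))] = 0             # create/refresh edge
        if step % lam == 0 and len(nodes) < max_nodes:
            # insert a node between the worst node and its worst neighbor
            q = max(range(len(nodes)), key=lambda i: error[i])
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: error[i])
                r = len(nodes)
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, r))] = 0
                edges[frozenset((f, r))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
        error = [e_ * (1.0 - beta) for e_ in error]  # global error decay
    return np.array(nodes), edges

# demo: learn the topology of a noisy 2D ring
rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2.0 * np.pi, 400)
ring = np.c_[np.cos(ang), np.sin(ang)] + rng.normal(0.0, 0.05, (400, 2))
nodes, edges = gng_fit(ring)
```

The per-sample distance search over all nodes is the hot spot that the thesis parallelizes on the GPU; the sequential insert/age bookkeeping is what makes a naive port non-trivial.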
|
234 |
Contributions to 3D Data Registration and Representation
Morell, Vicente, 02 October 2014 (has links)
Nowadays, the new generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential prerequisite for moving through that environment. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars and contact sensors have typically appeared in mobile robotic architectures. Color cameras, however, are an important sensor, because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is obtained, the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This type of Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organizing Maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes in a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of a new paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our method achieves good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the usefulness of GPU technologies by obtaining a high-performance implementation of Virtual Digitizing, a common CAD/CAM technique.
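The plane-detection step underlying the proposed geometric compression can be sketched with a standard RANSAC plane fit (a generic illustration, not the thesis's exact detector; storing the plane parameters plus a 2D footprint instead of the raw inlier points is what yields the compression):

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.02, seed=0):
    """RANSAC plane fit: returns a unit normal n, offset d (n . p ~ d)
    and the boolean inlier mask of the best plane found."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        i = rng.choice(len(pts), 3, replace=False)   # minimal sample
        p0, p1, p2 = pts[i]
        n = np.cross(p1 - p0, p2 - p0)
        nn = np.linalg.norm(n)
        if nn < 1e-12:                               # degenerate triple
            continue
        n = n / nn
        d = n @ p0
        inliers = np.abs(pts @ n - d) < tol          # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

# demo: noisy points on the plane z = 0.5 plus scattered outliers
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(0.0, 1.0, (300, 2)), np.full(300, 0.5)]
plane_pts[:, 2] += rng.normal(0.0, 0.005, 300)
cloud = np.vstack([plane_pts, rng.uniform(0.0, 1.0, (60, 3))])
n, d, inl = ransac_plane(cloud)
```

After detection, the inlier points can be replaced by the four plane parameters and a compact boundary polygon, which is where the man-made-scene compression gain comes from.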
|
235 |
High Speed, Micron Precision Scanning Technology for 3D Printing Applications
Emord, Nicholas, 01 January 2018 (has links)
Modern 3D printing technology is becoming a more viable option for use in industrial manufacturing. As the speed and precision of rapid prototyping technology improve, so too must the 3D scanning and verification technology. Current 3D scanning technology (such as CT scanners) produces the resolution needed for micron-precision inspection. However, the method lacks speed. Some scans can be multiple gigabytes in size, taking several minutes to acquire and process. Especially in high-volume manufacturing of 3D printed parts, such delays prohibit the widespread adoption of 3D scanning technology for quality control. The limiting factors of current technology boil down to computational and processing power along with available sensor resolution and operational frequency. Realizing a 3D scanning system that produces micron-precision results within a single minute promises to revolutionize the quality control industry.
The specific 3D scanning method considered in this thesis utilizes a line-profile triangulation sensor with a high operational frequency and a high-precision mechanical actuation apparatus for controlling the scan. By syncing the operational frequency of the sensor to the actuation velocity of the apparatus, a 3D point cloud is rapidly acquired. Processing of the data is then performed using MATLAB on contemporary computing hardware, which includes proper point cloud formatting and implementation of the Iterative Closest Point (ICP) algorithm for point cloud stitching. Theoretical and physical experiments are performed to demonstrate the validity of the method. The prototyped system is shown to produce multiple loosely registered micron-precision point clouds of a 3D printed object that are then stitched together to form a full point cloud representative of the original part. This prototype produces micron-precision results in approximately 130 seconds, but the experiments point to the additional investments by which this time could be further reduced to approach the revolutionary one-minute milestone.
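The point cloud stitching step can be sketched with a minimal point-to-point ICP (NumPy/SciPy rather than the thesis's MATLAB pipeline; real scans additionally need outlier rejection, which this sketch omits):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t with A @ R.T + t ~ B
    (Kabsch algorithm via SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Point-to-point ICP: returns R, t aligning src onto dst."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)      # nearest-neighbor correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# demo: recover a small known motion between two scans of the same part
rng = np.random.default_rng(0)
scan = rng.uniform(-1.0, 1.0, (500, 3))
ang = 0.1
R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
moved = scan @ R0.T + np.array([0.05, -0.02, 0.03])
R, t = icp(scan, moved)
```

ICP converges only from a reasonable initial alignment, which is why the thesis's scans are described as loosely registered before stitching.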
|
236 |
基於多視角幾何萃取精確影像對應之研究 / Accurate image matching based on multiple view geometry
謝明龍, Hsieh, Ming Lung, Unknown Date (has links)
近年來諸多學者專家致力於從多視角影像獲取精確的點雲資訊,並藉由點雲資訊進行三維模型重建等研究,然而透過多視角影像求取三維資訊的精確度仍然有待提升,其中萃取影像對應與重建三維資訊方法,是多視角影像重建三維資訊的關鍵核心,決定點雲資訊的形成方式與成效。
本論文中,我們提出了一套新的方法,由多視角影像之間的幾何關係出發,萃取多視角影像對應與重建三維點,可以有效地改善對應點與三維點的精確度。首先,在萃取多視角影像對應的部份,我們以相互支持轉換、動態高斯濾波法與綜合性相似度評估函數,改善補綴面為基礎的比對方法,提高相似度測量值的辨識力與可信度,可從多視角影像中獲得精確的對應點。其次,在重建三維點的部份,我們使用K均值分群演算法與線性內插法發掘潛在的三維點,讓求出的三維點更貼近三維空間真實物體表面,能在多視角影像中獲得更精確的三維點。
實驗結果顯示，採用本研究所提出的方法進行改善後，在對應點精確度的提升上有很好的成效，所獲得的點雲資訊存在數萬個精確的三維點，而且僅有少數的離群點。 / Recently, many researchers have paid attention to obtaining accurate point cloud data from multi-view images and using these data in 3D model reconstruction. However, the accuracy still needs to be improved. Among this research, the methods for extracting corresponding points and for computing 3D point information are the most critical ones; they directly affect the final results of the point cloud data and the 3D models constructed from it.
In this thesis, we propose new approaches, based on multi-view geometry, to improve the accuracy of corresponding points and 3D points. Mutual support transformation, dynamic Gaussian filtering and a comprehensive similarity evaluation function are used to improve patch-based matching methods for multi-view image correspondence. Using these mechanisms, the discriminative ability and reliability of the similarity function, and hence the accuracy of the extracted corresponding points, can be greatly improved. We also use the K-means algorithm and linear interpolation to find better 3D point candidates. The 3D points so computed lie much closer to the surface of the actual 3D object; this mechanism thus produces highly accurate 3D points.
Experimental results show that our mechanism improves the accuracy of the corresponding points as well as of the 3D point cloud data. We successfully generated accurate point cloud data that contains tens of thousands of 3D points with only a few outliers.
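The thesis's comprehensive similarity evaluation function is not spelled out in the abstract; the standard patch score such functions build on is normalized cross-correlation (NCC), which is invariant to affine brightness changes between views:

```python
import numpy as np

def ncc(p, q):
    """Normalized cross-correlation of two equally sized image patches:
    1.0 for identical patches (up to brightness/contrast), -1.0 for
    inverted ones, near 0 for unrelated ones."""
    p = (p - p.mean()) / (p.std() + 1e-12)   # standardize each patch
    q = (q - q.mean()) / (q.std() + 1e-12)
    return float((p * q).mean())

patch = np.arange(25.0).reshape(5, 5)        # toy 5x5 patch
```

In a patch-based matcher, this score is evaluated between a reference patch and candidate patches along the epipolar line in the other views, and the mutual-support and filtering steps described above operate on top of such raw scores.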
|
237 |
Examination of airborne discrete-return lidar in prediction and identification of unique forest attributes
Wing, Brian M., 08 June 2012 (has links)
Airborne discrete-return lidar is an active remote sensing technology capable of obtaining accurate, fine-resolution three-dimensional measurements over large areas. Discrete-return lidar data produce three-dimensional object characterizations in the form of point clouds defined by precise x, y and z coordinates. The data also provide intensity values for each point that help quantify the reflectance and surface properties of intersected objects. These data features have proven to be useful for the characterization of many important forest attributes, such as standing tree biomass, height, density, and canopy cover, with new applications for the data currently accelerating. This dissertation explores three new applications for airborne discrete-return lidar data.
The first application uses lidar-derived metrics to predict understory vegetation cover, which has been a difficult metric to predict using traditional explanatory variables. A new airborne lidar-derived metric, understory lidar cover density, created by filtering understory lidar points using intensity values, increased the coefficient of determination (R²) of understory vegetation cover estimation models from 0.2-0.45 (non-lidar models) to 0.7-0.8. The method presented in this chapter provides the ability to accurately quantify understory vegetation cover (± 22%) at fine spatial resolutions over entire landscapes within the interior ponderosa pine forest type.
In the second application, a new method for quantifying and locating snags using airborne discrete-return lidar is presented. The importance of snags in forest ecosystems and the inherent difficulties associated with their quantification has been well documented. A new semi-automated method using both 2D and 3D local-area lidar point filters focused on individual point spatial location and intensity information is used to identify points associated with snags and eliminate points associated with live trees. The end result is a stem map of individual snags across the landscape with height estimates for each snag. The overall detection rate for snags with DBH ≥ 38 cm was 70.6% (standard error: ± 2.7%), with low commission error rates. This information can be used to: analyze the spatial distribution of snags over entire landscapes, provide a better understanding of wildlife snag use dynamics, create accurate snag density estimates, and assess achievement and usefulness of snag stocking standard requirements.
In the third application, live above-ground biomass prediction models are created using three separate sets of lidar-derived metrics. Models are then compared using both model selection statistics and cross-validation. The three sets of lidar-derived metrics used in the study were: 1) a 'traditional' set created using the entire plot point cloud, 2) a 'live-tree' set created using a plot point cloud where points associated with dead trees were removed, and 3) a 'vegetation-intensity' set created using a plot point cloud containing points meeting predetermined intensity value criteria. The models using live-tree lidar-derived metrics produced the best results, reducing prediction variability by 4.3% over the traditional set in plots containing filtered dead tree points.
The methods developed and presented for all three applications displayed promise in prediction or identification of unique forest attributes, improving our ability to quantify and characterize understory vegetation cover, snags, and live above ground biomass. This information can be used to provide useful information for forest management decisions and improve our understanding of forest ecosystem dynamics. Intensity information was useful for filtering point clouds and identifying lidar points associated with unique forest attributes (e.g., understory components, live and dead trees). These intensity filtering methods provide an enhanced framework for analyzing airborne lidar data in forest ecosystem applications. / Graduation date: 2013
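The intensity-based point filtering used across all three applications can be sketched with boolean masks over a point array (the height stratum and intensity cutoff below are hypothetical illustrations, not the dissertation's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# toy plot point cloud: columns are x, y, height above ground (m), intensity
pts = np.column_stack([
    rng.uniform(0.0, 30.0, n),
    rng.uniform(0.0, 30.0, n),
    rng.uniform(0.0, 25.0, n),
    rng.integers(0, 256, n).astype(float),
])

# hypothetical filter: an understory height stratum of 0.5-4 m and an
# intensity cutoff meant to keep vegetation-like returns
in_stratum = (pts[:, 2] >= 0.5) & (pts[:, 2] <= 4.0)
veg_like = pts[:, 3] >= 70.0
understory = pts[in_stratum & veg_like]

# one plausible cover-density normalization: filtered understory returns
# over all returns at or below the stratum ceiling
ulcd = len(understory) / (pts[:, 2] <= 4.0).sum()
```

The snag and live-tree filters in the second and third applications follow the same pattern, combining intensity criteria with local 2D/3D spatial filters before any prediction model is fit.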
|
238 |
利用近紅外光影像之近景攝影測量建立數值表面模型之研究 / Construction of digital surface model using Near-IR close range photogrammetry
廖振廷, Liao, Chen Ting, Unknown Date (has links)
點雲(point cloud)為以大量三維坐標描述地表實際情形的資料形式,其中包含其三維坐標及相關屬性。通常點雲資料取得方式為光達測量,其以單一波段雷射光束掃描獲取資料,以光達獲取點雲,常面臨掃描時間差、缺乏多波段資訊、可靠邊緣線及角點資訊、大量離散點雲又缺乏語意資訊(semantic information)難以直接判讀及缺乏多餘觀測量等問題。
攝影測量藉由感測反射自太陽光或地物本身放射之能量,可記錄為二維多光譜影像,透過地物在不同光譜範圍表現之特性,可輔助分類,改善分類成果。若匹配多張高重疊率的多波段影像,可以獲取包含多波段資訊且位於明顯特徵點上的點雲,提供光達以外的點雲資料來源。
傳統空中三角測量平差解算地物點坐標及產製數值表面模型(Digital Surface Model, DSM)時,多採用可見光影像為主;而目前常見之高空間解析度數值航照影像,除了記錄可見光波段之外,亦可蒐集近紅外光波段影像。但較少採用近紅外光波段影像,以求解地物點坐標及建立DSM。
因此本研究利用多波段影像所蘊含的豐富光譜資訊,以取像方式簡易及低限制條件的近景攝影測量方式,匹配多張可見光、近紅外光及紅外彩色影像,分別建立可見光、近紅外光及紅外彩色之DSM,其目的在於探討加入近紅外光波段後,所產生的近紅外光及紅外彩色DSM,和可見光DSM之異同;並比較該DSM是否更能突顯植被區。
研究顯示，以可見光點雲為檢核資料，計算近紅外光與紅外彩色點雲的均方根誤差為其距離門檻值之相對檢核方法，可獲得約21%的點雲增加率；然而使用近紅外光或紅外彩色影像，即使能增加點雲資料量，但對於增加可見光影像未能匹配的資料方面，其效果仍屬有限。 / A point cloud represents the surface as a mass of 3D coordinates and attributes. Generally, these data are collected by LiDAR (Light Detection And Ranging), which acquires data through single-band laser scanning. But data collected by LiDAR face several problems: the scanning process is not instantaneous, and the data lack multispectral information, breaklines, corners, semantic information and redundant observations.
Photogrammetry, however, records the electromagnetic energy reflected or emitted from the surface as 2D multispectral images; because ground features differ in their spectral characteristics, they can be classified more efficiently and precisely. By matching multiple highly overlapping multispectral images, a point cloud that includes multispectral information and lies on distinct feature points can be acquired. This provides a point cloud source apart from LiDAR.
In most studies, visible light (VIS) images are primarily used when calculating ground point coordinates and generating digital surface models (DSMs) through aerotriangulation. Nowadays, high-spatial-resolution digital aerial images can record not only the VIS channels but also a near-infrared (NIR) channel; however, little research has carried out these procedures using NIR images.
Therefore, this research exploits the rich spectral information in multispectral images, using a close range photogrammetry approach with simple image acquisition and few restrictions. It matches several VIS, NIR and color infrared (CIR) images and generates VIS, NIR and CIR DSMs respectively. The purpose is to analyze the differences between the VIS, NIR and CIR DSMs, and whether adding the NIR channel in DSM generation can better emphasize vegetated areas.
The results are based on a relative check between the NIR and CIR data and the VIS data: the VIS point cloud was set as the check data, and the RMSE (Root Mean Square Error) of the NIR and CIR point clouds was used as a distance threshold. By this measure, the point cloud data increased by about 21%. However, although matching NIR and CIR images can increase the amount of point cloud data, their effect in adding points that the VIS images failed to match remains limited.
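The relative check described above, which uses the RMSE of nearest-neighbour distances to the VIS cloud as the distance threshold, can be sketched as follows (synthetic data; the ~21% figure comes from the thesis's real clouds, not from this demo):

```python
import numpy as np
from scipy.spatial import cKDTree

def increment_rate(vis_pts, new_pts):
    # nearest-VIS-neighbour distance for every NIR/CIR point; the RMSE of
    # those distances serves as the distance threshold, and points farther
    # than the threshold count as newly added data
    d, _ = cKDTree(vis_pts).query(new_pts)
    threshold = float(np.sqrt(np.mean(d ** 2)))
    return float((d > threshold).mean()), threshold

# synthetic demo: the NIR cloud mostly overlaps the VIS cloud but also
# contributes a block of points the VIS images could not match
rng = np.random.default_rng(0)
vis = rng.uniform(0.0, 10.0, (2000, 3))
nir = np.vstack([vis[:1500] + rng.normal(0.0, 0.01, (1500, 3)),
                 rng.uniform(20.0, 30.0, (300, 3))])
rate, thr = increment_rate(vis, nir)
```

Here 300 of 1800 NIR points lie far from the VIS cloud, so the computed increment rate is exactly 300/1800; on real data the threshold mixes matched and unmatched points, which is why the 21% increase overstates the genuinely new information.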
|
239 |
Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces
Rizwan, Macknojia, 21 March 2013 (has links)
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that are rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises an examination of the resolution, the quantization error, and the random distribution of the depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one designed to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version.
The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, in which multiple Kinect units are linked to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of each sensor between its color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors.
The proposed design and calibration method is validated in a real world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180 degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The geometrical properties of the vehicle generated from the acquired 3D data are compared with the original dimensions of the vehicle.
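Once every Kinect is calibrated against the reference unit, expressing its data in the reference frame reduces to applying a rigid transform; a minimal sketch with homogeneous 4x4 matrices (the pose below is hypothetical, for illustration only):

```python
import numpy as np

def make_T(R, t):
    # build a 4x4 homogeneous transform from a 3x3 rotation and translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_reference(T_ref_from_cam, p_cam):
    # map a 3D point from one sensor's frame into the reference Kinect frame
    return (T_ref_from_cam @ np.append(p_cam, 1.0))[:3]

# hypothetical pose of one auxiliary Kinect: rotated 90 degrees about z
# and shifted 2 m along x relative to the reference unit
ang = np.pi / 2.0
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
T = make_T(Rz, np.array([2.0, 0.0, 0.0]))
p_ref = to_reference(T, np.array([1.0, 0.0, 0.5]))
```

Chaining one more transform, from the reference Kinect to the robot base, gives the seamless vision-to-robot integration the abstract describes.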
|
240 |
以全波形光達之波形資料輔助製作植被覆蓋區數值高程模型 / DEM Generation with Full-Waveform LiDAR Data in Vegetation Area
廖思睿, Liao, Sui Jui, Unknown Date (has links)
在植被覆蓋的山區中,由於空載雷射掃描可穿透植被間縫隙的特性,有較高機會收集到植被下的地面資訊,因此適合作為製作植被覆蓋地區數值高程模型的資料來源,而在過濾過程中,一般僅利用點雲間的三維位置關係進行幾何過濾,而全波形空載雷射掃描可另外提供點位的波形寬、振幅值、散射截面積以及散射截面積數等波形資料,本研究將透過波形資料分析進行點雲過濾。
首先經最低點採樣後,本研究利用貝氏定理自動分析並計算得到地面點的波形資料的特徵區間範圍,採用振幅值、散射截面積以及散射截面積係數得到的特徵區間範圍開始第一階段的波形資料過濾,完成後再以第二階段的一般幾何過濾濾除剩餘之非地面點,最後的成果將與航測以及只採用幾何過濾時的成果比較。
由研究成果中顯示，不同的植被覆蓋間的單一回波波形資料的差異較明顯，最後回波類似。同一植被覆蓋下的單一回波及最後回波反應不同。而在成果的比較中，本實驗的成果與不採用波形資料輔助的成果大致相同本研究的成果在部分植被覆蓋的區域成果稍差，但透過波形過濾，可將幾何過濾所需計算的點雲數減少許多，可以增進整理過濾的效率。本研究的成果與航測相比時，在植被覆蓋區域較航測成果貼近實際的地面起伏，數值高程模型成果較為正確。 / In mountain areas covered with vegetation, discrete airborne laser scanning is an appropriate technique for producing DEMs, because its laser signal is able to reach the ground beneath the vegetation. Once the scanned data were acquired, point cloud filtering was performed at the processing stage based on the geometric relationships between points. With the development of the advanced full-waveform laser scanning system, the additional waveform data have proved useful for improving the performance of point cloud filtering. This research therefore focused on using the waveform data to extract a DEM over a vegetation-covered area.
The amplitude, backscatter cross-section and backscatter cross-section coefficient were the waveform parameters used for the filtering. After an initial waveform analysis was accomplished, an automated method was proposed to determine the threshold range of each parameter that represents ground points. By applying the thresholds, the original point cloud was filtered. A geometric filtering method was then used to eliminate the remaining non-ground points. As a result, the DEM over the target vegetated area was derived. Comparison against a photogrammetric DEM and a DEM derived from the traditional filtering method demonstrated that the quality of the resultant DEM was improved.
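The automated threshold determination can be sketched with Bayes' theorem on one waveform attribute: given class statistics for ground and vegetation returns (assumed Gaussian here), keep the amplitude range where the posterior probability of ground exceeds 0.5. The numbers below are invented for illustration; the thesis estimates the class statistics from the sampled lowest points:

```python
import numpy as np
from scipy.stats import norm

def ground_amplitude_interval(mu_g, sd_g, mu_v, sd_v, prior_g=0.5,
                              lo=0.0, hi=200.0, n=4001):
    # posterior P(ground | amplitude) on a grid via Bayes' theorem,
    # assuming Gaussian class likelihoods; return the amplitude range
    # where ground is the more probable class
    a = np.linspace(lo, hi, n)
    pg = prior_g * norm.pdf(a, mu_g, sd_g)
    pv = (1.0 - prior_g) * norm.pdf(a, mu_v, sd_v)
    post = pg / (pg + pv)
    keep = a[post > 0.5]
    return float(keep.min()), float(keep.max())

# invented class statistics: bright, narrow ground returns versus dimmer,
# more variable vegetation returns
lo_thr, hi_thr = ground_amplitude_interval(120.0, 10.0, 60.0, 15.0)
```

Applying such an interval per attribute (amplitude, cross-section, cross-section coefficient) discards most vegetation returns cheaply, which is how the waveform stage reduces the point count the later geometric filtering must process.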
|