1 |
Elasticity Parameter Estimation in a Simple Measurement Setup Tekieh, Motahareh 19 September 2013 (has links)
Elastic deformation has wide applications in medical simulations; therefore, when designing the physical behavior of objects for realistic training applications, determining material parameters so that objects behave in a desired way becomes crucial. In this work we consider the problem of elasticity parameter estimation for deformable bodies, which is important for accurate medical simulations.
Our work has two major steps: the first step is the data acquisition and preparation, and the second step is the parameter estimation. The experimental setup for data acquisition consists of depth and force sensors. Surface deformations are acquired as a series of images along with the corresponding applied forces. The contact point of the force sensor on the surface is found visually and the corresponding applied forces are acquired directly from the force sensor. A complete mesh deformation which is obtained from a surface tracking method is employed along with force measurements in the elasticity parameter estimation method.
Our approach to estimating the physical material properties is based on an inverse linear finite element method. We experimented with two approaches to optimizing the elasticity parameters: a linear iterative method and a force-displacement error minimization method. Both methods were tested on simulation data, and the second method was additionally tested on three deformable objects. Results are presented for data captured by two different depth sensors. The outcome is a set of two parameters, Young's modulus and Poisson's ratio, which together characterize the stiffness of the object under test.
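To make the second optimization approach concrete, the sketch below sets up a force-displacement error minimization with scipy, using a toy axial-rod model in place of the thesis's linear finite element forward solver; the geometry, noise level, and parameter values are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: estimate (Young's modulus, Poisson's ratio) by minimizing the
# force-displacement error. A toy axial-rod model stands in for the full FEM
# forward model; all constants here are illustrative.
import numpy as np
from scipy.optimize import least_squares

L, A, R = 0.1, 1e-4, 0.01   # rod length (m), cross-section area (m^2), radius (m)

def forward(young, poisson, forces):
    """Toy linear-elastic forward model: axial and lateral tip displacements."""
    axial_strain = forces / (A * young)
    return np.column_stack([axial_strain * L,             # axial displacement
                            -poisson * axial_strain * R])  # lateral contraction

def residuals(params, forces, measured):
    return (forward(params[0], params[1], forces) - measured).ravel()

# Synthetic "measurements" generated from known parameters plus sensor noise.
rng = np.random.default_rng(0)
true_E, true_nu = 2.0e5, 0.45                 # a soft, nearly incompressible material
forces = np.linspace(0.5, 5.0, 10)            # applied forces (N)
measured = forward(true_E, true_nu, forces)
measured += rng.normal(scale=1e-6, size=measured.shape)

fit = least_squares(residuals, x0=[1e5, 0.3],
                    bounds=([1e3, 0.0], [1e9, 0.499]),    # keep nu physical
                    args=(forces, measured))
E_est, nu_est = fit.x
print(f"estimated E = {E_est:.3e} Pa, nu = {nu_est:.3f}")
```

The same residual structure applies when the toy model is replaced by an actual linear FEM solve over the tracked mesh.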
|
3 |
Pointwise and Instance Segmentation for 3D Point Cloud Gujar, Sanket 11 April 2019 (has links)
The camera is the cheapest and most computationally tractable real-time option for detecting or segmenting the environment for an autonomous vehicle, but it provides no depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. The risk of an accident is higher for autonomous cars driven by a camera alone in such situations. The industry has relied on LiDAR for the past decade to address this problem and provide depth information about the environment, but LiDAR has its own shortcomings. Common industry methods project the point cloud to an image and run detection and localization networks on the projection, but LiDAR sees obscurants in bad weather and is sensitive enough to detect snow, which undermines the robustness of projection-based methods. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds focused on the self-driving application. The model depends only on LiDAR data, making it invariant to lighting and overcoming the camera's shortcomings in the perception stack. The pipeline takes advantage of both global and local/edge features of points in the point cloud to generate high-level features. We also propose Pointer-CapsNet, an extension of CapsNet to small 3D point clouds.
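As a rough illustration of combining per-point (local) and scene-level (global) features for pointwise segmentation, here is a minimal PointNet-style sketch; it is not the architecture proposed in the thesis, and all layer sizes are arbitrary.

```python
# Illustrative fusion of local per-point features with a max-pooled global
# feature before a pointwise classification head (not the thesis model).
import torch
import torch.nn as nn

class PointwiseSegHead(nn.Module):
    def __init__(self, in_dim=3, feat_dim=64, global_dim=256, num_classes=5):
        super().__init__()
        self.local_mlp = nn.Sequential(
            nn.Conv1d(in_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU())
        self.global_mlp = nn.Conv1d(feat_dim, global_dim, 1)
        self.head = nn.Conv1d(feat_dim + global_dim, num_classes, 1)

    def forward(self, xyz):                              # xyz: (B, 3, N)
        local = self.local_mlp(xyz)                      # per-point features
        glob = self.global_mlp(local).max(dim=2, keepdim=True).values
        glob = glob.expand(-1, -1, xyz.shape[2])         # broadcast to every point
        return self.head(torch.cat([local, glob], dim=1))  # (B, num_classes, N)

logits = PointwiseSegHead()(torch.randn(2, 3, 1024))     # per-point class logits
```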
|
4 |
Evaluating the quality of ground surfaces generated from Terrestrial Laser Scanning (TLS) data Sun, Yanshen 24 June 2019 (has links)
Researchers and GIS analysts have used Aerial Laser Scanning (ALS) data to generate Digital Terrain Models (DTMs) since the 1990s, and various algorithms for ground point extraction have been proposed based on the characteristics of ALS data. However, Terrestrial Laser Scanning (TLS) data, which might better capture ground morphological features under dense tree canopies and are more accessible for small areas, have long been ignored. In this research, the aim was to evaluate whether TLS data are as well suited as ALS data to serve as the source of a DTM. Achieving this goal involved three steps: acquiring and aligning ALS and TLS data of the same region, applying ground filters to both data sets, and comparing the results.
Our research area was a 100 m by 140 m region of grass, weeds and small trees along Strouble's Creek on the Virginia Tech campus. Four popular ground filter tools (ArcGIS, LASTools, PDAL, MCC) were applied to both the ALS and TLS data. The output ground point clouds were then compared with a DTM generated from ALS data of the same region. For all four ground filter tools employed in this research, the distances from TLS ground points to the ALS ground surface were no more than 0.06 m, with standard deviations less than 0.3 m. The results indicate that the differences between the ground extracted from TLS and that extracted from ALS are subtle. The conclusion is that Digital Terrain Models (DTMs) generated from TLS data are valid. / Master of Science / Elevation is one of the most basic kinds of data for research such as flood prediction and land planning in geography, agriculture, forestry, and related fields. The most common elevation data that can be downloaded from the internet are acquired from field measurements or satellites. However, the finest resolution of that kind of data is 1/3 m, and errors can be introduced by ground objects such as trees and buildings. To acquire more accurate, ground-only elevation data (also called Digital Terrain Models, DTMs), researchers and GIS analysts have introduced laser scanners for small-area geographical research. For land surface data collection, researchers usually fly a drone carrying a laser scanner (ALS) to capture the surface beneath it, although the view can be blocked by ground objects. An alternative is to place the laser scanner on a tripod on the ground (TLS), which provides more data on ground morphological features under dense tree canopies and better precision. Because ALS and TLS collect data from different perspectives, their coverage of a ground area can differ. As most ground extraction algorithms were designed for ALS data, their performance on TLS data has not been fully tested yet. Our research area was a 100 m by 140 m region of grass, weeds and small trees along Strouble's Creek on the Virginia Tech campus. Four popular ground filter tools (ArcGIS, LASTools, PDAL, MCC) were applied to both the ALS and TLS data. The output ground point clouds were then compared with a ground surface generated from ALS data of the same region. For all four ground filter tools employed in this research, the distances from TLS ground points to the ALS ground surface were no more than 0.06 m, with standard deviations less than 0.3 m. The results indicate that the differences between the ground extracted from TLS and that extracted from ALS are subtle. The conclusion is that Digital Terrain Models (DTMs) generated from TLS data are valid.
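The comparison step can be approximated with a nearest-neighbour (cloud-to-cloud) distance, a simple stand-in for a true point-to-surface distance; the sketch below uses synthetic points over a 100 m by 140 m extent rather than the actual ALS/TLS data.

```python
# Hedged sketch: nearest-neighbour distances from TLS ground points to the
# ALS-derived ground, summarized by mean and standard deviation.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
als_ground = rng.uniform([0, 0, 0], [100, 140, 0.2], size=(50_000, 3))  # synthetic
tls_ground = rng.uniform([0, 0, 0], [100, 140, 0.2], size=(80_000, 3))  # synthetic

dist, _ = cKDTree(als_ground).query(tls_ground, k=1)  # closest ALS point per TLS point
print(f"mean = {dist.mean():.3f} m, std = {dist.std():.3f} m")
```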
|
5 |
Recognition and Registration of 3D Models in Depth Sensor Data Grankvist, Ola January 2016 (has links)
Object recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used, which has the benefit that the 3D pose of the object can be estimated. This has applications in, for example, automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system uses several depth images rendered from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is its comparison of two different types of descriptors, local and global, which has received little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface, whereas local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, ICP is shown to be the most accurate and point-to-plane ICP the most robust.
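For flavor, here is a bare-bones point-to-point ICP refinement loop (nearest-neighbour correspondences plus an SVD-based rigid alignment); the point-to-plane variant evaluated in the thesis would replace the closed-form step with a least-squares solve involving surface normals, and this sketch is not the thesis implementation.

```python
# Minimal point-to-point ICP sketch for pose refinement (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp_point_to_point(model, scene, iters=30):
    current = model.copy()
    tree = cKDTree(scene)
    for _ in range(iters):
        _, idx = tree.query(current)                      # closest scene point
        R, t = best_rigid_transform(current, scene[idx])
        current = current @ R.T + t                       # apply incremental pose
    return current

# toy usage: undo a small rigid offset of a random cloud
rng = np.random.default_rng(0)
scene = rng.random((500, 3))
aligned = icp_point_to_point(scene + np.array([0.02, -0.01, 0.03]), scene)
```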
|
6 |
Characterizing the Geometry of a Random Point Cloud Unknown Date (has links)
This thesis is composed of three main parts. Each chapter is concerned with characterizing some properties of a random ensemble or stochastic process. The properties of interest and the methods for investigating them differ between chapters.

We begin by establishing some asymptotic results regarding zeros of random harmonic mappings, a topic of much interest to mathematicians and astrophysicists alike. We introduce a new model of harmonic polynomials based on the so-called "Weyl ensemble" of random analytic polynomials. Building on the work of Li and Wei [28] we obtain precise asymptotics for the average number of zeros of this model. The primary tools used in this section are the famous Kac-Rice formula as well as classical methods in the asymptotic analysis of integrals such as the Laplace method.

Continuing, we characterize several topological properties of this model of harmonic polynomials. In chapter 3 we obtain experimental results concerning the number of connected components of the orientation-reversing region as well as the geometry of the distribution of zeros. The tools used in this section are primarily Monte Carlo estimation and topological data analysis (persistent homology). Simulations in this section are performed within MATLAB with the help of a computational homology software package known as Perseus. While the results in this chapter are empirical rather than formal proofs, they lead to several enticing conjectures and open problems.

Finally, in chapter 4 we address an industry problem in applied mathematics and machine learning. The analysis in this chapter implements similar techniques to those used in chapter 3. We analyze data obtained by observing CAN traffic. CAN (Controller Area Network) is a network that allows micro-controllers inside vehicles to communicate with each other. We propose and demonstrate the effectiveness of an algorithm for detecting malicious traffic using an approach that discovers and exploits the natural geometry of the CAN surface and its relationship to random walk Markov chains. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
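The Monte Carlo flavor of chapter 3 can be illustrated with a small experiment on the analytic Weyl ensemble mentioned above; the degree, trial count, and counting region below are arbitrary choices for illustration, and the harmonic case actually studied in the thesis requires considerably more machinery.

```python
# Illustrative Monte Carlo estimate of the expected number of zeros of a
# degree-n Weyl-ensemble polynomial sum_k a_k z^k / sqrt(k!) inside a disk
# (a_k i.i.d. complex Gaussian). Not the author's code.
import math
import numpy as np

def weyl_zeros_in_disk(n, trials=200, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    scale = np.array([math.sqrt(math.factorial(k)) for k in range(n + 1)])
    counts = []
    for _ in range(trials):
        a = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
        roots = np.roots((a / scale)[::-1])   # np.roots wants highest degree first
        counts.append(np.count_nonzero(np.abs(roots) <= radius))
    return float(np.mean(counts))

print(weyl_zeros_in_disk(20))   # Monte Carlo average zero count in the unit disk
```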
|
7 |
A Look Into Human Brain Activity with EEG Data-Surface Reconstruction Pothayath, Naveen 23 April 2018 (has links)
EEG has been used to explore the electrical activity of the brain for many decades. During that time, different components of the EEG signal have been isolated, characterized, and associated with a variety of brain activities. However, no widely accepted model characterizing the spatio-temporal structure of the full-brain EEG signal exists to date. Modeling the spatio-temporal nature of the EEG signal is a daunting task. The spatial component of EEG is defined by the locations of recording electrodes (ranging between 2 and 256 in number) placed on the scalp, while its temporal component is defined by the electrical potentials the electrodes detect. The EEG signal is generated by the composite electrical activity of large neuron assemblies in the brain. These neuronal units often perform independent tasks, giving the EEG signal a highly dynamic and non-linear character. These characteristics make the raw EEG signal challenging to work with. Thus, most research focuses on extracting and isolating targeted spatial and temporal components of interest. While component isolation strategies like independent component analysis are useful, their effectiveness is limited by noise contamination and poor reproducibility. These drawbacks to feature extraction could be improved significantly if they were informed by a global spatio-temporal model of EEG data. The aim of this thesis is to introduce a novel data-surface reconstruction (DSR) technique for EEG which can model the integrated spatio-temporal structure of EEG data. To produce physically intuitive results, we utilize a hyper-coordinate transformation which integrates both spatial and temporal information of the EEG signal into a unified coordinate system. We then apply a non-uniform rational B-spline (NURBS) fitting technique which minimizes the point distance from the computed surface to each element of the transformed data. To validate the effectiveness of this proposed method, we conduct an evaluation using a 5-state classification problem, with 1 baseline and 4 meditation states, comparing the classification accuracy of the raw EEG data versus the surface-reconstructed data in the broadband range and in the alpha, beta, delta, gamma, and higher-gamma frequency bands. Results demonstrate that the fitted data consistently outperform the raw data in the broadband spectrum and in all frequency bands.
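As a rough sketch of the surface-fitting idea, the example below fits a smoothing bivariate B-spline to synthetic (electrode, time, potential) samples; scipy's bisplrep is only a stand-in for the NURBS fitting used in the thesis, and the flattened coordinate layout is an assumption rather than the hyper-coordinate transformation described above.

```python
# Hedged sketch: fit a smooth data surface to scattered (space, time, potential)
# samples and evaluate it on a regular grid. Synthetic toy signal, not EEG.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(0)
elec = np.repeat(np.linspace(0, 1, 32), 100)        # flattened electrode axis
time = np.tile(np.linspace(0, 1, 100), 32)          # flattened time axis
potential = (np.sin(2 * np.pi * 10 * time) * np.exp(-elec)
             + 0.1 * rng.standard_normal(elec.size))  # toy "potential" + noise

tck = bisplrep(elec, time, potential, s=elec.size)  # smoothing B-spline surface
fitted = bisplev(np.linspace(0, 1, 32), np.linspace(0, 1, 100), tck)
print(fitted.shape)                                 # (32, 100) reconstructed surface
```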
|
8 |
Automated generation of geometric digital twins of existing reinforced concrete bridges Lu, Ruodan January 2019 (has links)
The cost and effort of modelling existing bridges from point clouds currently outweighs the perceived benefits of the resulting model. The time required for generating a geometric Bridge Information Model, a holistic data model which has recently become known as a "Digital Twin", of an existing bridge from Point Cloud Data is roughly ten times greater than laser scanning it. There is a pressing need to automate this process. This is particularly true for the highway infrastructure sector because Bridge Digital Twin Generation is an efficient means for documenting bridge condition data. Based on a two-year inspection cycle, there is a need for at least 315,000 bridge inspections per annum across the United States and the United Kingdom. This explains why there is a huge market demand for less labour-intensive bridge documentation techniques that can efficiently boost bridge management productivity. Previous research has achieved the automatic generation of surface primitives combined with rule-based classification to create labelled cuboids and cylinders from point clouds. While existing methods work well on synthetic datasets or simplified cases, they encounter huge challenges when dealing with real-world bridge point clouds, which are often unevenly distributed and suffer from occlusions. In addition, real bridge topology is much more complicated than idealized cases. Real bridge geometries are defined with curved horizontal alignments, and varying vertical elevations and cross-sections. These characteristics increase the modelling difficulty, and none of the existing methods can handle them reliably. The objective of this PhD research is to devise, implement, and benchmark a novel framework that can reasonably generate labelled geometric object models of constructed bridges comprising concrete elements in an established data format (i.e. Industry Foundation Classes). This objective is achieved by answering the following research questions: (1) how to effectively detect reinforced concrete bridge components in Point Cloud Data, and (2) how to effectively fit 3D solid models in the Industry Foundation Classes format to the detected point clusters? The proposed framework employs bridge engineering knowledge that mimics the intelligence of human modellers to detect and model reinforced concrete bridge objects in point clouds. This framework directly extracts structural bridge components and then models them without generating low-level shape primitives. Experimental results suggest that the proposed framework performs quickly and reliably on complex and incomplete real-world bridge point clouds that contain occlusions and unevenly distributed points. The results of experiments on ten real-world bridge point clouds indicate that the framework achieves an overall micro-average detection F1-score of 98.4%, an average modelling accuracy, measured as the mean cloud-to-cloud (C2C) distance of the automatically generated models, of 7.05 cm, and an average modelling time of merely 37.8 seconds. Compared to the laborious and time-consuming manual practice, the proposed framework can realize a direct time-savings of 95.8%. This is the first framework of its kind to achieve such high and reliable performance in geometric digital twin generation of existing bridges. Contributions. This PhD research provides the unprecedented ability to rapidly model geometric bridge concrete elements, based on quantitative measurements. This is a huge leap over the current practice of Bridge Digital Twin Generation, which performs this operation manually.
The presented research activities will create the foundations for generating meaningful digital twins of existing bridges that can be used over the whole lifecycle of a bridge. As a result, the knowledge created in this PhD research will enable the future development of novel, automated applications for real-time condition assessment and retrofit engineering.
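For reference, the micro-averaged detection F1-score quoted above pools true positives, false positives, and false negatives across all component classes before computing precision and recall; a minimal sketch with hypothetical per-class counts is shown below.

```python
# Micro-average F1 sketch (synthetic counts; class names are illustrative).
import numpy as np

def micro_f1(tp, fp, fn):
    tp, fp, fn = (np.sum(x) for x in (tp, fp, fn))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts for, e.g., (pier, deck slab, girder) detections
print(micro_f1(tp=[40, 25, 30], fp=[1, 2, 0], fn=[0, 1, 1]))
```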
|
9 |
Applications of Graph Convolutional Networks and DeepGCNs in Point Cloud Part Segmentation and Upsampling Abualshour, Abdulellah 18 April 2020 (has links)
Graph convolutional networks (GCNs) have shown promising results in learning from point cloud data. Applications of GCNs include point cloud classification, point cloud segmentation, point cloud upsampling, and more. Recently, the introduction of Deep Graph Convolutional Networks (DeepGCNs) allowed GCNs to go deeper, resulting in better graph learning while avoiding the vanishing gradient problem. By adapting impactful methods from convolutional neural networks (CNNs), such as residual connections, dense connections, and dilated convolutions, DeepGCNs allow GCNs to learn better from non-Euclidean data. In addition, deep learning methods have proved very effective for point cloud upsampling. Unlike traditional optimization-based methods, deep learning-based methods for point cloud upsampling rely on neither priors nor hand-crafted features to learn how to upsample point clouds. In this thesis, I discuss the impact and show the performance of DeepGCNs in point cloud part segmentation on the PartNet dataset. I also illustrate the significance of using GCNs as upsampling modules in point cloud upsampling by introducing two novel upsampling modules: Multi-branch GCN and Clone GCN. I show quantitatively and qualitatively the performance of our novel and versatile upsampling modules when evaluated on a newly proposed standardized dataset, PU600, which is the largest and most diverse point cloud upsampling dataset currently in the literature.
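To illustrate the residual-connection idea that lets GCNs go deep, here is a minimal k-NN EdgeConv-style block with a residual skip; it is a generic sketch in the spirit of the DeepGCN literature, not the Multi-branch GCN or Clone GCN modules proposed in the thesis, and the feature sizes are arbitrary.

```python
# Generic residual graph-convolution block over a k-NN graph (illustrative).
import torch
import torch.nn as nn

def knn_indices(x, k):                      # x: (B, N, C)
    d = torch.cdist(x, x)                   # pairwise distances
    return d.topk(k + 1, largest=False).indices[..., 1:]   # drop self index

class ResidualEdgeConv(nn.Module):
    def __init__(self, channels, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * channels, channels), nn.ReLU(),
                                 nn.Linear(channels, channels))

    def forward(self, x):                   # x: (B, N, C)
        idx = knn_indices(x, self.k)        # (B, N, k)
        gathered = torch.gather(
            x.unsqueeze(1).expand(-1, x.shape[1], -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.shape[2]))   # neighbor features
        center = x.unsqueeze(2).expand_as(gathered)
        edge = torch.cat([center, gathered - center], dim=-1)   # edge features
        return x + self.mlp(edge).max(dim=2).values             # residual + max-pool

out = ResidualEdgeConv(32)(torch.randn(2, 1024, 32))             # (2, 1024, 32)
```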
|
10 |
A Lennard-Jones Layer for Distribution Normalization Na, Mulun 11 May 2023 (has links)
We introduce a Lennard-Jones layer (LJL) to equalize the density across the distribution of 2D and 3D point clouds by systematically rearranging points without destroying their overall structure (distribution normalization). LJL simulates a dissipative process of repulsive and weakly attractive interactions between individual points by solely considering the nearest neighbor of each point at a given moment in time. This pushes the particles into a potential valley, reaching a well-defined stable configuration that approximates an equidistant sampling after the stabilization process. We apply LJLs to redistribute randomly generated point clouds into a randomized uniform distribution over the 2D Euclidean plane and 3D mesh surfaces. Moreover, LJLs are embedded in point cloud generative network architectures by adding them at later stages of the inference process. The improvements coming with LJLs for generating 3D point clouds are evaluated qualitatively and quantitatively. Finally, we apply LJLs to improve the point distribution of a score-based 3D point cloud denoising network. In general, we demonstrate that LJLs are effective for distribution normalization which can be applied at negligible cost without retraining the given neural networks.
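A minimal sketch of the nearest-neighbour update described above is given below; the Lennard-Jones parameters, step size, clipping, and iteration count are illustrative assumptions rather than the paper's actual scheme, and boundary handling is omitted.

```python
# Hedged sketch of a nearest-neighbour Lennard-Jones relaxation on a 2D cloud.
import numpy as np
from scipy.spatial import cKDTree

def lj_force(r, epsilon=1.0, sigma=0.02):
    """Lennard-Jones force magnitude: repulsive below ~1.12*sigma, weakly
    attractive beyond."""
    return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

def lj_layer(points, steps=50, dt=1e-5, sigma=0.02, max_step=0.005):
    pts = points.copy()
    for _ in range(steps):
        d, idx = cKDTree(pts).query(pts, k=2)          # self + nearest neighbour
        r = np.maximum(d[:, 1:2], 1e-6)                # avoid division by zero
        direction = (pts - pts[idx[:, 1]]) / r         # unit vector away from neighbour
        step = np.clip(dt * lj_force(r, sigma=sigma), -max_step, max_step)
        pts += step * direction                        # overdamped (dissipative) update
    return pts

rng = np.random.default_rng(0)
redistributed = lj_layer(rng.random((2000, 2)))        # density becomes more even
```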
|