<p>As building construction approaches a saturation point in most developed countries, the management and maintenance of existing buildings have become a major challenge for the field. Building Information Modeling (BIM) is a key underlying technology for addressing this challenge. A BIM model is a 3D semantic representation of a building's construction and facilities that supports not only the design phase but also the construction and maintenance phases, including life-cycle management and building energy performance measurement. This study focuses on the process of creating as-built BIM models, which are constructed after the design phase. A point cloud, a set of points in 3D space, is an intermediate product of as-built BIM modeling and is typically acquired by 3D laser scanning or photogrammetry. A raw point cloud usually requires further processing, such as registration, segmentation, and classification. For segmentation and classification,
machine learning methods have become increasingly popular as computational speed has improved. However, supervised machine learning requires labeling the training point clouds in advance, which is time-consuming and prone to error. Moreover, because of the complexity and uncertainty of real-world environments, the attributes of individual points vary widely, making it difficult to analyze how any single attribute contributes to the segmentation and classification results. This study
developed a method for producing point clouds from rapidly generated 3D virtual indoor environments using procedural modeling. The research focused on two attributes of the simulated point clouds: point density and the level of random error (see the sketch after this abstract). According to Silverman (1986), point density is associated with the point features around each output raster cell and is computed as the number of points within a neighborhood divided by the area of that neighborhood. In this study, the definition differs slightly: point density is defined as the number of points on a surface divided by the surface area, expressed in points per square meter (pts/m<sup>2</sup>). This research
compared the performance of a machine learning segmentation and classification algorithm on ten point cloud datasets. The mean loss and accuracy of segmentation and classification were analyzed to show how point density and the level of random error affect the performance of the segmentation and classification models. In addition, real-world point cloud data were used to evaluate the applicability of the resulting models.</p>
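<p>To make the two simulated attributes concrete, the following is a minimal sketch (not taken from the thesis) of how points could be sampled from a planar surface at a target point density and then perturbed with Gaussian random error. The function names, the uniform sampling strategy, and the Gaussian noise model are illustrative assumptions, not the author's implementation.</p>
<pre>
import numpy as np

def sample_plane(width, height, density):
    """Sample points uniformly on a width x height plane (z = 0) so that
    the expected point density is `density` pts/m^2."""
    n_points = int(round(density * width * height))
    xy = np.random.uniform([0.0, 0.0], [width, height], size=(n_points, 2))
    return np.column_stack([xy, np.zeros(n_points)])

def add_random_error(points, sigma):
    """Perturb each coordinate with zero-mean Gaussian noise of standard
    deviation `sigma` (metres) to mimic scanner measurement error."""
    return points + np.random.normal(0.0, sigma, size=points.shape)

def point_density(points, surface_area):
    """Point density as defined in the abstract: the number of points on a
    surface divided by the surface area (pts/m^2)."""
    return len(points) / surface_area

# Example: a 4 m x 3 m wall sampled at 500 pts/m^2 with 5 mm random error.
wall = sample_plane(4.0, 3.0, density=500.0)
noisy_wall = add_random_error(wall, sigma=0.005)
print(point_density(noisy_wall, surface_area=4.0 * 3.0))  # ~500 pts/m^2
</pre>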
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/12254093 |
Date | 07 May 2020 |
Creators | Junzhe Shen (8804144) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/A_SIMULATED_POINT_CLOUD_IMPLEMENTATION_OF_A_MACHINE_LEARNING_SEGMENTATION_AND_CLASSIFICATION_ALGORITHM/12254093 |