191 |
3D model vybraného objektu / 3D model of the selected objectMrůzek, Tomáš January 2021 (has links)
This diploma thesis describes the creation of a 3D model of two objects using laser scanning. It evaluates the accuracy of several data interpretations: the first two are outputs from the FARO SCENE program, and the remaining interpretations are outputs from the TRIMBLE REALWORKS program. To assess accuracy and fidelity, a precise test field of points previously built in the AdMaS complex was used. The result of the project is a georeferenced 3D model of the two objects together with their surroundings.
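The accuracy assessment against a known test field, as described above, amounts to computing residuals between matched scanned and reference coordinates. A minimal sketch (the coordinates and the RMSE metric here are illustrative assumptions, not the thesis' actual data or procedure):

```python
import numpy as np

def rmse_against_test_field(scanned, reference):
    """Root-mean-square error of scanned 3D points against reference
    test-field coordinates (rows are matched point pairs)."""
    residuals = np.asarray(scanned) - np.asarray(reference)
    per_point = np.linalg.norm(residuals, axis=1)  # 3D distance per pair
    return float(np.sqrt(np.mean(per_point ** 2)))

# Hypothetical matched coordinates (metres): reference vs. scan
reference = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
scanned = reference + np.array([[0.003, -0.002, 0.001],
                                [-0.004, 0.001, 0.002],
                                [0.002, 0.003, -0.001]])
rmse = rmse_against_test_field(scanned, reference)
```

A sub-centimetre RMSE against the test field would indicate the kind of agreement a georeferenced scan-based model is typically judged by.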
|
192 |
6-DOF lokalizace objektů v průmyslových aplikacích / 6-DOF Object Localization in Industrial ApplicationsMacurová, Nela January 2021 (has links)
The aim of this work is to design a method for object localization in a point cloud that estimates the 6D pose of known objects in an industrial bin-picking scene as accurately as possible. The design of the solution is inspired by the PoseCNN network. The solution also includes a scene simulator that generates artificial data; it is used to produce a training data set containing two objects for training a convolutional neural network. The network is tested on annotated real scenes and achieves low success rates: only 23.8 % and 31.6 % for estimating translation and rotation of one type of object, and 12.4 % and 21.6 % for the other, with tolerances for a correct estimate of 5 mm and 15°. However, after refining the estimates with the ICP algorithm, the translation and rotation success rates rise to 81.5 % and 51.8 % for the first object and to 51.9 % and 48.7 % for the second. The contribution of this work is the creation of the generator and the testing of the network's performance on small objects.
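The stated tolerances (5 mm translation, 15° rotation) suggest a correctness check of roughly the following form. This is an illustrative sketch, not the thesis' code; the geodesic rotation-error formula is an assumed choice:

```python
import numpy as np

def pose_within_tolerance(R_est, t_est, R_gt, t_gt,
                          t_tol=0.005, r_tol_deg=15.0):
    """Check a 6-DOF pose estimate against ground truth using the stated
    tolerances: 5 mm translation, 15 deg rotation (metric units assumed)."""
    t_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    # Geodesic rotation error: angle of the relative rotation R_est^T R_gt
    cos_a = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return t_err <= t_tol, r_err <= r_tol_deg

# Example: a 10-degree rotation error and 3 mm offset pass both thresholds
a = np.radians(10.0)
R_small = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0, 0.0, 1.0]])
ok_t, ok_r = pose_within_tolerance(R_small, np.array([0.003, 0.0, 0.0]),
                                   np.eye(3), np.zeros(3))
```

Counting the fraction of test poses for which both checks pass yields success rates of the kind reported above.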
|
193 |
3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data RepresentationsKonradsson, Albin, Bohman, Gustav January 2021 (has links)
This thesis compares instance segmentation methods using point clouds and depth images, specifically their performance on cluttered scenes of irregular objects in an industrial environment. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of data type. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industrial application. Two datasets are generated: one of low complexity, containing only boxes, and one of higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks: PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support that there may be benefits to using a point cloud representation over depth images: PointGroup performs better in terms of the chosen metric on both datasets. On low-complexity scenes, the inference times of the two methods are similar; on higher-complexity scenes, however, Mask R-CNN is significantly faster.
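The comparison relies on rendering the same synthetic scene in multiple representations; a depth map and a point cloud are related by pinhole back-projection, which might be sketched as follows (the intrinsics and depth map are hypothetical, not from the SICK tool):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an N x 3 point cloud
    using a pinhole camera model; zero-depth pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Hypothetical 4x4 depth map of a flat box top 1 m from the camera
depth = np.ones((4, 4))
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Because both representations come from the same scene, segmentation quality can be compared on equal footing regardless of which one a method consumes.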
|
194 |
Zpracování snímků pořízených pomocí UAV / Processing of images taken from UAVPtáček, Ondřej January 2014 (has links)
This diploma thesis deals with the processing and evaluation of pictures taken by unmanned aerial vehicles (UAVs). The introductory part is devoted to the definition, uses, applications, and types of UAVs, especially for photogrammetric purposes. The software equipment is also described, including a description and examples of several types of possible outputs. The measurements, computational work, and elaboration process in the software programs used are then described, and the achieved outputs are presented. In conclusion, an overall evaluation and assessment of the measurement results for the set of points is given.
|
195 |
Detekce objektů na desce pracovního stolu / Tabletop Object DetectionVarga, Tomáš January 2015 (has links)
This work describes tabletop object detection in a point cloud recorded with a Kinect sensor. The designed solution uses the RANSAC algorithm for plane detection, Euclidean clustering for segmentation, and the ICP algorithm for object detection. The ICP algorithm is modified so that it can mainly detect rotationally symmetric objects and objects that are not transformed with respect to their models. The final package is built on the ROS platform. The results achieved on our own dataset are good despite the limited functionality of the detector.
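The first stage of the pipeline above, plane detection with RANSAC, can be sketched in a few lines. This is a generic numpy illustration with synthetic data, not the thesis' ROS-based implementation:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, rng=None):
    """Fit the dominant plane of an N x 3 point cloud with RANSAC.
    Returns (normal, d, inlier_mask) for the plane n . p + d = 0."""
    rng = np.random.default_rng(rng)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:           # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

# Synthetic tabletop: 200 points on z = 0 plus 20 floating outliers
rng = np.random.default_rng(0)
table = np.column_stack([rng.uniform(0.0, 1.0, (200, 2)), np.zeros(200)])
clutter = rng.uniform(0.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 0.5])
normal, d, inliers = ransac_plane(np.vstack([table, clutter]), rng=1)
```

Removing the detected inliers leaves the off-plane points, which a Euclidean clustering step would then group into object candidates.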
|
196 |
Sandhagen 2 : A project about reusing materials as a way to rethink how architecture can be produced.McDavitt Wallin, Frida January 2020 (has links)
In 2020, the meatpacking district of Stockholm (Slakthusområdet) is at the beginning of a period of change. Many of its buildings are being demolished, or at least gutted, to transform a historical industrial area into a more urban district of housing, offices, trade, and services along with new parks and squares (Stockholms Stad, 2020). This thesis project is specifically about the first building torn down as part of the area's development, Sandhagen 2. We should consider our condemned buildings a precious resource and extract from them rather than from the earth. Every house embodies invested energy that is lost the day it is demolished, but beyond those precious resources something else is lost as well. The research aims to highlight the importance of reuse not from the more obvious sustainability point of view, but as something that can be aesthetically motivated. The method involves a dissection of Sandhagen 2, extracting interior architectural elements without excessive alterations and arranging them into an organized taxonomy. The taxonomy is then rearranged into a new spatial composition. How can a space be created from a taxonomy defined by an interior architect? How does a material's earlier life add to or take away from its potential in a future life? The proposal is a strange space where the tension created by reuse lies entirely between the elements themselves, a result of their having to become the conventional parts of architecture that complete a space: steps, something to sit on, floor, partitions.
|
197 |
A SIMULATED POINT CLOUD IMPLEMENTATION OF A MACHINE LEARNING SEGMENTATION AND CLASSIFICATION ALGORITHMJunzhe Shen (8804144) 07 May 2020 (has links)
As buildings have almost reached a saturation point in most developed countries, the management and maintenance of existing buildings have become the major problem of the field. Building Information Modeling (BIM) is the underlying technology for solving this problem: a 3D semantic representation of building construction and facilities that contributes not only to the design phase but also to the construction and maintenance phases, such as life-cycle management and building energy performance measurement. This study focuses on the process of creating as-built BIM models, which are constructed after the design phase. A point cloud, a set of points in 3D space, is an intermediate product of as-built BIM modeling, often acquired by 3D laser scanning and photogrammetry. A raw point cloud typically requires further processing, e.g. registration, segmentation, and classification. For segmentation and classification, machine learning methodologies are trending due to the enhanced speed of computation. However, supervised machine learning requires labelling the training point clouds in advance, which is time-consuming and often leads to inevitable errors. Moreover, due to the complexity and uncertainty of real-world environments, the attributes of one point vary from those of others. These situations make it difficult to analyze how a single attribute contributes to the result of segmentation and classification. This study developed a method of producing point clouds from a fast-generated 3D virtual indoor environment using procedural modeling. The research focused on two attributes of simulated point clouds: point density and the level of random errors. According to Silverman (1986), point density is associated with the point features around each output raster cell: the number of points within a neighborhood divided by the area of the neighborhood. In this study, however, the definition differed slightly: point density was defined as the number of points on a surface divided by the surface area, with units of points per square meter (pts/m²). The research compared the performance of a machine learning segmentation and classification algorithm on ten different point cloud datasets. The mean loss and accuracy of segmentation and classification were analyzed and evaluated to show how point density and the level of random errors affect the performance of the segmentation and classification models. Moreover, real-world point cloud data were used as additional data to evaluate the applicability of the produced models.
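The study's definition of point density, the number of points on a surface divided by the surface area, is straightforward to express; the surface dimensions below are illustrative:

```python
def point_density(n_points, surface_area_m2):
    """Point density as defined in the study: points on a surface
    divided by the surface area, in points per square metre (pts/m^2)."""
    return n_points / surface_area_m2

# e.g. a simulated 3 m x 2 m wall sampled with 12,000 points
density = point_density(12_000, 3.0 * 2.0)
```

Varying this density (and the level of random errors) across simulated datasets is what allows each attribute's effect on segmentation performance to be isolated.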
|
198 |
LiDAR Point Cloud De-noising for Adverse WeatherBergius, Johan, Holmblad, Jesper January 2022 (has links)
Light Detection And Ranging (LiDAR) is a hot topic today, primarily because of its vast importance within autonomous vehicles. LiDAR sensors are capable of capturing and identifying objects in the 3D environment. A drawback of LiDAR sensors, however, is that they perform poorly under adverse weather conditions. Noise present in LiDAR scans can be divided into random and pseudo-random noise. Random noise can be modeled and mitigated by statistical means. The same approach works on pseudo-random noise, but it is less effective; for this, deep neural networks (DNNs) are better suited. The main goal of this thesis is to investigate how snow can be detected in LiDAR point clouds and filtered out. The dataset used is the Winter Adverse Driving dataSet (WADS). Supervised filtering comprises a comparison between statistical filtering and segmentation-based neural networks and is evaluated on recall, precision, and F1. The supervised approach is expanded by investigating an ensemble approach. The supervised results indicate that neural networks have an advantage over statistical filters, and the best result was obtained from the 3D convolution network with an F1 score of 94.58%. Our ensemble approaches improved the F1 score but did not lead to more snow being removed. We determine that an ensemble approach is a sub-optimal way of increasing prediction performance and has the drawback of being more complex. We also investigate an unsupervised approach. The unsupervised networks are evaluated on their ability to find noisy data and correct it: correcting the LiDAR data means predicting new values for detected noise instead of just removing it. The correctness of such predictions is evaluated manually, with the assistance of metrics like PSNR and SSIM. None of the unsupervised networks produced an acceptable result. The reasons behind this negative result are investigated and presented in our conclusion, along with a model that suffers from none of the flaws pointed out.
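The recall/precision/F1 evaluation of a snow filter reduces to counts of true positives, false positives, and false negatives over the labelled points; a sketch with hypothetical counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 for a binary snow/not-snow point filter.
    tp: snow points correctly removed, fp: clean points wrongly removed,
    fn: snow points missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one filtered scan
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
```

F1 balances the two failure modes, which matters here because removing too many clean points degrades the scan as much as leaving snow in it.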
|
199 |
En jämförelsestudie mellan punktmoln framställda med UAS-fotogrammetri och Laserdata NH på ett industriområde i Västsverige / A comparative study of point clouds generated from UAS-photogrammetry and Laserdata NH of industrial area in West SwedenEskina, Ksenija, Watoot, Ali January 2020 (has links)
Framställning av en digital terrängmodell (Digital Terrain Model, DTM) är en viktig del av projekteringsunderlaget vid markrelaterade frågor. Grunden för en DTM är punktmolnet som innehåller grunddata från mätningen. DTM är användbara inom många olika områden; kvaliteten bestäms beroende på vilket uppdrag som DTM:n gäller för. UAS-fotogrammetri är en av de metoder som tillämpas för att framställa en DTM, men det går även att framställa en DTM utifrån punktmoln från Laserdata NH. En DTM är en modell av enbart markytan, där data samlas in genom mätning av ett visst objekt. Syftet med detta examensarbete, som är utfört vid Institutionen för ingenjörsvetenskap vid Högskolan Väst, var att jämföra två olika metoder för framställning av ett punktmoln som är underlag för en DTM: ett punktmoln som framställs med egna mätningar från UAS-fotogrammetri och ett färdigt punktmoln från Laserdata NH. Målet med jämförelsen är att undersöka om det går att ersätta UAS-fotogrammetri med den kostnadseffektiva Laserdata NH i projektet för ett industriområde (Lödöse varvet) i Lilla Edets kommun, samt om det går att ersätta den överlag. Med hjälp av programvaran Agisoft Metashape framställdes punktmolnet från UAS-mätning med modellen DJI Phantom 4 Advanced, och sedan jämfördes det mot det färdiga punktmolnet från Laserdata NH i programmet CloudCompare. Resultatet av denna studie visar att det går att ersätta UAS-fotogrammetri med Laserdata NH i just detta och andra liknande projekt som har samma syfte och en viss bestämd noggrannhet, då punktmolnen inte avviker signifikant från varandra. Däremot går det inte att ersätta dem mot varandra överlag, då UAS-fotogrammetri ger högre noggrannhet vid framställning av ett punktmoln jämfört med den noggrannhet som Laserdata NH har i sina mätningar. / Generation of a Digital Terrain Model (DTM) is an essential part of the planning basis in questions related to spatial planning. The basis for a DTM is the point cloud, which contains the initial data from the measurement. 
DTMs can be used in many different areas; the accepted quality level depends on the assignment for which the DTM is produced. UAS photogrammetry is one of the methods used for DTM generation, but it is also possible to produce a DTM from a point cloud originating from Laserdata NH. A DTM is a model representing only the terrain surface, where the data used for its generation is gathered by measuring a certain object. The purpose of this study, carried out at the Department of Engineering Science at University West, was to compare two different methods of point cloud generation as a basis for a DTM: a point cloud generated from our own measurements with UAS photogrammetry, and a ready-made point cloud from Laserdata NH. The goal of the comparison is to examine whether it is possible to replace UAS photogrammetry with the cost-effective Laserdata NH in the project for the industrial area (Lödöse varvet) in Lilla Edet municipality, and whether it is possible to replace it in general. With the help of the Agisoft Metashape software, the point cloud from the UAS measurement with a DJI Phantom 4 Advanced was generated and then compared to the Laserdata NH point cloud in the CloudCompare program. The results of this study show that it is possible to replace UAS photogrammetry with Laserdata NH in this specific project and in other similar projects with the same purpose and a certain required accuracy, since the point clouds do not deviate significantly from each other. It is, however, not possible to replace them in general, as UAS photogrammetry achieves higher precision in point cloud generation than Laserdata NH does in its measurements.
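The point-cloud comparison performed in CloudCompare is essentially a cloud-to-cloud nearest-neighbour distance computation. A brute-force numpy sketch with hypothetical grids (CloudCompare itself uses accelerated spatial structures such as octrees):

```python
import numpy as np

def cloud_to_cloud_distances(compared, reference):
    """Distance from each point in `compared` to its nearest neighbour
    in `reference`, as in a CloudCompare-style C2C comparison. Brute
    force; a KD-tree would be used for clouds of realistic size."""
    diff = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

# Hypothetical check: a 3 x 3 grid shifted 2 cm vertically
reference = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
shifted = reference + np.array([0.0, 0.0, 0.02])
d = cloud_to_cloud_distances(shifted, reference)
```

Summary statistics of these distances (mean, standard deviation) are what supports a claim that two point clouds do or do not deviate significantly from each other.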
|
200 |
Evaluation of Monocular Visual SLAM Methods on UAV Imagery to Reconstruct 3D TerrainJohansson, Fredrik, Svensson, Samuel January 2021 (has links)
When reconstructing the Earth in 3D, the imagery can come from various mediums, including satellites, planes, and drones. One significant benefit of utilizing drones in combination with a Visual Simultaneous Localization and Mapping (V-SLAM) system is that specific areas of the world can be accurately mapped in real time at a low cost. Drones can essentially be equipped with any camera sensor, but most commercially available drones use a monocular rolling shutter camera sensor. Therefore, on behalf of Maxar Technologies, multiple monocular V-SLAM systems were studied during this thesis, and ORB-SLAM3 and LDSO were selected for further evaluation. In order to provide accurate and reproducible results, the methods were benchmarked on the public datasets EuRoC MAV and TUM monoVO, which include drone imagery and outdoor sequences, respectively. A third dataset was collected with a DJI Mavic 2 Enterprise Dual drone to evaluate how the methods would perform with a consumer-friendly drone. The datasets were used to evaluate the two V-SLAM systems with regard to the generated 3D map (point cloud) and the estimated camera trajectory. The results showed that ORB-SLAM3 is less affected by the artifacts caused by a rolling shutter camera sensor than LDSO. However, ORB-SLAM3 generates a sparse point cloud in which depth perception can be challenging, since it abstracts the images using feature descriptors. In comparison, LDSO produces a semi-dense 3D map where each point includes the pixel intensity, which improves depth perception. Furthermore, LDSO is more suitable for dark environments and low-texture surfaces. Depending on the use case, either method can be used as long as the required prerequisites are met. In conclusion, monocular V-SLAM systems are highly dependent on the type of sensor being used. The differences in accuracy and robustness between systems using a global shutter and a rolling shutter are significant, as the geometric artifacts caused by a rolling shutter are devastating for a purely visual pipeline. / The thesis was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
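The rolling shutter artifacts discussed above arise because each image row is exposed slightly later than the one before it, so a moving target is smeared across rows. A first-order sketch of the resulting horizontal skew (all numbers illustrative, not from the thesis):

```python
import numpy as np

def rolling_shutter_skew(velocity_px_s, readout_s_per_row, n_rows):
    """Horizontal skew (pixels) per image row for a target moving at a
    constant image-space velocity under a rolling shutter: row r is
    exposed at t = r * readout, so its content shifts by v * r * readout."""
    rows = np.arange(n_rows)
    return velocity_px_s * rows * readout_s_per_row

# e.g. a target moving 1000 px/s, 30 microsecond row readout, 480 rows
skew = rolling_shutter_skew(1000.0, 30e-6, 480)
```

A skew of several pixels between the top and bottom rows is enough to bias feature positions, which is one way such artifacts propagate into a purely visual pipeline.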
|