1 |
Efficient rendering of real-world environments in a virtual reality application, using segmented multi-resolution meshes. Chiromo, Tanaka Alois, January 2020.
Virtual reality (VR) applications are becoming increasingly popular and are being used in a growing range of domains. They can simulate large real-world landscapes in a computer program for purposes such as entertainment, education or business.
Typically, 3-dimensional (3D) and VR applications use environments that are made up of meshes of relatively small size. As the size of the meshes increases, the applications begin to lag and run into run-time memory errors, so uploading large meshes into a VR application directly is inefficient. Manually modelling an accurate real-world environment can also be a complicated task, due to the large size and complex nature of the landscapes. In this research, a method is proposed to automatically convert 3D point clouds of any size and complexity into a format that can be efficiently rendered in a VR application. Apart from reducing the performance cost, the solution also reduces the risk of virtual-reality-induced motion sickness.
The pipeline of the system incorporates three main steps: a surface reconstruction step, a texturing step and a segmentation step. The surface reconstruction step is necessary to convert the 3D point clouds into 3D triangulated meshes. Texturing is required to give the meshes a realistic appearance. Segmentation is used to split large meshes into smaller components that can be rendered individually without overflowing the memory.
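The dissertation does not specify here which technique or library implements the reconstruction step; purely as an illustration of the point-cloud-to-mesh conversion, the sketch below uses Poisson surface reconstruction from the open-source Open3D library, with placeholder file names and parameters.

```python
# Illustrative sketch only: one way to turn a 3D point cloud into a triangulated mesh.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # "scan.ply" is a placeholder input file
pcd.estimate_normals()                      # Poisson reconstruction needs oriented normals

# Reconstruct a triangulated surface; 'depth' trades geometric detail against memory use.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Optionally drop vertices supported by very few input points (often noisy fringe geometry).
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.01))
o3d.io.write_triangle_mesh("mesh.ply", mesh)   # placeholder output file
```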
A novel mesh segmentation algorithm, the Triangle Pool Algorithm (TPA), is designed to segment the mesh into smaller parts. To avoid relying on the complex geometric and surface features of natural scenes, the TPA algorithm uses the colour attribute of the scenes for segmentation. The TPA algorithm produces results comparable to those of state-of-the-art 3D segmentation algorithms when segmenting regular 3D objects, and outperforms them when segmenting meshes of real-world natural landscapes.
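The TPA itself is not reproduced here; the sketch below only illustrates the general idea it builds on, namely grouping 3D geometry by colour similarity, using a simple region-growing pass over a k-nearest-neighbour graph (the threshold and the synthetic two-colour cloud are assumptions for illustration).

```python
# Generic colour-based region growing on a point set (not the Triangle Pool Algorithm):
# flood-fill over a k-nearest-neighbour graph, merging points whose colours are close.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def colour_region_grow(points, colours, k=8, colour_thresh=0.1):
    """Label each point with a segment id; colours are assumed to be RGB values in [0, 1]."""
    tree = cKDTree(points)
    _, neighbours = tree.query(points, k=k + 1)      # first neighbour is the point itself
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in neighbours[i, 1:]:
                if labels[j] == -1 and np.linalg.norm(colours[i] - colours[j]) < colour_thresh:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# Example on a synthetic two-colour cloud ("grass" vs "soil"):
pts = np.random.rand(1000, 3)
cols = np.where(pts[:, [0]] < 0.5, [0.2, 0.8, 0.2], [0.6, 0.4, 0.1])
print(np.unique(colour_region_grow(pts, cols)).size, "segments")
```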
The VR application is designed using the Unreal and Unity 3D engines. Its principle of operation is to render regions close to the user with multiple highly detailed mesh segments, whilst regions further away from the user are rendered with a lower-detail mesh. The remaining segments that are not rendered at a particular time are kept in external storage. This approach frees up memory and reduces the computational power required to render highly detailed meshes. / Dissertation (MEng)--University of Pretoria, 2020. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
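The actual Unreal/Unity implementation is engine-specific and not shown in the abstract; the following engine-agnostic sketch only illustrates the stated loading rule, with hypothetical segment centroids and a made-up detail radius.

```python
# Engine-agnostic sketch of the distance-based loading rule described above.
import numpy as np

def select_segments(segment_centres, user_position, high_detail_radius=50.0):
    """Return indices of segments to render at high detail; all others use the coarse mesh."""
    d = np.linalg.norm(np.asarray(segment_centres) - np.asarray(user_position), axis=1)
    return np.flatnonzero(d < high_detail_radius)

centres = np.random.rand(200, 3) * 500.0        # placeholder segment centroids (metres)
user = np.array([250.0, 250.0, 0.0])            # placeholder user position
active = select_segments(centres, user)
print(f"{active.size} of {len(centres)} segments rendered at high detail;"
      " the rest fall back to the low-detail mesh or stay in external storage")
```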
|
2 |
A comparative analysis of UAS photogrammetry and terrestrial LIDAR for reconstructing microtopography of harvested fields. Lee, Kang San, 01 May 2019.
The purpose of this study is to compare elevation models from a terrestrial laser scanner (TLS) and unmanned aerial system (UAS) photogrammetry, focusing on the detection of microtopography and on the relationship between elevation differences and image texture. The soils of agricultural lands are repeatedly modified by intensive farming activities, almost every year. The microtopography of the soil, which plays an important role in surface runoff and infiltration, depends on cultivation practices and the field environment; for example, crop residues, furrows, tillage direction, and slope may affect soil nutrients and erosion. To better understand and prevent soil degradation through erosion, high-resolution 3D reconstructions for soil monitoring are required.
In this study, we attempt to characterize the soil roughness associated with sustainable practices and the physical characteristics of fields by collecting soil datasets from non-contact remote sensing platforms. Soil roughness was observed under field conditions using data derived from the terrestrial laser scanner (TLS) and unmanned aerial system (UAS) photogrammetry within harvested fields in east-central Iowa. The two datasets were then compared, focusing on local relief detection and on the relationship between outlier distributions and image texture.
Both the TLS- and UAS-derived point clouds successfully reconstructed digital elevation models with roughly 5 cm RMSE after the registration and merging process, and these models showed the local relief of the study areas in fine detail. However, several clusters of outlier points were detected when comparing the TLS- and UAS-derived DEMs. To investigate the outlier distributions, image texture was examined with global and local block analyses. Although no significant correlations were found, most of the study sites suggest that poorly textured ground may trigger large elevation errors. To enhance the image texture, several possible solutions are described, such as local contrast enhancement using the Wallis filter.
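The study's own processing chain is not reproduced here; as a sketch of the kind of local contrast enhancement mentioned above, the snippet below implements one common formulation of the Wallis filter, with illustrative window size and target statistics rather than values from the study.

```python
# Sketch of a basic Wallis-type local contrast enhancement (one common formulation).
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(image, win=31, target_mean=127.0, target_std=60.0, brightness=0.6, contrast=0.9):
    img = image.astype(float)
    local_mean = uniform_filter(img, win)
    local_std = np.sqrt(np.maximum(uniform_filter(img**2, win) - local_mean**2, 1e-6))
    gain = contrast * target_std / (contrast * local_std + (1.0 - contrast) * target_std)
    out = (img - local_mean) * gain + brightness * target_mean + (1.0 - brightness) * local_mean
    return np.clip(out, 0, 255)

# Example on a synthetic low-contrast "soil" image:
rng = np.random.default_rng(0)
flat = 120 + 5 * rng.standard_normal((256, 256))
enhanced = wallis(flat)
print(round(flat.std(), 1), "->", round(enhanced.std(), 1), "grey-level standard deviation")
```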
|
3 |
COMPUTER VISION SYSTEMS FOR PRACTICAL APPLICATIONS IN PRECISION LIVESTOCK FARMING. Prajwal Rao (19194526), 23 July 2024.
The use of advanced imaging technology and algorithms for managing and monitoring livestock improves various aspects of livestock farming, such as health monitoring, behavioral analysis, early disease detection, feed management, and overall farming efficiency. Leveraging computer vision techniques such as keypoint detection and depth estimation for these problems helps to automate repeatable tasks, which in turn improves farming efficiency. In this thesis, we delve into two main aspects, early disease detection and feed management:
- Phenotyping ducks using keypoint detection: a platform to measure duck phenotypes such as wingspan, back length, and hip width, packaged in an online user interface for ease of use.
- Real-time cattle intake monitoring using computer vision: a complete end-to-end real-time monitoring system to measure cattle feed intake using stereo cameras.
Furthermore, considering the above implementations and their drawbacks, we propose a cost-effective simulation environment for feed estimation to conduct extensive experiments prior to real-world implementation. This approach allows us to test and refine the computer vision systems under controlled conditions, identify potential issues, and optimize performance without the high costs and risks associated with direct deployment on farms. By simulating various scenarios and conditions, we can gather valuable data, improve algorithm accuracy, and ensure the system's robustness. Ultimately, this preparatory step will facilitate a smoother transition to real-world applications, enhancing the reliability and effectiveness of computer vision in precision livestock farming.
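Neither system's code is reproduced here; the snippet below is only a minimal sketch of the geometric idea behind depth-based feed intake measurement, integrating the height difference between two depth maps of a feed bunk over the grid cell area (all names, resolutions and numbers are hypothetical).

```python
# Hypothetical sketch of the core idea behind depth-based feed intake measurement:
# volume removed = sum over grid cells of (height before - height after) * cell area.
import numpy as np

def feed_volume_change(depth_before, depth_after, cell_area_m2):
    """Depth maps give the height of the feed surface above the empty bunk, in metres."""
    diff = np.clip(depth_before - depth_after, 0.0, None)   # ignore negative noise
    return float(diff.sum() * cell_area_m2)

# Synthetic example: a 200 x 200 grid of 1 cm x 1 cm cells, 3 cm of feed eaten everywhere.
before = np.full((200, 200), 0.20)       # 20 cm of feed
after = np.full((200, 200), 0.17)        # 17 cm left
print(f"{feed_volume_change(before, after, 0.01 * 0.01):.4f} m^3 of feed removed")
```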
|
4 |
A 3D OBJECT SCANNER: An approach using Microsoft Kinect. Manikhi, Omid; Adlkhast, Behnam, January 2013.
In this thesis report, an approach to using Microsoft Kinect to scan an object and provide a 3D model for further processing is proposed. The additional hardware required to rotate the object and fully expose it to the sensor, the drivers and SDKs used, and the implemented software are discussed. It is explained how the acquired data is stored, and an efficient storage and mapping method requiring no special hardware or memory is introduced. The proposed solution circumvents the point-cloud registration task, based on the fact that the transformation from one frame to the next is known with extremely high precision. Next, a method to merge the acquired 3D data from all around the object into a single noise-free model using a spherical transformation is proposed, and a few experiments and their results are demonstrated and discussed.
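The thesis's own storage and merging scheme is not shown here; the sketch below only illustrates the stated principle that a precisely known turntable rotation can replace point-cloud registration, using an assumed vertical rotation axis, step angle and synthetic frames.

```python
# Sketch: a precisely known turntable rotation replaces registration; each frame is
# rotated back into the first frame's coordinate system and the frames are concatenated.
import numpy as np

def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def merge_frames(frames, step_deg):
    """frames[i] is an (N_i, 3) array captured after i turntable steps of step_deg degrees."""
    merged = []
    for i, pts in enumerate(frames):
        # Undo the i-th rotation so all frames share the first frame's coordinate system.
        merged.append(pts @ rotation_z(np.radians(-i * step_deg)).T)
    return np.vstack(merged)

# Example: 36 synthetic frames of 500 points each, 10 degrees per turntable step.
frames = [np.random.rand(500, 3) for _ in range(36)]
cloud = merge_frames(frames, 10.0)
print(cloud.shape)   # (18000, 3)
```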
|
5 |
Automatisierte Bestimmung von Schüttgutvolumen aus Punktwolken (Automated determination of bulk material volumes from point clouds). Rosenbohm, Mario, 03 June 2020.
The aim of this diploma thesis is to develop a software tool that supports the user in calculating volumes from point clouds.
To this end, various methods are discussed that allow the point cloud to be separated into two classes ('belongs to the volume surface' and 'does not belong to the volume surface'). One method proves suitable under certain conditions; with it, the required classification can be carried out efficiently.
Furthermore, methods for the actual volume calculation are analysed and checked for their suitability for use on the point cloud. From these results, a procedure for classifying the scan points and then directly generating the volume model is developed.
The representation of the volume is bound to given framework conditions, namely the use of a specific CAD software package. With the help of this CAD software, a representation variant is to be chosen that allows working efficiently with the volume model.
All procedures and methods, including the problems that arise along the way, are implemented in program code, so that in the end a functional, supportive piece of software is created.
1 Introduction
1.1 Theses / questions
1.2 Literature
1.3 Scope
2 Fundamentals
2.1 Framework conditions
2.2 Structure of the programming project
2.2.1 Storage of settings / parameters
2.3 Data basis
2.3.1 File formats of the point cloud files
3 Filtering the point cloud
3.1 The voxel system
3.1.1 Programming the voxel system
3.1.2 Storing the voxel data
3.2 Filtering with the help of the voxel system
4 Volume calculations
4.1 Explanation of the column prisms used
4.1.1 Rasterization of the base area
4.1.2 Implementation of the texel system as a quadtree
4.2 Use of the texel system in the volume calculation
4.2.1 Storing the volume data
4.3 Representation of the volume data
5 Editing the volume data
5.1 Filling holes in the volume surface
5.2 Smoothing regions
6 The software as a whole
6.1 Analysis of effectiveness
6.2 Outlook and extension / The intention of this thesis is to develop a software tool which supports the user in calculating volumes from point clouds.
For this purpose, different methods are discussed that allow a separation of the point cloud into two classes (i.e. 'belongs to the volume surface' and 'does not belong to the volume surface'). One method is shown to be suitable under certain conditions. With this method, the required classification can be carried out efficiently.
Furthermore, methods for the volume calculation itself are analyzed and tested to ensure that they are suitable for use on the point cloud. Based on these results, a method for the classification of the scan points and the subsequent generation of the volume model is developed.
The representation of the volume is bound to given framework conditions, including the use of a specific CAD software. With the help of this CAD software, a representation variant is to be selected which enables a user to easily work with the volume model.
All procedures and methods, including the problems that arise in the process, are converted into program code, so that in the end a functional, helpful piece of software is created.
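The quadtree-based texel system and the CAD output described above are not reproduced here; the snippet below is only a sketch of the column-prism idea itself, rasterizing the footprint of the points classified as belonging to the volume surface and summing one prism per cell (grid size and the synthetic heap are assumptions).

```python
# Hedged sketch of the column-prism volume idea: rasterize the footprint of the classified
# surface points and sum cell_area * mean cell height above an assumed flat reference plane.
import numpy as np

def stockpile_volume(surface_points, cell=0.25):
    """surface_points: (N, 3) points classified as 'belongs to the volume surface'."""
    xy, z = surface_points[:, :2], surface_points[:, 2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]          # one key per raster cell
    order = np.argsort(keys)
    starts = np.unique(keys[order], return_index=True)[1]
    sums = np.add.reduceat(z[order], starts)                   # per-cell height sums
    counts = np.unique(keys[order], return_counts=True)[1]
    return float(np.sum(sums / counts) * cell * cell)          # one prism per occupied cell

# Synthetic cone-shaped heap, radius 5 m, height 2 m (true volume ~ pi*r^2*h/3 ~ 52.4 m^3):
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(200000, 2))
xy = xy[np.hypot(xy[:, 0], xy[:, 1]) <= 5.0]
z = 2.0 * (1.0 - np.hypot(xy[:, 0], xy[:, 1]) / 5.0)
print(round(stockpile_volume(np.column_stack([xy, z])), 1), "m^3")
```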
|
6 |
3D rekonstrukce z více pohledů kamer / 3D reconstruction from multiple views. Sládeček, Martin, January 2019.
This thesis deals with the task of three-dimensional scene reconstruction using image data obtained from multiple views. It is assumed that the intrinsic parameters of the cameras used are known. The theoretical chapters describe the basic principles of the individual reconstruction steps. Various possible implementations of a data model suitable for this task are also described. The practical part includes a comparison of methods for filtering false keypoint correspondences, an implementation of polar stereo rectification, and a comparison of the disparity map calculation methods bundled with the OpenCV library. In the final portion of the thesis, examples of reconstructed 3D models are presented and discussed.
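The thesis compares several of OpenCV's bundled disparity methods; the sketch below merely shows how one of them (StereoSGBM) is invoked, on a synthetic, already rectified image pair and with illustrative parameter values rather than the settings used in the work.

```python
# Hedged sketch of computing a disparity map with one of OpenCV's bundled matchers (StereoSGBM).
import cv2
import numpy as np

# Synthetic rectified pair: a textured left image and a right image shifted by 24 px.
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
true_disparity = 24
right = np.roll(left, -true_disparity, axis=1)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,       # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,            # smoothness penalties for small / large disparity jumps
    P2=32 * 5 * 5,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
print("median estimated disparity:", np.median(disparity[disparity > 0]))   # ~24
```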
|
7 |
Bin Picking a robotické vidění / Bin Picking and Robotic Vision. Múčka, Jan, January 2019.
The aim of this master’s thesis is to describe the use of robotic vision for bin picking and to create an application for carrying out this task. The application is able to distinguish several objects based on data from a camera with depth perception; it should detect an object, recognize it, and determine its location and orientation. Bin picking is one of the biggest challenges in today's automation.
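The application's actual recognition pipeline is not described in enough detail to reproduce; as a generic illustration of estimating an object's location and orientation from a segmented depth-camera point cloud, the sketch below uses the centroid and principal axes (PCA), which is not necessarily the method used in the thesis. The box-shaped data is synthetic.

```python
# Generic pose sketch: centroid for position, principal axes (via SVD) for orientation.
import numpy as np

def estimate_pose(points):
    """Return (centroid, rotation matrix whose columns are the object's principal axes)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    axes = vt.T                              # columns: directions of decreasing extent
    if np.linalg.det(axes) < 0:              # keep a right-handed frame
        axes[:, -1] *= -1
    return centroid, axes

# Synthetic elongated box rotated 30 degrees about z, offset to (0.4, -0.2, 0.9) metres.
rng = np.random.default_rng(2)
box = rng.uniform(-1, 1, size=(5000, 3)) * [0.10, 0.03, 0.02]
a = np.radians(30)
rot = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
cloud = box @ rot.T + [0.4, -0.2, 0.9]
centre, axes = estimate_pose(cloud)
angle = np.degrees(np.arctan2(axes[1, 0], axes[0, 0])) % 180.0
print(np.round(centre, 3), round(angle, 1))   # ~[0.4 -0.2 0.9] and ~30 (axis sign is ambiguous)
```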
|
8 |
Identifikace 3D objektů pro robotické aplikace / Identification of 3D objects for Robotic Applications. Hujňák, Jaroslav, January 2020.
This thesis focuses on robotic 3D vision for application in bin picking. A new method based on Conformal Geometric Algebra (CGA) is proposed and tested for the identification of spheres in point clouds created with a 3D scanner. The speed, precision and scalability of this method are compared to those of a traditional descriptor-based method. It is demonstrated that CGA maintains the same precision as the traditional method in a much shorter time. The CGA-based approach therefore seems promising for future use in robotic 3D vision for the identification and localization of spheres.
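The CGA formulation itself is not reproduced here; for context, the sketch below shows a conventional algebraic least-squares sphere fit, one standard non-CGA way of recovering a sphere's centre and radius from scanned points, evaluated on a synthetic noisy scan.

```python
# Conventional algebraic sphere fit: |p|^2 = 2 p.c + d with d = r^2 - |c|^2 is linear
# in the unknowns c and d, so centre and radius follow from one least-squares solve.
import numpy as np

def fit_sphere(points):
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    return centre, np.sqrt(d + centre @ centre)

# Synthetic scan of a sphere of radius 0.05 m centred at (0.3, -0.1, 0.6), with 0.5 mm noise.
rng = np.random.default_rng(3)
dirs = rng.standard_normal((2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([0.3, -0.1, 0.6]) + 0.05 * dirs + 0.0005 * rng.standard_normal((2000, 3))
centre, radius = fit_sphere(pts)
print(np.round(centre, 4), round(float(radius), 4))   # ~ (0.3, -0.1, 0.6), ~0.05
```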
|
9 |
Reconstruction of trees from 3D point clouds. Stålberg, Martin, January 2017.
The geometrical structure of a tree can consist of thousands, even millions, of branches, twigs and leaves in complex arrangements. The structure contains a lot of useful information and can be used, for example, to assess a tree's health or to calculate parameters such as total wood volume or branch size distribution. Because of this complexity, capturing the structure of an entire tree used to be nearly impossible, but the increased availability and quality of digital cameras and Light Detection and Ranging (LIDAR) instruments in particular are making it increasingly feasible. A set of digital images of a tree, or a point cloud of a tree from a LIDAR scan, contains a lot of data, but the information about the tree structure has to be extracted from this data through analysis. This work presents a method of reconstructing 3D models of trees from point clouds. The model is constructed from cylindrical segments which are added one by one. Bayesian inference is used to determine how to optimize the parameters of model segment candidates and whether or not to accept them as part of the model. A Hough transform for finding cylinders in point clouds is presented and used as a heuristic to guide the proposals of model segment candidates. Previous related works have mainly focused on high-density point clouds of sparse trees, whereas the objective of this work was to analyze low-resolution point clouds of dense almond trees. The method is evaluated on artificial and real datasets; it works rather well on high-quality data, but performs poorly on low-resolution data with gaps and occlusions.
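Neither the cylinder Hough transform nor the Bayesian acceptance step is reproduced here; the sketch below only illustrates the basic geometric test both rely on, scoring a candidate cylinder (axis point, axis direction, radius) by counting the points that lie close to its surface, on a synthetic branch-plus-clutter cloud.

```python
# Sketch of scoring a candidate cylinder against a point cloud by counting surface inliers.
import numpy as np

def cylinder_inliers(points, axis_point, axis_dir, radius, tol=0.01):
    """Count points whose distance to the cylinder surface is within tol (same units as points)."""
    d = np.asarray(axis_dir, dtype=float)
    d /= np.linalg.norm(d)
    v = points - axis_point
    radial = np.linalg.norm(v - np.outer(v @ d, d), axis=1)   # distance to the axis line
    return int(np.sum(np.abs(radial - radius) < tol))

# Synthetic branch segment: radius 4 cm around the z-axis, 1 mm noise, plus background clutter.
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 3000)
branch = np.column_stack([0.04 * np.cos(theta), 0.04 * np.sin(theta), rng.uniform(0, 0.5, 3000)])
branch += 0.001 * rng.standard_normal(branch.shape)
clutter = rng.uniform(-0.3, 0.3, size=(3000, 3))
cloud = np.vstack([branch, clutter])
print(cylinder_inliers(cloud, np.zeros(3), [0, 0, 1], 0.04))   # most of the 3000 branch points
```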
|