141

Real-Time Body Tracking and Projection Mapping in the Interactive Arts

Baroya, Sydney 01 December 2020 (has links) (PDF)
Projection mapping, a subtopic of augmented reality, displays computer-generated light visualizations from projectors onto the real environment. A challenge for projection mapping in the interactive performing arts is dynamic body movement. Accuracy and speed are key to an immersive body projection mapping application, and both depend on scanning and processing time. This thesis presents a novel technique to achieve real-time body projection mapping utilizing a state-of-the-art body-tracking device, Microsoft's Azure Kinect DK, using an array of trackers for error minimization and movement prediction. The device's Sensor and Body Tracking SDKs allow multiple devices to be synchronized. We combine the tracking results from this feature with motion prediction to provide an accurate approximation of each body joint. Using the new joint approximations and the depth information from the Kinect, we create a silhouette and map textures and animations to it before projecting it back onto the user. Our implementation of gesture detection provides interaction between the user and the projected images. Our results reduce the lag introduced by the devices, code, and projector, producing realistic real-time body projection mapping. Our end goal was to display the work in an art show; it was presented at Burning Man 2019 and Delfines de San Carlos 2020 as an interactive art installation.
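The combination of multi-device tracking and motion prediction described above can be illustrated with a short sketch (not from the thesis): joint positions reported by several synchronized trackers are fused by confidence-weighted averaging, and a simple constant-velocity model extrapolates the fused joint forward by the sensor-to-projector latency. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def fuse_joints(joint_estimates, confidences):
    """Combine the same joint as seen by several synchronized trackers
    into one estimate, weighting each device by its confidence."""
    w = np.asarray(confidences, dtype=float)
    w /= w.sum()
    return np.average(np.asarray(joint_estimates, dtype=float), axis=0, weights=w)

def predict_joint(prev_pos, curr_pos, dt, latency):
    """Constant-velocity prediction: extrapolate the joint forward by the
    known capture-to-projection latency to reduce visible lag."""
    velocity = (curr_pos - prev_pos) / dt
    return curr_pos + velocity * latency

# Hypothetical numbers: one knee joint seen by two Kinects, 30 fps capture,
# roughly 100 ms end-to-end latency from sensor to projector.
estimates = [np.array([0.42, 0.95, 2.10]), np.array([0.44, 0.93, 2.08])]
fused = fuse_joints(estimates, confidences=[0.9, 0.7])
prev = np.array([0.40, 0.96, 2.12])
projected_pos = predict_joint(prev, fused, dt=1 / 30, latency=0.100)
print(projected_pos)
```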
142

Автоматизация формирования облака точек на основе данных, полученных методом фотограмметрии с помощью программного обеспечения Metashape : магистерская диссертация / Automation of point cloud formation based on data obtained by photogrammetry using Metashape software

Клещева, М. С., Kleshcheva, M. S. January 2023 (has links)
The work presents the results of imaging interior elements at a cultural heritage site. The processing of the scan results is shown. Point clouds and orthophoto maps were obtained, and a script was written to automate the workflow in the Metashape software.
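Metashape exposes a Python API for exactly this kind of automation. The sketch below is a hypothetical example of such a script, not the one written in the thesis: file paths and processing settings are placeholders, and the method names follow the Metashape 1.x Python API (several calls, such as buildDenseCloud and exportPoints, are renamed in the 2.x API).

```python
import Metashape  # bundled with Agisoft Metashape Professional, not on PyPI

doc = Metashape.Document()
doc.save("heritage_interior.psx")                    # placeholder project path
chunk = doc.addChunk()
chunk.addPhotos(["img_0001.jpg", "img_0002.jpg"])    # placeholder photo list

chunk.matchPhotos(downscale=1, generic_preselection=True)   # feature matching
chunk.alignCameras()                                        # camera poses + sparse cloud

chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()                              # chunk.buildPointCloud() in 2.x
chunk.exportPoints("interior_cloud.las",             # chunk.exportPointCloud() in 2.x
                   source_data=Metashape.DenseCloudData)
doc.save()
```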
143

Terrestrial Laser Scanning for Wooden Facade-system Inspection

Scharf, Alexander January 2019 (has links)
The objective of this study was to evaluate the feasibility of measuring movement, deformation and displacement in wooden façade systems by terrestrial laser scanning. An overview of different surveying techniques and methods was created. Point cloud structure and processing are explained in detail, as they are the foundation for understanding the advantages and disadvantages of laser scanning. The limits of monitoring façades with simple and complex structures were tested with the phase-based laser scanner FARO Focus 3DS. In-field measurements of existing façades were made to show the capability of extracting defect features such as cracks by laser scanning. The high noise in the data, caused by the limited precision of 3D laser scanners, is problematic: details on a scale of a few millimetres are hidden by the noise. Methods to reduce the noise during point cloud processing proved to be very data-specific, and the uneven point cloud structure of a façade scan therefore made it difficult to find a method that works for whole scans. Automatically dividing the point cloud into different façade parts by a process called segmentation could make this possible; however, no suitable segmentation algorithm was found, and developing a new algorithm would have exceeded the scope of this thesis. The goal of automatic point cloud processing was therefore not fulfilled and was set aside in the further analyses of outdoor façades and laboratory experiments. The experimental scans showed that several kinds of information could be extracted from the scans. The accuracy of measured board and gap dimensions was highly dependent on the point cloud cleaning steps, but the measurements provided information that could be used to track the development of a façade's features; extensive calibration might improve their accuracy. Deviations of façade structures from flat planes were clearly visible when colorizing point clouds, and this may be the main benefit of measuring spatial information of façades by non-contact methods. Façade displacement was determined under laboratory conditions: a façade panel was displaced manually, and the displacement was calculated with different algorithms. The algorithm that determines the distance to the closest point in a pair of point clouds provided the best results while being the simplest in terms of computational complexity. Out-of-plane displacement was the most suitable to detect with this method; displacement sideways or upwards required more advanced point cloud processing and manual interpretation by the software operator. Based on the findings of the study, it can be concluded that laser scanning is not the right method for structural health monitoring of façades when tracking small deformations, especially deformations below 5 mm and defects such as cracks, is the main goal. Displacements, defects and deformations at larger scales can be detected, but only with a large amount of point cloud processing. It is not clear whether the equipment cost, surveying time and the problems caused by the high variability of scan results with façade colour, shape and texture stand in a positive relation to the benefits obtained from using laser scanning over manual surveying.
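The closest-point (cloud-to-cloud) distance that proved most effective for detecting out-of-plane displacement can be sketched with the open-source Open3D library; this is an illustrative reimplementation under assumed file names and noise-filter settings, not the processing pipeline used in the thesis.

```python
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("facade_epoch1.ply")   # placeholder scans
displaced = o3d.io.read_point_cloud("facade_epoch2.ply")

# Reduce scanner noise first, since mm-scale detail is otherwise hidden in it.
reference, _ = reference.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
displaced, _ = displaced.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Distance from each point of the later epoch to its closest point in the earlier one.
distances = np.asarray(displaced.compute_point_cloud_distance(reference))
print(f"mean {distances.mean() * 1000:.1f} mm, "
      f"95th percentile {np.percentile(distances, 95) * 1000:.1f} mm")
```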
144

Point Cloud Registration using both Machine Learning and Non-learning Methods : with Data from a Photon-counting LIDAR Sensor

Boström, Maja January 2023 (has links)
Point cloud registration with data measured by a photon-counting LIDAR sensor at large distances (500 m - 1.5 km) is an expanding field. Data measured from far away is sparse and has low detail, which can make the registration process difficult, and registering this type of data is fairly unexplored. In recent years, machine learning for point cloud registration has been explored with promising results. This work compares the performance of the point cloud registration algorithm Iterative Closest Point with state-of-the-art algorithms on data from a photon-counting LIDAR sensor. The data was provided by the Swedish Defense Research Agency (FOI). The chosen state-of-the-art algorithms were the non-learning-based Fast Global Registration and the learning-based D3Feat and SpinNet. The results indicate that all state-of-the-art algorithms achieve a substantial increase in performance compared to the Iterative Closest Point method. All the state-of-the-art algorithms utilize their calculated features to obtain better correspondence points and can therefore achieve higher performance in point cloud registration. D3Feat performed point cloud registration with the highest accuracy of all the state-of-the-art algorithms and ICP.
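The Iterative Closest Point baseline that the learned methods are compared against can be run in a few lines with Open3D; the sketch below is a generic illustration with assumed file names, voxel size and correspondence distance, not the FOI data or the exact configuration used in the thesis.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_a.pcd")   # placeholder point clouds
target = o3d.io.read_point_cloud("scan_b.pcd")

# Long-range photon-counting data is already sparse, so downsample only lightly.
source_down = source.voxel_down_sample(voxel_size=0.5)   # metres, assumption
target_down = target.voxel_down_sample(voxel_size=0.5)

result = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    2.0,                 # max correspondence distance in metres (assumption)
    np.identity(4),      # initial alignment: identity
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
print(result.transformation)
```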
145

Automatic Point Cloud Modelling for BIM in AEC Sector / Automatisk punktmolnmodellering för BIM i AEC sektorn

Ishag, Mohamed Salih Abaker January 2022 (has links)
In this research, automatic point cloud modelling strategies were tested to find the best strategies for modelling walls, floors and ceilings in terms of processing time and accuracy. The modelling was applied to point cloud data from the Architecture, Engineering and Construction field: a point cloud of a building collected by the Sweco company in Sweden. The point cloud was segmented and classified into two classes, the first being walls and the second floors and ceilings. The strategies were applied to each class using two commercial software packages, Leica Cyclone 3D-R and Pointfuse, and one free open-source package, Blender. Each strategy was formed as a collection of parameters, some numeric and some non-numeric, and three combinations were tested: in the first, the default values of the numerical parameters were used; in the second, the defaults were increased by 50%; and in the third, the defaults were decreased by 50%. The final results showed that all of the best strategies used the Leica Cyclone 3D-R software. Regarding processing time for the walls, the fastest strategy was to increase the default numerical parameters of the Regular Sampling function by 50% while ignoring the scanning directions. Regarding processing time for the floors and ceilings, the fastest strategy was likewise to increase the default numerical parameters of the meshing-in-two-steps function by 50%, where in the first step the "try to create watertight mesh" option is chosen as the hole-management method and the scanning directions are not ignored, and in the second step the "refine mesh without cloud" option is chosen as the refining method with the following parameters: deviation error; refine on free borders not included; preserve sharp edges included. Regarding accuracy, the most accurate strategy for modelling the walls was to decrease the default numerical parameters of the meshing-in-two-steps function by 50%, where in the first step the hole-detection option is chosen as the hole-management method and the scanning directions are included, and in the second step the "refine mesh from cloud" option is chosen as the refining method with the following parameters: meshing by keeping only the best points; deviation error; distance; local reorganization included; no free-border modification chosen as the hole-management method. The most accurate strategy for modelling the floors and ceilings was to decrease the default numerical parameters of the meshing-in-two-steps function by 50%, where in the first step the hole-detection option is chosen as the hole-management method and the scanning directions are included, and in the second step the "refine mesh from cloud interpolation" option is chosen as the refining method with the following parameters: refine with deviation error as the refining method; deviation error; maximum number of triangles; minimum triangle size; refine with evenly spaced points not included; distance included; local reorganization included; angle threshold on scanning directions not included; refine free border chosen as the hole-management method.
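The three parameter combinations tested per strategy (defaults, defaults increased by 50%, defaults decreased by 50%) could be generated, for example, as in the short sketch below; the parameter names and values are illustrative stand-ins, not actual Cyclone 3D-R, Pointfuse or Blender setting names.

```python
# Illustrative default numeric parameters for one meshing strategy (made-up names).
defaults = {"deviation_error": 0.010, "min_triangle_size": 0.005, "distance": 0.050}

combinations = {
    "default":  defaults,
    "plus_50":  {k: round(v * 1.5, 6) for k, v in defaults.items()},
    "minus_50": {k: round(v * 0.5, 6) for k, v in defaults.items()},
}

for name, params in combinations.items():
    print(name, params)   # each combination would then be run in the software
```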
146

Adaptive Concrete 3D Printing Based on industrial Robotics / Adaptiv betong 3D-utskrift Baserad på industriell robotik

Hu, Ruiming January 2021 (has links)
Additive manufacturing, also known as 3D printing, is the construction of a three-dimensional object from a 3D CAD model by depositing, joining or solidifying material under computer control. It is becoming widely used in many fields, such as architecture and civil engineering, industry and even medicine. The prevalence of six-axis industrial robots also gives researchers and engineers extended possibilities to design and create with the additional degrees of freedom. This project was conducted at the KTH ABE and ITM schools. In recent years the ABE school has explored the possibility of 3D printing with building materials such as concrete, which provides a practical basis for this project, while the ITM school gave guidance and suggestions based on its experience in industrial manufacturing and robot control. The goals were to propose an improvement of the current workflow and to explore a strategy for detecting defects in the concrete 3D-printed product. Due to the material limitations of concrete and inaccuracies in robot control, printing tasks that should have been automated previously required human supervision and intervention, which affects work efficiency and the completion of the finished product. To avoid this, an Intel RealSense L515 LiDAR camera was used to capture a point cloud of the product and measure its height, so that the program can compensate the number of printed layers and the robot trajectory. The industrial robot is controlled by KRL code generated from the known trajectory. The implementation of this project consists of background research, design of the 3D printing system layout, algorithm development and a case study. A simple clay model was produced during the project to study the feasibility of the method.
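The height-compensation idea (scan the part, estimate how much has actually been printed, then adjust the remaining layer count) can be sketched as follows; the percentile choice, coordinate convention and numbers are assumptions for illustration, not the project's implementation.

```python
import numpy as np

def remaining_layers(point_cloud_xyz, target_height, layer_height):
    """Estimate how many layers are still needed from a scan of the part.

    point_cloud_xyz: (N, 3) array from the depth camera, in metres, with z
    pointing up and z = 0 on the print bed (an assumption for this sketch).
    """
    # Use a high percentile rather than the maximum so single noisy points
    # do not dominate the height estimate.
    printed_height = np.percentile(point_cloud_xyz[:, 2], 99)
    missing = max(target_height - printed_height, 0.0)
    return int(np.ceil(missing / layer_height)), printed_height

# Hypothetical clay print: 120 mm target height, 3 mm layers.
cloud = np.random.rand(10_000, 3) * [0.2, 0.2, 0.085]   # stand-in for L515 data
layers, height = remaining_layers(cloud, target_height=0.120, layer_height=0.003)
print(f"printed {height * 1000:.1f} mm, {layers} layers to go")
```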
147

Trädhöjdsbestämning med UAV-fotogrammetri och UAV-laserskanning : En jämförande studie för detektering av riskträd / Tree height determination with UAV photogrammetry and UAV laser scanning : A comparative study for detecting risk trees

Larsson, Alexander, Oscarsson, Olle January 2020 (has links)
UAVs (Unmanned Aerial Vehicles), or drones, are commonly used for collecting spatial data and aerial images by companies, public agencies and private individuals. UAV techniques make the collection of geodata over large areas easier and can be used for mapping, 3D modelling and other analyses, e.g. volume determination. The aim of this study was to compare 3D point clouds generated from airborne laser scanning and digital photogrammetry for detecting tree heights. It was also investigated which method produced the most reliable results and whether the techniques are applicable for detecting risk trees. Risk trees in this study are trees that pose a potential risk of damaging important infrastructure such as electric power transmission lines. Nowadays the data collection for identifying such trees is mainly conducted by helicopter, but with UAV technologies the costs can be significantly reduced. The data collection was performed over a sparse coniferous forest area in Rörberg, just outside Gävle, Sweden. Laser data was collected with a YellowScan LiDAR (Light Detection and Ranging) sensor mounted on a Geodrone X4L Professional drone. For the photogrammetric data, a DJI Phantom 4 RTK (Real Time Kinematic) drone was used with its standard camera. Both data sets were directly georeferenced, using single-station RTK for the laser scanning and SWEPOS Network RTK for the photogrammetric flight. To check the quality of the collected data, six control profiles were measured with a total station in the forest area and compared to the generated point clouds. The results show a mean deviation and standard deviation in height between the LiDAR point cloud and the control profiles of -0.038 m and 0.049 m, respectively. For the photogrammetric point cloud and the control profiles, the mean deviation was +0.060 m and the standard deviation 0.090 m. These values were then compared to the requirements in SIS-TS 21144:2016. To determine absolute tree heights, ten trees were measured with a total station; the coordinates of the highest and lowest point of each tree were subtracted to serve as absolute height values against which the LiDAR- and photogrammetry-derived tree height models were compared. The comparison of the two UAV methods showed mean height deviations of -0.325 m for LiDAR and -0.928 m for photogrammetry. This study concludes that LiDAR is the more suitable of the two technologies for detecting tree heights and creating canopy height models, based on the obtained height values, the quality of the digital terrain model and the good coverage of points in plane and height in the point cloud.
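The mean and standard deviation of the height differences against the total-station reference can be computed with a few lines of NumPy; the values below are placeholder numbers for illustration only, not the study's measurements.

```python
import numpy as np

# Absolute tree heights (m) from the total station (reference) and the two
# UAV-derived canopy height models; placeholder values, not the study's data.
reference = np.array([18.2, 16.9, 21.4, 19.8, 17.5])
lidar     = np.array([17.9, 16.6, 21.0, 19.5, 17.1])
photo     = np.array([17.3, 16.0, 20.4, 18.9, 16.6])

def deviation_stats(estimate, truth):
    d = estimate - truth
    return d.mean(), d.std(ddof=1)     # mean deviation and standard deviation

for name, est in [("LiDAR", lidar), ("photogrammetry", photo)]:
    mean_dev, std_dev = deviation_stats(est, reference)
    print(f"{name}: mean deviation {mean_dev:+.3f} m, std {std_dev:.3f} m")
```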
148

Rum i översättning : Utforskning av punktmolnsskannerns generativa möjligheter / Spatial Translations : An exploration through the generative possibilities of point cloud scanning

Walter, Emma January 2024 (has links)
Spatial Translations (Rum i översättning) explores how point cloud scanning, a new technical tool for spatial documentation, can also be generative in a new design process within interior architecture and furniture design. The tool has streamlined the process of gathering spatial information and provides a detailed digital basis for further designing. I believe that all tools and materials that the architect uses affect the design work in different ways. This project explores how this new type of digital material, with a greater amount of spatial information and interpretations, can actively influence further design that takes place in the scanned space. The constant development of technical and virtual means within the profession has a major impact on future architecture and the built environment around us. It is important to increase the understanding of the technology as well as to be critical and innovative about its impact and possibilities in a creative practice. The technology is explored theoretically and practically in different spatial contexts as well as through digital manipulations of the information. Spatial Translations is inspired by colleagues, such as Interiors Matter, who have used point cloud scanning in other projects. This project tests and takes on similar explorations and materialization methods, but in a different context and implemented design. In the practical examination of the digital information, interesting mis-scans and spatial manipulations are found. These errors represent the scanner's perception of the rooms and occur through the various spatial challenges it was faced with. The project concludes that the technology offers a large amount of detailed digital and spatial material that is inspiring in many ways for new developments of ideas in further design processes.
149

Digital State Models for Infrastructure Condition Assessment and Structural Testing

Lama Salomon, Abraham 10 February 2017 (has links)
This research introduces and applies the concept of digital state models for civil infrastructure condition assessment and structural testing. Digital state models are defined herein as any transient or permanent 3D model of an object (e.g., textured meshes and point clouds) combined with any electromagnetic radiation (e.g., visible light, infrared, X-ray) or other two-dimensional image-like representation. In this study, digital state models are built using visible light and used to document the transient state of a wide variety of structures (ranging from concrete elements to cold-formed steel columns and hot-rolled steel shear walls) and civil infrastructure (bridges). The accuracy of digital state models was validated against traditional sensors (e.g., digital caliper, crack microscope, wire potentiometer). Overall, features measured from the 3D point cloud data presented a maximum error of ±0.10 in. (±2.5 mm), and surface features (i.e., crack widths) measured from the texture information in textured polygon meshes had a maximum error of ±0.010 in. (±0.25 mm). Results showed that digital state models perform similarly across all specimen surface types and between laboratory and field experiments. It is also shown that digital state models have great potential for structural assessment by significantly improving data collection, automation, change detection, visualization, and augmented reality, with significant opportunities for commercial development. Algorithms to analyze and extract information from digital state models, such as cracks, displacement, and buckling deformation, are developed and tested. Finally, the extensive data sets collected in this effort are shared for research development in computer vision-based infrastructure condition assessment, eliminating a major obstacle to advancing this field: the absence of publicly available data sets. / Ph. D.
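Because surface features such as crack widths are measured from the texture of the digital state model, converting a pixel measurement to a physical width reduces to scaling by the texture's ground sample distance; the sketch below is an illustrative example with assumed values, not the algorithms developed in the dissertation.

```python
def crack_width_mm(crack_pixels: float, mm_per_pixel: float) -> float:
    """Convert a crack opening measured in texture pixels to millimetres."""
    return crack_pixels * mm_per_pixel

# Hypothetical numbers: a 3-pixel-wide crack on a texture with 0.08 mm/pixel.
width = crack_width_mm(crack_pixels=3, mm_per_pixel=0.08)
print(f"crack width ~ {width:.2f} mm (reported accuracy was +/- 0.25 mm)")
```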
150

Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

Vock, Dominik 08 May 2014 (has links) (PDF)
Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, as well as forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared with the original 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet, to extract the required information from the raw scan data and to generate usable visual output, time-consuming and complex user-driven data processing is still required with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many different works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable for certain data-segmentation tasks, they are not necessarily suitable for arbitrary tasks due to the varying requirements of the different fields of research. This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios and specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To this end, this work adapts and validates various existing approaches to laser scan segmentation for accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees and other salient objects. The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as possibilities for combined application together with other procedures. The knowledge obtained is used to develop new algorithms and procedures that allow a satisfying segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to achieve a wider range of possible applications of the developed system for data export and utilization in different third-party software tools.
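A common starting point for this kind of street-scene segmentation is to extract the road surface as the dominant plane and then cluster the remaining points into candidate objects; the Open3D sketch below illustrates that generic approach with assumed file names and thresholds, and is not the method developed in the thesis.

```python
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("accident_scene.ply")   # placeholder scan

# RANSAC plane fit: treat the dominant plane as the road surface.
plane_model, road_idx = cloud.segment_plane(distance_threshold=0.05,
                                            ransac_n=3,
                                            num_iterations=1000)
road = cloud.select_by_index(road_idx)
rest = cloud.select_by_index(road_idx, invert=True)

# Cluster the remaining points into candidate objects (vehicles, trees, walls ...).
labels = np.array(rest.cluster_dbscan(eps=0.5, min_points=50))
n_clusters = labels.max() + 1 if labels.size else 0
print(f"road points: {len(road.points)}, candidate objects: {n_clusters}")
```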
