11

The V-SLAM Hurdler : A Faster V-SLAM System using Online Semantic Dynamic-and-Hardness-aware Approximation / V-SLAM Häcklöparen : Ett Snabbare V-SLAM System med Online semantisk Dynamisk-och-Hårdhetsmedveten Approximation

Liu, Mingxuan January 2022
Visual Simultaneous Localization And Mapping (V-SLAM) and object detection algorithms are two critical prerequisites for modern XR applications. V-SLAM allows an XR device to geometrically map its environment and simultaneously localize itself within it. Furthermore, object detectors based on Deep Neural Networks (DNNs) can be used to semantically understand what the features in that environment represent. However, both of these algorithms are computationally expensive, which makes it challenging to achieve good real-time performance on device. In this thesis, we first present TensorRT Quantized YOLOv4 (TRTQ-YOLOv4), a faster implementation of the YOLOv4 architecture [1] using FP16 reduced precision and INT8 quantization powered by the NVIDIA TensorRT [2] framework. Second, we propose the V-SLAM Hurdler: a faster V-SLAM system using online dynamic-and-hardness-aware approximation. The proposed system integrates the base RGB-D V-SLAM ORB-SLAM3 [3] with the INT8 TRTQ-YOLOv4 object detector, a novel Entropy-based Degree-of-Difficulty Estimator, an Online Hardness-aware Approximation Controller, and a Dynamic Object Eraser, applying online dynamic-and-hardness-aware approximation to the base V-SLAM system at runtime while increasing its robustness in dynamic scenes. We first evaluate the proposed object detector on a public object detection dataset: the FP16 TRTQ-YOLOv4 is 2× faster than the full-precision model without loss of accuracy, while the INT8 quantized TRTQ-YOLOv4 is almost 3× faster than the full-precision model with only a 0.024 loss in mAP@50:5:95. Second, we evaluate our proposed V-SLAM system on a public RGB-D SLAM dataset. In static scenes, the proposed system speeds up the base V-SLAM system by +21.2% on average with only a −0.7% loss of accuracy. In dynamic scenes, it not only accelerates the base system by +23.5% but also improves accuracy by +89.3%, making it as robust as in static scenes. Lastly, a comparison against state-of-the-art SLAM systems designed for dynamic environments shows that our system outperforms most of the compared methods in highly dynamic scenes.
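The abstract names an Entropy-based Degree-of-Difficulty Estimator without describing it, so the following is a minimal, hypothetical sketch of the general idea: use the Shannon entropy of the detector's class-probability outputs as a proxy for how hard the current frame is, and let a controller pick an approximation level from it. The function names and the thresholds are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def frame_difficulty(class_probs: np.ndarray) -> float:
    """Mean Shannon entropy over per-detection class distributions.

    class_probs: array of shape (num_detections, num_classes); each row
    is a probability distribution from the detector's softmax output.
    Higher mean entropy means less confident detections, i.e. a harder frame.
    """
    eps = 1e-12
    ent = -np.sum(class_probs * np.log(class_probs + eps), axis=1)
    return float(ent.mean()) if len(ent) else 0.0

def choose_detector_precision(difficulty: float) -> str:
    # Illustrative thresholds: easy frames tolerate aggressive
    # approximation (INT8); hard frames fall back to higher precision.
    if difficulty < 0.5:
        return "int8"
    elif difficulty < 1.5:
        return "fp16"
    return "fp32"
```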
12

Optimierung von Algorithmen zur Videoanalyse / Optimization of algorithms for video analysis : A framework to fit the demands of local television stations

Ritter, Marc 02 February 2015
The data collections of local television stations often comprise several tens of thousands of video tapes. Modern methods are needed to exploit the content of such archives automatically. The retrieval of objects plays a fundamental role here, with demanding requirements such as low false-alarm and high detection rates needed to prevent corruption of the search index and to enable successful searches. At the same time, a sufficient number of objects must be indexed to make assumptions about the actual content. This work focuses on the adaptation and optimization of existing detection techniques. To this end, the author implements a holistic workflow and process system tailored to the high performance demands of video analysis, with the aim of facilitating the development of image recognition algorithms, the visualization of intermediate steps, and their evaluation. The focus lies on methods for the structural decomposition of video material and for content-based analysis in the areas of face detection and pedestrian recognition.
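The structural decomposition of video material mentioned above typically begins with shot-boundary detection. As a hedged illustration of that first step (not Ritter's actual pipeline), the sketch below flags hard cuts by comparing color histograms of consecutive frames with OpenCV; the similarity threshold is an assumption.

```python
import cv2

def detect_hard_cuts(video_path: str, threshold: float = 0.6):
    """Return frame indices where the histogram correlation between
    consecutive frames drops below `threshold` (a likely hard cut)."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation close to 1.0 means similar frames; a sharp
            # drop suggests a shot boundary.
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```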
13

Forest Growth And Volume Estimation Using Machine Learning

Dahmén, Gustav, Strand, Erica January 2022
Estimating forest parameters from remote sensing data could streamline the forest industry from a time and economic perspective. This thesis uses object detection and semantic segmentation to detect and classify individual trees in images of 3D models reconstructed from satellite imagery. Two methods were investigated, showing different strengths in detecting and classifying trees in deciduous, evergreen, and mixed forests. These methods are valuable not only for forest inventory but also for telecommunication companies and for defense and intelligence applications. The thesis also presents methods for estimating tree volume and tree growth in the 3D models, and the results show potential for use in forest management. Finally, the thesis highlights several economic, environmental, and social benefits of managing a digitalized forest.
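The abstract does not give the volume model used, so the following is only a sketch of a common approach: once individual trees are segmented, stem volume is estimated from tree height and crown size via a species-dependent allometric equation. The coefficients and the crown-to-stem proxy below are hypothetical placeholders, not the authors' model.

```python
def estimate_stem_volume(height_m: float, crown_diameter_m: float,
                         a: float = 0.05, b: float = 2.0, c: float = 1.0) -> float:
    """Hypothetical allometric model: volume ≈ a * d^b * h^c, where the
    stem diameter d is approximated from the measured crown diameter.
    Coefficients a, b, c would be fitted per species from field data."""
    stem_diameter_m = 0.1 * crown_diameter_m  # crude crown-to-stem proxy
    return a * (stem_diameter_m ** b) * (height_m ** c)

# Example: a 20 m tree with a 6 m crown under the placeholder coefficients.
print(round(estimate_stem_volume(20.0, 6.0), 3), "m^3")
```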
14

opticSAM: Entwicklung einer optischen, selbstlernenden Störungsdiagnose in Verarbeitungsmaschinen / opticSAM: Development of an optical, self-learning fault diagnosis in processing machines

Schroth, Moritz 09 December 2019
No description available.
15

Optical Inspection for Soldering Fault Detection in a PCB Assembly using Convolutional Neural Networks

Bilal Akhtar, Muhammad January 2019
Convolutional Neural Networks (CNNs) have been established as a powerful tool to automate various computer vision tasks without requiring any a priori knowledge. Printed circuit board (PCB) manufacturers want to improve their product quality by employing vision-based automatic optical inspection (AOI) systems in PCB assembly manufacturing. An AOI system employs classic computer vision and image processing techniques to detect various manufacturing faults in a PCB assembly. Recently, CNNs have been used successfully at various stages of automatic optical inspection. However, none has used a 2D image of a PCB assembly directly as input to a CNN, and currently all available systems are specific to one PCB assembly and require many preprocessing steps or a complex illumination system to improve accuracy. This master thesis attempts to design an effective soldering fault detection system using a CNN applied to images of a PCB assembly, with a Raspberry Pi PCB assembly as the case in point.

Soldering fault detection is treated as an object detection problem. YOLO (short for "You Only Look Once") is a state-of-the-art fast object detection CNN. Although it was designed for object detection on publicly available datasets, we use YOLO as a benchmark to define the performance metrics for the proposed CNN. Besides accuracy, the effectiveness of a trained CNN also depends on its memory requirements and inference time: the accuracy of a CNN increases with each added convolutional layer, at the expense of increased memory requirements and inference time. The prediction layer of the proposed CNN is inspired by the YOLO algorithm, while the feature extraction layers are customized to our application and combine classical CNN components with residual connections, an inception module, and a bottleneck layer.

Experimental results show that state-of-the-art object detection algorithms are not efficient when applied to a new and different dataset. Our proposed CNN detection algorithm predicts more accurately than the YOLO algorithm, with a 3.0% increase in average precision, is less complex, requiring 50% fewer parameters, and infers in half the time taken by YOLO. The experimental results also show that a CNN can be an effective means of performing AOI, provided a sufficiently large dataset is available for training.
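The abstract names the building blocks of the custom feature extractor (residual connections, an inception module, a bottleneck layer) without giving the architecture. As a hedged sketch of what a standard residual bottleneck block looks like in PyTorch (not the thesis's actual layer configuration):

```python
import torch
import torch.nn as nn

class ResidualBottleneck(nn.Module):
    """1x1 reduce -> 3x3 conv -> 1x1 expand, with a skip connection.
    Channel counts are illustrative; the thesis's exact sizes are unknown."""
    def __init__(self, channels: int, squeeze: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, squeeze, kernel_size=1, bias=False),
            nn.BatchNorm2d(squeeze),
            nn.ReLU(inplace=True),
            nn.Conv2d(squeeze, squeeze, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(squeeze),
            nn.ReLU(inplace=True),
            nn.Conv2d(squeeze, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual (skip) connection keeps gradients flowing and
        # lets the block learn a refinement of its input.
        return self.act(x + self.body(x))

x = torch.randn(1, 64, 52, 52)              # e.g. a mid-level feature map
print(ResidualBottleneck(64, 16)(x).shape)  # torch.Size([1, 64, 52, 52])
```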
16

Object Detection via Contextual Information / Objektdetektion via Kontextuell Information

Stålebrink, Lovisa January 2022
Using computer vision to automatically process and understand images is becoming increasingly popular. One frequently used technique in this area is object detection, where the goal is to both localize and classify objects in images. Today's detection models are accurate, but there is still room for improvement: most models process objects independently and do not take any contextual information into account in the classification step. This thesis therefore investigates whether a performance improvement can be achieved by classifying all objects jointly, using contextual information. An architecture with the ability to learn relationships in this kind of information is the transformer. To investigate the achievable performance, a new architecture is constructed in which the classification step is replaced by a transformer block. The model is trained and evaluated on document images and shows promising results with an mAP score of 87.29, compared to an mAP of 88.19 achieved by Mask R-CNN, the object detector the new model is built upon.

Although the proposed model did not improve performance, it comes with benefits worth exploring further. By using contextual information, it can eliminate the need for non-maximum suppression, which removes one hand-crafted processing step. It also tends to learn relatively quickly, and a single pass over the dataset seems sufficient. The model, however, comes with some drawbacks, including a longer inference time due to the increase in model parameters, and its predictions are less confident than Mask R-CNN's. With further investigation and optimization, these drawbacks could be reduced and the performance of the model improved.
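The abstract describes replacing the per-object classification head with a transformer that classifies all detections jointly. A minimal PyTorch sketch of that idea, treating each detection's pooled RoI feature as one token, is given below; the feature dimension, class count, and layer sizes are assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class JointObjectClassifier(nn.Module):
    """Classify all detections in an image jointly: each detection's
    pooled RoI feature is a token, and self-attention shares context."""
    def __init__(self, feat_dim=256, num_classes=10, nhead=8, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (batch, num_detections, feat_dim)
        ctx = self.encoder(roi_feats)  # every token attends to all others
        return self.head(ctx)          # (batch, num_detections, num_classes)

feats = torch.randn(1, 12, 256)  # 12 detections with 256-d RoI features
print(JointObjectClassifier()(feats).shape)  # torch.Size([1, 12, 10])
```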
17

Robust Multi-Modal Fusion for 3D Object Detection : Using multiple sensors of different types to robustly detect, classify, and position objects in three dimensions. / Robust multi-modal fusion för 3D-objektdetektion : Använda flera sensorer av olika typer för att robust detektera, klassificera och positionera objekt i tre dimensioner.

Kårefjärd, Viktor January 2023
The computer vision task of 3D object detection is fundamentally necessary for autonomous driving perception systems. These vehicles typically feature a multitude of sensors, such as cameras, radars, and light detection and ranging (LiDAR) sensors. One neural network approach to making use of these sensor modalities is a multi-modal 3D object detection network with a fusion step that combines the information from multiple data streams to jointly predict bounding boxes for detected objects. How this step should be performed, however, remains largely an open question, as the literature is still young. Thus the question arises: how can information from different sensors be combined to perform 3D object detection for a real-world application, such as a mobile delivery robot with robustness requirements, and how should a fusion step be performed as part of a larger multi-modal fusion network? This work explores state-of-the-art multi-modal fusion models by testing them with sub-optimal sensor data augmentations to quantify robustness, including LiDAR point cloud subsampling and low-resolution LiDAR data. Sensor-to-sensor misalignments from poor calibration, decalibration, or spatial-temporal mis-synchronization are also simulated, and a set of fusion steps is compared and evaluated. Three novel fusion steps are proposed, of which the best performing is a convolutional fusion with an encoder-decoder and a squeeze-and-excitation block. The results indicate that early and late fusion methods are sensitive to sub-optimal LiDAR sensor conditions and are thus not suitable for an application with robust-detection requirements; deep-fusion-based models are preferred instead. Furthermore, a bird's-eye-view fusion model is shown not to be overly sensitive to small sensor-to-sensor misalignments, and the proposed fusion step with an encoder-decoder structure and a squeeze-and-excitation block can further limit misalignment-related performance deficits. Introducing sensor misalignment as a training augmentation is also shown to alleviate the problem and to generalize the fusion step under heavy misalignment.
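The best-performing fusion step above combines convolutional fusion with an encoder-decoder and a squeeze-and-excitation (SE) block. Below is a minimal PyTorch sketch of an SE block applied to concatenated camera and LiDAR bird's-eye-view feature maps; the channel sizes and the plain concatenation before the block are illustrative assumptions, not the thesis's exact design.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel re-weighting: global average pool, two FC layers, sigmoid."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # suppress uninformative channels, amplify useful ones

class ConcatSEFusion(nn.Module):
    """Concatenate two modality feature maps, fuse with a conv, re-weight."""
    def __init__(self, cam_ch=128, lidar_ch=128, out_ch=128):
        super().__init__()
        self.fuse = nn.Conv2d(cam_ch + lidar_ch, out_ch, 3, padding=1)
        self.se = SqueezeExcite(out_ch)

    def forward(self, cam_bev, lidar_bev):
        return self.se(self.fuse(torch.cat([cam_bev, lidar_bev], dim=1)))

cam = torch.randn(1, 128, 100, 100)
lidar = torch.randn(1, 128, 100, 100)
print(ConcatSEFusion()(cam, lidar).shape)  # torch.Size([1, 128, 100, 100])
```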
18

3D Object Detection Using Sidescan Sonar Images

Georgiev, Ivaylo January 2024
Sidescan sonars are tools used in seabed inspection and imaging. Being smaller and cheaper than the alternatives, they have attracted attention, and many studies have aimed to extract information about seabed elevation from the produced images. The main issue is that sidescan sonars do not provide elevation-angle information, so a 3D map of the seabed cannot be inferred directly. One of the most recent techniques to tackle this problem is neural rendering [1], in which the seabed bathymetry is implicitly represented by a neural network. The purpose of this thesis is (1) to find the minimum altitude change that can be detected using this technique, (2) to check whether the position of the sonar ensonification has any effect on these results, and (3) to check from how many sides it suffices to ensonify a region with an altitude change in order to detect it confidently. To conduct this research, missions of an autonomous underwater vehicle with sidescan sonar heads on both sides are simulated on a map onto which objects of various sizes and shapes are placed. Neural rendering is then used to reconstruct the bathymetry of the maps before and after the object insertion from the sidescan sonar data. The reconstructed seabed elevations are compared, and the smallest objects, by size or altitude, that are still detected (meaning that the height predicted by the model trained on the map with the objects is significantly larger than that of the model trained on the initial map) answer the first question. Those smallest objects are then placed on the same map again, and smaller autonomous underwater vehicle missions are used to check how many sides are needed for the objects to remain detectable. The conducted experiments suggest that objects with bathymetry elevations in the range of centimeters can be detected, and that in some cases ensonification from two sides is sufficient to detect an object with confidence.
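As a hedged sketch of the implicit representation referred to above (not the cited neural-rendering pipeline itself), a bathymetry can be modeled as an MLP mapping a horizontal position (x, y) to a seabed height, and change detection then reduces to comparing two trained networks on a query grid. The layer sizes and the detection threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImplicitBathymetry(nn.Module):
    """MLP h(x, y) -> seabed height, trained from sonar-derived constraints."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy).squeeze(-1)

def detect_change(model_before, model_after, xy_grid, threshold_m=0.05):
    """Flag query points where the 'after' model predicts a significantly
    higher seabed than the 'before' model (threshold is an assumption)."""
    with torch.no_grad():
        diff = model_after(xy_grid) - model_before(xy_grid)
    return diff > threshold_m

grid = torch.rand(1000, 2) * 100.0  # query points in a 100 m x 100 m patch
mask = detect_change(ImplicitBathymetry(), ImplicitBathymetry(), grid)
print(int(mask.sum()), "points flagged (untrained demo models)")
```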
19

Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

Vock, Dominik 08 May 2014
Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, and forest inventory are a few examples of fields that benefit from 3D surveying data. Compared to the original 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of 3D information generated. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex user-driven processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable to certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks, owing to the varying requirements of different fields of research. This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios and specifically the analysis and documentation of traffic accident scenes. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, the work adapts and validates various existing approaches to laser scan segmentation with regard to accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects. The approaches are evaluated for their suitability and limitations for the given tasks, as well as for possible combinations with other procedures. The knowledge obtained is used to develop new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides segmenting the point cloud data, this thesis presents different visualization and reconstruction methods that widen the range of possible applications of the developed system for data export and use in third-party software tools.
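Extracting the road surface is a natural first step for the street scenarios described above. As a hedged, self-contained illustration (not the segmentation procedure developed in the thesis), the sketch below recovers the dominant ground plane from a point cloud with a basic RANSAC plane fit in NumPy; the iteration count and inlier tolerance are assumptions.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=500, tol=0.05, rng=None):
    """Fit the dominant plane (e.g. the road) to an (N, 3) point cloud.
    Returns (normal, d) with normal . p + d = 0, plus the inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)
        inliers = dist < tol       # points within `tol` meters of the plane
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Demo: a flat patch with 20% scattered clutter above it.
ground = np.c_[np.random.rand(800, 2) * 10, np.random.randn(800) * 0.02]
clutter = np.c_[np.random.rand(200, 2) * 10, np.random.rand(200) * 3 + 0.5]
plane, mask = ransac_ground_plane(np.vstack([ground, clutter]))
print("inliers:", int(mask.sum()))
```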