131
Underwater Laser Scanning - Refractive Calibration, Self-calibration and Mapping for 3D Reconstruction. Bleier, Michael. January 2023.
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require tools to monitor coral growth, and geologists need recordings to model sediment transport.
Furthermore, millimeter-accurate measurements of defects and structures are essential for offshore construction projects, maintenance and inspection.
While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing and 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
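To make the refraction handling concrete, the following minimal sketch shows the geometric core of such a back-projection: a camera ray is refracted at a single flat air/water interface using Snell's law in vector form and then intersected with the laser light plane. This is a simplified stand-in, not the thesis's calibrated model (which also accounts for the glass port); the interface pose, laser plane and example numbers are hypothetical.

```python
import numpy as np

def refract(d, n, eta):
    """Snell's law in vector form: refract unit direction d at a surface with
    unit normal n pointing toward the camera (dot(d, n) < 0).
    eta = n_air / n_water < 1, so total internal reflection cannot occur."""
    cos_i = -np.dot(d, n)
    cos_t = np.sqrt(1.0 - eta**2 * (1.0 - cos_i**2))
    return eta * d + (eta * cos_i - cos_t) * n

def triangulate_refractive(d, p_if, n_if, p_laser, n_laser, eta=1.0 / 1.33):
    """Back-project a camera ray (camera at origin, direction d) through a flat
    air/water interface (point p_if, normal n_if) and intersect the refracted
    ray with the laser light plane (point p_laser, normal n_laser)."""
    d = d / np.linalg.norm(d)
    s = np.dot(p_if, n_if) / np.dot(d, n_if)   # where the ray enters the water
    entry = s * d
    d_w = refract(d, n_if, eta)                # ray direction in water
    t = np.dot(p_laser - entry, n_laser) / np.dot(d_w, n_laser)
    return entry + t * d_w                     # 3D point on the scanned surface

# Hypothetical example: interface 0.1 m in front of the camera, laser plane
# tilted 45 degrees and crossing the optical axis 1 m away.
print(triangulate_refractive(np.array([0.05, 0.0, 1.0]),
                             np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, -1.0]),
                             np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])))
```

The interface pose, plane parameters and refractive indices used here are exactly the kinds of quantities the proposed calibration framework has to estimate.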
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry are demonstrated with a new optical underwater scanner that uses fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
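The continuous registration mentioned above rests on repeatedly estimating a rigid transform that aligns new points with the map. As an illustration, the sketch below gives the standard SVD-based (Kabsch) least-squares solution for one alignment step, assuming point correspondences are already given; the actual pipeline with visual-inertial motion compensation is more involved.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t for
    corresponding (N, 3) point sets, via the Kabsch/SVD method."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src
```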
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time and increase accuracy, making underwater metrology more cost-effective.
132
Fluorescence imaging microscopy studies on single molecule diffusion and photophysical dynamics. Schäfer, Stephan. 27 March 2007.
In recent years, remarkable progress has been made in cell biology, e.g. by investigating the fluorescence of single molecules in biological cells, extending conventional ensemble techniques with respect to temporal and spatial resolution and the detection of particle subpopulations [82]. In addition to employing single fluorophores as "molecular beacons" to determine the position of biomolecules, single-molecule fluorescence studies give access to the photophysical dynamics of genetically encoded fluorescent proteins themselves. However, in order to obtain statistically consistent results, e.g. on the mobility behavior or the photophysical properties, the fluorescence image sequences have to be analyzed in a preferably automated and calibrated (unbiased) way. In this thesis, a single-molecule fluorescence optical setup was developed and calibrated, and experimental biological in-vitro systems were adapted to the needs of single-molecule imaging. Based on the fluorescence image sequences obtained, an automated analysis algorithm was developed and characterized, and its limits for reliable quantitative data analysis were determined. For lipid marker molecules diffusing in an artificial lipid membrane, the optimum way of analyzing single-molecule trajectories from the image sequences was explored. Furthermore, the effects of all relevant artifacts (specifically low signal-to-noise ratio, finite acquisition time and high spot density, in combination with photobleaching) on the recovered diffusion coefficients were carefully studied. The performance of the method was demonstrated in two series of experiments. In one series, the diffusion of a fluorescent lipid probe in artificial lipid bilayer membranes of giant unilamellar vesicles was investigated. In another series, the photoconversion and photobleaching behavior of the fluorescent protein Kaede-GFP was characterized and protein subpopulations were identified.
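As an illustration of the trajectory analysis involved, the following minimal sketch estimates a 2D diffusion coefficient from single-particle positions via the mean squared displacement, using MSD(τ) = 4Dτ for free two-dimensional diffusion. The short-lag fit and the simulated data are illustrative choices, not the thesis's calibrated procedure.

```python
import numpy as np

def diffusion_coefficient(track, dt, max_lag=4):
    """Estimate D from a 2D trajectory (N, 2) in um with frame interval dt in s.
    Uses MSD(lag) = 4*D*lag*dt and a short-lag linear fit, since long lags are
    noisy and tracks are shortened by photobleaching."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((track[l:] - track[:-l])**2, axis=1))
                    for l in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]   # um^2 per s
    return slope / 4.0

# Hypothetical check: simulated Brownian motion with D = 1 um^2/s, dt = 10 ms.
rng = np.random.default_rng(0)
steps = rng.normal(scale=np.sqrt(2 * 1.0 * 0.01), size=(1000, 2))
print(diffusion_coefficient(np.cumsum(steps, axis=0), dt=0.01))  # close to 1.0
```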
133
Benchmarking of Vision-Based Prototyping and Testing Tools. Balasubramanian, ArunKumar. 08 November 2017.
The demand for Advanced Driver Assistance System (ADAS) applications is increasing steadily, and their development requires efficient prototyping and real-time testing. ADTF (Automotive Data and Time Triggered Framework) is a software tool from Elektrobit used for the development, validation and visualization of vision-based applications, mainly for ADAS and autonomous driving. With the ADTF tool, image or video data can be recorded and visualized, and data can be processed and tested both online and offline. The development of ADAS applications involves image and video processing, and the algorithms have to be highly efficient and must satisfy real-time requirements. The main objective of this research is to integrate the OpenCV library with ADTF across platforms. The OpenCV library provides efficient image processing algorithms which can be used with ADTF for quick benchmarking and testing. An ADTF filter framework has been developed in which the OpenCV algorithms can be used directly, and the framework is tested with .DAT and image files following a modular approach. CMake, used to build the system conveniently, is also explained in this thesis. The ADTF filters are developed in C++ with Microsoft Visual Studio 2010, and the OpenMP API is used for parallel programming.
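As an illustration of the kind of OpenCV processing such a filter wraps, the sketch below times a standard edge-detection call on a single frame; the ADTF filter plumbing itself (a proprietary C++ SDK) is not reproduced here, and the input file name is a placeholder.

```python
import cv2

frame = cv2.imread("sample_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
assert frame is not None, "provide a test image"

t0 = cv2.getTickCount()
edges = cv2.Canny(frame, 50, 150)   # the OpenCV algorithm under test
ms = (cv2.getTickCount() - t0) / cv2.getTickFrequency() * 1e3
print(f"Canny took {ms:.2f} ms on a {frame.shape[1]}x{frame.shape[0]} frame")
```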
134
Optische Methoden zur Positionsbestimmung auf Basis von Landmarken [Optical Methods for Position Determination Based on Landmarks]. Bilda, Sebastian. 07 September 2017.
Indoor positioning is receiving more and more attention nowadays. Besides navigation through a building, location-based services, which provide additional information about specific objects, are of particular importance. Because GPS signals are too weak to penetrate buildings, other techniques for localization must be found. Besides the commonly used positioning via the evaluation of received radio signals, there are optical methods for localization with the help of landmarks. These camera-based procedures have the advantage that positioning with centimeter accuracy is often possible.
In this master thesis, the position in a building is determined through the detection of ArUco markers and door signs in camera images. The evaluation is carried out with the Microsoft Kinect v2 and the Lenovo Phab 2 Pro smartphone. Besides color images, both devices provide depth data generated by time-of-flight sensors. The distance to a detected landmark is calculated by comparing the landmark's corner points extracted from the image with the real geometric dimensions of the object taken from a database. In addition to this optical distance determination, the position is also determined from the depth data. Finally, both procedures are compared with each other and a statement is made about the accuracy and reliability of the algorithm developed in this thesis.
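A minimal sketch of the marker-based distance measurement described above, using OpenCV's ArUco module together with solvePnP: the marker's known 3D corner coordinates are matched to the detected image corners, and the norm of the resulting translation gives the range. The camera intrinsics, the 4 cm marker size and the image are hypothetical, and ArUco API details vary between OpenCV versions.

```python
import cv2
import numpy as np

K = np.array([[1050.0, 0, 960], [0, 1050.0, 540], [0, 0, 1]])  # hypothetical intrinsics
dist = np.zeros(5)    # hypothetical: distortion already corrected
MARKER = 0.04         # marker edge length in meters (from the landmark database)
# Marker corners in its own frame (z = 0), in ArUco's corner order.
obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], float) * MARKER / 2

gray = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)       # placeholder image
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

for c, marker_id in zip(corners, ids.ravel() if ids is not None else []):
    ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), K, dist)
    if ok:
        print(f"marker {marker_id}: distance {np.linalg.norm(tvec):.3f} m")
```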
135
Extraction of Key-Frames from an Unstable Video Feed. Vempati, Nikhilesh. 13 July 2017.
The APOLI project deals with automated power line inspection using highly automated unmanned aerial systems. Besides real-time damage assessment through on-board exploitation of high-resolution image data, postprocessing of the video data is necessary. This master thesis deals with the implementation of an isolator detector framework and a workflow in the Automotive Data and Time-triggered Framework (ADTF) that loads a video directly from a camera or from storage and extracts the key frames which contain objects of interest. This is done by implementing an object detection system in C++ and creating ADTF filters that detect the objects of interest and extract the key frames using a supervised learning platform. The use case is the extraction of frames from video samples that contain images of isolators on power transmission lines.
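For illustration, a simple difference-based key-frame selector is sketched below with a hypothetical threshold; the thesis itself selects key frames with a supervised isolator detector inside ADTF filters, which this sketch does not reproduce.

```python
import cv2
import numpy as np

def key_frames(video_path, thresh=30.0):
    """Yield (index, frame) pairs whose mean absolute gray-level difference to
    the last kept frame exceeds `thresh` (hypothetical heuristic)."""
    cap = cv2.VideoCapture(video_path)
    last, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last is None or np.mean(cv2.absdiff(gray, last)) > thresh:
            last = gray
            yield idx, frame
        idx += 1
    cap.release()

for i, f in key_frames("powerline.mp4"):   # placeholder video file
    cv2.imwrite(f"keyframe_{i:05d}.png", f)
```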
136
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System. Bdiwi, Mohamad. 12 August 2014.
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Hence, such tasks require combining different kinds of sensors in order to obtain full information about the work environment. However, from the point of view of control, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems. As a result, numerous control algorithms and different structures for vision/force robot control, e.g. shared and traded control, can be found in the scientific literature. The open questions in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be reliably used?
• How to define the most appropriate vision/force control structure?
In many previous works, one kind of vision/force control structure, pre-defined by the programmer, has been used while performing a specified task. In addition, if the task is modified or changed, it is very complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor is usually used as a position estimator) or are intended to avoid obstacles. Accordingly, much useful information provided by the sensors, which would help the robot perform the task autonomously, is lost.
In our opinion, these shortcomings in defining the most appropriate vision/force robot control and the weak utilization of all the information the sensors could provide impose important limits which prevent the robot from being versatile, autonomous, dependable and user-friendly. Therefore, helping to increase autonomy, versatility, dependability and user-friendliness in areas of robotics that require vision/force integration is the scope of this thesis. More concretely:
1. Autonomy: in terms of an automatic decision system which defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best structure of vision/force control depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should depend on its own sensors more than on reprogramming and human intervention. In other words, the robot system should be able to use all the available information provided by the vision and force sensors, not only about the target object but also for feature extraction of the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration which is suitable also for inexperienced users.
If the previous properties are achieved, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Decide automatically on the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to unforeseen events (see the sketch after this list).
• Benefit from all the advantages of different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce the human intervention or reprogramming during the execution of the task.
• Facilitate the task description and the entry of a priori knowledge for the user, even if he or she is inexperienced.
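The sketch below illustrates one simple decision rule of the kind such an automatic decision system might apply: free-space motion is vision controlled, and once contact is detected the normal axis switches to force control while the tangential axes stay vision controlled (a hybrid structure). Thresholds, gains and control laws are hypothetical placeholders, not the structures developed in this thesis.

```python
import numpy as np

CONTACT_FORCE = 2.0    # N, hypothetical contact-detection threshold

def visual_servo(offset_px, gain=0.001):
    """Proportional image-based law: pixel offset -> Cartesian velocity."""
    return np.array([gain * offset_px[0], gain * offset_px[1], 0.0])

def force_servo(f_z, desired=5.0, gain=0.01):
    """Proportional force law along the contact normal (z)."""
    return gain * (desired - f_z)

def select_control(force_meas, target_offset_px):
    """Choose the control mode once per cycle from the sensor data."""
    cmd = visual_servo(target_offset_px)
    if np.linalg.norm(force_meas) < CONTACT_FORCE:
        return "vision", cmd                   # no contact: pure visual servoing
    cmd[2] = force_servo(force_meas[2])        # contact: z force controlled
    return "hybrid", cmd

print(select_control(np.array([0.0, 0.0, 6.5]), np.array([12.0, -3.0])))
```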
137
ValidAX - Validierung der Frameworks AMOPA und XTRIEVAL [ValidAX - Validation of the Frameworks AMOPA and XTRIEVAL]. Berger, Arne; Eibl, Maximilian; Heinich, Stephan; Herms, Robert; Kahl, Stefan; Kürsten, Jens; Kurze, Albrecht; Manthey, Robert; Rickert, Markus; Ritter, Marc. 03 February 2015.
The project "ValidAX - Validation of the frameworks AMOPA and XTRIEVAL" examines the possibilities of developing the software frameworks AMOPA (Automated Moving Picture Annotator) and Xtrieval (Extensible Information Retrieval Framework), both created by the Chair of Media Informatics at TU Chemnitz, towards commercial usage and of integrating them into practical workflows. AMOPA is able to analyze arbitrary audiovisual media and to generate metadata such as shot boundaries, scenes, persons, audio transcriptions and others. Xtrieval is a highly flexible tool that enables search in arbitrary media. For the implementation of the project, a total of three possible application scenarios were defined in which the frameworks were exposed to different requirements:
• Archiving
• Interactive and automated TV
• Medical video analysis
According to the scenarios, the frameworks were optimized, and technical workflows were designed and implemented. Demonstrators serve to attract further commercialization partners.
138
Optische Methoden zur Positionsbestimmung auf Basis von Landmarken [Optical Methods for Position Determination Based on Landmarks]. Bilda, Sebastian. 24 April 2017.
Indoor positioning is receiving more and more attention nowadays. Besides navigation through a building, location-based services, which provide additional information about specific objects, are of particular importance. Because GPS signals are too weak to penetrate buildings, other techniques for localization must be found. Besides the commonly used positioning via the evaluation of received radio signals, there are optical methods for localization with the help of landmarks. These camera-based procedures have the advantage that positioning with centimeter accuracy is often possible.
In this master thesis, the position in a building is determined through the detection of ArUco markers and door signs in camera images. The evaluation is carried out with the Microsoft Kinect v2 and the Lenovo Phab 2 Pro smartphone. Besides color images, both devices provide depth data generated by time-of-flight sensors. The distance to a detected landmark is calculated by comparing the landmark's corner points extracted from the image with the real geometric dimensions of the object taken from a database. In addition to this optical distance determination, the position is also determined from the depth data. Finally, both procedures are compared with each other and a statement is made about the accuracy and reliability of the algorithm developed in this thesis.
139
Benchmarking of Vision-Based Prototyping and Testing Tools. Balasubramanian, ArunKumar. 21 September 2017.
The demand for Advanced Driver Assistance System (ADAS) applications is increasing steadily, and their development requires efficient prototyping and real-time testing. ADTF (Automotive Data and Time Triggered Framework) is a software tool from Elektrobit used for the development, validation and visualization of vision-based applications, mainly for ADAS and autonomous driving. With the ADTF tool, image or video data can be recorded and visualized, and data can be processed and tested both online and offline. The development of ADAS applications involves image and video processing, and the algorithms have to be highly efficient and must satisfy real-time requirements. The main objective of this research is to integrate the OpenCV library with ADTF across platforms. The OpenCV library provides efficient image processing algorithms which can be used with ADTF for quick benchmarking and testing. An ADTF filter framework has been developed in which the OpenCV algorithms can be used directly, and the framework is tested with .DAT and image files following a modular approach. CMake, used to build the system conveniently, is also explained in this thesis. The ADTF filters are developed in C++ with Microsoft Visual Studio 2010, and the OpenMP API is used for parallel programming.
140
Fluorescence imaging microscopy studies on single molecule diffusion and photophysical dynamics. Schäfer, Stephan. 09 March 2007.
In recent years, remarkable progress has been made in cell biology, e.g. by investigating the fluorescence of single molecules in biological cells, extending conventional ensemble techniques with respect to temporal and spatial resolution and the detection of particle subpopulations [82]. In addition to employing single fluorophores as "molecular beacons" to determine the position of biomolecules, single-molecule fluorescence studies give access to the photophysical dynamics of genetically encoded fluorescent proteins themselves. However, in order to obtain statistically consistent results, e.g. on the mobility behavior or the photophysical properties, the fluorescence image sequences have to be analyzed in a preferably automated and calibrated (unbiased) way. In this thesis, a single-molecule fluorescence optical setup was developed and calibrated, and experimental biological in-vitro systems were adapted to the needs of single-molecule imaging. Based on the fluorescence image sequences obtained, an automated analysis algorithm was developed and characterized, and its limits for reliable quantitative data analysis were determined. For lipid marker molecules diffusing in an artificial lipid membrane, the optimum way of analyzing single-molecule trajectories from the image sequences was explored. Furthermore, the effects of all relevant artifacts (specifically low signal-to-noise ratio, finite acquisition time and high spot density, in combination with photobleaching) on the recovered diffusion coefficients were carefully studied. The performance of the method was demonstrated in two series of experiments. In one series, the diffusion of a fluorescent lipid probe in artificial lipid bilayer membranes of giant unilamellar vesicles was investigated. In another series, the photoconversion and photobleaching behavior of the fluorescent protein Kaede-GFP was characterized and protein subpopulations were identified.