About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Factors affecting brightness and colour vision under water

Emmerson, Paul G. January 1984 (has links)
Both theoretical and practical importance can be attached to attempts to model human threshold and supra-threshold visual performance under water. Previously, emphasis has been given to the integration of visual data from experiments conducted in air with data of the physical specification of the underwater light field. However, too few underwater studies have been undertaken for the validity of this approach to be assessed. The present research therefore was concerned with the acquisition of such data. Four experiments were carried out: (a) to compare the predicted and obtained detection thresholds of achromatic targets, (b) to measure the relative recognition thresholds of coloured targets, (c) to compare the predicted and obtained supra-threshold appearance of coloured targets at various viewing distances and under different experimental instructions, (d) to compare the predicted and obtained detection thresholds for achromatic targets under realistic search conditions. Within each experiment, observers were tested on visual tasks in the field and in laboratory simulations. Physical specifications of targets and backgrounds were determined by photometry and spectroradiometry. 
The data confirmed that: (a) erroneous predictions of the detection threshold could occur when the contributions of absorption and scattering to the attenuation of light were not differentiated, (b) the successful replication of previous findings for the relative recognition thresholds of colours depended on the brightness of the targets, (c) the perceived change in target colour with increasing viewing distance was less than that measured physically, implying the presence of a colour constancy mechanism other than chromatic adaptation and simultaneous colour contrast; the degree of colour constancy also varied with the type of target and experimental instructions, (d) the successful prediction of the effects of target-observer motion and target location uncertainty required more than simple numerical corrections to the basic detection threshold model. It was concluded that further progress in underwater visibility modelling is possible provided that the tendency to oversimplify human visual performance is suppressed.
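Finding (a) — that predictions fail when absorption and scattering are not differentiated within the attenuation of light — can be sketched with the standard hydrologic-optics (Duntley) relation for horizontal viewing, in which the beam attenuation coefficient is the sum of an absorption and a scattering coefficient. This is an illustrative sketch of the textbook relation, not the thesis's model; the parameter names and the horizontal-path assumption are choices of the example.

```python
import math

def apparent_contrast(c0, a, b, r):
    """Apparent contrast of a target viewed along a horizontal path.

    c0: inherent contrast at zero viewing distance
    a:  absorption coefficient (1/m)
    b:  scattering coefficient (1/m)
    r:  viewing distance (m)
    Contrast decays with the beam attenuation coefficient c = a + b,
    which is why the separate contributions of absorption and
    scattering matter, not just a single lumped attenuation figure.
    """
    c = a + b
    return c0 * math.exp(-c * r)

def detection_range(c0, a, b, threshold):
    """Distance at which apparent contrast falls to the liminal contrast."""
    return math.log(c0 / threshold) / (a + b)
```

Substituting a single measured diffuse attenuation coefficient in place of c = a + b misstates the contrast loss along the sight path, which is the failure mode finding (a) describes.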
2

A Deep Learning Approach To Target Recognition In Side-Scan Sonar Imagery

Unknown Date (has links)
Automatic target recognition in autonomous underwater vehicles has been a daunting task, largely due to the noisy nature of sonar imagery and to the lack of publicly available sonar data. Machine learning techniques have made great strides in tackling this problem, although little research has addressed deep learning techniques for side-scan sonar imagery. Here, a state-of-the-art deep learning object detection method is adapted for side-scan sonar imagery, with results supporting a simple yet robust method to detect objects/anomalies along the seabed. A systematic procedure was employed to transfer-learn a pre-trained convolutional neural network so that it learns the pixel-intensity-based features of seafloor anomalies in sonar images. Using this process, newly trained convolutional neural network models were produced from relatively small training datasets and tested, showing reasonably accurate anomaly detection and classification with few to no false alarms. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
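The frozen-backbone/retrained-head idea behind transfer learning can be illustrated with a toy stand-in: a fixed (here random) convolutional filter bank playing the role of the pre-trained feature extractor, with only a logistic-regression head trained on the new data. This is a minimal sketch of the procedure's shape, not the thesis's pipeline, which fine-tunes a real pre-trained object-detection network; the filter sizes and training hyperparameters below are arbitrary choices of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2D correlation used as a fixed feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# "Frozen backbone": a bank of fixed filters standing in for the
# pre-trained convolutional layers (never updated during training).
FILTERS = [rng.standard_normal((5, 5)) for _ in range(8)]

def features(img):
    """Global-average-pooled responses of the frozen filter bank."""
    return np.array([conv2d_valid(img, k).mean() for k in FILTERS])

def train_head(X, y, lr=0.5, steps=500):
    """Retrain only the classification head (logistic regression)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                               # logistic-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

Training only the small head on features from a frozen extractor is what keeps the required dataset small, mirroring the abstract's point about producing usable models from relatively small training sets.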
3

Fusion multimodale pour la cartographie sous-marine / Multimodal fusion for underwater mapping

Méline, Arnaud 31 January 2013 (has links)
This work aims to analyze natural underwater scenes, focusing on mapping underwater environments in 3D. Many methods exist today to solve this problem; the originality of this work lies in the fusion of two maps obtained from sensors of different resolutions. Initially, an autonomous vehicle (or a boat) surveys the seabed with a multibeam sonar and creates a first global map of the area. This map is then divided into small cells representing a mosaic of the seabed. A second analysis is then performed on particular cells using a second, higher-resolution sensor, providing a detailed 3D map of each cell. An autonomous underwater vehicle (AUV) or a diver with a stereoscopic vision system performs this acquisition. The project is divided into two parts: the first focuses on the 3D reconstruction of underwater scenes in a constrained environment using a stereoscopic pair; the second investigates the multimodal aspect. In this study, the method is used to obtain accurate reconstructions of objects of archaeological interest (statues, amphorae, etc.) detected on the global map.

The first part of the work concerns the 3D reconstruction of the underwater scene. Even though the vision community has come to a better understanding of this type of image, the study of natural underwater scenes still poses many problems. Underwater noise was taken into account during the creation of the 3D video model and during camera calibration. A study of robustness to this noise was performed on two methods for detecting and matching feature points, yielding accurate and robust feature points for the 3D model. Epipolar geometry was used to project these points into 3D, and texture was added to the surfaces obtained by Delaunay triangulation.

The second part consists of fusing the 3D model obtained previously with the acoustic map. To align the two 3D models (the video model and the acoustic model), a coarse registration is first applied by manually selecting a few pairs of corresponding points on the two point clouds. To increase the accuracy of this registration, an ICP (Iterative Closest Point) algorithm is used. In this work, a multimodal 3D underwater map was created from the "video" 3D models and a global acoustic map.
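The coarse-then-fine registration step described above can be sketched with a minimal point-to-point ICP: each iteration matches every source point to its nearest target point, then solves for the best rigid transform in closed form via SVD (the Kabsch solution). This is a sketch under the assumption that a coarse manual alignment has already been applied, as in the thesis; it is not the thesis's implementation, and the brute-force matching is O(n²) for clarity only.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20):
    """Point-to-point ICP refining a coarse alignment of two clouds."""
    src = source.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

Production libraries such as Open3D ship tuned ICP variants with spatial indexing; the structure, however, is the same: correspondences, closed-form rigid fit, iterate until the residual stabilizes.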
