21

FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion

Karlsson, Jonas January 2015 (has links)
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is less severe than on standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. Several different algorithms were applied to an image set and their results evaluated, and the most suitable image fusion algorithm was identified. A crucial part of that algorithm has been implemented on an FPGA, a programmable integrated circuit, and it produces processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be realised with an FPGA.
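As an illustration only of the kind of visible/near-infrared fusion the abstract describes, the sketch below blends NIR detail into the visible luminance channel while leaving the chrominance untouched, one common way to dehaze while preserving colour. The function name, the blending parameter and the Gaussian base/detail split are assumptions, not the algorithm selected in the thesis, and the two inputs are assumed to be co-registered and equally sized.

```python
# Hypothetical sketch: luminance/chrominance fusion of a visible (BGR) image
# with a co-registered near-infrared (single-channel) image of the same size.
import cv2
import numpy as np

def fuse_vis_nir(vis_bgr: np.ndarray, nir: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend NIR detail into the visible luminance; keep chrominance for colour."""
    ycrcb = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)
    nir_f = nir.astype(np.float32)

    # Haze mostly suppresses local contrast in the visible luminance, so keep the
    # base (low-frequency) layer from the visible image and mix in detail
    # (high-frequency) content from the NIR image.
    vis_base = cv2.GaussianBlur(y, (0, 0), sigmaX=5)
    nir_base = cv2.GaussianBlur(nir_f, (0, 0), sigmaX=5)
    detail = alpha * (y - vis_base) + (1.0 - alpha) * (nir_f - nir_base)
    fused_y = np.clip(vis_base + detail, 0, 255)

    fused = cv2.merge([fused_y, cr, cb]).astype(np.uint8)
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```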
22

Towards Reliable Computer Vision in Aviation: An Evaluation of Sensor Fusion and Quality Assessment

Björklund, Emil, Hjorth, Johan January 2020 (has links)
Research in the aviation industry centres on two major areas: increased safety and a reduced environmental footprint. This thesis investigates the possibilities of increased situational awareness with computer vision in avionics systems. Image fusion methods are evaluated with appropriate pre-processing of three image sensors, one in the visual spectrum and two in the infrared spectrum. The sensor setup is chosen to cope with the varying weather and operational conditions of an aircraft, with a focus on the final approach and landing phases. An extensive set of image quality assessment metrics, derived from a systematic review, is applied to provide a precise evaluation of the image quality of the fusion methods. Four image fusion methods are evaluated in total, two of which are convolutional-network based and use the networks for feature extraction in the detail layers; approaches based on visual saliency maps and sparse representation are also evaluated. With the methods implemented in MATLAB, the results show that a conventional method using a rolling guidance filter for layer separation and a visual saliency map gives the best results. This finding is confirmed by a subjective ranking test in which the image quality of the fusion methods is evaluated further.
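The best-performing method above combines layer separation with a visual saliency map. Purely as a hedged sketch of that general scheme, the code below performs two-scale fusion of two co-registered single-channel images; a Gaussian filter stands in for the rolling guidance filter, and the Laplacian-based saliency measure and max-abs detail rule are assumptions rather than the thesis implementation.

```python
# Rough two-scale fusion with visual-saliency weights (single-channel inputs).
import cv2
import numpy as np

def saliency_weight(img: np.ndarray) -> np.ndarray:
    """Simple visual saliency: locally averaged Laplacian magnitude."""
    lap = np.abs(cv2.Laplacian(img, cv2.CV_32F, ksize=3))
    return cv2.GaussianBlur(lap, (0, 0), sigmaX=7) + 1e-6

def fuse_two_scale(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a.astype(np.float32)
    b = b.astype(np.float32)
    # Layer separation: base (smoothed) and detail (residual) layers.
    base_a = cv2.GaussianBlur(a, (0, 0), sigmaX=5)
    base_b = cv2.GaussianBlur(b, (0, 0), sigmaX=5)
    det_a, det_b = a - base_a, b - base_b
    # Saliency-normalised weight maps decide each sensor's base contribution.
    w_a, w_b = saliency_weight(a), saliency_weight(b)
    total = w_a + w_b
    w_a, w_b = w_a / total, w_b / total
    # Detail layers are merged with a max-absolute rule to keep sharp structure.
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    fused = w_a * base_a + w_b * base_b + detail
    return np.clip(fused, 0, 255).astype(np.uint8)
```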
23

Color Correction and Contrast Enhancement for Natural Images and Videos / Correction des couleurs et amélioration du contraste pour images et vidéos naturelles

Tian, Qi-Chong 04 October 2018 (has links)
Image enhancement is a family of techniques for improving the visual quality of an image, and it plays a very important role in image processing and computer vision. Specifically, we consider color correction and contrast enhancement to improve image quality. In the first part of this thesis, we focus on color correction for natural images. Firstly, we give a brief review of color correction. Secondly, we propose an efficient color correction method for image stitching via histogram specification and global mapping. Thirdly, we present a color consistency approach for image collections based on range-preserving histogram specification. In the second part, we turn to contrast enhancement for natural images and videos. Firstly, we give a brief review of contrast enhancement. Secondly, we propose a naturalness-preserving global contrast enhancement method that avoids over-enhancement. Thirdly, we present a fusion method based on a variational framework for enhancing non-uniformly illuminated images, which avoids both over-enhancement and under-enhancement. Finally, we extend the fusion-based framework to enhance videos with a temporally consistent strategy that does not introduce flickering artifacts.
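Histogram specification, the core operation behind the color-correction methods listed above, can be sketched in a few lines. The following is a generic illustration, not the code from the thesis; `histogram_specification` is a hypothetical helper that remaps one uint8 channel so its histogram matches that of a reference channel.

```python
import numpy as np

def histogram_specification(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the histogram of `source` (uint8) to that of `reference` (uint8)."""
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source grey level, pick the reference level with the closest CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[source]

# Applied per channel, this yields a global mapping that pulls the colors of one
# image toward those of another, e.g. when stitching two overlapping photographs.
```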
24

Comparison and Fusion of space borne L-, C- and X- Band SAR Images for Damage Identification in the 2008 Sichuan Earthquake

LAU, SIN WAI January 2011 (has links)
Remote sensing has been widely used in disaster management. However, optical imagery is not always feasible for immediate damage assessment: in the case of the Sichuan earthquake in 2008, the damaged areas were covered by cloud and fog most of the time. All-weather SAR imagery could instead provide information on the damaged areas, so more effort is needed to explore the usability of SAR data. To this end, this research studies the ability of various SAR data to identify damage through image classification, and evaluates the effectiveness of fusing data from different sensors in the classification. Three types of SAR imagery were acquired over Qushan town, a heavily damaged zone in the Sichuan earthquake: ALOS PALSAR L-band, RADARSAT-1 C-band and TerraSAR-X X-band images. A maximum likelihood classification method is applied to the images, with four classes defined in the study area: water, collapsed area, built-up area and landslide area. The ability of each band to identify these four classes is studied and the overall classification accuracy is analysed. Furthermore, fusion of the three types of imagery is performed, and the effectiveness and accuracy of classification on the fused images are evaluated. The results show that the classification accuracy from any individual SAR image is not ideal: the overall accuracy is 30.383% for PALSAR and 31.268% for RADARSAT-1, while TerraSAR-X only reaches 37.168%. The accuracy statistics show that TerraSAR-X performs best in classifying these four classes. SAR image fusion gives better classification results. Pairwise fusion of PALSAR and RADARSAT-1, PALSAR and TerraSAR-X, and RADARSAT-1 and TerraSAR-X gives overall classification accuracies of 41.88%, 42.478% and 37.758% respectively, and triple image fusion reaches 52.507%; all are higher than the results from the individual images. The study illustrates that the very-high-resolution (VHR) TerraSAR-X X-band SAR data has a higher ability to classify damage, and that fusion of different bands improves the classification accuracy.
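For readers unfamiliar with the classifier used here, the sketch below shows per-pixel Gaussian maximum likelihood classification of a multi-band (for example, fused) image stack. The class list, the training-mask interface and the covariance regularisation term are illustrative assumptions, not the thesis code.

```python
# Minimal sketch: per-pixel Gaussian maximum likelihood classification of an
# H x W x B band stack, given boolean training masks for each class.
import numpy as np

CLASSES = ["water", "collapsed", "built-up", "landslide"]

def fit_gaussians(stack: np.ndarray, training_masks: dict) -> dict:
    """training_masks maps a class name to a boolean H x W mask of training pixels."""
    params = {}
    for name, mask in training_masks.items():
        samples = stack[mask]                                  # N x B training pixels
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(stack.shape[-1])
        params[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(stack: np.ndarray, params: dict) -> np.ndarray:
    h, w, b = stack.shape
    pixels = stack.reshape(-1, b)
    scores = []
    for name in CLASSES:
        mean, inv_cov, logdet = params[name]
        d = pixels - mean
        # Multivariate Gaussian log-likelihood (constant terms dropped).
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv_cov, d) + logdet))
    return np.argmax(np.stack(scores, axis=1), axis=1).reshape(h, w)
```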
25

Use of Thermal Imagery for Robust Moving Object Detection

Bergenroth, Hannah January 2021 (has links)
This work proposes a system that utilizes both infrared and visual imagery to create a more robust object detection and classification system. The system consists of two main parts: a moving object detector and a target classifier. The first stage detects moving objects in the visible and infrared spectra using background subtraction based on Gaussian Mixture Models; low-level fusion is then performed to combine the foreground regions from the respective domains. In the second stage, a Convolutional Neural Network (CNN), pre-trained on the ImageNet dataset, classifies the detected targets into one of the pre-defined classes: human and vehicle. The performance of the proposed object detector is evaluated on multiple video streams recorded in different areas and under various weather conditions, which form a broad basis for testing the suggested method. The accuracy of the classifier is evaluated on images generated experimentally by the moving object detection stage, supplemented with the publicly available CIFAR-10 and CIFAR-100 datasets. In terms of detection results, the low-level fusion method proves more effective than using either domain separately. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
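A minimal sketch of the detection stage, assuming two co-registered and equally sized video streams: OpenCV's MOG2 Gaussian-mixture background subtractor produces one foreground mask per domain, and low-level fusion is taken here to be a per-pixel union of the masks. File names and parameters are placeholders, not the thesis configuration.

```python
# Sketch of the detection stage (assumes co-registered, equally sized streams;
# file names are placeholders).
import cv2

vis_cap = cv2.VideoCapture("visible.mp4")
ir_cap = cv2.VideoCapture("thermal.mp4")
vis_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
ir_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok_v, vis_frame = vis_cap.read()
    ok_i, ir_frame = ir_cap.read()
    if not (ok_v and ok_i):
        break
    # Gaussian-mixture background subtraction gives one foreground mask per domain.
    vis_mask = vis_bg.apply(vis_frame)
    ir_mask = ir_bg.apply(ir_frame)
    # Low-level fusion: a pixel is foreground if either sensor flags it.
    fused_mask = cv2.bitwise_or(vis_mask, ir_mask)
    # Connected regions of fused_mask would then be cropped and passed to the
    # CNN classifier (human / vehicle) in the second stage.

vis_cap.release()
ir_cap.release()
```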
26

Multi-Source Fusion for Weak Target Images in the Industrial Internet of Things

Mao, Keming, Srivastava, Gautam, Parizi, Reza M., Khan, Mohammad S. 01 May 2021 (has links)
Information fusion in Industrial Internet of Things (IIoT) environments suffers from several problems, such as weak intelligent visual target positioning, disappearing features, and large errors in the visual positioning process. This paper therefore proposes a weak-target positioning method based on multi-information fusion, the "confidence interval method". The basic idea is to treat the brightness and gray values of the target feature image area as a population with a certain mean and standard deviation in IIoT environments. From the mean and the standard deviation, and using a reasonable confidence level, a critical threshold is obtained. Compared with the threshold obtained by the maximum variance method, this threshold is better suited to segmenting key image features in the presence of interference. After interpolation and de-noising, the method is applied to locating moving weak targets in complex IIoT systems. Experimental analysis in the metallurgical industry shows that the proposed method achieves better performance and stronger feature resolution.
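A rough sketch of the confidence-interval idea as described in the abstract: treat the gray values of the target region as a population with a mean and standard deviation, and turn a chosen confidence level into a critical threshold. The z-value computation, the direction of the comparison and the helper names are assumptions, since the paper's exact formulation is not given here.

```python
# Illustrative confidence-interval thresholding (not the paper's exact method).
import numpy as np
from scipy import stats

def confidence_interval_threshold(region: np.ndarray, confidence: float = 0.95) -> float:
    """Return a critical gray-level threshold derived from the region's statistics."""
    mu = float(region.mean())
    sigma = float(region.std())
    z = stats.norm.ppf(0.5 + confidence / 2.0)   # two-sided critical value, e.g. 1.96
    return mu + z * sigma

def segment(image: np.ndarray, region: np.ndarray) -> np.ndarray:
    t = confidence_interval_threshold(region)
    # Pixels brighter than the critical threshold are treated as target features
    # (assumed polarity; the opposite comparison would suit dark targets).
    return (image.astype(np.float32) > t).astype(np.uint8)
```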
27

Low-Resolution Infrared and High-Resolution Visible Image Fusion Based on U-NET

Lin, Hsuan 11 August 2022 (has links)
No description available.
28

A No-reference Image Enhancement Quality Metric and Fusion Technique

Headlee, Jonathan Michael 27 May 2015 (has links)
No description available.
29

Cognitive Analysis of Multi-sensor Information

Fox, Elizabeth Lynn January 2015 (has links)
No description available.
30

Automated Complexity-Sensitive Image Fusion

Jackson, Brian Patrick January 2014 (has links)
No description available.
