1 |
Generic Model-Agnostic Convolutional Neural Networks for Single Image Dehazing. Liu, Zheng. January 2018.
Haze and smog are among the most common environmental factors impacting image quality and, therefore, image analysis. In this paper, I propose an end-to-end generative method for the single image dehazing problem. It is based on a fully convolutional network with effective network structures that recognize the haze structure in input images and restore clear, haze-free ones. The proposed method is agnostic in the sense that it does not exploit the atmosphere scattering model; instead, it makes use of the advantage of convolutional networks in feature extraction and transfer. Somewhat surprisingly, it achieves performance superior to all existing state-of-the-art image dehazing methods, even on SOTS outdoor images, which are synthesized using the atmosphere scattering model. To address its weakness on indoor hazy images and to enhance the visual quality of the dehazed images, a lightweight parallel network is put forward. It employs a different convolution strategy, extracting features with a larger receptive field to generate a complementary image. With the help of this parallel stream, the fusion of the two outputs outperforms other methods in PSNR and SSIM. / Thesis / Master of Applied Science (MASc)
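The abstract does not give the architecture, but the two-branch idea it describes can be sketched. Below is a minimal, hypothetical PyTorch sketch (layer counts, channel widths, and the dilated-convolution choice are all assumptions, not the thesis's network): a main fully convolutional stream, a parallel stream with a larger receptive field, and a fusion of the two complementary outputs.

```python
import torch
import torch.nn as nn

class TwoBranchDehazer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Main branch: ordinary 3x3 convolutions.
        self.main = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        # Parallel branch: dilated 3x3 convolutions enlarge the receptive field.
        self.parallel = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        # Fusion: concatenate the two outputs and blend with a 1x1 convolution.
        self.fuse = nn.Conv2d(6, 3, 1)

    def forward(self, hazy):
        return self.fuse(torch.cat([self.main(hazy), self.parallel(hazy)], dim=1))

out = TwoBranchDehazer()(torch.rand(1, 3, 64, 64))  # -> (1, 3, 64, 64)
```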
2 |
GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing. Ma, Yongrui. January 2019.
We propose an end-to-end trainable Convolutional Neural Network (CNN), named GridDehazeNet, for single image dehazing. GridDehazeNet consists of three modules: pre-processing, backbone, and post-processing. The trainable pre-processing module generates learned inputs with better diversity and more pertinent features than the derived inputs produced by hand-selected pre-processing methods. The backbone module implements a novel attention-based multi-scale estimation on a grid network, which effectively alleviates the bottleneck issue often encountered in conventional multi-scale approaches. The post-processing module helps reduce artifacts in the final output. Experimental results indicate that GridDehazeNet outperforms the state-of-the-art on both synthetic and real-world images. The proposed dehazing method does not rely on the atmosphere scattering model, and we provide an explanation of why it is not necessarily beneficial to take advantage of the dimension reduction offered by that model, even when only the dehazing results on synthetic images are concerned. / Thesis / Master of Applied Science (MASc)
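For reference, the atmosphere scattering model that this abstract (and the previous one) refers to is conventionally written as

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where $I$ is the observed hazy image, $J$ the scene radiance, $A$ the global atmospheric light, $t$ the transmission, $\beta$ the scattering coefficient, and $d$ the scene depth. The dimension reduction mentioned at the end of the abstract refers to the fact that, under this model, dehazing reduces to estimating only $t$ and $A$ rather than a free-form image-to-image mapping.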
3 |
A Dual-Branch Attention Guided Context Aggregation Network for Non-Homogeneous Dehazing. Song, Xiang. January 2021.
Image degradation arises from various environmental conditions due to the existence of aerosols such as fog, haze, and dust. These phenomena reduce image visibility by creating color distortion, reducing contrast, and fading object surfaces. Although end-to-end deep learning approaches have made significant progress in the field of homogeneous dehazing, the image quality these algorithms achieve on non-homogeneous real-world images has not yet been satisfactory. We argue that two main factors are responsible for the problem: 1) the unbalanced processing of high-level and low-level information in conventional dehazing algorithms, and 2) the lack of trainable data pairs. To address these two problems, we propose a parallel dual-branch design that aims to balance the processing of high-level and low-level information and, through a transfer learning method (sketched below), utilize small data sets to their full potential. The results from the two parallel branches are aggregated in a simple fusion tail, in which the high-level and low-level information are fused and the final result is generated. To demonstrate the effectiveness of our proposed method, we present extensive experimental results in the thesis. / Thesis / Master of Applied Science (MASc)
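The transfer-learning component can be illustrated independently of the dual-branch architecture. A minimal sketch, assuming an ImageNet-pretrained ResNet-18 (via torchvision 0.13+) stands in for whatever backbone the thesis uses, with the early stages frozen so that a small dehazing dataset is not overfit:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

encoder = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(*list(encoder.children())[:-2])  # drop avgpool and fc

# Freeze the stem and first residual stage; later stages adapt to haze data.
for name, p in backbone.named_parameters():
    if name.startswith(("0.", "1.", "4.")):
        p.requires_grad = False

decoder = nn.Sequential(  # placeholder reconstruction head
    nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 3, 3, padding=1),
)

trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable + list(decoder.parameters()), lr=1e-4)
```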
4 |
Underwater image enhancement: Using Wavelength Compensation and Image Dehazing (WCID). Chen, Ying-Ching. 25 July 2011.
Light scattering and color shift are two major sources of distortion in underwater photography. Light scattering is caused by light incident on objects being reflected and deflected multiple times by particles present in the water before reaching the camera, which in turn lowers the visibility and contrast of the captured image. Color shift corresponds to the varying degrees of attenuation encountered by light of different wavelengths traveling in the water, rendering ambient underwater environments dominated by a bluish tone.

This paper proposes a novel approach to enhancing underwater images by a dehazing algorithm with wavelength compensation. Once the depth map, i.e., the distances between the objects and the camera, is estimated by the dark channel prior, the light intensities of the foreground and background are compared to determine whether an artificial light source was employed during the image capturing process. After compensating for the effect of artificial light, the haze caused by light scattering is removed by the dehazing algorithm. Next, the image scene depth is estimated according to the residual energy ratios of different wavelengths in the background. Based on the amount of attenuation corresponding to each light wavelength, color shift compensation is conducted to restore color balance.

A super-resolution image can offer more details, which are important and necessary in low-resolution underwater images. This paper combines Gradient-Based Super-Resolution and Iterative Back-Projection (IBP) to propose the Cocktail Super-Resolution algorithm, using a bilateral filter to remove the chessboard effect and ringing effect along image edges and improve image quality.

Underwater videos of various resolutions downloaded from the YouTube website are processed employing WCID, histogram equalization, and a traditional dehazing algorithm, respectively. Test results demonstrate that the proposed WCID yields videos with significantly enhanced visibility and superior color fidelity.
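The wavelength-compensation step lends itself to a short sketch. The per-channel residual energy ratios below are commonly cited ballpark values for ocean water (red attenuates fastest), assumed here for illustration, and the depth map is a placeholder rather than the dark-channel estimate the abstract describes:

```python
import numpy as np

NRER = np.array([0.83, 0.95, 0.97])  # assumed R, G, B residual energy ratios per metre

def compensate_wavelength(img_rgb: np.ndarray, depth_m: np.ndarray) -> np.ndarray:
    """Amplify each channel by the energy it lost over the water column."""
    out = np.empty_like(img_rgb)
    for c in range(3):
        # Fraction of energy surviving depth_m metres of propagation in channel c.
        residual = NRER[c] ** depth_m
        out[..., c] = img_rgb[..., c] / np.maximum(residual, 1e-3)
    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(120, 160, 3)   # stand-in underwater frame in [0, 1]
depth = np.full((120, 160), 5.0)      # stand-in 5 m scene depth map
restored = compensate_wavelength(frame, depth)
```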
5 |
A Discrete Wavelet Transform GAN for Non-Homogeneous Dehazing. Fu, Minghan. January 2021.
Hazy images are often subject to color distortion, blurring, and other visible quality degradation. Some existing CNN-based methods have shown great performance in removing homogeneous haze, but they are not robust in the non-homogeneous case. The reason is twofold. Firstly, due to the complicated haze distribution, texture details are easily lost during the dehazing process. Secondly, since training pairs are hard to collect, training on limited data can easily lead to over-fitting. To tackle these two issues, we introduce a novel dehazing network using the 2D discrete wavelet transform, namely DW-GAN. Specifically, we propose a two-branch network to deal with the aforementioned problems. By utilizing the wavelet transform in the DWT branch, our proposed method can retain more high-frequency information in feature maps. To prevent over-fitting, an ImageNet pre-trained Res2Net is adopted in the knowledge adaptation branch. Owing to the robust feature representations of ImageNet pre-training, the generalization ability of our network is improved dramatically. Finally, a patch-based discriminator is used to reduce artifacts in the restored images. Extensive experimental results demonstrate that the proposed method outperforms the state-of-the-art quantitatively and qualitatively. / Thesis / Master of Applied Science (MASc)
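To make the DWT branch concrete, here is a minimal sketch using PyWavelets as a stand-in (the thesis presumably implements the transform inside the network rather than with pywt): a single-level 2D Haar transform splits an image or feature map into a low-frequency approximation and three high-frequency subbands carrying the texture detail that the abstract says is easily lost during dehazing.

```python
import numpy as np
import pywt

gray = np.random.rand(128, 128)  # stand-in for one feature-map channel
cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")

# cA: 64x64 low-frequency approximation.
# cH, cV, cD: horizontal, vertical, diagonal high-frequency detail subbands.
# A DWT branch would feed these subbands through its own convolutions and
# merge the result with the knowledge-adaptation branch.
print(cA.shape, cH.shape, cV.shape, cD.shape)
```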
6 |
Single Image Dehazing based on Modified Dark Channel Prior and Fog Density Detection. Lin, Cheng-Yang. 10 September 2012.
In this thesis, a single image dehazing method based on a modified dark channel prior and haze (fog) density detection is proposed. The dark channel prior dehazing algorithm achieves good results for some hazy images. However, we observed that hazy images contain regions of both low and high haze density, and regions of low haze density do not need to be dehazed. To solve this problem, we first define the HSV distance, a pixel-based dark channel prior, and a pixel-based bright channel prior to estimate the haze density. To further enhance the dehazing performance of the dark channel prior, the atmospheric light value and the dehazing weighting are revised based on the HSV distance, and a new transmission map is obtained. After that, a bilateral filter is applied to refine the transmission map, providing higher accuracy. Finally, the haze-free image is recovered by combining the input image with the refined transmission map. As a result, a high-quality haze-free image can be recovered with lower computational complexity, and the method extends naturally to video dehazing.
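The pipeline in this abstract follows the classic dark channel prior closely enough that a condensed sketch is possible. The following is an illustrative implementation with assumed parameter values (window size, omega, bilateral-filter settings), not the thesis's modified algorithm with HSV distance and density detection:

```python
import cv2
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img: np.ndarray, omega=0.95, t0=0.1, win=15) -> np.ndarray:
    """img: float RGB in [0, 1]. Returns an estimated haze-free image."""
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=win)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(flat.size // 1000, 1):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, refined with an edge-preserving bilateral filter.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=win)
    t = cv2.bilateralFilter(t.astype(np.float32), 9, 0.1, 15)
    # Recover scene radiance: J = (I - A) / max(t, t0) + A.
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.random.rand(120, 160, 3)  # stand-in hazy frame
clear = dehaze_dcp(hazy)
```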
7 |
Dehazing of Satellite Images. Hultberg, Johanna. January 2018.
The aim of this work is to find a method for removing haze from satellite imagery. This is done by taking two algorithms developed for images taken from the surface of the earth and adapting them to satellite images. The two algorithms are Single Image Haze Removal Using Dark Channel Prior by He et al. and Color Image Dehazing Using the Near-Infrared by Schaul et al. Both algorithms, altered to fit satellite images, plus their combination, are applied to four sets of satellite images. The results are compared with each other and with the unaltered images. The evaluation is both qualitative, i.e., looking at the images, and quantitative, using three properties: colorfulness, contrast, and saturated pixels. Both the qualitative and the quantitative evaluation determined that using only the altered version of Dark Channel Prior gives the result with the least amount of haze and whose colors look most like reality.
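Of the three quantitative properties, colorfulness is the least self-explanatory. Assuming the common Hasler-Süsstrunk opponent-color statistic is intended (the thesis may define it differently), it can be computed as:

```python
import numpy as np

def colorfulness(img_rgb: np.ndarray) -> float:
    """Hasler-Süsstrunk colorfulness for a float RGB image in [0, 1]."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    rg = r - g                   # red-green opponent channel
    yb = 0.5 * (r + g) - b       # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return float(std + 0.3 * mean)

print(colorfulness(np.random.rand(64, 64, 3)))
```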
8 |
Deep Learning Approaches to Low-level Vision Problems. Liu, Huan. January 2022.
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep learning based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance. Moreover, under the unsupervised learning setting, the general deep learning approach cannot be applied. In this thesis, we investigate and address several low-level vision problems using the proposed approaches.
To learn a better mapping from the existing data, an indirect domain shift mechanism is proposed that adds explicit constraints inside the neural network for single image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance.
Despite the success of the proposed approaches in learning an improved mapping from inputs to targets, three problems in unsupervised learning are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is formed by learning under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the performance of the teacher network is far better than that of the student network, a knowledge distillation approach is proposed to help improve the mapping learned by the student.
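A knowledge distillation step of this kind typically adds a term pulling the student's prediction toward the teacher's. A schematic PyTorch sketch, with placeholder tensors and an assumed L1 distillation term (the thesis's exact loss is not given in the abstract):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_depth: torch.Tensor,
                      teacher_depth: torch.Tensor,
                      photometric: torch.Tensor,
                      lam: float = 0.5) -> torch.Tensor:
    # The teacher output is treated as a fixed target (no gradient).
    distill = F.l1_loss(student_depth, teacher_depth.detach())
    return photometric + lam * distill

s = torch.rand(2, 1, 96, 320, requires_grad=True)  # student depth prediction
t = torch.rand(2, 1, 96, 320)                      # teacher depth prediction
loss = distillation_loss(s, t, photometric=torch.tensor(0.2))
loss.backward()
```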
For single image dehazing, the current network cannot handle different types of haze patterns, as it is trained on a particular dataset. The problem is formulated as a multi-domain dehazing problem. To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision.
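Test-time training of this sort can be sketched as a short adaptation loop; the networks and the self-supervised loss below are placeholders, since the abstract does not specify them:

```python
import torch
import torch.nn as nn

dehazer = nn.Conv2d(3, 3, 3, padding=1)   # stand-in dehazing network
helper = nn.Conv2d(3, 1, 3, padding=1)    # stand-in self-supervision head
optim = torch.optim.Adam(dehazer.parameters(), lr=1e-5)

test_image = torch.rand(1, 3, 64, 64)     # one sample from the new haze domain
for _ in range(5):                        # a handful of adaptation steps
    out = dehazer(test_image)
    self_sup_loss = helper(out).abs().mean()  # placeholder self-supervised loss
    optim.zero_grad()
    self_sup_loss.backward()
    optim.step()

adapted = dehazer(test_image)             # prediction after adaptation
```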
In a lossy compression system, the target distribution can be different from that of the source, and ground truths are not available for reference. Thus, the objective is to transform the source into the target under a rate constraint, which generalizes optimal transport. To address this problem, theoretical analyses of the trade-off between compression rate and minimal achievable distortion are conducted for the cases with and without common randomness. A deep learning approach building on these theoretical results is also developed for super-resolution and denoising tasks.
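One way to make this objective precise (a plausible reading of the abstract, not necessarily the thesis's exact formulation): given a source $X \sim P_X$, a required output distribution $P_Y$, and a distortion measure $\Delta$, the distortion-rate trade-off is

$$D(R) = \min_{P_{Y \mid X}\,:\; Y \sim P_Y,\; I(X;Y) \le R} \mathbb{E}\bigl[\Delta(X, Y)\bigr].$$

As $R \to \infty$ the rate constraint becomes vacuous and the problem reduces to classical optimal transport from $P_X$ to $P_Y$; common randomness shared between encoder and decoder enlarges the set of couplings achievable at a given rate, which is why the two cases are analyzed separately.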
Extensive experiments and analyses demonstrate the effectiveness of the proposed deep learning based methods in handling these low-level vision problems. / Thesis / Doctor of Philosophy (PhD)