1

Machine learning in multi-frame image super-resolution

Pickup, Lyndsey C. January 2007
Multi-frame image super-resolution is a procedure which takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial frequency, and less noise and image blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and image pixels, creating a complete simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. 
For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in the super-resolution image quality.
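The generative model behind this kind of Bayesian multi-frame super-resolution can be illustrated with a small numerical sketch. The code below is a simplified toy, not the thesis's algorithm: registrations are fixed integer shifts (the thesis jointly optimizes registrations and pixels), the blur-plus-sampling operator is plain block averaging, and the prior is a simple Gaussian smoothness term with a hand-set weight. All function names are illustrative.

```python
import numpy as np

def downsample(x, f):
    """Block-mean decimation by factor f (a simple stand-in for blur + sampling)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample_adjoint(r, f):
    """Adjoint of block-mean decimation: replicate each value and divide by f^2."""
    return np.kron(r, np.ones((f, f))) / f**2

def laplacian(x):
    """Discrete Laplacian with circular boundary, for a Gaussian smoothness prior."""
    return (4 * x - np.roll(x, 1, 0) - np.roll(x, -1, 0)
                  - np.roll(x, 1, 1) - np.roll(x, -1, 1))

def map_estimate(frames, shifts, f=2, lam=0.01, iters=300, step=0.5):
    """Gradient descent on sum_k ||D S_k x - y_k||^2 + lam ||grad x||^2,
    with the registrations (integer shifts) assumed known and fixed."""
    h, w = frames[0].shape
    x = np.zeros((h * f, w * f))
    for _ in range(iters):
        g = 2 * lam * laplacian(x)
        for y, s in zip(frames, shifts):
            resid = downsample(np.roll(x, s, axis=(0, 1)), f) - y
            g += 2 * np.roll(upsample_adjoint(resid, f), (-s[0], -s[1]), axis=(0, 1))
        x -= step * g
    return x

# Toy usage: four shifted, noisy low-res views of a step edge; the MAP
# estimate should beat naive pixel replication of a single frame.
rng = np.random.default_rng(1)
hr = np.zeros((16, 16)); hr[:, 7:] = 1.0
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [downsample(np.roll(hr, s, axis=(0, 1)), 2)
          + rng.normal(0, 0.02, (8, 8)) for s in shifts]
x = map_estimate(frames, shifts)
naive = np.kron(frames[0], np.ones((2, 2)))
```

The sub-pixel shifts are what make the problem solvable: each frame samples the scene on a different grid, so together they constrain high-resolution detail that no single frame contains.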
2

Multihypothesis Prediction for Compressed Sensing and Super-Resolution of Images

Chen, Chen 12 May 2012
A process for the use of multihypothesis prediction in the reconstruction of images is proposed for both compressed-sensing reconstruction and single-image super-resolution. Specifically, for compressed-sensing reconstruction of a single still image, multiple predictions for an image block are drawn from spatially surrounding blocks within an initial non-predicted reconstruction. The predictions are used to generate a residual in the domain of the compressed-sensing random projections. This residual, being typically more compressible than the original signal, leads to improved compressed-sensing reconstruction quality. To appropriately weight the hypothesis predictions, a Tikhonov regularization of the resulting ill-posed least-squares optimization is proposed. An extension of this framework to the compressed-sensing reconstruction of hyperspectral imagery is also studied. Finally, the multihypothesis paradigm is employed for single-image super-resolution, wherein each patch of a low-resolution image is represented as a linear combination of spatially surrounding hypothesis patches.
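The Tikhonov weighting step has a closed form worth writing out. The sketch below solves min_w ||y − Hw||² + λ²||Gw||², where the columns of H are the hypothesis predictions for a block y; the diagonal distance-based regularizer G shown here is one common choice for this setting, not necessarily the thesis's exact matrix.

```python
import numpy as np

def tikhonov_weights(H, y, lam=0.1):
    """Weight hypothesis predictions (columns of H) for a target block y by
    solving min_w ||y - H w||^2 + lam^2 ||G w||^2 in closed form."""
    # G penalizes hypotheses far from the observed block -- one common
    # regularizer choice for this ill-posed least-squares problem.
    g = np.linalg.norm(H - y[:, None], axis=0)
    A = H.T @ H + lam**2 * np.diag(g**2)
    return np.linalg.solve(A, H.T @ y)

# Toy usage: y is an exact combination of two hypotheses, so with a small
# lam the weighted prediction H @ w should closely reproduce y.
rng = np.random.default_rng(0)
H = rng.normal(size=(16, 4))
y = H @ np.array([0.6, 0.4, 0.0, 0.0])
w = tikhonov_weights(H, y, lam=1e-3)
```

The regularizer matters because H^T H is typically rank-deficient or ill-conditioned when many similar neighboring blocks serve as hypotheses; the penalty biases the solution toward hypotheses that already resemble the target.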
3

A Collaborative Adaptive Wiener Filter for Image Restoration and Multi-frame Super-resolution

Mohamed, Khaled Mohamed Ahmied 27 May 2015
No description available.
4

Single-Image Super-Resolution via Regularized Extreme Learning Regression for Imagery from Microgrid Polarimeters

Sargent, Garrett Craig 24 May 2017
No description available.
5

Algorithms for super-resolution of images based on sparse representation and manifolds

Ferreira, Júlio César 06 July 2016
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms, grounded in mathematical theory, for single-image super-resolution problems. To estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using algorithms that take the Euclidean distance as a dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input sample, taking into account the underlying geometry of the data. AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed aSOB, a strategy that takes into account the geometry of the data and the dictionary size; aSOB outperforms both PCA and PGA. Finally, we combined all our methods into a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.
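The sparse-coding step common to these learning-based pipelines can be sketched with coupled low-res/high-res dictionaries: code the low-resolution patch over one dictionary, then reuse the code with the other. This is a generic illustration; the random dictionaries are stand-ins, and the thesis's geometry-aware subset selection (AGNN/GOC) and dictionary learning (aSOB) are not reproduced here.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of D to fit x."""
    idx, resid, coef = [], x.copy(), np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        resid = x - D[:, idx] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[idx] = coef
    return alpha

def sr_patch(lr_patch, D_lr, D_hr, k=2):
    """Code the low-res patch over D_lr, then reuse the code with D_hr."""
    return D_hr @ omp(D_lr, lr_patch, k)

# Toy coupled dictionaries: orthonormal low-res atoms guarantee exact
# support recovery for this synthetic patch.
rng = np.random.default_rng(0)
D_lr, _ = np.linalg.qr(rng.normal(size=(16, 8)))
D_hr = rng.normal(size=(64, 8))
alpha = np.zeros(8); alpha[[1, 4]] = [1.0, -0.5]
hr_patch = sr_patch(D_lr @ alpha, D_lr, D_hr, k=2)
```

The key assumption is that low-res and high-res patches share the same sparse code over their respective dictionaries, so recovering the code from the observable patch is enough to synthesize the missing detail.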
6

Application of L1 Minimization Technique to Image Super-Resolution and Surface Reconstruction

Talavatifard, Habiballah 03 October 2013
A surface reconstruction and image enhancement non-linear finite element technique based on minimization of L1 norm of the total variation of the gradient is introduced. Since minimization in the L1 norm is computationally expensive, we seek to improve the performance of this algorithm in two fronts: first, local L1- minimization, which allows parallel implementation; second, application of the Augmented Lagrangian method to solve the minimization problem. We show that local solution of the minimization problem is feasible. Furthermore, the Augmented Lagrangian method can successfully be used to solve the L1 minimization problem. This result is expected to be useful for improving algorithms computing digital elevation maps for natural and urban terrain, fitting surfaces to point-cloud data, and image super-resolution.
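The augmented Lagrangian route can be illustrated on a 1D first-order analogue: ADMM for TV-L1 denoising, alternating a linear solve, a soft-threshold shrinkage, and a dual update. Note this is a simplified sketch; the thesis minimizes the L1 norm of the total variation of the *gradient* (a second-order functional) with finite elements, which this toy does not reproduce.

```python
import numpy as np

def tv_denoise_admm(y, mu=0.5, rho=1.0, iters=200):
    """1D TV denoising, min_x 0.5||x - y||^2 + mu ||D x||_1, via ADMM:
    split z = D x, then alternate x-solve, shrinkage on z, and dual update."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n finite-difference matrix
    A = np.eye(n) + rho * D.T @ D         # constant x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)  # soft threshold
        u += D @ x - z
    return x

# Toy usage: a noisy step signal; the TV prior favors piecewise-constant
# solutions, so the estimate should be much closer to the clean signal.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])
noisy = clean + rng.normal(0, 0.1, 40)
denoised = tv_denoise_admm(noisy)
```

The shrinkage step is where the L1 norm enters: it is the proximal operator of mu|·|, and it is also what makes the subproblems cheap enough that local, parallel solves (as the abstract proposes) become attractive.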
7

Automated number plate recognition from low-quality video sequences

Vašek, Vojtěch January 2018
The commercially used automated number plate recognition (ANPR) systems constitute a mature technology which relies on dedicated industrial cameras capable of capturing high-quality still images. In contrast, the problem of ANPR from low-quality video sequences has so far been severely under-explored. This thesis proposes a trainable convolutional neural network (CNN) with a novel architecture which can efficiently recognize number plates from low-quality videos of arbitrary length. The proposed network is experimentally shown to outperform several existing approaches dealing with video sequences, a state-of-the-art commercial ANPR system, as well as the human ability to recognize number plates from low-resolution images. The second contribution of the thesis is a semi-automatic pipeline which was used to create a novel database containing annotated sequences of challenging low-resolution number plate images. The third contribution is a novel CNN-based generator of super-resolution number plate images. The generator translates the input low-resolution image into its high-quality counterpart, which preserves the structure of the input and depicts the same string that was previously predicted from a video sequence.
8

Method for Improving the Efficiency of Image Super-Resolution Algorithms Based on Kalman Filters

Dobson, William Keith 01 December 2009
The Kalman Filter has many applications in control and signal processing but may also be used to reconstruct a higher resolution image from a sequence of lower resolution images (or frames). If the sequence of low resolution frames is recorded by a moving camera or sensor, where the motion can be accurately modeled, then the Kalman filter may be used to update pixels within a higher resolution frame to achieve a more detailed result. This thesis outlines current methods of implementing this algorithm on a scene of interest and introduces possible improvements for the speed and efficiency of this method by use of block operations on the low resolution frames. The effects of noise on camera motion and various blur models are examined using experimental data to illustrate the differences between the methods discussed.
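The per-pixel updates described above rest on the standard Kalman predict/update recursion, sketched here for a generic linear-Gaussian state. The block operations, motion models, and blur models of the thesis are not reproduced; this just shows the recursion the method builds on.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state and covariance through the motion model F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with measurement z via the Kalman gain K.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a constant scalar (think: one high-res pixel value)
# from repeated noisy measurements; the uncertainty P should shrink.
rng = np.random.default_rng(0)
F = np.eye(1)
H = np.eye(1)
Q = np.eye(1) * 1e-6
R = np.eye(1) * 0.09
x, P = np.zeros(1), np.eye(1)
for _ in range(50):
    z = np.array([5.0]) + rng.normal(0, 0.3, 1)
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

In the super-resolution setting, each incoming low-resolution frame plays the role of the measurement z, and an accurate motion model F is what lets successive frames refine the same high-resolution pixels.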
9

Deep Learning Approaches to Low-level Vision Problems

Liu, Huan January 2022
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep-learning-based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance; moreover, in an unsupervised setting this general recipe cannot be used at all. In this thesis, we investigate and address several problems in low-level vision using the proposed approaches. To learn a better mapping from the existing data, an indirect domain-shift mechanism is proposed that adds explicit constraints inside the neural network for single-image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance. Beyond learning an improved mapping from the inputs to the targets, three problems in unsupervised learning are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is trained under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the teacher network performs far better than the student, a knowledge distillation approach is proposed to help improve the mapping learned by the student. For single-image dehazing, a network trained on a particular dataset cannot handle different types of haze patterns, so the problem is formulated as a multi-domain dehazing problem.
To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision. In a lossy compression system, the target distribution can differ from that of the source, and no ground truths are available for reference. The objective is thus to transform the source into the target under a rate constraint, which generalizes optimal transport. To address this problem, theoretical analyses of the trade-off between compression rate and minimal achievable distortion are carried out, both with and without common randomness. A deep learning approach based on these theoretical results is also developed for super-resolution and denoising tasks. Extensive experiments and analyses demonstrate the effectiveness of the proposed deep-learning-based methods on low-level vision problems. / Thesis / Doctor of Philosophy (PhD)
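The teacher-student idea can be made concrete with a toy linear analogue: fit the student against a mix of scarce ground-truth labels and abundant teacher predictions. The blending scheme and all names below are illustrative, not the thesis's distillation loss.

```python
import numpy as np

def distill_fit(X_lab, y_lab, X_unlab, teacher, alpha=0.5):
    """Least-squares 'student' trained on labeled data plus teacher-labeled
    unlabeled data, with alpha weighting the ground-truth term."""
    y_teach = teacher(X_unlab)
    X = np.vstack([np.sqrt(alpha) * X_lab, np.sqrt(1 - alpha) * X_unlab])
    y = np.concatenate([np.sqrt(alpha) * y_lab, np.sqrt(1 - alpha) * y_teach])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy usage: the true mapping is w* = (1, 2); the teacher is a slightly
# biased approximation of it, available on many unlabeled samples.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(5, 2))
y_lab = X_lab @ np.array([1.0, 2.0])
X_unlab = rng.normal(size=(200, 2))
teacher = lambda X: X @ np.array([1.05, 1.95])
w = distill_fit(X_lab, y_lab, X_unlab, teacher)
```

The payoff mirrors the thesis's observation: even an imperfect teacher supplies dense supervision that a handful of true labels cannot, pulling the student close to the true mapping.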
10

A Comparison of Deep Learning Models for Image Super-Resolution

Bechara, Rafael, Israelsson, Max January 2023
Image Super-Resolution (ISR) is a technology that aims to increase image resolution while preserving as much content and detail as possible. In this study, we evaluate four Deep Learning models (EDSR, LapSRN, ESPCN, and FSRCNN) to determine their effectiveness in increasing the resolution of low-resolution images. The study builds on previous research in the field as well as on comparisons between the different deep learning models. The problem statement for this study is: “Which of the four Deep Learning-based models, EDSR, LapSRN, ESPCN, and FSRCNN, generates an upscaled image with the best quality from a low-resolution image on a dataset of Abyssinian cats, with a factor of four, based on quantitative results?” The study uses a dataset of pictures of Abyssinian cats to evaluate the performance of these models. Based on the quantitative results obtained from RMSE, PSNR, and Structural Similarity (SSIM) measurements, our study concludes that EDSR is the most effective Deep Learning-based model.
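The quantitative metrics used in comparisons like this one are easy to state precisely. Below is a minimal RMSE/PSNR implementation for images scaled to [0, 1]; SSIM is considerably more involved (local means, variances, and covariances) and is omitted here.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images with the given value range."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(data_range**2 / mse))

# Toy check: a uniform error of 0.1 gives RMSE 0.1 and PSNR 20 dB.
a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)
```

Note that RMSE and PSNR are monotonic transforms of each other for a fixed value range, so they always rank models identically; SSIM is the metric that can genuinely reorder the comparison.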
