41

Studies On Bayesian Approaches To Image Restoration And Super Resolution Image Reconstruction

Chandra Mohan, S 07 1900
High-quality images and video have become an integral part of daily life, with applications ranging across science, engineering, and medical diagnosis. All of these imaging applications call for high-resolution, properly focused, crisp images. In practice, however, obtaining such high-quality images is expensive and in some cases impractical. In imaging systems such as digital cameras, blur and noise degrade image quality: the recorded images look blurred and noisy and fail to resolve the finer details of the scene, which becomes clearly noticeable under magnification. Post-processing techniques based on computational methods extract the hidden information and thereby improve the quality of the captured images.

This thesis focuses on the deconvolution, and ultimately blind deconvolution, of a single frame captured under low-light conditions, as arises in digital photography and surveillance imaging. The goal is to restore a sharp image from its blurred and noisy observation when the blur is completely known or completely unknown; such inverse problems are ill-posed or, in the blind case, doubly ill-posed. The thesis consists of two major parts. The first part addresses the deconvolution/blind deconvolution problem using a Bayesian approach with a fuzzy-logic-based gradient potential as the prior functional.

Compared with analog cameras, digital cameras show visible artifacts when images are enlarged, creating a demand for resolution enhancement. The increased resolution may be spatial, temporal, or both. Super-resolution reconstruction methods produce images/video containing spectral information beyond what is available in the captured low-resolution frames. The second part of the thesis addresses resolution enhancement of observed monochromatic/color images using multiple frames of the same scene. The reconstruction problem is formulated in the Bayesian domain with the aim of reducing blur, noise, and aliasing while increasing spatial resolution. The image is modeled as a Markov random field, and a fuzzy-logic-filter-based gradient potential is used to distinguish edge pixels from noisy pixels. Suitable priors are applied adaptively to obtain artifact-free (or artifact-reduced) images.

All approaches are experimentally validated on standard test images, using MATLAB-based tools. Their performance is qualitatively compared with the results of recently proposed methods; our results are visually pleasing and quantitatively competitive.
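To make the first part's formulation concrete, the sketch below performs maximum a posteriori (MAP) deconvolution by gradient descent, with a smooth Charbonnier gradient potential standing in for the fuzzy-logic potential (whose exact form the abstract does not specify). The circular blur model, step size, and regularization weight are illustrative assumptions, not the thesis's settings.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def map_deconvolve(y, h, lam=0.01, tau=0.5, n_iter=200, eps=1e-3):
    """MAP deconvolution sketch: gradient descent on
    0.5 * ||h * x - y||^2 + lam * sum(phi(grad x)),
    with a smooth Charbonnier potential phi standing in for the thesis's
    fuzzy-logic gradient potential. h is the point-spread function, same
    shape as y, centered at pixel (0, 0); circular blur is assumed."""
    H = fft2(h)
    x = y.astype(float).copy()
    for _ in range(n_iter):
        # data-fidelity gradient: h^T * (h * x - y), computed via FFTs
        r = np.real(ifft2(H * fft2(x))) - y
        g_data = np.real(ifft2(np.conj(H) * fft2(r)))
        # prior gradient: negative divergence of phi'(grad x)
        gx = np.roll(x, -1, axis=1) - x
        gy = np.roll(x, -1, axis=0) - x
        norm = np.sqrt(gx**2 + gy**2 + eps)
        wx, wy = gx / norm, gy / norm
        g_prior = -(wx - np.roll(wx, 1, axis=1)) - (wy - np.roll(wy, 1, axis=0))
        x -= tau * (g_data + lam * g_prior)
    return x
```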
42

Bayesian methods for inverse problems in signal and image processing

Marnissi, Yosra 25 April 2017
Bayesian approaches are widely used in signal processing applications. To derive plausible estimates of the original parameters from their distorted observations, they rely on the posterior distribution, which incorporates prior knowledge about the unknown parameters as well as information from the observations. The posterior mean is one of the most commonly used estimators; however, as the exact posterior distribution is very often intractable, one has to resort to Bayesian approximation tools. This work focuses on two such families of methods: Markov chain Monte Carlo (MCMC) sampling algorithms and variational Bayes approximations (VBA).

The thesis is made of two parts. The first is dedicated to sampling algorithms. First, special attention is devoted to improving MCMC methods based on discretization of the Langevin diffusion. We propose a novel method for tuning the directional component of such algorithms using a Majorization-Minimization strategy with guaranteed convergence properties. Experimental results on the restoration of a sparse signal confirm the speed of this new approach compared with the standard Langevin sampler. Second, a new sampling algorithm based on a data-augmentation strategy is proposed to improve the convergence speed and mixing properties of standard MCMC samplers. Applications to various image processing problems show the potential of the proposed method to handle heterogeneous correlations between the signal coefficients.

In the second part, we resort to VBA techniques to build a fast estimation algorithm for restoring signals corrupted by non-Gaussian noise. To circumvent the difficulties raised by the intricate form of the true posterior distribution, a majorization technique is employed to approximate either the data-fidelity term or the prior density. Thanks to its flexibility, the proposed approach can be applied to a broad range of data-fidelity terms, and it estimates the target signal jointly with the associated regularization parameter. Examples of image deconvolution in the presence of mixed Poisson-Gaussian noise show the good performance of the proposed algorithm compared with state-of-the-art supervised methods.
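As background for the first contribution, the following is a minimal sketch of a standard Metropolis-adjusted Langevin (MALA) sampler, the kind of baseline that the thesis's Majorization-Minimization scaling strategy improves upon; the scaling strategy itself is not reproduced here. `log_post`, `grad_log_post`, and the step size are user-supplied, illustrative ingredients.

```python
import numpy as np

def mala(log_post, grad_log_post, x0, step=1e-2, n_samples=5000, rng=None):
    """Metropolis-adjusted Langevin (MALA) sketch: a baseline sampler of the
    kind the thesis improves. log_post and grad_log_post are user-supplied
    callables (log posterior up to a constant, and its gradient); the step
    size is illustrative and would normally be tuned."""
    rng = np.random.default_rng() if rng is None else rng
    def log_q(a, b):
        # log density (up to a constant) of proposing a from b
        mu = b + 0.5 * step * grad_log_post(b)
        return -np.sum((a - mu) ** 2) / (2.0 * step)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        prop = (x + 0.5 * step * grad_log_post(x)
                + np.sqrt(step) * rng.standard_normal(x.shape))
        log_alpha = (log_post(prop) + log_q(x, prop)
                     - log_post(x) - log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:  # Metropolis-Hastings correction
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```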
43

Protecting Professional Football: A Case Study of Crisis Communication Tactics Demonstrated During the Concussion Crisis by the National Football League and the Introduction of Cultural Ingrainment as a Component in Crisis Communications Models

Mower, Jordan Todd 01 December 2015
This research analyzes the crisis communication tactics employed by the National Football League at key points during the concussion crisis, in relation to the strategies recommended by models based on image restoration theory and situational crisis communication theory. The discrepancies between the NFL's tactics and the recommended situational tactics, viewed in light of the league's financial and market growth over the duration of the crisis, show the need for an additional component in accepted crisis communication models. Cultural ingrainment is posited as a component to be added to present models as a mitigating factor of organizational harm in cases of strong attribution of organizational responsibility. This addition of cultural ingrainment provides an explanation for the possibility of so-called “invincible brands.”
44

Algorithms for super-resolution of images based on sparse representation and manifolds

Ferreira, Júlio César 06 July 2016
Image super-resolution refers to a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms, grounded in mathematical theory, for single-image super-resolution. To estimate an output image, we adopt a mixed approach that uses both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered with algorithms that use the Euclidean distance as the dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). State-of-the-art methods therefore still suffer from shortcomings.

In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS, a structure-tensor-based regularization term, to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Second, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input sample, taking the underlying geometry of the data into account; AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Third, we proposed the aSOB strategy, which accounts for both the geometry of the data and the dictionary size; aSOB outperforms the PCA and PGA methods. Finally, we combine all of our methods into a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.
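To make the sparse-representation ingredient concrete, here is a generic Orthogonal Matching Pursuit sketch of the kind used in coupled-dictionary super-resolution pipelines. It is background, not the thesis's SE-ASDS, AGNN/GOC, aSOB, or G2SR methods, and the coupled-dictionary usage shown in the trailing comment is an assumption.

```python
import numpy as np

def omp(D, y, n_nonzero=8):
    """Orthogonal Matching Pursuit sketch: sparse-code a (vectorized)
    low-resolution patch y over a dictionary D whose columns are assumed
    to be unit-norm atoms."""
    residual = y.astype(float).copy()
    idx = []
    for _ in range(n_nonzero):
        # select the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        Dk = D[:, idx]
        coef, *_ = np.linalg.lstsq(Dk, y, rcond=None)
        residual = y - Dk @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

# Hypothetical coupled-dictionary usage: code an LR patch over the LR
# dictionary, then synthesize the HR patch with the paired HR dictionary.
#   alpha = omp(D_lr, lr_patch)
#   hr_patch = D_hr @ alpha
```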
45

Deblurring Algorithms for Out-of-focus Infrared Images

Zhu, Peter January 2010
An image that has been subject to the out-of-focus phenomenon has reduced sharpness, contrast, and level of detail, depending on the amount of defocus. Restoring out-of-focus images is a complex task because of the information loss that occurs. However, many restoration algorithms attempt to revert the defocus by estimating a noise model and utilizing the point spread function. The purpose of this thesis, proposed by FLIR Systems, was to find a robust algorithm that can restore focus and, from the customer's perspective, be user friendly. The thesis includes three implemented algorithms, which were compared with MATLAB's built-in routines. Three image series were used to evaluate the limits and performance of each algorithm, based on deblurring quality, implementation complexity, computation time, and usability.

Results show that the Alternating Direction Method for total-variation deconvolution proposed by Tao et al. [29], together with its modified discrete cosine transform version, restores the defocused images with the highest quality. These two algorithms offer fast computation, few parameters to tune, and powerful noise reduction.
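The winning algorithm family can be illustrated compactly. The sketch below implements anisotropic total-variation deconvolution with an alternating-direction (split Bregman/ADMM) iteration in the spirit of the Tao et al. method cited above; it is a simplified stand-in for, not a reproduction of, the thesis's implementation, and all parameter values are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def tv_deconv_admm(y, h, lam=0.01, rho=1.0, n_iter=50):
    """Alternating-direction TV deconvolution sketch: split d = grad(x) and
    alternate an exact FFT-domain quadratic solve for x with
    soft-thresholding on d (anisotropic TV for brevity). h is the PSF,
    same shape as y, centered at pixel (0, 0)."""
    H = fft2(h)
    # forward-difference operators as convolution kernels, in Fourier domain
    dx = np.zeros(y.shape); dx[0, 0] = -1; dx[0, 1] = 1
    dy = np.zeros(y.shape); dy[0, 0] = -1; dy[1, 0] = 1
    Dx, Dy = fft2(dx), fft2(dy)
    denom = np.abs(H)**2 + rho * (np.abs(Dx)**2 + np.abs(Dy)**2)
    x = y.astype(float).copy()
    dxv = np.zeros_like(x); dyv = np.zeros_like(x)  # auxiliary gradient field
    bx = np.zeros_like(x); by = np.zeros_like(x)    # scaled dual variables
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)
    for _ in range(n_iter):
        # x-update: solve (H^T H + rho D^T D) x = H^T y + rho D^T (d - b)
        rhs = (np.conj(H) * fft2(y)
               + rho * (np.conj(Dx) * fft2(dxv - bx)
                        + np.conj(Dy) * fft2(dyv - by)))
        x = np.real(ifft2(rhs / denom))
        # d-update: soft-thresholding of the shifted gradients
        gx = np.real(ifft2(Dx * fft2(x)))
        gy = np.real(ifft2(Dy * fft2(x)))
        dxv = shrink(gx + bx, lam / rho)
        dyv = shrink(gy + by, lam / rho)
        # dual update
        bx += gx - dxv
        by += gy - dyv
    return x
```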
46

Compressed Sensing Based Image Restoration Algorithm with Prior Information: Software and Hardware Implementations for Image Guided Therapy

Jian, Yuchuan January 2012
Based on the compressed sensing theorem, we present an integrated software and hardware platform for developing a total-variation-based image restoration algorithm that exploits prior image information and free-form deformation fields for image-guided therapy. The core algorithm solves the image restoration problem of handling missing structures in one image set using prior information, enhancing both the image quality and the anatomical information of on-board computed tomography (CT) volumes acquired with limited-angle projections. Prior anatomical CT scans provide additional information that helps reduce the radiation dose needed to produce a high-quality on-board cone-beam CT volume, thereby lowering the total dose patients receive and removing distortion artifacts in 3D digital tomosynthesis (DTS) and 4D-DTS. The proposed restoration algorithm improves temporal image resolution and provides more anatomical information than conventionally reconstructed images.

The algorithm's performance is governed by two built-in parameters, the B-spline resolution and the regularization factor, which can be adjusted to meet the requirements of different imaging applications and which control the flexibility and accuracy of the restoration. Preliminary results evaluate image similarity and deformation effects for phantoms and a real patient case using a shifting deformation window. We incorporated a graphics processing unit (GPU) and a visualization interface into the platform as acceleration tools for medical image processing and analysis. By combining the imaging algorithm with a GPU implementation, the restoration can be computed within a reasonable time for real-time on-board visualization, and the platform can potentially be applied to other complicated clinical-imaging algorithms.
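One common way to combine limited-angle data with a prior scan is a PICCS-style objective; the sketch below is offered as an assumption about the algorithm family rather than the thesis's exact formulation (the free-form deformation and GPU components are omitted). `A` and `At` are hypothetical forward-projector and adjoint callables.

```python
import numpy as np

def smooth_tv_grad(u, eps=1e-3):
    """Gradient of a Charbonnier-smoothed total-variation term."""
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    n = np.sqrt(gx**2 + gy**2 + eps)
    wx, wy = gx / n, gy / n
    return -(wx - np.roll(wx, 1, axis=1)) - (wy - np.roll(wy, 1, axis=0))

def prior_image_restore(A, At, y, x_prior, alpha=0.5, lam=0.05, tau=0.2,
                        n_iter=100):
    """PICCS-style sketch: reconstruct x from limited-angle data y = A(x)
    using a registered prior image x_prior, mixing TV on x with TV on
    (x - x_prior). A and At are hypothetical forward-projector and adjoint
    callables; all parameter values are illustrative."""
    x = x_prior.astype(float).copy()
    for _ in range(n_iter):
        g = At(A(x) - y)  # data-fidelity gradient
        g = g + lam * ((1.0 - alpha) * smooth_tv_grad(x)
                       + alpha * smooth_tv_grad(x - x_prior))
        x -= tau * g
    return x
```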
47

Irregularly sampled image restoration and interpolation

Facciolo Furlan, Gabriele 03 March 2011
The generation of urban digital elevation models from satellite images using stereo reconstruction techniques poses several challenges due to its precision requirements. In this thesis we study three problems related to the reconstruction of urban models from stereo image pairs acquired in a low-baseline configuration. They were motivated by the MISS project, launched by CNES (Centre National d'Etudes Spatiales) to develop a low-baseline acquisition model. The first problem is the restoration of irregularly sampled images and image fusion using a band-limited interpolation model. We propose a novel restoration algorithm that incorporates the image formation model as a set of local constraints and uses a family of regularizers that allows the spectral behavior of the solution to be controlled. Second, the problem of interpolating sparsely sampled images is addressed using a self-similarity prior. The related problem of image inpainting is also considered, and a novel framework for exemplar-based inpainting is proposed; this framework is then extended to the interpolation of sparsely sampled images. The third problem is the regularization and interpolation of digital elevation models under geometric restrictions derived from a reference image. For this problem, three regularization models are studied: an anisotropic minimal-surface regularizer, the anisotropic total variation, and a new piecewise-affine interpolation algorithm.
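The band-limited restoration idea can be sketched in one dimension: fit Fourier coefficients to irregularly placed samples by regularized least squares, with a frequency-weighted penalty loosely mirroring the regularizer family described above. This is a toy model under stated assumptions, not the thesis's algorithm.

```python
import numpy as np

def bandlimited_fit(t, v, n_coef=32, reg=1e-2, decay=1.0):
    """Band-limited restoration sketch in 1-D: given samples v at irregular
    positions t in [0, 1), fit Fourier coefficients by regularized least
    squares with a penalty that grows with frequency, controlling the
    spectral decay of the solution. All parameters are illustrative."""
    k = np.arange(-(n_coef // 2), n_coef // 2)
    F = np.exp(2j * np.pi * np.outer(t, k))        # irregular sampling matrix
    W = np.diag(reg * (1.0 + np.abs(k)) ** decay)  # frequency-weighted penalty
    c = np.linalg.solve(F.conj().T @ F + W, F.conj().T @ v)
    def evaluate(s):
        # resample the fitted band-limited model at arbitrary positions s
        return np.real(np.exp(2j * np.pi * np.outer(s, k)) @ c)
    return evaluate
```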
48

Towards digital image anti-forensics via image restoration

Fan, Wei 30 April 2015
Image forensics enjoys increasing popularity as a powerful image-authentication tool, working in a blind, passive way without the aid of any a priori embedded information (in contrast to fragile image watermarking). On the opposing side, image anti-forensics attacks forensic algorithms, in a healthy emulation that supports the development of more trustworthy forensics. When image coding or processing is involved, image anti-forensics to some extent shares a goal with image restoration: both aim to recover the information lost during image degradation. Anti-forensics, however, has one additional, indispensable requirement: undetectability by current forensic methods.

In this thesis, we open a new research line for image anti-forensics by leveraging advanced concepts and methods from image restoration while integrating anti-forensic strategies and terms. In this context, the thesis contributes to four aspects of JPEG-compression and median-filtering anti-forensics: (i) JPEG anti-forensics using total-variation-based deblocking; (ii) improved total-variation-based JPEG anti-forensics with assignment-problem-based perceptual DCT histogram smoothing; (iii) JPEG anti-forensics using image-quality enhancement based on a sophisticated image prior model and non-parametric, calibration-based DCT histogram smoothing; and (iv) median-filtered image-quality enhancement and anti-forensics via variational deconvolution. Experimental results demonstrate the effectiveness of the proposed anti-forensic methods, with better undetectability against existing forensic detectors and higher visual quality of the processed image compared with state-of-the-art methods.
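The DCT-histogram smoothing ingredient can be illustrated with the classic dithering idea: after JPEG decompression, AC coefficients cluster at multiples of the quantization step, leaving comb-like gaps in the histogram that bounded, Laplacian-distributed noise can refill. The sketch below is a simplified Stamm-style dither, not the thesis's assignment-problem-based perceptual smoothing; the scale estimate and the untouched zero bin are simplifying assumptions.

```python
import numpy as np

def antiforensic_dither(coefs, q, rng=None):
    """DCT-histogram smoothing sketch for JPEG anti-forensics: dequantized AC
    coefficients of one subband (multiples of the step q) receive additive
    noise confined to their quantization bin, drawn from a Laplacian fitted
    to the data. The zero bin is left untouched here for brevity."""
    rng = np.random.default_rng() if rng is None else rng
    c = coefs.astype(float).copy()
    nz = c != 0
    # crude Laplacian scale estimate from the nonzero coefficients
    b = max(float(np.mean(np.abs(c[nz]))), 1e-6)
    # inverse-CDF sampling of |noise| from a Laplacian truncated to [0, q/2)
    u = rng.uniform(size=c.shape)
    lo = np.exp(-(q / 2.0) / b)
    mag = -b * np.log(1.0 - u * (1.0 - lo))
    sign = np.where(rng.uniform(size=c.shape) < 0.5, 1.0, -1.0)
    c[nz] += (sign * mag)[nz]
    return c
```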
49

A study of numerical methods for noise removal in digital images

D'Ippólito, Karina Miranda. January 2005
Advisor: Heloisa Helena Marino Silva / Committee: Antonio Castelo Filho / Committee: Maurílio Boaventura

The purpose of this work is to present a study on the application of numerical methods for solving the model proposed by Barcelos, Boaventura and Silva Jr. [7] for image denoising through a partial differential equation, and to propose a stability analysis of the iterative method commonly applied to this model. A comparative analysis among the various methods considered is carried out through experimental results on synthetic and real-life images.
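For reference, the model of Barcelos, Boaventura and Silva Jr. is a well-balanced flow of the form u_t = g |grad u| div(grad u / |grad u|) - lam (1 - g)(u - I), with g an edge-stopping function of the smoothed gradient. The sketch below is a minimal explicit finite-difference scheme for an equation of this type; the exact form of g and all parameter values are illustrative assumptions, and no stability analysis (the subject of the thesis) is attempted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def well_balanced_flow(I, n_iter=100, dt=0.1, k=0.02, lam=1.0, sigma=1.0):
    """Explicit finite-difference sketch of a well-balanced-flow denoiser:
    diffuse by mean curvature where g ~ 1 (flat regions) and pull back
    toward the noisy input I where g ~ 0 (edges)."""
    u = I.astype(float).copy()
    eps = 1e-6
    for _ in range(n_iter):
        gs = gaussian_filter(u, sigma)
        gsx = (np.roll(gs, -1, axis=1) - np.roll(gs, 1, axis=1)) / 2.0
        gsy = (np.roll(gs, -1, axis=0) - np.roll(gs, 1, axis=0)) / 2.0
        g = 1.0 / (1.0 + k * (gsx**2 + gsy**2))  # ~1 in flat areas, ~0 at edges
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        uxx = np.roll(u, -1, axis=1) - 2.0 * u + np.roll(u, 1, axis=1)
        uyy = np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)
        uxy = (np.roll(np.roll(u, -1, axis=0), -1, axis=1)
               - np.roll(np.roll(u, -1, axis=0), 1, axis=1)
               - np.roll(np.roll(u, 1, axis=0), -1, axis=1)
               + np.roll(np.roll(u, 1, axis=0), 1, axis=1)) / 4.0
        # |grad u| * curvature = (uxx uy^2 - 2 ux uy uxy + uyy ux^2) / |grad u|^2
        curv = (uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2) \
               / (ux**2 + uy**2 + eps)
        u += dt * (g * curv - lam * (1.0 - g) * (u - I))
    return u
```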
50

From image restoration to enhancement: a PDE formalism for the fusion of noisy images

Ludusan, Cosmin 28 November 2011
This thesis addresses key issues in current image restoration and enhancement methodology and, through a progressive approach, introduces two new image processing paradigms: concurrent image deblurring and denoising with coherence enhancement, and joint image fusion and denoising, both defined within a PDE-variational theoretical setting. The first paradigm represents an intermediate step in validating and testing the concept of combined image restoration and enhancement, while the second, the joint fusion-denoising model, fully illustrates the advantages of concurrent image processing over sequential approaches. Both propositions are theoretically formalized, experimentally analyzed, and compared with similar existing methodology, demonstrating their validity and highlighting their characteristics and advantages as alternatives to a sequential image processing chain.
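The joint fusion-denoising paradigm can be caricatured with a single variational energy: fuse several noisy observations of the same scene by gradient descent on a sum of quadratic data terms plus a smoothed total-variation penalty. This is a minimal stand-in under stated assumptions, not the thesis's PDE model.

```python
import numpy as np

def fuse_denoise(images, lam=0.1, tau=0.2, n_iter=200, eps=1e-3):
    """Joint fusion-denoising sketch: minimize
    sum_i 0.5 * ||u - f_i||^2 + lam * TV(u)
    over the fused image u, by gradient descent on a Charbonnier-smoothed
    TV term. All parameter values are illustrative."""
    u = np.mean(images, axis=0)          # initialize with the naive fusion
    for _ in range(n_iter):
        g = sum(u - f for f in images)   # data terms pull u toward each input
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        n = np.sqrt(gx**2 + gy**2 + eps)
        wx, wy = gx / n, gy / n
        g += lam * (-(wx - np.roll(wx, 1, axis=1))
                    - (wy - np.roll(wy, 1, axis=0)))
        u -= tau * g
    return u
```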
