11

Sparsity Motivated Auditory Wavelet Representation and Blind Deconvolution

Adiga, Aniruddha January 2017 (has links) (PDF)
In many scenarios, events such as singularities and transients that carry important information about a signal undergo spreading during acquisition or transmission, and it is important to localize these events. For example, edges in an image or point sources in a microscopy or astronomical image are blurred by the point-spread function (PSF) of the acquisition system, while in a speech signal, the epochs corresponding to glottal closure instants are shaped by the vocal tract response. Such events can be extracted with the help of techniques that promote sparsity, which enables separation of the smooth components from the transient ones. In this thesis, we consider the development of such sparsity-promoting techniques. The contributions of the thesis are three-fold: (i) an auditory-motivated continuous wavelet design and representation, which helps identify singularities; (ii) a sparsity-driven deconvolution technique; and (iii) a sparsity-driven deconvolution technique for reconstruction of finite-rate-of-innovation (FRI) signals. We use the speech signal for illustrating the performance of the techniques in the first two parts and super-resolution microscopy (2-D) for the third part. In the first part, we develop a continuous wavelet transform (CWT) starting from an auditory motivation. Wavelet analysis provides good time and frequency localization, which has made it a popular tool for time-frequency analysis of signals. The CWT is a multiresolution analysis tool that involves decomposition of a signal using a constant-Q wavelet filterbank, akin to the time-frequency analysis performed by the basilar membrane in the peripheral human auditory system. This connection motivated us to develop wavelets that possess auditory localization capabilities. Gammatone functions are extensively used in the modeling of the basilar membrane, but the non-zero average of the functions poses a hurdle. We construct bona fide wavelets from the Gammatone function, called Gammatone wavelets, and analyze their properties such as admissibility, time-bandwidth product, vanishing moments, etc. Of particular interest is the vanishing-moments property, which enables the wavelet to suppress smooth regions in a signal, leading to sparsification. We show how this property of the Gammatone wavelets, coupled with multiresolution analysis, can be employed for singularity and transient detection. Using these wavelets, we also construct equivalent filterbank models and obtain cepstral feature vectors from such a representation. We show that the Gammatone wavelet cepstral coefficients (GWCC) are effective for robust speech recognition compared with mel-frequency cepstral coefficients (MFCC). In the second part, we consider the problem of sparse blind deconvolution (SBD) starting from a signal obtained as the convolution of an unknown PSF and a sparse excitation. The BD problem is ill-posed and the goal is to employ sparsity to arrive at an accurate solution. We formulate the SBD problem within a Bayesian framework. The estimation of the filter and excitation involves optimization of a cost function that consists of an ℓ2 data-fidelity term and an ℓp-norm (p ∈ [0, 1]) regularizer as the sparsity-promoting prior. Since the ℓp-norm is not differentiable at the origin, we consider a smoothed version of the ℓp-norm as a proxy in the optimization. Apart from the regularizer being non-convex, the data term is also non-convex in the filter and excitation, as they are both unknown.
We optimize the non-convex cost using an alternating minimization strategy and develop an alternating ℓp-ℓ2 projections algorithm (ALPA). We demonstrate convergence of the iterative algorithm, analyze in detail the role of the pseudo-inverse solution as an initialization for ALPA, and provide probabilistic bounds on its accuracy considering the presence of noise and the condition number of the linear system of equations. We also consider the case of bounded noise and derive tight tail bounds using the Hoeffding inequality. As an application, we consider the problem of blind deconvolution of speech signals. In the linear model for speech production, voiced speech is assumed to be the result of a quasi-periodic impulse train exciting a vocal-tract filter. The locations of the impulses, or epochs, indicate the glottal closure instants, and the spacing between them the pitch. Hence, the excitation in the case of voiced speech is sparse, and its deconvolution from the vocal-tract filter is posed as an SBD problem. We employ ALPA for SBD and show that the excitation obtained is sparser than the excitations obtained using sparse linear prediction, the smoothed ℓ1/ℓ2 sparse blind deconvolution algorithm, and majorization-minimization-based sparse deconvolution techniques. We also consider the problem of epoch estimation and show that epochs estimated by ALPA in both clean and noisy conditions are closer to the instants indicated by the electroglottograph when compared with the estimates provided by the zero-frequency filtering technique, which is the state-of-the-art epoch estimation technique. In the third part, we consider the problem of deconvolution of a specific class of continuous-time signals called finite-rate-of-innovation (FRI) signals, which are not bandlimited but are specified by a finite number of parameters over an observation interval. The signal is assumed to be a linear combination of delayed versions of a prototypical pulse. The reconstruction problem is posed as a 2-D SBD problem. The kernel is assumed to have a known form but unknown parameters. Given the sampled version of the FRI signal, the delays quantized to the nearest point on the sampling grid are first estimated using a proximal-operator-based alternating ℓp-ℓ2 algorithm (ALPAprox) and then super-resolved to obtain off-grid (OG) estimates using gradient-descent optimization. The overall technique is termed OG-ALPAprox. We show an application of OG-ALPAprox to a particular modality of super-resolution microscopy (SRM) called stochastic optical reconstruction microscopy (STORM). The resolution of the traditional optical microscope is limited by diffraction, a bound termed Abbe's limit. The goal of SRM is to engineer the optical imaging system to resolve structures in specimens, such as proteins, whose dimensions are smaller than the diffraction limit. The specimen to be imaged is tagged or labeled with light-emitting or fluorescent chemical compounds called fluorophores. These compounds specifically bind to proteins and exhibit fluorescence upon excitation. The fluorophores are assumed to be point sources, and the light emitted by them undergoes spreading due to diffraction. STORM employs a sequential approach, wherein in each step only a few fluorophores are randomly excited and the image is captured by a sensor array. The obtained image is diffraction-limited; however, the separation between the fluorophores allows the point sources to be localized with high precision. The localization is performed using Gaussian peak-fitting.
This process of random excitation coupled with localization is performed sequentially, and the results are subsequently consolidated to obtain a high-resolution image. We pose the localization as an SBD problem and employ OG-ALPAprox to estimate the locations. We also report comparisons with the de facto standard Gaussian peak-fitting algorithm and show that the statistical performance of our approach is superior. Experimental results on real data show that the reconstruction quality is on par with that of Gaussian peak-fitting.
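Since ALPA itself is specified in the thesis, the following is only a minimal sketch of the alternating ℓp-ℓ2 structure the abstract describes: an excitation update under a smoothed ℓp penalty (handled here by a single iteratively-reweighted least-squares step, which is an assumption of this sketch) alternated with a least-squares filter update. All function and parameter names are illustrative, not the author's.

```python
import numpy as np
from scipy.linalg import toeplitz, lstsq

def alternating_lp_l2(y, m, p=0.5, eps=1e-3, lam=0.1, n_iter=50):
    """Toy sparse blind deconvolution: y ~ conv(h, x) with the filter h
    (length m) and the sparse excitation x both unknown. Alternates a
    smoothed-l_p excitation update with an l_2 filter update; a sketch
    only, not the ALPA of the thesis."""
    n = len(y)
    h = np.zeros(m); h[0] = 1.0                       # identity-filter initialization
    x = y.astype(float).copy()                        # excitation initialization
    for _ in range(n_iter):
        # Causal convolution matrix of the current filter (n x n)
        H = toeplitz(np.r_[h, np.zeros(n - m)], np.r_[h[0], np.zeros(n - 1)])
        # Excitation update: min ||y - Hx||^2 + lam * sum_i (x_i^2 + eps)^(p/2),
        # linearized with IRLS weights w_i = p * (x_i^2 + eps)^(p/2 - 1)
        w = p * (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
        # Filter update: ordinary least squares with the excitation held fixed
        X = toeplitz(x, np.r_[x[0], np.zeros(m - 1)])
        h = lstsq(X, y)[0]
        h /= np.linalg.norm(h) + 1e-12                # resolve the scale ambiguity
    return x, h
```

The per-iteration normalization of h addresses the scale ambiguity inherent to blind deconvolution; the pseudo-inverse initialization whose accuracy the thesis bounds would replace the identity-filter start used here.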
12

Time-Varying Modeling of Glottal Source and Vocal Tract and Sequential Bayesian Estimation of Model Parameters for Speech Synthesis

January 2018 (has links)
abstract: Speech is generated by articulators acting on a phonatory source. Identification of the phonatory source and recovery of the articulatory geometry are individually challenging, ill-posed problems, called speech separation and articulatory inversion, respectively. There is a trade-off between the decomposition and the recovered articulatory geometry, because multiple articulatory configurations can map to the same produced speech. Moreover, measurements obtained from a microphone alone lack any invasive insight, adding a further challenge to an already difficult problem. A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation. The unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under some specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
13

Blind inverse imaging with positivity constraints

Lecharlier, Loïc 09 September 2014 (has links)
In inverse problems in imaging, the operator or matrix describing the image formation system is generally assumed to be known; equivalently, for a linear system, its impulse response is assumed known. However, this is not a realistic assumption for many practical applications in which this operator is in fact unknown (or only approximately known). One then faces a so-called "blind" inversion problem. For shift-invariant systems, one speaks of "blind deconvolution", since both the original image (or object) and the impulse response must be estimated from the single observed image, which results from a convolution and is corrupted by measurement errors. This problem is notoriously difficult, and to overcome the ambiguities and numerical instabilities inherent in this type of inversion, one must resort to additional information or constraints, such as positivity, which has proved to be a powerful stabilizing lever in non-blind imaging problems. This thesis proposes new blind inversion algorithms in a discrete or discretized setting, assuming that the unknown image, the matrix to be inverted, and the data are positive. The problem is formulated as a (non-convex) optimization problem in which the data-fidelity term to be minimized, modeling either Poisson-type data (Kullback-Leibler divergence) or data corrupted by Gaussian noise (least squares), is augmented with penalty terms on the unknowns of the problem. The optimization strategy consists of alternating multiplicative updates of the image to be reconstructed and of the matrix to be inverted, derived from the minimization of "surrogate" cost functions valid in the positive case. The fairly general framework accommodates several types of penalties, including the (smoothed) total variation of the image. An optional normalization of the impulse response or of the matrix is also applied at each iteration. Convergence results for these algorithms are established in the thesis, both for the decrease of the cost functions and for the convergence of the sequence of iterates to a stationary point. The proposed methodology is successfully validated through numerical simulations on various applications such as blind deconvolution of astronomical images, nonnegative matrix factorization for hyperspectral imaging, and deconvolution of densities in statistics. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
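To make the multiplicative, positivity-preserving alternating scheme concrete, here is a minimal 1-D blind Richardson-Lucy iteration for the Poisson (Kullback-Leibler) case. It follows the general pattern described above but omits the thesis's penalty terms, and all names and defaults are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import toeplitz

def blind_richardson_lucy(y, m, n_iter=50):
    """Blind deconvolution with positivity: y (positive, length n+m-1)
    is modeled as the full convolution of an unknown positive signal x
    (length n) with an unknown positive kernel h (length m). Each
    multiplicative update keeps the iterates nonnegative and decreases
    the Kullback-Leibler data term; the kernel is renormalized to unit
    sum at every iteration."""
    n = len(y) - m + 1
    x = np.full(n, max(y.mean(), 1e-6))
    h = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        H = toeplitz(np.r_[h, np.zeros(n - 1)], np.r_[h[0], np.zeros(n - 1)])  # (n+m-1, n)
        x *= H.T @ (y / (H @ x + 1e-12)) / (H.sum(axis=0) + 1e-12)   # image update
        X = toeplitz(np.r_[x, np.zeros(m - 1)], np.r_[x[0], np.zeros(m - 1)])  # (n+m-1, m)
        h *= X.T @ (y / (X @ h + 1e-12)) / (X.sum(axis=0) + 1e-12)   # kernel update
        h /= h.sum()                                                 # normalization step
    return x, h
```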
14

Modeling and Signal Processing in Dynamic Contrast Enhanced Magnetic Resonance Imaging

Kratochvíla, Jiří January 2018 (has links)
The theoretical part of this work describes perfusion analysis of dynamic contrast-enhanced magnetic resonance imaging, from data acquisition to estimation of perfusion parameters. The main application fields are oncology, cardiology, and neurology. The thesis focuses on quantitative perfusion analysis; specifically, it contributes to solving the main challenge of this method: correct estimation of the contrast-agent concentration sequence in the arterial input of the region of interest (the arterial input function). The goals of the thesis are stated based on a literature review and on the expertise of our group. Blind deconvolution is selected as the method of choice. In the practical part of this thesis, a new method for arterial-input-function identification based on blind deconvolution is proposed. The method is designed for both preclinical and clinical applications, and it was validated on synthetic, preclinical, and clinical data. Furthermore, the longer temporal sampling that blind deconvolution permits was analyzed; it can be exploited for improved spatial resolution and possibly for higher SNR. For easier deployment of the proposed methods into clinical and preclinical use, a software tool for perfusion data processing was designed.
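For orientation, the deconvolution problem underlying perfusion analysis can be written in one line: each tissue concentration curve is the convolution of the arterial input function (AIF) with a tissue-specific residue function, and blind deconvolution estimates the AIF jointly with the residue functions from several such curves. A sketch of the forward model, with placeholder names and sampling step:

```python
import numpy as np

def tissue_concentration(aif, residue, dt):
    """DCE-MRI forward model: C(t) = dt * (AIF * R)(t), sampled on a
    uniform grid with step dt. Blind deconvolution recovers `aif`
    jointly with the residue functions R_i of several regions from
    their measured curves C_i; illustration only."""
    return dt * np.convolve(aif, residue)[:len(aif)]
```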
15

Kernel Estimation Approaches to Blind Deconvolution

Yash Sanghvi (18387693) 19 April 2024 (has links)
<p dir="ltr">The past two decades have seen photography shift from the hands of professionals to that of the average smartphone user. However, fitting a camera module in the palm of your hand has come with its own cost. The reduced sensor size, and hence the smaller pixels, has made the image inherently noisier due to fewer photons being captured. To compensate for fewer photons, we can increase the exposure of the camera but this may exaggerate the effect of hand shake, making the image blurrier. The presence of both noise and blur has made the post-processing algorithms necessary to produce a clean and sharp image. </p><p dir="ltr">In this thesis, we discuss various methods of deblurring images in the presence of noise. Specifically, we address the problem of photon-limited deconvolution, both with and without the underlying blur kernel being known i.e. non-blind and blind deconvolution respectively. For the problem of blind deconvolution, we discuss the flaws of the conventional approach of joint estimation of the image and blur kernel. This approach, despite its drawbacks, has been the go-to method for solving blind deconvolution for decades. We then discuss the relatively unexplored kernel-first approach to solving the problem which is numerically stable than the alternating minimization counterpart. We show how to implement this framework using deep neural networks in practice for both photon-limited and noiseless deconvolution problems. </p>
16

Sparse blind deconvolution in ultrasound imaging using an adaptive CLEAN algorithm

Chira, Liviu-Teodor 17 October 2013 (has links)
Ultrasound medical imaging is continually advancing, particularly in increasing resolution to help physicians better observe and distinguish the examined tissues; a wide range of hardware and signal-processing techniques already exists for this purpose. This work focuses on post-processing techniques for blind deconvolution in ultrasound imaging: an algorithm was implemented that operates in the time domain and takes the envelope signal as its input. It is a blind deconvolution technique able to reconstruct the reflectors and eliminate diffuse speckle noise. Its main steps are: (i) blind estimation of the point-spread function (PSF); (ii) estimation of the reflectors under the assumption that the examined medium is sparse; and (iii) reconstruction of the image by reconvolving the sparse reflector map with an "ideal" PSF. The proposed method was compared with reference techniques in medical image reconstruction using synthetic signals, real ultrasound sequences (1-D), and ultrasound images (2-D) with different statistics. The method, which offers a much shorter execution time than its direct competitors, is suited to images containing a small or moderate number of scatterers.
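For readers unfamiliar with CLEAN, the core loop is only a few lines: find the strongest sample of the envelope, subtract a scaled, shifted copy of the PSF, record the removed amplitude as a point scatterer, and repeat. A minimal 1-D sketch follows; the blind PSF estimation and the final reconvolution with an idealized PSF that the thesis adds are not shown, and all parameter defaults are assumptions.

```python
import numpy as np

def clean_1d(env, psf, gain=0.1, n_iter=500, stop_frac=0.01):
    """Basic CLEAN on a 1-D envelope: returns the sparse scatterer map
    and the residual. `psf` is the (estimated) point-spread function;
    `gain` is the loop gain; iteration stops when the residual peak
    falls below `stop_frac` times the initial peak."""
    res = env.astype(float).copy()
    comp = np.zeros_like(res)                  # sparse map of scatterers
    c = len(psf) // 2                          # PSF center index
    peak0 = np.abs(res).max()
    for _ in range(n_iter):
        i = int(np.argmax(np.abs(res)))
        if np.abs(res[i]) < stop_frac * peak0:
            break
        a = gain * res[i]
        comp[i] += a
        lo, hi = max(0, i - c), min(len(res), i - c + len(psf))
        res[lo:hi] -= a * psf[lo - (i - c):hi - (i - c)]   # subtract shifted PSF
    return comp, res
```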
17

On multiuser deconvolution and source separation

Pavan, Flávio Renê Miranda 22 July 2016 (has links)
Blind source separation and blind deconvolution of multiuser systems have been intensively studied over the last decades, mainly due to the countless possibilities of practical applications. Blind deconvolution in the multiuser case can be understood as a particular case of blind source separation in which the mixing system is convolutive and the sources, which exhibit a finite alphabet, have well-known statistics. Among the current challenges in this area, it is worth noting that obtaining adaptive solutions for the blind source separation problem with convolutive mixtures is not trivial, as it requires advanced mathematical tools and a thorough comprehension of the statistical techniques to be used. When the kind of mixture or the source statistics are unknown, the problem is even more challenging. In the field of statistical signal processing, solutions aimed at specific cases have been proposed. The development of efficient and numerically robust adaptive algorithms for blind source separation, for either instantaneous or convolutive mixtures, remains an open challenge. On the other hand, blind deconvolution of communication channels has been studied since the 1960s and 1970s, and various types of efficient adaptive solutions have since been proposed in this field. A proper understanding of these solutions can suggest a path to further understand the existing solutions for the broader problem of blind source separation and to obtain efficient algorithms in this context. Consequently, in this work we (i) revisit the formulation of the blind source separation and multiuser blind deconvolution problems, as well as the relations between them, (ii) address the existing solutions for blind deconvolution in the multiuser case, verifying their limitations and proposing modifications, resulting in algorithms with good separation performance and numerical robustness, and (iii) relate the kurtosis-based criteria of multiuser blind deconvolution to those of blind source separation.
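One of the classic kurtosis-related criteria alluded to here is the constant-modulus criterion; a minimal single-user, real-valued adaptive equalizer is sketched below to fix ideas. It is a standard textbook algorithm, not one of the thesis's multiuser proposals, and the defaults are arbitrary.

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, r2=1.0):
    """Constant-modulus algorithm: adapt an FIR equalizer w by
    stochastic gradient descent on E[(|y|^2 - r2)^2], which for
    sub-Gaussian finite-alphabet sources is closely related to
    kurtosis-based (Shalvi-Weinstein-type) deconvolution criteria."""
    w = np.zeros(n_taps); w[n_taps // 2] = 1.0       # center-spike initialization
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                    # regressor, most recent sample first
        y[n] = w @ u                                 # equalizer output
        w -= mu * y[n] * (y[n] ** 2 - r2) * u        # CMA(2,2) update
    return w, y
```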
19

Contributions to image restoration: from numerical optimization strategies to blind deconvolution and shift-variant deblurring

Mourya, Rahul Kumar 01 February 2016 (has links)
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be avoided or corrected up to a significant level; however, the quality of acquired images is still not adequate for many applications. This calls for the development of more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of different aspects of image restoration. It starts with a generic overview of imaging systems and points out the possible degradations occurring in images together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view, and it can then simply be modeled as a convolution. In many practical cases, however, the blur varies throughout the field of view, and modeling it involves a trade-off between accuracy and computational effort. The first part of this thesis presents a detailed discussion of the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be seen as a Bayesian inference problem, and how it then turns into a large-scale numerical optimization problem. The second part of the thesis therefore considers a generic optimization problem applicable to many domains, and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as the state-of-the-art algorithms (verified by several numerical experiments), but without the hassle of parameter tuning, which is a great relief for users. The third part of the thesis presents an in-depth discussion of the shift-invariant blind image deblurring problem, suggesting different ways to reduce its ill-posedness, and then proposes a blind image deblurring method that uses an image decomposition for the restoration of astronomical images. The proposed method is based on an alternating estimation approach. The restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good candidate for astronomical applications after certain modifications and improvements. The last part of the thesis extends the ideas of the shift-variant blur model presented in the first part. It gives a detailed description of a flexible approximation of shift-variant blur, with its implementation aspects and computational cost, and presents a shift-variant image deblurring method with illustrations on synthetically blurred images. It then shows how the characteristics of shift-variant blur due to optical aberrations can be exploited by PSF estimation methods, describes a PSF calibration method for a simple experimental camera suffering from optical aberration, and shows results for shift-variant deblurring of images captured by the same camera. The results are promising and suggest that the two steps can be combined to achieve shift-variant blind image deblurring, the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work.
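The windowed-interpolation approximation of shift-variant blur discussed in the first part admits a very compact expression: cover the field of view with smooth masks that sum to one, attach a local PSF to each, and sum the masked convolutions. A sketch under those assumptions (names are mine, not the thesis's):

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(img, psfs, masks):
    """Fast approximation of spatially varying blur: `masks` is a list
    of smooth weighting windows summing to one over the image, and
    `psfs` the corresponding local point-spread functions. The blurred
    image is the sum of the windowed convolutions."""
    out = np.zeros_like(img, dtype=float)
    for psf, mask in zip(psfs, masks):
        out += fftconvolve(img * mask, psf, mode="same")
    return out
```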
20

Blind Image Deconvolution of Electron Microscopy Images

Schlorová, Hana January 2017 (has links)
In recent years, blind deconvolution methods have spread into a wide range of technical and scientific fields, especially now that they are no longer computationally limited. Signal-processing techniques based on blind deconvolution promise to improve the quality of results obtained with electron microscope imaging. The main task of this thesis is to formulate the problem of blind deconvolution of electron microscopy images and to find a suitable solution, followed by its implementation and a comparison with the function available in the MATLAB Image Processing Toolbox. The overall goal is thus to create, in the MATLAB environment, an algorithm that corrects the defects introduced by the imaging process. The proposed approach is based on regularization techniques for blind deconvolution.
