11 |
Modelování a analýza signálů v zobrazování perfúze magnetickou rezonancí / Modeling and Signal Processing in Dynamic Contrast Enhanced Magnetic Resonance Imaging
Kratochvíla, Jiří, January 2018
The theoretical part of this work describes perfusion analysis of dynamic contrast enhanced magnetic resonance imaging, from data acquisition to estimation of perfusion parameters. The main application fields are oncology, cardiology and neurology. The thesis is focused on quantitative perfusion analysis; specifically, it contributes to solving the main challenge of this method: correct estimation of the contrast-agent concentration sequence in the arterial input of the region of interest (the arterial input function, AIF). The goals of the thesis are stated based on a literature review and on the expertise of our group. Blind deconvolution is selected as the method of choice. In the practical part of this thesis, a new method for arterial input function identification based on blind deconvolution is proposed. The method is designed for both preclinical and clinical applications and was validated on synthetic, preclinical and clinical data. Furthermore, the longer temporal sampling made possible by blind deconvolution was analyzed; it can be exploited for improved spatial resolution and possibly for higher SNR. For easier deployment of the proposed methods into clinical and preclinical use, a software tool for perfusion data processing was designed.
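The model underlying this analysis treats the measured tissue contrast-agent concentration as the convolution of the AIF with a tissue residue function; blind deconvolution estimates the AIF from the tissue curves themselves. Below is a minimal sketch, in Python, of the forward model and of a non-blind, Tikhonov-regularized deconvolution. The gamma-variate AIF, the Tofts-like residue function, and the regularization weight are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

# Toy DCE-MRI forward model: tissue concentration = AIF convolved with
# a tissue residue function (all shapes and values are assumptions).
t = np.arange(0.0, 120.0, 1.0)               # time axis [s]
dt = t[1] - t[0]
aif = (t / 10.0) * np.exp(-t / 10.0)         # gamma-variate AIF (illustrative)
Fp, Tc = 0.02, 30.0                          # toy plasma flow and mean transit time
residue = Fp * np.exp(-t / Tc)               # exponential residue function
c_tissue = dt * np.convolve(aif, residue)[: len(t)]

# Non-blind step: recover the residue function from c_tissue and a known
# AIF with a Tikhonov-regularized lower-triangular convolution matrix.
n = len(t)
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                   for i in range(n)])
lam = 1e-2 * np.linalg.norm(A, 2)            # assumed regularization weight
residue_hat = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ c_tissue)

# The blind method of the thesis does not know `aif`: multichannel blind
# deconvolution alternates AIF and residue updates over many tissue curves.
print("max abs error:", float(np.abs(residue_hat - residue).max()))
```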
|
12 |
Kernel Estimation Approaches to Blind Deconvolution
Yash Sanghvi (18387693), 19 April 2024
<p dir="ltr">The past two decades have seen photography shift from the hands of professionals to that of the average smartphone user. However, fitting a camera module in the palm of your hand has come with its own cost. The reduced sensor size, and hence the smaller pixels, has made the image inherently noisier due to fewer photons being captured. To compensate for fewer photons, we can increase the exposure of the camera but this may exaggerate the effect of hand shake, making the image blurrier. The presence of both noise and blur has made the post-processing algorithms necessary to produce a clean and sharp image. </p><p dir="ltr">In this thesis, we discuss various methods of deblurring images in the presence of noise. Specifically, we address the problem of photon-limited deconvolution, both with and without the underlying blur kernel being known i.e. non-blind and blind deconvolution respectively. For the problem of blind deconvolution, we discuss the flaws of the conventional approach of joint estimation of the image and blur kernel. This approach, despite its drawbacks, has been the go-to method for solving blind deconvolution for decades. We then discuss the relatively unexplored kernel-first approach to solving the problem which is numerically stable than the alternating minimization counterpart. We show how to implement this framework using deep neural networks in practice for both photon-limited and noiseless deconvolution problems. </p>
|
13 |
Déconvolution aveugle parcimonieuse en imagerie échographique avec un algorithme CLEAN adaptatif / Sparse blind deconvolution in ultrasound imaging using an adaptative CLEAN algorithm
Chira, Liviu-Teodor, 17 October 2013
Medical ultrasound imaging is continuously evolving, notably in post-processing, where the goal is to improve image resolution and contrast and thus help physicians better distinguish the examined tissues and refine the diagnosis. A wide range of hardware and software techniques already exists. This work focuses on so-called blind deconvolution techniques that operate in the time domain and use the signal envelope as input. They are able to reconstruct sparse images, i.e., maps of scatterers free of speckle noise. The main steps of such methods are: (i) blind estimation of the point spread function (PSF), (ii) estimation of the scatterers under the assumption that the explored medium is sparse, and (iii) image reconstruction by reconvolution of the sparse scatterer map with an "ideal" PSF. The proposed method was compared with reference techniques in medical imaging using synthetic signals, real ultrasound sequences (1D), and ultrasound images (2D) with different statistics. The method, which offers a much lower execution time than competing techniques, is suited to images containing a low to moderate density of scatterers.
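The CLEAN core of such a method, iteratively locating the strongest reflector and subtracting a scaled, shifted copy of the PSF from the residual, can be sketched in a few lines. The toy 1D version below assumes the PSF has already been estimated blindly (step i); the gain, iteration count, and stopping threshold are illustrative assumptions.

```python
import numpy as np

def clean_1d(envelope, psf, gain=0.5, n_iter=200, threshold=1e-2):
    # Iteratively pick the strongest reflector and subtract the shifted,
    # scaled PSF from the residual (assumes a known, shift-invariant PSF).
    residual = envelope.astype(float).copy()
    scatterers = np.zeros_like(residual)
    center = int(np.argmax(np.abs(psf)))
    peak0 = np.max(np.abs(residual))
    for _ in range(n_iter):
        pos = int(np.argmax(np.abs(residual)))
        if np.abs(residual[pos]) < threshold * peak0:
            break
        amp = gain * residual[pos]
        scatterers[pos] += amp
        shift = pos - center
        lo, hi = max(0, shift), min(len(residual), shift + len(psf))
        residual[lo:hi] -= amp * psf[lo - shift: hi - shift]
    return scatterers, residual

# Step (iii) would then reconvolve `scatterers` with a narrow "ideal" PSF.
```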
|
14 |
Sobre a desconvolução multiusuário e a separação de fontes. / On multiuser deconvolution and source separation.
Pavan, Flávio Renê Miranda, 22 July 2016
Blind source separation and blind deconvolution of multiuser systems have been intensively studied over the last decades, mainly due to the countless possibilities of practical application. Blind deconvolution in the multiuser case can be understood as a particular case of blind source separation in which the mixing system is convolutive and the sources, which exhibit a finite alphabet, have well-known statistics. Among the current challenges in this area, it is worth noting that obtaining adaptive solutions for the blind source separation problem with convolutive mixtures is not trivial, as it requires advanced mathematical tools and a thorough comprehension of the statistical techniques to be used. When the kind of mixture or the source statistics are unknown, the problem is even more challenging. In the field of statistical signal processing, solutions aimed at specific cases have been proposed. The development of efficient and numerically robust adaptive algorithms for blind source separation, for either instantaneous or convolutive mixtures, remains an open challenge. On the other hand, blind deconvolution of communication channels has been studied since the 1960s and 1970s, and various efficient adaptive solutions have since been proposed in this field. A proper understanding of these solutions can suggest a path toward a deeper understanding of the existing solutions for the broader problem of blind source separation and toward efficient algorithms in this context. Consequently, in this work we (i) revisit the formulation of the blind source separation and multiuser blind deconvolution problems, as well as the relations between them, (ii) address the existing solutions for blind deconvolution in the multiuser case, verifying their limitations and proposing modifications, which results in algorithms with good separation performance and numerical robustness, and (iii) relate the kurtosis-based criteria of blind multiuser deconvolution to those of blind source separation.
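As a concrete example of this family of criteria, the constant modulus algorithm (CMA) is a classical adaptive blind equalizer whose cost is closely related to kurtosis-based criteria for finite-alphabet sources. The sketch below is a single-user toy, not one of the thesis's proposed algorithms; the channel, step size, and equalizer length are assumptions, and the source is recovered only up to delay and sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-user setup: BPSK source through an unknown convolutive channel.
n = 20000
s = rng.choice([-1.0, 1.0], size=n)
h = np.array([1.0, 0.4, -0.2])               # assumed channel
x = np.convolve(s, h)[:n] + 0.01 * rng.standard_normal(n)

L, mu, R = 11, 1e-3, 1.0                     # equalizer length, step size, modulus
w = np.zeros(L)
w[L // 2] = 1.0                              # center-spike initialization
for k in range(L, n):
    u = x[k - L + 1: k + 1][::-1]            # regressor, most recent sample first
    y = w @ u
    w -= mu * (y * y - R) * y * u            # stochastic-gradient CMA update

y_out = np.convolve(x, w)[:n]
# Dispersion around the source modulus; recovery is up to delay and sign.
print("residual dispersion:", float(np.mean((y_out[5000:] ** 2 - R) ** 2)))
```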
|
16 |
Contributions to image restoration : from numerical optimization strategies to blind deconvolution and shift-variant deblurring / Contributions pour la restauration d'images : des stratégies d'optimisation numérique à la déconvolution aveugle et à la correction de flous spatialement variables
Mourya, Rahul Kumar, 1 February 2016
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be corrected up to a significant level; however, the quality of acquired images is still inadequate for many applications, which calls for more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of different aspects of image restoration. It starts with a generic overview of imaging systems and points out the possible degradations occurring in images together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view, and it can then simply be modeled as a convolution. In many practical cases, however, the blur varies throughout the field of view, and modeling it involves a trade-off between accuracy and computational effort. The first part of the thesis presents a detailed discussion of the modeling of shift-variant blur and its fast approximations, and then describes a generic image formation model. Subsequently, the thesis shows how an image restoration problem can be cast as a Bayesian inference problem and how this turns into a large-scale numerical optimization problem. The second part therefore considers a generic optimization problem applicable to many domains and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as state-of-the-art algorithms (verified by several numerical experiments) but free of the hassle of tuning algorithm-specific parameters, which is a great relief for users. The third part presents an in-depth discussion of the shift-invariant blind image deblurring problem (joint estimation of a shift-invariant blur and a sharper image), suggests different ways to reduce its ill-posedness, and proposes a blind deblurring method for astronomical images based on a decomposition of the image into point and extended sources and on an alternating estimation approach. The restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good candidate for astronomical applications after certain modifications and improvements. The last part extends the shift-variant blur model of the first part: it gives a detailed description of a flexible approximation of shift-variant blur with its implementation aspects and computational cost, presents a shift-variant image deblurring method with illustrations on synthetically blurred images, and shows how the characteristics of shift-variant blur due to optical aberrations can be exploited for PSF estimation. A PSF calibration method for a simple experimental camera suffering from optical aberration is described, and results on shift-variant deblurring of images captured by the same camera are shown; inverting the estimated blur significantly improves image quality. The results suggest that the two steps, PSF calibration and restoration, are the building blocks for achieving shift-variant blind image deblurring (i.e., without prior calibration), the long-term goal of this thesis. The thesis ends with conclusions and suggestions for future work.
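The fast shift-variant blur approximations discussed in the first part typically partition the field of view into tiles, attach one local PSF to each tile, and blend the locally convolved results with smooth interpolation windows. Below is a minimal sketch under simplifying assumptions (vertical strips only, triangular partition-of-unity windows, assumed Gaussian PSFs); it illustrates the idea, not the thesis's specific approximation.

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(image, psfs):
    # Partition-of-unity blend of per-tile stationary convolutions: the
    # image is split into vertical strips (a simplifying assumption),
    # each strip has its own PSF, and triangular windows that sum to one
    # make the blur vary smoothly across the field of view.
    h, w = image.shape
    n_tiles = len(psfs)
    xs = np.arange(w)
    centers = (np.arange(n_tiles) + 0.5) * w / n_tiles
    width = w / n_tiles
    wins = [np.clip(1.0 - np.abs(xs - c) / width, 0.0, 1.0) for c in centers]
    total = np.sum(wins, axis=0) + 1e-12
    out = np.zeros_like(image, dtype=float)
    for win, psf in zip(wins, psfs):
        out += fftconvolve(image * (win / total)[None, :], psf, mode="same")
    return out

def gauss_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

# Toy usage: blur grows from left to right across the field of view.
img = np.random.default_rng(2).random((128, 128))
blurred = shift_variant_blur(img, [gauss_psf(s) for s in (0.5, 1.0, 2.0, 4.0)])
```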
|
17 |
Slepá dekonvoluce obrazů kalibračních vzorků z elektronového mikroskopu / Blind Image Deconvolution of Electron Microscopy Images
Schlorová, Hana, January 2017
In recent years, blind deconvolution methods have spread into a wide range of technical and scientific fields, especially now that they are no longer computationally limited. Signal processing techniques based on blind deconvolution promise improvements in the quality of results obtained by electron microscope imaging. The main task of this thesis is to formulate the problem of blind deconvolution of electron microscopy images and to find a suitable solution, followed by its implementation and a comparison with the available function of the Matlab Image Processing Toolbox. The overall goal is thus to create an algorithm, implemented in Matlab, that corrects the defects arising in the imaging process. The proposed approach is based on regularization techniques for blind deconvolution.
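For reference, Matlab's deconvblind is based on an alternating, Richardson-Lucy-type maximum-likelihood scheme that updates the image and the PSF in turn. A toy version of that alternating scheme is sketched below; the PSF size, initializations, and iteration counts are illustrative assumptions rather than the thesis's tuned, regularized algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(y, psf_size=(9, 9), n_outer=20, n_inner=5, eps=1e-12):
    # Alternate multiplicative RL updates of the PSF (image fixed) and of
    # the image (PSF fixed); toy version without damping or acceleration.
    x = np.full(y.shape, y.mean())                # image estimate
    k = np.ones(psf_size) / np.prod(psf_size)     # flat PSF estimate
    cy, cx = y.shape[0] // 2, y.shape[1] // 2
    hy, hx = psf_size[0] // 2, psf_size[1] // 2
    for _ in range(n_outer):
        for _ in range(n_inner):                  # PSF update
            ratio = y / (fftconvolve(x, k, mode="same") + eps)
            corr = fftconvolve(ratio, x[::-1, ::-1], mode="same")
            k = k * corr[cy - hy: cy + hy + 1, cx - hx: cx + hx + 1]
            k = np.clip(k, 0.0, None)
            k = k / (k.sum() + eps)               # keep PSF nonnegative, unit sum
        for _ in range(n_inner):                  # image update
            ratio = y / (fftconvolve(x, k, mode="same") + eps)
            x = x * fftconvolve(ratio, k[::-1, ::-1], mode="same")
    return x, k
```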
|
18 |
Computational Advancements for Solving Large-scale Inverse Problems
Cho, Taewon, 10 June 2021
For many scientific applications, inverse problems have played a key role by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global and medical imaging problems, such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimations by applying prior information about the unknowns via Bayesian inference. By combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimations in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or by estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimations can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification. / Doctor of Philosophy / For many scientific applications, inverse problems have played a key role by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in global problems such as greenhouse gas tracking, where estimating the amount of greenhouse gas added to or removed from the atmosphere becomes ever more difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites), so the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). In tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply the developed methods to many numerical applications, such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of the CO2 output of living organisms.
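The hybrid projection idea referred to above combines a Krylov-subspace projection (Golub-Kahan bidiagonalization, as in LSQR) with variational regularization applied to the small projected problem. A minimal sketch with a fixed Tikhonov weight follows; the dissertation's generalized hybrid methods instead select regularization and hyper-parameters adaptively.

```python
import numpy as np

def hybrid_lsqr_tikhonov(A, b, n_iter=30, lam=1e-2):
    # Golub-Kahan bidiagonalization builds the Krylov basis; Tikhonov
    # regularization is applied to the small (n_iter-sized) projected
    # problem. A fixed lam is an assumption of this sketch.
    m, n = A.shape
    U = np.zeros((m, n_iter + 1))
    V = np.zeros((n, n_iter))
    B = np.zeros((n_iter + 1, n_iter))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for k in range(n_iter):
        v = A.T @ U[:, k] - (B[k, k - 1] * V[:, k - 1] if k > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, k] = v / alpha
        B[k, k] = alpha
        u = A @ V[:, k] - alpha * U[:, k]
        bk = np.linalg.norm(u)
        U[:, k + 1] = u / bk
        B[k + 1, k] = bk
    # Projected Tikhonov problem: min ||B y - beta e1||^2 + lam^2 ||y||^2
    rhs = np.zeros(n_iter + 1)
    rhs[0] = beta
    y = np.linalg.solve(B.T @ B + lam ** 2 * np.eye(n_iter), B.T @ rhs)
    return V @ y
```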
|
19 |
Škálování arteriální vstupní funkce v DCE-MRI / Scaling of arterial input function in DCE-MRI
Holeček, Tomáš, Unknown Date
Perfusion magnetic resonance imaging is a modern diagnostic method used mainly in oncology. In this method, a contrast agent is injected into the subject, and the progression of its concentration in the affected area is then continuously monitored over time. Correct determination of the arterial input function (AIF) is very important for perfusion analysis. One possibility is to model the AIF by multichannel blind deconvolution, but the estimated AIF then needs to be scaled. This master's thesis focuses on the description of scaling methods and their influence on perfusion parameters, depending on the AIF model used, in different tissues.
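Blind deconvolution recovers the AIF only up to an unknown multiplicative constant, which is why scaling is needed. One simple family of scaling methods, sketched below under assumed values, rescales the estimated AIF so that a reference quantity, here an area under the curve, matches a population or measured value; perfusion parameter estimates then scale by the inverse factor. This is an illustration of the principle, not the specific methods compared in the thesis.

```python
import numpy as np

def scale_aif(aif_est, dt, auc_reference):
    # Rescale a blindly estimated AIF so that its area under the curve
    # matches a reference value (population-based or measured); the
    # choice of reference quantity is one of several possible methods.
    auc_est = float(np.sum(aif_est) * dt)
    scale = auc_reference / auc_est
    return aif_est * scale, scale

# Toy usage: a gamma-variate AIF recovered up to an arbitrary factor.
t = np.arange(0.0, 300.0, 1.0)
dt = t[1] - t[0]
aif_true = (t / 15.0) * np.exp(-t / 15.0)
aif_est = 3.7 * aif_true                     # unknown scale from blind deconvolution
aif_scaled, s = scale_aif(aif_est, dt, float(np.sum(aif_true) * dt))
# Perfusion parameters estimated with the unscaled AIF are divided by s.
```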
|
20 |
Slepá Dekonvoluce Obrazu ve STEM Módu Elektronového Mikroskopu / Blind Image Deconvolution in STEM mode of Electron Microscope
Valterová, Eva, January 2018
Blind deconvolution is a method in which the point spread function and the true image are reconstructed simultaneously. The aim of this thesis is to present various blind deconvolution methods and to find an optimal method for reconstructing the original image and the point spread function. An alternating minimization algorithm was chosen as the most suitable blind deconvolution method; it was modified and tested. The properties of the proposed algorithm were tested on artificially degraded data and on real data acquired with a scanning transmission electron microscope. The algorithm's performance was assessed with several evaluation criteria. The limitations of the algorithm were identified, thereby specifying its range of use.
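To illustrate the flavor of an alternating-minimization approach, the sketch below alternates closed-form, Tikhonov-regularized updates of the image and kernel spectra in the Fourier domain, with a simple support/nonnegativity projection of the kernel. The penalties, support size, and iteration counts are illustrative assumptions; this is not the thesis's exact algorithm.

```python
import numpy as np

def alternating_min_deconv(y, psf_shape=(9, 9), n_outer=30, lam_x=1e-2, lam_k=1e-1):
    # Each quadratic subproblem decouples per frequency and has a closed
    # form; the kernel is then projected to a small support with
    # nonnegativity and unit sum. Circular convolution is assumed.
    Y = np.fft.fft2(y)
    X = Y.copy()                                  # image spectrum estimate
    k = np.zeros(y.shape)
    k[0, 0] = 1.0                                 # delta PSF at the origin
    K = np.fft.fft2(k)
    for _ in range(n_outer):
        # image step: argmin_X ||K X - Y||^2 + lam_x ||X||^2
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam_x)
        # kernel step: argmin_K ||X K - Y||^2 + lam_k ||K||^2
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam_k)
        kk = np.real(np.fft.ifft2(K))
        mask = np.zeros_like(kk)
        mask[: psf_shape[0], : psf_shape[1]] = 1.0  # small support (toy choice)
        kk = np.clip(kk * mask, 0.0, None)
        kk = kk / (kk.sum() + 1e-12)
        K = np.fft.fft2(kk)
    return np.real(np.fft.ifft2(X)), kk

def psnr(ref, est):
    # One of several possible evaluation criteria for synthetic tests.
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```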
|