71

Méthodes et structures non locales pour la restauration d'images et de surfaces 3D / Non local methods and structures for images and 3D surfaces restoration

Guillemot, Thierry 03 February 2014
In recent years, digital acquisition technologies have improved steadily, producing data of ever finer quality. However, the acquired signal remains corrupted by defects that cannot be corrected in hardware and require adapted restoration methods. Until the mid-2000s, these approaches relied only on local processing of the damaged signal. With the improvement of computing performance, the filter support could be extended to the entire acquired dataset by exploiting its self-similar nature. These non-local approaches have mainly been used to restore regular, structured data such as images; in the extreme case of irregular, unstructured data such as 3D point sets, their adaptation has so far received little attention. With the increasing amount of data exchanged over communication networks, new non-local methods have recently been proposed that improve restoration quality by using an a priori model extracted from large sample sets. However, such methods remain too costly in time and memory. In this thesis, we first extend non-local methods to 3D point sets by defining a point-set surface able to exploit their self-similar nature. We then introduce a new flexible and generic data structure, the CovTree, able to learn the distributions of a large set of samples within a limited memory budget. Finally, we generalize collaborative restoration methods applied to 2D and 3D data, using our CovTree to learn an a priori statistical model from a large dataset.
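As background for the non-local principle this abstract builds on, the sketch below shows the classic non-local means filter on a 2D image: each pixel is averaged with pixels whose surrounding patches look similar, wherever they lie in the search window. It is a minimal NumPy illustration of self-similarity-based restoration, not the thesis's point-set surface or CovTree; the patch size, search radius, and filtering parameter h are illustrative choices.

```python
import numpy as np

def nonlocal_means(img, patch=3, search=7, h=0.1):
    """Minimal non-local means: each pixel is replaced by a weighted
    average of pixels whose surrounding patches are similar, which is
    the self-similarity principle the abstract refers to."""
    pad = patch // 2
    padded = np.pad(img, pad + search, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)      # patch distance
                    w = np.exp(-d2 / (h * h))            # similarity weight
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out

# Toy usage: denoise a small synthetic image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nonlocal_means(noisy)
```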
72

Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction

Buchholz, Tim-Oliver 12 August 2022
In this thesis I use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. I then present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). The following paragraphs briefly summarize the individual contributions. Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and used to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high-quality images. With SEM-CARE I show how this approach can be applied directly to SEM data, allowing samples to be scanned faster and resulting in 40- to 50-fold imaging speedups for SEM. In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, the lack of contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult and often manual. To facilitate downstream analysis and manual browsing of cryo tomograms, I present cryoCARE, a Noise2Noise-based (Lehtinen et al. 2018) denoising method able to restore high-contrast, low-noise tomograms from sparse-view, low-dose tilt-series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin. Next, I discuss the problem of self-supervised image denoising. cryoCARE exploits the fact that modern cryo TEM cameras acquire multiple low-dose images, so the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply for lack of access to the imaging system used. In such cases we have to fall back to self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012). In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform, in which each prefix encodes the whole image at reduced resolution; I call it the Fourier Domain Encoding (FDE). I use FIT with FDEs and present proofs of concept for super-resolution and for tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging.
Sparse-view imaging is used to keep the total exposure of the imaged sample to a minimum, by only acquiring a limited number of projection images. However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in the Fourier space and visible as streaking artefacts in real image space. I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients. Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source. [Table of contents omitted; chapters: Introduction; Denoising in Electron Microscopy; Noise2Void: Self-Supervised Denoising; Fourier Image Transformer; Conclusions and Outlook.]
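To make the self-supervised training idea above concrete, the sketch below illustrates the blind-spot masking step at the heart of Noise2Void as described in the abstract: random pixels are replaced in the network input by a neighbouring value, and the loss is evaluated only at those positions against the original noisy values, so a single noisy image serves as both input and target. This is a simplified illustration, not the authors' published implementation; the mask count and neighbourhood radius are arbitrary example values.

```python
import numpy as np

def n2v_mask(noisy, n_masked=64, radius=2, rng=None):
    """Noise2Void-style blind-spot masking (sketch): select random pixel
    positions, remember their noisy values as training targets, and
    overwrite them in the network input with a randomly chosen
    neighbouring value so the network cannot simply copy its input."""
    rng = rng or np.random.default_rng()
    inp = noisy.copy()
    H, W = noisy.shape
    ys = rng.integers(0, H, n_masked)
    xs = rng.integers(0, W, n_masked)
    targets = noisy[ys, xs].copy()
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:          # never pick the pixel itself
            dy = rng.integers(-radius, radius + 1)
            dx = rng.integers(-radius, radius + 1)
        inp[y, x] = noisy[np.clip(y + dy, 0, H - 1), np.clip(x + dx, 0, W - 1)]
    return inp, (ys, xs), targets

# Toy usage on a random "noisy" image.
rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
inp, (ys, xs), targets = n2v_mask(noisy, rng=rng)
# During training, the loss is computed only at the masked positions:
# loss = mean((network(inp)[ys, xs] - targets) ** 2)
```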
73

"Är det H&M eller luvtröjan?" : En kvalitativ fallstudie om H&Ms kriskommunikation och deras luvtröja ”Coolest monkey in the Jungle” / "Was it H&M or the hoodie?" : A qualitative case study about H&Ms crisis communicaton and their hoodie "Coolest monkey in the Jungle"

Seuwani, Amir, Måård, Anton January 2022
The case study ”Är det H&M eller luvtröjan?” analyzes the crisis communication that took place during H&M's crisis in early 2018. The crisis stems from the controversial choice of letting a dark-skinned child wear a hoodie with the print “Coolest monkey in the jungle”. H&M was quickly criticized by the media for racism. To address the situation, H&M published an official press release on its website and several public statements on its social media channels. With the main purpose of the case study being to analyze H&M's official statements about the crisis, we identify which strategies were applied to reduce potential reputational damage to H&M. The case study focuses on three questions based on the crisis that occurred: “Which rhetorical appeals were used in the official statements published by H&M?”, “Which identifiable strategies from SCCT and IRT were used in the official statements published by H&M?”, and “Which actions were implemented by H&M after the crisis, and what effect does that have on the brand?”. A rhetorical analysis was used to identify which rhetorical appeals (ethos, logos, and pathos) were used in H&M's official statements and, furthermore, to understand what the use of certain appeals means for the crisis and its communication. The crisis communication theories used are situational crisis communication theory (SCCT) and image restoration theory (IRT), both of which propose strategies to use in cases of crisis. To answer the second question, the official statements were analyzed and categorized according to which strategy they corresponded to. The last question was not based on theory and was answered by following reporting on H&M's actions after the crisis. The results show that H&M mostly used pathos to persuade the organization's stakeholders and to show regret about the situation, a pattern found in all the official statements analyzed. Furthermore, the strategies most often identified in the official statements were sympathy, regret, and apology, which overall show regret from the organization. Regarding reputational damage, H&M has since the crisis hired a global head of diversity and inclusion to prevent similar events from occurring.
74

Game publishers and their constant need for damage control : A Content Analysis of Activision Blizzard's communication during their public-relations crises

Moberg, Albin January 2024
Big organisations and companies in game development and distribution have in recent years become involved in public-relations crises. Past cases show that dissatisfied consumers can have an impact on an organisation's image. To find out how an organisation can stop or mitigate a damaged image, an analysis was conducted of Activision Blizzard's communication to the public during its public-relations crises, connected to strategic communication theories. The method used in the study was a content analysis with a deductive approach, critically analysing the messages communicated during the organisation's public-relations crises. The analysis identified both successful communication and communication that could have been improved with the strategic communication theories. This gives crisis managers another resource for constructing messages during a crisis to either stop or mitigate a damaged image. The study also gives an example of how to critically analyse an organisation's crisis communication.
75

Deep Learning Approaches to Low-level Vision Problems

Liu, Huan January 2022
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep-learning-based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance. Besides, under the setting of unsupervised learning, the general deep learning approach cannot be used. In this thesis, we investigate and address several problems in low-level vision using the proposed approaches. To learn a better mapping from the existing data, an indirect domain shift mechanism is proposed that adds explicit constraints inside the neural network for single image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance. Beyond learning an improved mapping from inputs to targets, three problems in the unsupervised setting are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is trained under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the performance of the teacher network is far better than that of the student network, a knowledge distillation approach is proposed to help improve the mapping learned by the student. For single image dehazing, a network trained on a particular dataset cannot handle different types of haze patterns, so the problem is formulated as a multi-domain dehazing problem. To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision. In a lossy compression system, the target distribution can differ from that of the source, and ground truths are not available for reference. Thus, the objective is to transform the source into the target under a rate constraint, which generalizes optimal transport. To address this problem, the trade-off between compression rate and minimal achievable distortion is analysed theoretically, in the cases with and without common randomness. A deep learning approach is also developed using our theoretical results to address super-resolution and denoising tasks. Extensive experiments and analyses have been conducted to prove the effectiveness of the proposed deep-learning-based methods in handling problems in low-level vision. / Thesis / Doctor of Philosophy (PhD)
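As an illustration of the teacher-student idea described above for monocular depth estimation, the sketch below shows a generic distillation objective: the student keeps its own unsupervised (photometric) loss and is additionally pulled toward the depth map predicted by the stronger binocular teacher. This is a generic formulation, not the exact loss used in the thesis; the L1 distillation term and the weighting factor alpha are assumptions.

```python
import numpy as np

def distillation_loss(student_depth, teacher_depth, photometric_loss, alpha=0.5):
    """Generic teacher-student distillation objective (sketch, not the
    thesis formulation): combine the student's own unsupervised loss with
    an L1 penalty toward the teacher's depth prediction."""
    distill = np.mean(np.abs(student_depth - teacher_depth))
    return alpha * photometric_loss + (1.0 - alpha) * distill

# Toy usage with dummy depth maps and a dummy photometric loss value.
loss = distillation_loss(np.full((4, 4), 2.0), np.full((4, 4), 1.8),
                         photometric_loss=0.3)
```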
76

線性動態模糊影像之研究 / A study of linear motion blurred image

吳諭忠, Wu, Yu Chung Unknown Date
Thanks to rapid progress in digital camera technology, collecting digital images has become convenient and low-cost, yet blurred images frequently appear because of camera shake or moving objects. When the blur is caused by linear relative motion between the object and the camera during the exposure, it is called linear motion blur. Mathematically, a blurred image is expressed as the convolution of a point spread function with the original image. This study focuses on estimating the blur parameters of the point spread function, using the Radon transform. We first review two existing methods and examine the applicability and necessity of the noise-removal steps they employ. To improve on them, a circle restriction and a moving-average method are applied in the estimation procedure. Extensive experiments show that the proposed method produces more accurate estimates and better performance in image restoration.
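To illustrate the forward model stated in the abstract (a blurred image is the convolution of the original image with a point spread function), the sketch below builds a linear motion-blur PSF from a length and an angle and applies it to an image. The blur-parameter estimation via the Radon transform, the circle restriction, and the moving average are the thesis's contributions and are not reproduced here; the kernel construction and parameter values below are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def linear_motion_psf(length, angle_deg, size=None):
    """Build a normalized linear motion-blur point spread function: a line
    segment of the given length and orientation rasterized onto a small
    kernel. (length, angle) are the blur parameters that the thesis
    estimates from the blurred image via the Radon transform."""
    size = size or (length if length % 2 == 1 else length + 1)
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        row = int(round(c - t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        psf[row, col] = 1.0
    return psf / psf.sum()

# Forward model: blurred = original (*) PSF, as stated in the abstract.
rng = np.random.default_rng(1)
original = rng.random((64, 64))
blurred = convolve2d(original, linear_motion_psf(9, 30.0),
                     mode="same", boundary="symm")
```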
77

Restauração cega de imagens: soluções baseadas em algoritmos adaptativos. / Blind image restoration: solutions based on adaptive algorithms.

Silva, Daniela Brasil 24 May 2018
The goal of blind image deconvolution is to restore a degraded image without using information from the actual image or from the degradation function. Mapping the gray levels of an image onto a communication signal enables the use of blind channel equalization techniques for image restoration. In this work, we propose a blind image deconvolution scheme based on the convex combination of a blind equalizer with an equalizer in decision-directed mode. The combination itself is also blindly adapted, which enables automatic switching between the component filters. Thus, the proposed scheme is able to achieve the performance of a supervised adaptive filtering algorithm without prior knowledge of the original image. The performance of the combination is illustrated by simulations, which show the efficiency of this scheme compared with other solutions in the literature.
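The sketch below illustrates, in one dimension, the convex-combination idea the abstract describes: a blind (CMA) equalizer and a decision-directed LMS equalizer run in parallel, their outputs are mixed by a weight lambda = sigmoid(a), and a is itself adapted blindly from the combined output error, so the scheme switches automatically between the two filters. This is a toy BPSK example, not the thesis's image-domain implementation; the channel, filter length, and step sizes are illustrative assumptions.

```python
import numpy as np

def convex_combo_equalizer(x, n_taps=11, mu=1e-3, mu_a=0.1):
    """Convex combination of a blind (CMA) equalizer and a decision-directed
    LMS equalizer for a +/-1 signal (sketch of the idea in the abstract)."""
    w1 = np.zeros(n_taps); w1[n_taps // 2] = 1.0   # CMA filter (center-spike init)
    w2 = w1.copy()                                  # decision-directed filter
    a = 0.0
    y_out = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                   # regressor, most recent sample first
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))              # mixing weight in (0, 1)
        y = lam * y1 + (1 - lam) * y2
        y_out[n] = y
        # CMA update: drive |y1|^2 toward the constant modulus R = 1.
        w1 += mu * y1 * (1.0 - y1 ** 2) * u
        # Decision-directed update: error against the hard decision sign(y2).
        w2 += mu * (np.sign(y2) - y2) * u
        # Blind adaptation of the mixing parameter from the combined error.
        e = np.sign(y) - y
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)
        a = np.clip(a, -4.0, 4.0)
    return y_out

# Toy usage: equalize a BPSK sequence passed through a short FIR channel.
rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)
x = np.convolve(s, [1.0, 0.4, -0.2], mode="same") + 0.01 * rng.standard_normal(5000)
y = convex_combo_equalizer(x)
```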
78

Dose savings in digital breast tomosynthesis through image processing / Redução da dose de radiação em tomossíntese mamária através de processamento de imagens

Borges, Lucas Rodrigues 14 June 2017
In x-ray imaging, the radiation dose must be kept to the minimum necessary to achieve the required diagnostic objective, to ensure the patient's safety. However, low-dose acquisitions yield images of lower quality, which affects the radiologist's image interpretation. There is therefore a compromise between image quality and radiation dose. This work proposes an image restoration framework capable of restoring low-dose acquisitions to the quality of full-dose acquisitions. The contributions of the new method include the capability of restoring images affected by quantum and electronic noise, pixel offset, and variable detector gain. To validate the image processing chain, a simulation algorithm was also proposed that generates low-dose DBT projections starting from full-dose images. To investigate the feasibility of reducing the radiation dose in breast cancer screening programs, a simulated pre-clinical trial was conducted using the simulation and the image processing pipeline proposed in this work. Digital breast tomosynthesis (DBT) images from 72 patients were selected, and 5 human observers were invited for the experiment. The results suggest that a reduction of up to 30% in radiation dose could not be perceived by the human readers, in terms of noise or blurring, after the proposed image processing pipeline was applied. Thus, the image processing algorithm has the potential to decrease radiation levels in DBT, also decreasing the cancer-induction risks associated with the exam.
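To make the dose-reduction simulation idea concrete, the sketch below simulates a reduced-dose projection from a full-dose one under a simplified Poisson (quantum) plus Gaussian (electronic) noise model with a constant detector gain and offset: the offset-corrected signal is scaled by the dose fraction and only the missing noise variance is injected. This is a generic sketch of the approach described in the abstract, not the calibrated model used in the thesis; the gain, offset, and electronic-noise values are illustrative.

```python
import numpy as np

def simulate_low_dose(full_dose, dose_fraction, gain=1.0, offset=50.0,
                      sigma_e=2.0, rng=None):
    """Simulate a reduced-dose projection from a full-dose one.
    Assumed model: I = gain * Poisson(q) + offset + N(0, sigma_e^2).
    Scaling the offset-corrected signal by dose_fraction also scales its
    existing noise, so only the *missing* variance is injected to match
    the statistics of a real low-dose acquisition."""
    rng = rng or np.random.default_rng()
    signal = np.clip(full_dose - offset, 0.0, None)      # offset-corrected signal
    extra_var = (gain * signal * dose_fraction * (1.0 - dose_fraction)
                 + sigma_e ** 2 * (1.0 - dose_fraction ** 2))
    noise = rng.normal(0.0, 1.0, size=full_dose.shape) * np.sqrt(extra_var)
    return dose_fraction * signal + offset + noise

# Example: simulate a 30% dose reduction (dose_fraction = 0.7).
rng = np.random.default_rng(42)
full = 50.0 + rng.poisson(800.0, size=(128, 128)).astype(float)
low = simulate_low_dose(full, dose_fraction=0.7, rng=rng)
```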
79

Uma técnica multimalhas para eliminação de ruídos e retoque digital / An edge-preserving multigrid-like technique for image denoising and inpainting

Ferraz, Carolina Toledo 14 September 2006
Techniques based on the Well-Balanced Flow equation have been employed as efficient tools for edge-preserving noise removal. Although effective, they demand high computational effort, making them impractical in several applications. This work proposes a multigrid-like technique for speeding up the solution of the Well-Balanced Flow equation: the diffusion equation is solved on a coarse grid, and a coarse-to-fine error correction is applied to generate the desired solution. The transfer between coarser and finer grids is performed by the Mitchell filter, a well-known interpolation scheme designed to preserve edges. Furthermore, the solutions of the transport and Mean Curvature Flow equations are adapted to the multigrid-like technique for image inpainting and denoising. Numerical results are compared quantitatively and qualitatively with other approaches, showing that the method produces similar image quality in much less computation time.
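The sketch below illustrates the coarse-grid/correction structure the abstract describes, using plain heat diffusion as a stand-in for the Well-Balanced Flow evolution and simple averaging/upsampling in place of the Mitchell filter: most of the smoothing work is done on a coarse grid, the coarse-to-fine correction is added back, and a few cheap fine-grid iterations finish the job. All operators, boundary conditions, and iteration counts are illustrative assumptions, not the thesis's scheme.

```python
import numpy as np

def diffuse(u, n_steps=20, dt=0.2):
    """Explicit heat-diffusion iterations (a stand-in for the Well-Balanced
    Flow evolution, which adds edge-preserving terms)."""
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + dt * lap
    return u

def restrict(u):
    """Fine-to-coarse transfer by 2x2 averaging (the thesis uses the
    edge-preserving Mitchell filter instead)."""
    return 0.25 * (u[0::2, 0::2] + u[1::2, 0::2] + u[0::2, 1::2] + u[1::2, 1::2])

def prolong(u):
    """Coarse-to-fine transfer by nearest-neighbour upsampling (again, a
    placeholder for the Mitchell interpolation scheme)."""
    return np.repeat(np.repeat(u, 2, axis=0), 2, axis=1)

def two_grid_denoise(noisy, n_fine=5, n_coarse=40):
    """Two-grid scheme in the spirit of the abstract: do most of the
    diffusion work on the coarse grid, bring the correction back to the
    fine grid, and finish with a few cheap fine-grid iterations."""
    coarse = restrict(noisy)
    coarse_smoothed = diffuse(coarse, n_steps=n_coarse)
    correction = prolong(coarse_smoothed - coarse)   # coarse-to-fine error correction
    return diffuse(noisy + correction, n_steps=n_fine)

# Toy usage on a noisy 64x64 image (dimensions must be even for restrict()).
rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
out = two_grid_denoise(img + 0.2 * rng.standard_normal(img.shape))
```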