191

Traitement d'images de radiographie à faible dose : Débruitage et rehaussement de contraste conjoints et détection automatique de points de repère anatomiques pour l'estimation de la qualité des images / Low dose x-ray image processing : Joint denoising and contrast enhancement, and automatic detection of anatomical landmarks for image quality estimation

Irrera, Paolo 17 June 2015 (has links)
Nos travaux portent sur la réduction de la dose de rayonnement lors d'examens réalisés avec le Système de radiologie EOS. Deux approches complémentaires sont étudiées. Dans un premier temps, nous proposons une méthode de débruitage et de rehaussement de contraste conjoints pour optimiser le compromis entre la qualité des images et la dose de rayons X. Nous étendons le filtre à moyennes non locales pour restaurer les images EOS. Nous étudions ensuite comment combiner ce filtre à une méthode de rehaussement de contraste multi-échelles. La qualité des images cliniques est optimisée grâce à des fonctions limitant l'augmentation du bruit selon la quantité d’information locale redondante captée par le filtre. Dans un deuxième temps, nous estimons des indices d’exposition (EI) sur les images EOS afin de donner aux utilisateurs un retour immédiat sur la qualité de l'image acquise. Nous proposons ainsi une méthode reposant sur la détection de points de repère qui, grâce à l'exploitation de la redondance de mesures locales, est plus robuste à la présence de données aberrantes que les méthodes existantes. En conclusion, la méthode de débruitage et de rehaussement de contraste conjoints donne des meilleurs résultats que ceux obtenus par un algorithme exploité en routine clinique. La qualité des images EOS peut être quantifiée de manière robuste par des indices calculés automatiquement. Étant donnée la cohérence des mesures sur des images de pré-affichage, ces indices pourraient être utilisés en entrée d'un système de gestion automatique des expositions. / We aim at reducing the ALARA (As Low As Reasonably Achievable) dose limits for images acquired with EOS full-body system by means of image processing techniques. Two complementary approaches are studied. First, we define a post-processing method that optimizes the trade-off between acquired image quality and X-ray dose. The Non-Local means filter is extended to restore EOS images. We then study how to combine it with a multi-scale contrast enhancement technique. The image quality for the diagnosis is optimized by defining non-parametric noise containment maps that limit the increase of noise depending on the amount of local redundant information captured by the filter. Secondly, we estimate exposure index (EI) values on EOS images which give an immediate feedback on image quality to help radiographers to verify the correct exposure level of the X-ray examination. We propose a landmark detection based approach that is more robust to potential outliers than existing methods as it exploits the redundancy of local estimates. Finally, the proposed joint denoising and contrast enhancement technique significantly increases the image quality with respect to an algorithm used in clinical routine. Robust image quality indicators can be automatically associated with clinical EOS images. Given the consistency of the measures assessed on preview images, these indices could be used to drive an exposure management system in charge of defining the optimal radiation exposure.
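A minimal sketch of the non-local means restoration step named in this abstract is shown below; the use of scikit-image's generic NLM filter, the patch sizes, and the filtering strength are illustrative assumptions, not the EOS-specific extension developed in the thesis.

```python
# Generic non-local means denoising sketch (not the thesis's EOS extension).
# Patch sizes and filter strength are illustrative choices.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
clean = np.clip(rng.normal(0.5, 0.1, (256, 256)), 0, 1)   # stand-in radiograph
noisy = clean + rng.normal(0, 0.05, clean.shape)           # simulated acquisition noise

sigma = estimate_sigma(noisy)            # rough noise-level estimate
denoised = denoise_nl_means(
    noisy,
    patch_size=5,        # size of patches compared for redundancy
    patch_distance=6,    # search-window radius
    h=0.8 * sigma,       # filtering strength tied to the noise level
    fast_mode=True,
)
print(float(np.abs(denoised - clean).mean()))
```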
192

Deep learning methods for reverberant and noisy speech enhancement

Zhao, Yan 15 September 2020 (has links)
No description available.
193

Noise Characteristics And Edge-Enhancing Denoisers For The Magnitude Mri Imagery

Alwehebi, Aisha A 01 May 2010 (has links)
Most PDE-based restoration models and their numerical realizations show a common drawback: loss of fine structures. In particular, they often introduce an unnecessary numerical dissipation on regions where the image content changes rapidly, such as on edges and textures. This thesis studies the magnitude data/imagery of magnetic resonance imaging (MRI), which follows a Rician distribution. It shows statistically that the noise in the magnitude MRI data is approximately Gaussian of mean zero and of the same variance as in the frequency-domain measurements. Based on the analysis, we introduce a novel partial differential equation (PDE)-based denoising model which can restore fine structures satisfactorily and simultaneously sharpen edges as needed. For an efficient simulation we adopt an incomplete Crank-Nicolson (CN) time-stepping procedure along with the alternating direction implicit (ADI) method. The algorithm is analyzed for stability. It has been numerically verified that the new model can reduce the noise satisfactorily, outperforming the conventional PDE-based restoration models in 3-4 alternating direction iterations, with the residual (the difference between the original image and the restored image) being nearly edge-free. It has also been verified that the model can perform edge enhancement effectively during the denoising of the magnitude MRI imagery. Numerical examples are provided to support the claim.
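The statistical claim above — that magnitude (Rician) noise behaves approximately like zero-mean Gaussian noise with the per-channel variance when the signal is not too weak — can be checked numerically; the signal level and noise standard deviation below are arbitrary illustrative values.

```python
# Numerical check: magnitude MRI noise (Rician) is roughly Gaussian with the
# complex-channel variance when the underlying signal is strong enough.
import numpy as np

rng = np.random.default_rng(1)
signal, sigma, n = 10.0, 1.0, 200_000          # true magnitude, channel noise, samples

real = signal + rng.normal(0, sigma, n)        # noisy real channel
imag = rng.normal(0, sigma, n)                 # noisy imaginary channel
magnitude = np.hypot(real, imag)               # Rician-distributed measurement

noise = magnitude - signal
print("mean ~ 0     :", noise.mean())          # close to 0 (a small Rician bias remains)
print("std  ~ sigma :", noise.std())           # close to the per-channel sigma
```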
194

Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction

Buchholz, Tim-Oliver 12 August 2022 (has links)
In this thesis I will use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. Then, I will present Noise2Void, a deep learning based self-supervised image denoising approach which is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding, based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). In the next paragraphs I will briefly summarize the individual contributions. Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and used to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high quality images. With SEM-CARE I present how this approach can be directly applied to SEM data, allowing us to scan the samples faster, resulting in 40- to 50-fold imaging speedups for SEM imaging. In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, missing contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent acquisition of high quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult and often has to be done manually. To facilitate downstream analysis and manual data browsing of cryo tomograms I present cryoCARE, a Noise2Noise (Lehtinen et al. 2018) based denoising method which is able to restore high contrast, low noise tomograms from sparse-view low-dose tilt-series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin. Next, I will discuss the problem of self-supervised image denoising. With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, hence the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with old cryo TEM cameras, or simply due to lack of access to the imaging system used. In such cases we have to fall back to self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural network based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012). In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform where each prefix encodes the whole image at reduced resolution, which I call Fourier Domain Encoding (FDE). I use FIT with FDEs and present proof of concept for super-resolution and tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging.
Sparse-view imaging is used to keep the total exposure of the imaged sample to a minimum, by only acquiring a limited number of projection images. However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in the Fourier space and visible as streaking artefacts in real image space. I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients. Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source. (Contents: 1 Introduction; 2 Denoising in Electron Microscopy; 3 Noise2Void: Self-Supervised Denoising; 4 Fourier Image Transformer; 5 Conclusions and Outlook.)
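The blind-spot training idea behind Noise2Void can be sketched as follows; the tiny network, masking scheme, and hyperparameters are placeholders for illustration, not the published implementation (which, among other details, avoids sampling the centre pixel itself as the replacement).

```python
# Conceptual Noise2Void-style blind-spot training sketch: a few pixels are
# masked (replaced by random neighbours) and the loss is computed only at the
# masked positions, so single noisy images can supervise themselves.
import torch
import torch.nn as nn

def mask_pixels(noisy, n_masked=64, radius=2):
    """Return a masked copy of `noisy` (B,1,H,W) and the boolean blind-spot mask."""
    masked = noisy.clone()
    mask = torch.zeros_like(noisy, dtype=torch.bool)
    b, _, h, w = noisy.shape
    for i in range(b):
        ys = torch.randint(radius, h - radius, (n_masked,))
        xs = torch.randint(radius, w - radius, (n_masked,))
        dy = torch.randint(-radius, radius + 1, (n_masked,))
        dx = torch.randint(-radius, radius + 1, (n_masked,))
        masked[i, 0, ys, xs] = noisy[i, 0, ys + dy, xs + dx]  # neighbour replacement
        mask[i, 0, ys, xs] = True
    return masked, mask

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))           # placeholder CNN
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.rand(4, 1, 64, 64)                # stand-in noisy patches
for _ in range(10):                              # a few illustrative steps
    inp, mask = mask_pixels(noisy)
    pred = net(inp)
    loss = ((pred - noisy)[mask] ** 2).mean()    # loss only at blind-spot pixels
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```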
195

[en] DENOISING AND SIMPLIFICATION IN THE CONSTRUCTION OF 3D DIGITAL MODELS OF COMPLEX OBJECTS / [pt] REMOÇÃO DE RUÍDO E SIMPLIFICAÇÃO NA CONSTRUÇÃO DE MODELOS DIGITAIS 3D DE OBJETOS COMPLEXOS

JAN JOSE HURTADO JAUREGUI 01 February 2022 (has links)
[pt] À medida que o processo de digitalização avança em diversos setores, a criação de modelos digitais 3D torna-se cada vez mais necessária. Normalmente, esses modelos são construídos por designers 3D, exigindo um esforço manual considerável quando o objeto modelado é complexo. Além disso, como o designer não tem uma referência precisa na maioria dos casos, o modelo resultante está sujeito a erros de medição. No entanto, é possível minimizar o esforço de construção e o erro de medição usando técnicas de aquisição 3D e modelos CAD previamente construídos. A saída típica de uma técnica de aquisição 3D é uma nuvem de pontos 3D bruta, que precisa de processamento para reduzir o ruído inerente e a falta de informações topológicas. Os modelos CAD são normalmente usados para documentar um processo de projeto de engenharia, apresentando alta complexidade e muitos detalhes irrelevantes para muitos processos de visualização. Portanto, dependendo da aplicação, devemos simplificar bastante o modelo CAD para atender aos seus requisitos. Nesta tese, nos concentramos na construção de modelos digitais 3D a partir dessas fontes. Mais precisamente, apresentamos um conjunto de algoritmos de processamento de geometria para automatizar diferentes etapas de um fluxo de trabalho típico usado para esta construção. Primeiro, apresentamos um algoritmo de redução de ruído de nuvem de pontos que visa preservar as feições nítidas da superfície subjacente. Este algoritmo inclui soluções para a estimativa normal e problemas de detecção de feições nítidas. Em segundo lugar, apresentamos uma extensão do algoritmo de redução de ruído de nuvem de pontos para processar malhas triangulares, onde tiramos proveito da topologia explícita definida pela malha. Por fim, apresentamos um algoritmo para a simplificação extrema de modelos CAD complexos, que tendem a se aproximar da superfície externa do objeto modelado. Os algoritmos propostos são comparados com métodos de última geração, apresentando resultados competitivos e superando-os na maioria dos casos de teste. / [en] As the digitalization process advances in several industries, the creation of 3D digital models is becoming more and more required. Commonly, these models are constructed by 3D designers, requiring considerable manual effort when the modeled object is complex. In addition, since the designer does not have an accurate reference in most cases, the resulting model is prone to measurement errors. However, it is possible to minimize the construction effort and the measurement error by using 3D acquisition techniques and previously constructed CAD models. The typical output of a 3D acquisition technique is a raw 3D point cloud, which needs processing to reduce the inherent noise and lack of topological information. CAD models are typically used to document an engineering design process, presenting high complexity and too many details irrelevant to many visualization processes. So, depending on the application, we must severely simplify the CAD model to meet its requirements. In this thesis, we focus on the construction of 3D digital models from these sources. More precisely, we present a set of geometry processing algorithms to automatize different stages of a typical workflow used for this construction. First, we present a point cloud denoising algorithm that seeks to preserve the sharp features of the underlying surface. This algorithm includes solutions for the normal estimation and sharp feature detection problems. 
Second, we present an extension of the point cloud denoising algorithm to process triangle meshes, where we take advantage of the explicit topology defined by the mesh. Finally, we present an algorithm for the extreme simplification of complex CAD models, which tends to approximate the outer surface of the modeled object. The proposed algorithms are compared with state-of-the-art methods, showing competitive results and outperforming them in most test cases.
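The normal-estimation problem mentioned in this abstract is commonly approached by PCA over k-nearest neighbourhoods; the sketch below shows that standard baseline under an illustrative choice of k, not the feature-preserving variant developed in the thesis.

```python
# Standard PCA normal estimation over k-nearest neighbours (baseline only,
# not the thesis's feature-preserving method). k is an illustrative choice.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """points: (N, 3) array -> (N, 3) unit normals (orientation/sign arbitrary)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)             # indices of the k nearest neighbours
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # direction of least variance of the local covariance = surface normal
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals

pts = np.random.default_rng(2).random((500, 3))  # stand-in point cloud
pts[:, 2] *= 0.01                                # squash into a noisy plane
print(estimate_normals(pts)[:3])                 # normals should be near +/- z
```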
196

Improving Satellite Data Quality and Availability: A Deep Learning Approach

Mukherjee, Rohit January 2020 (has links)
No description available.
197

Deep Learning Approaches to Low-level Vision Problems

Liu, Huan January 2022 (has links)
Recent years have witnessed tremendous success in using deep learning approaches to handle low-level vision problems. Most deep learning based methods address a low-level vision problem by training a neural network to approximate the mapping from the inputs to the desired ground truths. However, directly learning this mapping is usually difficult and cannot achieve ideal performance. Besides, under the setting of unsupervised learning, the general deep learning approach cannot be used. In this thesis, we investigate and address several problems in low-level vision using the proposed approaches. To learn a better mapping using the existing data, an indirect domain shift mechanism is proposed to add explicit constraints inside the neural network for single image dehazing. This allows the neural network to be optimized across several identified neighbours, resulting in better performance. Despite the success of the proposed approaches in learning an improved mapping from the inputs to the targets, three problems of unsupervised learning are also investigated. For unsupervised monocular depth estimation, a teacher-student network is introduced to strategically integrate the benefits of both supervised and unsupervised learning. The teacher network is formed by learning under the binocular depth estimation setting, and the student network is constructed as the primary network for monocular depth estimation. Observing that the performance of the teacher network is far better than that of the student network, a knowledge distillation approach is proposed to help improve the mapping learned by the student. For single image dehazing, the current network cannot handle different types of haze patterns as it is trained on a particular dataset. The problem is formulated as a multi-domain dehazing problem. To address this issue, a test-time training approach is proposed that leverages a helper network to assist the dehazing network in adapting to a particular domain using self-supervision. In a lossy compression system, the target distribution can be different from that of the source, and ground truths are not available for reference. Thus, the objective is to transform the source to the target under a rate constraint, which generalizes optimal transport. To address this problem, theoretical analyses of the trade-off between compression rate and minimal achievable distortion are conducted for the cases with and without common randomness. A deep learning approach is also developed using our theoretical results for addressing super-resolution and denoising tasks. Extensive experiments and analyses have been conducted to prove the effectiveness of the proposed deep learning based methods in handling problems in low-level vision. / Thesis / Doctor of Philosophy (PhD)
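The teacher-student transfer described for monocular depth estimation can be sketched as a generic distillation step; the toy networks, the L1 distillation term, and the commented-out extra loss are illustrative assumptions, not the thesis's architectures or loss weighting.

```python
# Generic teacher-student distillation sketch: the student is trained to match
# the frozen teacher's predictions (pseudo-labels). Networks are placeholders.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1)).eval()   # frozen teacher
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

images = torch.rand(2, 3, 64, 64)                 # stand-in input batch
with torch.no_grad():
    teacher_depth = teacher(images)               # pseudo-labels from the teacher

student_depth = student(images)
loss = nn.functional.l1_loss(student_depth, teacher_depth)      # distillation term
# (hypothetical) a supervised or self-supervised term would normally be added,
# e.g. a photometric reconstruction loss, before backpropagation.
loss.backward(); opt.step()
print(float(loss))
```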
198

PROGNOSIS: A WEARABLE SYSTEM FOR HEALTH MONITORING OF PEOPLE AT RISK

Pantelopoulos, Alexandros A. 28 October 2010 (has links)
No description available.
199

Deep Convolutional Denoising for MicroCT : A Self-Supervised Approach / Brusreducering för mikroCT med djupa faltningsnätverk : En självövervakad metod

Karlström, Daniel January 2024 (has links)
Microtomography, or microCT, is an x-ray imaging modality that provides volumetric data of an object's internal structure with microscale resolution, making it suitable for scanning small, highly detailed objects. The microCT image quality is limited by quantum noise, which can be reduced by increasing the scan time. This complicates the scanning both of dynamic processes and, due to the increased radiation dose, of dose-sensitive samples. A recently proposed method for improved dose- or time-limited scanning is Noise2Inverse, a framework for denoising data in tomography and linear inverse problems by training a self-supervised convolutional neural network. This work implements Noise2Inverse for denoising lab-based cone-beam microCT data and compares it to both supervised neural networks and more traditional filtering methods. While some trade-off in spatial resolution is observed, the method outperforms traditional filtering methods and matches supervised denoising in quantitative and qualitative evaluations of image quality. Additionally, a segmentation task is performed to show that denoising the data can aid in practical tasks. / Mikrotomografi, eller mikroCT, är en röntgenmetod som avbildar små objekt i tre dimensioner med upplösning på mikrometernivå, vilket möjliggör avbildning av små och högdetaljerade objekt. Bildkvaliteten vid mikroCT begränsas av kvantbrus, vilket kan minskas genom att öka skanningstiden. Detta försvårar avbildning av dynamiska processer och, på grund av den ökade stråldosen, doskänsliga objekt. En metod som tros kunna förbättra dos- eller tidsbegränsad avbildning är Noise2Inverse, ett ramverk för brusreducering av tomografisk data genom träning av ett självövervakat faltningsnätverk. Noise2Inverse implementeras i detta arbete för brusreducering av data från ett labb-baserat mikroCT-system med cone beam-geometri och jämförs med både övervakade neuronnät och mer traditionella filtermetoder. En viss reducering i spatiell upplösning observeras, men metoden överträffar traditionella filtermetoder och matchar övervakade neuronnät i kvantitativa och kvalitativa utvärderingar av bildkvalitet. Dessutom visas att metoden går att använda för att förbättra resultat från bildsegmentering.
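The core data-splitting idea behind Noise2Inverse can be sketched as below; a 2D parallel-beam phantom and scikit-image's radon/iradon stand in for the lab-based cone-beam pipeline, and the denoising network itself is omitted.

```python
# Sketch of the Noise2Inverse split: projections are divided into two disjoint
# angle subsets, each reconstructed separately, and a network is then trained
# to map one noisy reconstruction to the other (network omitted here).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)            # small test object
angles = np.linspace(0., 180., 360, endpoint=False)
sino = radon(phantom, theta=angles)
sino += np.random.default_rng(3).normal(0, 0.5, sino.shape)   # simulated noise

even, odd = angles[0::2], angles[1::2]                    # disjoint angle subsets
recon_even = iradon(sino[:, 0::2], theta=even)
recon_odd = iradon(sino[:, 1::2], theta=odd)
# Training pairs for the denoiser: input recon_even, target recon_odd (and the
# reverse); at inference the trained network is applied to the reconstruction.
print(recon_even.shape, recon_odd.shape)
```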
200

[en] CONTINUOUS SPEECH RECOGNITION BY COMBINING MFCC AND PNCC ATTRIBUTES WITH SS, WD, MAP AND FRN METHODS OF ROBUSTNESS / [pt] RECONHECIMENTO DE VOZ CONTINUA COMBINANDO OS ATRIBUTOS MFCC E PNCC COM METODOS DE ROBUSTEZ SS, WD, MAP E FRN

CHRISTIAN DAYAN ARCOS GORDILLO 09 June 2014 (has links)
[pt] O crescente interesse por imitar o modelo que rege o processo cotidiano de comunicação humana através de máquinas tem se convertido em uma das áreas do conhecimento mais pesquisadas e de grande importância nas últimas décadas. Esta área da tecnologia, conhecida como reconhecimento de voz, tem como principal desafio desenvolver sistemas robustos que diminuam o ruído aditivo dos ambientes de onde o sinal de voz é adquirido, antes que esse sinal alimente os reconhecedores de voz. Por esta razão, este trabalho apresenta quatro formas diferentes de melhorar o desempenho do reconhecimento de voz contínua na presença de ruído aditivo, a saber: Wavelet Denoising e Subtração Espectral, para realce de fala, e Mapeamento de Histogramas e Filtro com Redes Neurais, para compensação de atributos. Esses métodos são aplicados isoladamente e simultaneamente, a fim de minimizar os desajustes causados pela inserção de ruído no sinal de voz. Além dos métodos de robustez propostos, e devido ao fato de que os reconhecedores de voz dependem basicamente dos atributos de voz utilizados, examinam-se dois algoritmos de extração de atributos, MFCC e PNCC, através dos quais se representa o sinal de voz como uma sequência de vetores que contêm informação espectral de curtos períodos de tempo. Os métodos considerados são avaliados através de experimentos usando os softwares HTK e Matlab, e as bases de dados TIMIT (de vozes) e NOISEX-92 (de ruído). Finalmente, para obter os resultados experimentais, realizam-se dois tipos de testes. / [en] The increasing interest in imitating the model that governs the everyday process of human communication through machines has become one of the most researched areas of knowledge and of great importance in recent decades. This technological area, known as speech recognition, has as its main challenge the development of robust systems that reduce the additive noise of the environments where the voice signal is acquired, before that signal feeds the speech recognizers. For this reason, this work presents four different ways to improve the performance of continuous speech recognition in the presence of additive noise, namely: Wavelet Denoising and Spectral Subtraction, for speech enhancement, and Histogram Mapping and Filtering with Neural Networks, for feature compensation. These methods are applied separately and simultaneously, two by two, in order to minimize the mismatches caused by the insertion of noise into the voice signal. In addition to the proposed robustness methods, and given that speech recognizers depend essentially on the voice features used, two feature extraction algorithms, MFCC and PNCC, are examined, through which the voice signal is represented as a sequence of vectors containing spectral information from short time periods. The considered methods are evaluated through experiments using the HTK and Matlab software and the TIMIT (speech) and NOISEX-92 (noise) databases. Finally, to obtain the experimental results, two types of tests are carried out.
In the first case, a reference system based only on MFCC and PNCC attributes is assessed, showing how the signal is strongly degraded when signal-to-noise ratios are lower. In the second case, the reference system is combined with the robustness methods proposed here, comparatively analyzing the results of the methods when they act alone and simultaneously. It is noted that the simultaneous combination of methods is not always more attractive. However, in general, the best result is achieved by combining MAP with PNCC attributes.
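A minimal magnitude spectral subtraction sketch, one of the speech-enhancement methods named in this abstract, is given below; the synthetic signal, STFT parameters, noise estimate taken from the first frames, and spectral floor are illustrative assumptions rather than the thesis's configuration.

```python
# Minimal magnitude spectral subtraction for speech enhancement.
# Noise estimate, STFT parameters and spectral floor are illustrative.
import numpy as np
from scipy.signal import stft, istft

fs = 16_000
rng = np.random.default_rng(4)
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 220 * t)              # stand-in "speech"
noisy = speech + 0.05 * rng.standard_normal(fs)          # additive noise

f, frames, spec = stft(noisy, fs=fs, nperseg=512)
noise_mag = np.abs(spec[:, :10]).mean(axis=1, keepdims=True)   # noise from first frames
mag = np.abs(spec)
clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)            # subtract with a floor
enhanced_spec = clean_mag * np.exp(1j * np.angle(spec))        # keep the noisy phase
_, enhanced = istft(enhanced_spec, fs=fs, nperseg=512)
print(enhanced.shape)
```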
