161

Contribution à la détection et à l'analyse des signaux EEG épileptiques : débruitage et séparation de sources / Contribution to the detection and analysis of epileptic EEG signals : denoising and source separation

Romo Vazquez, Rebeca del Carmen 24 February 2010 (has links)
L'objectif principal de cette thèse est le pré-traitement des signaux d'électroencéphalographie (EEG). En particulier, elle vise à développer une méthodologie pour obtenir un EEG dit "propre" à travers l'identification et l'élimination des artéfacts extra-cérébraux (mouvements oculaires, clignements, activité cardiaque et musculaire) et du bruit. Après identification, les artéfacts et le bruit doivent être éliminés avec une perte minimale d'information, car dans le cas de l'EEG, il est de grande importance de ne pas perdre d'information potentiellement utile à l'analyse (visuelle ou automatique) et donc au diagnostic médical. Plusieurs étapes sont nécessaires pour atteindre cet objectif : séparation et identification des sources d'artéfacts, élimination du bruit de mesure et reconstruction de l'EEG "propre". À travers une approche de type séparation aveugle de sources (SAS), la première partie vise donc à séparer les signaux EEG en sources informatives cérébrales et sources d'artéfacts extra-cérébraux à éliminer. Une deuxième partie vise à classifier et éliminer les sources d'artéfacts ; elle consiste en une étape de classification supervisée. Le bruit de mesure, quant à lui, est éliminé par une approche de type débruitage par ondelettes. La mise en place d'une méthodologie intégrant d'une manière optimale ces trois techniques (séparation de sources, classification supervisée et débruitage par ondelettes) constitue l'apport principal de cette thèse. La méthodologie développée, ainsi que les résultats obtenus sur une base importante de signaux d'EEG réels (critiques et inter-critiques), sont soumis à une expertise médicale approfondie, qui valide l'approche proposée. / The goal of this research is the preprocessing of electroencephalographic (EEG) signals. More precisely, we aim to develop a methodology to obtain a "clean" EEG through the identification and elimination of extra-cerebral artefacts (ocular movements, eye blinks, cardiac and muscular activity) and noise. After identification, the artefacts and noise must be eliminated with a minimal loss of cerebral activity information, as this information is potentially useful to the analysis (visual or automatic) and therefore to the medical diagnosis. To accomplish this objective, several preprocessing steps are needed: separation and identification of the artefact sources, noise elimination and "clean" EEG reconstruction. Through a blind source separation (BSS) approach, the first step aims to separate the EEG signals into informative and artefact sources. Once the sources are separated, the second step is to classify and eliminate the identified artefact sources; this step relies on supervised classification. The EEG is reconstructed from the informative sources only. The noise is finally eliminated using a wavelet denoising approach. A methodology ensuring an optimal interaction of these three techniques (BSS, classification and wavelet denoising) is the main contribution of this thesis. The methodology developed here, as well as the results obtained on a large database of real EEG signals (ictal and inter-ictal), are subjected to a detailed review by medical experts, which validates the proposed approach.
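
To make the pipeline concrete, here is a minimal sketch of a BSS-plus-wavelet-denoising chain of the kind the abstract describes, using FastICA as the source-separation step. The artefact-source indices are assumed to be already known (in the thesis they come from a supervised classifier), and all parameter values are illustrative rather than those of the thesis.

```python
# Sketch only: FastICA stands in for the thesis's BSS stage; indices of
# artefact sources are assumed given by an external classifier.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def clean_eeg(eeg, artefact_sources, wavelet="db4", level=4):
    """eeg: (n_samples, n_channels); artefact_sources: indices of sources to drop."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)            # blind source separation
    sources[:, artefact_sources] = 0.0          # drop identified artefact sources
    recon = ica.inverse_transform(sources)      # rebuild EEG from informative sources
    # wavelet denoising of each reconstructed channel (soft thresholding)
    out = np.empty_like(recon)
    for ch in range(recon.shape[1]):
        coeffs = pywt.wavedec(recon[:, ch], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(recon.shape[0]))  # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        out[:, ch] = pywt.waverec(coeffs, wavelet)[: recon.shape[0]]
    return out
```
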
162

Déconvolution multicanale et détection de sources en utilisant des représentations parcimonieuses : application au projet Fermi / Multichannel deconvolution and source detection using sparse representations : application to Fermi project

Schmitt, Jeremy 07 December 2011 (has links)
Ce mémoire de thèse présente de nouvelles méthodologies pour l'analyse de données Poissoniennes sur la sphère, dans le cadre de la mission Fermi. Les objectifs principaux de la mission Fermi, l'étude du fond diffus galactique et l'établissement du catalogue de sources, sont compliqués par la faiblesse du flux de photons et les effets de l'instrument de mesure. Ce mémoire introduit une nouvelle représentation multi-échelles des données Poissoniennes sur la sphère, la Transformée Stabilisatrice de Variance Multi-Échelle sur la Sphère (MS-VSTS), consistant à combiner une transformée multi-échelles sur la sphère (ondelettes, curvelets) avec une transformée stabilisatrice de variance (VST). Cette méthode est appliquée à la suppression du bruit de Poisson mono et multicanale, à l'interpolation de données manquantes, à l'extraction d'un modèle de fond et à la déconvolution multicanale. Enfin, ce mémoire aborde le problème de la séparation de composantes en utilisant des représentations parcimonieuses (template fitting). / This thesis presents new methods for the analysis of spherical Poisson data in the context of the Fermi mission. The main scientific objectives of the Fermi mission, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting).
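
The variance-stabilizing step at the heart of MS-VSTS can be illustrated in one dimension with the classical Anscombe transform: Poisson counts are mapped to approximately unit-variance Gaussian data, denoised with an ordinary wavelet threshold, and mapped back. This is only a sketch of the VST principle under illustrative parameter choices; the spherical multi-scale machinery of the thesis is not reproduced here.

```python
# Sketch of the VST idea: Anscombe transform + wavelet soft thresholding.
import numpy as np
import pywt

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0          # simple algebraic inverse (biased for low counts)

def poisson_denoise(counts, wavelet="db2", level=3):
    y = anscombe(np.asarray(counts, dtype=float))   # noise becomes ~unit-variance Gaussian
    coeffs = pywt.wavedec(y, wavelet, level=level)
    thr = np.sqrt(2 * np.log(len(y)))               # universal threshold, sigma ~ 1
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return inverse_anscombe(pywt.waverec(coeffs, wavelet)[: len(y)])
```
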
163

Méthodologie d'analyse de levés électromagnétiques aéroportés en domaine temporel pour la caractérisation géologique et hydrogéologique / Methodology of analysis of airborne time domain electromagnetic surveys for geological and hydrogeological characterization

Reninger, Pierre-Alexandre 24 October 2012 (has links)
Cette thèse doctorale aborde divers aspects méthodologiques de l'analyse de levés électromagnétiques aéroportés en domaine temporel (TDEM) pour une interprétation détaillée à finalités géologique et hydrogéologique. Ce travail s'est appuyé sur un levé réalisé dans la région de Courtenay (Nord-Est de la région Centre) caractérisée par un plateau de craie karstifié (karst des Trois Fontaines) recouvert par des argiles d'altération et des alluvions. Tout d'abord, une méthode de filtrage des données TDEM utilisant la Décomposition en Valeurs Singulières (SVD) a été développée. L'adaptation rigoureuse de cette technique aux mesures TDEM a permis de séparer avec succès les bruits, qui ont pu être cartographiés, et le « signal géologique », diminuant grandement le temps nécessaire à leur traitement. De plus, la méthode s'est avérée efficace pour obtenir, rapidement, des informations géologiques préliminaires sur la zone. Ensuite, une analyse croisée entre le modèle de résistivité obtenu en inversant les données filtrées et les forages disponibles a été effectuée. Celle-ci a mené à une amélioration de la connaissance géologique et hydrogéologique de la zone. Une figure d'ondulation, séparant deux dépôts de craie, et le réseau de failles en subsurface ont pu être imagés, apportant un cadre géologique au karst des Trois Fontaines. Enfin, une nouvelle méthode combinant l'information aux forages et les pentes issues du modèle de résistivité EM a permis d'obtenir un modèle d'une précision inégalée du toit de la craie. L'ensemble de ces travaux fournit un cadre solide pour de futures études géo-environnementales utilisant des données TDEM aéroportées, et ce, même en zone anthropisée. / This PhD thesis addresses various methodological aspects of the analysis of airborne Time Domain ElectroMagnetic (TDEM) surveys for detailed interpretation for geological and hydrogeological purposes. This work was based on a survey conducted in the region of Courtenay (north-east of the Région Centre, France), characterized by a plateau of karstified chalk (karst des Trois Fontaines) covered by weathering clays and alluvium. First, a TDEM data filtering method using Singular Value Decomposition (SVD) was developed. The rigorous adaptation of this technique to TDEM data successfully separated the noise, which could then be mapped, from the "geological signal", greatly reducing the time required for processing. Furthermore, the method proved effective in quickly obtaining preliminary geological information on the area. Then, a cross analysis between the resistivity model obtained by inverting the filtered data and the available boreholes was conducted. This led to an improvement of the geological and hydrogeological knowledge of the area. An undulating feature, separating two chalk deposits, and a fault network were imaged in the subsurface, providing a geological framework for the Trois Fontaines karst. Finally, a new 3D modelling method combining the information at boreholes and the slopes derived from the EM resistivity model yielded a model of the top of the chalk of unprecedented accuracy. All of this work provides a solid framework for future geo-environmental studies using airborne TDEM data, even in anthropized areas.
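
As an illustration of the SVD filtering idea, the sketch below stacks TDEM decay curves as rows of a matrix, keeps the leading singular components as "geological signal", and returns the remainder as noise that can be mapped. The rank cut-off and data layout are illustrative assumptions; the thesis adapts the component selection to TDEM measurements far more rigorously.

```python
# Sketch only: truncated SVD as a signal/noise separator for stacked soundings.
import numpy as np

def svd_filter(data, rank=3):
    """data: (n_soundings, n_time_gates) array, e.g. log-amplitude decay curves."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s_kept = np.zeros_like(s)
    s_kept[:rank] = s[:rank]                 # keep leading components only
    signal = (U * s_kept) @ Vt               # "geological signal"
    noise = data - signal                    # separated noise, can be mapped spatially
    return signal, noise
```
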
164

Proposta de redução da dose de radiação na mamografia digital utilizando novos algoritmos de filtragem de ruído Poisson / Proposal of radiation dose reduction in digital mammography using new algorithms for Poisson noise filtering

Helder Cesar Rodrigues de Oliveira 19 February 2016 (has links)
O objetivo deste trabalho é apresentar um novo método para a remoção do ruído Poisson em imagens de mamografia digital adquiridas com baixa dosagem de radiação. Sabe-se que a mamografia por raios X é o exame mais eficiente para a detecção precoce do câncer de mama, aumentando consideravelmente as chances de cura da doença. No entanto, a radiação absorvida pela paciente durante o exame ainda é um problema a ser tratado. Estudos indicam que a exposição à radiação pode induzir a formação do câncer em algumas mulheres radiografadas. Apesar desse número ser significativamente baixo em relação ao número de mulheres que são salvas pelo exame, existe a necessidade do desenvolvimento de meios que viabilizem a diminuição da dose de radiação empregada. No entanto, uma redução na dose de radiação piora a qualidade da imagem pela diminuição da relação sinal-ruído, prejudicando o diagnóstico médico e a detecção precoce da doença. Nesse sentido, a proposta deste trabalho é apresentar um método para a filtragem do ruído Poisson que é adicionado às imagens mamográficas quando adquiridas com baixa dosagem de radiação, fazendo com que elas apresentem qualidade equivalente àquelas adquiridas com a dose padrão de radiação. O algoritmo proposto foi desenvolvido baseado em adaptações de algoritmos bem estabelecidos na literatura, como a filtragem no domínio Wavelet, aqui usando o Shrink-thresholding (WTST), e o Block-matching and 3D Filtering (BM3D). Os resultados obtidos com imagens mamográficas adquiridas com phantom e também imagens clínicas mostraram que o método proposto é capaz de filtrar o ruído adicional incorporado nas imagens sem perda aparente de informação. / The aim of this work is to present a novel method for removing Poisson noise from digital mammography images acquired with a reduced radiation dose. It is known that X-ray mammography is the most effective exam for the early detection of breast cancer, greatly increasing the chances of curing the disease. However, the radiation absorbed by the patient during the exam is still a problem to be addressed. Some studies have shown that mammography can induce breast cancer in a small number of women. Although this number is significantly low compared to the number of women who are saved by the exam, it is important to develop methods that enable a reduction of the radiation dose used in the exam. However, dose reduction leads to a decrease in image quality in terms of the signal-to-noise ratio, impairing medical diagnosis and the early detection of the disease. In this sense, the purpose of this study is to propose a new method to reduce Poisson noise in mammographic images acquired with a low radiation dose, in order to achieve the same quality as those acquired with the standard dose. The method is based on well-established algorithms in the literature, such as filtering in the wavelet domain, here using shrink-thresholding (WTST), and Block-matching and 3D Filtering (BM3D). Results using phantom and clinical images showed that the proposed algorithm is capable of filtering the additional noise in the images without apparent loss of information.
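
A rough sketch of the experimental setting the abstract implies is given below: a standard-dose image (in photon counts) is degraded to simulate a low-dose acquisition, then restored by a wavelet shrink-thresholding step in the spirit of WTST. The dose factor, wavelet settings and noise model are illustrative assumptions, not the values or pipeline of the thesis, and the BM3D stage is omitted.

```python
# Sketch only: low-dose simulation by Poisson resampling + 2-D wavelet shrinkage.
import numpy as np
import pywt

rng = np.random.default_rng(0)

def simulate_low_dose(counts, dose_factor=0.5):
    """Scale expected photon counts by the dose factor, re-sample Poisson noise,
    and rescale back to the standard-dose intensity range."""
    return rng.poisson(np.asarray(counts, dtype=float) * dose_factor) / dose_factor

def wtst_denoise(img, wavelet="db8", level=4):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise from finest diagonal band
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "soft") for d in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(den, wavelet)[: img.shape[0], : img.shape[1]]
```
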
165

Advanced methods for diffusion MRI data analysis and their application to the healthy ageing brain

Neto Henriques, Rafael January 2018 (has links)
Diffusion of water molecules in biological tissues depends on several microstructural properties. Therefore, diffusion Magnetic Resonance Imaging (dMRI) is a useful tool to infer and study microstructural brain changes in the context of human development, ageing and neuropathology. In this thesis, the state of the art of advanced dMRI techniques is explored, and strategies to overcome or reduce their pitfalls are developed and validated. Firstly, it is shown that PCA denoising and Gibbs artefact suppression algorithms provide an optimal compromise between increased precision of diffusion measures and the loss of the tissue's non-Gaussian diffusion information. Secondly, the spatial information provided by the diffusion kurtosis imaging (DKI) technique is explored and used to resolve crossing fibres and to generalize diffusion measures to cases not limited to well-aligned white matter fibres. Thirdly, as an alternative to diffusion microstructural modelling techniques such as neurite orientation dispersion and density imaging (NODDI), it is shown that spherical deconvolution techniques can be used to characterize fibre crossing and dispersion simultaneously. Fourthly, free water volume fraction estimates provided by free water diffusion tensor imaging (fwDTI) are shown to be useful for detecting and removing voxels corrupted by cerebrospinal fluid (CSF) partial volume effects. Finally, dMRI techniques are applied to the diffusion data from the large collaborative Cambridge Centre for Ageing and Neuroscience (CamCAN) study. From these data, the inferences that diffusion anisotropy measures provide on maturation and degeneration processes are shown to be biased by age-related changes in fibre organization. Inconsistencies in previous NODDI ageing studies are also revealed to be associated with the different age ranges covered. The CamCAN data are also processed using a novel non-Gaussian diffusion characterization technique which is invariant to different fibre configurations. Results show that this technique can provide indices specific to the axonal water fraction which can be linked to age-related fibre density changes.
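
The PCA denoising mentioned in the first contribution can be sketched as follows: dMRI signals from a local neighbourhood are stacked into a matrix, low-variance principal components (dominated by noise) are discarded, and the signals are rebuilt. Choosing the cut-off is the hard part in practice (commonly done via the Marchenko-Pastur law); a fixed cut-off is used here purely for illustration.

```python
# Sketch only: patch-wise PCA denoising with a fixed component cut-off.
import numpy as np

def pca_denoise(X, n_keep=5):
    """X: (n_voxels_in_patch, n_dwi_volumes) signal matrix for one neighbourhood."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s[n_keep:] = 0.0                          # drop noise-dominated components
    return (U * s) @ Vt + mean
```
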
166

Advanced Computational Methods for Power System Data Analysis in an Electricity Market

Ke Meng Unknown Date (has links)
The power industry has undergone significant restructuring throughout the world since the 1990s. In particular, its traditional, vertically monopolistic structures have been reformed into competitive markets in pursuit of increased efficiency in electricity production and utilization. However, along with market deregulation, power systems presently face severe challenges. One is power system stability, a problem that has attracted widespread concern because of severe blackouts experienced in the USA, the UK, Italy, and other countries. Another is that electricity market operation warrants more effective planning, management, and direction techniques due to the ever-expanding large-scale interconnection of power grids. Moreover, many exterior constraints, such as environmental protection influences and associated government regulations, now need to be taken into consideration. All these have made existing challenges even more complex. One consequence is that more advanced power system data analysis methods are required in the deregulated, market-oriented environment. At the same time, the computational power of modern computers and the application of databases have facilitated the effective employment of new data analysis techniques. In this thesis, the reported research is directed at developing computational-intelligence-based techniques to solve several power system problems that emerge in deregulated electricity markets. Four major contributions are included in the thesis: a newly proposed quantum-inspired particle swarm optimization and a self-adaptive learning scheme for radial basis function neural networks; online wavelet denoising techniques; electricity regional reference price forecasting methods for the electricity market; and power system security assessment approaches for deregulated markets, including fault analysis, voltage profile prediction under contingencies, and a machine learning based load shedding scheme for voltage stability enhancement. Evolutionary algorithms (EAs) inspired by biological evolution mechanisms have had great success in power system stability analysis and operation planning. Here, a new quantum-inspired particle swarm optimization (QPSO) is proposed. Its inspiration stems from quantum computation theory, whose mechanism is totally different from that of the original EAs. Results on benchmark data sets and economic load dispatch problems show that the QPSO improves on other versions of evolutionary algorithms in terms of both speed and accuracy. Compared to the original PSO, it greatly enhances the searching ability and efficiently manages system constraints. Then, fuzzy C-means (FCM) and QPSO are applied to train radial basis function (RBF) neural networks with the capacity to auto-configure the network structures and obtain the model parameters. Test results on benchmark data sets suggest that the proposed training algorithms ensure good performance in data clustering and improve the training and generalization capabilities of RBF neural networks. Wavelet analysis has been widely used in signal estimation, classification, and compression. Denoising with traditional wavelet transforms always exhibits visual artefacts because the transforms are translation-variant. Furthermore, in most cases, wavelet denoising of real-time signals is performed via offline processing, which limits its efficacy in real-time applications. In the present context, an online wavelet denoising method using a moving window technique is proposed.
Problems that may occur in real-time wavelet denoising, such as border distortion and pseudo-Gibbs phenomena, are effectively solved by using window extension and cycle spinning methods. This provides an effective data pre-processing technique for the online application of other data analysis approaches; a sketch of the moving-window principle follows this abstract. In a competitive electricity market, price forecasting is one of the essential functions required of a generation company and the system operator. It provides critical information for building effective risk management plans by market participants, especially those companies that generate and retail electrical power. Here, an RBF neural network is adopted as a predictor of the electricity market regional reference price in the Australian national electricity market (NEM). Furthermore, the wavelet denoising technique is adopted to pre-process the historical price data. The promising network prediction performance with respect to price data demonstrates the efficiency of the proposed method, with real-time wavelet denoising making the online application of the proposed price prediction method feasible. Along with market deregulation, power system security assessment has attracted great concern from both academic and industry analysts, especially after several devastating blackouts in the USA, the UK, and Russia. This thesis goes on to propose an efficient composite method for cascading failure prevention comprising three major stages. Firstly, a hybrid method based on principal component analysis (PCA) and specific statistical measures is used to detect system faults. Secondly, an RBF neural network is used for power network bus voltage profile prediction. Tests are carried out by means of the "N-1" and "N-1-1" methods applied to the New England power system through PSS/E dynamic simulations. Results show that system faults can be reliably detected and voltage profiles can be correctly predicted. In contrast to traditional methods involving phase calculation, this technique uses raw data from the time domain and is computationally inexpensive in terms of both memory and speed for practical applications. This establishes a connection between power system fault analysis and cascading analysis. Finally, a multi-stage model predictive control (MPC) based load shedding scheme for ensuring power system voltage stability is proposed. It is demonstrated that optimal load shedding action for voltage stability during emergencies can be achieved as a consequence. Based on the above discussions, a framework for analysing and enhancing power system voltage stability is proposed; such a framework can be used as an effective means of cascading failure analysis. In summary, the research reported in this thesis provides a composite framework for power system data analysis in a market environment. It covers advanced techniques of computational intelligence and machine learning, and proposes effective solutions for both the market operation and system stability problems facing today's power industry.
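
A minimal sketch of the moving-window idea referenced above: each incoming sample shifts a fixed-length window, the window is symmetrically extended to soften border distortion, and only the newest denoised sample is emitted. The window length, wavelet and threshold rule are illustrative choices, not those of the thesis.

```python
# Sketch only: streaming wavelet denoising with a moving window and
# symmetric window extension against border distortion.
import numpy as np
import pywt

def denoise_window(window, wavelet="db4", level=3, pad=16):
    ext = np.pad(window, pad, mode="symmetric")          # window extension
    coeffs = pywt.wavedec(ext, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(ext)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    rec = pywt.waverec(coeffs, wavelet)[: len(ext)]
    return rec[pad:-pad]                                 # drop the extension

def online_denoise(stream, win=128):
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) > win:
            buf.pop(0)
        if len(buf) == win:
            yield denoise_window(np.array(buf))[-1]      # emit newest denoised sample
```
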
167

SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications

Rehman, Abdul January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging and fruitful but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance as compared to the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance. Firstly, the original SSIM is a Full-Reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general purpose Reduced-Reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image de-noising and image super-resolution are required at various stages of visual communication systems, starting from image acquisition to image display at the receiver. We incorporate SSIM into the framework of sparse signal representation and non-local means methods and demonstrate improved performance in image de-noising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive optimization method that transforms the DCT domain frame residuals to a perceptually uniform space. Both approaches demonstrate the potential to largely improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to the variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well-understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting subjective perceptual experience of time-varying video quality.
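
For reference, the sketch below computes a single-window (global) SSIM following the standard formula with the usual stabilizing constants; practical SSIM is computed over local Gaussian-weighted windows and then averaged, which full implementations such as scikit-image provide.

```python
# Sketch only: global SSIM over one window (no local windowing).
import numpy as np

def ssim_global(x, y, data_range=255.0):
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```
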
168

HARDI Denoising using Non-local Means on the ℝ³ × 𝕊² Manifold

Kuurstra, Alan 20 December 2011 (has links)
Magnetic resonance imaging (MRI) has long become one of the most powerful and accurate tools of medical diagnostic imaging. Central to the diagnostic capabilities of MRI is the notion of contrast, which is determined by the biochemical composition of examined tissue as well as by its morphology. Despite the importance of the prevalent T₁, T₂, and proton density contrast mechanisms to clinical diagnosis, none of them has demonstrated effectiveness in delineating the morphological structure of the white matter - the information which is known to be related to a wide spectrum of brain-related disorders. It is only with the recent advent of diffusion-weighted MRI that scientists have been able to perform quantitative measurements of the diffusivity of white matter, making possible the structural delineation of neural fibre tracts in the human brain. One diffusion imaging technique in particular, namely high angular resolution diffusion imaging (HARDI), has inspired a substantial number of processing methods capable of obtaining the orientational information of multiple fibres within a single voxel while boasting minimal acquisition requirements. HARDI characterization of fibre morphology can be enhanced by increasing spatial and angular resolutions. However, doing so drastically reduces the signal-to-noise ratio. Since pronounced measurement noise tends to obscure and distort diagnostically relevant details of diffusion-weighted MR signals, increasing spatial or angular resolution necessitates application of the efficient and reliable tools of image denoising. The aim of this work is to develop an effective framework for the filtering of HARDI measurement noise which takes into account both the manifold to which the HARDI signal belongs and the statistical nature of MRI noise. These goals are accomplished using an approach rooted in non-local means (NLM) weighted averaging. The average includes samples, and therefore dependencies, from the entire manifold and the result of the average is used to deduce an estimate of the original signal value in accordance with MRI statistics. NLM averaging weights are determined adaptively based on a neighbourhood similarity measure. The novel neighbourhood comparison proposed in this thesis is one of spherical neighbourhoods, which assigns large weights to samples with similar local orientational diffusion characteristics. Moreover, the weights are designed to be invariant to both spatial rotations as well as to the particular sampling scheme in use. This thesis provides a detailed description of the proposed filtering procedure as well as experimental results with synthetic and real-life data. It is demonstrated that the proposed filter has substantially better denoising capabilities as compared to a number of alternative methods.
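
The weighted-averaging principle that NLM rests on can be shown in one dimension: each sample is replaced by an average of samples whose surrounding patches look similar. The thesis replaces these plain patch comparisons with rotation- and sampling-invariant comparisons of spherical neighbourhoods on the ℝ³ × 𝕊² manifold; the parameters below are illustrative only.

```python
# Sketch only: 1-D non-local means via patch-similarity weights.
import numpy as np

def nlm_1d(signal, patch=3, search=10, h=0.1):
    n = len(signal)
    padded = np.pad(signal, patch, mode="reflect")
    out = np.zeros(n)
    for i in range(n):
        ref = padded[i : i + 2 * patch + 1]              # patch centred at sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights, vals = [], []
        for j in range(lo, hi):
            cand = padded[j : j + 2 * patch + 1]
            d2 = np.mean((ref - cand) ** 2)              # patch dissimilarity
            weights.append(np.exp(-d2 / h ** 2))
            vals.append(signal[j])
        w = np.array(weights)
        out[i] = np.dot(w, vals) / w.sum()               # weighted average
    return out
```
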
169

Simultaneous Bottom-up/top-down Processing In Early And Mid Level Vision

Erdem, Mehmet Erkut 01 November 2008 (has links) (PDF)
The prevalent view in computer vision since Marr is that visual perception is a data-driven bottom-up process. In this view, image data is processed in a feed-forward fashion where a sequence of independent visual modules transforms simple low-level cues into more complex abstract perceptual units. Over the years, a variety of techniques has been developed using this paradigm. Yet an important realization is that low-level visual cues are generally so ambiguous that they could make purely bottom-up methods quite unsuccessful. These ambiguities cannot be resolved without taking account of high-level contextual information. In this thesis, we explore different ways of enriching early and mid-level computer vision modules with a capacity to extract and use contextual knowledge. Mainly, we integrate low-level image features with contextual information within unified formulations where bottom-up and top-down processing take place simultaneously.
170

Data-driven transform optimization for next generation multimedia applications

Sezer, Osman Gokhan 25 August 2011 (has links)
The objective of this thesis is to formulate a generic dictionary learning method with the guiding principle that states: efficient representations lead to efficient estimations. The fundamental idea behind using transforms or dictionaries for signal representation is to exploit the regularity within data samples such that the redundancy of the representation is minimized subject to a level of fidelity. This observation translates to rate-distortion cost in the compression literature, where a transform that has the lowest rate-distortion cost provides a more efficient representation than the others. In our work, rather than being used as an analysis tool, the rate-distortion cost is utilized to improve the efficiency of transforms. For this, an iterative optimization method is proposed, which seeks an orthonormal transform that reduces the expected rate-distortion cost over an ensemble of data. Due to the generic nature of the new optimization method, one can design a set of orthonormal transforms either in the original signal domain or on top of a transform-domain representation. To test this claim, several image codecs are designed, which use block-, lapped- and wavelet-transform structures. Significant increases in compression performance are observed compared to the original methods. An extension of the proposed optimization method to video coding gave us state-of-the-art compression results with separable transforms. Using robust statistics, an explanation of the superiority of the new design over other learning-based methods such as the Karhunen-Loeve transform is also provided. Finally, the new optimization method and the minimization of the "oracle" risk of diagonal estimators in signal estimation are shown to be equivalent. With the design of new diagonal estimators and risk-minimization-based adaptation, a new image denoising algorithm is proposed. While these diagonal estimators denoise local image patches, the new denoising algorithm is scaled to operate on large images by formulating the optimal fusion of overlapping local denoised estimates. In our experiments, state-of-the-art results for transform-domain denoising are achieved.
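
As a point of reference for the data-driven transform design described above, the sketch below learns the classical Karhunen-Loeve transform (KLT), the orthonormal transform given by the eigenvectors of the data covariance, which serves as the energy-compaction baseline that a rate-distortion-driven iteration of the kind the abstract describes would improve upon. This is a baseline sketch, not the thesis algorithm.

```python
# Sketch only: KLT learning from an ensemble of vectorized signal blocks.
import numpy as np

def learn_klt(patches):
    """patches: (n_patches, dim) ensemble of vectorized signal blocks."""
    X = patches - patches.mean(axis=0)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # sort basis by energy compaction
    return eigvecs[:, order]                   # orthonormal transform, columns = basis

# usage sketch: coeffs = patches @ T; quantize coeffs; reconstruct via coeffs @ T.T
```
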
