  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

VARIATIONAL METHODS FOR IMAGE DEBLURRING AND DISCRETIZED PICARD'S METHOD

Money, James H. 01 January 2006 (has links)
In this digital age, it is more important than ever to have good methods for processing images. We focus on the removal of blur from a captured image, known as the image deblurring problem. In particular, we make no assumptions about the blur itself, which makes this a blind deconvolution problem. We approach the problem by minimizing an energy functional that combines a total variation norm with a fidelity constraint. In particular, we extend the work of Chan and Wong to use a reference image in the computation. Using the shock filter as a reference image, we produce a superior result compared to existing methods. We are able to produce good results on non-black background images and on images where the blurring function is not centro-symmetric. We consider using a general Lp norm for the fidelity term and compare different values of p. Using an analysis similar to that of Strong and Chan, we derive an adaptive-scale method for the recovery of the blurring function. We also consider two numerical methods in this dissertation. The first is an extension of Picard's method for PDEs to the discrete case. We compare the results to the analytical Picard method, showing that the only difference is the use of approximate rather than exact derivatives. We relate the method to existing finite difference schemes, including the Lax-Wendroff method. We derive the stability constraints for several linear problems and show that the stability region increases. We conclude with several examples of the method, showing that the computational savings are substantial. The second method is a black-box implementation of a method for solving the generalized eigenvalue problem. Building on the work of Golub and Ye, we implement a routine that is robust in comparison with existing methods. We compare this routine against JDQZ and LOBPCG and show that it performs well in numerical testing.
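The discretized Picard's method described in this abstract can be illustrated with a minimal sketch (this is not the author's scheme; the grid size, iteration count, and test ODE are illustrative). Picard's method rewrites x' = f(t, x), x(0) = x0 as the fixed-point map x_{k+1}(t) = x0 + ∫₀ᵗ f(s, x_k(s)) ds; the discrete variant simply replaces the integral with a quadrature rule on a grid:

```python
import numpy as np

def discretized_picard(f, x0, t, iters=30):
    """Picard iteration x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds,
    with the integral approximated by the cumulative trapezoid rule."""
    x = np.full_like(t, x0, dtype=float)
    for _ in range(iters):
        g = f(t, x)
        # cumulative trapezoid integral of g along the grid t
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

# Test problem: x' = x, x(0) = 1, whose exact solution is x(t) = e^t.
t = np.linspace(0.0, 1.0, 201)
x = discretized_picard(lambda s, u: u, 1.0, t)
```

The fixed point of this discrete map is exactly the trapezoidal-rule solution, so the only error sources are the quadrature and the (rapidly converging) Picard contraction — mirroring the abstract's observation that the discrete method differs from the analytical one only in using approximations in place of exact operations.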
22

Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms

January 2012 (has links)
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than the Nyquist rate requires. So far in the literature, most works on CS concentrate on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make successful transitions from theoretical feasibility to practical capability. This thesis studies several issues arising from applying the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacity. We propose and study a novel multi-resolution CS encoding matrix and a decoding model with a TV-DCT regularization function. Extensive numerical results are presented, obtained from experiments that use not only synthetic data but also real data measured by hardware. The results establish the feasibility and robustness, to various extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but the progress made in this thesis will hopefully represent a useful first step toward meeting them.
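The CS decoding idea this abstract builds on can be sketched in a few lines. The toy below uses an ℓ1 penalty in place of the thesis's TV regularizer, solved by iterative soft-thresholding (ISTA) rather than the augmented Lagrangian scheme of TVAL3; the matrix size, sparsity level, and λ are illustrative assumptions:

```python
import numpy as np

def ista(A, b, lam=0.01, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a toy stand-in for the TV-regularized decoding models discussed above."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)            # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)    # underdetermined CS-style matrix
x_true = np.zeros(100)
x_true[[5, 37, 62, 90]] = [1.0, -1.0, 2.0, 1.5]     # 4-sparse ground truth
b = A @ x_true                                      # 50 measurements of 100 unknowns
x_hat = ista(A, b)
```

With far fewer measurements than unknowns, the sparsity-promoting penalty is what makes recovery possible — the same principle that lets the thesis reconstruct 3D volumes from sub-Nyquist samples.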
23

Estimating Seasonal Drivers in Childhood Infectious Diseases with Continuous Time Models

Abbott, George H. 2010 May 1900 (has links)
Many important factors affect the spread of childhood infectious disease. To better understand the fundamental drivers of infectious disease spread, several researchers have estimated seasonal transmission coefficients using discrete-time models. This research addresses several shortcomings of the discrete-time approaches, including removing the need for the reporting interval to match the serial interval of the disease, using infectious disease data from three major cities: New York City, London, and Bangkok. Using a simultaneous approach to the optimization of differential equation systems, with a Radau collocation discretization scheme and total variation regularization of the transmission parameter profile, this research demonstrates that seasonal transmission parameters can be effectively estimated with continuous-time models. It further correlates school holiday schedules with the transmission parameter for New York City and London, where previous work has already been done, and demonstrates similar results for a city relatively unstudied in childhood infectious disease research: Bangkok, Thailand.
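The continuous-time models referred to here are seasonally forced compartmental models. As a minimal sketch (not the thesis's estimation procedure — that fits β(t) to data via collocation; here β(t), γ, and the initial conditions are illustrative values), a forward-simulated SIR model with a sinusoidal transmission rate looks like this:

```python
import math

def seasonal_beta(t, beta0=400.0, amp=0.25):
    """Sinusoidally forced transmission rate (per year); beta0 and amp
    are illustrative, not estimates from the thesis."""
    return beta0 * (1.0 + amp * math.cos(2.0 * math.pi * t))

def simulate_sir(beta, gamma=365.0 / 13, s0=0.06, i0=0.001, years=5.0, dt=1e-4):
    """Forward-Euler integration of the seasonally forced SIR model:
    S' = -beta(t) S I,  I' = beta(t) S I - gamma I,  R' = gamma I."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    t = 0.0
    while t < years:
        new_inf = beta(t) * s * i * dt    # new infections this step
        rec = gamma * i * dt              # recoveries this step
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        t += dt
    return s, i, r

s, i, r = simulate_sir(seasonal_beta)
```

The inverse problem the thesis solves runs the other way: given incidence data, recover the β(t) profile, with TV regularization keeping the estimated profile from oscillating spuriously between reporting intervals.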
24

Compressed Sensing Based Computerized Tomography Imaging

Bicer, Aydin 01 February 2012 (has links) (PDF)
There is no doubt that computerized tomography (CT) is highly beneficial for patients when used appropriately for diagnostic purposes. However, concerns have been raised about the possible risk of cancer induction from CT because of the dramatic increase in CT usage in medicine. It is crucial to keep the radiation dose as low as reasonably achievable to reduce this risk. This thesis aims to reduce X-ray radiation exposure to patients and/or CT operators via a new imaging modality that exploits the recent compressed sensing (CS) theory. Two efficient reconstruction algorithms based on total variation (TV) minimization of the estimated images are proposed. Using fewer measurements than traditional filtered back-projection algorithms or algebraic reconstruction techniques require, the proposed algorithms reduce the radiation dose without sacrificing CT image quality, even in the case of noisy measurements. By employing powerful methods to solve the TV minimization problem, both schemes achieve higher reconstruction speed than recently introduced CS-based algorithms.
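The algebraic reconstruction techniques (ART) this abstract compares against reduce, in their basic form, to Kaczmarz row-projection sweeps on the linear system Ax = b that relates image pixels to projection measurements. A minimal sketch on a toy system (the matrix is illustrative, not a CT projection geometry):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Kaczmarz iterations (basic ART): cyclically project the iterate
    onto the hyperplane of each measurement row a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]])  # toy "projection" matrix
x_true = np.array([1.0, 2.0])
b = A @ x_true                                      # consistent measurements
x = kaczmarz(A, b)
```

Classic ART needs enough rows (views) to pin down the image; the thesis's contribution is that TV minimization exploits image structure so that far fewer views, and hence a lower dose, suffice.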
25

On the Autoconvolution Equation and Total Variation Constraints

Fleischer, G., Gorenflo, R., Hofmann, B. 30 October 1998 (has links) (PDF)
This paper is concerned with the numerical analysis of the autoconvolution equation $x*x=y$ restricted to the interval $[0,1]$. We present a discrete constrained least squares approach and prove its convergence in $L^p(0,1), 1<p<\infty$, where the regularization is based on a prescribed bound for the total variation of admissible solutions. This approach includes the case of non-smooth solutions possessing jumps. Moreover, an adaptation to the Sobolev space $H^1(0,1)$ and some remarks on monotone functions are added. The paper is completed by a numerical case study concerning the determination of non-monotone smooth and non-smooth functions $x$ from the autoconvolution equation with noisy data $y$.
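The discrete forward map behind this least squares approach is easy to write down: on a uniform grid of $[0,1]$ with spacing $h$, the autoconvolution $(x*x)(t) = \int_0^t x(s)\,x(t-s)\,ds$ becomes a truncated discrete convolution scaled by $h$, and the total variation constraint bounds the sum of absolute differences. A minimal sketch (forward operator only, not the paper's constrained solver):

```python
import numpy as np

def autoconv(x, h):
    """Discrete forward map of (x*x)(t) = int_0^t x(s) x(t-s) ds on [0,1]."""
    n = len(x)
    return h * np.convolve(x, x)[:n]

def total_variation(x):
    """Discrete total variation: the quantity the paper's constraint bounds."""
    return np.abs(np.diff(x)).sum()

n = 1000
h = 1.0 / n
t = (np.arange(n) + 1) * h
y = autoconv(np.ones(n), h)                 # x == 1 gives (x*x)(t) = t exactly
x_jump = np.where(t <= 0.5, 1.0, 2.0)       # admissible solution with one jump
```

Note that a jump function like `x_jump` has finite total variation, which is why the TV-based regularization can admit the non-smooth solutions the paper emphasizes, where an $H^1$ penalty could not.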
26

Signal extractions with applications in finance / Extractions de signaux et applications en finance

Goulet, Clément 05 December 2017 (has links)
The main objective of this PhD dissertation is to develop new signal extraction techniques with applications in finance. In our setting, a signal is defined in two ways. In the framework of investment strategies, a signal is a function that generates buy/sell orders; in denoising theory, a signal is a function, disrupted by some noise, that we want to recover. The first part of this dissertation studies historical volatility spillovers around corporate earnings announcements on the Nasdaq. Notably, we study whether a one-point move in the announcer's historical volatility at time t generates a move of beta percent at time t+1. We find evidence of persistent volatility spillovers toward firms in the announcer's sector, and we study their intensity across variables such as the announcement outcome, the surprise effect, the announcer's capitalization, and market sentiment regarding the announcer. We illustrate our findings with a volatility arbitrage strategy. The second part of the dissertation adapts denoising techniques from imaging, namely wavelet and total variation methods, to forms of noise observed in finance. A first paper, co-written with Matthieu Garcin, proposes an innovative denoising algorithm for a signal disrupted by noise with a spatially varying standard deviation, with a financial application to volatility modelling. A second paper adapts the Bayesian (Maximum A Posteriori) representation of the Rudin, Osher and Fatemi approach to asymmetric and leptokurtic noises, namely Student, asymmetric Gaussian and asymmetric Student distributions. This algorithm is applied to the denoising of high-frequency intra-day stock prices, the goal being to run a pattern-recognition trading strategy on the local extrema of the denoised signal.
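The idea of a noise-adaptive denoiser can be sketched very simply. Under a Laplace prior and Gaussian noise, the pointwise MAP estimate is soft-thresholding with a threshold that scales with the local noise variance — a deliberate simplification of the wavelet-domain scheme the abstract alludes to, with `lam` and the example values being illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, thresh):
    """Shrink v toward zero by thresh; the MAP estimate under a Laplace prior."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def denoise_varying_noise(y, sigma, lam=1.0):
    """Pointwise shrinkage whose threshold scales with the local noise
    variance sigma(t)^2, so noisier regions are smoothed more aggressively
    (hypothetical simplification, not the dissertation's algorithm)."""
    return soft_threshold(y, lam * sigma ** 2)

# Same observation, different local noise levels: the noisy region is
# shrunk hard, the quiet region almost untouched.
d = denoise_varying_noise(np.array([2.0, 2.0]), sigma=np.array([1.0, 0.1]))
```

The spatially varying threshold is the essential point: a single global threshold would either under-smooth the noisy stretches of a price series or wipe out detail in the quiet ones.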
27

Odstranění šumu z obrazů kalibračních vzorků získaných elektronovým mikroskopem / Denoising of Images from Electron Microscope

Holub, Zbyněk January 2017 (has links)
This Master's thesis focuses on denoising images acquired with a transmission electron microscope. The thesis describes the principles of digitizing the resulting images and the individual noise components that arise during digitization. These unwanted components degrade the quality of the resulting image. Filtering methods based on total variation minimization were therefore selected, and their principles are described in this thesis. Filtering with the Non-local means filter was chosen as the reference method, since it is currently among the most widely used and most effective methods. The following criteria were used for the objective evaluation of filtering quality: SNR, PSNR and SSIM. At the end of the thesis, all obtained results are presented and the effectiveness of the individual filtering methods is discussed.
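Of the evaluation criteria mentioned (SNR, PSNR, SSIM), the first two have simple closed forms that are worth stating. A minimal sketch in decibels (the peak value of 255 assumes 8-bit images; SSIM is omitted since it needs windowed statistics):

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-noise ratio: signal energy over error energy, in dB."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - noisy) ** 2))

def psnr_db(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio: peak intensity squared over MSE, in dB."""
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((4, 4), 100.0)
noisy = clean + 1.0            # uniform error of 1 gray level => MSE = 1
snr = snr_db(clean, noisy)
psnr = psnr_db(clean, noisy)
```

Higher values mean better reconstruction; PSNR normalizes by the dynamic range, which is why it is the usual headline number when comparing filters on 8-bit micrographs.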
28

Cartoon-Residual Image Decompositions with Application in Fingerprint Recognition

Richter, Robin 06 November 2019 (has links)
No description available.
29

Iterative methods for the solution of the electrical impedance tomography inverse problem.

Alruwaili, Eman January 2023 (has links)
No description available.
30

Regularization of inverse problems in image processing

Jalalzai, Khalid 09 March 2012 (has links) (PDF)
Inverse problems consist in recovering data that has been transformed or perturbed. Being ill-posed, they require regularization. In image processing, total variation as a regularization tool has the advantage of preserving discontinuities while creating smooth regions, results established in this thesis in a continuous setting and for general energies. In addition, we propose and study a variant of total variation. We establish a dual formulation that allows us to show that this variant coincides with total variation on sets of finite perimeter. In recent years, non-local methods exploiting self-similarities in images have proved particularly successful. We adapt this approach to the spectrum-completion problem for general inverse problems. The last part is devoted to the algorithmic aspects of optimizing the convex energies considered. We study the convergence and complexity of a recent family of so-called Primal-Dual algorithms.
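The Primal-Dual algorithms mentioned in this abstract can be illustrated on the simplest TV-regularized model, 1D ROF denoising: min_u ½‖u − f‖² + λ·TV(u). The sketch below follows the standard Chambolle-Pock pattern (dual ascent with projection, proximal primal step, extrapolation); the signal, λ, and step sizes are illustrative, and this is a generic textbook instance rather than the thesis's algorithms:

```python
import numpy as np

def tv(u):
    return np.abs(np.diff(u)).sum()

def rof_primal_dual(f, lam=0.3, iters=500, tau=0.25, sigma=0.25):
    """Primal-dual iterations for the 1D ROF model
    min_u 0.5*||u - f||^2 + lam*TV(u).  Step sizes satisfy
    tau*sigma*||D||^2 <= 1, with ||D||^2 <= 4 for forward differences."""
    u = f.copy()
    u_bar = f.copy()
    p = np.zeros(len(f) - 1)                 # dual variable, one per difference
    for _ in range(iters):
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)  # ascent + projection
        div = np.zeros_like(f)               # div = D^T p
        div[:-1] -= p
        div[1:] += p
        u_new = (u - tau * div + tau * f) / (1.0 + tau)     # prox of data term
        u_bar = 2.0 * u_new - u              # extrapolation step
        u = u_new
    return u

f = np.array([0.0, 0.4, -0.4, 0.2, 1.0, 1.4, 0.6, 1.2])    # noisy two-level signal
u = rof_primal_dual(f)
```

Two properties make this a good sanity check: the divergence of the dual variable sums to zero, so the iteration preserves the mean of f, and the TV of the output drops well below that of the oscillatory input while the jump between the two levels survives — the discontinuity-preserving behavior the thesis analyzes.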
