71

Regularization of inverse problems in image processing

Jalalzai, Khalid 09 March 2012 (has links) (PDF)
Inverse problems consist in recovering data that has been transformed or perturbed. Being ill-posed, they require regularization. In image processing, total variation as a regularization tool has the advantage of preserving discontinuities while producing smooth regions, results established in this thesis in a continuous setting and for general energies. In addition, we propose and study a variant of total variation. We establish a dual formulation that allows us to prove that this variant coincides with total variation on sets of finite perimeter. In recent years, nonlocal methods exploiting self-similarities in images have been particularly successful. We adapt this approach to the spectrum-completion problem for general inverse problems. The last part is devoted to the algorithmic aspects inherent in optimizing the convex energies considered. We study the convergence and complexity of a recent family of so-called primal-dual algorithms.
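The total-variation regularization discussed in this abstract can be illustrated with a minimal gradient-descent sketch on the smoothed ROF energy. This is an illustrative stand-in, not the thesis's method: the parameter values are arbitrary, and the thesis's continuous-setting analysis and primal-dual algorithms are far more general.

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=0.1, tau=0.1, n_iter=200):
    """Gradient descent on the smoothed ROF energy
        min_u  lam * sum sqrt(|grad u|^2 + eps^2) + 0.5 * ||u - f||^2.
    Periodic boundaries via np.roll; all parameters illustrative."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u           # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)     # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # backward-difference divergence of the dual field (px, py)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - f) - lam * div)       # descent step
    return u
```

With a small enough step `tau` this smooths flat regions while largely preserving the jump across an edge, the behavior the abstract attributes to total variation.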
72

Detection of subgrid and spot locations in digital microarray images

Μαστρογιάννη, Αικατερίνη 05 January 2011 (has links)
DNA microarray technology is a high-throughput technique that determines how a cell can control the expression of large numbers of genes simultaneously. Microarrays are used to monitor changes in gene expression levels in response to environmental conditions, or in diseased versus healthy cells, using advanced information processing methods. Owing to the way microarrays are produced, their experimental processing involves a large number of error-prone procedures, which inevitably leads to high noise levels and structural problems in the resulting data. Over the last fifteen years, many capable methods for enhancing microarray images have been proposed, yet the enhancement process remains an open issue, as the need for better results has not diminished. The goal of this PhD thesis is to contribute to this effort by proposing digital image processing methods that improve image quality through noise reduction and image segmentation. Specifically, a novel automated method for locating the subgrids in a microarray image is presented, partly filling a gap in the microarray literature concerning this pre-processing step. The step is rarely considered: most microarray work arbitrarily assumes that the subgrids have somehow already been located, and in practical automated analysis systems an initial estimate of the subgrid positions is usually followed by manual correction by the system operators. Automating subgrid detection leads to faster and more accurate extraction of the information carried by the microarray image. The thesis then presents a comparative study of microarray image denoising using the wavelet transform and spatial filters, and additionally applies mathematical morphology techniques to drastically reduce salt-and-pepper noise. Finally, a method for segmenting microarray spot regions is proposed, based on the Random Walker algorithm. In the experiments, accurate spot segmentation is achieved even for severely degraded microarray images (noise, fabrication defects, handling errors during fabrication, etc.), requiring as prior knowledge only a small number of annotated pixels to obtain a high-quality segmentation. The experimental results are compared qualitatively with those of the Chan-Vese segmentation model, which uses an initial guess of the boundaries between the classes to be separated, showing that the proposed method assigns spot regions to the correct class with clearly better accuracy.
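The salt-and-pepper reduction mentioned in this abstract can be illustrated with a standard 3×3 median filter, a common baseline for impulse noise. This is a sketch only; it does not reproduce the thesis's mathematical-morphology techniques.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter, a standard remedy for salt-and-pepper noise.
    Edge-padded; pure NumPy, no SciPy dependency."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # stack the nine shifted views and take the pixelwise median
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

An isolated impulse (a single corrupted pixel in a 3×3 neighborhood) is always outvoted by the majority of its neighbors, so sparse salt-and-pepper noise is removed exactly.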
73

Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais

Casaca, Wallace Correa de Oliveira [UNESP] 25 February 2010 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / In this work we propose four new approaches to the problem of restoring real images containing textures, from the perspective of three topics: reconstruction of damaged regions, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two are used to remove noise, with formulations based on three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared with related models in the literature.
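Nonlinear-PDE denoising of the kind this abstract invokes can be sketched with the classic Perona-Malik edge-stopping diffusion. This is an illustrative stand-in under arbitrary parameter choices, not the three-term decomposition models of the thesis.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion: smooth where gradients are small,
    diffuse little across strong edges.  dt <= 0.25 keeps the explicit
    four-neighbour scheme stable; periodic boundaries via np.roll."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)       # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u            # differences to the
        ds = np.roll(u, -1, axis=0) - u           # four neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a nearly flat noisy patch the conductance stays close to one, so the scheme behaves like heat diffusion and strongly suppresses the noise variance.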
74

Savitzky-Golay Filters and Application to Image and Signal Denoising

Menon, Seeram V January 2015 (has links) (PDF)
We explore the applicability of local polynomial approximation of signals for noise suppression. In the context of data regression, Savitzky and Golay showed that least-squares approximation of data with a polynomial of fixed order, together with a constant window length, is identical to convolution with a finite impulse response filter, whose characteristics depend entirely on two parameters, namely, the order and window length. Schafer’s recent article in IEEE Signal Processing Magazine provides a detailed account of one-dimensional Savitzky-Golay (SG) filters. Drawing motivation from this idea, we present an elaborate study of two-dimensional SG filters and employ them for image denoising by optimizing the filter response to minimize the mean-squared error (MSE) between the original image and the filtered output. The key contribution of this thesis is a method for optimal selection of order and window length of SG filters for denoising images. First, we apply the denoising technique for images contaminated by additive Gaussian noise. Owing to the absence of ground truth in practice, direct minimization of the MSE is infeasible. However, the classical work of C. Stein provides a statistical method to overcome the hurdle. Based on Stein’s lemma, an estimate of the MSE, namely Stein’s unbiased risk estimator (SURE), is derived, and the two critical parameters of the filter are optimized to minimize the cost. The performance of the technique improves when a regularization term, which penalizes fast variations in the estimate, is added to the optimization cost. In the next three chapters, we focus on non-Gaussian noise models. In Chapter 3, image degradation in the presence of a compound noise model, where images are corrupted by mixed Poisson-Gaussian noise, is addressed. Inspired by Hudson’s identity, an estimate of MSE, namely Poisson unbiased risk estimator (PURE), which is analogous to SURE, is developed. 
Combining both lemmas, Poisson-Gaussian unbiased risk estimator (PGURE) minimization is performed to obtain the optimal filter parameters. We also show that SG filtering provides better lowpass approximation for a multiresolution denoising framework. In Chapter 4, we employ SG filters for reducing multiplicative noise in images. The standard SG filter frequency response can be controlled along the horizontal or vertical direction, which limits its ability to capture oriented features and texture lying at other angles. Here, we introduce the idea of steering the SG filter kernel and perform mean-squared error minimization based on the new concept of multiplicative noise unbiased risk estimation (MURE). Finally, we propose a method to robustify SG filters, that is, to make them robust to deviations from Gaussian noise statistics. SG filters work on the principle of least-squares error minimization and are hence compatible with maximum-likelihood (ML) estimation under Gaussian statistics. However, for heavy-tailed noise such as the Laplacian, where ML estimation requires mean-absolute error minimization in lieu of MSE minimization, standard SG filter performance deteriorates. Direct ℓ1 minimization is a challenge since there is no closed-form solution. We solve the problem by inducing the ℓ1-norm criterion using the iteratively reweighted least-squares (IRLS) method. At every iteration, we solve an ℓ2 problem, which is equivalent to optimizing a weighted SG filter, but, as the iterations progress, the solution converges to that corresponding to ℓ1 minimization. The results thus obtained are superior to those obtained with the standard SG filter.
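The abstract's central observation, that a least-squares polynomial fit over a fixed window reduces to convolution with a fixed FIR kernel, can be checked directly. The sketch below is ours (function names and the edge-padding choice are assumptions, not the thesis's implementation); the coefficients it produces for window 5 and order 2 are the classical Savitzky-Golay kernel (-3, 12, 17, 12, -3)/35.

```python
import numpy as np

def sg_coeffs(window, order):
    """Savitzky-Golay smoothing coefficients: fit a degree-`order`
    polynomial to each window by least squares and evaluate the fit
    at the centre sample.  Because the fit is linear in the data, this
    collapses to a fixed FIR kernel: the first row of the pseudoinverse
    of the Vandermonde design matrix."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)  # columns 1, x, x^2, ...
    return np.linalg.pinv(A)[0]                   # centre-sample weights

def sg_smooth(y, window=5, order=2):
    """Apply the SG kernel by convolution (edge-padded)."""
    h = sg_coeffs(window, order)
    pad = window // 2
    yp = np.pad(y, pad, mode='edge')
    return np.convolve(yp, h[::-1], mode='valid')
```

Since an order-2 SG filter reproduces quadratics exactly, smoothing `y = x**2` leaves the interior samples unchanged, a quick sanity check on the kernel.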
75

Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais /

Casaca, Wallace Correa de Oliveira. January 2010 (has links)
Orientador: Maurílio Boaventura / Banca: Evanildo Castro Silva Júnior / Banca: Alagacone Sri Ranga / Abstract: In this work we propose four new approaches to the problem of restoring real images containing textures, from the perspective of three topics: reconstruction of damaged regions, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two are used to remove noise, with formulations based on three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared with related models in the literature. / Mestre
76

Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration

Heinrich, André 21 March 2013 (has links)
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks, assign a Fenchel dual problem to it, and prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions, for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximately optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates, and that this solution converges to the optimal solution of the primal problem for arbitrarily small accuracy. Finally, the support vector regression task arises as a particular case of the general optimization problem, and the theory is specialized to it. We calculate several proximal points occurring when using different loss functions, as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach to these types of problems.
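The proximal points and Fenchel conjugates this abstract relies on can be illustrated with the classic ℓ1 pair (a hedged sketch, not the thesis's construction): soft-thresholding is the proximal map of t·‖·‖₁, projection onto the ℓ∞-ball of radius t is the proximal map of its Fenchel conjugate (the indicator of that ball), and the two recombine exactly via Moreau's decomposition prox_f(v) + prox_f*(v) = v.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal point of f = t * ||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_linf_ball(v, t):
    """Proximal point of the Fenchel conjugate f* (indicator of the
    l-infinity ball of radius t): Euclidean projection onto that ball."""
    return np.clip(v, -t, t)
```

Verifying the Moreau identity numerically is a standard check that a conjugate pair has been computed correctly, which is in the spirit of the primal-dual constructions the abstract describes.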
77

Comparative Denoising Study Deep Learning & Collaborative Filter / Jämförande Brusreducerande Studie Djup Maskininlärning & Kollaborativa Filter

Kamoun, Sami January 2024 (has links)
This thesis addresses the challenge of denoising microscopy images captured under low-light conditions with varying intensity levels. The study compares three deep learning models (N2V, CARE, and RCAN) against the collaborative filter BM4D, which serves as a reference point. The models were trained on two distinct datasets, Endoplasmic Reticulum and Mitochondria, both acquired with a lattice light-sheet microscope. Results show that BM4D maintains stable performance metrics and delivers superior visual quality compared to the noisy input. In contrast, the deep learning models exhibit poor performance on noisy test images when trained on datasets with non-uniform noise levels. Additionally, a sensitivity comparison of neural parameters between the same models was made, revealing that the supervised models are data-specific to some extent, whereas the self-supervised N2V demonstrates consistent neural parameters, suggesting lower data specificity.
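Denoising comparisons such as this one are commonly scored with the peak signal-to-noise ratio; the sketch below is a generic reference implementation, since the abstract does not specify which performance metrics the thesis used.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    denoised estimate; `peak` is the maximum possible pixel value."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 0.1 on unit-range images gives an MSE of 0.01, i.e. exactly 20 dB, which makes the formula easy to sanity-check.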
