About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Numerical simulation of shallow water equations and some physical models in image processing

Haro Ortega, Glòria, 11 July 2005
This thesis has two main subjects: the numerical simulation of shallow water equations, and the resolution of several problems in image processing. The first part is devoted to shallow water equations. We propose a combined scheme that uses Marquina's double flux decomposition (extended to the non-homogeneous case) when adjacent states are not close, and a single decomposition otherwise. The combined scheme satisfies the exact C-property. Furthermore, we propose a special treatment of the numerical scheme in dry zones. The second subject is the digital simulation of Day for Night (known in Europe as American Night). The proposed algorithm simulates a night image from a day image, taking into account several aspects of night perception; in order to simulate the loss of visual acuity we introduce a partial differential equation that models the spatial-summation principle of the photoreceptors in the retina. The restoration of gaps (inpainting) on surfaces is the object of the third part, for which we propose several geometrical approaches based on mean curvature, as well as two interpolation methods: solving the Laplace equation, and the Absolutely Minimizing Lipschitz Extension (AMLE). Finally, we address the restoration of satellite images: the variational problem we propose performs irregular-to-regular resampling, denoising, deconvolution and zooming at the same time.
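As a rough illustration of the Laplace-equation interpolation used for inpainting, the sketch below fills a masked region by Jacobi iterations on the discrete Laplacian. This is a minimal sketch, not the author's code; the function name, initialization, and stopping tolerance are assumptions.

    import numpy as np

    def harmonic_inpaint(image, mask, n_iters=5000, tol=1e-6):
        # Fill pixels where mask is True by solving the Laplace equation:
        # repeatedly replace each hole pixel by its 4-neighbour average.
        u = image.astype(float).copy()
        u[mask] = image[~mask].mean()   # crude initial guess inside the hole
        for _ in range(n_iters):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            change = np.abs(avg[mask] - u[mask]).max()
            u[mask] = avg[mask]         # update only the masked pixels
            if change < tol:
                break
        return u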
72

Μέθοδοι βελτίωσης της χωρικής ανάλυσης ψηφιακής εικόνας / Methods for improving the spatial resolution of digital images

Παναγιωτοπούλου, Αντιγόνη, 12 April 2010
Coping with the limited spatial resolution of images, which is caused by the physical limitations of image sensors, is the objective of this thesis. Initially, an effort is made to model the scanner function with simple models when generating a copy of a document. For simulating the scanner function, the proposed model should be preferred over the Gaussian and Cauchy models found in the literature, as it is equivalent in performance, simpler to implement, and does not depend on particular scanner characteristics. Afterwards, new methods for improving the spatial resolution of images are formulated.
A nonuniform interpolation method for Super-Resolution (SR) image reconstruction is proposed. Experiments show that the proposed method, which employs Kriging interpolation, outperforms the conventional technique that creates the high-resolution grid by weighted nearest-neighbor interpolation. Also, three new methods for stochastic regularized SR image reconstruction are presented: the Tukey error norm, the Lorentzian error norm, and the Huber error norm, each combined with Bilateral Total Variation (BTV) regularization. An additional novelty is the direct comparison of the Tukey, Lorentzian and Huber estimators in SR image reconstruction, and thus in rejecting outliers. The performance of the proposed methods proves superior to that of a regularized SR technique from the literature, and the experimental results verify robust-statistics theory. Moreover, a novel study considers the effect of each of the data-fidelity and regularization terms on the SR reconstruction result; the conclusions reached help select an effective SR reconstruction method, among several candidates, for a given low-resolution sequence of frames. Finally, an image interpolation method employing a neural network is proposed. The presented training procedure results in the network learning the scanner point spread function. Experimental results show that the proposed technique outperforms the classical bicubic and spline interpolation algorithms. The method is also novel in treating, for the first time, the order in which the training data are presented to the neural network input.
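For intuition about the three robust estimators compared here, the sketch below is an illustrative implementation (not from the thesis) of the Tukey, Huber, and Lorentzian error norms; the scale parameter c is an assumption. In a regularized SR scheme, one of these norms replaces the squared error in the data-fidelity term while BTV supplies the regularization.

    import numpy as np

    def huber(r, c=1.0):
        # Quadratic near zero, linear in the tails: moderate outlier rejection.
        a = np.abs(r)
        return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

    def lorentzian(r, c=1.0):
        # Log-quadratic with a redescending influence function.
        return np.log1p(0.5 * (r / c)**2)

    def tukey(r, c=1.0):
        # Biweight norm: saturates, so gross outliers get zero influence.
        rho = (c**2 / 6) * (1 - (1 - (r / c)**2)**3)
        return np.where(np.abs(r) <= c, rho, c**2 / 6)

    # residuals between the warped SR estimate and one low-resolution frame
    r = np.linspace(-3, 3, 7)
    print(huber(r), lorentzian(r), tukey(r), sep="\n")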
73

Etude de l’imagerie de tenseur de diffusion en utilisant l’acquisition comprimée / Investigation of cardiac diffusion tensor imaging using compressed sensing

Huang, Jianping, 13 December 2015
The investigation of the microscopic fibre structure of the heart provides a new approach to explaining heart disease and investigating effective means of therapy. Diffusion tensor magnetic resonance (DTMR) imaging, or diffusion tensor imaging (DTI), currently provides a unique tool to image the three-dimensional (3D) fibre structures of the heart in vivo. However, DTI is known to suffer from long acquisition times, which greatly limits its practical and clinical use, and classical acquisition and reconstruction methods cannot cope with this problem. The main motivation of this thesis is therefore to investigate fast imaging techniques that reconstruct high-quality images from highly undersampled data. The methodology adopted is based on the recent theory of compressed sensing (CS). More precisely, we address the use of CS for magnetic resonance imaging (MRI) and cardiac DTI.
First, we formulate magnetic resonance (MR) image reconstruction as an optimization problem with data-driven tight frame (TF) and total generalized variation (TGV) constraints in the CS framework: the data-driven TF adaptively learns a set of filters from the highly undersampled data itself to provide a better sparse approximation of the images, while the TGV adaptively regularizes image regions and thus suppresses staircase effects. Second, we propose a new CS method that employs joint sparsity and rank-deficiency priors to reconstruct cardiac DTMR images from highly undersampled k-space data. Then, still within the framework of CS theory, we introduce a low-rank constraint and total variation (TV) regularizations into the CS reconstruction formulation to reconstruct cardiac DTI images from highly undersampled k-space data; two TV regularizations are considered, local TV (i.e. classical TV) and nonlocal TV (NLTV). Finally, we propose two randomly perturbed radial undersampling schemes (golden angle and random angle), together with the optimization under low-rank constraint and TV regularizations, to deal with highly undersampled k-space acquisitions in cardiac DTI, and we compare the proposed CS-based DTI with existing radial undersampling strategies such as uniform angle, randomly perturbed uniform angle, golden angle, and random angle.
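To make the CS reconstruction formulation concrete, here is a minimal, hypothetical sketch with a local TV regularizer: it minimizes ||M F x - y||^2 + lam TV(x) by subgradient descent given a masked 2-D Fourier sampling. The step size, lam, and smoothing eps are assumptions; the thesis's actual solvers (data-driven TF, TGV, low rank, NLTV) are more elaborate.

    import numpy as np

    def cs_tv_recon(y, mask, lam=0.05, step=0.5, n_iters=200, eps=1e-8):
        # y: undersampled k-space (zeros where unsampled); mask: 0/1 pattern.
        x = np.real(np.fft.ifft2(y))                 # zero-filled start
        for _ in range(n_iters):
            # gradient of the data-fidelity term: F^H M (M F x - y)
            g_data = np.real(np.fft.ifft2(mask * np.fft.fft2(x) - y))
            # subgradient of smoothed anisotropic TV
            dx = np.diff(x, axis=0, append=x[-1:, :])
            dy = np.diff(x, axis=1, append=x[:, -1:])
            g_tv = -(np.diff(dx / np.sqrt(dx**2 + eps), axis=0, prepend=0.0)
                     + np.diff(dy / np.sqrt(dy**2 + eps), axis=1, prepend=0.0))
            x = x - step * (g_data + lam * g_tv)
        return x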
74

Implementace rekonstrukčních metod pro čtení čárového kódu / Implementation of restoring method for reading bar code

Kadlčík, Libor, January 2013
A bar code stores information as a series of bars and gaps of varying widths, and can therefore be considered an example of a bilevel (square-wave) signal. Magnetic bar codes are created by applying a slightly ferromagnetic material to a substrate. Sensing is done by a reading oscillator whose frequency is modulated by the presence of this ferromagnetic material; the oscillator signal is then frequency-demodulated. Due to the temperature drift of the reading oscillator, the demodulated signal is accompanied by a DC drift. A method for removing the drift is introduced, and a drift-insensitive detection of the presence of a bar code is described. Reading bar codes is complicated by convolutional distortion, which results from the spatially dispersed sensitivity of the sensor. The effect of convolutional distortion is analogous to low-pass filtering: edges are smoothed and overlapped, making their detection difficult. The characteristics of the convolutional distortion can be summarized in a point-spread function (PSF). For magnetic bar codes, the shape of the PSF can be known in advance, but not its width or DC transfer; methods for estimating these parameters are discussed. The signal must be reconstructed into its original bilevel form before decoding can take place. Variational methods provide an effective way to do this: their core idea is to reformulate reconstruction as an optimization problem of functional minimization, and the functional can be extended by further terms (regularizations) that considerably improve the reconstruction results. The principle of variational methods is shown, including examples of various regularizations. All algorithms and methods (including the frequency demodulation of the signal from the reading oscillator) are digital and implemented as a program for a microcontroller from the PIC32 family, whose high computing power allows even blind deconvolution (where the real PSF must also be found) to finish in a few seconds. The microcontroller is part of a magnetic bar-code reader whose hardware allows the read information to be transferred to a personal computer via the PS/2 interface or USB (by emulating key presses on a virtual keyboard), or shown on a display.
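As a toy of the variational reconstruction just described, the sketch below deconvolves a 1-D bilevel signal by minimizing ||h*u - f||^2 + lam*TV(u) with subgradient descent; TV regularization favours the piecewise-constant (bar/gap) structure. The PSF shape, lam, and step size are assumptions, not the thesis's parameters.

    import numpy as np

    def tv_deconvolve_1d(f, h, lam=0.1, step=0.2, n_iters=500, eps=1e-8):
        # Variational reconstruction of a 1-D bar-code signal.
        u = f.copy()
        h_flip = h[::-1]                       # adjoint of convolution
        for _ in range(n_iters):
            resid = np.convolve(u, h, mode="same") - f
            g_data = np.convolve(resid, h_flip, mode="same")
            du = np.diff(u, append=u[-1])      # forward difference
            g_tv = -np.diff(du / np.sqrt(du**2 + eps), prepend=0.0)
            u -= step * (g_data + lam * g_tv)
        return u

    # toy bilevel signal blurred by a Gaussian-like PSF plus noise
    u_true = np.repeat([0., 1, 0, 1, 1, 0], 20)
    psf = np.exp(-np.linspace(-2, 2, 9)**2); psf /= psf.sum()
    f = np.convolve(u_true, psf, mode="same") + 0.01 * np.random.randn(u_true.size)
    u_rec = tv_deconvolve_1d(f, psf)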
75

Nové typy a principy optimalizace digitálního zpracování obrazů v EIT / New Optimization Algorithms for a Digital Image Reconstruction in EIT

Kříž, Tomáš, January 2016
This doctoral thesis proposes a new algorithm for the reconstruction of impedance images in monitored objects. The algorithm eliminates the spatial-resolution problems present in existing reconstruction methods and exploits partial knowledge of the monitored object's configuration and material composition. The novel method is designed to recognize significant regions of interest, such as material defects or, in biological images, blood clots and tumors. The reconstruction process comprises two phases: the first is focused on industry-related images, with the aim of detecting defects in conductive materials, while the second concentrates on biomedical applications. The thesis also describes the numerical model used to test the algorithm. The testing procedure centred on the resulting impedivity value, the influence of the regularization parameter, the initial impedivity of the numerical model, and the effect of noise on the voltage electrodes upon the overall reconstruction results. Another issue analyzed is the possibility of reconstructing impedance images from components of the magnetic flux density measured outside the investigated object, the magnetic field being generated by a current passing through the object. The created algorithm is modeled on the proposed algorithm for EIT-based reconstruction of impedance images from voltage, and was tested for stability, the influence of the regularization parameter, and the initial conductivity. Finally, the thesis describes the methodology for measuring the magnetic field via NMR and for processing the obtained data.
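EIT reconstruction of this kind is commonly posed as regularized least squares; the following is a schematic single Gauss-Newton step with Tikhonov regularization. The forward operator, Jacobian routine, and alpha are placeholders (assumptions), not the thesis's implementation.

    import numpy as np

    def gauss_newton_step(sigma, forward, jacobian, v_meas, alpha):
        # One Tikhonov-regularized Gauss-Newton update for EIT:
        # minimize ||F(sigma) - v_meas||^2 + alpha * ||d_sigma||^2.
        # `forward` maps conductivity/impedivity to electrode voltages;
        # `jacobian` returns the sensitivity matrix at the current estimate.
        J = jacobian(sigma)                      # shape (n_meas, n_elems)
        r = v_meas - forward(sigma)              # data residual
        lhs = J.T @ J + alpha * np.eye(J.shape[1])
        d_sigma = np.linalg.solve(lhs, J.T @ r)  # damped normal equations
        return sigma + d_sigma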
76

Theoretical and Numerical Analysis of Super-Resolution Without Grid / Analyse numérique et théorique de la super-résolution sans grille

Denoyelle, Quentin, 09 July 2018
This thesis studies the noisy sparse-spikes super-resolution problem for positive measures using the BLASSO, an infinite-dimensional convex optimization problem generalizing the LASSO to measures. First, we show that the support stability of the BLASSO for N clustered spikes is governed by an object called the (2N-1)-vanishing-derivatives pre-certificate. When this pre-certificate is non-degenerate, solving the BLASSO leads to exact support recovery of the initial measure, in a low-noise regime whose size is controlled by the minimal separation distance of the spikes. In a second part, we propose the Sliding Frank-Wolfe algorithm, a variant of the Frank-Wolfe algorithm with an added step that moves the amplitudes and positions of the spikes continuously, to solve the BLASSO. We show that, under mild assumptions, it converges in a finite number of iterations. We apply this algorithm to a 3D fluorescence microscopy problem, comparing three models based on the PALM/STORM techniques.
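A toy version of the Sliding Frank-Wolfe loop for 1-D spike deconvolution is sketched below (illustrative only; the Gaussian kernel width, the grid, the Nelder-Mead refinement, and the fixed spike budget are all assumptions): each iteration inserts the grid point most correlated with the residual, then "slides" all amplitudes and positions by a joint local minimization.

    import numpy as np
    from scipy.optimize import minimize

    def kernel(t, pos):
        # Gaussian measurement kernel evaluated at samples t for spike positions pos
        return np.exp(-0.5 * ((t[:, None] - pos[None, :]) / 0.1)**2)

    def sliding_frank_wolfe(y, t, lam=0.01, n_spikes=5):
        # Toy SFW for min 0.5*||A(a, x) - y||^2 + lam*||a||_1 over spike
        # amplitudes a and positions x.
        pos, amp = np.empty(0), np.empty(0)
        for _ in range(n_spikes):
            resid = y - kernel(t, pos) @ amp
            eta = np.abs(kernel(t, t).T @ resid)   # greedy certificate on the grid
            pos = np.append(pos, t[np.argmax(eta)])
            amp = np.append(amp, 0.0)
            k = pos.size
            def energy(z):                          # joint amplitudes + positions
                a, x = z[:k], z[k:]
                r = kernel(t, x) @ a - y
                return 0.5 * r @ r + lam * np.abs(a).sum()
            z = minimize(energy, np.concatenate([amp, pos]),
                         method="Nelder-Mead").x
            amp, pos = z[:k], z[k:]                 # the "sliding" refinement
        return amp, pos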
77

The Eigenvalue Problem of the 1-Laplace Operator: Local Perturbation Results and Investigation of Related Vectorial Questions

Littig, Samuel, 23 January 2015
As a first aspect, the thesis treats existence results for the perturbed eigenvalue problem of the 1-Laplace operator. This is done with the aid of quite general critical-point-theory results, with the genus as topological index. Moreover, we show that the eigenvalues of the perturbed 1-Laplace operator converge to the eigenvalues of the unperturbed 1-Laplace operator as the perturbation goes to zero. As a second aspect, we treat the eigenvalue problems of the vectorial 1-Laplace operator and of the symmetrized 1-Laplace operator, and as a third aspect we consider certain related parabolic problems.
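For orientation, the unperturbed eigenvalue problem of the 1-Laplace operator is commonly written, at least formally, as follows (a standard formulation from the literature, not quoted from the thesis):

\[
  -\operatorname{div}\!\left(\frac{Du}{|Du|}\right) = \lambda\,\operatorname{sgn}(u),
  \qquad
  \lambda_1 = \min_{0 \neq u \in BV(\Omega)} \frac{|Du|(\Omega)}{\|u\|_{L^1(\Omega)}},
\]

so that the first eigenvalue coincides with the Cheeger constant of the domain.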
78

Sparse Processing Methodologies Based on Compressive Sensing for Directions of Arrival Estimation

Hannan, Mohammad Abdul, 29 October 2020
In this dissertation, sparse processing of signals for direction-of-arrival (DoA) estimation is addressed in the framework of Compressive Sensing (CS). In particular, the DoA estimation problem is formulated in the CS paradigm for different types of sources, systems, and applications, and the fundamental conditions related to "sparsity" and "linearity" are carefully examined so that the CS-based methodologies can be applied with confidence. Innovative strategies for various systems and applications are developed, validated numerically, and analyzed extensively under different scenarios, including signal-to-noise ratio (SNR), mutual coupling, and polarization loss. More realistic data from electromagnetic (EM) simulators are also considered in several analyses to validate the potential of the proposed approaches. The performance of the proposed estimators is analyzed in terms of standard root-mean-square error (RMSE) with respect to the different degrees of freedom (DoFs) of the DoA estimation problem, including the number of elements, the number of signals, and the signal properties. The outcomes reported in this thesis suggest that the proposed estimators are computationally efficient (i.e., appropriate for real-time estimation), robust (i.e., appropriate for heterogeneous scenarios), and versatile (i.e., easily adaptable to different systems).
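A minimal sketch of how DoA estimation becomes a sparse recovery problem (illustrative; the half-wavelength uniform linear array model, the ISTA solver, and all parameters are assumptions): discretize the angular domain, build a dictionary of steering vectors, and seek a sparse angular spectrum whose peaks mark the arrival directions.

    import numpy as np

    def doa_cs(y, n_elem, grid_deg, lam=0.1, n_iters=500):
        # Steering-vector dictionary for a half-wavelength-spaced ULA.
        k = np.arange(n_elem)[:, None]
        A = np.exp(1j * np.pi * k * np.sin(np.deg2rad(grid_deg))[None, :])
        step = 1.0 / np.linalg.norm(A, 2)**2       # Lipschitz step size
        x = np.zeros(grid_deg.size, dtype=complex)
        for _ in range(n_iters):
            g = x - step * A.conj().T @ (A @ x - y)
            mag = np.abs(g)
            shrink = np.maximum(mag - step * lam, 0.0)
            x = np.where(mag > 0, g / np.maximum(mag, 1e-12), 0) * shrink
        return np.abs(x)                           # sparse angular spectrum

    # two sources at -20 and 35 degrees, 16-element array
    grid = np.linspace(-90, 90, 361)
    A_true = np.exp(1j * np.pi * np.arange(16)[:, None]
                    * np.sin(np.deg2rad([-20, 35]))[None, :])
    y = A_true @ np.array([1.0, 0.8]) + 0.01 * np.random.randn(16)
    spectrum = doa_cs(y, 16, grid)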
79

Better imaging for landmine detection: an exploration of 3D full-wave inversion for ground-penetrating radar

Watson, Francis Maurice, January 2016
Humanitarian clearance of minefields is most often carried out by hand, conventionally using a metal detector and a probe. Detection is a very slow process, as every piece of detected metal must be treated as if it were a landmine and carefully probed and excavated, even though most such objects are not mines. The process can be safely sped up by using Ground-Penetrating Radar (GPR) to image the subsurface, verifying metal-detection results and safely ignoring any objects that could not possibly be a landmine. In this thesis, we explore the possibility of using Full Wave Inversion (FWI) to improve GPR imaging for landmine detection. Posing the imaging task as FWI means solving the large-scale, non-linear and ill-posed optimisation problem of determining the physical parameters of the subsurface (such as electrical permittivity) that would best reproduce the data. The thesis begins with an overview of the mathematical and implementational aspects of FWI, intended both for mathematicians (perhaps already familiar with other inverse problems) wanting to contribute to the mine-detection problem, and for a wider engineering audience (perhaps already working on GPR or mine detection) interested in the mathematical study of inverse problems and FWI. We present the first numerical 3D FWI results for GPR, and consider only surface measurements from small-scale arrays, as these are suitable for our application. The FWI problem requires an accurate forward model to simulate GPR data, for which we use a hybrid finite-element boundary-integral solver utilising first-order curl-conforming Nédélec (edge) elements. We present a novel 'line search'-type algorithm that prioritises inversion of target parameters in a region of interest (ROI), with the update outside that area defined implicitly as a function of the target parameters. This is particularly applicable to the mine-detection problem, in which we wish to know more about detected metallic objects but are not interested in the surrounding medium; the surrounding area may nonetheless need to be resolved in order to account for the target being obscured and for multiple scattering in a highly cluttered subsurface. We focus particularly on the spatial sensitivity of the inverse problem, using both a singular value decomposition to analyse the Jacobian matrix and an asymptotic expansion involving polarization tensors that describe the perturbation of the electric field due to small objects. The latter allows us to extend the current theory of sensitivity for acoustic FWI, based on the Born approximation, to better understand how polarization plays a role in the 3D electromagnetic inverse problem. Based on this asymptotic approximation, we derive a novel approximation to the diagonals of the Hessian matrix, which can be used to precondition the GPR FWI problem.
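As a generic illustration of the Jacobian-based sensitivity analysis mentioned above (the forward solver and parameterization are assumed placeholders, not the thesis's solver), one can assemble the Jacobian by finite differences and inspect its singular value decomposition:

    import numpy as np

    def sensitivity_svd(forward, m0, delta=1e-6):
        # Build the Jacobian of a forward model by finite differences and
        # return its SVD; small singular values flag parameter directions
        # to which the measured data are barely sensitive.
        d0 = forward(m0)
        J = np.empty((d0.size, m0.size))
        for j in range(m0.size):
            m = m0.copy()
            m[j] += delta
            J[:, j] = (forward(m) - d0) / delta
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        return U, s, Vt

    # usage sketch: rows of Vt paired with tiny entries of s correspond to
    # poorly resolved combinations of subsurface permittivity parameters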
80

Mathematical modelling of image processing problems: theoretical studies and applications to joint registration and segmentation / Modélisation mathématique de problèmes relatifs au traitement d'images : étude théorique et applications aux méthodes conjointes de recalage et de segmentation

Debroux, Noémie, 15 March 2018
In this thesis, we study and jointly address several important image processing problems, including registration, which aims at aligning images through a deformation; image segmentation, whose goal is to find the edges delineating the objects inside an image; and image decomposition, closely related to image denoising, which attempts to partition an image into a smoother version of itself, named the cartoon, and its complementary oscillatory part, called the texture, using both local and nonlocal variational approaches. The close relationships among these problems motivate the introduction of joint models in which each task helps the others, overcoming difficulties inherent in each isolated problem. The first proposed model addresses the topology-preserving segmentation-guided registration problem in a variational framework. A second joint segmentation and registration model is introduced, studied theoretically and numerically, and then tested on various numerical simulations. The last model presented in this work addresses a more specific need expressed by the CEREMA (Centre of analysis and expertise on risks, environment, mobility and planning), namely automatic crack detection on images of bituminous surfaces. Due to the complexity of these images, a joint fine-structure decomposition and segmentation model is proposed to deal with this problem; it is then theoretically and numerically justified and validated on the provided images.
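Schematically, such joint models couple a registration fidelity term, a regularizer on the deformation, and a segmentation-driven term in a single energy. The form below is only an illustrative template (my notation, not the thesis's exact functional):

\[
  E(\varphi) = \int_\Omega \big(T(\varphi(x)) - R(x)\big)^2\,dx
  \;+\; \alpha \int_\Omega W(\nabla\varphi(x))\,dx
  \;+\; \beta\,\mathcal{S}(T \circ \varphi),
\]

where T and R are the template and reference images, W penalizes irregular (or topology-breaking) deformations, and \(\mathcal{S}\) measures how well the deformed template agrees with the segmentation.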
