  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Wavelet based noise removal for ultrasonic non-destructive evaluation

Van Nevel, Alan J., January 1996 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 1996. / Typescript. Vita. Includes bibliographical references (leaves 27-29). Also available on the Internet.
172

Quasi-3D statistical inversion of oceanographic tracer data

Herbei, Radu. January 2006 (has links)
Thesis (Ph. D.)--Florida State University, 2006. / Advisors: Kevin Speer, Martin Wegkamp, Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed Sept. 20, 2006). Document formatted into pages; contains x, 48 pages. Includes bibliographical references.
173

Notes on layer stripping solutions of higher dimensional inverse seismic problems

January 1983 (has links)
Andrew E. Yagle. / Bibliography: leaf 16. / "December, 1983." / Air Force Office of Scientific Research Grant AFOSR-82-0135A
174

Spatial Coherence Enhancing Reconstructions for High Angular Resolution Diffusion MRI

Rügge, Christoph 02 February 2015 (has links)
No description available.
175

Méthodes itératives pour la reconstruction tomographique régularisée / Iterative Methods in regularized tomographic reconstruction

Paleo, Pierre 13 November 2017 (has links)
In recent years, tomographic imaging techniques have diversified into many applications. However, experimental constraints often lead to limited data, for example in fast scans or in medical imaging, where the radiation dose is a primary concern. The data limitation may take the form of a low signal-to-noise ratio, few views, or a missing angular wedge; artefacts, in addition, are detrimental to reconstruction quality. In these contexts, standard techniques show their limitations. In this work, we explore how regularized reconstruction methods can address these challenges. These methods treat reconstruction as an inverse problem whose solution is generally computed by an optimization procedure. Implementing regularized reconstruction involves both designing an appropriate regularization and choosing the best optimization algorithm for the resulting problem. On the modelling side, we consider three types of regularization in a unified mathematical framework, along with their efficient implementation: total variation, wavelets, and dictionary-based reconstruction. On the algorithmic side, we study which state-of-the-art convex optimization algorithms are best suited to the problem and to the target parallel architecture (GPU), and we propose a new optimization algorithm with an increased convergence speed. We then show how the regularized reconstruction models can be extended to account for the usual artefacts, namely ring artefacts and local tomography artefacts; in particular, we propose a novel quasi-exact local tomography reconstruction algorithm.
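As an illustration of the regularized-reconstruction setting described in this abstract (not of the thesis's actual algorithms), here is a minimal sketch of the l1-regularized case solved with plain iterative soft-thresholding (ISTA) on a toy compressed-sensing problem; the problem sizes and the weight `lam` are arbitrary demo choices.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of the data-fidelity term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage step
    return x

# Toy problem: recover a sparse vector from noisy underdetermined measurements.
rng = np.random.default_rng(0)
n, m = 40, 100
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[5, 30, 70]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ista(A, b, lam=0.05)
```

The same proximal-gradient skeleton covers the wavelet and dictionary cases by changing the operator inside the shrinkage step.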
176

Analyse de mélanges à partir de signaux de chromatographie gazeuse / Analysis of gaseous mixture from gas chromatography signal.

Bertholon, Francois 23 September 2016 (has links)
Gas-analysis devices have many applications, notably monitoring the quality of air and of natural or industrial gases, and studying respiratory gases. LETI is working with the company APIX on a new generation of microsystem devices that combine sample preparation, separation of the gaseous components by micro-chromatography devices integrated on silicon, and transduction by NEMS (Nano Electro Mechanical Systems) detectors based on nanocantilevers used as so-called gravimetric detectors. These NEMS sensors consist of vibrating nano-beams whose resonance frequency depends on the mass of material deposited on the beam. The beams are functionalized with a material able to adsorb and then desorb certain targeted components. When a pulse of material passes a NEMS in the chromatographic column, the signal, defined by its instantaneous resonance frequency as a function of time, varies; the peak observed in this signal reflects the peak of material in the column and allows the concentration of the component to be estimated. Starting from the signals measured on all the sensors, the goal of the signal processing is to provide the molecular profile, that is, the concentration of each component, and to estimate the associated higher heating value. The challenge is to combine high sensitivity, so as to detect very small quantities of a component, with efficient separation capabilities, so as to cope with the complexity of the mixture and identify the signatures of the targeted molecules.
The objective of this signal-processing thesis is to formalize these signal-analysis problems and to extend our analysis methodology, based on an inverse-problem approach combined with a hierarchical model of the analysis chain, to gas micro-chromatography devices integrating multiple NEMS sensors. The first target application is monitoring the heating value of natural gas. The main advances concern the decomposition of NEMS signals, multi-sensor fusion, self-calibration, and dynamic temporal tracking. The work includes experimental phases carried out in particular on the gas-analysis bench of the joint APIX-LETI laboratory. / Chromatography is a chemical technique for separating the components of a mixture. This thesis focuses on gaseous mixtures, and in particular on the signal processing of gas chromatography data. Whatever the sensor used to acquire these signals, the result is a succession of peaks, where each peak corresponds to a component present in the mixture. Our aim is therefore to analyse gaseous mixtures from the acquired signals by characterizing each peak. After a bibliographic survey of chromatography, we chose the Giddings and Eyring distribution to describe the peak shape; this distribution gives the probability that a molecule walking randomly through the chromatographic column exits at a given time. We then propose an analytical model of the chromatographic signal as a mixture of such peak shapes; to a first approximation, this model can be treated as a Gaussian mixture model. To process these signals, we studied two broad families of methods, evaluated on both real and simulated data. The first family performs Bayesian estimation of the unknown parameters of the model; the order of the mixture model, which corresponds to the number of components in the gaseous mixture, can itself be included among the unknowns. To estimate these parameters, we use a Gibbs sampler with Markov chain Monte Carlo sampling, or a variational approach. The second family builds a sparse representation of the signal over a dictionary containing a large set of peak shapes; sparsity is then measured by the number of dictionary atoms needed to describe the signal. Finally, we propose a sparse Bayesian method.
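The sparse dictionary representation described in this abstract can be sketched with Gaussian atoms and a simple orthogonal matching pursuit loop (a greedy stand-in, not the thesis's Bayesian machinery; the peak width, positions, amplitudes, and noise level are all invented for the demo):

```python
import numpy as np

def omp(D, y, n_atoms):
    """Greedy orthogonal matching pursuit: select dictionary atoms one by one."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None) # refit on support
        residual = y - D[:, support] @ coef
    return support, coef

# Dictionary of unit-norm Gaussian peak shapes centred on each sample.
t = np.linspace(0.0, 10.0, 500)
D = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.15) ** 2)
D /= np.linalg.norm(D, axis=0)

# Synthetic chromatogram: three peaks plus a little noise.
rng = np.random.default_rng(1)
y = 1.0 * D[:, 100] + 0.6 * D[:, 250] + 0.8 * D[:, 400]
y += 0.005 * rng.standard_normal(t.size)

support, coef = omp(D, y, n_atoms=3)
```

The recovered support gives the retention times and the coefficients the (relative) concentrations; the number of atoms needed is exactly the sparsity measure the abstract refers to.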
177

Advanced beamforming techniques in ultrasound imaging and the associated inverse problems / Techniques avancées de formation de voies en imagerie ultrasonore et problèmes inverses associés

Szasz, Teodora 14 October 2016 (has links)
Ultrasound (US) imaging allows non-invasive, high-frame-rate medical examinations at moderate cost. Cardiac, abdominal, fetal, and breast imaging are some of the applications where it is extensively used as a diagnostic tool. In a classical US scanning process, short acoustic pulses are transmitted through the region of interest of the human body, and the backscattered echo signals are then beamformed to create radiofrequency (RF) lines. Beamforming (BF) plays a key role in US image formation, since it influences the resolution and contrast of the final image. The objective of this thesis is to model BF as an inverse problem relating the raw channel data to the signals to be recovered. The proposed BF framework improves the contrast and spatial resolution of US images compared with existing BF methods. We first review the most common BF techniques, from the standard delay-and-sum method to well-known adaptive techniques such as minimum-variance BF. We then investigate the use of sparse priors to build original two-dimensional beamforming methods; the proposed approaches detect the strong reflectors of the scanned medium based on the well-known Bayesian information criterion from statistical modelling. Finally, we develop a new way of addressing BF in US imaging, formulating it as a linear inverse problem relating the reflected echoes to the signal to be recovered. The main interest of our approach is its flexibility in the choice of statistical assumptions on the signal to be beamformed and its robustness to a reduced number of pulse emissions. We conclude by investigating the use of the non-Gaussianity of the RF signals in the BF process, modelling them with alpha-stable statistics.
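The delay-and-sum method that the review starts from can be sketched in a few lines. The toy below assumes an idealized point target, plane-wave transmit, and unit pulses; the array geometry and constants are invented for the demo, not taken from the thesis.

```python
import numpy as np

c = 1540.0                            # speed of sound in tissue, m/s
fs = 40e6                             # sampling frequency, Hz
elems = np.arange(-16, 16) * 0.3e-3   # 32-element linear array, 0.3 mm pitch
target = (0.0, 20e-3)                 # point scatterer at 20 mm depth
n_samp = 3000

# Simulated raw channel data: one unit "pulse" per element at the round-trip
# delay (plane-wave transmit to depth z, then echo back to each element).
rf = np.zeros((elems.size, n_samp))
for i, xe in enumerate(elems):
    d = target[1] + np.hypot(target[0] - xe, target[1])
    rf[i, int(round(d / c * fs))] = 1.0

def das(x, z):
    """Delay-and-sum: coherently sum the channels for the focal point (x, z)."""
    total = 0.0
    for i, xe in enumerate(elems):
        idx = int(round((z + np.hypot(x - xe, z)) / c * fs))
        if idx < n_samp:
            total += rf[i, idx]
    return total

on_target = das(0.0, 20e-3)     # channel delays align coherently here
off_target = das(5e-3, 20e-3)   # delays mismatch, so the sum stays small
```

The inverse-problem view proposed in the thesis replaces this fixed delay geometry with a linear model from echoes to the beamformed signal, but the forward physics being inverted is the same round-trip delay structure.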
178

Recent Techniques for Regularization in Partial Differential Equations and Imaging

January 2018 (has links)
abstract: Inverse problems model real world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain. This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges. Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems. / Dissertation/Thesis / Doctoral Dissertation Mathematics 2018
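The l1 regularization in (ii) can be illustrated on a much smaller problem than the dissertation's. The sketch below, which is not the dissertation's method, penalizes first differences (the lowest-order analogue of the Polynomial Annihilation operator) with an off-the-shelf ADMM loop; the signal, noise level, and `lam` are invented for the demo.

```python
import numpy as np

def edge_l1_denoise(b, lam, rho=1.0, n_iter=300):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||Dx||_1, with D = first differences.

    Penalizing ||Dx||_1 promotes sparsity in the edge domain, so the
    recovered signal is approximately piecewise constant.
    """
    n = b.size
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n difference operator
    M = np.linalg.inv(np.eye(n) + rho * D.T @ D)  # small n: direct solve is fine
    x = b.copy()
    z = D @ x
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = M @ (b + rho * D.T @ (z - u))                        # quadratic subproblem
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u += D @ x - z                                           # dual update
    return x

# Noisy piecewise-constant signal: edges at samples 40 and 80.
rng = np.random.default_rng(2)
x_true = np.concatenate([np.zeros(40), np.ones(40), -0.5 * np.ones(40)])
b = x_true + 0.1 * rng.standard_normal(x_true.size)
x_hat = edge_l1_denoise(b, lam=0.5)
```

Higher-order Polynomial Annihilation operators would replace `D` here, making piecewise-polynomial rather than piecewise-constant functions the sparse-edge model.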
179

Numerical study on some inverse problems and optimal control problems

Tian, Wenyi 31 August 2015 (has links)
In this thesis, we focus on the numerical study of some inverse problems and optimal control problems. In the first part, we consider linear inverse problems with discontinuous or piecewise-constant solutions. We use the total variation to regularize these inverse problems and the finite element technique to discretize the regularized problems. The discretized problems are treated from the saddle-point perspective, and some primal-dual numerical schemes are proposed. We investigate the convergence of these primal-dual schemes in depth, establishing their global convergence and estimating their worst-case convergence rates measured by iteration complexity. We test the schemes on several experiments and verify their efficiency numerically. In the second part, we consider finite difference and finite element discretizations of an optimal control problem governed by a time-fractional diffusion equation. A priori error estimates for the discretized model are analyzed, and a projection gradient method is applied to iteratively solve the fully discretized surrogate. Numerical experiments are conducted to verify the efficiency of the proposed method. Overall, the thesis has been mainly inspired by recent advances in the optimization community, especially in operator splitting methods for convex programming, and it can be regarded as a combination of contemporary optimization techniques with some relatively mature inverse and control problems. Keywords: total variation minimization, linear inverse problem, saddle-point problem, finite element method, primal-dual method, convergence rate, optimal control problem, time-fractional diffusion equation, projection gradient method.
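The primal-dual treatment of TV-regularized problems mentioned in the first part can be illustrated with a standard Chambolle-Pock-type iteration for 1-D TV denoising (a generic scheme under demo assumptions, not the thesis's exact algorithms or finite-element setting):

```python
import numpy as np

def primal_dual_tv(b, lam, n_iter=400):
    """Chambolle-Pock-type primal-dual iteration for the saddle-point form
    min_x max_{|p| <= lam} <Dx, p> + 0.5*||x - b||^2,
    equivalent to min_x 0.5*||x - b||^2 + lam*||Dx||_1 (1-D total variation)."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)      # first-difference operator
    Lnorm = np.linalg.norm(D, 2)
    tau = sigma = 0.99 / Lnorm          # step sizes with tau*sigma*||D||^2 < 1
    x = b.copy()
    x_bar = x.copy()
    p = np.zeros(n - 1)
    for _ in range(n_iter):
        p = np.clip(p + sigma * (D @ x_bar), -lam, lam)    # dual ascent + projection
        x_old = x
        x = (x - tau * (D.T @ p) + tau * b) / (1.0 + tau)  # primal prox step
        x_bar = 2.0 * x - x_old                            # extrapolation
    return x

# Noisy piecewise-constant test signal.
rng = np.random.default_rng(4)
x_true = np.repeat([0.0, 2.0, 0.5], 50)
b = x_true + 0.15 * rng.standard_normal(x_true.size)
x_hat = primal_dual_tv(b, lam=0.6)
```

The dual variable `p` lives in the constraint set of the saddle-point problem, which is exactly the structure the worst-case convergence-rate analysis in the thesis is concerned with.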
180

Contribuição ao desenvolvimento de técnicas de visualização térmica para monitoração de processos envolvendo fluidos multifásicos / Contribution to the development of techniques of thermal visualization for monitoring of processes involving fluid multiphases

Gisleine Pereira de Campos 22 October 2004 (has links)
Inverse thermal reconstruction techniques are widely used in applications such as determining the thermal properties of new materials and controlling heat generation and temperature in manufacturing processes. Despite this broad applicability, the inverse problem is intrinsically ill-conditioned and has been the subject of work by several researchers. Solving a three-dimensional inverse thermal problem is significantly complex and therefore requires a formulation that does not rely on unrealistic experimental conditions, such as two-dimensional confinement or steadiness of the thermal field with respect to changes in internal parameters. The approach adopted here is a variational formulation of the quadratic error for reconstructing the internal heat-conduction distribution and the wall convection coefficient in a three-dimensional problem. Within this framework, the ill-conditioned nature of the problem manifests itself on the optimization surface as problematic topologies: multiple local minima, saddle points, and valleys and plateaux around the solution. To make the chosen approach feasible, a numerical model was written based on a finite-difference discretization of the governing differential equation and boundary conditions, and an error functional was defined by comparing experimental and numerical temperature measurements. The objective was to run numerical simulations in order to map the corresponding optimization surface, identify the associated problematic structures or pathologies, and thereby reconstruct the convection coefficient h.
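The error-surface mapping described in this abstract can be illustrated with a much simpler forward model than the thesis's 3-D finite-difference code. A sketch assuming a lumped-capacitance cooling law (all physical constants invented for the demo): simulate "measurements" with a true h, map the quadratic error functional E(h) over a grid, and take the minimizer.

```python
import numpy as np

def simulate(h, t, T0=80.0, T_inf=20.0, area=0.01, mass=0.1, c_p=500.0):
    """Lumped-capacitance cooling curve for a convection coefficient h."""
    return T_inf + (T0 - T_inf) * np.exp(-h * area / (mass * c_p) * t)

t = np.linspace(0.0, 600.0, 61)    # 10 minutes of samples, one every 10 s
h_true = 25.0                       # the "unknown" convection coefficient
rng = np.random.default_rng(3)
T_meas = simulate(h_true, t) + 0.2 * rng.standard_normal(t.size)  # noisy data

# Map the quadratic error functional E(h) over a grid of candidate values,
# then take the minimizer -- a direct look at the optimization surface.
h_grid = np.linspace(5.0, 60.0, 551)
E = np.array([np.sum((simulate(h, t) - T_meas) ** 2) for h in h_grid])
h_hat = float(h_grid[np.argmin(E)])
```

In this well-behaved 1-D case E(h) has a single clean minimum; the pathologies the thesis maps (local minima, saddle points, plateaux) appear when the forward model is a full 3-D field and several parameters are reconstructed jointly.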
