  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Inversion for textured images : unsupervised myopic deconvolution, model selection, deconvolution-segmentation / Inversion pour image texturée : déconvolution myope non supervisée, choix de modèles, déconvolution-segmentation

Văcar, Cornelia Paula 19 September 2014 (has links)
Ce travail est dédié à la résolution de plusieurs problèmes de grand intérêt en traitement d'images : segmentation, choix de modèle et estimation de paramètres, pour le cas spécifique d'images texturées indirectement observées (convoluées et bruitées). Dans ce contexte, les contributions de cette thèse portent sur trois plans différents : modèle, méthode et algorithmique. Du point de vue de la modélisation de la texture, un nouveau modèle non-gaussien est proposé. Ce modèle est défini dans le domaine de Fourier et consiste en un mélange de Gaussiennes avec une Densité Spectrale de Puissance paramétrique. Du point de vue méthodologique, la contribution est triple : trois méthodes Bayésiennes pour résoudre de manière optimale et non-supervisée des problèmes inverses en imagerie dans le contexte d'images texturées indirectement observées, problèmes non abordés dans la littérature jusqu'à présent. Plus spécifiquement : 1. la première méthode réalise la déconvolution myope non-supervisée et l'estimation des paramètres de la texture ; 2. la deuxième méthode est dédiée à la déconvolution non-supervisée, au choix de modèle et à l'estimation des paramètres de la texture et, finalement, 3. la troisième méthode déconvolue et segmente une image composée de plusieurs régions texturées, en estimant en même temps les hyperparamètres (niveau du signal et niveau du bruit) et les paramètres de chaque texture. La contribution sur le plan algorithmique est représentée par une nouvelle version rapide de l'algorithme Metropolis-Hastings. Cet algorithme est basé sur une loi de proposition directionnelle contenant le terme de la « direction de Newton ». Ce terme permet une exploration rapide et efficace de l'espace des paramètres et, de ce fait, accélère la convergence. / This thesis addresses a series of inverse problems of major importance in the field of image processing (image segmentation, model choice, parameter estimation, deconvolution) in the context of textured images.
In all of the aforementioned problems the observations are indirect, i.e., the textured images are affected by a blur and by noise. The contributions of this work belong to three main classes: modeling, methodological and algorithmic. From the modeling standpoint, the contribution consists in the development of a new non-Gaussian model for textures. The Fourier coefficients of the textured images are modeled by a Scale Mixture of Gaussians Random Field. The Power Spectral Density of the texture has a parametric form, driven by a set of parameters that encode the texture characteristics. The methodological contribution is threefold and consists in solving three image processing problems that have not been tackled so far in the context of indirect observations of textured images. All the proposed methods are Bayesian and are based on exploiting the information encoded in the a posteriori law. The first proposed method is devoted to the myopic deconvolution of a textured image and the estimation of its parameters. The second method achieves joint model selection and model parameter estimation from an indirect observation of a textured image. Finally, the third method addresses the problem of joint deconvolution and segmentation of an image composed of several textured regions, while estimating at the same time the parameters of each constituent texture. Last, but not least, the algorithmic contribution is represented by the development of a new efficient version of the Metropolis-Hastings algorithm, with a directional component of the proposal function based on the "Newton direction" and the Fisher information matrix. This particular directional component allows for an efficient exploration of the parameter space and, consequently, increases the convergence speed of the algorithm. To summarize, this work presents a series of methods to solve three image processing problems in the context of blurry and noisy textured images.
Moreover, we present two connected contributions, one regarding the texture models and one meant to enhance the performance of the samplers employed in all three methods.
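The fast Metropolis-Hastings variant described above centers its proposal on a Newton-type step of the log-posterior. The following is a minimal one-dimensional sketch of that idea only; the target density, the proposal width sigma, and the omission of the Fisher-matrix term are my assumptions, not the thesis's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (invented) un-normalized, non-Gaussian log-density.
def log_p(x):
    return -0.5 * (x - 2.0) ** 2 - 0.05 * x ** 4

def grad(x):   # d/dx log_p
    return -(x - 2.0) - 0.2 * x ** 3

def hess(x):   # d2/dx2 log_p (always negative here)
    return -1.0 - 0.6 * x ** 2

def newton_mean(x):
    # Proposal centered on a Newton step of the log-density.
    return x - grad(x) / hess(x)

def mh_newton(n_iter=5000, sigma=0.5, x0=0.0):
    x = x0
    samples = []
    for _ in range(n_iter):
        m_fwd = newton_mean(x)
        y = m_fwd + sigma * rng.standard_normal()
        m_bwd = newton_mean(y)
        # Hastings correction for the state-dependent (asymmetric) proposal.
        log_q_fwd = -0.5 * ((y - m_fwd) / sigma) ** 2
        log_q_bwd = -0.5 * ((x - m_bwd) / sigma) ** 2
        log_alpha = log_p(y) - log_p(x) + log_q_bwd - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples.append(x)
    return np.array(samples)

s = mh_newton()   # the chain concentrates near the mode of log_p
```

Because the proposal mean jumps toward the mode regardless of the current state, the chain needs far fewer iterations to reach the high-probability region than a plain random-walk proposal would.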
132

Caractérisation large bande du comportement dynamique linéaire des structures hétérogènes viscoélastiques anisotropes : application à la table d'harmonie du piano / Wide-band characterization of the heterogeneous viscoelastic and anisotropic dynamical behavior of structures : application to the piano soundboard

Margerit, Pierre 17 December 2018 (has links)
Le présent travail, réalisé dans le cadre du projet ANR MAESSTRO, concerne le remplacement des tables d’harmonie de piano traditionnellement constituées d’épicéa par des structures composites stratifiées. Cette démarche suppose une connaissance fine des matériaux à remplacer et des matériaux de remplacement. La contribution de la thèse consiste donc en le développement d’outils de caractérisation du comportement dynamique de structures viscoélastiques anisotropes hétérogènes sur une large bande de fréquence. Dans une première partie, la propagation des ondes planes dans ces structures est étudiée d’un point de vue théorique. Contrairement à une approche modale classique, les conditions aux limites et chargements sont écartés du problème. Les surfaces de dispersion obtenues contiennent la signature de l’anisotropie, de l’hétérogénéité des propriétés mécaniques ou encore du comportement dissipatif de la structure. La deuxième partie est dédiée au développement d’un moyen de mesure plein-champ robotisé. Celui-ci permet la mesure du champ de vitesse tridimensionnel instantané d’une structure soumise à un chargement dynamique répétable. La définition de l’expérience est intégrée dans un environnement CAO, permettant la prise en compte des problématiques liées à l’utilisation d’un bras robot, ainsi que l’automatisation complète de la mesure. La troisième partie est consacrée à la formulation de procédures d’identification basées sur les mesures obtenues. Les paramètres d’un modèle réduit de la mesure sont identifiés par le biais d’une méthode ESPRIT originale, intégrant des développements spécifiques aux mesures plein-champ. Ces paramètres sont ensuite utilisés pour exprimer un problème aux valeurs propres inverse permettant l’identification des propriétés de la structure mesurée. La démarche est mise en œuvre dans le cadre de l’analyse modale (régime transitoire) et de l’analyse en vecteurs d’onde proposée (régime permanent).
Des validations expérimentales sur des poutres homogènes et plaques anisotropes sont présentées. Le manuscrit se conclut par l’application des méthodes proposées à l’identification des propriétés matériau d’une table d’harmonie de piano à queue Stephen Paulello Technologies SP190// / The present work, as part of the MAESSTRO ANR project, is motivated by the replacement of wood by composite material in the design of the piano soundboard. The main focus is on the characterization of the mechanical properties of both replaced and replacement materials in a wide frequency range, taking into account anisotropic, heterogeneous and viscoelastic behavior. First, the wave propagation in such structures is investigated; boundary conditions and loads are discarded to focus on the mechanisms responsible for the energy transmission in the media. The footprint of the complex behavior of the studied structures is represented and interpreted via the dispersion surfaces. Second, a robotized setup is proposed, allowing for the measurement of the full-field instantaneous 3D velocity along the surface of structures submitted to a repeated dynamic load. Third, identification methods using this experimental data are proposed. Based on the parameters of a reduced signal model of the measurement identified with an original ESPRIT method, inverse eigenvalue problems are formulated. Both transient and steady regimes are investigated, respectively through modal analysis and the proposed wavevector analysis. The proposed methods are validated through applications on homogeneous beams and anisotropic plates. Finally, the overall proposed procedure is applied for the identification of the material properties of the soundboard of the Stephen Paulello Technologies SP190// grand piano.
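The identification pipeline above rests on an ESPRIT-type fit of a reduced signal model (sums of damped exponentials), exploiting the shift-invariance of the signal subspace. A minimal, noiseless 1-D sketch of that idea follows; the signal parameters are invented, and the thesis's full-field extensions are not shown:

```python
import numpy as np

def esprit_poles(x, order):
    # Stack shifted windows of the signal into a Hankel-structured matrix,
    # whose column space is the signal subspace.
    N = len(x)
    m = N // 2
    H = np.array([x[i:i + m] for i in range(N - m + 1)])
    U, _, _ = np.linalg.svd(H.T, full_matrices=False)
    Us = U[:, :order]
    # Shift-invariance: the poles are the eigenvalues of
    # pinv(Us[:-1]) @ Us[1:].
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    return np.linalg.eigvals(Phi)

# One damped complex exponential with (invented) pole z = exp(-0.05 + 0.8j).
n = np.arange(60)
z_true = np.exp(-0.05 + 0.8j)
x = 2.0 * z_true ** n
pole = esprit_poles(x, order=1)[0]   # recovers z_true up to numerical error
```

The damping factor and angular frequency of the fitted mode are then read off as `-log|pole|` and `angle(pole)`, which is what makes such a reduced model usable in a subsequent inverse eigenvalue problem.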
133

Estimação dinâmica em tomografia por impedância elétrica com modelos adaptativos. / Dynamic estimation in electrical impedance tomography with adaptive models.

Pellegrini, Sergio de Paula 21 March 2019 (has links)
Este trabalho investigou o uso de tomografia por impedância elétrica (TIE) na discriminação de fases em sistemas bifásicos água-ar. A TIE é uma técnica não-intrusiva em que são estimados parâmetros de condutividade elétrica de um sistema de interesse a partir de correntes elétricas impostas e potenciais elétricos medidos na fronteira desse meio. Esta técnica se traduz em um problema desafiador, por ser inverso, não-linear e mal-posto. Adicionalmente, na aplicação em análise, a dinâmica do sistema é rápida a ponto de influir nas estimativas procuradas. Foi sistematizada uma abordagem para integrar informações de medições a de outras fontes, como um regularizador generalizado de Tikhonov (filtro gaussiano), parametrização de estado e modelos de evolução, construindo um modelo adaptativo de estimação. Tal combinação de métodos é inédita na literatura. Parametrização do estado (vetor de condutividades do sistema de interesse, após discretização espacial) em condutividade logarítmica foi implementada para assegurar a obtenção de valores condizentes com a física, i.e., as estimativas em condutividade são mantidas estritamente positivas, com benefícios adicionais de aumento da região de convergência monotônica e melhoria na uniformidade da taxa de convergência das estimativas. O estudo de um sistema numérico evidenciou que a parametrização do estado permitiu o aumento do fator de sub-relaxação no método de Gauss-Newton, de 4% para 15%, o que torna o algoritmo mais rápido. Dois modelos de evolução para escoamentos foram propostos e, comparativamente com o modelo de passeio aleatório, proporcionaram convergência mais rápida, melhor distinção das fases e melhoria do grau de observabilidade do problema de TIE. Esses modelos descrevem uma velocidade representativa para o escoamento, avaliada experimentalmente em 0,47 m/s. Ensaios experimentais estáticos sugerem que os métodos aplicados diferenciam a presença das fases em um duto.
No caso em que a dinâmica é relevante (passagem de bolhas ao longo do duto), o algoritmo desenvolvido permite o devido acompanhamento de não homogeneidades. Portanto, os resultados dessa pesquisa têm o potencial de apoiar a estimação de vazões bifásicas em trabalhos futuros, uma vez que a avaliação da fração de ocupação das fases é um passo crucial para o desenvolvimento de um medidor real de vazão multifásica. / This work investigated the use of electrical impedance tomography (EIT) in phase discrimination in two-phase air-water systems. EIT is a non-intrusive technique in which electric currents are imposed and electric potentials are measured at the boundary of a system. This method is mathematically challenging, as it is non-linear, inverse, and ill-posed. Also, for the application at hand, the system dynamics is fast enough to influence the sought estimates. A systematic approach was created to combine information from measurements and other sources, including a generalized Tikhonov regularization term (Gaussian filter), state parametrization and evolution models. This adaptive estimation approach is a contribution to the literature. State parametrization (vector of conductivities of the system of interest after spatial discretization) in logarithmic conductivity was implemented to ensure that the estimates remain in physical bounds, i.e., only positive values are achieved. Additional benefits are the increase of the region that leads to monotone convergence and a more uniform convergence rate of the estimates. The comparative analysis of a numerical system showed that state parametrization allowed an increase of the under-relaxation factor in the Gauss-Newton method, from 4% to 15%, increasing the algorithm's speed. Two evolution models for flows were proposed and, when compared to the random walk model, provided faster convergence, better phase distinction and an improved degree of observability for the EIT problem.
These models describe a representative velocity for the flow, estimated experimentally as 0.47 m/s. Experimental tests of static setups suggest that the applied methods are able to differentiate the phases in a duct. In the case where the dynamics is relevant (flow of bubbles along the duct), the algorithm developed allows for monitoring inhomogeneities. Therefore, the results of this thesis have the potential to support the estimation of two-phase flow rates in future work, given that evaluating the void fraction is a crucial step toward an online multiphase flow rate meter.
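The benefit of the log-conductivity parametrization reported above (strictly positive estimates, larger admissible under-relaxation factors) can be illustrated on a scalar toy problem. The forward model, starting point, and relaxation factor below are invented for illustration; this is not the EIT forward model:

```python
import numpy as np

# Toy forward model: measurements linear in the conductivity sigma.
A = np.array([1.0, 2.0, 3.0])
sigma_true = 0.8
y = A * sigma_true

def residual(theta):
    # theta = log(sigma): the estimate exp(theta) is positive by construction.
    return y - A * np.exp(theta)

def jacobian(theta):
    # d(residual)/d(theta) = -A * exp(theta), as a column matrix.
    return -(A * np.exp(theta))[:, None]

def gauss_newton(theta0=np.log(5.0), relax=0.15, n_iter=200):
    theta = theta0
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Gauss-Newton step: least-squares solution of J @ step = -r.
        step = np.linalg.lstsq(J, -r, rcond=None)[0][0]
        theta = theta + relax * step   # under-relaxed update
    return np.exp(theta)

sigma_hat = gauss_newton()   # converges to sigma_true, staying positive
```

However far the starting guess, every iterate `exp(theta)` is positive, whereas the same iteration run directly on `sigma` could step into non-physical negative conductivities.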
134

Restoration super-resolution of image sequences : application to TV archive documents / Restauration super-résolution de séquences d'images : applications aux documents d'archives TV

Abboud, Feriel 15 December 2017 (has links)
Au cours du dernier siècle, le volume de vidéos stockées chez des organismes tels que l'Institut National de l'Audiovisuel a connu un grand accroissement. Ces organismes ont pour mission de préserver et de promouvoir ces contenus, car, au-delà de leur importance culturelle, ces derniers ont une vraie valeur commerciale grâce à leur exploitation par divers médias. Cependant, la qualité visuelle des vidéos est souvent moindre comparée à celles acquises par les récents modèles de caméras. Ainsi, le but de cette thèse est de développer de nouvelles méthodes de restauration de séquences vidéo provenant des archives de télévision française, grâce à de récentes techniques d'optimisation. La plupart des problèmes de restauration peuvent être résolus en les formulant comme des problèmes d'optimisation, qui font intervenir plusieurs fonctions convexes mais non nécessairement différentiables. Pour ce type de problèmes, on a souvent recours à un outil efficace appelé opérateur proximal. Le calcul de l'opérateur proximal d'une fonction se fait de façon explicite quand cette dernière est simple. Par contre, quand elle est plus complexe ou fait intervenir des opérateurs linéaires, le calcul de l'opérateur proximal devient plus compliqué et se fait généralement à l'aide d'algorithmes itératifs. Une première contribution de cette thèse consiste à calculer l'opérateur proximal d'une somme de plusieurs fonctions convexes composées avec des opérateurs linéaires. Nous proposons un nouvel algorithme d'optimisation de type primal-dual, que nous avons nommé Algorithme Explicite-Implicite Dual par Blocs. L'algorithme proposé permet de ne mettre à jour qu'un sous-ensemble de blocs choisi selon une règle déterministe acyclique. Des résultats de convergence ont été établis pour les deux suites primales et duales de notre algorithme. Nous avons appliqué notre algorithme au problème de déconvolution et de désentrelacement de séquences vidéo.
Pour cela, nous avons modélisé notre problème sous la forme d'un problème d'optimisation dont la solution est obtenue à l'aide de l'algorithme explicite-implicite dual par blocs. Dans la deuxième partie de cette thèse, nous nous sommes intéressés au développement d'une version asynchrone de notre algorithme explicite-implicite dual par blocs. Dans cette nouvelle extension, chaque fonction est considérée comme locale et rattachée à une unité de calcul. Ces unités de calcul traitent les fonctions de façon indépendante les unes des autres. Afin d'obtenir une solution de consensus, il est nécessaire d'établir une stratégie de communication efficace. Un point crucial dans le développement d'un tel algorithme est le choix de la fréquence et du volume de données à échanger entre les unités de calcul, dans le but de préserver de bonnes performances d'accélération. Nous avons évalué numériquement notre algorithme distribué sur un problème de débruitage de séquences vidéo. Les images composant la vidéo sont partitionnées de façon équitable, puis chaque processeur exécute une instance de l'algorithme de façon asynchrone et communique avec les processeurs voisins. Finalement, nous nous sommes intéressés au problème de déconvolution aveugle, qui vise à estimer le noyau de convolution et la séquence originale à partir de la séquence dégradée observée. Nous avons proposé une nouvelle méthode basée sur la formulation d'un problème non-convexe, résolu par un algorithme itératif qui alterne entre l'estimation de la séquence originale et l'identification du noyau. Notre méthode a la particularité de pouvoir intégrer divers types de fonctions de régularisation avec des propriétés mathématiques différentes. Nous avons réalisé des simulations sur des séquences synthétiques et réelles, avec différents noyaux de convolution.
La flexibilité de notre approche nous a permis de réaliser des comparaisons entre plusieurs fonctions de régularisation convexes et non-convexes, en termes de qualité d'estimation. / The last century has witnessed an explosion in the amount of video data stored by holders such as the National Audiovisual Institute, whose mission is to preserve and promote the content of French broadcast programs. Beyond the cultural impact of these records, their value is increased by commercial re-exploitation through recent visual media. However, the perceived quality of the old data fails to satisfy the current public demand. The purpose of this thesis is to propose new methods for restoring video sequences supplied from television archive documents, using modern optimization techniques with proven convergence properties. In a large number of restoration issues, the underlying optimization problem is made up of several functions which might be convex and non-necessarily smooth. In such instances, the proximity operator, a fundamental concept in convex analysis, appears as the most appropriate tool. These functions may also involve arbitrary linear operators that need to be inverted in a number of optimization algorithms. In this spirit, we developed a new primal-dual algorithm for computing non-explicit proximity operators based on forward-backward iterations. The proposed algorithm is accelerated thanks to the introduction of a preconditioning strategy and a block-coordinate approach in which at each iteration, only a "block" of data is selected and processed according to a quasi-cyclic rule. This approach is well suited to large-scale problems since it reduces the memory requirements and accelerates the convergence speed, as illustrated by some experiments in deconvolution and deinterlacing of video sequences. Afterwards, a close attention is paid to the study of distributed algorithms on both theoretical and practical viewpoints.
We proposed an asynchronous extension of the dual forward-backward algorithm, that can be efficiently implemented on a multi-core architecture. In our distributed scheme, the primal and dual variables are considered as private and spread over multiple computing units, that operate independently of one another. Nevertheless, communication between these units following a predefined strategy is required in order to ensure the convergence toward a consensus solution. We also address in this thesis the problem of blind video deconvolution, which consists in inferring, from an input degraded video sequence, both the blur filter and a sharp video sequence. Hence, a solution can be reached by resorting to nonconvex optimization methods that estimate alternatively the unknown video and the unknown kernel. In this context, we proposed a new blind deconvolution method that allows us to implement numerous convex and nonconvex regularization strategies, which are widely employed in signal and image processing.
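The proximity operator mentioned above is explicit for simple functions; for the l1 norm it is soft-thresholding, and plugging it into forward-backward iterations gives the classical ISTA scheme, a much-simplified relative of the block dual forward-backward algorithm developed in this thesis. The problem dimensions, data, and regularization weight below are invented:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    # ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1:
    # gradient (forward) step on the smooth term, proximal (backward)
    # step on the nonsmooth term.
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                   # forward step
        x = soft_threshold(x - g / L, lam / L)  # backward step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                    # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = forward_backward(A, y, lam=0.1)         # recovers the sparse support
```

The block-coordinate and dual variants studied in the thesis keep this forward/backward structure but update only subsets of variables per iteration, which is what makes them scale to video-sized problems.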
135

Problèmes inverses et analyse en ondelettes adaptées

Pham Ngoc, Thanh Mai 27 November 2009 (has links) (PDF)
We study two inverse problems, the Hausdorff moment problem and deconvolution on the sphere, as well as a regression problem with random design. The Hausdorff moment problem consists in estimating a probability density from a sequence of noisy moments. We establish an upper bound for our estimator as well as a lower bound for the convergence rate, thereby showing that our estimator converges at the optimal rate over Sobolev-type regularity classes. For the deconvolution problem on the sphere, we propose a new algorithm that combines the traditional SVD method with a thresholding procedure in the spherical needlet basis. We give an upper bound for the Lp loss and carry out a numerical study that shows very promising results. The random-design regression problem is approached from a Bayesian perspective, based on warped wavelets. We consider two prior-model scenarios involving Gaussians with small and large variance, and provide upper bounds for the posterior median estimator. We also carry out a numerical study that reveals good numerical performance.
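The SVD-plus-thresholding principle behind the spherical deconvolution algorithm above can be illustrated on a diagonal sequence-space toy problem (a stand-in for the needlet construction; the decay rates and noise level are invented). Naive inversion amplifies noise at small singular values, while thresholding the inverted coefficients keeps only those estimated reliably:

```python
import numpy as np

rng = np.random.default_rng(3)

# Diagonal (SVD-like) inverse problem: y_k = a_k * x_k + noise,
# with decaying singular values a_k.
K = 200
a = 1.0 / (1.0 + np.arange(K)) ** 1.5     # decaying singular values
x = 1.0 / (1.0 + np.arange(K)) ** 2       # smooth "signal" coefficients
noise = 0.001 * rng.standard_normal(K)
y = a * x + noise

x_naive = y / a                            # naive SVD inversion
t = 0.001 / a                              # per-coefficient noise level after inversion
x_thresh = np.where(np.abs(x_naive) > 2.0 * t, x_naive, 0.0)

err_naive = np.sum((x_naive - x) ** 2)     # dominated by amplified noise
err_thresh = np.sum((x_thresh - x) ** 2)   # strictly smaller here
```

The thresholded estimator trades a small bias on weak coefficients for the removal of the huge variance carried by coefficients with small `a_k`, which is the same bias-variance mechanism exploited by needlet thresholding on the sphere.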
136

Real-Time Optimal Parametric Design of a Simple Infiltration-Evaporation Model Using the Assess-Predict-Optimize (APO) Strategy

Ali, S., Damodaran, Murali, Patera, Anthony T. 01 1900 (has links)
Optimal parametric design of a system must be able to respond quickly to short term needs as well as long term conditions. To this end, we present an Assess-Predict-Optimize (APO) strategy which allows for easy modification of a system’s characteristics and constraints, enabling quick design adaptation. There are three components to the APO strategy: Assess - extract necessary information from given data; Predict - predict future behavior of system; and Optimize - obtain optimal system configuration based on information from the other components. The APO strategy utilizes three key mathematical ingredients to yield real-time results which would certainly conform to given constraints: dimension reduction of the model, a posteriori error estimation, and optimization methods. The resulting formulation resembles a bilevel optimization problem with an inherent nonconvexity in the inner level. Using a simple infiltration-evaporation model to simulate an irrigation system, we demonstrate the APO strategy’s ability to yield real-time optimal results. The linearized model, described by a coercive elliptic partial differential equation, is discretized by the reduced-basis output bounds method. A primal-dual interior point method is then chosen to solve the resulting APO problem. / Singapore-MIT Alliance (SMA)
137

Probabilistic Solution of Inverse Problems

Marroquin, Jose Luis 01 September 1985 (has links)
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal to noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images.
(5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
138

On a class of two-dimensional inverse problems wavefield-based shape detection and localization and material profile reconstruction /

Na, Seong-Won, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
139

FEM based interdisciplinary approaches to optimization of multi-stage metal forming processes

Ji, Meixing, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 130-135).
140

Axicon imaging by scalar diffraction theory

Burvall, Anna January 2004 (has links)
Axicons are optical elements that produce Bessel beams, i.e., long and narrow focal lines along the optical axis. The narrow focus makes them useful in e.g. alignment, harmonic generation, and atom trapping, and they are also used to increase the longitudinal range of applications such as triangulation, light sectioning, and optical coherence tomography. In this thesis, axicons are designed and characterized for different kinds of illumination, using the stationary-phase and the communication-modes methods. The inverse problem of axicon design for partially coherent light is addressed. A design relation, applicable to Schell-model sources, is derived from the Fresnel diffraction integral, simplified by the method of stationary phase. This approach both clarifies the old design method for coherent light, which was derived using energy conservation in ray bundles, and extends it to the domain of partial coherence. The design rule applies to light from such multimode emitters as light-emitting diodes, excimer lasers and some laser diodes, which can be represented as Gaussian Schell-model sources. Characterization of axicons in coherent, oblique illumination is performed using the method of stationary phase. It is shown that in inclined illumination the focal shape changes from the narrow Bessel distribution to a broad asteroid-shaped focus. It is proven that an axicon of elliptical shape will compensate for this deformation. These results, which are all confirmed both numerically and experimentally, open possibilities for using axicons in scanning optical systems to increase resolution and depth range. Axicons are normally manufactured as refractive cones or as circular diffractive gratings. They can also be constructed from ordinary spherical surfaces, using the spherical aberration to create the long focal line. In this dissertation, a simple lens axicon consisting of a cemented doublet is designed, manufactured, and tested. The advantage of the lens axicon is that it is easily manufactured.
The longitudinal resolution of the axicon varies. The method of communication modes, earlier used for analysis of information content for e.g. line or square apertures, is applied to the axicon geometry and yields an expression for the longitudinal resolution. The method, which is based on a bi-orthogonal expansion of the Green function in the Fresnel diffraction integral, also gives the number of degrees of freedom, or the number of information channels available, for the axicon geometry. Keywords: axicons, diffractive optics, coherence, asymptotic methods, communication modes, information content, inverse problems
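The long focal line produced by an axicon can be checked numerically from the on-axis Fresnel diffraction integral; the stationary-phase point r_s = beta*z lies inside the aperture only while z < R/beta, which sets the length of the focal line. All parameters below are illustrative, not taken from the thesis:

```python
import numpy as np

# On-axis Fresnel integral behind an ideal axicon with linear radial phase
# exp(-i*k*beta*r). All parameters are assumptions for illustration.
wavelength = 633e-9              # assumed HeNe illumination
k = 2.0 * np.pi / wavelength
beta = 0.01                      # axicon deflection angle (rad), assumed
R = 2e-3                         # aperture radius, assumed

def on_axis_intensity(z, n=200_000):
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    phase = k * r**2 / (2.0 * z) - k * beta * r
    # Trapezoid-free Riemann sum is accurate here; the integrand vanishes at r=0.
    field = (np.exp(1j * phase) * r).sum() * dr * k / z
    return np.abs(field) ** 2

# Focal line extends over 0 < z < R/beta = 0.2 m: intensity stays high inside
# that range and collapses beyond it.
inside = on_axis_intensity(0.10)
outside = on_axis_intensity(0.35)
```

The sharp drop past z = R/beta is exactly the behavior the stationary-phase design relations in the thesis exploit: shaping the axicon phase redistributes energy along this line.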
