111

Utilisation de l’IRM de diffusion pour la reconstruction de réseaux d’activations cérébrales à partir de données MEG/EEG / Using diffusion MR information to reconstruct networks of brain activations from MEG and EEG measurements

Belaoucha, Brahim 30 May 2017 (has links)
Comprendre comment différentes régions du cerveau interagissent afin d’exécuter une tâche est un défi très complexe. La magnéto- et l’électroencéphalographie (MEEG) sont deux techniques non-invasives d’imagerie fonctionnelle utilisées pour mesurer avec une bonne résolution temporelle l’activité cérébrale. Estimer cette activité à partir des mesures MEEG est un problème mal posé. Il est donc crucial de le régulariser pour obtenir une solution unique. Il a été montré que l’homogénéité structurelle des régions corticales reflète leur homogénéité fonctionnelle. Un des buts principaux de ce travail est d’utiliser cette information structurelle pour définir des a priori permettant de contraindre de manière plus anatomique ce problème inverse de reconstruction de sources. L’imagerie par résonance magnétique de diffusion (IRMd) est, à ce jour, la seule technique non-invasive qui fournisse des informations sur l’organisation structurelle de la matière blanche. Cela justifie son utilisation pour contraindre notre problème inverse. Nous utilisons l’information fournie par l’IRMd de deux manières différentes pour reconstruire les activations du cerveau : (1) via une méthode spatiale qui utilise une parcellisation du cerveau pour contraindre l’activité des sources. Ces parcelles sont obtenues par un algorithme qui permet d’obtenir un ensemble optimal de régions structurellement homogènes pour une mesure de similarité donnée sur tout le cerveau. (2) dans une approche spatio-temporelle qui utilise les connexions anatomiques, calculées à partir des données d’IRMd, pour contraindre la dynamique des sources. Ces méthodes sont appliquées à des données synthétiques et réelles. / Understanding how brain regions interact to perform a given task is very challenging. Electroencephalography (EEG) and magnetoencephalography (MEG) are two non-invasive functional imaging modalities used to record brain activity with high temporal resolution. Estimating brain activity from these measurements is an ill-posed problem; we must therefore set a prior on the sources to obtain a unique solution. Previous studies have shown that the structural homogeneity of brain regions reflects their functional homogeneity. One of the main goals of this work is to use this structural information to define priors that constrain the MEG/EEG source reconstruction problem more anatomically. This structural information is obtained using diffusion magnetic resonance imaging (dMRI), which is, as of today, the only non-invasive imaging modality that provides insight into the structural organization of white matter; this justifies its use to constrain the EEG/MEG inverse problem. In our work, dMRI information is used to reconstruct brain activation in two ways: (1) in a spatial method that uses brain parcels to constrain the sources’ activity. These parcels are obtained by our whole-brain parcellation algorithm, which computes the cortical regions with the most structural homogeneity with respect to a given similarity measure. (2) In a spatio-temporal method that uses the anatomical connections computed from dMRI to constrain the sources’ dynamics. These methods are validated using synthetic and real data.
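The non-uniqueness described above can be illustrated with a minimal sketch: with more sources than sensors, infinitely many source configurations explain the data, and a quadratic (minimum-norm) prior selects a unique one. This is a generic stand-in, not the author's dMRI-informed method; the lead field, measurements, and regularization weight below are made-up illustrative values.

```python
# Minimum-norm source estimate s_hat = G^T (G G^T + lam I)^{-1} m.
# All numbers are illustrative, not from the thesis.

def mat2_inv(a):
    """Invert a 2x2 matrix given as [[p, q], [r, s]]."""
    (p, q), (r, s) = a
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

# Lead field: 2 sensors x 3 sources (ill-posed: more sources than sensors).
G = [[1.0, 0.5, 0.2],
     [0.3, 0.8, 0.6]]
m = [1.0, 0.9]          # sensor measurements
lam = 0.1               # regularization strength (the "prior" weight)

# GGt = G G^T + lam I  (2x2)
GGt = [[sum(G[i][k] * G[j][k] for k in range(3)) + (lam if i == j else 0.0)
        for j in range(2)] for i in range(2)]
w = mat2_inv(GGt)
y = [sum(w[i][j] * m[j] for j in range(2)) for i in range(2)]
# s_hat = G^T y: the unique estimate selected by the quadratic prior
s_hat = [sum(G[i][k] * y[i] for i in range(2)) for k in range(3)]
print([round(v, 3) for v in s_hat])
```

Without the `lam` term the 2x3 system has infinitely many exact solutions; the prior is what makes the answer unique, which is the role the dMRI-derived information plays in the thesis.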
112

Ill-posedness of parameter estimation in jump diffusion processes

Düvelmeyer, Dana, Hofmann, Bernd 25 August 2004 (has links)
In this paper, we consider as an inverse problem the simultaneous estimation of the five parameters of a jump diffusion process from return observations of a price trajectory. We show that ill-posedness phenomena occur in this parameter estimation problem, because the forward operator fails to be injective and small perturbations in the data may lead to large changes in the solution. We illustrate the instability effect by a numerical case study. To overcome the difficulty coming from ill-posedness we use a multi-parameter regularization approach that finds a trade-off between a least-squares approach based on empirical densities and a fitting of semi-invariants. In this context, a fixed point iteration is proposed that provides good results for the example under consideration in the case study.
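The instability the abstract refers to can be reproduced with a toy example that has nothing to do with jump diffusions: a nearly non-injective 2x2 forward operator amplifies a tiny data perturbation by roughly four orders of magnitude.

```python
# Toy illustration of ill-posedness: a nearly rank-deficient forward
# operator, so a small data perturbation causes a large solution change.
# The matrix and data are illustrative, not from the paper.

def solve2(a, b):
    """Solve a 2x2 linear system [[p, q], [r, s]] x = b by Cramer's rule."""
    (p, q), (r, s) = a
    det = p * s - q * r
    return [(b[0] * s - q * b[1]) / det, (p * b[1] - r * b[0]) / det]

A = [[1.0, 1.0],
     [1.0, 1.0001]]      # nearly non-injective forward operator
b = [2.0, 2.0001]        # exact data, whose solution is [1, 1]
b_noisy = [2.0, 2.0002]  # second entry perturbed by 1e-4

x = solve2(A, b)
x_noisy = solve2(A, b_noisy)
print(x, x_noisy)
```

The data change of 1e-4 moves the solution from roughly [1, 1] to roughly [0, 2]: an amplification factor of about 10,000, which is exactly why regularization is needed.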
113

Property Localization for Grain Boundary Diffusivity via Inverse Problem Theory

Kurniawan, Christian 01 December 2018 (has links)
The structure and spatial arrangement of grain boundaries strongly affect the properties of polycrystalline materials, including corrosion resistance, creep, weldability, superconductivity, and diffusivity. However, constructing predictive grain boundary structure-property models is taxing, both experimentally and computationally, due to the high dimensionality of the grain boundary character space. The purpose of this work is to develop an effective method to infer grain boundary structure-property models from measurements of the effective properties of polycrystals by utilizing inverse problem theory. This study presents an idealized case in which structure-property models for grain boundary diffusivity are inferred from a noisy simulation. The method is derived step by step from a general mathematical statement of inverse problem theory, considering diffusivity as the property of interest. The use of a Bayesian probability approach in the inference method makes uncertainty quantification possible. This study demonstrates how uncertainty quantification for the inferred structure-property models is easily performed within the idealized-case framework, using the Metropolis-Hastings algorithm and kernel density estimation. The method is validated by considering structure-property models with one, three, and five degrees of freedom. Two- and three-dimensional simulated polycrystals are used to obtain the simulation data: the two-dimensional polycrystals are generated by a grain growth simulation with a front-tracking algorithm, and the three-dimensional polycrystals are generated using the Neper software, resulting in realistic microstructures.
The structure-property models used in the validation are chosen for qualitative features that reflect trends observed in the literature. The inference method itself uses no knowledge of the underlying structure-property model.
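A minimal Metropolis-Hastings sketch in the spirit of the uncertainty quantification described above, reduced to a single degree of freedom; the Gaussian likelihood, true diffusivity value, noise level, and chain settings are illustrative assumptions, not values from the study.

```python
import math
import random

# Sample the posterior of one "diffusivity" parameter d from noisy
# observations, using a random-walk Metropolis-Hastings chain.
random.seed(0)
d_true, sigma = 2.0, 0.3
data = [d_true + random.gauss(0, sigma) for _ in range(50)]  # synthetic data

def log_post(d):
    # flat prior on d > 0, Gaussian measurement noise
    if d <= 0:
        return -math.inf
    return -sum((y - d) ** 2 for y in data) / (2 * sigma ** 2)

samples, d = [], 1.0
lp = log_post(d)
for _ in range(5000):
    cand = d + random.gauss(0, 0.1)          # symmetric proposal
    lp_cand = log_post(cand)
    # accept with probability min(1, exp(lp_cand - lp))
    if lp_cand >= lp or random.random() < math.exp(lp_cand - lp):
        d, lp = cand, lp_cand
    samples.append(d)

burned = samples[1000:]                      # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))
```

The retained samples approximate the posterior; a kernel density estimate over them (as in the study) would then give the full uncertainty band rather than just the posterior mean.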
114

Regularizing An Ill-Posed Problem with Tikhonov’s Regularization

Singh, Herman January 2022 (has links)
This thesis presents how Tikhonov’s regularization can be used to solve an inverse problem for the Helmholtz equation inside a rectangle, with both Neumann and Dirichlet boundary conditions imposed on the boundary. A linear operator containing a Fourier series is derived from the Helmholtz equation. Using this linear operator, an expression for the inverse operator can be formulated to solve the inverse problem. However, the inverse problem turns out to be ill-posed according to Hadamard’s definition. The regularization used in this thesis to overcome the ill-posedness is Tikhonov’s regularization. To compare the efficiency of this inverse operator with Tikhonov’s regularization, another inverse operator is derived from the Helmholtz equation in the partial frequency domain and is also regularized with Tikhonov’s regularization. Plots and error measurements are given to show how accurate Tikhonov’s regularization is for both inverse operators. The main focus of this thesis is the inverse operator containing the Fourier series. A series of examples is also given to support the definitions, theorems, and proofs presented in this work.
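Tikhonov's regularization replaces the unstable inverse with the solution of the penalized normal equations (AᵀA + λI)x = Aᵀb. A sketch on a small ill-conditioned linear system, not the thesis's Helmholtz operator; the matrix, data, and λ are illustrative assumptions.

```python
# Unregularized vs. Tikhonov-regularized solve of an ill-conditioned system.

def solve2(a, b):
    """Solve a 2x2 linear system [[p, q], [r, s]] x = b by Cramer's rule."""
    (p, q), (r, s) = a
    det = p * s - q * r
    return [(b[0] * s - q * b[1]) / det, (p * b[1] - r * b[0]) / det]

A = [[1.0, 1.0],
     [1.0, 1.0001]]
b = [2.0, 2.0002]        # slightly noisy data; exact data would be [2, 2.0001]

def tikhonov(A, b, lam):
    # penalized normal equations: (A^T A + lam I) x = A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

x_naive = tikhonov(A, b, 0.0)    # unregularized least squares: unstable
x_reg = tikhonov(A, b, 1e-3)     # regularized: close to the stable [1, 1]
print(x_naive, x_reg)
```

The naive solve is thrown far off by the 1e-4 data noise, while the λ-penalized solution stays near [1, 1]: the regularization damps exactly the direction in which the operator is nearly singular.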
115

Complexity penalized methods for structured and unstructured data

Goeva, Aleksandrina 08 November 2017 (has links)
A fundamental goal of statisticians is to make inferences from the sample about characteristics of the underlying population. This is an inverse problem, since we are trying to recover a feature of the input with the availability of observations on an output. Towards this end, we consider complexity penalized methods, because they balance goodness of fit and generalizability of the solution. The data from the underlying population may come in diverse formats - structured or unstructured - such as probability distributions, text tokens, or graph characteristics. Depending on the defining features of the problem, we can choose the appropriate complexity penalized approach, and assess the quality of the estimate produced by it. Favorable characteristics are strong theoretical guarantees of closeness to the true value and interpretability. Our work fits within this framework and spans the areas of simulation optimization, text mining and network inference. The first problem we consider is model calibration under the assumption that given a hypothesized input model, we can use stochastic simulation to obtain its corresponding output observations. We formulate it as a stochastic program by maximizing the entropy of the input distribution subject to moment matching. We then propose an iterative scheme via simulation to approximately solve it. We prove convergence of the proposed algorithm under appropriate conditions and demonstrate the performance via numerical studies. The second problem we consider is summarizing text documents through an inferred set of topics. We propose a frequentist reformulation of a Bayesian regularization scheme. Through our complexity-penalized perspective we lend further insight into the nature of the loss function and the regularization achieved through the priors in the Bayesian formulation. The third problem is concerned with the impact of sampling on the degree distribution of a network.
Under many sampling designs, we have a linear inverse problem characterized by an ill-conditioned matrix. We investigate the theoretical properties of an approximate solution for the degree distribution found by regularizing the solution of the ill-conditioned least squares objective. Particularly, we study the rate at which the penalized solution tends to the true value as a function of network size and sampling rate.
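The entropy-maximization-with-moment-matching idea can be sketched in miniature, with the caveat that the dissertation solves a stochastic program via simulation, whereas this toy solves a deterministic one-moment version: over a finite support, the maximum-entropy distribution with a fixed mean lies in an exponential family, and its multiplier can be found by bisection. Support size and target mean are illustrative assumptions.

```python
import math

# Max-entropy pmf on {0,...,10} subject to a mean constraint: the solution
# has the form p_k proportional to exp(theta * k), and the multiplier theta
# is found by bisection on the monotone map theta -> mean.
support = list(range(11))
target_mean = 3.0

def mean_for(theta):
    w = [math.exp(theta * k) for k in support]
    z = sum(w)
    return sum(k * wk for k, wk in zip(support, w)) / z

lo, hi = -5.0, 5.0
for _ in range(100):               # bisection: mean_for is increasing in theta
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
w = [math.exp(theta * k) for k in support]
z = sum(w)
p = [wk / z for wk in w]           # the max-entropy distribution
print(round(sum(k * pk for k, pk in zip(support, p)), 3))
```

Among all distributions matching the moment constraint, this one adds the least extra structure, which is the sense in which the entropy objective guards against overfitting the hypothesized input model.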
116

Contrôle non destructif du sol et imagerie d'objets enfouis par des systèmes bi- et multi-statiques : de l’expérience à la modélisation / Non-destructive testing of the soil and imaging of buried objects by bi- and multi-static systems : from experience to modeling

Liu, Xiang 13 December 2017 (has links)
Les travaux présentés dans cette thèse portent sur la résolution des problèmes direct et inverse associés à l’étude du radar de sol (GPR). Ils s’inscrivent dans un contexte d’optimisation des performances et d’amélioration de la qualité de l’imagerie. Un état de l’art est réalisé et l’accent est mis sur les méthodes de simulation et les techniques d’imagerie appliquées au GPR. L’étude de l’utilisation de la méthode de Galerkin discontinue (GD) pour la simulation GPR est d’abord réalisée. Des scénarios complets de GPR sont considérés et les simulations GD sont validées par comparaison avec des données obtenues par CST-MWS et des mesures. La suite de l’étude concerne la résolution du problème inverse en utilisant la Linear Sampling Method (LSM) pour l’application GPR. Une étude avec des données synthétiques est d’abord réalisée afin de valider et de tester la fiabilité de la LSM. Ensuite, la LSM est adaptée aux applications GPR en prenant en compte les caractéristiques du rayonnement de l’antenne ainsi que ses paramètres S. Finalement, une étude est effectuée pour prouver la détectabilité de la jonction d’un câble électrique souterrain dans un environnement réel. / The work presented in this thesis deals with the resolution of the direct and inverse problems associated with ground-penetrating radar (GPR), with the objective of optimizing GPR performance and imaging quality. A state of the art of ground radar is presented, focusing on the simulation methods and imaging techniques applied to GPR. The use of the discontinuous Galerkin (DG) method for GPR simulation is studied first: complete GPR scenarios are considered, and the DG simulations are validated by comparison with CST-MWS models of the same scenarios and with measurements. A study of the inverse problem resolution using the Linear Sampling Method (LSM) for the GPR application is then carried out, first with synthetic data to test the reliability of the LSM. The LSM is then adapted to the GPR application by taking into account the antenna radiation characteristics and its S-parameters. Finally, a study is conducted to validate the detectability of an underground electrical cable junction with GPR in a real environment.
117

Constitutive compatibility based identification of spatially varying elastic parameters distributions

Moussawi, Ali 12 1900 (has links)
The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for full characterization of a heterogeneous object, leading to a time-consuming and costly process. The development of non-contact, full-field measurement techniques from which complex kinematic fields can be recorded has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making it possible to uncouple the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions), as demonstrated on 2D numerical examples. First, the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost of processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.
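The full-field identification idea can be reduced to a deliberately trivial 1D sketch, which is not the thesis's constitutive compatibility method: in a bar under a known axial force, the stress is statically determined, so a measured strain profile directly yields a spatially varying modulus. All numerical values are illustrative assumptions.

```python
# 1D identification from a "full-field" strain measurement: since the axial
# stress sigma = N / A is uniform and known, E(x) = sigma / eps(x) recovers
# a spatially varying modulus from a single test.

N, A = 1000.0, 1e-4            # axial force [N], cross-section area [m^2]
sigma = N / A                  # uniform stress along the bar [Pa]

eps = [5e-4, 2.5e-4, 1e-4]     # "measured" strain in three segments
E = [sigma / e for e in eps]   # identified modulus per segment [Pa]
print([f"{Ei:.1e}" for Ei in E])
```

In 2D and 3D the stress field is no longer statically determined, which is precisely the gap the constitutive compatibility method addresses by first identifying a stress field compatible with the observed kinematics.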
118

End-to-end Optics Design for Computational Cameras

Sun, Qilin 10 1900 (has links)
Imaging systems have long been designed in separate steps: experience-driven optical design followed by sophisticated image processing. Such a general-purpose approach has achieved success in the past, but it leaves open the question of the best compromise between optics and post-processing for specific tasks, as well as of minimizing costs. Motivated by this, a series of works is proposed to bring imaging system design into an end-to-end fashion step by step, from joint optics design, point spread function (PSF) optimization, and phase map optimization to a general end-to-end complex lens camera. To demonstrate the joint optics application with image recovery, we apply it to flat lens imaging with a large field of view (LFOV). In applying a super-resolution single-photon avalanche diode (SPAD) camera, the PSF encoded by a diffractive optical element (DOE) is optimized together with the post-processing, which brings the optics design into the end-to-end stage. Expanding to color imaging, where optimizing a PSF realized by a DOE fails to find the best compromise between different wavelengths, snapshot HDR imaging is achieved by optimizing a phase map directly. Finally, to complete the blueprint of end-to-end camera design and break the limits of a simple wave optics model and a single lens surface, we propose a general end-to-end complex lens design framework enabled by a differentiable ray tracing image formation model. All works are demonstrated with prototypes and experiments in the real world. Our frameworks offer competitive alternatives for the design of modern imaging systems and several challenging imaging applications.
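A greatly simplified toy of the end-to-end idea, in no way the dissertation's framework: a scalar "optics" parameter (a 3-tap blur weight standing in for a PSF) and a scalar "processing" parameter (an unsharp-masking gain) are tuned jointly by finite-difference descent, so that the reconstruction error after post-processing, not the optics alone, drives the design. The signal, kernel, and step sizes are illustrative assumptions.

```python
# Joint "optics + processing" optimization on a 1D signal.
signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]

def blur3(sig, w):
    # symmetric 3-tap "PSF": weight w on each neighbor, 1 - 2w on center
    out = []
    for i in range(len(sig)):
        left = sig[i - 1] if i > 0 else 0.0
        right = sig[i + 1] if i < len(sig) - 1 else 0.0
        out.append((1 - 2 * w) * sig[i] + w * (left + right))
    return out

def loss(params):
    w, gain = params
    sensed = blur3(signal, w)                       # "optics" stage
    resharp = [s + gain * (s - b)                   # "processing" stage
               for s, b in zip(sensed, blur3(sensed, w))]
    return sum((r - t) ** 2 for r, t in zip(resharp, signal))

params = [0.2, 0.0]
cur = loss(params)
for _ in range(200):
    eps = 1e-6                                      # finite-difference gradient
    grad = []
    for j in range(2):
        up = params[:]; up[j] += eps
        dn = params[:]; dn[j] -= eps
        grad.append((loss(up) - loss(dn)) / (2 * eps))
    step = 0.1
    while step > 1e-8:                              # backtracking line search
        cand = [p - step * g for p, g in zip(params, grad)]
        c = loss(cand)
        if c < cur:
            params, cur = cand, c
            break
        step /= 2
print(round(cur, 4))
```

The point of the toy is only the coupling: the gradient of the final loss flows through both stages at once, which is the essence of end-to-end camera design.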
119

BAYESIAN METHODS FOR BRIDGING THE CONTINUOUS AND ELECTRODE DATA, AND LAYER STRIPPING IN ELECTRICAL IMPEDANCE TOMOGRAPHY.

Nakkireddy, Sumanth Reddy R. 21 June 2021 (has links)
No description available.
120

Data-driven sparse computational imaging with deep learning

Mdrafi, Robiulhossain 13 May 2022 (has links) (PDF)
Typically, inverse imaging problems deal with the reconstruction of images from sensor measurements, where the sensors can take the form of any imaging modality, such as camera, radar, hyperspectral, or medical imaging systems. In an ideal scenario, we could reconstruct the images by applying an inversion procedure to these sensors’ measurements, but practical applications face several challenges: the measurement acquisition process is heavily corrupted by noise, the forward model is not exactly known, and non-linearities or unknown physics of the data acquisition play a role. Hence, the perfect inverse function needed for immaculate image reconstruction is not exactly known. To this end, in this dissertation, I propose an automatic sensing and reconstruction scheme based on deep learning within the compressive sensing (CS) framework to solve computational imaging problems. Here, I develop a data-driven approach to learn both the measurement matrix and the inverse reconstruction scheme for a given class of signals, such as images. This approach paves the way for end-to-end learning and reconstruction of signals with the aid of cascaded fully connected and multistage convolutional layers with a weighted loss function in an adversarial learning framework. I also extend this analysis to introduce data-driven models that classify directly from compressed measurements through joint reconstruction and classification. I develop a constrained measurement learning framework and demonstrate the higher performance of the proposed approach on typical image reconstruction and hyperspectral image classification tasks. Finally, I propose a single data-driven network that can acquire and reconstruct images at multiple rates of signal acquisition.
In summary, this dissertation proposes novel methods for data-driven measurement acquisition for sparse signal reconstruction and classification, for learning measurements under constraints imposed by the hardware requirements of different applications, and for providing a common data-driven platform for learning measurements to reconstruct signals at multiple rates. This dissertation opens the path to learned sensing systems. Future research can use these proposed data-driven approaches as pivotal building blocks for task-specific smart sensors in several real-world applications.
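A minimal compressed-sensing sketch in the spirit of the reconstruction problem above, using random rather than learned measurements: a 1-sparse signal in R^8 is recovered from 4 projections by matched filtering. The sizes, seed, and values are illustrative assumptions, and this single-atom recovery is a far simpler stand-in for the dissertation's learned networks.

```python
import random

# y = Phi x with fewer measurements (4) than unknowns (8); sparsity makes
# the underdetermined problem recoverable.
random.seed(1)
n, m = 8, 4
Phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]  # measurement matrix

x = [0.0] * n
x[5] = 2.0                                 # 1-sparse ground truth
y = [sum(Phi[i][j] * x[j] for j in range(n)) for i in range(m)]   # measurements

def col(j):
    return [Phi[i][j] for i in range(m)]

def corr(j):
    # normalized correlation of column j with the measurement vector
    c = col(j)
    num = sum(ci * yi for ci, yi in zip(c, y))
    den = sum(ci * ci for ci in c) ** 0.5
    return abs(num) / den

support = max(range(n), key=corr)          # best-matching dictionary atom
c = col(support)
amp = sum(ci * yi for ci, yi in zip(c, y)) / sum(ci * ci for ci in c)
print(support, round(amp, 2))
```

Learning `Phi` from data, as the dissertation proposes, replaces this generic random matrix with one tuned to the signal class and the downstream task.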
