101

Complexity penalized methods for structured and unstructured data

Goeva, Aleksandrina 08 November 2017 (has links)
A fundamental goal of statisticians is to make inferences from the sample about characteristics of the underlying population. This is an inverse problem, since we are trying to recover a feature of the input from observations of an output. Towards this end, we consider complexity penalized methods, because they balance goodness of fit and generalizability of the solution. The data from the underlying population may come in diverse formats - structured or unstructured - such as probability distributions, text tokens, or graph characteristics. Depending on the defining features of the problem, we can choose the appropriate complexity penalized approach and assess the quality of the estimate it produces. Favorable characteristics are strong theoretical guarantees of closeness to the true value and interpretability. Our work fits within this framework and spans the areas of simulation optimization, text mining, and network inference. The first problem we consider is model calibration under the assumption that, given a hypothesized input model, we can use stochastic simulation to obtain its corresponding output observations. We formulate it as a stochastic program by maximizing the entropy of the input distribution subject to moment matching. We then propose an iterative scheme via simulation to approximately solve it. We prove convergence of the proposed algorithm under appropriate conditions and demonstrate its performance via numerical studies. The second problem we consider is summarizing text documents through an inferred set of topics. We propose a frequentist reformulation of a Bayesian regularization scheme. Through our complexity-penalized perspective we lend further insight into the nature of the loss function and the regularization achieved through the priors in the Bayesian formulation. The third problem is concerned with the impact of sampling on the degree distribution of a network. Under many sampling designs, we have a linear inverse problem characterized by an ill-conditioned matrix. We investigate the theoretical properties of an approximate solution for the degree distribution found by regularizing the solution of the ill-conditioned least squares objective. In particular, we study the rate at which the penalized solution tends to the true value as a function of network size and sampling rate.
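As an illustration of the third problem, the following is a minimal numerical sketch (not the estimator analyzed in the thesis) of recovering a degree distribution from a sampled network by penalizing an ill-conditioned least squares problem; the independent-node sampling model, Poisson ground truth, and penalty weight are all hypothetical stand-ins.

```python
import numpy as np
from scipy.special import comb
from scipy.stats import poisson

p, kmax = 0.3, 40                                   # hypothetical sampling rate, max degree
true_pmf = poisson.pmf(np.arange(kmax + 1), mu=8)   # hypothetical true degree distribution

# Binomial thinning matrix: P[j, k] = C(k, j) p^j (1 - p)^(k - j).
# Under independent node sampling, observed = P @ true, and P is ill-conditioned.
j = np.arange(kmax + 1)[:, None]                    # observed degree (rows)
k = np.arange(kmax + 1)[None, :]                    # true degree (columns)
P = comb(k, j) * p**j * (1.0 - p)**np.maximum(k - j, 0)

rng = np.random.default_rng(0)
observed = P @ true_pmf + 1e-4 * rng.standard_normal(kmax + 1)  # noisy observed distribution

# Ridge (Tikhonov) penalty stabilizes the otherwise unstable inversion.
lam = 1e-3
estimate = np.linalg.solve(P.T @ P + lam * np.eye(kmax + 1), P.T @ observed)
print(np.linalg.cond(P), np.abs(estimate - true_pmf).max())
```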
102

Non-destructive testing of the soil and imaging of buried objects by bi- and multi-static systems: from experiment to modeling

Liu, Xiang 13 December 2017 (has links)
The work presented in this thesis deals with the direct and inverse problems associated with ground-penetrating radar (GPR), in a context of optimizing GPR performance and improving imaging quality. A state of the art is presented, focusing on the simulation methods and imaging techniques used for GPR. The use of the discontinuous Galerkin (DG) method for GPR simulation is studied first: complete GPR scenarios are considered, and the DG simulations are validated by comparison with data obtained with CST-MWS and with measurements. The study then addresses the inverse problem, using the Linear Sampling Method (LSM) for the GPR application. A study with synthetic data is first carried out to validate the LSM and test its reliability. The LSM is then adapted to GPR applications by taking into account the radiation characteristics of the antenna and its S-parameters. Finally, a study is carried out to demonstrate the detectability of an underground electrical cable junction with GPR in a real environment.
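As a rough illustration of the LSM step described above, here is a minimal sketch on synthetic data; the Born-approximation point-scatterer model, antenna layout, wavenumber, and regularization choice are illustrative assumptions, not the GPR-specific formulation (antenna radiation, S-parameters) developed in the thesis.

```python
import numpy as np

k = 2 * np.pi / 0.1                                   # wavenumber for a 10 cm wavelength

def green(r1, r2):
    """2-D scalar kernel between point sets (a simplifying assumption)."""
    d = np.linalg.norm(r1 - r2, axis=-1)
    return np.exp(1j * k * d) / np.sqrt(d)

antennas = np.stack([np.linspace(-0.5, 0.5, 32), np.zeros(32)], axis=1)
targets = np.array([[0.1, 0.4], [-0.2, 0.6]])         # hypothetical buried point scatterers

# Multistatic response matrix K[m, n] under the Born approximation.
K = sum(np.outer(green(antennas, t), green(antennas, t)) for t in targets)
U, s, Vh = np.linalg.svd(K)
alpha = 1e-3 * s[0]                                   # Tikhonov parameter (ad hoc choice)

# LSM indicator: solve the regularized equation K g_z = phi_z for each pixel z;
# 1/||g_z|| is plotted and peaks in the vicinity of the scatterers.
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 60), np.linspace(0.1, 0.9, 60))
image = np.zeros_like(xs)
for i in range(xs.shape[0]):
    for jj in range(xs.shape[1]):
        z = np.array([xs[i, jj], ys[i, jj]])
        phi_z = green(antennas, z)
        g = Vh.conj().T @ ((s / (s**2 + alpha**2)) * (U.conj().T @ phi_z))
        image[i, jj] = 1.0 / np.linalg.norm(g)
```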
103

End-to-end Optics Design for Computational Cameras

Sun, Qilin 10 1900 (has links)
Imaging systems have long been designed in separate steps: experience-driven optical design followed by sophisticated image processing. Such a general-purpose approach has achieved success in the past, but it leaves open the question of the best compromise between optics and post-processing for specific tasks, as well as minimizing costs. Driven by this, a series of works is proposed to bring imaging system design into an end-to-end fashion step by step, from joint optics design, point spread function (PSF) optimization, and phase map optimization to a general end-to-end complex lens camera. To demonstrate joint optics design with image recovery, we apply it to flat lens imaging with a large field of view (LFOV). For a super-resolution single-photon avalanche diode (SPAD) camera, the PSF encoded by a diffractive optical element (DOE) is optimized together with the post-processing, which brings the optics design into the end-to-end stage. Expanding to color imaging, optimizing the PSF to derive a DOE fails to find the best compromise between different wavelengths. Snapshot HDR imaging is achieved by optimizing a phase map directly. To further complete the blueprint of end-to-end camera design and break the limits of a simple wave optics model and a single lens surface, we finally propose a general end-to-end complex lens design framework enabled by a differentiable ray tracing image formation model. All works are demonstrated with prototypes and experiments in the real world. Our frameworks offer competitive alternatives for the design of modern imaging systems and several challenging imaging applications.
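To make the end-to-end idea concrete, a toy sketch of jointly optimizing a trainable PSF and a reconstruction network by backpropagating through the image formation model is given below; the 7x7 PSF parameterization, small convolutional network, random training images, and noise level are placeholders, not the DOE/phase-map or differentiable ray-tracing models used in the thesis.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
imgs = torch.rand(16, 1, 32, 32)                      # stand-in training images

# Trainable "optics": logits defining a normalized 7x7 PSF.
psf_logits = torch.zeros(7, 7, requires_grad=True)
# Trainable reconstruction: a small convolutional network.
recon = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1))

opt = torch.optim.Adam([psf_logits, *recon.parameters()], lr=1e-2)
for step in range(200):
    psf = torch.softmax(psf_logits.reshape(-1), 0).reshape(1, 1, 7, 7)
    blurred = F.conv2d(imgs, psf, padding=3)            # differentiable image formation
    noisy = blurred + 0.01 * torch.randn_like(blurred)  # sensor noise
    loss = F.mse_loss(recon(noisy), imgs)               # end-to-end reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
```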
104

Bayesian Methods for Bridging the Continuous and Electrode Data, and Layer Stripping in Electrical Impedance Tomography

Nakkireddy, Sumanth Reddy R. 21 June 2021 (has links)
No description available.
105

Data-driven sparse computational imaging with deep learning

Mdrafi, Robiulhossain 13 May 2022 (has links) (PDF)
Typically, inverse imaging problems deal with the reconstruction of images from sensor measurements, where the sensors can take the form of any imaging modality, such as camera, radar, hyperspectral, or medical imaging systems. In an ideal scenario, we could reconstruct the images by applying an inversion procedure to these sensors' measurements, but practical applications face several challenges: the measurement acquisition process is heavily corrupted by noise, the forward model is not exactly known, and non-linearities or unknown physics of the data acquisition play a role. Hence, a perfect inverse function for immaculate image reconstruction is not exactly known. To this end, in this dissertation, I propose an automatic sensing and reconstruction scheme based on deep learning within the compressive sensing (CS) framework to solve computational imaging problems. Here, I develop a data-driven approach to learn both the measurement matrix and the inverse reconstruction scheme for a given class of signals, such as images. This approach paves the way for end-to-end learning and reconstruction of signals with the aid of cascaded fully connected and multistage convolutional layers with a weighted loss function in an adversarial learning framework. I also extend this analysis by introducing data-driven models that classify directly from compressed measurements through joint reconstruction and classification. I develop a constrained measurement learning framework and demonstrate the higher performance of the proposed approach on typical image reconstruction and hyperspectral image classification tasks. Finally, I also propose a single data-driven network that can take and reconstruct images at multiple rates of signal acquisition. In summary, this dissertation proposes novel methods for data-driven measurement acquisition for sparse signal reconstruction and classification, for learning measurements under the constraints imposed by the hardware requirements of different applications, and for producing a common data-driven platform for learning measurements to reconstruct signals at multiple rates. This dissertation opens the path to learned sensing systems. Future research can use these proposed data-driven approaches as pivotal factors to accomplish task-specific smart sensors in several real-world applications.
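A minimal sketch of the core idea - jointly learning the measurement matrix and an inverse reconstruction network - is shown below; the signal dimensions, plain fully connected decoder, and simple MSE loss are simplifications of the cascaded adversarial architecture described in the abstract.

```python
import torch

n, m = 256, 64                      # signal length, number of compressed measurements
x = torch.randn(512, n)             # stand-in training signals (e.g., image patches)

Phi = torch.randn(m, n, requires_grad=True)          # trainable measurement matrix
decoder = torch.nn.Sequential(                       # trainable reconstruction network
    torch.nn.Linear(m, 512), torch.nn.ReLU(), torch.nn.Linear(512, n))

opt = torch.optim.Adam([Phi, *decoder.parameters()], lr=1e-3)
for step in range(500):
    y = x @ Phi.t()                                  # compressed measurements y = Phi x
    loss = torch.nn.functional.mse_loss(decoder(y), x)
    opt.zero_grad(); loss.backward(); opt.step()
```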
106

Improving Accuracy in Microwave Radiometry via Probability and Inverse Problem Theory

Hudson, Derek Lavell 20 November 2009 (has links) (PDF)
Three problems at the forefront of microwave radiometry are solved using probability theory and inverse problem formulations which are heavily based in probability theory. Probability theory is able to capture information about random phenomena, while inverse problem theory processes that information. The use of these theories results in more accurate estimates and assessments of estimate error than is possible with previous, non-probabilistic approaches. The benefits of probabilistic approaches are expounded and demonstrated. The first problem to be solved is a derivation of the error that remains after using a method which corrects radiometric measurements for polarization rotation. Yueh [1] proposed a method of using the third Stokes parameter TU to correct brightness temperatures such as Tv and Th for polarization rotation. This work presents an extended error analysis of Yueh's method. In order to carry out the analysis, a forward model of polarization rotation is developed which accounts for the random nature of thermal radiation, receiver noise, and (to first order) calibration. Analytic formulas are then derived and validated for bias, variance, and root-mean-square error (RMSE) as functions of scene and radiometer parameters. Examination of the formulas reveals that: 1) natural TU from planetary surface radiation, of the magnitude expected on Earth at L-band, has a negligible effect on correction for polarization rotation; 2) RMSE is a function of rotation angle Ω, but the value of Ω which minimizes RMSE is not known prior to instrument fabrication; and 3) if residual calibration errors can be sufficiently reduced via post-launch calibration, then Yueh's method reduces the error incurred by polarization rotation to a negligible level. The second problem addressed in this dissertation is optimal estimation of calibration parameters in microwave radiometers. Algebraic methods for internal calibration of a certain class of polarimetric microwave radiometers are presented by Piepmeier [2]. This dissertation demonstrates that Bayesian estimation of the calibration parameters decreases the RMSE of the estimates by a factor of two as compared with algebraic estimation. This improvement is obtained by using knowledge of the noise structure of the measurements and by utilizing all of the information provided by the measurements. Furthermore, it is demonstrated that significant information is contained in the covariances between the calibration parameters. This information can be preserved and conveyed by reporting a multidimensional pdf for the parameters rather than merely the means and variances of those parameters. The proposed method is also extended to estimate several hardware parameters of interest in system calibration. The final portion of this dissertation demonstrates the advantages of a probabilistic approach in an empirical situation. A recent inverse problem formulation, sketched in [3], is founded on probability theory and is sufficiently general that it can be applied in empirical situations. This dissertation applies that formulation to the retrieval of Antarctic air temperature from satellite measurements of microwave brightness temperature. The new method is contrasted with the curve-fitting approach that was the previous state of the art. The adaptability of the new method not only results in improved estimation but also produces useful estimates of air temperature in areas where the previous method fails due to the occurrence of melt events.
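As a toy illustration of why a Bayesian treatment of calibration can outperform an algebraic one, the following sketch estimates a linear gain/offset from noisy looks at two reference temperatures and reports the full posterior covariance rather than only point estimates; the numbers and the simple linear-Gaussian model are hypothetical and are not the radiometer model of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
g_true, o_true, sigma = 2.0, 30.0, 0.5
T = np.repeat([80.0, 300.0], 50)                      # cold / hot reference looks
v = g_true * T + o_true + sigma * rng.standard_normal(T.size)

# Algebraic (two-point) estimate from the mean cold and hot counts.
vc, vh = v[T == 80.0].mean(), v[T == 300.0].mean()
g_alg = (vh - vc) / (300.0 - 80.0)
o_alg = vc - g_alg * 80.0

# Bayesian estimate of theta = (g, o) with a Gaussian prior; for this
# linear-Gaussian model the posterior is Gaussian with closed-form moments.
A = np.column_stack([T, np.ones_like(T)])
prior_mean = np.array([2.1, 25.0])
prior_cov = np.diag([0.05**2, 5.0**2])
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ v / sigma**2 + np.linalg.inv(prior_cov) @ prior_mean)

print(g_alg, o_alg)          # algebraic point estimates
print(post_mean, post_cov)   # posterior mean and full covariance
```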
107

Computing traction forces, intracellular prestress, and intracellular modulus distribution from fluorescence microscopy image stacks

Fan, Weiyuan 24 May 2023 (has links)
Cell modulus and prestress are important determinants of cell behavior. This study creates new software tools to compute the modulus and prestress distribution within a living cell. As input, we have a sequence of images of a cell plated on a substrate with fluorescently labeled fibronectin dots. The cell generates focal adhesions with the dots and thus deforms the substrate. A sequence of images of the cell and the fibronectin dots shows their deformation. We tested three different ways to track the movement of the fluorescent fibronectin dots. We demonstrated the accuracy and the adaptability of each method on a sequence of test images with a rigid movement. We found that the best method for dot tracking is a combination of successive dot identification and digital image correlation. The dot deformation provides a measure of traction forces acting on the cell. From the traction forces thus inferred, we use FEM to compute the stress distribution within a cell. We consider two approaches. The first is based on the assumption that the cell has homogeneous elastic properties. This is straightforward and requires only that the cell be meshed and the linear elasticity problem solved on that mesh. Second, we relaxed the homogeneity assumption. We used previously published correlations between prestress and modulus to iteratively update the modulus and prestress distributions within the cell. A novel feature of this work is the implicit reconstruction of the modulus distribution without a measured displacement field, and the reconstruction of the prestress distribution accounting for intracellular inhomogeneity.
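A minimal sketch of the digital image correlation ingredient - locating a dot's displacement between two frames from the peak of an FFT-based cross-correlation - is given below; sub-pixel refinement and the successive dot-identification step combined with it in the thesis are omitted, and the synthetic Gaussian dot is purely illustrative.

```python
import numpy as np

def shift_between(frame0, frame1):
    """Return the integer (dy, dx) shift that best maps frame0 onto frame1."""
    f0 = frame0 - frame0.mean()
    f1 = frame1 - frame1.mean()
    xcorr = np.fft.irfft2(np.fft.rfft2(f1) * np.conj(np.fft.rfft2(f0)),
                          s=frame0.shape)
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts larger than half the image into negative displacements.
    dy = dy - frame0.shape[0] if dy > frame0.shape[0] // 2 else dy
    dx = dx - frame0.shape[1] if dx > frame0.shape[1] // 2 else dx
    return dy, dx

# Synthetic test: a Gaussian "dot" displaced by (3, -2) pixels.
yy, xx = np.mgrid[0:64, 0:64]
dot = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / 8.0)
print(shift_between(dot(32, 32), dot(35, 30)))        # -> (3, -2)
```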
108

Limited angle reconstruction for 2D CT based on machine learning

Oldgren, Eric, Salomonsson, Knut January 2023 (has links)
The aim of this report is to study how machine learning can be used to reconstruct two-dimensional computed tomography (CT) images from limited angle data. This could be used in a variety of applications where either the space or the time available for the CT scan limits the acquired data. In this study, three different types of models are considered. The first model uses filtered back projection (FBP) with a single learned filter, while the second uses a combination of multiple FBPs with learned filters. The last model instead uses an FNO (Fourier Neural Operator) layer to both inpaint and filter the limited angle data, followed by a backprojection layer. The quality of the reconstructions is assessed both visually and statistically, using PSNR and SSIM measures. The results of this study show that while an FBP-based model using one or more trainable filters can achieve better reconstructions than one using an analytical Ram-Lak filter, its reconstructions still fail for small angle spans. Better results in the limited angle case can be achieved using the FNO-based model.
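For reference, a compact sketch of limited-angle filtered back projection with the analytical Ram-Lak filter is given below; a learned-filter model of the kind studied in the report would replace the fixed ramp response with trainable weights. The parallel-beam geometry and the stand-in sinogram are illustrative assumptions.

```python
import numpy as np

def fbp_limited_angle(sinogram, angles_deg, size):
    """Filtered back projection of a parallel-beam sinogram over a (possibly
    limited) set of angles. sinogram has shape (n_angles, n_det)."""
    n_angles, n_det = sinogram.shape
    # Ram-Lak (ramp) filter in the Fourier domain; a learned model would
    # replace this fixed |omega| response with trainable weights.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project each filtered projection along its angle.
    recon = np.zeros((size, size))
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    det = np.arange(n_det) - n_det / 2
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)     # detector coordinate of each pixel
        recon += np.interp(t, det, proj, left=0.0, right=0.0)
    return recon * np.pi / (2 * n_angles)

# Example: reconstruct from projections restricted to a 0-120 degree span.
angles = np.linspace(0.0, 120.0, 60, endpoint=False)
sinogram = np.random.rand(60, 95)       # placeholder; real data would come from a scanner
recon = fbp_limited_angle(sinogram, angles, size=95)
```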
109

Parameter Estimation in Heat Transfer and Elasticity Using Trained POD-RBF Network Inverse Methods

Rogers, Craig 01 January 2010 (has links)
In applied mechanics it is always necessary to understand the fundamental properties of a system in order to generate an accurate numerical model or to predict future operating conditions. These fundamental properties include, but are not limited to, the material parameters of a specimen, the boundary conditions inside of a system, or essential dimensional characteristics that define the system or body. However, in certain instances there may be little to no knowledge about the system's conditions or properties; as a result the problem cannot be modeled accurately using standard numerical methods. Consequently, it is critical to define an approach that is capable of identifying such characteristics of the problem at hand. In this thesis, an inverse approach is formulated using proper orthogonal decomposition (POD) with an accompanying radial basis function (RBF) network to estimate the current material parameters of a specimen with little prior knowledge of the system. Specifically, conductive heat transfer and linear elasticity problems are developed in this thesis and modeled with a corresponding finite element (FEM) or boundary element (BEM) method. In order to create the truncated POD-RBF network to be utilized in the inverse approach, a series of direct FEM or BEM solutions is used to generate a statistical data set of temperatures or deformations in the system or body, each having a set of various material parameters. The data set is then transformed via POD to generate an orthonormal basis, and the desired material characteristics are solved for using the Levenberg-Marquardt (LM) algorithm. For now, the LM algorithm can be simply understood as minimizing the Euclidean norm of the objective least-squares function(s). The trained POD-RBF inverse technique outlined in this thesis provides a flexible framework by which this inverse approach can be implemented in various fields of engineering and mechanics. More importantly, this approach is designed to offer an inexpensive way to accurately estimate material characteristics or properties using nondestructive techniques. While the POD-RBF inverse approach outlined in this thesis focuses primarily on applications in conductive heat transfer, elasticity, and fracture mechanics, this technique is designed to be directly applicable to other realistic conditions and/or industries.
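A minimal sketch of a trained POD-RBF surrogate with a subsequent least-squares parameter fit is shown below; the one-parameter analytic snapshot family, Gaussian RBF kernel, and truncation level are hypothetical stand-ins for the FEM/BEM data sets and multi-parameter problems described in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
params = np.linspace(1.0, 5.0, 15)                      # training parameter values
x = np.linspace(0.0, 1.0, 200)
# Stand-in "direct solutions": one field snapshot per training parameter.
snapshots = np.array([np.exp(-p * x) + 0.2 * np.sin(p * x) for p in params]).T

# Truncated POD basis from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :4]                                        # keep 4 modes
coeffs = basis.T @ snapshots                            # POD coefficients per snapshot

# Gaussian RBF interpolation of each POD coefficient over the parameter space.
def rbf(a, b, eps=1.0):
    return np.exp(-eps * (a[:, None] - b[None, :])**2)

weights = np.linalg.solve(rbf(params, params), coeffs.T)

def predict(p):
    """Surrogate field for parameter p via the trained POD-RBF network."""
    return basis @ (rbf(np.atleast_1d(p), params) @ weights).ravel()

# Inverse step: recover the parameter behind a noisy "measured" field
# with a Levenberg-Marquardt least-squares fit.
measured = np.exp(-2.7 * x) + 0.2 * np.sin(2.7 * x) + 0.01 * rng.standard_normal(x.size)
fit = least_squares(lambda p: predict(p[0]) - measured, x0=[3.0], method='lm')
print(fit.x)                                            # close to 2.7
```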
110

Application of Trained POD-RBF to Interpolation in Heat Transfer and Fluid Mechanics

Ashley, Rebecca A 01 January 2018 (has links)
To accurately model or predict future operating conditions of a system in engineering or applied mechanics, it is necessary to understand its fundamental principles. These may be the material parameters, defining dimensional characteristics, or the boundary conditions. However, there are instances when there is little to no prior knowledge of the system properties or conditions, and consequently, the problem cannot be modeled accurately. It is therefore critical to define a method that can identify the desired characteristics of the current system without accumulating extensive computation time. This thesis formulates an inverse approach using proper orthogonal decomposition (POD) with an accompanying radial basis function (RBF) interpolation network. This method is capable of predicting the desired characteristics of a specimen even with little prior knowledge of the system. This thesis first develops a conductive heat transfer problem, and by using the truncated POD-RBF interpolation network, temperature values are predicted given a varying Biot number. Then, a simple bifurcation problem is modeled and solved for velocity profiles while changing the mass flow rate. This bifurcation problem provides the data and foundation for future research into the left ventricular assist device (LVAD) and implementation of POD-RBF. The trained POD-RBF inverse approach defined in this thesis can be implemented in several applications of engineering and mechanics. It provides model reduction, error filtration, regularization and an improvement over previous analysis utilizing computational fluid dynamics (CFD).
