231

A Method for Characterizing the Properties of Industrial Foams

Salisbury, Shaun M. 10 August 2005 (has links) (PDF)
Assessing the effect of foam layers on transport phenomena is of significant interest in many industries, so a method for predicting foam layer properties has been developed. A model of the propagation of radiation from an amplitude-modulated laser beam through a non-absorbing foam layer has been developed using diffusion theory. Measurements predicted by diffusion theory were compared to results generated using Monte Carlo methods for a variety of foam layer properties in both the time domain and the frequency domain. The properties that were varied include the layer thickness, the scattering coefficient, and the asymmetry parameter. Layer thicknesses between 8.5 mm and 18 cm were considered. Values of the scattering coefficient ranged from about 600 m⁻¹ to 14,000 m⁻¹, while the asymmetry parameter varied between 0 and 1. A conjugate-gradient algorithm was used to minimize the difference between simulated Monte Carlo measurements and diffusion-theory-predicted measurements. A large set of simulated measurements, calculated at various source-detector separations and three discrete frequencies, was used to predict the layer properties. Ten blind cases were considered and property predictions were made for each. The predicted properties were within approximately 10% of the actual values, and on average the errors were approximately 4%. Predictions of the reduced scattering coefficient were all within approximately 5%, with the majority being within 3%. Predictions of L were all within approximately 10%, with the majority being within 7%. Attempts to separate g from the reduced scattering coefficient were unsuccessful, and it was determined that implementation of different source models might make such attempts possible. It was shown that with a large number of measurements, properties could be accurately predicted. A method for reducing the number of measurements needed for accurate property estimation was developed.
Starting with a single measurement location, property predictions were made. An approach for updating the optimal detector location, based on the current estimate of the properties, was developed and applied to three cases. Property predictions for the three cases were made to within 10% of the actual values. A maximum of three measurement locations were necessary to obtain such predictions, a significant reduction as compared to the previously illustrated method.
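The property-estimation loop described above, which minimizes the mismatch between a forward model and simulated measurements with a conjugate-gradient method, can be sketched generically. The forward model below is an invented stand-in, not the thesis's diffusion-theory model, and the parameter values and units are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the diffusion-theory forward model: the
# detected signal as a function of source-detector separation r for a
# layer with scattering coefficient mu_s and thickness L (all values
# here are illustrative, not the thesis's actual model or units).
def forward_model(r, mu_s, L):
    return np.exp(-mu_s * r) / (1.0 + r / L)

rng = np.random.default_rng(0)
r = np.linspace(0.05, 1.0, 20)            # source-detector separations
mu_s_true, L_true = 6.0, 0.5              # "actual" layer properties
measured = forward_model(r, mu_s_true, L_true)
measured *= 1.0 + 0.005 * rng.standard_normal(r.size)   # mild noise

# Sum-of-squares mismatch between model predictions and measurements.
def objective(p):
    return np.sum((forward_model(r, p[0], p[1]) - measured) ** 2)

# Conjugate-gradient minimization from a rough initial guess, as in
# the property-estimation step described above.
fit = minimize(objective, x0=[3.0, 1.0], method="CG")
mu_s_hat, L_hat = fit.x
```

The same structure carries over to the adaptive variant: after each fit, the current estimates would drive the choice of the next detector location.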
232

Signal Processing Methods for Ultra-High Resolution Scatterometry

Williams, Brent A. 05 April 2010 (has links) (PDF)
This dissertation approaches high resolution scatterometry from a new perspective. Three related general topics are addressed: high resolution σ^0 imaging, wind estimation from high resolution σ^0 images over the ocean, and high resolution wind estimation directly from the scatterometer measurements. Theories of each topic are developed, and previous approaches are generalized and formalized. Improved processing algorithms for these theories are developed, implemented for particular scatterometers, and analyzed. Specific results and contributions are noted below. The σ^0 imaging problem is approached as the inversion of a noisy aperture-filtered sampling operation, extending the current theory to deal explicitly with noise. A maximum a posteriori (MAP) reconstruction estimator is developed to regularize the problem and deal appropriately with noise. The method is applied to the SeaWinds scatterometer and the Advanced Scatterometer (ASCAT). The MAP approach produces high resolution σ^0 images without introducing the ad hoc processing steps employed in previous methods. An ultra-high resolution (UHR) wind product has been previously developed and shown to produce valuable high resolution information, but the theory has not been formalized. This dissertation develops the UHR sampling model and noise model, and explicitly states the implicit assumptions involved. Improved UHR wind retrieval methods are also developed. The developments in the σ^0 imaging problem are extended to deal with the nonlinearities involved in wind field estimation. A MAP wind field reconstruction estimator is developed and implemented for the SeaWinds scatterometer. MAP wind reconstruction produces a wind field estimate that is consistent with the conventional product, but with higher resolution. The MAP reconstruction estimates have a resolution similar to the UHR estimates, but with less noise.
A hurricane wind model is applied to obtain an informative prior used in MAP estimation, which reduces noise and ameliorates ambiguity selection and rain contamination.
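For a linear measurement model with Gaussian noise and a Gaussian prior, a MAP estimate reduces to Tikhonov-regularized least squares with a closed form. The sketch below illustrates only that structure; the operator `A`, the problem sizes, and the variances are all invented, and the actual scatterometer problem involves aperture filtering and richer priors:

```python
import numpy as np

# MAP sketch for a linear sampling model y = A x + n with Gaussian
# noise (variance s2) and a zero-mean Gaussian prior on x (variance p2):
#   x_map = argmin ||A x - y||^2 / s2 + ||x||^2 / p2
#         = (A^T A + (s2/p2) I)^{-1} A^T y.
rng = np.random.default_rng(1)
n_meas, n_pix = 40, 25
A = rng.standard_normal((n_meas, n_pix))   # hypothetical sampling operator
x_true = rng.standard_normal(n_pix)        # "true" sigma-0 field (toy)
s2, p2 = 0.01, 1.0
y = A @ x_true + np.sqrt(s2) * rng.standard_normal(n_meas)

# Regularized normal equations; the prior term keeps the inversion
# stable even when A^T A is poorly conditioned.
lam = s2 / p2
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
```

The regularization weight is the noise-to-prior variance ratio, which is how the MAP view turns noise handling into an explicit part of the inversion rather than an ad hoc post-processing step.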
233

Inverse Boundary Element/Genetic Algorithm Method for Reconstruction O

Silieti, Mahmood 01 January 2004 (has links)
A methodology is formulated for the solution of the inverse problem concerned with the reconstruction of multi-dimensional heat fluxes for film cooling applications. The motivation for this study is the characterization of complex thermal conditions in industrial applications such as those encountered in film-cooled turbomachinery components. The heat conduction problem in the metal endwall/shroud is solved using the boundary element method (BEM), and the inverse problem is solved using a genetic algorithm (GA). Thermal conditions are overspecified at exposed surfaces amenable to measurement, while the temperature and surface heat flux distributions are unknown at the film cooling hole/slot walls. The latter are determined in an iterative process by developing two approaches. The first approach, developed for 2D applications, solves an inverse problem whose objective is to adjust the film cooling hole/slot wall temperatures and heat fluxes until the temperature and heat flux at the measurement surfaces are matched in an overall heat conduction solution. The second approach, developed for 2D and 3D applications, is to distribute a set of singularities (sinks) in the vicinity of the cooling slot/hole surfaces, inside a fictitious extension of the physical domain or along the cooling hole centerline, with a given initial strength distribution. The inverse problem iteratively alters the strength distribution of the singularities (sinks) until the heat fluxes at the measurement surfaces are matched. The heat flux distributions are determined in a post-processing stage after the inverse problem is solved. The second approach provides a tremendous advantage in solving the inverse problem, particularly in 3D applications, and it is recommended as the method of choice for this class of problems. It can be noted that the GA-reconstructed heat flux distributions are robust, yielding accurate results for both exact and error-laden inputs.
In all cases in this study, results from experiments are simulated using full conjugate heat transfer (CHT) finite volume models which incorporate the interactions of the external convection in the hot turbulent gas, the internal convection within the cooling plena, and the heat conduction in the metal endwall/shroud region. Extensive numerical investigations are undertaken to demonstrate the importance of conjugate heat transfer in film cooling applications and to identify the implications of various turbulence models for the prediction of accurate and more realistic surface temperatures and heat fluxes in the CHT simulations. These, in turn, are used to provide numerical inputs to the inverse problem. Single and multiple cooling slots, cylindrical cooling holes, and fan-shaped cooling holes are considered in this study. The turbulence closure is modeled using several two-equation approaches, a four-equation turbulence model, as well as five- and seven-moment Reynolds stress models. The predicted results, by the different turbulence models, for the cases of adiabatic and conjugate models, are compared to experimental data reported in the open literature. Results show the significant effects of conjugate heat transfer on the temperature field in the film cooling hole region, and the additional heating of the cooling jet itself. Moreover, results from the detailed numerical studies presented here validate the inverse problem approaches and reveal good agreement between the BEM/GA-reconstructed heat fluxes and the CHT-simulated heat fluxes along the inaccessible cooling slot/hole walls.
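The inverse step described above, a genetic algorithm adjusting sink strengths until the measured-surface fluxes are matched, can be illustrated with a minimal real-coded GA on an invented linear forward map. Everything here (the map `G`, the population size, the operators) is a hypothetical sketch, not the thesis's BEM-based formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear forward map: sink strengths -> fluxes at the
# measurement surfaces (in the thesis this role is played by a BEM
# heat-conduction solve).
G = rng.standard_normal((12, 4))
q_true = np.array([1.5, -0.5, 2.0, 0.8])   # "actual" sink strengths
q_meas = G @ q_true                         # simulated measurements

def fitness(pop):                           # lower mismatch = fitter
    return np.sum((pop @ G.T - q_meas) ** 2, axis=1)

# Minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism.
pop = rng.uniform(-3.0, 3.0, size=(60, 4))
for _ in range(200):
    f = fitness(pop)
    i, j = rng.integers(0, 60, size=(2, 60))          # tournaments
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    alpha = rng.random((60, 1))                        # blend crossover
    children = alpha * parents + (1.0 - alpha) * parents[::-1]
    children += 0.05 * rng.standard_normal(children.shape)  # mutation
    children[0] = pop[np.argmin(f)]                    # keep the best
    pop = children

best = pop[np.argmin(fitness(pop))]
```

A GA of this kind needs only forward evaluations, which is why it pairs naturally with a black-box BEM solver and tolerates error-laden inputs.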
234

Iterative methods for the solution of the electrical impedance tomography inverse problem.

Alruwaili, Eman January 2023 (has links)
No description available.
235

Inverse Problems in Structural Mechanics

Li, Jing 29 December 2005 (has links)
This dissertation deals with the solution of three inverse problems in structural mechanics. The first one is load updating for finite element models (FEMs). A least squares fitting is used to identify the load parameters. The basic studies are made for geometrically linear and nonlinear FEMs of beams or frames by using a four-noded curved beam element, which, for a given precision, may significantly mitigate the ill-posedness of the problem by reducing the overall number of degrees of freedom (DOF) of the system, and especially the number of unknown variables, so as to obtain an overdetermined system. For the basic studies, the unknown applied load within an element is represented by a linear combination of integrated Legendre polynomials, the coefficients of which are the parameters to be extracted using measured displacements or strains. The optimizer L-BFGS-B is used to solve the least squares problem. The second problem is the placement optimization of a distributed sensing fiber optic sensor for a smart bed using Genetic Algorithms (GA), where the sensor performance is maximized. The sensing fiber optic cable is represented by a Non-Uniform Rational B-Splines (NURBS) curve, which reduces the placement problem from an infinite set of infinitesimal sensors to a finite set of control points. The sensor performance is simplified as the integration of the absolute curvature change of the fiber optic cable with respect to a perturbation due to the body movement of a patient. The smart bed is modeled as an elastic mattress core, which supports a fiber optic sensor cable. The initial and deformed geometries of the bed due to the body weight of the patient are calculated using MSC/NASTRAN for a given body pressure. The deformation of the fiber optic cable can be extracted from the deformation of the mattress. The performance of the fiber optic sensor for any given placement is then calculated for any given perturbation.
The third application is stiffened panel optimization, including the size and placement optimization of the blade stiffeners, subject to buckling and stress constraints. The present work uses NURBS for the panel and stiffener representation. The mesh for the panel is generated using DistMesh, a triangulation algorithm in MATLAB. A NASTRAN/MATLAB interface is developed to automatically transfer data between the analysis and optimization processes. The optimization consists of minimizing the weight of the stiffened panel, with the design variables being the thickness of the plate, the height and width of the stiffeners, and the placement of the stiffeners, subject to buckling and stress constraints under in-plane normal/shear and out-of-plane pressure loading conditions. / Ph. D.
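The load-updating idea in the first problem (represent the unknown load by Legendre-polynomial coefficients, then recover the coefficients by least squares with L-BFGS-B) can be sketched as follows. The load-to-displacement map `K` here is an invented stand-in for the curved-beam FEM, and plain rather than integrated Legendre polynomials are used for simplicity:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Measurement stations along the (normalized) element coordinate.
xi = np.linspace(-1.0, 1.0, 30)
c_true = np.array([1.0, 0.4, -0.3])        # "actual" load coefficients

# Hypothetical linear load -> displacement map standing in for the FEM
# (well-conditioned perturbation of the identity).
K = np.eye(30) + 0.05 * rng.standard_normal((30, 30))

def displacements(c):
    load = legendre.legval(xi, c)          # Legendre expansion of load
    return np.linalg.solve(K, load)        # "FEM" displacement solve

u_meas = displacements(c_true)             # simulated measurements

# Least-squares objective over the load coefficients, minimized with
# L-BFGS-B as in the thesis.
def objective(c):
    return np.sum((displacements(c) - u_meas) ** 2)

fit = minimize(objective, x0=np.zeros(3), method="L-BFGS-B")
```

With more measurement stations than coefficients, the system is overdetermined, which is exactly the regularizing effect the element choice is meant to achieve.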
236

Kernel Estimation Approaches to Blind Deconvolution

Yash Sanghvi (18387693) 19 April 2024 (has links)
<p dir="ltr">The past two decades have seen photography shift from the hands of professionals to those of the average smartphone user. However, fitting a camera module in the palm of your hand has come at a cost. The reduced sensor size, and hence the smaller pixels, has made the image inherently noisier due to fewer photons being captured. To compensate for fewer photons, we can increase the exposure of the camera, but this may exaggerate the effect of hand shake, making the image blurrier. The presence of both noise and blur has made post-processing algorithms necessary to produce a clean and sharp image. </p><p dir="ltr">In this thesis, we discuss various methods of deblurring images in the presence of noise. Specifically, we address the problem of photon-limited deconvolution, both with and without the underlying blur kernel being known, i.e., non-blind and blind deconvolution respectively. For the problem of blind deconvolution, we discuss the flaws of the conventional approach of joint estimation of the image and blur kernel. This approach, despite its drawbacks, has been the go-to method for solving blind deconvolution for decades. We then discuss the relatively unexplored kernel-first approach to solving the problem, which is more numerically stable than its alternating-minimization counterpart. We show how to implement this framework using deep neural networks in practice for both photon-limited and noiseless deconvolution problems. </p>
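Once a kernel estimate is in hand (the kernel-first route), the remaining non-blind step can be illustrated with a classical Wiener deconvolution. This 1-D sketch with an assumed box kernel is only a textbook stand-in for the deep-network deconvolution methods the thesis actually develops:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sharp signal: a spike and a step, circularly blurred by a known
# box kernel, with mild additive sensor noise.
n = 128
x = np.zeros(n); x[30] = 1.0; x[80:90] = 0.5
k = np.zeros(n); k[:5] = 1.0 / 5.0                 # assumed box blur
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))
y += 0.001 * rng.standard_normal(n)

# Wiener deconvolution in the Fourier domain:
#   X_hat = conj(K) Y / (|K|^2 + eps),
# where eps regularizes frequencies the kernel suppresses.
K_f, Y_f = np.fft.fft(k), np.fft.fft(y)
eps = 1e-3
x_hat = np.real(np.fft.ifft(np.conj(K_f) * Y_f / (np.abs(K_f) ** 2 + eps)))
```

The regularizer `eps` plays the role a prior or learned network plays in modern pipelines: without it, dividing by near-zero kernel frequencies amplifies noise catastrophically.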
237

On local constraints and regularity of PDE in electromagnetics : applications to hybrid imaging inverse problems

Alberti, Giovanni S. January 2014 (has links)
The first contribution of this thesis is a new regularity theorem for time-harmonic Maxwell's equations with less than Lipschitz complex anisotropic coefficients. By using the L<sup>p</sup> theory for elliptic equations, it is possible to prove H<sup>1</sup> and Hölder regularity results, provided that the coefficients are W<sup>1,p</sup> for some p > 3. This improves previous regularity results, where the assumption W<sup>1,∞</sup> for the coefficients was believed to be optimal. The method can be easily extended to the case of bi-anisotropic materials, for which a separate approach turns out to be unnecessary. The second focus of this work is the boundary control of the Helmholtz and Maxwell equations to enforce local constraints inside the domain. More precisely, we look for suitable boundary conditions such that the corresponding solutions and their derivatives satisfy certain local non-zero constraints. Complex geometric optics solutions can be used to construct such illuminations, but are impractical for several reasons. We propose a constructive approach to this problem based on the use of multiple frequencies. The suitable boundary conditions are explicitly constructed and give the desired constraints, provided that a finite number of frequencies, given a priori, are chosen in a fixed range. This method is based on the holomorphicity of the solutions with respect to the frequency and on the regularity theory for the PDE under consideration. This theory finds applications to several hybrid imaging inverse problems, where the unknown coefficients have to be imaged from internal measurements. In order to perform the reconstruction, we often need to find suitable boundary conditions such that the corresponding solutions satisfy certain non-zero constraints, depending on the particular problem under consideration.
The multiple frequency approach introduced in this thesis represents a valid alternative to the use of complex geometric optics solutions to construct such boundary conditions. Several examples are discussed.
238

Parameter recovery in AC solution-phase voltammetry and a consideration of some issues arising when applied to surface-confined reactions

Morris, Graham Peter January 2014 (has links)
A major problem in the quantitative analysis of AC voltammetric data has been the variance in results between laboratories, often resulting from a reliance on "heuristic" methods of parameter estimation that are strongly dependent on the choices of the operator. In this thesis, an automatic method for parameter estimation will be tested in the context of experiments involving electron-transfer processes in solution phase. It will be shown that this automatic method produces parameter estimates consistent with those from other methods and the literature in the case of the ferri-/ferrocyanide couple, and is able to explain inconsistency in published values of the rate parameter for the ferrocene/ferrocenium couple. When a coupled homogeneous reaction is considered in a theoretical study, parameter recovery is achieved with a higher degree of accuracy when simulated data resulting from a high frequency AC voltammetry waveform are used. When surface-confined reactions are considered, heterogeneity in the rate constant and formal potential make parameter estimation more challenging. In the final study, a method for incorporating these "dispersion" effects into voltammetric simulations is presented, and for the first time, a quantitative theoretical study of the impact of dispersion on measured current is undertaken.
239

Signal processing methods for fast and accurate reconstruction of digital holograms

Seifi, Mozhdeh 03 October 2013 (has links) (PDF)
Techniques for fast, 3D, quantitative microscopy are of great interest in many fields. In this context, in-line digital holography has significant potential due to its relatively simple setup (lensless imaging), its three-dimensional character and its temporal resolution. The goal of this thesis is to improve existing hologram reconstruction techniques by employing an "inverse problems" approach. For applications of objects with parametric shapes, a greedy algorithm has been previously proposed which solves the (inherently ill-posed) inversion problem of reconstruction by maximizing the likelihood between a model of holographic patterns and the measured data. The first contribution of this thesis is to reduce the computational costs of this algorithm using a multi-resolution approach (FAST algorithm). For the second contribution, a "matching pursuit" type of pattern recognition approach is proposed for hologram reconstruction of volumes containing parametric objects, or non-parametric objects of a few shape classes. This method finds the closest set of diffraction patterns to the measured data using a diffraction pattern dictionary. The size of the dictionary is reduced by employing a truncated singular value decomposition to obtain a low cost algorithm. The third contribution of this thesis was carried out in collaboration with the laboratory of fluid mechanics and acoustics of Lyon (LMFA). The greedy algorithm is used in a real application: the reconstruction and tracking of free-falling, evaporating, ether droplets. In all the proposed methods, special attention has been paid to improvement of the accuracy of reconstruction as well as to reducing the computational costs and the number of parameters to be tuned by the user (so that the proposed algorithms can be used with little or no supervision). A Matlab® toolbox (accessible on-line) has been developed as part of this thesis.
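The dictionary-compression idea (a truncated SVD of the diffraction-pattern dictionary, followed by matching in the reduced space) can be sketched as follows. The "diffraction patterns" here are an invented smooth one-parameter family, not actual hologram models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical dictionary: each column is the pattern for one candidate
# parameter value; the family varies smoothly, so the dictionary is
# well approximated by a few singular vectors.
t = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 50)
D = np.stack([np.cos(2 * np.pi * p * t) * np.exp(-t / p) for p in params],
             axis=1)                                  # shape (200, 50)

# Truncated SVD: keep r components, project dictionary into that basis.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 8
U_r = U[:, :r]
D_r = U_r.T @ D                                       # reduced dictionary

# "Measured" data: one dictionary pattern plus noise; matching is done
# entirely in the cheap r-dimensional space.
truth = 23
y = D[:, truth] + 0.001 * rng.standard_normal(t.size)
y_r = U_r.T @ y
best = int(np.argmin(np.sum((D_r - y_r[:, None]) ** 2, axis=0)))
```

Each match now costs O(r) per atom instead of O(200), which is the source of the speed-up; the price is the (one-off) SVD and a small approximation error controlled by the discarded singular values.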
240

Proximal methods for the resolution of inverse problems: application to positron emission tomography

Pustelnik, Nelly 13 December 2010 (has links)
The objective of this work is to propose reliable, efficient and fast methods for minimizing convex criteria that arise in inverse problems for imaging. We focus on restoration/reconstruction problems in which the data are degraded by both a linear operator and noise, where the latter is not assumed to be necessarily additive. The reliability of the methods is ensured through the use of proximal algorithms, the convergence of which is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators and the image specificities. Of particular interest are regularization terms based on total variation and/or the sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large-size inverse problems arising in image restoration, stereoscopy, multispectral imagery and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the quality of the reconstruction, we make use of the spatio-temporal characteristics of brain tissue activity.
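The simplest proximal algorithm of the kind discussed above is the forward-backward (ISTA) iteration for an ℓ₁-regularized least-squares criterion, where the proximity operator of the ℓ₁ term is soft-thresholding. This sketch uses an invented Gaussian operator and additive noise rather than a PET projection operator with Poisson noise:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse ground truth observed through a hypothetical linear operator:
#   minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
m, n = 60, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3.0 * rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant

def soft_threshold(v, t):                  # proximity operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Forward-backward iteration: gradient step on the data term, then
# proximal step on the regularizer.
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = soft_threshold(x - step * grad, step * lam)
```

Replacing the quadratic data term by a Poisson likelihood, and the ℓ₁ term by total variation or a frame-sparsity penalty, changes only the gradient and the proximity operator, which is what makes the proximal framework so flexible.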
