  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

A consistent direct-iterative inverse design method for the Euler equations

Brock, Jerry S. 20 October 2005 (has links)
A new, consistent direct-iterative method is proposed for the solution of the aerodynamic inverse design problem. Direct-iterative methods couple analysis and shape modification methods to iteratively determine the geometry required to support a target surface pressure. The proposed method includes a consistent shape modification method wherein the identical governing equations are used in both portions of the design procedure. The new shape modification method is simple, having been developed from a truncated, quasi-analytical Taylor's series expansion of the global governing equations. This method includes a unique solution algorithm and a design tangency boundary condition which directly relates the target pressure to shape modification. The new design method was evaluated with an upwind, cell-centered finite-volume formulation of the two-dimensional Euler equations. Controlled inverse design tests were conducted with a symmetric channel where the initial and target geometries were known. The geometric design variable was a channel-wall ramp angle, θ, which is nominally five degrees. Target geometries were defined with ramp-angle perturbations of Δθ = 2%, 10%, and 20%. The new design method was demonstrated to accurately predict the target geometries for subsonic, transonic, and supersonic test cases: M = 0.30, 0.85, and 2.00. The supersonic test cases were solved efficiently, requiring very few iterations. A stable and convergent solution process was also demonstrated for the lower-speed test cases using an under-relaxed geometry update procedure. The development and demonstration of the consistent direct-iterative method herein represent the important first steps required for a new research area for the advancement of aerodynamic inverse design methods. / Ph. D.
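The direct-iterative loop described above (analyze, compare against the target pressure, apply a truncated first-order Taylor update, under-relax) can be sketched as follows. The surrogate pressure model, relaxation factor, and angles are illustrative stand-ins, not the thesis's Euler solver:

```python
import numpy as np

# Toy surrogate for the flow analysis: surface pressure as a smooth
# function of the ramp angle theta (degrees). A real implementation
# would run the Euler solver here.
def analyze(theta):
    return 1.0 - 0.02 * theta - 0.001 * theta**2

def inverse_design(p_target, theta0=5.0, relax=0.5, tol=1e-10, max_iter=100):
    """Direct-iterative design: couple analysis with a truncated
    first-order Taylor-series shape update, under-relaxed for stability."""
    theta = theta0
    for _ in range(max_iter):
        p = analyze(theta)
        if abs(p - p_target) < tol:
            break
        # Quasi-analytical sensitivity dp/dtheta (finite difference here)
        h = 1e-6
        dp_dtheta = (analyze(theta + h) - analyze(theta - h)) / (2 * h)
        # First-order Taylor update toward the target pressure, under-relaxed
        theta += relax * (p_target - p) / dp_dtheta
    return theta

# Recover a 10% ramp-angle perturbation (target theta = 5.5 deg)
theta_star = inverse_design(analyze(5.5))
print(round(theta_star, 6))  # ~5.5
```

The under-relaxation factor plays the same stabilizing role as the under-relaxed geometry update mentioned for the lower-speed test cases.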
172

Advanced Sampling Methods for Solving Large-Scale Inverse Problems

Attia, Ahmed Mohamed Mohamed 19 September 2016 (has links)
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms.
The new algorithm, named "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The family of algorithms therefore continues with computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable. In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. The Gaussian prior assumption in the original HMC filter is relaxed.
Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility to configure data assimilation studies. / Ph. D.
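The core Hamiltonian Monte Carlo step behind the whole family of filters and smoothers can be sketched in a few lines. This is a generic HMC sampler on a toy target, not the thesis's DATeS implementation; the step size and trajectory length are arbitrary choices:

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Minimal Hamiltonian Monte Carlo sampler (illustrative sketch;
    the thesis applies this idea to high-dimensional DA posteriors)."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * eps * grad_logp(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_logp(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new)
        # Metropolis accept/reject preserves the target distribution
        dH = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.random()) < dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Sanity check: sample a 1D standard normal
logp = lambda x: -0.5 * float(x @ x)
grad = lambda x: -x
s = hmc_sample(logp, grad, [3.0])
print(s[500:].mean(), s[500:].std())  # roughly 0 and 1 after burn-in
```

Because the accept/reject step only needs the log-density up to a constant, the same machinery applies to non-Gaussian posteriors where Kalman-type updates break down.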
173

Approximate Deconvolution Reduced Order Modeling

Xie, Xuping 01 February 2016 (has links)
This thesis proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (ν = 10⁻³). / Master of Science
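The POD step above can be sketched with a snapshot SVD. The snapshot data here is synthetic (a travelling Gaussian pulse standing in for Burgers-equation solutions), and the retained dimension is an arbitrary choice:

```python
import numpy as np

# Snapshot matrix: columns are solution states at different times.
x = np.linspace(0, 1, 200)
snapshots = np.column_stack(
    [np.exp(-100 * (x - 0.1 * k) ** 2) for k in range(10)]
)

# POD: the left singular vectors of the snapshot matrix give an
# orthonormal basis ordered by captured "energy" (singular values).
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4                       # retained ROM dimension
pod_basis = U[:, :r]        # ROM basis, often written Phi

# Fraction of snapshot energy captured by the first r modes
energy = (svals[:r] ** 2).sum() / (svals ** 2).sum()
print(round(energy, 4))
```

The ROM then evolves only the r generalized coordinates obtained by projecting the governing equations onto `pod_basis`; the AD closure corrects for the discarded modes.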
174

On the Use of Arnoldi and Golub-Kahan Bases to Solve Nonsymmetric Ill-Posed Inverse Problems

Brown, Matthew Allen 20 February 2015 (has links)
Iterative Krylov subspace methods have proven to be efficient tools for solving linear systems of equations. In the context of ill-posed inverse problems, they tend to exhibit semiconvergence behavior, making it difficult to detect "inverted noise" and stop iterations before solutions become contaminated. Regularization methods such as spectral filtering methods use the singular value decomposition (SVD) and are effective at filtering inverted noise from solutions, but are computationally prohibitive on large problems. Hybrid methods apply regularization techniques to the smaller "projected problem" that is inherent to iterative Krylov methods at each iteration, thereby overcoming the semiconvergence behavior. Commonly, the Golub-Kahan bidiagonalization is used to construct a set of orthonormal basis vectors that span the Krylov subspaces from which solutions will be chosen, but seeking a solution in the orthonormal basis generated by the Arnoldi process (which is fundamental to the popular iterative method GMRES) has been of renewed interest recently. We discuss some of the positive and negative aspects of each process and use example problems to examine some qualities of the bases they produce. Computing optimal solutions in a given basis gives some insight into the performance of the corresponding iterative methods and how hybrid methods can contribute. / Master of Science
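The spectral filtering idea mentioned above can be illustrated directly with the SVD on a small synthetic deblurring problem; the operator, noise level, and regularization parameter are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small discrete ill-posed problem: a Gaussian smoothing (blurring) operator
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

# Spectral filtering via the SVD: Tikhonov filter factors
# f_i = s_i^2 / (s_i^2 + lam^2) damp the inverted-noise components.
U, s, Vt = np.linalg.svd(A)
lam = 1e-2
f = s ** 2 / (s ** 2 + lam ** 2)
x_naive = Vt.T @ ((U.T @ b) / s)          # unregularized: noise blows up
x_reg = Vt.T @ (f * (U.T @ b) / s)        # filtered solution

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(err_naive > err_reg)  # filtering beats the naive inverse
```

Hybrid methods apply exactly this kind of filtering, but to the small projected problem produced by Golub-Kahan or Arnoldi at each iteration, so the full-size SVD is never needed.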
175

Randomization for Efficient Nonlinear Parametric Inversion

Sariaydin, Selin 04 June 2018 (has links)
Nonlinear parametric inverse problems appear in many applications in science and engineering. We focus on diffuse optical tomography (DOT) in medical imaging. DOT aims to recover an unknown image of interest, such as the absorption coefficient in tissue to locate tumors in the body. Using a mathematical (forward) model to predict measurements given a parametrization of the tissue, we minimize the misfit between predicted and actual measurements up to a given noise level. The main computational bottleneck in such inverse problems is the repeated evaluation of this large-scale forward model, which corresponds to solving large linear systems for each source and frequency at each optimization step. Moreover, to efficiently compute derivative information, we need to solve, repeatedly, linear systems with the adjoint for each detector and frequency. As rapid advances in technology allow for large numbers of sources and detectors, these problems become computationally prohibitive. In this thesis, we introduce two methods to drastically reduce this cost. To efficiently implement Newton methods, we extend the use of simultaneous random sources to reduce the number of linear system solves to include simultaneous random detectors. Moreover, we combine simultaneous random sources and detectors with optimized ones that lead to faster convergence and more accurate solutions. We can use reduced order models (ROM) to drastically reduce the size of the linear systems to be solved in each optimization step while still solving the inverse problem accurately. However, the construction of the ROM bases still incurs a substantial cost. We propose to use randomization to drastically reduce the number of large linear solves needed for constructing the global ROM bases without degrading the accuracy of the solution to the inversion problem. 
We demonstrate the efficiency of these approaches with 2-dimensional and 3-dimensional examples from DOT; however, our methods have the potential to be useful for other applications as well. / Ph. D.
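The simultaneous-random-sources idea above can be illustrated with a Rademacher probing estimate of the full data misfit; the dimensions and the residual matrix here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Residual matrix R = predicted - measured, columns indexed by sources.
# Evaluating the full misfit ||R||_F^2 needs one forward solve per
# source; with random simultaneous sources we instead solve for a few
# random combinations of the sources.
n_detectors, n_sources = 100, 80
R = rng.standard_normal((n_detectors, n_sources))

full_misfit = np.linalg.norm(R, 'fro') ** 2

# Rademacher probing: E[||R w||^2] = ||R||_F^2 for w with +/-1 entries.
k = 40                                   # number of simultaneous sources
W = rng.choice([-1.0, 1.0], size=(n_sources, k))
estimate = np.linalg.norm(R @ W, 'fro') ** 2 / k

print(abs(estimate - full_misfit) / full_misfit)  # small relative error
```

Each column of `W` plays the role of one simultaneous random source (and, symmetrically, the same trick applies on the detector side), cutting the number of large linear solves from `n_sources` to `k`.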
176

Solving Forward and Inverse Problems for Seismic Imaging using Invertible Neural Networks

Gupta, Naveen 11 July 2023 (has links)
Full Waveform Inversion (FWI) is a widely used optimization technique for subsurface imaging, where the goal is to estimate the seismic wave velocity beneath the Earth's surface from the seismic data observed at the surface. The problem is primarily governed by the wave equation, which is a non-linear second-order partial differential equation. A number of approaches have been developed for FWI, including physics-based iterative numerical solvers as well as data-driven machine learning (ML) methods. Existing numerical solutions to FWI suffer from three major challenges: (1) sensitivity to the initial velocity guess, (2) a non-convex loss landscape, and (3) sensitivity to noise. Additionally, they suffer from high computational cost, making them infeasible to apply in complex real-world applications. Existing ML solutions for FWI only solve the inverse problem and are prone to yield non-unique solutions. In this work, we propose to solve both forward and inverse problems jointly to alleviate the issue of non-unique solutions for an inverse problem. We study the FWI problem from a new perspective and propose a novel approach based on Invertible Neural Networks. This type of neural network is designed to learn bijective mappings between the input and target distributions and hence presents a potential solution for solving forward and inverse problems jointly. In this thesis, we developed a data-driven framework that can be used to learn forward and inverse mappings between any arbitrary input and output spaces. Our model, Invertible X-net, can be used to solve FWI to obtain high-quality velocity images and also to predict the seismic waveform data. We compare our model with the existing baseline models and show that our model outperforms them in velocity reconstruction on the OpenFWI dataset. Additionally, we compare the predicted waveforms with a baseline and the ground truth and show that our model is capable of predicting highly accurate seismic waveforms simultaneously.
/ Master of Science / Recent advancements in deep learning have led to the development of sophisticated methods that can be used to solve scientific problems in many disciplines, including medical imaging, geophysics, and signal processing. For example, in geophysics we study the internal structure of the Earth from indirect physical measurements. Often, these kinds of problems are challenging due to the existence of non-unique and unstable solutions. In this thesis, we look at one such problem, called Full Waveform Inversion, which aims to estimate the velocity of mechanical waves inside the Earth from wave amplitude observations on the surface. For this problem, we explore a special class of neural networks that allows unique mappings between the input and output spaces and thus alleviates the non-uniqueness and instability in performing Full Waveform Inversion for seismic imaging.
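The bijectivity that makes invertible networks attractive can be demonstrated with a single additive coupling layer, a standard building block of invertible architectures. The weights are random and this is not the Invertible X-net itself, only the invertible mechanism it builds on:

```python
import numpy as np

# Additive coupling layer (RealNVP-style): split the input, transform
# one half conditioned on the other; invertible by construction.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 4)) * 0.1

def shift(x_half):
    # Small conditioning network t(.) (arbitrary weights, illustration only)
    return np.tanh(x_half @ W1) @ W2

def forward(x):
    x1, x2 = x[:4], x[4:]
    y1 = x1
    y2 = x2 + shift(x1)      # transform half 2 conditioned on half 1
    return np.concatenate([y1, y2])

def inverse(y):
    y1, y2 = y[:4], y[4:]
    x1 = y1
    x2 = y2 - shift(y1)      # exact inverse: subtract the same shift
    return np.concatenate([x1, x2])

x = rng.standard_normal(8)
print(np.allclose(inverse(forward(x)), x))  # True: exact invertibility
```

Stacking such layers (alternating which half is transformed) yields a network whose forward pass maps velocity to waveforms and whose exact inverse maps waveforms back to velocity, with no separately trained inverse model.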
177

The Calderón problem for connections

Cekić, Mihajlo January 2017 (has links)
This thesis is concerned with the inverse problem of determining a unitary connection $A$ on a Hermitian vector bundle $E$ of rank $m$ over a compact Riemannian manifold $(M, g)$ from the Dirichlet-to-Neumann (DN) map $\Lambda_A$ of the associated connection Laplacian $d_A^*d_A$. The connection is to be determined up to a unitary gauge equivalence equal to the identity at the boundary. In our first approach to the problem, we restrict our attention to conformally transversally anisotropic (cylindrical) manifolds $M \Subset \mathbb{R}\times M_0$. Our strategy can be described as follows: we construct the special Complex Geometric Optics solutions oscillating in the vertical direction, that concentrate near geodesics and use their density in an integral identity to reduce the problem to a suitable $X$-ray transform on $M_0$. The construction is based on our proof of existence of Gaussian Beams on $M_0$, which are a family of smooth approximate solutions to $d_A^*d_Au = 0$ depending on a parameter $\tau \in \mathbb{R}$, bounded in $L^2$ norm and concentrating in measure along geodesics when $\tau \to \infty$, whereas the small remainder (that makes the solution exact) can be shown to exist by using suitable Carleman estimates. In the case $m = 1$, we prove the recovery of the connection given the injectivity of the $X$-ray transform on $0$ and $1$-forms on $M_0$. For $m > 1$ and $M_0$ simple we reduce the problem to a certain two dimensional $\textit{new non-abelian ray transform}$. In our second approach, we assume that the connection $A$ is a $\textit{Yang-Mills connection}$ and no additional assumption on $M$. We construct a global gauge for $A$ (possibly singular at some points) that ties well with the DN map and in which the Yang-Mills equations become elliptic. By using the unique continuation property for elliptic systems and the fact that the singular set is suitably small, we are able to propagate the gauges globally. 
For the case $m = 1$ we are able to reconstruct the connection, whereas for $m > 1$ we are forced to make the technical assumption that $(M, g)$ is analytic in order to prove the recovery. Finally, both approaches rely on a vital fact proved in this work: $\Lambda_A$ is a pseudodifferential operator of order $1$ acting on sections of $E|_{\partial M}$, whose full symbol determines the full Taylor expansion of $A$ at the boundary.
178

Optical Characterization and Optimization of Display Components : Some Applications to Liquid-Crystal-Based and Electrochromics-Based Devices

Valyukh, Iryna January 2009 (has links)
This dissertation is focused on theoretical and experimental studies of the optical properties of materials and multilayer structures composing liquid crystal displays (LCDs) and electrochromic (EC) devices. By applying spectroscopic ellipsometry, we have determined the optical constants of thin films of electrochromic tungsten oxide (WOx) and nickel oxide (NiOy), as well as the films' thickness and roughness. These films, which were obtained under sputtering conditions, possess high transmittance, which is important for achieving good visibility and high contrast in an EC device. Another application of general spectroscopic ellipsometry relates to the study of a photo-alignment layer made of a mixture of the azo-dyes SD-1 and SDA-2. We have found the optical constants of this mixture before and after illuminating it with polarized UV light. The results obtained confirm the diffusion model explaining the formation of photo-induced order in azo-dye films. We have developed new techniques for fast characterization of twisted nematic LC cells in transmissive and reflective modes. Our techniques are based on the characteristic functions that we have introduced for determining the parameters of non-uniform birefringent media. These characteristic functions are found by simple procedures and can be utilised for simultaneous determination of retardation, its wavelength dispersion, and twist angle, as well as for solving associated optimization problems. The cholesteric LCD possesses some unique properties, such as bistability and good selective scattering; however, it has a disadvantage: a relatively high driving voltage (tens of volts). The way we propose to reduce the driving voltage consists of applying a stack of thin (~1 µm) LC layers. We have studied the ability of a layer of a surface-stabilized ferroelectric liquid crystal coupled with several retardation plates for birefringent color generation.
We have demonstrated that in order to accomplish good color characteristics and high brightness of the display, one or two retardation plates are sufficient.
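The birefringent color generation mentioned above follows from Jones calculus for a retardation plate between polarizers. A minimal sketch with illustrative plate thickness and birefringence values (not the dissertation's measured parameters):

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardation delta (rad)
    and fast-axis azimuth theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    J = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return R @ J @ R.T

P0 = np.array([[1, 0], [0, 0]])       # polarizer along x
P90 = np.array([[0, 0], [0, 1]])      # crossed analyzer along y

def transmission(wavelength_nm, d_nm=1600.0, dn=0.15):
    """Crossed polarizers with one plate at 45 degrees gives
    T = sin^2(pi * d * dn / lambda): wavelength-selective transmission,
    the basis of birefringent color."""
    delta = 2 * np.pi * d_nm * dn / wavelength_nm
    E_in = np.array([1.0, 0.0])
    E_out = P90 @ retarder(delta, np.pi / 4) @ P0 @ E_in
    return float(np.abs(E_out) @ np.abs(E_out))

# Wavelength-dependent transmission -> perceived color
for lam in (450, 550, 650):
    print(lam, round(transmission(lam), 3))
```

Adding further retardation plates multiplies in more Jones matrices, which is how one or two plates can be tuned for good color coordinates and brightness.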
179

Inversion of seismic attributes for petrophysical parameters and rock facies

Shahraeeni, Mohammad Sadegh January 2011 (has links)
Prediction of rock and fluid properties such as porosity, clay content, and water saturation is essential for exploration and development of hydrocarbon reservoirs. Rock and fluid property maps obtained from such predictions can be used for optimal selection of well locations for reservoir development and production enhancement. Seismic data are usually the only source of information available throughout a field that can be used to predict the 3D distribution of properties with appropriate spatial resolution. The main challenge in inferring properties from seismic data is the ambiguous nature of geophysical information. Therefore, any estimate of rock and fluid property maps derived from seismic data must also represent its associated uncertainty. In this study we develop a computationally efficient mathematical technique based on neural networks to integrate measured data and a priori information in order to reduce the uncertainty in rock and fluid properties in a reservoir. The post-inversion (a posteriori) information about rock and fluid properties is represented by the joint probability density function (PDF) of porosity, clay content, and water saturation. In this technique the a posteriori PDF is modeled by a weighted sum of Gaussian PDFs. A so-called mixture density network (MDN) estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. We solve several inverse problems with the MDN, compare results with the Monte Carlo (MC) sampling solution, and show that the MDN inversion technique provides a good estimate of the MC sampling solution. However, the computational cost of training and using the neural network is much lower than that of MC sampling (by more than a factor of 10⁴ in some cases). We also discuss the design, implementation, and training procedure of the MDN, and its limitations in estimating the solution of an inverse problem. In this thesis we focus on data from a deep offshore field in Africa.
Our goal is to apply the MDN inversion technique to obtain maps of petrophysical properties (i.e., porosity, clay content, water saturation) and petrophysical facies from 3D seismic data. Petrophysical facies (i.e., non-reservoir, oil- and brine-saturated reservoir facies) are defined probabilistically based on geological information and values of the petrophysical parameters. First, we investigate the relationship (i.e., the petrophysical forward function) between compressional- and shear-wave velocity and petrophysical parameters. The petrophysical forward function depends on different properties of rocks and varies from one rock type to another. Therefore, after acquisition of well logs or seismic data from a geological setting, the petrophysical forward function must be calibrated with data and observations. The uncertainty of the petrophysical forward function comes from uncertainty in measurements and uncertainty about the type of facies. We present a method to construct the petrophysical forward function with its associated uncertainty from both sources above. The results show that introducing uncertainty in facies improves the accuracy of the petrophysical forward function predictions. Then, we apply the MDN inversion method to solve four different petrophysical inverse problems. In particular, we invert P- and S-wave impedance logs for the joint PDF of porosity, clay content, and water saturation using a calibrated petrophysical forward function. Results show that the posterior PDF of the model parameters provides reasonable estimates of measured well logs. Errors in the posterior PDF are mainly due to errors in the petrophysical forward function. Finally, we apply the MDN inversion method to predict 3D petrophysical properties from attributes of seismic data.
In this application, the inversion objective is to estimate the joint PDF of porosity, clay content, and water saturation at each point in the reservoir, from the compressional- and shear-wave impedance obtained from the inversion of AVO seismic data. Uncertainty in the a posteriori PDF of the model parameters is due to different sources, such as variations in effective pressure, bulk modulus and density of hydrocarbon, uncertainty of the petrophysical forward function, and random noise in recorded data. Results show that the standard deviations of all model parameters are reduced after inversion, which shows that the inversion process provides information about all parameters. We also applied the result of the petrophysical inversion to estimate the 3D probability maps of non-reservoir facies and brine- and oil-saturated reservoir facies. The accuracy of the predicted oil-saturated facies at the well location is good, but due to errors in the petrophysical inversion the predicted non-reservoir and brine-saturated facies are ambiguous. Although the accuracy of results may vary due to different sources of error in different applications, the fast, probabilistic method of solving non-linear inverse problems developed in this study can be applied to invert well logs and large seismic data sets for petrophysical parameters in different applications.
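The MDN output described above is a Gaussian mixture over the petrophysical parameters, so evaluating the posterior is straightforward. The weights, means, and covariances below are invented for illustration; in the MDN they would be produced by the network for each data vector:

```python
import numpy as np

# Illustrative mixture over m = (porosity, clay content, water saturation)
weights = np.array([0.6, 0.4])
means = np.array([[0.20, 0.15, 0.30],
                  [0.10, 0.40, 0.80]])
covs = np.array([np.diag([0.01, 0.01, 0.02]),
                 np.diag([0.02, 0.01, 0.01])])

def gmm_pdf(m):
    """Posterior density p(m | data) as a weighted sum of Gaussians."""
    p = 0.0
    for w, mu, C in zip(weights, means, covs):
        d = m - mu
        norm = np.sqrt((2 * np.pi) ** len(m) * np.linalg.det(C))
        p += w * np.exp(-0.5 * d @ np.linalg.solve(C, d)) / norm
    return p

# The posterior mean is the weight-averaged component mean
posterior_mean = weights @ means
print(posterior_mean)  # [0.16 0.25 0.5]
```

Point estimates (mean, mode) and uncertainty measures (per-parameter standard deviations) all follow from these mixture parameters without any further sampling, which is what makes the MDN so much cheaper than MC.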
180

Development of Technical Nuclear Forensics for Spent Research Reactor Fuel

Sternat, Matthew Ryan 1982- 14 March 2013 (has links)
Pre-detonation technical nuclear forensics techniques for research reactor spent fuel were developed in a collaborative project with Savannah River National Laboratory. An inverse analysis method was employed to reconstruct reactor parameters from a spent fuel sample using results from a radiochemical analysis. In the inverse analysis, a reactor physics code is used as a forward model. Verification and validation of different reactor physics codes was performed for use in the inverse analysis. The verification and validation process consisted of two parts. The first is a variance analysis of Monte Carlo reactor physics burnup simulation results. The codes used in this work are MONTEBURNS and MCNPX/CINDER. Both utilize Monte Carlo transport calculations for reaction rate and flux results. Neither code has a variance analysis that will propagate through depletion steps, so a method to quantify and understand the variance propagation through these depletion calculations was developed. The second verification and validation process consisted of comparing reactor physics code output isotopic compositions to radiochemical analysis results. A sample from an Oak Ridge Research Reactor spent fuel assembly was acquired through a drilling process. This sample was then dissolved in nitric acid and diluted in three different quantities, creating three separate samples. A radiochemical analysis was completed and the results were compared to simulation outputs at different levels of detail. After establishing a forward model, an inverse analysis was developed to reconstruct the burnup, initial uranium isotopic compositions, and cooling time of a research reactor spent fuel sample. A convergence acceleration technique was used that consisted of an analytical calculation to predict burnup and initial 235U and 236U enrichments. The analytic calculation results may also be used on their own or in a database search algorithm.
In this work, a reactor physics code is used as a forward model with the analytic results as initial conditions in a numerical optimization algorithm. In the numerical analysis, the burnup and initial uranium isotopic compositions are reconstructed until the iterative spent fuel characteristics converge with the measured data. Upon convergence of the sample's burnup and initial uranium isotopic composition, the cooling time can be reconstructed. To reconstruct cooling time, the standard decay equation is inverted and solved for time. Two methods were developed. One method uses the converged burnup and initial uranium isotopic compositions in a reactor depletion simulation. The second method uses an isotopic signature that does not decay out of its mass bin and has a simple production chain; an example is 137Cs, which decays into the stable 137Ba. Similar results are achieved with both methods, but extended shutdown time or time away from power results in overprediction of the cooling time. The overprediction of cooling time and the comparison of different burnup-reconstruction isotope results are indicator signatures of extended shutdown or time away from power. Because research reactors operate dynamically in both time and function, detailed power history reconstruction for them is very challenging. Frequent variations in power, repeated shutdowns of variable length, and the experimentation history affect the spectrum with which an individual assembly is burned, such that full reactor parameter reconstruction is difficult. The results from this technical nuclear forensic analysis may be used together with law enforcement and intelligence data and with macroscopic and microscopic sample characteristics in a process called attribution to suggest or exclude possible sources of origin for a sample.
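The second cooling-time method reduces to inverting the decay law for a stable-daughter signature such as 137Cs. A sketch with an assumed half-life value and made-up inventories:

```python
import numpy as np

# Inverting the decay law N(t) = N0 * exp(-lambda * t) for t, using
# 137Cs (half-life ~30.08 y), which decays to stable 137Ba and so
# stays in its mass bin with a simple production chain.
HALF_LIFE_CS137 = 30.08                      # years (assumed value)
lam = np.log(2) / HALF_LIFE_CS137

def cooling_time(n0, n):
    """Years elapsed given initial and measured 137Cs content."""
    return np.log(n0 / n) / lam

# Example: measured inventory is 79.4% of the value at shutdown
t = cooling_time(1.0, 0.794)
print(round(t, 2))   # ~10 years
```

An extended mid-life shutdown inflates the apparent `n0` predicted by the depletion model, which is exactly why this inversion overpredicts the cooling time in that case.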
