31

A survey on linearized method for inverse wave equations.

January 2012 (has links)
In this thesis we mainly discuss a numerical method that is very valuable for solving a class of inverse problems for wave equations: the linearized method. Before introducing this method, we first discuss some important features of wave equations, including four typical wave-equation models, their fundamental and general solutions, and the properties of those solutions. Next, in the second part of the thesis, we introduce the model and its inverse problem of determining the coefficient c in the wave equation [formula omitted in the original record]. The main idea of the linearized method is to decompose the velocity c into two parts, c₁ and c₂, satisfying the relation [formula omitted], where c₁ is a small perturbation; the solution u can then be written linearly as u = u₀ + u₁, where u₀ and u₁ are the solutions of a one-dimensional and a two-dimensional problem, respectively. Accordingly, we use a finite difference method and a Fourier transform method to solve these problems, recover c₁ and c₂, and finally obtain the coefficient c. At the end of the thesis, numerical experiments are carried out to verify the effectiveness and reliability of the linearized method. / In this thesis, we will discuss a valuable numerical method, a linearized method for solving a certain class of inverse problems for wave equations. / Before introducing the above-mentioned method, we shall discuss some important features of wave equations in the first part of the thesis, covering four typical mathematical models of wave equations, their fundamental solutions, general solutions, and the properties of those solutions. / Next, we shall present the model and its inverse problem of recovering the coefficient c, representing the propagation velocity of the wave, from the wave equation [mathematical formula omitted]. The linearized method divides the velocity c into two parts, c₀ and c₁, which satisfy the relation [mathematical formula omitted], where c₁ is a tiny perturbation. On the other hand, the solution u can be represented in the linear form u = u₀ + u₁, where u₀ and u₁ are the solutions to a one-dimensional problem and a two-dimensional problem, respectively. Accordingly, we use a finite difference method and a Fourier transform method to solve the one-dimensional forward problem and the two-dimensional inverse problem, respectively; this yields c₀ and c₁, from which we recover the velocity c. In the numerical experiments, we test the proposed linearized numerical method on some special examples and demonstrate its effectiveness and robustness. / Xu, Xinyi. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 64-65). / Abstracts also in Chinese.
Chapter 1 --- Fundamental aspects of wave equations --- p.6
  Chapter 1.1 --- Introduction --- p.6
    Chapter 1.1.1 --- Four important wave equations --- p.6
    Chapter 1.1.2 --- General form of wave equations --- p.10
  Chapter 1.2 --- Fundamental solutions --- p.11
    Chapter 1.2.1 --- Fourier transform --- p.11
    Chapter 1.2.2 --- Fundamental solution in three-dimensional space --- p.14
    Chapter 1.2.3 --- Fundamental solution in two-dimensional space --- p.16
  Chapter 1.3 --- General solution --- p.19
    Chapter 1.3.1 --- One-dimensional wave equations --- p.19
    Chapter 1.3.2 --- Two and three dimensional wave equations --- p.26
    Chapter 1.3.3 --- n dimensional case --- p.28
  Chapter 1.4 --- Properties of solutions to wave equation --- p.31
    Chapter 1.4.1 --- Properties of Kirchhoff’s solutions --- p.31
    Chapter 1.4.2 --- Properties of Poisson’s solutions --- p.33
    Chapter 1.4.3 --- Decay of the solutions to wave equation --- p.34
Chapter 2 --- Linearized method for wave equations --- p.36
  Chapter 2.1 --- Introduction --- p.36
    Chapter 2.1.1 --- Background --- p.36
    Chapter 2.1.2 --- Forward and inverse problem --- p.38
  Chapter 2.2 --- Basic ideas of Linearized Method --- p.39
  Chapter 2.3 --- Theoretical analysis on linearized method --- p.41
    Chapter 2.3.1 --- One-dimensional forward problem --- p.42
    Chapter 2.3.2 --- Two-dimensional forward problem --- p.43
    Chapter 2.3.3 --- Existence and uniqueness of solutions to the inverse problem --- p.45
  Chapter 2.4 --- Numerical analysis on linearized method --- p.45
    Chapter 2.4.1 --- Discrete analog of the inverse problem --- p.46
    Chapter 2.4.2 --- Fourier transform --- p.48
    Chapter 2.4.3 --- Direct methods for inverse and forward problems --- p.52
  Chapter 2.5 --- Numerical Simulation --- p.54
    Chapter 2.5.1 --- Special Case --- p.54
    Chapter 2.5.2 --- General Case --- p.59
Chapter 3 --- Conclusion --- p.63
Bibliography --- p.64
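The record above elides the thesis's equations, so purely for orientation, here is a generic Born-type linearization of a wave equation about an assumed constant background velocity c₀; the thesis's exact model and its one-/two-dimensional splitting may differ.

```latex
% Generic sketch (assumed constant background c_0; not the thesis's exact model).
\[
  \frac{1}{c^{2}(x)}\,\partial_{tt}u - \Delta u = f, \qquad
  c(x) = c_0 + c_1(x), \qquad |c_1| \ll c_0 .
\]
% Writing u = u_0 + u_1 and keeping first-order terms in c_1 gives
\[
  \frac{1}{c_0^{2}}\,\partial_{tt}u_0 - \Delta u_0 = f, \qquad
  \frac{1}{c_0^{2}}\,\partial_{tt}u_1 - \Delta u_1
    = \frac{2\,c_1(x)}{c_0^{3}}\,\partial_{tt}u_0 ,
\]
% so the background field u_0 is solved first, and the perturbation c_1 is then
% recovered from the equation for u_1, which is linear in c_1.
```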
32

Some efficient numerical methods for inverse problems. / CUHK electronic theses & dissertations collection

January 2008 (has links)
Inverse problems are mathematically and numerically very challenging due to their inherent ill-posedness, in the sense that a small perturbation of the data may cause an enormous deviation of the solution. Regularization methods have been established as the standard approach for their stable numerical solution thanks to the ground-breaking work of the late Russian mathematician A.N. Tikhonov. However, existing studies mainly focus on general-purpose regularization procedures rather than exploiting the mathematical structure of specific problems to design efficient numerical procedures. Moreover, the stochastic nature of data noise and model uncertainties is largely ignored, and its effect on the inverse solution is not assessed. This thesis attempts to design some problem-specific efficient numerical methods for the Robin inverse problem and to quantify the associated uncertainties. It consists of two parts: Part I discusses deterministic methods for the Robin inverse problem, while Part II studies stochastic numerics for uncertainty quantification of inverse problems and its implication for the choice of the regularization parameter in Tikhonov regularization. / Key Words: Robin inverse problem, variational approach, preconditioning, Modica-Mortola functional, spectral stochastic approach, Bayesian inference approach, augmented Tikhonov regularization method, regularization parameter, uncertainty quantification, reduced-order modeling / Part I considers the variational approach for reconstructing smooth and nonsmooth coefficients by minimizing a certain functional, and its discretization by the finite element method. We propose the L2-norm regularization and the Modica-Mortola functional from phase transition for smooth and nonsmooth coefficients, respectively. The mathematical properties of the formulations and their discrete analogues, e.g. existence of a minimizer, stability (compactness), convexity and differentiability, are studied in detail. The convergence of the finite element approximation is also established. The nonlinear conjugate gradient method and the concave-convex procedure are suggested for solving the discrete optimization problems. An efficient preconditioner based on the Sobolev inner product is proposed for justifying the gradient descent and for accelerating its convergence. / Part II studies two promising methodologies, i.e. the spectral stochastic approach (SSA) and the Bayesian inference approach, for uncertainty quantification of inverse problems. The SSA extends the variational approach to the stochastic context by generalized polynomial chaos expansion, and addresses inverse problems under uncertainties, e.g. random data noise and stochastic material properties. The well-posedness of the stochastic variational formulation is studied, and the convergence of its stochastic finite element approximation is established. Bayesian inference provides a natural framework for uncertainty quantification of a specific solution by considering an ensemble of inverse solutions consistent with the given data. To reduce its computational cost for nonlinear inverse problems, incurred by repeated evaluations of the forward model, we propose two accelerating techniques that construct accurate and inexpensive surrogate models, i.e. the proper orthogonal decomposition from reduced-order modeling and the stochastic collocation method from uncertainty propagation.
By observing its connection with Tikhonov regularization, we propose two functionals of Tikhonov type that could automatically determine the regularization parameter and accurately detect the noise level. We establish the existence of a minimizer, and the convergence of an alternating iterative algorithm. This opens an avenue for designing fully data-driven inverse techniques. / This thesis considers deterministic and stochastic numerics for inverse problems associated with elliptic partial differential equations. The specific inverse problem under consideration is the Robin inverse problem: estimating the Robin coefficient of a Robin boundary condition from boundary measurements. It arises in diverse industrial applications, e.g. thermal engineering and nondestructive evaluation, where the coefficient profiles material properties on the boundary. / Jin, Bangti. / Adviser: Zou Jun. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3541. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 174-187). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
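As a concrete but deliberately generic illustration of the Tikhonov regularization this record builds on, the sketch below solves a toy discrete linear inverse problem with a fixed, hand-picked regularization parameter; the thesis itself treats the nonlinear Robin inverse problem with finite elements and an automatically determined parameter, which this toy does not attempt.

```python
import numpy as np

# Minimal Tikhonov sketch for a generic ill-conditioned linear system A x = b
# with noisy data b.  Everything here (kernel, noise level, lambda) is an
# illustrative assumption, not the thesis's Robin-problem formulation.
rng = np.random.default_rng(0)
n = 100
idx = np.arange(n)
A = np.exp(-0.1 * (idx[:, None] - idx[None, :]) ** 2)   # smoothing (ill-conditioned) kernel
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)          # noisy data

lam = 1e-2                                              # fixed regularization parameter
# Solve the regularized normal equations (A^T A + lam I) x = A^T b; without the
# lam I term the reconstruction would be dominated by amplified noise.
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print("relative error:", np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))
```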
33

Joint recovery of high-dimensional signals from noisy and under-sampled measurements using fusion penalties

Poddar, Sunrita 01 December 2018 (has links)
The presence of missing entries poses a hindrance to data analysis and interpretation. The missing entries may occur due to a variety of reasons, such as sensor malfunction, limited acquisition time or unavailability of information. In this thesis, we present algorithms to analyze and complete data which contain several missing entries. We consider the recovery of a group of signals, given a few under-sampled and noisy measurements of each signal. This involves solving ill-posed inverse problems, since the number of available measurements is considerably smaller than the dimensionality of the signal that we aim to recover. In this work, we consider different data models to enable joint recovery of the signals from their measurements, as opposed to the independent recovery of each signal. This prior knowledge makes the inverse problems well-posed. While compressive sensing techniques have been proposed for low-rank or sparse models, such techniques have not been studied to the same extent for other models, such as data appearing in clusters or lying on a low-dimensional manifold. In this work, we consider several data models arising in different applications, and present some theoretical guarantees for the joint reconstruction of the signals from few measurements. Our proposed techniques make use of fusion penalties, which are regularizers that promote solutions with similarity between certain pairs of signals. The first model that we consider is that of points lying on a low-dimensional manifold, embedded in a high-dimensional ambient space. This model is apt for describing a collection of signals, each of which is a function of only a few parameters; the manifold dimension is equal to the number of parameters. We propose a technique to recover a series of such signals, given a few measurements for each signal. We demonstrate this in the context of dynamic Magnetic Resonance Imaging (MRI) reconstruction, where only a few Fourier measurements are available for each time frame. A novel acquisition scheme enables us to detect the neighbours of each frame on the manifold. We then recover each frame by enforcing similarity with its neighbours. The proposed scheme is used to enable fast free-breathing cardiac and speech MRI scans. Next, we consider the recovery of curves/surfaces from a few sampled points. We model the curves as the zero-level set of a trigonometric polynomial, whose bandwidth controls the complexity of the curve. We present theoretical results for the minimum number of samples required to uniquely identify the curve. We show that the null-space vectors of high-dimensional feature maps of these points can be used to recover the curve. The method is demonstrated on the recovery of the structure of DNA filaments from a few clicked points. This idea is then extended to recover data lying on a high-dimensional surface from few measurements. The formulated algorithm has similarities to our algorithm for recovering points on a manifold. Hence, we apply the above ideas to the cardiac MRI reconstruction problem, and are able to show better image quality with reduced computational complexity. Finally, we consider the case where the data is organized into clusters. The goal is to recover the true clustering of the data, even when a few features of each data point are unknown. We propose a fusion-penalty based optimization problem to cluster data reliably in the presence of missing entries, and present theoretical guarantees for successful recovery of the correct clusters.
We next propose a computationally efficient algorithm to solve a relaxation of this problem. We demonstrate that our algorithm reliably recovers the true clusters in the presence of large fractions of missing entries on simulated and real datasets. This work thus results in several theoretical insights and solutions to different practical problems which involve reconstructing and analyzing data with missing entries. The fusion penalties that are used in each of the above models are obtained directly as a result of model assumptions. The proposed algorithms show very promising results on several real datasets, and we believe that they are general enough to be easily extended to several other practical applications.
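To make the notion of a fusion penalty concrete, here is a toy sketch under assumed ingredients (a quadratic fusion penalty, made-up signals, random sampling masks, a chain of neighbours): it jointly recovers a few signals from partial noisy samples by penalizing differences between assumed neighbours. The thesis's actual penalties and applications (manifold-structured MRI frames, curve recovery, clustering) are considerably more elaborate.

```python
import numpy as np

# Toy joint recovery with a quadratic fusion penalty:
#   minimize 0.5*||M.(X - Y)||^2 + 0.5*lam * sum_{(i,j) in E} ||x_i - x_j||^2
# where M masks the observed entries and E is an assumed neighbour graph.
rng = np.random.default_rng(1)
n, k = 64, 5                                   # signal length, number of signals
t = np.linspace(0, 1, n)
X_true = np.stack([np.sin(2 * np.pi * (2 + 0.1 * i) * t) for i in range(k)], axis=1)

masks = rng.random((n, k)) < 0.4               # ~40% of entries observed per signal
Y = np.where(masks, X_true + 0.05 * rng.standard_normal((n, k)), 0.0)

edges = [(i, i + 1) for i in range(k - 1)]     # assumed neighbour structure (a chain)
lam, step = 1.0, 0.2
X = np.zeros((n, k))
for _ in range(500):
    grad = masks * (X - Y)                     # data-fidelity gradient
    for i, j in edges:                         # fusion-penalty gradient
        d = X[:, i] - X[:, j]
        grad[:, i] += lam * d
        grad[:, j] -= lam * d
    X -= step * grad                           # plain gradient descent

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```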
34

MICROWAVE IMAGING OF BIOLOGICAL TISSUES: applied toward breast tumor detection

Gunnarsson, Tommy January 2007 (has links)
Microwave imaging is an efficient diagnostic modality for non-invasively visualizing dielectric contrasts of non-metallic bodies. Interest in this field has increased during the last decades. Many application areas in biomedicine have been considered, most recently breast tumor detection using microwave imaging.

Many groups are working in the field at the moment, for several reasons. Breast cancer is a major global health problem for women: it is the second most common form of cancer in women and causes 0.3 % of yearly female deaths in Sweden. Medical imaging is considered the most effective way of diagnosing breast tumors, with X-ray mammography the dominating technique. However, this imaging modality still suffers from some limitations. Many women, mostly young ones, have radiographically dense breasts, meaning that the breast tissue contains a high proportion of fibroglandular tissue. In this case the density is very similar to that of a breast tumor and diagnosis is very difficult. Alternative modalities such as Magnetic Resonance Imaging (MRI) with contrast enhancement and ultrasound imaging are then used, but those are not suitable for large-scale screening programs. Another limitation is the false-negative and false-positive rates of mammography: in general 5–15 % of tumors are not detected, and many cases have to go through a breast biopsy to verify a tumor diagnosis. Finally, mammography uses breast compression, which is sometimes painful, as well as ionizing X-rays. The big potential of microwave imaging lies in the reported high contrast in complex permittivity between fibroglandular and tumor tissues in the breast, and in the fact that it is a non-ionizing method that will probably be rather inexpensive.

The goal of this work is to develop a microwave imaging system able to reconstruct quantitative images of the female breast. Within the frame of this goal, this Licentiate thesis contains a brief review of ongoing research in the field of microwave imaging of biological tissues, with the major focus on the breast tumor application. Both imaging algorithms and experimental setups are included. A feasibility study is performed to analyze what response levels, in terms of signal properties, could be expected in a breast tumor detection application. Also, the usability of a 3D microwave propagation simulator (QW3D) in the setup development is investigated. This is done using a simple antenna setup with a breast phantom and different tumor positions. From those results it is clear that strong responses are obtained in the presence of a tumor, and the diffracted responses give strong information about inhomogeneities inside the breast. The second part of this Licentiate thesis is done in collaboration between Mälardalen University and Supélec. Using the existing planar 2.45 GHz microwave camera and the iterative non-linear Newton-Kantorovich code, developed at the Département de Recherches en Electromagnétisme (DRE) at Supélec, as a starting point, a new platform for both real-time qualitative imaging and quantitative imaging of inhomogeneous objects is investigated. The focus is on breast tumor detection. For the moment, the tomographic performance of the planar camera is verified in simulations through a comparison with other setups. Good calibration is observed, but experimental work concerning phantom development etc. is still needed before experimental results on breast tumor detection can be obtained.
35

The inverse problem of fiber Bragg gratings /

Jin, Hai, January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (p. 140-144).
37

Bayesian inference for source determination in the atmospheric environment

Keats, William Andrew January 2009 (has links)
In the event of a hazardous release (chemical, biological, or radiological) in an urban environment, monitoring agencies must have the tools to locate and characterize the source of the emission in order to respond and minimize damage. Given a finite and noisy set of concentration measurements, determining the source location, strength and time of release is an ill-posed inverse problem. We treat this problem using Bayesian inference, a framework under which uncertainties in modelled and measured concentrations can be propagated, in a consistent, rigorous manner, toward a final probabilistic estimate for the source. The Bayesian methodology operates independently of the chosen dispersion model, meaning it can be applied equally well to problems in urban environments, at regional scales, or at global scales. Both Lagrangian stochastic (particle-tracking) and Eulerian (fixed-grid, finite-volume) dispersion models have been used successfully. Calculations are accomplished efficiently by using adjoint (backward) dispersion models, which reduces the computational effort required from calculating one [forward] plume per possible source configuration to calculating one [backward] plume per detector. Markov chain Monte Carlo (MCMC) is used to efficiently sample from the posterior distribution for the source parameters; both the Metropolis-Hastings and hybrid Hamiltonian algorithms are used. In this thesis, four applications falling under the rubric of source determination are addressed: dispersion in highly disturbed flow fields characteristic of built-up (urban) environments; dispersion of a nonconservative scalar over flat terrain in a statistically stationary and horizontally homogeneous (turbulent) wind field; optimal placement of an auxiliary detector using a decision-theoretic approach; and source apportionment of particulate matter (PM) using a chemical mass balance (CMB) receptor model. For the first application, the data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. For the second and third applications, the background wind and terrain conditions are based on those encountered during the Project Prairie Grass field experiment; mean concentration and turbulent scalar flux data are synthesized using a Lagrangian stochastic model where necessary. In the fourth and final application, Bayesian source apportionment results are compared to the US Environmental Protection Agency's standard CMB model using a test case involving PM data from Fresno, California. For each of the applications addressed in this thesis, combining Bayesian inference with appropriate computational techniques results in a computationally efficient methodology for performing source determination.
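For readers unfamiliar with the sampling machinery, the sketch below runs a random-walk Metropolis-Hastings chain on a made-up one-dimensional source-determination problem: the detector positions, the toy Gaussian "dispersion" kernel and the flat prior are all illustrative assumptions, whereas the thesis couples MCMC with real adjoint Lagrangian stochastic and Eulerian dispersion models.

```python
import numpy as np

# Toy Metropolis-Hastings sketch: infer a source location and strength from
# noisy detector readings, using an assumed Gaussian kernel as the forward map.
rng = np.random.default_rng(2)
detectors = np.linspace(0.0, 10.0, 8)                   # assumed detector positions

def forward(loc, strength):                             # toy dispersion model
    return strength * np.exp(-0.5 * (detectors - loc) ** 2)

true_loc, true_q, sigma = 4.2, 3.0, 0.05
data = forward(true_loc, true_q) + sigma * rng.standard_normal(detectors.size)

def log_post(theta):                                    # Gaussian likelihood, flat box prior
    loc, q = theta
    if not (0.0 <= loc <= 10.0 and 0.0 <= q <= 10.0):
        return -np.inf
    r = data - forward(loc, q)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta = np.array([5.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:             # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                      # discard burn-in
print("posterior mean (location, strength):", samples.mean(axis=0))
```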
39

Inverse Problems in Portfolio Selection: Scenario Optimization Framework

Bhowmick, Kaushiki 10 1900 (has links)
A number of researchers have proposed several Bayesian methods for portfolio selection, which combine statistical information from financial time series with the prior beliefs of the portfolio manager, in an attempt to reduce the impact of estimation errors in distribution parameters on the portfolio selection process and the effect of these errors on the performance of 'optimal' portfolios on out-of-sample data. This thesis seeks to reverse the direction of this process, inferring portfolio managers' probabilistic beliefs about future distributions based on the portfolios that they hold. We refer to the process of portfolio selection as the forward problem and the process of retrieving the implied probabilities, given an optimal portfolio, as the inverse problem. We attempt to solve the inverse problem in a general setting by using a finite set of scenarios. Using a discrete-time framework, we can retrieve probabilities associated with each of the scenarios, which reveal the views of the portfolio manager implicit in the choice of a portfolio considered optimal. We conduct the implied-views analysis for portfolios selected using expected utility maximization, where the investor's utility function is a globally non-optimal concave function, and in the mean-variance setting with the covariance matrix assumed to be given. We then use the models developed for the inverse problem on empirical data to retrieve the views implicit in a given portfolio, and attempt to determine whether incorporating these views in portfolio selection improves portfolio performance out of sample.
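A minimal sketch of the inverse direction described above, under assumed ingredients (a CARA utility, four made-up scenarios, two assets): recover scenario probabilities consistent with the first-order optimality conditions of a portfolio taken as optimal. With more scenarios than assets these conditions do not pin the probabilities down uniquely, so the nonnegative least-squares step below returns one consistent distribution rather than "the" manager's views; the thesis's formulation is more general.

```python
import numpy as np
from scipy.optimize import minimize_scalar, nnls

# Toy inverse problem: infer scenario probabilities p from a portfolio w*
# assumed optimal for expected-utility maximization.  The CARA utility,
# scenario returns and probabilities are illustrative assumptions.
R = np.array([[1.10, 0.95],        # gross returns of two assets in four scenarios
              [0.92, 1.08],
              [1.03, 1.01],
              [0.97, 1.02]])
p_true = np.array([0.3, 0.3, 0.2, 0.2])
a = 2.0                            # CARA risk aversion, u(x) = -exp(-a x)

def u_prime(x):
    return a * np.exp(-a * x)

# Forward step (to generate test data): optimal fully-invested weight w1.
def neg_expected_utility(w1):
    m = w1 * R[:, 0] + (1.0 - w1) * R[:, 1]
    return np.sum(p_true * np.exp(-a * m))          # equals -E[u(portfolio return)]

w1 = minimize_scalar(neg_expected_utility, bounds=(-10.0, 10.0), method="bounded").x
w_star = np.array([w1, 1.0 - w1])

# Inverse step: first-order conditions  sum_s p_s u'(w*.r_s) r_s = lam * 1
# together with sum_s p_s = 1, solved for [p_1..p_4, lam] >= 0 by NNLS.
G = (u_prime(R @ w_star)[:, None] * R).T            # 2 x 4 matrix of u'(w.r_s) r_s
M = np.zeros((3, 5))
M[:2, :4] = G
M[:2, 4] = -1.0                                     # moves lam to the left-hand side
M[2, :4] = 1.0                                      # probabilities sum to one
b = np.array([0.0, 0.0, 1.0])
z, residual = nnls(M, b)
p_implied = z[:4] / z[:4].sum()

print("assumed-optimal portfolio:", np.round(w_star, 3))
print("one implied probability vector:", np.round(p_implied, 3))
print("first-order-condition residual:", residual)  # ~0: p_implied is consistent
```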
40

Minimum I-divergence Methods for Inverse Problems

Choi, Kerkil 23 November 2005 (has links)
Problems of estimating nonnegative functions from nonnegative data induced by nonnegative mappings are ubiquitous in science and engineering. We address such problems by minimizing an information-theoretic discrepancy measure, namely Csiszár's I-divergence, between the collected data and hypothetical data induced by an estimate. Our applications can be summarized along the following three lines: 1) Deautocorrelation: Deautocorrelation involves recovering a function from its autocorrelation. Deautocorrelation can be interpreted as phase retrieval, in that recovering a function from its autocorrelation is equivalent to retrieving the Fourier phases from just the corresponding Fourier magnitudes. Schulz and Snyder invented a minimum I-divergence algorithm for phase retrieval. We perform a numerical study concerning the convergence of their algorithm to local minima. X-ray crystallography is a method for finding the interatomic structure of a crystallized molecule. X-ray crystallography problems can be viewed as deautocorrelation problems from aliased autocorrelations, due to the periodicity of the crystal structure. We derive a modified version of the Schulz-Snyder algorithm for application to crystallography. Furthermore, we prove that our modified version can theoretically preserve special symmorphic group symmetries that some crystals possess. We quantify the impact of noise via several error metrics as the signal-to-noise ratio changes. Furthermore, we propose penalty methods using Good's roughness and total variation for alleviating roughness in estimates caused by noise. 2) Deautoconvolution: Deautoconvolution involves finding a function from its autoconvolution. We derive an iterative algorithm that attempts to recover a function from its autoconvolution via minimizing I-divergence. Various theoretical properties of our deautoconvolution algorithm are derived. 3) Linear inverse problems: Various linear inverse problems can be described by the Fredholm integral equation of the first kind. We address two such problems via minimum I-divergence methods, namely the inverse blackbody radiation problem, and the problem of estimating an input distribution to a communication channel (particularly Rician channels) that would create a desired output. Penalty methods are proposed for dealing with the ill-posedness of the inverse blackbody problem.
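To give a flavour of minimum I-divergence iterations in the simplest setting, the sketch below applies the classical multiplicative (EM / Richardson-Lucy-type) update to a generic nonnegative linear model with synthetic data; it is not the Schulz-Snyder phase-retrieval or deautoconvolution algorithm itself.

```python
import numpy as np

# Multiplicative update for minimizing Csiszar's I-divergence
#   I(b || A x) = sum_i [ b_i log(b_i / (A x)_i) - b_i + (A x)_i ]
# over nonnegative x, for a generic nonnegative linear model A x ~ b.
rng = np.random.default_rng(4)
m, n = 80, 40
A = rng.random((m, n))                      # nonnegative forward mapping (assumed)
x_true = rng.random(n)
b = A @ x_true                              # noiseless nonnegative data

x = np.ones(n)                              # positive initial guess
col_sums = A.T @ np.ones(m)
for _ in range(500):
    x *= (A.T @ (b / (A @ x))) / col_sums   # multiplicative I-divergence update

Ax = A @ x
idiv = np.sum(b * np.log(b / Ax) - b + Ax)
print("I-divergence after 500 iterations:", idiv)
```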
