61

A fault diagnosis technique for complex systems using Bayesian data analysis

Lee, Young Ki 01 April 2008 (has links)
This research develops a fault diagnosis method for complex systems in the presence of uncertainties and the possibility of multiple solutions. Fault diagnosis is a challenging problem because the data used in diagnosis contain random errors and often systematic errors as well. Furthermore, fault diagnosis is fundamentally an inverse problem, so it inherits the unfavorable characteristics of inverse problems: the existence and uniqueness of an inverse solution are not guaranteed, and the solution may be unstable. The weighted least squares method and its variations are traditionally used for solving inverse problems. However, the existing algorithms often fail to identify multiple solutions when they are present. In addition, the existing algorithms are not capable of selecting variables systematically, so they generally use the full model, which may contain unnecessary variables as well as necessary ones. Ignoring this model uncertainty often gives rise to the so-called smearing effect, in which unnecessary variables are overestimated and necessary variables are underestimated. The proposed method solves the inverse problem using Bayesian inference. An engineering system can be parameterized using state variables. The probability of each state variable is inferred from observations made on the system. A bias in an observation is treated as a variable, and the probability of the bias variable is inferred as well. To take the uncertainty of the model structure into account, multiple Bayesian models are created with various combinations of the state variables and the bias variables. The results from all models are averaged according to how likely each model is. Gibbs sampling is used to approximate the updated probabilities. The method is demonstrated in two applications: the status matching of a turbojet engine and the fault diagnosis of an industrial gas turbine. In the status matching application, only physical faults in the components of the turbojet engine are considered, whereas in the fault diagnosis application sensor biases are considered as well as physical faults. The proposed method is tested under various faulty conditions using simulated measurements. Results show that the proposed method identifies physical faults and sensor biases simultaneously. It is also demonstrated that multiple solutions can be identified. Overall, there is a clear improvement in the ability to identify correct solutions over the full model that contains all state and bias variables.
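
A minimal sketch of the model-averaging idea described above, assuming a toy linear-Gaussian relation between the state/bias variables and the observations. The thesis approximates each model's posterior with Gibbs sampling; the conjugate closed form is used here purely for brevity, and all names and parameters are illustrative:

```python
import itertools
import numpy as np

def model_posterior(H, y, cols, sigma2=1.0, tau2=10.0):
    """Posterior mean and log-evidence for the linear-Gaussian model
    y = H[:, cols] @ x + noise, with an N(0, tau2*I) prior on x.
    (The thesis infers posteriors by Gibbs sampling; the conjugate
    closed form is used here only to keep the sketch short.)"""
    Hs = H[:, cols]
    A = Hs.T @ Hs / sigma2 + np.eye(len(cols)) / tau2  # posterior precision
    mean = np.linalg.solve(A, Hs.T @ y / sigma2)
    r = y - Hs @ mean
    # log-evidence up to a model-independent constant
    log_ev = -0.5 * (r @ r / sigma2 + mean @ mean / tau2
                     + np.linalg.slogdet(A)[1] + len(cols) * np.log(tau2))
    return mean, log_ev

def averaged_estimate(H, y):
    """Average the per-model estimates, weighting each candidate model
    (a subset of the state/bias variables) by how likely it is."""
    n = H.shape[1]
    subsets = [list(c) for k in range(1, n + 1)
               for c in itertools.combinations(range(n), k)]
    log_evs, means = [], []
    for cols in subsets:
        mean, log_ev = model_posterior(H, y, cols)
        full = np.zeros(n)
        full[cols] = mean
        means.append(full)
        log_evs.append(log_ev)
    w = np.exp(np.array(log_evs) - max(log_evs))
    w /= w.sum()
    return w @ np.array(means)  # model-averaged estimate
```

For realistic numbers of candidate variables the exhaustive subset enumeration would of course be replaced by sampling over models.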
62

New algorithms for solving inverse source problems in imaging techniques with applications in fluorescence tomography

Yin, Ke 16 September 2013 (has links)
This thesis is devoted to solving the inverse source problem arising in image reconstruction. In general, the solution is non-unique and the problem is severely ill-posed, so small perturbations, such as noise in the data or modeling error in the forward problem, cause large errors in the computed solution. In practice, the most widely used approach is based on Tikhonov-type regularization, which minimizes a cost function combining a regularization term and a data fitting term. However, because the two tasks, regularization and data fitting, are coupled in Tikhonov regularization, they are difficult to solve jointly, even when each task can be solved efficiently on its own. We propose a method that addresses the two major difficulties, the non-uniqueness of the solution and the fitting of noisy data, separately. First we find a particular solution, called the orthogonal solution, that satisfies the data fitting term. Then we add to it a correction function in the kernel space so that the final solution fulfills the regularization and other physical requirements. The key idea is that the correction function in the kernel has no impact on the data fitting, and the regularization is imposed in a smaller space. Moreover, no parameter is needed to balance the data fitting and regularization terms. As a case study, we apply the proposed method to Fluorescence Tomography (FT), an emerging imaging technique well known for the ill-posedness and low image resolution of existing reconstruction techniques. We demonstrate by theory and examples that the proposed algorithm can drastically improve the computation speed and the image resolution over existing methods.
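
The decoupling described above has a compact linear-algebra analogue. Assuming a matrix forward operator A and a quadratic regularizer ||L x||², one can form the minimum-norm ("orthogonal") solution first and then correct it inside ker(A); the thesis develops this for the fluorescence tomography forward model and more general regularizers, so A, b, and L below are illustrative stand-ins:

```python
import numpy as np

def two_stage_solution(A, b, L):
    """Sketch of the decoupled reconstruction idea: (1) the minimum-norm
    ('orthogonal') solution fitting the data, then (2) a correction in
    the null space of A chosen to minimize a quadratic regularizer
    ||L x||^2 without disturbing the data fit."""
    # Stage 1: particular solution, orthogonal to ker(A)
    x_p = np.linalg.pinv(A) @ b
    # Null-space basis via SVD: right singular vectors beyond rank(A)
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > 1e-10 * s[0]).sum())
    N = Vt[rank:].T                      # columns span ker(A)
    if N.shape[1] == 0:
        return x_p                       # trivial kernel: nothing to correct
    # Stage 2: minimize ||L (x_p + N c)||^2 over c (plain least squares)
    c, *_ = np.linalg.lstsq(L @ N, -L @ x_p, rcond=None)
    return x_p + N @ c
```

Since A @ (N @ c) = 0 by construction, the second stage cannot degrade the data fit, which is why no balancing parameter appears.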
63

Numerical Study Of Regularization Methods For Elliptic Cauchy Problems

Gupta, Hari Shanker 05 1900 (has links) (PDF)
Cauchy problems for elliptic partial differential equations arise in many important applications, such as cardiography, nondestructive testing, heat transfer, and the sonic boom produced by a maneuvering aerofoil. Elliptic Cauchy problems are typically ill-posed: a solution may not exist for some Cauchy data, and even if a solution exists uniquely, it may not depend continuously on the Cauchy data. The ill-posedness causes numerical instability and makes classical numerical methods inappropriate for such problems. For Cauchy problems, research on uniqueness, stability, and efficient numerical methods is of significant interest to mathematicians. The main focus of this thesis is to develop numerical techniques for elliptic Cauchy problems. Elliptic Cauchy problems can be approached as data completion problems: from over-specified Cauchy data on an accessible part of the boundary, one can try to recover the missing data on the inaccessible part of the boundary. The Cauchy problem can then be solved by finding a solution to a well-posed boundary value problem for which the recovered data constitute a boundary condition on the inaccessible part of the boundary. In this thesis, we use a natural linearization approach to transform the linear Cauchy problem into a problem of solving a linear operator equation. We consider this operator in a weaker image space, H⁻¹, which differs from previous works, where the image space of the operator is usually taken to be L². The lower smoothness of the image space makes the problem somewhat more ill-posed, but under such settings we can prove the compactness of the considered operator, and it allows a relaxation of the assumption concerning the noise. The numerical methods that can cope with such ill-posed operator equations are the so-called regularization methods. One prominent example is Tikhonov regularization, which is frequently used in practice and can be viewed as a least-squares tracking of the data with a regularization term. In this thesis we discuss the possibility of improving the reconstruction accuracy of Tikhonov regularization by using an iterative modification, iterated Tikhonov regularization, in which the effect of the penalty term fades away as the iterations proceed. In applying iterated Tikhonov regularization, we find that for severely ill-posed problems such as elliptic Cauchy problems, discretization has such a powerful influence on the accuracy of the regularized solution that the desired accuracy can be achieved only with a suitably chosen discretization level. Thus, regularization by projection, commonly known as self-regularization, is also considered in this thesis; with this method, regularization is achieved by discretization alone, together with an appropriate choice of the discretization level. For all regularization methods, the choice of an appropriate regularization parameter is a crucial issue. For this purpose, we propose the balancing principle, a recently introduced and powerful technique for choosing the regularization parameter. In applying this principle, a balance has to be made between the components of the accuracy estimates related to the convergence rate and to stability.
The main advantage of the balancing principle is that it works adaptively to obtain an appropriate value of the regularization parameter, without using any quantitative knowledge of the convergence rate or stability. The accuracy provided by this adaptive strategy is worse only by a constant factor than what could be achieved if the stability and convergence rates were known. We apply the balancing principle in both the iterated Tikhonov regularization and the self-regularization methods to choose the regularization parameters. In the thesis, we also investigate numerical techniques based on iterated Tikhonov regularization for nonlinear elliptic Cauchy problems. We consider two types of problems: in the first kind, the nonlinear problem can be transformed into a linear problem, while in the second kind linearization is not possible, and for this we propose a special iterative method that differs from methods such as Landweber iteration and Newton-type methods, which are usually based on the calculation of the Fréchet derivative or the adjoint of the equation. Abundant examples are presented in the thesis, illustrating the performance of the proposed regularization methods as well as the balancing principle; these examples also support the theoretical results achieved in the thesis. At the end of the thesis, we describe the sonic boom problem, where we first encountered the ill-posed nonlinear Cauchy problem. This is a very difficult problem, and it provides the motivation for the model problems, which are discussed in the thesis in increasing order of difficulty, ending with the nonlinear problems in Chapter 5. The main results of the dissertation are communicated in the article [35].
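
As a concrete illustration of the iterated Tikhonov scheme discussed above, here is a minimal sketch for a discretized linear operator. The matrix A, the fixed alpha, and the iteration count are illustrative stand-ins; in the thesis the parameter choice is made adaptively by the balancing principle:

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=1e-2, n_iter=10):
    """Iterated Tikhonov regularization: each sweep re-solves the
    Tikhonov problem with the previous iterate as the penalty's
    reference point, so the bias introduced by the penalty term
    fades as the iterations proceed."""
    x = np.zeros(A.shape[1])
    AtA = A.T @ A
    I = np.eye(A.shape[1])
    for _ in range(n_iter):
        # x_{k+1} = argmin ||A x - b||^2 + alpha * ||x - x_k||^2
        x = np.linalg.solve(AtA + alpha * I, A.T @ b + alpha * x)
    return x
```

Setting n_iter=1 recovers ordinary Tikhonov regularization; increasing the iteration count plays a role similar to decreasing the regularization parameter, which is why a stopping/choice rule such as the balancing principle is needed.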
64

Two Inverse Problems In Linear Elasticity With Applications To Force-Sensing And Mechanical Characterization

Reddy, Annem Narayana 12 1900 (has links) (PDF)
Two inverse problems in elasticity are addressed, with motivation from cellular biomechanics. The first application is the computation of holding forces on a cell during its manipulation, and the second is the estimation of a cell's interior elastic mapping (i.e., the inhomogeneous distribution of stiffness) using only boundary forces and displacements. It is clear from recent works that mechanical forces can play an important role in developmental biology. In this regard, we have developed a vision-based force-sensing technique to estimate the forces acting on a cell while it is manipulated. This problem is connected to an inverse problem in elasticity known as Cauchy's problem. We present solution procedures for Cauchy's problem under noisy displacement data, taking geometric nonlinearity into account to capture the large deformations that the mechanisms (grippers) undergo during manipulation. The second inverse problem concerns the elastic mapping of the cell. Recent works in biomechanics have shown that the disease state can alter the gross stiffness of a cell. The pertinent question, therefore, is which portion of the cell (for example, the nucleus, cortex, or ER) has its elastic properties most altered by the disease state. Mathematically, this question (the estimation of the cell's inhomogeneous properties) can be answered by solving an inverse elastic boundary value problem using sets of force-displacement boundary measurements. We address the theoretical question of the number of boundary data sets required to solve this inverse boundary value problem.
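
A deliberately simplified, small-deformation sketch of the force-sensing step: assuming a linear map G from applied forces to observed displacements (for instance assembled from a linear finite element model), holding forces can be estimated from noisy vision-based displacement data by regularized least squares. The thesis works with the geometrically nonlinear problem, so G, u_meas, and alpha here are hypothetical stand-ins:

```python
import numpy as np

def estimate_forces(G, u_meas, alpha=1e-3):
    """Recover forces f from noisy measured displacements u ~ G f
    via Tikhonov-regularized least squares. A linear, small-deformation
    toy only; the thesis accounts for geometric nonlinearity."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ u_meas)
```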
65

Quantitative analysis of algorithms for compressed signal recovery

Thompson, Andrew J. January 2013 (has links)
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled nonadaptive linear measurements taken at a rate proportional to the signal's true information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been established, both theoretically and empirically, that certain optimization algorithms are able to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007), which is the focus of this thesis, is an established CS recovery algorithm which is known to be effective in practice, both in terms of recovery performance and computational efficiency. However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and also there is a need for average-case analysis in order to understand the behaviour of the algorithm in practice. In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed. Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting the realistic average-case assumption that the underlying signal and measurement matrix are independent. We obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing the notion of fixed points, we extend our analysis to the variable stepsize Normalised IHT (NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous results within this framework shows a substantial quantitative improvement. We also extend our analysis to a related algorithm which exploits the assumption that the underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010). We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery is guaranteed. Our results, which are the first in the phase transition framework for tree-based CS, show a further significant improvement over results for the standard sparsity model. We also propose a dynamic programming algorithm which is guaranteed to compute an exact tree projection in low-order polynomial time.
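
For reference, the IHT iteration analysed in the thesis has a very short implementation: a gradient step on ||y - A x||² followed by hard thresholding to the s largest-magnitude entries. This sketch uses a fixed unit stepsize (the basic scheme of Blumensath and Davies, 2007); NIHT would adapt mu at each iteration:

```python
import numpy as np

def iht(A, y, s, mu=1.0, n_iter=200):
    """Iterative Hard Thresholding for y = A x with x s-sparse."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + mu * A.T @ (y - A @ x)     # gradient step on the residual
        keep = np.argsort(np.abs(g))[-s:]  # indices of the s largest entries
        x = np.zeros_like(x)
        x[keep] = g[keep]                  # hard thresholding
    return x
```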
66

Variational Estimators in Statistical Multiscale Analysis

Li, Housen 17 February 2016 (has links)
No description available.
67

Multi-material nanoindentation simulations of viral capsids

Subramanian, Bharadwaj 10 November 2010 (has links)
An understanding of the mechanical properties of viral capsids (protein assemblies forming shell containers) has become necessary given their perceived use as nano-materials for targeted drug delivery. In this thesis, a heterogeneous, spatially detailed model of the viral capsid is considered. This model takes into account the increased degrees of freedom between the capsomers (capsid sub-structures) and the interactions between them to better reflect their deformation properties. A spatially realistic finite element multi-domain decomposition of viral capsid shells is also generated from atomistic PDB (Protein Data Bank) information, and non-linear continuum elastic simulations are performed. These results are compared to homogeneous shell simulation results to bring out the importance of non-homogeneous material properties in determining the deformation of the capsid. Finally, multiscale methods in structural analysis are reviewed to assess their potential application to the nanoindentation of viral capsids.
68

Iterative projection algorithms and applications in x-ray crystallography

Lo, Victor Lai-Xin January 2011 (has links)
X-ray crystallography is a technique for determining the structure (the positions of atoms in space) of molecules. It is a well-developed technique and is applied routinely to both small inorganic and large organic molecules. However, the determination of the structures of large biological molecules by x-ray crystallography can still be an experimentally and computationally expensive task. The data in an x-ray experiment are the amplitudes of the Fourier transform of the electron density in the crystalline specimen. The structure determination problem in x-ray crystallography is therefore identical to a phase retrieval problem in image reconstruction, for which iterative transform algorithms are a common solution method. This thesis is concerned with iterative projection algorithms, a generalized and more powerful version of iterative transform algorithms, and their application to macromolecular x-ray crystallography. A detailed study is made of iterative projection algorithms, including their properties, convergence, and implementations. Two applications to macromolecular crystallography are then investigated. The first concerns the reconstruction of binary images and the application of iterative projection algorithms to determining molecular envelopes from x-ray solvent contrast variation data; an effective method for determining molecular envelopes is developed. The second concerns the use of symmetry constraints and the application of iterative projection algorithms to the ab initio determination of macromolecular structures from crystal diffraction data. The algorithm is tested on an icosahedral virus and a protein tetramer. The results indicate that ab initio phasing is feasible for structures containing 4-fold or 5-fold non-crystallographic symmetry using these algorithms, provided an estimate of the molecular envelope is available.
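
The projection structure underlying such algorithms can be illustrated with the simplest member of the family, error reduction: alternately project onto the set of images with the measured Fourier magnitudes and onto a real-space constraint set. The thesis studies more powerful iterative projection algorithms and crystallographic constraints, so this toy with a support/positivity constraint is indicative only:

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=500, seed=0):
    """Alternating-projection phase retrieval (error reduction).
    magnitudes: measured Fourier moduli; support: 0/1 mask of where
    the density is allowed to be non-zero."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape) * support   # random initial density
    for _ in range(n_iter):
        # Projection 1: keep the phases, impose the measured moduli
        X = np.fft.fftn(x)
        X = magnitudes * np.exp(1j * np.angle(X))
        x = np.fft.ifftn(X).real
        # Projection 2: enforce support and non-negativity
        x = np.clip(x * support, 0, None)
    return x
```

Error reduction stagnates easily in local minima, which is one motivation for the generalized projection schemes (difference-map/HIO-type updates) studied in the thesis.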
69

Chemnitz Symposium on Inverse Problems 2014

02 October 2014 (has links) (PDF)
Our symposium will bring together experts from the German and international 'Inverse Problems Community' and young scientists. The focus will be on ill-posedness phenomena, regularization theory and practice, and on the analytical, numerical, and stochastic treatment of applied inverse problems in natural sciences, engineering, and finance.
70

Trees and graphs : congestion, polynomials and reconstruction

Law, Hiu-Fai January 2011 (has links)
Spanning tree congestion was defined by Ostrovskii (2004) as a measure of how well a network can perform if only minimal connection can be maintained. We compute the parameter for several families of graphs. In particular, by partitioning a hypercube into pieces with almost optimal edge-boundaries, we give tight estimates of the parameter thereby disproving a conjecture of Hruska (2008). For a typical random graph, the parameter exhibits a zigzag behaviour reflecting the feature that it is not monotone in the number of edges. This motivates the study of the most congested graphs where we show that any graph is close to a graph with small congestion. Next, we enumerate independent sets. Using the independent set polynomial, we compute the extrema of averages in trees and graphs. Furthermore, we consider inverse problems among trees and resolve a conjecture of Wagner (2009). A result in a more general setting is also proved which answers a question of Alon, Haber and Krivelevich (2011). After briefly considering polynomial invariants of general graphs, we specialize into trees. Three levels of tree distinguishing power are exhibited. We show that polynomials which do not distinguish rooted trees define typically exponentially large equivalence classes. On the other hand, we prove that the rooted Ising polynomial distinguishes rooted trees and that the Negami polynomial determines the subtree polynomial, strengthening results of Bollobás and Riordan (2000) and Martin, Morin and Wagner (2008). The top level consists of the chromatic symmetric function and it is proved to be a complete invariant for caterpillars.
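
As a small illustration of the kind of tree computation involved, the independent set (independence) polynomial of a tree can be evaluated by dynamic programming over a rooted orientation, tracking for each vertex the polynomial over independent sets that include or exclude it. The sketch below is illustrative and not taken from the thesis:

```python
def padd(p, q):
    """Add two polynomials given as coefficient lists."""
    n = max(len(p), len(q))
    return [(p + [0] * n)[i] + (q + [0] * n)[i] for i in range(n)]

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def tree_independence_polynomial(children, root=0):
    """I(T, x) = sum_k i_k x^k, where i_k counts independent sets of
    size k; 'children' is an adjacency list of the tree rooted at 'root'."""
    def solve(v):
        inc, exc = [0, 1], [1]            # v included (x), v excluded (1)
        for c in children[v]:
            ci, ce = solve(c)
            inc = pmul(inc, ce)           # v included: children excluded
            exc = pmul(exc, padd(ci, ce)) # v excluded: children free
        return inc, exc
    return padd(*solve(root))

# Path on 3 vertices (0-1-2, rooted at 1): 1 + 3x + x^2
print(tree_independence_polynomial({1: [0, 2], 0: [], 2: []}, root=1))
```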
