191

Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography

Coban, Sophia January 2017 (has links)
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap with mathematical reconstruction algorithms and analysis approaches applied to practical CT problems. The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied, which are relevant to both mathematicians and experimental scientists. These include the variation in quality of the reconstructed volume as the dose is reduced, and the implementation of the level set evolution method, used as part of a simultaneous reconstruction and segmentation technique. The work shows that the assessment of physical attributes results in more accurate conclusions. Furthermore, this approach allows for further analysis into interesting questions in CT. This theme is continued throughout the thesis. Recent results in compressive sensing (CS) have gained attention in the CT community as they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. The literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity-regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting, and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, similar to CS. Using this connection, one can determine the sufficient amount of measurements to collect from the sparsity of the image alone. A link was found in a previous study using simulated data, and the work is repeated here with experimental data, where the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the sufficient amount of data for a successful sparsity-regularized reconstruction. Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation for their cause or established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from either the set-up of the system, the scattering of rays, or the dependency of linear attenuation on wavelength in the polychromatic case. However, even in monochromatic CT systems, the non-linearity effect can be detected. The paper shows that in some cases the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real gamma-ray data.
When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated. The thesis concludes with a highlight article in the special issue of Solid Earth named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan are applied and, as a result, ultra-fast 3D volume acquisition is made possible. The experiment comprised fast, free-falling water-saline drops traveling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
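The sparsity-regularized reconstruction discussed in this abstract can be illustrated with a small sketch. The thesis does not state which solver was used, so the following assumes an l1-penalized least-squares formulation solved by iterative soft-thresholding (ISTA), with a random matrix standing in for the real CT projection operator; all names and values here are illustrative.

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding.
    A plays the role of the discretised forward projector, b the measured data,
    lam the sparsity weight."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the data-fit term
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return x

# Tiny synthetic example: a sparse 1-D "image" and a random stand-in projector.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_rec = ista(A, b, lam=0.05)
```

In this setting, the number of measurements (rows of A) needed for a successful recovery depends on the sparsity of x, which is the relationship the thesis investigates with experimental data.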
192

Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows / Simulations numériques prédictives pour la reconstruction des conditions en amont dans les écoulements de rentrée atmosphérique

Cortesi, Andrea Francesco 16 February 2018 (has links)
Accurate prediction of hypersonic high-enthalpy flows is of major relevance for atmospheric entry missions. However, uncertainties are inevitable on freestream conditions and other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty on the outputs. In this work, we use a statistical framework for direct propagation of uncertainties and inverse freestream reconstruction applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain parameters of the model for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, which makes it possible to account for the different sources of uncertainty and for measurement errors. Different techniques are introduced to enhance the capabilities of the statistical framework for quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on coupling Kriging and Sparse Polynomial Dimensional Decomposition. Then, a method is proposed to adaptively add new training points to an existing experimental design in order to improve the accuracy of the trained surrogate model. Finally, a way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
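As a rough illustration of the Bayesian reconstruction described above, the sketch below runs a random-walk Metropolis sampler on a single freestream parameter given one noisy heat-flux measurement. The forward model, prior, noise level and measured value are toy placeholders; the thesis uses surrogate models of a full hypersonic flow code and richer measurement sets.

```python
import numpy as np

def log_posterior(theta, q_obs, forward, sigma_noise, mu_prior, sigma_prior):
    """Gaussian likelihood on the measured stagnation-point heat flux plus a
    Gaussian prior on the freestream parameter theta (e.g. density)."""
    if theta <= 0.0:
        return -np.inf                        # keep the physical parameter positive
    log_like = -0.5 * ((q_obs - forward(theta)) / sigma_noise) ** 2
    log_prior = -0.5 * ((theta - mu_prior) / sigma_prior) ** 2
    return log_like + log_prior

def metropolis(q_obs, forward, n_samples=5000, step=0.05, theta0=1.0, **kw):
    """Random-walk Metropolis sampler for the one-dimensional posterior."""
    rng = np.random.default_rng(1)
    theta = theta0
    lp = log_posterior(theta, q_obs, forward, **kw)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_posterior(prop, q_obs, forward, **kw)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Hypothetical surrogate in place of the flow solver: heat flux ~ sqrt(rho) * V^3.
toy_forward = lambda rho: np.sqrt(rho) * 5.0 ** 3
chain = metropolis(q_obs=137.0, forward=toy_forward,
                   sigma_noise=5.0, mu_prior=1.0, sigma_prior=0.5)
print(chain[1000:].mean())                    # posterior mean after burn-in
```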
193

Inverse Problems in Asteroseismology

Bellinger, Earl Patrick 16 May 2018 (has links)
No description available.
194

Numerical methods for solving systems of ODEs with BVMs and restoration of chopped and nodded images.

January 2002 (has links)
by Tam Yue Hung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (leaves 49-52).
Abstracts in English and Chinese.

List of Tables --- p.vi
List of Figures --- p.vii
Chapter 1 --- Solving Systems of ODEs with BVMs --- p.1
Chapter 1.1 --- Introduction --- p.1
Chapter 1.2 --- Background --- p.4
Chapter 1.2.1 --- Linear Multistep Formulae --- p.4
Chapter 1.2.2 --- Preconditioned GMRES Method --- p.6
Chapter 1.3 --- Strang-Type Preconditioners with BVMs --- p.7
Chapter 1.3.1 --- Block-BVMs and Their Matrix Forms --- p.8
Chapter 1.3.2 --- Construction of the Strang-type Preconditioner --- p.10
Chapter 1.3.3 --- Convergence Rate and Operation Cost --- p.12
Chapter 1.3.4 --- Numerical Result --- p.13
Chapter 1.4 --- Strang-Type BCCB Preconditioner --- p.15
Chapter 1.4.1 --- Construction of BCCB Preconditioners --- p.15
Chapter 1.4.2 --- Convergence Rate and Operation Cost --- p.17
Chapter 1.4.3 --- Numerical Result --- p.19
Chapter 1.5 --- Preconditioned Waveform Relaxation --- p.20
Chapter 1.5.1 --- Waveform Relaxation --- p.20
Chapter 1.5.2 --- Invertibility of the Strang-type Preconditioners --- p.23
Chapter 1.5.3 --- Convergence Rate and Operation Cost --- p.24
Chapter 1.5.4 --- Numerical Result --- p.25
Chapter 1.6 --- Multigrid Waveform Relaxation --- p.27
Chapter 1.6.1 --- Multigrid Method --- p.27
Chapter 1.6.2 --- Numerical Result --- p.28
Chapter 1.6.3 --- Concluding Remark --- p.30
Chapter 2 --- Restoration of Chopped and Nodded Images --- p.31
Chapter 2.1 --- Introduction --- p.31
Chapter 2.2 --- The Projected Landweber Method --- p.35
Chapter 2.3 --- Other Numerical Methods --- p.37
Chapter 2.3.1 --- Tikhonov Regularization --- p.38
Chapter 2.3.2 --- MRNSD --- p.41
Chapter 2.3.3 --- Piecewise Polynomial TSVD --- p.43
Chapter 2.4 --- Numerical Result --- p.46
Chapter 2.5 --- Concluding Remark --- p.47
Bibliography --- p.49
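Chapter 2 of this thesis centres on the Projected Landweber method for restoring chopped and nodded images. A minimal sketch of that iteration is given below, assuming a non-negativity constraint and a generic blurring matrix in place of the chopped-and-nodded imaging operator; the example data are synthetic.

```python
import numpy as np

def projected_landweber(A, b, tau=None, n_iter=500):
    """Projected Landweber iteration for A x = b with a non-negativity constraint:
    x_{k+1} = P_+( x_k + tau * A^T (b - A x_k) )."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # convergence requires 0 < tau < 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * (A.T @ (b - A @ x))       # Landweber (gradient) step
        x = np.maximum(x, 0.0)                  # projection onto the non-negative orthant
    return x

# Toy 1-D deconvolution example with a simple 5-point moving-average blur.
n = 80
A = np.zeros((n, n))
for i in range(n):
    A[i, max(0, i - 2):min(n, i + 3)] = 0.2
x_true = np.zeros(n)
x_true[20] = 1.0
x_true[50] = 0.5
b = A @ x_true + 0.001 * np.random.default_rng(2).standard_normal(n)
x_rec = projected_landweber(A, b)
```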
195

Algoritmo de tomografia por impedância elétrica baseado em Simulated Annealing. / Electrical impedance tomography algorithm using Simulated Annealing as a search method.

Lara Herrera, Claudia Natalia 14 November 2007 (has links)
Electrical Impedance Tomography (EIT) is a non-invasive technique used to produce images that represent the cross-sectional electrical resistivity distribution, or conductivity, within a domain, for instance the human thorax, from electrical measurements made through electrodes distributed on its boundary. Currents are injected and voltages measured, or vice versa. Distributions of resistivity variations or distributions of absolute resistivity can be estimated, producing difference or absolute algorithms. The present work develops and evaluates the performance of a probabilistic algorithm based on the Simulated Annealing (SA) method to obtain absolute resistivity distributions in two dimensions (2D). SA differs from traditional search methods in that no evaluation of objective-function derivatives is required, and it is possible to escape from local minima through the use of the Metropolis criterion for the acceptance of new points in the search space. The developed algorithm solves the inverse problem of EIT by iteratively solving a direct problem, using random resistivity distributions. The random search is accomplished by the Metropolis algorithm. In the absence of regularizations, it is assumed that the resistivity distribution, an image, that minimizes the difference between the measured electrical potentials on the boundary and the computed electrical potentials is the closest to the real resistivity distribution. In this sense, the algorithm maximizes the likelihood. This work contributes to the development of image estimation algorithms applied to lung monitoring, for instance during mechanical ventilation. To solve this non-linear, ill-posed inverse problem it is necessary to introduce prior information in the form of restrictions of the solution space or regularization techniques. The tests are carried out using simulated data obtained from a numerical phantom, an experimental phantom and human thorax data. The results show that the localization of an object, the size of an object and the resistivity of an object are within the accuracy of EIT obtained by classical methods, but the computational effort is large. The advantages and feasibility of the proposed algorithm were investigated.
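A compact sketch of the search loop described above follows: simulated annealing with Metropolis acceptance over candidate resistivity vectors. The forward model here is a hypothetical linear map standing in for the finite-element EIT solver, and the cooling schedule and step sizes are illustrative choices, not those of the thesis.

```python
import numpy as np

def simulated_annealing(forward, v_meas, rho0, n_iter=20000,
                        t0=1.0, cooling=0.9995, step=0.05, rng=None):
    """Minimise ||forward(rho) - v_meas||^2 over resistivity vectors rho using
    simulated annealing with the Metropolis acceptance criterion."""
    if rng is None:
        rng = np.random.default_rng(3)
    rho = rho0.copy()
    cost = np.sum((forward(rho) - v_meas) ** 2)
    temp = t0
    for _ in range(n_iter):
        # Randomly perturb one element of the resistivity distribution.
        cand = rho.copy()
        i = rng.integers(rho.size)
        cand[i] = max(cand[i] + step * rng.standard_normal(), 1e-6)
        cand_cost = np.sum((forward(cand) - v_meas) ** 2)
        # Metropolis criterion: always accept improvements, sometimes accept worse moves.
        if cand_cost < cost or rng.random() < np.exp(-(cand_cost - cost) / temp):
            rho, cost = cand, cand_cost
        temp *= cooling                        # geometric cooling schedule
    return rho

# Toy linear "forward model" in place of the FEM solver (hypothetical sensitivity matrix).
rng = np.random.default_rng(3)
J = rng.standard_normal((32, 16))
rho_true = 1.0 + 0.5 * (rng.random(16) > 0.7)
v_meas = J @ rho_true
rho_est = simulated_annealing(lambda r: J @ r, v_meas, rho0=np.ones(16))
```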
196

New Stable Inverses of Linear Discrete Time Systems and Application to Iterative Learning Control

Ji, Xiaoqiang January 2019 (has links)
Digital control needs discrete time models, but conversion from continuous time, fed by a zero order hold, to discrete time introduces sampling zeros which are outside the unit circle, i.e. non-minimum phase (NMP) zeros, in the majority of systems. Also, some systems are already NMP in continuous time. In both cases, the inverse problem of finding the input required to maintain a desired output tracking produces an unstable causal control action. The control action grows exponentially with every time step, and the error between time steps also grows exponentially. This prevents many control approaches from making use of inverse models. The problem statement for the existing stable inverse theorem is presented in this work; it aims at finding a bounded nominal state-input trajectory by solving a two-point boundary value problem obtained by decomposing the internal dynamics of the system. This results in the causal part being specified from minus infinity in time, and the non-causal part from plus infinity. By solving for the nominal bounded internal dynamics, exact output tracking is achieved in the original finite time interval. The new stable inverse concepts presented and developed here address this instability problem in a different way, based on modified versions of the problem statement, and in a way that is more practical for implementation. The statements of how the different inverse problems are posed are presented, as well as their calculation and implementation. In order to produce zero tracking error at the addressed time steps, two modified statements are given: the initial delete and the skip step. The development presented here involves: (1) the detection of the signature of instability in both the nonhomogeneous difference equation and the matrix form for finite time problems; (2) the creation of a new factorization of the system separating the maximum-phase part from the minimum-phase part in matrix form, analogous to the transfer function format, and more generally, modeling the behavior of finite time zeros and poles; (3) the production of bounded stable inverse solutions, evolving from the minimum Euclidean norm solution satisfying different optimization objective functions, to the solution having no projection on transient solution terms excited by initial conditions. Iterative Learning Control (ILC) iterates with a real-world control system repeatedly performing the same task. It adjusts the control action based on the error history from the previous iteration, aiming to converge to zero tracking error. ILC has been widely used in various applications due to its high precision in trajectory tracking, e.g. semiconductor manufacturing sensors that repeatedly perform scanning maneuvers. Designing effective feedback controllers for non-minimum phase (NMP) systems can be challenging, and applying ILC to NMP systems is particularly problematic. Incorporating the initial delete stable inverse thinking into ILC, the control action obtained in the limit as the iterations tend to infinity is a function of the tracking error produced by the command in the initial run. It is shown here that this dependence is very small, so that one can reasonably use any initial run. By picking an initial input that goes to zero approaching the final time step, the influence becomes particularly small. And by simply commanding zero in the first run, the resulting converged control minimizes the Euclidean norm of the underdetermined control history.
Three main classes of ILC laws are examined, and it is shown that all ILC laws converge to the identical control history, as the converged result is not a function of the ILC law. All of these conclusions apply to ILC that aims to track a given finite time trajectory, and also apply to ILC that in addition aims to cancel the effect of a disturbance that repeats each run. Having these stable inverses opens up opportunities for many control design approaches. (1) ILC was the original motivation of the new stable inverses. Besides the scenario using the initial delete above, consider ILC to perform local learning in a trajectory, by using a quadratic cost control in general, but phasing into the skip step stable inverse for some portion of the trajectory that needs high precision tracking. (2) One step ahead control uses a model to compute the control action at the current time step to produce the output desired at the next time step. Before it can be useful, it must be phased in to honor actuator saturation limits, and being a true inverse it requires that the system have a stable inverse. One could generalize this to p-step ahead control, updating the control action every p steps instead of every one step. It determines how small p can be to give a stable implementation using skip step, and it can be quite small. So it only requires knowledge of future desired control for a few steps. (3) Note that the statement in (2) can be reformulated as Linear Model Predictive Control that updates every p steps instead of every step. This offers the ability to converge to zero tracking error at every time step of the skip step inverse, instead of the usual aim to converge to a quadratic cost solution. (4) Indirect discrete time adaptive control combines one step ahead control with the projection algorithm to perform real time identification updates. It has limited applications, because it requires a stable inverse.
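For readers unfamiliar with ILC, the sketch below shows the basic run-to-run update on a lifted (finite-time) model of a discrete-time system. It uses a generic gradient-type learning law and a minimum-phase toy plant; it is not the initial-delete or skip-step law developed in the thesis, only the standard framework they build on.

```python
import numpy as np

def lifted_system_matrix(a, b, c, n_steps):
    """Lifted finite-time map from the input history [u_0 ... u_{N-1}] to the output
    history [y_1 ... y_N] for x_{k+1} = a x_k + b u_k, y_k = c x_k, zero initial state:
    P[i, j] = c a^(i-j) b for j <= i."""
    P = np.zeros((n_steps, n_steps))
    for i in range(n_steps):
        for j in range(i + 1):
            P[i, j] = (c @ np.linalg.matrix_power(a, i - j) @ b).item()
    return P

def ilc(P, y_des, n_runs=1000, gain=1.0):
    """Gradient-type ILC law u_{j+1} = u_j + gain * P^T e_j, where e_j is the
    tracking error history of run j."""
    u = np.zeros(P.shape[1])              # first run: zero command
    e = y_des - P @ u
    for _ in range(n_runs):
        e = y_des - P @ u                 # error history of the current run
        u = u + gain * (P.T @ e)          # learning update applied between runs
    return u, e

# Second-order minimum-phase example plant and a smooth desired trajectory.
a = np.array([[1.0, 0.1], [0.0, 0.9]])
b = np.array([[0.0], [0.1]])
c = np.array([[0.0, 1.0]])
P = lifted_system_matrix(a, b, c, n_steps=40)
y_des = np.sin(np.linspace(0, np.pi, 40))
u_conv, e_final = ilc(P, y_des)
```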
197

Optical Characterization and Optimization of Display Components: Some Applications to Liquid-Crystal-Based and Electrochromics-Based Devices

Valyukh, Iryna January 2009 (has links)
This dissertation is focused on theoretical and experimental studies of the optical properties of materials and multilayer structures composing liquid crystal displays (LCDs) and electrochromic (EC) devices. By applying spectroscopic ellipsometry, we have determined the optical constants of thin films of electrochromic tungsten oxide (WOx) and nickel oxide (NiOy), as well as the films' thickness and roughness. These films, obtained under the sputtering conditions used, possess high transmittance, which is important for achieving good visibility and high contrast in an EC device. Another application of general spectroscopic ellipsometry relates to the study of a photo-alignment layer made of a mixture of the azo-dyes SD-1 and SDA-2. We have found the optical constants of this mixture before and after illuminating it with polarized UV light. The results obtained confirm the diffusion model used to explain the formation of the photo-induced order in azo-dye films. We have developed new techniques for fast characterization of twisted nematic LC cells in transmissive and reflective modes. Our techniques are based on the characteristic functions that we have introduced for determining the parameters of non-uniform birefringent media. These characteristic functions are found by simple procedures and can be utilised for the simultaneous determination of retardation, its wavelength dispersion, and twist angle, as well as for solving associated optimization problems. The cholesteric LCD possesses some unique properties, such as bistability and good selective scattering; however, it has a disadvantage, namely a relatively high driving voltage (tens of volts). The way we propose to reduce the driving voltage consists of applying a stack of thin (~1 µm) LC layers. We have also studied the ability of a layer of a surface-stabilized ferroelectric liquid crystal, coupled with several retardation plates, to generate birefringent colors. We have demonstrated that in order to accomplish good color characteristics and high brightness of the display, one or two retardation plates are sufficient.
198

Combination Of Conventional Regularization Methods And Genetic Algorithms For Solving The Inverse Problem Of Electrocardiography

Sarikaya, Sedat 01 February 2010 (has links) (PDF)
The distribution of electrical potentials over the surface of the heart, i.e., the epicardial potentials, is a valuable tool for understanding whether there is a defect in the heart. However, it is not easy to detect these potentials non-invasively. Instead, body surface potentials, which occur as a result of the electrical activity of the heart, are measured to diagnose heart defects. However, the source electrical signals lose some critical details because of the attenuation and smoothing they encounter due to body tissues such as lungs, fat, etc. Direct measurement of the epicardial potentials requires invasive procedures. Alternatively, one can reconstruct the epicardial potentials non-invasively from the body surface potentials; this is called the inverse problem of electrocardiography (ECG). The goal of this study is to solve the inverse problem of ECG using several well-known regularization methods and their combinations with a genetic algorithm (GA), and finally to compare the performances of these methods. The results show that GA can be combined with the conventional regularization methods and that their combination improves the regularization of the ill-posed inverse ECG problem. In several studies, the results show that their combination provides a good scheme for solving the inverse ECG problem and that the performance of the regularization methods can be improved further. We also suggest that GA can be initiated successfully with a training set of epicardial potentials, and with the optimum, over- and under-regularized Tikhonov regularization solutions.
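Of the conventional regularization methods mentioned, zero-order Tikhonov regularization is the simplest to sketch. The fragment below computes the Tikhonov solution through the SVD of a hypothetical body-surface-to-epicardium transfer matrix; the genetic-algorithm coupling studied in the thesis is not shown, and all matrix sizes and values are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov solution of min ||A x - b||^2 + lam^2 ||x||^2,
    where A is the transfer matrix, b the body surface potentials and x the
    epicardial potentials. Computed via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Filter factors f_i = s_i^2 / (s_i^2 + lam^2) damp the small singular values.
    f = s ** 2 / (s ** 2 + lam ** 2)
    return Vt.T @ (f / s * (U.T @ b))

# Hypothetical, deliberately ill-conditioned transfer matrix and noisy measurements.
rng = np.random.default_rng(4)
A = rng.standard_normal((64, 32)) @ np.diag(1.0 / np.arange(1, 33) ** 2)
x_true = np.sin(np.linspace(0, 2 * np.pi, 32))
b = A @ x_true + 1e-4 * rng.standard_normal(64)
x_reg = tikhonov(A, b, lam=1e-3)
```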
199

Ill-Posedness Aspects of Some Nonlinear Inverse Problems and their Linearizations

Fleischer, G., Hofmann, B. 30 October 1998 (has links) (PDF)
In this paper we deal with aspects of characterizing the ill-posedness of nonlinear inverse problems based on the discussion of specific examples. In particular, a parameter identification problem for a second order differential equation and its ill-posed linear components are under consideration. A new approach to the classification of ill-posedness degrees for multiplication operators completes the paper.
200

Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow

Flath, Hannah Pearl 11 September 2013 (has links)
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties. We address the problem: given noisy measurements of the head, the pdf describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior probability density function (pdf) of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems where each new training point is placed at the maximizer of the error in the approximation. Scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate, and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate as well as the true posterior. The Gaussian process surrogate is used as a first stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem. We provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms.
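The low-rank approximation of the data-misfit Hessian is the ingredient that keeps the construction tractable. One common way to build such an approximation, when the Hessian is available only through Hessian-vector products, is a randomized eigendecomposition; a sketch is given below with a synthetic Hessian standing in for the PDE-based one, and the thesis's specific algorithmic choices may differ.

```python
import numpy as np

def low_rank_hessian(hess_vec, n, rank, n_oversample=10, rng=None):
    """Randomized low-rank eigendecomposition of a symmetric positive semi-definite
    Hessian accessed only through matrix-vector products hess_vec(v). Returns the
    dominant (eigvals, eigvecs) of an approximation H ~ V diag(lam) V^T."""
    if rng is None:
        rng = np.random.default_rng(5)
    k = rank + n_oversample
    omega = rng.standard_normal((n, k))
    Y = np.column_stack([hess_vec(omega[:, i]) for i in range(k)])   # H * Omega
    Q, _ = np.linalg.qr(Y)                                            # orthonormal range basis
    T = Q.T @ np.column_stack([hess_vec(Q[:, i]) for i in range(k)])  # small matrix Q^T H Q
    lam, S = np.linalg.eigh(T)
    idx = np.argsort(lam)[::-1][:rank]                                # keep dominant eigenpairs
    return lam[idx], Q @ S[:, idx]

# Hypothetical Hessian with rapidly decaying spectrum, exposed only as a matvec.
n = 200
rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = U @ np.diag(10.0 ** (-np.arange(n) / 10.0)) @ U.T
lam, V = low_rank_hessian(lambda v: H @ v, n, rank=20)
```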
