21

Analytical inverse model for post-event attribution of plutonium

Miller, James Christopher 15 May 2009
An integral part of deterring nuclear terrorism is the swift attribution of any event to a particular state or organization. By quickly identifying the responsible party after a nuclear event, the appropriate people may be held accountable for their actions. Currently, there is a system in place to determine the origin of nuclear devices and materials from post-event data; however, the system requires significant time to produce an answer within acceptable error margins. Described here is a deterministic approach derived from first principles to solve the inverse problem. The derivation starts with the basic rate-of-change equation and ends in relationships for important nuclear concentrations and device yield. This results in a computationally efficient and timely method for producing an estimate of the material attributes. This estimate can then be used as a starting point for other, more detailed methods, reducing the overall computation time of the post-event forensics. This work focused on a specific type of nuclear event: a plutonium improvised nuclear device (IND) explosion. From post-event isotopic ratios, this method determines the device's pre-event isotopic concentrations of special nuclear material. From the original isotopic concentrations, the field of possible origins for the nuclear material is narrowed. In this scenario, knowing where the nuclear material did not originate is as important as knowing where it did. The derived methodology was tested using several cases of interest, both simplified and realistic. In the simplified cases, only two isotopes comprised the material being fissioned. In the realistic cases, both weapons-grade and reactor-grade plutonium were used to cover the spectrum of fissile material that might be used by terrorists. The methodology performed very well over the desired energy range. Errors were under two percent of the expected values for all yields under 50 kT.
In the realistic cases, competing reactions caused an increase in error; however, the errors stayed under five percent. As expected, the error continued to rise with increased yield, but it did so only linearly. A sensitivity analysis was performed on the methodology to determine the impact of uncertainty in various physical constants. The result was that the inverse methodology is not overly sensitive to perturbations in these constants.
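The simplified two-isotope cases mentioned above can be illustrated with a toy depletion model. This sketch is an illustration only, not the thesis's actual derivation; the exponential depletion law, cross sections, and fluence value are all assumed for the example:

```python
import math

def pre_event_ratio(r_post, sigma1, sigma2, fluence):
    """Toy two-isotope inversion: if each isotope depletes as
    N_i = N_i(0) * exp(-sigma_i * fluence), the pre-event isotopic
    ratio follows from the measured post-event ratio in closed form."""
    return r_post * math.exp((sigma1 - sigma2) * fluence)

# hypothetical cross sections (cm^2) and fluence (cm^-2), for illustration only
r0 = pre_event_ratio(r_post=0.8, sigma1=1.2e-24, sigma2=0.3e-24, fluence=1.0e24)
```

The realistic cases add competing reactions, which is why their errors grow faster with yield.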
22

Scattered neutron tomography based on a neutron transport problem

Scipolo, Vittorio 01 November 2005
Tomography refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. Classical tomography fails to reconstruct the optical properties of thick scattering objects because it does not adequately account for the scattering component of the neutron beam intensity exiting the sample. We proposed a new method of computed tomography which employs an inverse problem analysis of both the transmitted and scattered images generated by a beam passing through an optically thick object. This inverse problem makes use of a computationally efficient, two-dimensional forward problem based on neutron transport theory that effectively calculates the detector readings around the edges of an object. The forward problem is solved with a Step-Characteristic (SC) code with a known uncollided source per cell, a zero boundary-flux condition, and Sn discretization of the angular dependence. The uncollided sources are calculated using an accurate discretization scheme, given the properties and position of the incoming beam and beam collimator. The detector predictions are obtained by considering both the collided and uncollided components of the incoming radiation. The inverse problem is formulated as an optimization problem. The function to be minimized, called the objective function, is the normalized squared error between predicted and measured data. The predicted data are calculated by assuming a uniform distribution for the optical properties of the object. The objective function depends directly on the optical properties of the object; therefore, by minimizing it, the correct property distribution can be found. The minimization of this multidimensional function is performed with the Polak-Ribière conjugate-gradient technique, which uses the gradient of the function with respect to the cross sections of the internal cells of the domain.
The forward and inverse models have been successfully tested against numerical results obtained with MCNP (Monte Carlo N-Particle), showing excellent agreement. The reconstructions of several objects were successful. In the case of a single intrusion, TNTs (Tomography Neutron Transport using Scattering) was always able to detect the intrusion. In the case of the double-body object, TNTs was able to partially reconstruct the optical distribution. The most important defect, in terms of gradient, was correctly located and reconstructed. Difficulties arose in locating and reconstructing the second defect. Nevertheless, the results are exceptional considering they were obtained by illuminating the object from only one side. The use of multiple beams around the object will significantly improve the capability of TNTs, since it increases the number of constraints for the minimization problem.
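The Polak-Ribière conjugate-gradient minimization the abstract relies on can be sketched on a toy quadratic objective. The matrix and vector below are placeholders, not the thesis's normalized-squared-error objective over detector readings:

```python
import numpy as np

def polak_ribiere_cg(A, b, x0, iters=20):
    """Minimize f(x) = 0.5 x^T A x - b^T x with Polak-Ribiere
    conjugate gradients; for a quadratic, the exact line search
    below gives the optimal step along each search direction."""
    x = x0.copy()
    g = A @ x - b          # gradient of f at x
    d = -g                 # initial search direction
    for _ in range(iters):
        alpha = -(g @ d) / (d @ A @ d)                   # exact line search
        x = x + alpha * d
        g_new = A @ x - b
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ coefficient
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g) < 1e-12:
            break
    return x

A = np.array([[3.0, 0.5], [0.5, 2.0]])   # placeholder SPD "Hessian"
b = np.array([1.0, 1.0])
x_min = polak_ribiere_cg(A, b, np.zeros(2))
```

On a quadratic this reduces to standard linear conjugate gradients and converges in at most two steps for a 2x2 system; the thesis applies the same update to a nonlinear, multidimensional objective.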
23

Recovery of the local gravity field by spherical regularization wavelets approximation and its numerical implementation

Shuler, Harrey Jeong 29 April 2014
As an alternative to spherical harmonics in modeling the gravity field of the Earth, we built a multiresolution gravity model by employing spherical regularization wavelets in solving the inverse problem, i.e. downward propagation of the gravity signal to the Earth's surface. Scale-discrete Tikhonov spherical regularization scaling functions and wavelet packets were used to decompose and reconstruct the signal. We recovered the local gravity anomaly using only localized gravity measurements at the observing satellite's altitude of 300 km. When the gravity anomaly upward-continued to the satellite altitude with a resolution of 0.5° was used as simulated measurement input, our model could recover the local surface gravity anomaly at a spatial resolution of 1° with an RMS error between 1 and 10 mGal, depending on the topography of the gravity field. Our study of the effect of varying the data volume and altering the maximum degree of Legendre polynomials on the accuracy of the recovered gravity solution suggests that the short-wavelength signals and the regions with high-magnitude gravity gradients respond more strongly to such changes. When tested with simulated SGG measurements, i.e. the second-order radial derivative of the gravity anomaly, at an altitude of 300 km with a 0.7° spatial resolution as input data, our model could obtain the gravity anomaly with an RMS error of 1 to 7 mGal at a surface resolution of 0.7° (< 80 km). The study of the impact of measurement noise on the recovered gravity anomaly implies that solutions from SGG measurements are less susceptible to measurement errors than those recovered from the upward-continued gravity anomaly, indicating that an SGG-type mission such as GOCE would be an ideal choice for implementing our model. Our simulation results demonstrate the model's potential in determining the local gravity field at a finer scale than could be achieved through spherical harmonics, i.e. less than 100 km, with excellent performance in edge detection.
24

On use of inhomogeneous media for elimination of ill-posedness in the inverse problem

Feroj, Md Jamil 17 April 2014
This thesis outlines a novel approach to making the ill-posed inverse source problem well-posed by exploiting inhomogeneous media. More precisely, we use a Maxwell fish-eye lens to make the scattered field emanating from distinct regions of an object of interest more directive and concentrated onto distinct regions of observation. The object of interest in this thesis is a thin slab placed conformally to the Maxwell fish-eye lens. The focused Green's function of the background medium results in diagonal dominance of the matrix to be inverted in the inverse problem solution; hence, the problem becomes well-posed. We have studied one-dimensional variation of a very thin dielectric slab of interest having a shape conformal to the lens. The method has been tested by solving the forward problem using both a Mie series and COMSOL. The most common techniques for solving the inverse problem are full non-linear inversion techniques such as the distorted Born iterative method (DBIM) and contrast source inversion (CSI). DBIM needs to be regularized at every iteration; in some cases it converges to a solution, and in some cases it does not. Diffraction tomography does not use regularization. It is a technique under the Born approximation: it eliminates ill-posedness, but it works only for small contrast. Our proposed method works for high contrast and also provides well-posedness. In this thesis, our objective is to demonstrate that the inverse source problem and the inverse scattering problem are not inherently ill-posed. They are ill-posed because conventional techniques usually use a homogeneous or non-focusing background medium. Such media do not support separation of the scattered field. Utilizing a background medium for scattered-field separation casts the inverse problem in well-posed form.
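The claim that a focusing background yields a diagonally dominant, well-conditioned system matrix can be illustrated numerically. The matrices below are synthetic stand-ins, not fields computed from a Maxwell fish-eye lens:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
noise = 0.01 * rng.standard_normal((n, n))

# Focusing: each source region maps mostly onto its own observation
# region, so the system matrix is close to identity (diagonally dominant).
focused = np.eye(n) + noise

# No focusing: every observation sees a near-uniform mix of all sources,
# so the rows are nearly identical and the matrix is close to singular.
unfocused = np.ones((n, n)) / n + noise

cond_focused = np.linalg.cond(focused)
cond_unfocused = np.linalg.cond(unfocused)
```

The well-conditioned (diagonally dominant) case inverts stably, while the unfocused case amplifies measurement noise enormously, which is the essence of ill-posedness here.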
25

Bayesian M/EEG source localization with possible joint skull conductivity estimation

Costa, Facundo Hernan 02 March 2017
M/EEG techniques allow determining changes in brain activity, which is useful in diagnosing brain disorders such as epilepsy. They consist of measuring the electric potential at the scalp and the magnetic field around the head. The measurements are related to the underlying brain activity by a linear model that depends on the lead-field matrix. Localizing the sources, or dipoles, of M/EEG measurements consists of inverting this linear model. However, the non-uniqueness of the solution (a consequence of the underlying physics) and the low number of dipoles make the inverse problem ill-posed. Solving such a problem requires some sort of regularization to reduce the search space. The literature abounds with methods and techniques to solve this problem, especially variational approaches. This thesis develops Bayesian methods to solve ill-posed inverse problems, with application to M/EEG. The main idea underlying this work is to constrain the sources to be sparse. This hypothesis is valid in many applications, including certain types of epilepsy. We develop different hierarchical models to account for the sparsity of the sources. Theoretically, enforcing sparsity is equivalent to minimizing a cost function penalized by the l0 pseudo-norm of the solution. However, since l0 regularization leads to NP-hard problems, the l1 approximation is usually preferred. Our first contribution consists of combining the two norms in a Bayesian framework, using a Bernoulli-Laplace prior. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the parameters of the model jointly with the source locations and intensities. Comparing the results, in several scenarios, with those obtained with sLoreta and weighted l1-norm regularization shows interesting performance, at the price of a higher computational complexity. Our Bernoulli-Laplace model solves the source localization problem at one instant of time. However, it is biophysically well known that brain activity follows spatiotemporal patterns, so exploiting the temporal dimension is interesting to further constrain the problem. Our second contribution consists of formulating a structured-sparsity model to exploit this biophysical phenomenon. Precisely, a multivariate Bernoulli-Laplacian distribution is proposed as a prior for the dipole locations. A latent variable is introduced to handle the resulting complex posterior, and an original Metropolis-Hastings sampling algorithm is developed. The results show that the proposed sampling technique significantly improves the convergence of the MCMC method. A comparative analysis of the results is performed between the proposed model, an l21 mixed-norm regularization, and the Multiple Sparse Priors (MSP) algorithm. Various experiments are conducted with synthetic and real data. The results show that our model has several advantages, including a better recovery of the dipole locations. The previous two algorithms consider a fully known lead-field matrix. However, this is seldom the case in practical applications; instead, this matrix is the result of approximation methods that lead to significant uncertainties. Our third contribution consists of handling the uncertainty of the lead-field matrix. The proposed method expresses this matrix as a function of the skull conductivity using a polynomial matrix interpolation technique. The conductivity is considered the main source of uncertainty in the lead-field matrix. Our multivariate Bernoulli-Laplacian model is then extended to estimate the skull conductivity jointly with the brain activity. The resulting model is compared to other methods, including the techniques of Vallaghé et al. and Guttierez et al. Our method provides results of better quality without requiring knowledge of the active dipole positions and is not limited to a single dipole activation.
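The l1 surrogate the abstract mentions (the convex relaxation of the NP-hard l0 penalty) can be sketched with iterative soft-thresholding on a synthetic sparse problem. This is the variational counterpart, not the thesis's Bayesian samplers; the dimensions, seed, and penalty weight are arbitrary:

```python
import numpy as np

def ista(A, y, lam, iters=3000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the convex l1 relaxation of the l0-penalized sparse source problem."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))             # placeholder "lead-field" matrix
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [2.0, -1.5, 1.0]         # three active "dipoles"
x_hat = ista(A, A @ x_true, lam=0.1)
```

The Bernoulli-Laplace approach goes further by sampling the full posterior over locations and intensities rather than returning a single penalized point estimate.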
26

Development of an absolute image algorithm for electrical impedance tomography for clinical application

Erick Darío León Bueno de Camargo 20 May 2013
Electrical impedance tomography is a non-invasive imaging technique that can be used in clinical applications to infer living-tissue impeditivity from boundary electrical measurements. Mathematically this is a non-linear, ill-posed inverse problem. Usually a spatial high-pass Gaussian filter is used as the regularization method for solving the inverse problem. The main objective of this work is to propose the use of physiological and anatomical priors on the tissue resistivity distribution within the thorax, also known as an anatomical atlas, in conjunction with the Gaussian filter as regularization methods. The proposed methodology employs the finite element method and the Gauss-Newton algorithm to reconstruct three-dimensional resistivity images. Approximation error theory is used to reduce discretization effects and mesh-size errors. Electrical impedance tomography data and computed tomography images of physiological pulmonary changes, collected in vivo in a swine, were used to validate the proposed method. The images obtained are compatible with atelectasis, pneumothorax, pleural effusion, and different ventilation pressures during mechanical ventilation. The results show that reconstructing images with clinically significant information from swine data is feasible when both the Gaussian filter and the anatomical atlas are used as regularization methods.
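The regularized Gauss-Newton update at the heart of such reconstructions can be sketched generically. The exponential toy model below is a placeholder, not the EIT forward model, and the regularization is plain Tikhonov rather than a Gaussian filter or anatomical atlas:

```python
import numpy as np

def gauss_newton(residual, jac, x0, lam=1e-6, R=None, iters=20):
    """Regularized Gauss-Newton: at each step solve
    (J^T J + lam * R^T R) dx = -J^T r for the parameter update."""
    x = x0.copy()
    R = np.eye(len(x0)) if R is None else R
    for _ in range(iters):
        r, J = residual(x), jac(x)
        dx = np.linalg.solve(J.T @ J + lam * (R.T @ R), -J.T @ r)
        x = x + dx
    return x

# toy nonlinear model: y_i = exp(-k1 * t_i) + k2, fit (k1, k2)
t = np.linspace(0.0, 2.0, 30)
k_true = np.array([1.5, 0.3])
y = np.exp(-k_true[0] * t) + k_true[1]
res = lambda k: np.exp(-k[0] * t) + k[1] - y
jac = lambda k: np.stack([-t * np.exp(-k[0] * t), np.ones_like(t)], axis=1)
k_hat = gauss_newton(res, jac, np.array([1.0, 0.0]))
```

In EIT the Jacobian relates boundary voltages to internal resistivities and R encodes the spatial prior; swapping in an atlas-based R is exactly the kind of change the abstract proposes.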
27

Study about the utilization of acoustic tomography to reconstruct the internal temperature distribution

Érica Regina Filletti 27 November 2002
This work presents a study on the use of acoustic tomography to reconstruct the internal temperature distribution of a body or a flow. To do this, the inverse problem was modeled mathematically from the acoustic propagation equation and an error functional quantifying the sensitivity of the external acoustic pressure profiles to changes in the internal acoustic impedance distribution. Numerical simulations were performed on a model of a real problem, and two excitation techniques were tested: the classical Dirac type and a strategy optimized over a triangular profile.
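The physical link such methods exploit is that sound speed in a gas rises with temperature, so along a single ray the mean temperature follows directly from the travel time. This sketch uses the ideal-gas relation for air, which is an assumption; the thesis works with the acoustic impedance distribution and a full tomographic reconstruction:

```python
import math

def mean_temperature_from_travel_time(path_length_m, travel_time_s):
    """Invert c = sqrt(gamma * R * T / M) for the path-averaged
    temperature T, given the ray's length and acoustic travel time."""
    gamma, R, M = 1.4, 8.314, 0.029    # air: heat-capacity ratio, gas constant, molar mass
    c = path_length_m / travel_time_s  # mean sound speed along the ray
    return c * c * M / (gamma * R)     # temperature in kelvin

# a 1 m path traversed at 343 m/s corresponds to roughly room temperature
T = mean_temperature_from_travel_time(1.0, 1.0 / 343.0)
```

Tomography combines many such path averages from different directions to recover the full internal field rather than a single mean.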
28

Study of some models in photoacoustic imaging

Vauthrin, Margaux 03 July 2017
This thesis studies photoacoustic imaging, a new hybrid modality that combines the high resolution of ultrasound imaging with the contrast of optical imaging. We study in particular the associated inverse problem, which consists of determining the optical coefficients of biological tissues from measurements of the acoustic waves generated by the photoacoustic effect. The photoacoustic inverse problem proceeds in two steps. We first retrieve the initial pressure from the measurement of the pressure wave on part of the boundary of the sample. This first inversion takes the form of a linear inverse source problem and provides internal data for the optical waves that are more sensitive to the contrast of the absorption and diffusion coefficients. In a second step, we recover the optical coefficients from the acquired internal data. The aim of this work is to study the two inversions in different contexts. In the first part, we develop a model that takes into account the variations of the acoustic speed in the medium. Indeed, most inversion methods suppose that the acoustic speed is constant, and this assumption can lead to errors in the reconstruction of the optical coefficients. The second part of this work derives stability estimates for the photoacoustic inverse problem in a layered medium. We prove that the reconstruction degrades with depth; this is one of the main drawbacks of the photoacoustic method, as the imaging depth is limited to a few centimeters. The last part is about photoacoustic generation with plasmonic nanoparticles. These markers enhance, through resonances, the photoacoustic signal generated around them, so that tissue can be probed more deeply. We derive the mathematical model of photoacoustic generation by heating nanoparticles, and we solve the photoacoustic inverse problem in this context.
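In the simplest caricature of the second inversion step, with the Grüneisen coefficient and the optical fluence assumed known and fixed, recovering the absorption coefficient from the internal data p0 = Γ·μa·Φ is pointwise division. All numbers below are placeholders:

```python
import numpy as np

Gamma = 0.2                              # Grueneisen coefficient (assumed known)
phi = np.array([1.0, 0.7, 0.5])          # optical fluence at three voxels (assumed known)
mu_a_true = np.array([0.10, 0.30, 0.20]) # absorption coefficients to recover

p0 = Gamma * mu_a_true * phi             # internal data from the first inversion
mu_a = p0 / (Gamma * phi)                # pointwise recovery of absorption
```

In reality the fluence Φ itself depends on μa through the diffusion equation, which is what makes the second inversion nonlinear and motivates the stability analysis above.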
29

Numerical modeling and inversion of geophysical electromagnetic measurements using a thin plate model

Pirttijärvi, M. (Markku) 08 November 2003
The thesis deals with numerical methods designed for the modeling and inversion of geophysical electromagnetic (EM) measurements using a conductive thin-plate model. The main objectives are to study the EM induction problem in general and to develop practical interpretation tools for mineral prospecting in particular. The starting point is a linearized inversion method based on the singular value decomposition and a new adaptive damping method. The inversion method is introduced for the interpretation of time-domain EM (TEM) measurements using a thin plate in free space. The central part of the thesis is a new approximate modeling method, which is based on an integral-equation approach and a special lattice model. At first the modeling method is applied to the interpretation of frequency-domain EM (FEM) data using a thin plate in a conductive two-layered earth. After this, time-domain responses are modeled by applying a Fourier sine transform to broadband FEM computations. The results demonstrate that the approximate computational method can model the geophysical frequency- and time-domain EM responses of a thin conductor in a conductive host medium with sufficient accuracy, and that the inversion method can provide reliable estimates of the model parameters. The fast forward computation enables interactive interpretation of FEM data and feasible forward modeling of TEM responses. Misfit-function mapping and analysis of the singular value decomposition provide additional information about the sensitivity, resolution, and correlation behavior of the thin-plate parameters.
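A linearized, SVD-based inversion with damping, of the general kind the abstract describes, can be sketched as follows. The damping here is a fixed scalar, whereas the thesis develops a new adaptive scheme, and the matrix is a random placeholder rather than an EM sensitivity matrix:

```python
import numpy as np

def damped_svd_solve(G, d, lam):
    """Linearized inversion via SVD with Tikhonov-style damping:
    m = V diag(s / (s^2 + lam^2)) U^T d. Small singular values are
    suppressed instead of amplifying noise in the data d."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)     # damped inverse singular values
    return Vt.T @ (filt * (U.T @ d))

rng = np.random.default_rng(2)
G = rng.standard_normal((30, 10))      # placeholder sensitivity matrix
m_true = rng.standard_normal(10)
d = G @ m_true                         # noiseless synthetic data
m_hat = damped_svd_solve(G, d, lam=1e-6)
```

With noisy data, increasing lam trades resolution for stability; adapting it between iterations is what keeps the linearized inversion on track.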
30

Electrical impedance tomography of soft tissue: Forward and inverse modelling

Pšenka, Marek January 2017
The diploma thesis builds the necessary apparatus to formulate and solve the inverse problem of electrical impedance tomography (EIT), including strategies to remedy the ill-conditioning of the problem. The problem itself lies in determining the structure of a body of interest by driving a set of electrical currents through electrodes connected to its surface. The aim of the thesis is to investigate the possible utility of this method in medical applications, namely scanning for malignancies in the female breast, by studying the interaction of tissue with the electromagnetic field and by preparing a set of corresponding numerical experiments. An approximate characterization of the method's sensitivity to noise is derived from the most basic set of such numerical experiments, which were prepared with prs4D, a complete software solution developed by the author and his advisor; some aspects of its implementation are included in the thesis.
