51 |
Analyse statistique et interprétation automatique de données diagraphiques pétrolières différées à l’aide du calcul haute performance / Statistical analysis and automatic interpretation of oil logs using high performance computing
Bruned, Vianney 18 October 2018 (has links)
Dans cette thèse, on s’intéresse à l’automatisation de l’identification et de la caractérisation de strates géologiques à l’aide des diagraphies de puits. Au sein d’un puits, on détermine les strates géologiques grâce à la segmentation des diagraphies assimilables à des séries temporelles multivariées. L’identification des strates de différents puits d’un même champ pétrolier nécessite des méthodes de corrélation de séries temporelles. On propose une nouvelle méthode globale de corrélation de puits utilisant les méthodes d’alignement multiple de séquences issues de la bio-informatique. La détermination de la composition minéralogique et de la proportion des fluides au sein d’une formation géologique se traduit en un problème inverse mal posé. Les méthodes classiques actuelles sont basées sur des choix d’experts consistant à sélectionner une combinaison de minéraux pour une strate donnée. En raison d’un modèle à la vraisemblance non calculable, une approche bayésienne approximée (ABC) aidée d’un algorithme de classification basé sur la densité permet de caractériser la composition minéralogique de la couche géologique. La classification est une étape nécessaire afin de s’affranchir du problème d’identifiabilité des minéraux. Enfin, le déroulement de ces méthodes est testé sur une étude de cas. / In this thesis, we investigate the automation of the identification and characterization of geological strata using well logs. Within a single well, geological strata are determined by segmenting the logs, which can be treated as multivariate time series. Identifying the strata of different wells from the same field requires time-series correlation methods. We propose a new global well-correlation method using multiple sequence alignment algorithms from bioinformatics. Determining the mineralogical composition and the proportion of fluids inside a geological stratum leads to an ill-posed inverse problem. Current methods are based on experts’ choices: the selection of a combination of minerals for a given stratum. Because the model has a likelihood that cannot be computed, an approximate Bayesian computation (ABC) approach assisted by a density-based clustering algorithm is used to characterize the mineral composition of the geological layer. The classification step is necessary to deal with the identifiability issue of the minerals. Finally, the workflow is tested on a case study.
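
A minimal sketch of the ABC rejection idea referred to above, using only NumPy. The forward model `simulate_logs`, the linear mixing of per-mineral log responses, the Dirichlet prior, and the acceptance rule are illustrative assumptions, not the method or data of the thesis (which couples ABC with density-based clustering).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_logs(fractions, endpoints):
    # Hypothetical forward model: linear mixing of per-mineral log responses.
    return fractions @ endpoints

def abc_rejection(observed, endpoints, n_draws=50_000, keep=500):
    n_minerals = endpoints.shape[0]
    # Draw candidate compositions on the simplex (Dirichlet prior).
    candidates = rng.dirichlet(np.ones(n_minerals), size=n_draws)
    simulated = simulate_logs(candidates, endpoints)
    distance = np.linalg.norm(simulated - observed, axis=1)
    # Keep the draws whose simulated logs are closest to the observed logs.
    return candidates[np.argsort(distance)[:keep]]

# Assumed per-mineral responses for two logs (e.g. a density and a porosity proxy).
endpoints = np.array([[2.65, 0.0], [2.71, 0.0], [1.0, 1.0]])
observed = simulate_logs(np.array([0.5, 0.3, 0.2]), endpoints)
posterior_draws = abc_rejection(observed, endpoints)
print(posterior_draws.mean(axis=0))
```

In the actual workflow the accepted draws would then be clustered to address the mineral identifiability issue mentioned above.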
|
52 |
Estudo do problema inverso em balanço populacional aplicado a degradação de polímeros. / Study of the inverse problem in population balances applied to polymer degradation.
Murilo Uliana 13 December 2011 (has links)
Algoritmos computacionais e análise matemática têm sido grandes aliados na determinação de informação quantitativa extraída de observações experimentais. No presente trabalho, estudou-se a aplicação da metodologia do problema inverso em balanço populacional que descreve como varia a distribuição de tamanhos de moléculas poliméricas durante diferentes processos de degradação de polímeros. A evolução da distribuição durante o processo de quebra pode ser descrita matematicamente por equação de balanço populacional. No assim chamado problema inverso, as distribuições medidas experimentalmente são usadas para estimar os parâmetros do balanço populacional que descrevem, por exemplo, como as taxas de quebra variam ao longo do comprimento da cadeia e como variam com o tamanho da cadeia. Este problema inverso é conhecido por seu intrínseco mal condicionamento numérico. Um algoritmo previamente desenvolvido na literatura para problemas de quebra de gotas em emulsões líquidas, baseado no conceito de auto-similaridade das distribuições, foi adaptado e aplicado no presente trabalho para o problema de quebra de cadeias poliméricas durante a degradação do polímero. Dados experimentais de diferentes processos de degradação, obtidos da literatura, foram testados: degradação de polipropileno por radicais livres gerados por peróxidos, degradação de dextrana por hidrólise ácida, degradação ultra-sônica de dextrana, degradação mecânica por cisalhamento de poliestireno, degradação enzimática de guar, e degradação ultrassônica de guar. As distribuições de taxa de quebra obtidas para os diferentes sistemas foram analisadas e interpretadas em termos das particularidades e do mecanismo de cada tipo de processo de degradação, visando um melhor entendimento fundamental dos processos. / Computational algorithms are used to obtain quantitative information from experimental observations. The aim of the present work was to apply the inverse-problem methodology to the population balance that describes the evolution of the chain length distribution in different polymer degradation processes. The time evolution of the chain length distribution during polymer breakage can be mathematically described by a population balance equation. In the so-called inverse problem, experimentally measured distributions are used to estimate the parameters of the population balance, such as how the breakage rate is distributed along the chain and how it varies with chain length. The inverse problem is known to be an ill-conditioned numerical problem. An algorithm previously developed in the literature for droplet breakage in liquid emulsions, based on the concept of self-similarity of the distributions, was adapted and applied in the present work to the problem of polymer chain scission during polymer degradation. Experimental data for the degradation of different polymers were taken from the literature and used to test the procedure: free-radical degradation of polypropylene initiated by peroxides, acid hydrolysis of dextran, ultrasonic degradation of dextran, shear-induced mechanical degradation of polystyrene, enzymatic hydrolysis of guar, and ultrasonic degradation of guar. The breakage rate distributions obtained for the different systems were analyzed and interpreted in terms of the particularities and chemical mechanisms involved in the different degradation processes, aiming at a better understanding of the fundamentals governing the processes.
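
A forward-only sketch of a discrete breakage population balance under random chain scission, to make the model structure concrete; the rate law, grid size, and time stepping are illustrative assumptions, and the inverse (self-similarity-based) estimation of the breakage rates studied in the work is not reproduced here.

```python
import numpy as np

# Forward sketch: every internal bond of a chain is assumed equally likely to break.
n_max = 200
k = 1e-3 * np.arange(n_max + 1, dtype=float)   # assumed rate, proportional to chain length
k[:2] = 0.0                                    # monomers cannot break
N = np.zeros(n_max + 1)
N[n_max] = 1.0                                 # start from a monodisperse population

dt, steps = 0.5, 4000
for _ in range(steps):
    dN = -k * N
    # A chain of length j produces two fragments, uniformly over lengths 1..j-1.
    for j in range(2, n_max + 1):
        dN[1:j] += 2.0 * k[j] * N[j] / (j - 1)
    N = N + dt * dN

lengths = np.arange(n_max + 1)
print("number-average length:", (lengths * N).sum() / N.sum())
```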
|
53 |
Classification par réseaux de neurones dans le cadre de la scattérométrie ellipsométrique / Neural classification in ellipsometric scatterometry
Zaki, Sabit Fawzi Philippe 12 December 2016 (has links)
La miniaturisation des composants impose à l’industrie de la micro-électronique de trouver des techniques de caractérisation fiables rapides et si possible à moindre coût. Les méthodes optiques telles que la scattérométrie se présentent aujourd’hui comme des alternatives prometteuses répondant à cette problématique de caractérisation. Toutefois, l’ensemble des méthodes scattérométriques nécessitent un certain nombre d’hypothèses pour assurer la résolution d’un problème inverse et notamment la connaissance de la forme géométrique de la structure à tester. Le modèle de structure supposé conditionne la qualité même de la caractérisation. Dans cette thèse, nous proposons l’utilisation des réseaux de neurones comme outils d’aide à la décision en amont de toute méthode de caractérisation. Nous avons validé l’utilisation des réseaux de neurones dans le cadre de la reconnaissance des formes géométriques de l’échantillon à tester par la signature optique utilisée dans toute étape de caractérisation scattérométrique. Tout d’abord, le cas d’un défaut lithographique particulier lié à la présence d’une couche résiduelle de résine au fond des sillons est étudié. Ensuite, nous effectuons une analyse de détection de défaut de modèle utilisé dans la résolution du problème inverse. Enfin nous relatons les résultats obtenus dans le cadre de la sélection de modèles géométriques par réseaux de neurones en amont d’un processus classique de caractérisation scattérométrique. Ce travail de thèse a montré que les réseaux de neurones peuvent bien répondre à la problématique de classification en scattérométrie ellipsométrique et que l’utilisation de ces derniers peut améliorer cette technique optique de caractérisation. / The miniaturization of components forces the micro-electronics industry to find fast, reliable and, if possible, low-cost characterization techniques. Optical methods such as scatterometry are today a promising answer to this characterization need. However, scatterometric methods require a number of assumptions to solve an inverse problem, in particular knowledge of the geometrical shape of the structure under test. The assumed model of the structure determines the quality of the characterization. In this thesis, we propose the use of neural networks as decision-making tools upstream of any characterization method. We validated the use of neural networks for recognizing the geometrical shape of the sample under test from the optical signature used in any scatterometric characterization process. First, the case of a lithographic defect due to the presence of a residual resist layer at the bottom of the grooves is studied. Then, we analyze the detection of defects in the model used to solve the inverse problem. Finally, we report results on the selection of geometric models by neural networks upstream of a classical scatterometric characterization process. This thesis demonstrates that neural networks can address the classification problem in ellipsometric scatterometry and that their use can improve this optical characterization technique.
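
An illustrative sketch of signature-based model classification with a small multilayer perceptron, assuming scikit-learn is available; the synthetic "optical signatures" and the two hypothetical profile classes (with and without a residual resist layer) stand in for the real scatterometric data and are not from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_points = 60  # samples per signature (e.g. an ellipsometric quantity vs. wavelength)

def signature(model_id, n=200):
    # Two hypothetical signature shapes plus measurement noise (purely synthetic).
    angles = np.linspace(0, 3 * np.pi, n_points)
    base = np.sin(angles) if model_id == 0 else 0.8 * np.sin(angles) + 0.1
    return base + 0.05 * rng.standard_normal((n, n_points))

X = np.vstack([signature(0), signature(1)])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The classifier would sit upstream of the characterization step, selecting which geometric model to feed to the inverse-problem solver.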
|
54 |
Model-Based Iterative Reconstruction and Direct Deep Learning for One-Sided Ultrasonic Non-Destructive Evaluation
Hani A. Almansouri (5929469) 16 January 2019 (has links)
One-sided ultrasonic non-destructive evaluation (UNDE) is extensively
used to characterize structures that need to be inspected and maintained from
defects and flaws that could affect the performance of power plants, such as
nuclear power plants. Most UNDE systems send acoustic pulses into the structure
of interest, measure the received waveform and use an algorithm to reconstruct
the quantity of interest. The most widely used algorithm in UNDE systems is the
synthetic aperture focusing technique (SAFT) because it produces acceptable
results in real time. A few regularized inversion techniques with linear models
have been proposed which can improve on SAFT, but they tend to make simplifying
assumptions that show artifacts and do not address how to obtain
reconstructions from large real data sets. In this thesis, we present two
studies. The first study covers the model-based iterative reconstruction (MBIR)
technique which is used to resolve some of the issues in SAFT and the current
linear regularized inversion techniques, and the second study covers the direct
deep learning (DDL) technique which is used to further resolve issues related
to non-linear interactions between the ultrasound signal and the specimen.

In the first study, we propose a model-based iterative
reconstruction (MBIR) algorithm designed for scanning UNDE systems. MBIR
reconstructs the image by optimizing a cost function that contains two terms:
the forward model that models the measurements and the prior model that models
the object. To further reduce some of the artifacts in the results, we enhance
the forward model of MBIR to account for the direct arrival artifacts and the
isotropic artifacts. The direct arrival signals are the signals received
directly from the transmitter without being reflected. These signals contain no
useful information about the specimen and produce high amplitude artifacts in
regions close to the transducers. We resolve this issue by modeling these direct
arrival signals in the forward model to reduce their artifacts while
maintaining information from reflections of other objects. Next, the isotropic
artifacts appear when the transmitted signal is assumed to propagate in all
directions equally. Therefore, we modify our forward model to resolve this issue
by modeling the anisotropic propagation. Next, because of the significant
attenuation of the transmitted signal as it propagates through deeper regions,
the reconstruction of deeper regions tends to be much dimmer than closer
regions. Therefore, we combine the forward model with a spatially variant prior
model to account for the attenuation by reducing the regularization as the
pixel gets deeper. Next, for scanning large structures, multiple scans are
required to cover the whole field of view. Typically, these scans are performed
in raster order which makes adjacent scans share some useful correlations.
Reconstructing each scan individually and performing a conventional stitching
method is not an efficient way because this could produce stitching artifacts
and ignore extra information from adjacent scans. We present an algorithm to
jointly reconstruct measurements from large data sets that reduces the
stitching artifacts and exploits useful information from adjacent scans. Next,
using simulated and extensive experimental data, we show MBIR results and
demonstrate how we can improve over SAFT as well as existing regularized
inversion techniques. However, even with this improvement, MBIR still results
in some artifacts caused by the inherent non-linearity of the interaction
between the ultrasound signal and the specimen.
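
A toy sketch of the MBIR idea on a generic linear system, assuming NumPy: the cost is a data-fidelity term plus a smoothness prior, and since both terms are quadratic the minimizer follows from the normal equations. The thesis forward model (direct-arrival and anisotropy corrections) and its spatially variant prior are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pix = 120, 64
A = rng.standard_normal((n_meas, n_pix))            # generic linear forward model
x_true = np.zeros(n_pix)
x_true[20:24] = 1.0                                  # a small reflector
y = A @ x_true + 0.05 * rng.standard_normal(n_meas) # noisy measurements

lam = 5.0                                            # regularization strength
D = np.eye(n_pix) - np.eye(n_pix, k=1)               # finite-difference (smoothness) prior

def cost(x):
    # Forward-model term plus prior term, as in the MBIR formulation above.
    return np.sum((y - A @ x) ** 2) + lam * np.sum((D @ x) ** 2)

# Both terms are quadratic, so the minimizer solves the normal equations directly.
x_mbir = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("reconstruction error:", np.linalg.norm(x_mbir - x_true))
```
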
In the second study, we propose DDL, a non-iterative model-based
reconstruction method for inverting measurements that are based on non-linear
forward models for ultrasound imaging. Our approach involves obtaining an
approximate estimate of the reconstruction using a simple linear back-projection
and training a deep neural network to refine this to the actual reconstruction.
While the technique we are proposing can show significant enhancement compared
to the current techniques with simulated data, one issue appears with the
performance of this technique when applied to experimental data. The issue is a
modeling mismatch between the simulated training data and the real data. We
propose an effective solution that can reduce the effect of this modeling
mismatch by adding noise to the simulation input of the training set before
simulation. This solution trains the neural network on the general features of
the system rather than specific features of the simulator and can act as a
regularization to the neural network. Another issue appears similar to the
issue in MBIR caused by the attenuation of deeper reflections. Therefore, we
propose a spatially variant amplification technique applied to the
back-projection to amplify deeper regions. Next, to reconstruct from a large
field of view that requires multiple scans, we propose a joint deep neural
network technique to jointly reconstruct an image from these multiple scans.
Finally, we apply DDL to simulated and experimental ultrasound data to
demonstrate significant improvements in image quality compared to the
delay-and-sum approach and the linear model-based reconstruction approach.
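
A toy sketch of the DDL pipeline, assuming NumPy and scikit-learn: training pairs are simulated from a generic linear system, noise is added to the simulator input (the regularizing trick described above), a simple linear back-projection is formed, and a small network is trained to map back-projections to images. None of this is the thesis code, model, or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_meas, n_pix, n_train = 80, 32, 2000
A = rng.standard_normal((n_meas, n_pix))            # stand-in linear forward model

X_bp, Y_img = [], []
for _ in range(n_train):
    x = (rng.random(n_pix) < 0.1).astype(float)      # sparse synthetic reflectors
    x_noisy = x + 0.05 * rng.standard_normal(n_pix)  # noise added before simulation
    y = A @ x_noisy + 0.02 * rng.standard_normal(n_meas)
    X_bp.append(A.T @ y)                             # simple linear back-projection
    Y_img.append(x)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(np.array(X_bp), np.array(Y_img))

x_test = np.zeros(n_pix)
x_test[10] = 1.0
y_test = A @ x_test
print(np.round(net.predict((A.T @ y_test).reshape(1, -1))[0], 2))
```

The noise injected into the simulator input plays the regularizing role described above, discouraging the network from learning simulator-specific features.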
|
55 |
Modélisation et imagerie électrocardiographiques / Modeling and imaging of electrocardiographic activity
El Houari, Karim 14 December 2018 (has links)
L'estimation des solutions du problème inverse en Électrocardiographie (ECG) représente un intérêt majeur dans le diagnostic et la thérapie d'arythmies cardiaques par cathéter. Ce dernier consiste à fournir des images 3D de la distribution spatiale de l'activité électrique du cœur de manière non-invasive à partir des données anatomiques et électrocardiographiques. D'une part ce problème est rendu difficile à cause de son caractère mal-posé. D'autre part, la validation des méthodes proposées sur données cliniques reste très limitée. Une alternative consiste à évaluer ces méthodes sur des données simulées par un modèle électrique cardiaque. Pour cette application, les modèles existants sont soit trop complexes, soit ne produisent pas un schéma de propagation cardiaque réaliste. Dans un premier temps, nous avons conçu un modèle cœur-torse basse-résolution qui génère des cartographies cardiaques et des ECGs réalistes dans des cas sains et pathologiques. Ce modèle est bâti sur une géométrie coeur-torse simplifiée et implémente le formalisme monodomaine en utilisant la Méthode des Éléments Finis (MEF). Les paramètres ont été identifiés par une approche évolutionnaire et leur influence a été analysée par une méthode de criblage. Dans un second temps, une nouvelle approche pour résoudre le problème inverse a été proposée et comparée aux méthodes classiques dans les cas sains et pathologiques. Cette méthode utilise un a priori spatio-temporel sur l'activité électrique cardiaque ainsi que le principe de contradiction afin de trouver un paramètre de régularisation adéquat. / The estimation of solutions of the inverse problem of electrocardiography (ECG) is of major interest for the diagnosis and catheter-based therapy of cardiac arrhythmia. It consists in non-invasively providing 3D images of the spatial distribution of cardiac electrical activity from anatomical and electrocardiographic data. On the one hand, this problem is challenging due to its ill-posed nature. On the other hand, validation of the proposed methods on clinical data remains very limited. Another way to proceed is to evaluate these methods' performance on data simulated by a cardiac electrical model. For this application, existing models are either too complex or do not produce realistic cardiac patterns. As a first step, we designed a low-resolution heart-torso model that generates realistic cardiac maps and ECGs in healthy and pathological cases. This model is built upon a simplified heart-torso geometry and implements the monodomain formalism using the Finite Element Method (FEM). Parameters were identified using an evolutionary approach and their influence was analyzed by a screening method. In a second step, a new approach for solving the inverse problem was proposed and compared to classical methods in healthy and pathological cases. This method uses a spatio-temporal prior on the cardiac electrical activity and the discrepancy principle to find an adequate regularization parameter.
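
A small sketch of Tikhonov regularization with the discrepancy principle on a toy linear inverse problem, assuming NumPy; the actual ECG imaging operator and the spatio-temporal prior used in the thesis are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 80
A = rng.standard_normal((m, n)) / np.sqrt(n)     # underdetermined, ill-posed toy operator
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
sigma = 0.01                                     # assumed known noise level
y = A @ x_true + sigma * rng.standard_normal(m)

def tikhonov(lam):
    # Minimizer of ||y - A x||^2 + lam * ||x||^2.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Discrepancy principle: decrease lambda until the residual matches the noise level.
target = sigma * np.sqrt(m)
for lam in np.logspace(2, -8, 200):
    if np.linalg.norm(y - A @ tikhonov(lam)) <= target:
        break
print("chosen lambda:", lam, "residual:", np.linalg.norm(y - A @ tikhonov(lam)))
```
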
|
56 |
Tensor tomography
Desai, Naeem January 2018 (has links)
Rich tomography is becoming increasingly popular since we have seen a substantial increase in computational power and storage. Instead of measuring one scalar for each ray, multiple measurements are needed per ray for various imaging modalities. This advancement has allowed the design of experiments and equipment which facilitate a broad spectrum of applications. We present new reconstruction results and methods for several imaging modalities including x-ray diffraction strain tomography, photoelastic tomography and Polarimetric Neutron Magnetic Field Tomography (PNMFT). We begin with a survey of the Radon and x-ray transforms, discussing several procedures for inversion. Furthermore we highlight the Singular Value Decomposition (SVD) of the Radon transform and consider some stability results for reconstruction in Sobolev spaces. We then move on to define the Non-Abelian Ray Transform (NART), Longitudinal Ray Transform (LRT), Transverse Ray Transform (TRT) and the Truncated Transverse Ray Transform (TTRT), where we highlight some results on the complete inversion procedure and the SVD, and mention stability results in Sobolev spaces. Thereafter we derive some relations between these transforms. Next we discuss the imaging modalities of interest and relate the transforms to their specific inverse problems, which are primarily linear. Specifically, NART arises in the formulation of PNMFT, where we want to image magnetic structures within magnetic materials with the use of polarized neutrons. After some initial numerical studies we extend the known Radon inversion presented by experimentalists, reconstructing fairly weak magnetic fields, to reconstruct PNMFT data up to phase wrapping. We can recover the strain field tomographically for a polycrystalline material using diffraction data and deduce that a certain moment of that data corresponds to the TRT. Quite naturally the whole strain tensor can be reconstructed from diffraction data measured using rotations about six axes. We develop an innovative explicit plane-by-plane filtered back-projection reconstruction algorithm for the TRT, using data from rotations about three orthogonal axes, and explain why two-axis data is insufficient. We give the first published results of TRT reconstruction. To complete our discussion we present photoelastic tomography, which relates to the TTRT, and implement the corresponding algorithm, discussing the difficulties that arise in reconstructing data. Ultimately we return to PNMFT, highlighting the nonlinear inverse problem due to phase wrapping. We propose an iterative reconstruction algorithm, namely the Modified Newton-Kantorovich (MNK) method, in which we keep the Jacobian (Fréchet derivative) fixed at the first step. However, this is shown to fail for large angles, which motivates the Newton-Kantorovich (NK) method, in which we update the Jacobian at each step of the iteration.
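
A scalar sanity check of the Radon transform and filtered back-projection, assuming scikit-image is available; the tensor transforms discussed above (NART, LRT, TRT, TTRT) require per-ray tensor or polarization data and are not reproduced by this sketch.

```python
import numpy as np
from skimage.transform import radon, iradon

# A simple rectangular phantom inside the reconstruction circle.
image = np.zeros((128, 128))
image[40:60, 50:90] = 1.0

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)                       # forward (scalar) Radon transform
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection
print("max abs error:", np.abs(recon - image).max())
```
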
|
57 |
Um problema inverso na modelagem da difusão do calor / An inverse problem in modeling the diffusion of heat
Jhoab Pessoa de Negreiros 24 August 2010 (has links)
O presente trabalho aborda um problema inverso associado à difusão de calor em uma barra unidimensional. Esse fenômeno é modelado por meio da equação diferencial parcial parabólica u_t = u_xx, conhecida como equação de difusão do calor. O problema clássico (problema direto) envolve essa equação e um conjunto de restrições (as condições inicial e de contorno), o que permite garantir a existência de uma solução única. No problema inverso que estudamos, o valor da temperatura em um dos extremos da barra não está disponível. Entretanto, conhecemos o valor da temperatura em um ponto x0 fixo no interior da barra. Para aproximar o valor da temperatura no intervalo à direita de x0, propomos e testamos três algoritmos de diferenças finitas: diferenças regressivas, leap-frog e diferenças regressivas maquiadas. / This work deals with an inverse problem for the heat diffusion in a bar of size L. This one-dimensional phenomenon is modeled by the parabolic partial differential equation u_t = u_xx, known as the heat diffusion equation. The classic problem (direct problem) involves this equation coupled to a set of constraints (initial and boundary conditions) in such a way as to guarantee a unique solution for it. The inverse problem considered here may be described in the following way: at one extreme point of the bar the temperature is unknown, but it is given at a fixed interior point for all time. Three finite difference algorithms (backward differences, leap-frog, disguised backward differences) are proposed and tested to approximate solutions for this problem.
Keywords: Diffusion equation. Finite differences. Inverse problem.
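
A sketch of the space-marching ("backward differences") idea on synthetic data, assuming NumPy: the direct problem is solved first, then the region to the right of x0 is reconstructed from two adjacent interior temperature histories. The grid sizes and the use of two interior columns are illustrative assumptions rather than the exact setup of the dissertation, and error growth away from x0 is to be expected given the ill-conditioning noted above.

```python
import numpy as np

L, T = 1.0, 0.1
nx, nt = 51, 2001
dx, dt = L / (nx - 1), T / (nt - 1)
r = dt / dx**2                        # must be <= 0.5 for the explicit direct solver

x = np.linspace(0.0, L, nx)
u = np.zeros((nt, nx))
u[0] = np.sin(np.pi * x)              # initial condition, zero Dirichlet boundaries

for n in range(nt - 1):               # direct problem: explicit FTCS scheme
    u[n + 1, 1:-1] = u[n, 1:-1] + r * (u[n, 2:] - 2 * u[n, 1:-1] + u[n, :-2])

j0 = 30                               # interior measurement column (x0 = x[j0])
v = np.zeros_like(u)
v[:, j0 - 1], v[:, j0] = u[:, j0 - 1], u[:, j0]

for j in range(j0, nx - 1):           # space marching to the right of x0
    # u_xx = u_t, with u_t approximated by a backward difference in time
    ut = (v[1:, j] - v[:-1, j]) / dt
    v[1:, j + 1] = 2 * v[1:, j] - v[1:, j - 1] + dx**2 * ut
    v[0, j + 1] = u[0, j + 1]         # initial condition is known everywhere

print("error at right boundary:", abs(v[-1, -1] - u[-1, -1]))
```
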
|
58 |
Um método de identificação de fontes de vibração em vigas. / A method of identification of sources of vibrations in beams.
Nunes, Luis Flávio Soares 22 November 2012 (has links)
Neste trabalho, procuramos resolver o problema direto da equação da viga de Euler-Bernoulli bi-engastada com condições iniciais nulas. Estudamos o problema inverso da viga, que consiste em identificar a fonte de vibração, modelada como um elemento em L2, usando como dado a velocidade de um ponto arbitrário da viga, durante um intervalo de tempo arbitrariamente pequeno. A relevância deste trabalho na Engenharia encontra-se, por exemplo, na identificação de danos estruturais em vigas. / In this work, we try to solve the direct problem of the clamped-clamped Euler-Bernoulli beam equation with zero initial conditions. We study the inverse problem for the beam, which consists in identifying the source of vibration, modeled as an element of L2, using as data the velocity at an arbitrary point of the beam during an arbitrarily small time interval. The relevance of this work in engineering lies, for example, in the identification of structural damage in beams.
|
59 |
Mathematical Modeling of Immune Responses to Hepatitis C Virus Infection
Ramirez, Ivan 01 December 2014 (has links)
An existing mathematical model of ordinary differential equations was studied to better understand the interactions between hepatitis C virus (HCV) and the immune system cells in the human body. Three possible qualitative scenarios were explored: dominant CTL response, dominant antibody response, and coexistence. Additionally, a sensitivity analysis was carried out to rank model parameters for each of these scenarios. Therapy was addressed as an optimal control problem. Numerical solutions of optimal controls were computed using a forward-backward sweep scheme for each scenario. Model parameters were estimated using ordinary least squares fitting from longitudinal data (serum HCV RNA measurements) given in reported literature.
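
A sketch of the forward-backward sweep on a scalar toy control problem (minimize the integral of A*x^2 + B*u^2 subject to x' = -a*x + u), assuming NumPy; the HCV immune-response model of the thesis has several state and adjoint equations, but the sweep structure is the same.

```python
import numpy as np

a, A, B = 1.0, 2.0, 1.0        # toy model and cost parameters (illustrative only)
T, N = 1.0, 1000
h = T / N
x0 = 1.0

u = np.zeros(N + 1)                        # initial control guess
for _ in range(100):
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):                     # forward sweep for the state (explicit Euler)
        x[k + 1] = x[k] + h * (-a * x[k] + u[k])

    lam = np.empty(N + 1)
    lam[-1] = 0.0
    for k in range(N, 0, -1):              # backward sweep for the adjoint
        lam[k - 1] = lam[k] - h * (-2 * A * x[k] + a * lam[k])

    u_new = -lam / (2 * B)                 # optimality condition dH/du = 0
    u = 0.5 * u + 0.5 * u_new              # relaxation for stability

print("control at t=0:", u[0])
```
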
|
60 |
APPROXIMATIONS IN RECONSTRUCTING DISCONTINUOUS CONDUCTIVITIES IN THE CALDERÓN PROBLEM
Lytle, George H. 01 January 2019 (has links)
In 2014, Astala, Päivärinta, Reyes, and Siltanen conducted numerical experiments reconstructing a piecewise continuous conductivity. The algorithm of the shortcut method is based on the reconstruction algorithm due to Nachman, which assumes a priori that the conductivity is Hölder continuous. In this dissertation, we prove that, in the presence of infinite-precision data, this shortcut procedure accurately recovers the scattering transform of an essentially bounded conductivity, provided it is constant in a neighborhood of the boundary. In this setting, Nachman’s integral equations have a meaning and are still uniquely solvable.
To regularize the reconstruction, Astala et al. employ a high frequency cutoff of the scattering transform. We show that such scattering transforms correspond to Beltrami coefficients that are not compactly supported, but exhibit certain decay at infinity. For this class of Beltrami coefficients, we establish that the complex geometric optics solutions to the Beltrami equation exist and exhibit the same subexponential decay as described in the 2006 work of Astala and Päivärinta. This is a first step toward extending the inverse scattering map of Astala and Päivärinta to non-compactly supported conductivities.
|