231

Problèmes inverses pour le diagnostic de câbles électriques à partir de mesures de réflectométrie / Inverse problems for diagnosis of electric cables from reflectometry measurement

Berrabah, Nassif 08 November 2017 (has links)
Les câbles électriques sont présents dans de nombreux produits et systèmes où ils sont utilisés pour transmettre des données ou transporter de l'énergie. Ces liaisons sont la plupart du temps installées pour des durées d'exploitation longues au cours desquelles elles doivent subir l'usure du temps, ainsi que celle résultant d'un environnement parfois agressif. Alors que les câbles électriques assurent des fonctions essentielles et dans certains cas critiques, ils sont aussi sujets à des défaillances qui découlent des contraintes qu'ils endurent. Ceci explique la nécessité de surveiller leur état, afin de détecter au plus tôt les défauts naissants et d'intervenir avant qu'ils ne dégénèrent en dommages dont les conséquences peuvent être préjudiciable et économiquement lourdes. L'entreprise EDF est particulièrement concernée par cette problématique dans la mesure ou elle exploite des longueurs considérables de câbles pour le transport et la distribution d'électricité sur tout le territoire bien sûr, mais aussi au sein des centrales qui produisent l'électricité, pour alimenter les différents organes, et acheminer commandes et mesures. L'entreprise, attentive à ce que ces câbles soient en bon état de fonctionnement mène plusieurs travaux, d'une part pour étudier leur vieillissement et modes de dégradation, et d'autre part pour développer des méthodes et outils pour la surveillance et le diagnostic de ces composants essentiels. Le projet EDF CAIMAN (Cable AgIng MANagement) commandé par le SEPTEN (Service Etudes et Projets Thermiques Et Nucléaires) traite de ces questions, et les travaux présentés dans cette thèse ont été conduits dans ce cadre et sont le fruit d'une collaboration avec Inria (Institut National de Recherche en Informatique et Automatique). Partant du constat que les méthodes de diagnostic de câbles existantes à l'heure actuelle ne donnent pas pleine satisfaction, nous nous sommes donné pour objectif de développer des outils nouveaux. En effet, les techniques actuelles reposent sur différents moyens dont des tests destructifs, des prélèvements pour analyse en laboratoire, et des mesures sur site mais qui ne permettent pas de diagnostiquer certains défauts. Parmi les techniques non destructives, la réflectométrie, dont le principe est d'injecter un signal électrique à une extrémité du câble et d'analyser les échos, souffre aussi de certaines de ces limitations. En particulier, les défauts non-francs restent encore difficiles à détecter. Toutefois les travaux qui se multiplient autour de cette technique tentent d'en améliorer les performances, et certains obtiennent des résultats prometteurs. Les chercheurs de l'Inria qui travaillent sur le sujet ont développé des algorithmes pour exploiter des mesures de réflectométrie. En résolvant un problème inverse, les paramètres d'un modèle de câble sont estimés et servent alors d'indicateurs de l'état de dégradation du câble testé. L'objectif de cette thèse est d'étendre ces méthodes pour répondre aux besoins spécifiques d'EDF. Un des principaux défis auquel nous avons apporté une solution est la prise en compte des pertes ohmiques dans la résolution du problème inverse. Plus spécifiquement, notre contribution principale est une méthode d'estimation du profil de résistance linéique d'un câble. Cette estimation permet de révéler les défauts résistifs qui produisent souvent des réflexions faibles dans les réflectogrammes habituels. Une seconde contribution vise à améliorer la qualité des données utilisées par cette méthode d'estimation. 
Ainsi, nous proposons un pré-traitement des mesures dont le but est de gommer l'effet de la désadaptation des instruments aux câbles ou celui des connecteurs. Ces travaux apportent de nouveaux outils pour l'exploitation des mesures de réflectométrie et des solutions pour le diagnostic de certains défauts encore difficiles à détecter aujourd'hui. / Electric cables are ubiquitous in many devices and systems, where they are used for data or power transmission. These links are most often installed for long periods of operation, during which they age and are sometimes exposed to harsh environments. While electric cables fulfill important and sometimes critical functions, they may fail because of the constraints they have to endure. This motivates the need for monitoring tools, in order to detect incipient faults and intervene as early as possible, before they develop into heavier damage whose consequences can be detrimental and expensive. EDF is particularly concerned by this problem insofar as it operates considerable lengths of cable for energy distribution, but also inside power plants, to supply the various apparatus and to route data and transmit measurements. The company has been leading several studies on cable aging, cable faults, and wire diagnosis methods. The CAIMAN project (Cable AgIng MANagement), sponsored by the Engineering Department of Nuclear and Thermal Projects (SEPTEN), deals with these questions. The work presented in this dissertation was carried out in this context and results from a collaboration with Inria (French National Institute for Research in Applied Mathematics and Computer Sciences). Starting from the observation that existing cable diagnosis methods are not fully satisfactory, we set out to develop new tools to improve the state of the art. Existing techniques rely on a range of tests, some of which are destructive or require laboratory investigations, but they still cannot detect certain kinds of faults. Among the major techniques, reflectometry gives the most promising results. Its principle is similar to that of radar: a wave is sent down the cable from one end, and the reflected signal is analysed for signs of faults. Yet this method also has limitations, and soft faults remain hard to detect. Researchers and industry are multiplying investigations into reflectometry-based techniques, and some obtain interesting results. Scientists at Inria have developed algorithms for cable parameter estimation from reflectometry measurements, following an inverse-problem approach. The goal of our work was to extend these methods to meet the specific needs of EDF. One of the main challenges we addressed was taking electric losses into account in the resolution of the inverse problem. Our main contribution is a method to estimate the per-unit-length resistance profile of a cable. This estimation reveals resistive faults, which most often produce only weak reflections in standard reflectograms. A further contribution concerns the improvement of this method through a pre-processing of the data whose role is to erase the effect of impedance mismatches. This work breaks new ground in the domain of reflectometry-based wire diagnosis techniques.
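The abstract stays at the level of ideas, but the forward/inverse split it describes can be pictured with a very small numerical sketch. The Python snippet below (all cable parameters and noise levels invented for illustration, not taken from the thesis) builds the standard frequency-domain model of a uniform lossy RLCG transmission line, generates a synthetic noisy reflection measurement, and recovers a single per-unit-length resistance value by least squares; it is a toy stand-in for the thesis's estimation method, not a reproduction of it.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy forward model: input reflection coefficient of a uniform RLCG line
# of length ell terminated in a load Z_load, measured against Z_ref.
def reflection(freq, R, L=2.5e-7, C=1.0e-10, G=1e-10, ell=100.0,
               Z_load=1e9, Z_ref=50.0):
    w = 2 * np.pi * freq
    series = R + 1j * w * L          # per-unit-length series impedance
    shunt = G + 1j * w * C           # per-unit-length shunt admittance
    Z0 = np.sqrt(series / shunt)     # characteristic impedance
    gamma = np.sqrt(series * shunt)  # propagation constant
    t = np.tanh(gamma * ell)
    Z_in = Z0 * (Z_load + Z0 * t) / (Z0 + Z_load * t)
    return (Z_in - Z_ref) / (Z_in + Z_ref)

rng = np.random.default_rng(0)
freq = np.linspace(1e5, 1e8, 400)
R_true = 0.8                         # ohm/m, the resistive "fault" to recover
data = reflection(freq, R_true)
data = data + 0.01 * (rng.standard_normal(freq.size)
                      + 1j * rng.standard_normal(freq.size))

# Inverse step: least-squares fit of the per-unit-length resistance.
def misfit(R):
    return np.sum(np.abs(reflection(freq, R) - data) ** 2)

res = minimize_scalar(misfit, bounds=(1e-3, 10.0), method="bounded")
print(f"estimated R = {res.x:.3f} ohm/m (true value {R_true})")
```

A real cable calls for a spatially varying resistance profile and a distributed inverse solver; the point here is only the structure: measured reflection, forward model, misfit minimization.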
232

Estudo numérico sobre a determinação de parâmetros em um sólido elástico-linear e incompressível / A numerical study about the determination of parameters in an incompressible and linearly elastic solid

Prado, Edmar Borges Theóphilo 09 June 2008 (has links)
A teoria de elasticidade linear clássica é utilizada no modelamento de problemas da física médica relacionados com a determinação de parâmetros elásticos de tecidos biológicos a partir da medição in vivo, ou, in vitro dos deslocamentos, ou, das deformações. Baseados em observações experimentais, as quais revelam que os tecidos biológicos anômalos têm comportamento mecânico diferente dos tecidos biológicos sadios, os pesquisadores têm modelado estes tecidos como sólidos elástico-lineares, isotrópicos, heterogêneos e incompressíveis. Neste trabalho, analisa-se uma classe de problemas planos relacionados à determinação do módulo de elasticidade ao cisalhamento µ de tecidos biológicos e propõe-se um procedimento numérico não-iterativo para obter soluções aproximadas para estes problemas a partir de campos de deslocamentos conhecidos de ensaios possíveis de serem realizados em laboratório. Os ensaios são estáticos e são simulados numericamente via método dos elementos finitos. Apresentam-se resultados obtidos das distribuições de µ em cilindros retos, longos e de seção retangular contendo inclusões cilíndricas circulares centradas, ou, excêntricas. Consideram-se inclusões mais, ou, menos rígidas do que o meio elástico circundante. Adicionalmente, os resultados obtidos no presente trabalho são comparados com resultados de outros pesquisadores que utilizam ensaios dinâmicos. Neste sentido, dois casos de inclusão circular centrada são resolvidos com as condições de contorno adaptadas do caso dinâmico para o caso estático. Resolve-se finalmente o caso de uma inclusão de forma geométrica complexa e seis vezes mais rígida do que o entorno. O cilindro contendo esta inclusão está submetido às condições de contorno propostas neste trabalho e também às condições de contorno adaptadas do caso dinâmico. Em todos os casos analisados os resultados são satisfatórios, apesar do emprego de uma quantidade reduzida de elementos finitos na reconstrução de µ. Deve-se ressaltar que nenhum método de regularização foi utilizado para tratar os deslocamentos obtidos dos ensaios simulados. Este trabalho é de grande interesse na detecção de tumores cancerígenos, tais como tumores nos seios e na próstata, e no diagnóstico diferenciado de tecidos biológicos. / The theory of classical linear elasticity is used to model problems in medical physics related to the determination of elastic parameters of biological tissues from in vivo or in vitro measurements of either displacements or strains. Based on experimental observations, which indicate that abnormal biological tissues have a mechanical behavior different from that of normal tissues, researchers have modeled these tissues as incompressible, heterogeneous, isotropic, linearly elastic solids. In this work a class of plane problems related to the determination of the shear elastic modulus µ of biological tissues is examined. A non-iterative numerical procedure is proposed to obtain approximate solutions to these problems from known displacement fields. The displacement fields are obtained from experiments that can be reproduced in the laboratory. The experiments are quasi-static and are simulated numerically using the finite element method. Results are presented for the distribution of µ in long, straight cylinders of rectangular cross-section containing either centered or eccentric circular inclusions that are more or less stiff than the surrounding elastic medium.
Additionally, the results obtained in this study are compared with those of other researchers who use dynamic experiments. To this end, two cases of a centered circular inclusion are solved using boundary conditions adapted from the dynamic case to the static case. Finally, the case of an inclusion with a complex geometry that is six times stiffer than the surrounding medium is solved, with the cylinder subjected both to the boundary conditions proposed in this work and to those adapted from the dynamic case. In all cases analyzed the results are satisfactory, even though they were obtained with a reduced number of finite elements in the reconstruction of µ. It should be noted that no regularization was applied to the displacement data obtained from the simulated experiments. This work is of great interest for the detection of cancerous tumours, such as those of the breast and prostate, and for the differential diagnosis of biological tissues.
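For readers who want a concrete, if drastically simplified, picture of this kind of direct (non-iterative) parameter recovery, the Python sketch below works through a 1-D analogue: in a bar in static equilibrium with no body force the stress is uniform, so a measured strain field determines the modulus profile up to a single reference value. All numbers are invented; the thesis itself treats plane, incompressible problems with finite-element simulated displacement fields.

```python
import numpy as np

# 1-D analogue of the direct (non-iterative) reconstruction idea: for a bar in
# static equilibrium with no body force, the axial stress sigma is uniform, so
# the measured strain field determines the modulus profile up to one reference
# value: E(x) = sigma / eps(x), with sigma fixed by E at a calibration point.
n = 200
x = np.linspace(0.0, 1.0, n)
E_true = np.where(np.abs(x - 0.5) < 0.1, 6.0, 1.0)   # stiff "inclusion", 6x background

sigma = 1.0                       # unknown in practice, eliminated below
eps_measured = sigma / E_true     # synthetic "measured" strain field
eps_measured *= 1 + 0.01 * np.random.default_rng(0).standard_normal(n)  # 1% noise

E_ref = 1.0                       # assumed known at the calibration point x = 0
E_recovered = E_ref * eps_measured[0] / eps_measured

err = np.max(np.abs(E_recovered - E_true) / E_true)
print(f"max relative error in recovered modulus: {err:.2%}")
```

The need for a reference value in this toy echoes the general observation that modulus reconstructions from interior displacement data alone are determined only up to a calibration constant.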
233

Quelques contributions à la modélisation numérique de structures élancées pour l'informatique graphique / Some contributions to the numerical modeling of slender structures for computer graphics

Casati, Romain 26 June 2015 (has links)
Il est intéressant d'observer qu'une grande partie des objets déformables qui nous entourent sont caractérisés par une forme élancée : soit filiforme, comme les cheveux, les plantes, les fils ; soit surfacique, comme le papier, les feuilles d'arbres, les vêtements ou la plupart des emballages. Simuler (numériquement) la mécanique de telles structures présente alors un intérêt certain : cela permet de prédire leur comportement dynamique, leur forme statique ou encore les efforts qu'elles subissent. Cependant, pour pouvoir réaliser correctement ces simulations, plusieurs problèmes se posent. Les modèles (mécaniques, numériques) utilisés doivent être adaptés aux phénomènes que l'on souhaite reproduire ; le modèle mécanique choisi doit pouvoir être traité numériquement ; enfin, il est nécessaire de connaître les paramètres du modèle qui permettront de reproduire l'instance du phénomène souhaitée. Dans cette thèse nous abordons ces trois points, dans le cadre de la simulation de structures élancées.Dans la première partie, nous proposons un modèle discret de tiges de Kirchhoff dynamiques, de haut degré, basé sur des éléments en courbures et torsion affines par morceaux : les Super-Clothoïdes 3D. Cette discrétisation spatiale est calculée de manière précise grâce à une méthode dédiée, adaptée à l'arithmétique flottante, utilisant des développements en séries entières. L'utilisation des courbures et de la torsion comme degrés de liberté permet d'aboutir à un schéma d'intégration stable grâce à une implicitation, à moindres frais, des forces élastiques. Le modèle a été utilisé avec succès pour simuler la croissance de plantes grimpantes ou le mouvement d'une chevelure. Nos comparaisons avec deux modèles de référence de la littérature ont montré que pour des tiges bouclées, notre approche offre un meilleur compromis en termes de précision spatiale, de richesse de mouvements générés et d'efficacité en temps de calcul.Dans la seconde partie, nous nous intéressons à l'élaboration d'un algorithme capable de retrouver la géométrie au repos (non déformée) d'une coque en contact frottant, connaissant sa forme à l'équilibre et les paramètres physiques du matériau qui la compose. Un tel algorithme trouve son intérêt lorsque l'on souhaite simuler un objet pour lequel on dispose d'une géométrie (numérisée) « à l'équilibre » mais dont on ne connaît pas la forme au repos. En informatique graphique, un exemple d'application est la modélisation de vêtements virtuels sous la gravité et en contact avec d'autres objets : simplement à partir de la forme objectif et d'un simulateur de vêtement, le but consiste à identifier automatiquement les paramètres du simulateur tels que la forme d'entrée corresponde à un équilibre mécanique stable. La formulation d'un tel problème inverse comme un problème aux moindres carrés nous permet de l'attaquer avec la méthode de l'adjoint. Cependant, la multiplicité des équilibres, donnant au problème direct son caractère mal posé, nous conduit à « guider » la méthode en pénalisant les équilibres éloignés de la forme objectif. On montre enfin qu'il est possible de considérer du contact et du frottement solide dans l'inversion, en reformulant le calcul d'équilibres en un problème d'optimisation sous contraintes coniques, et en adaptant la méthode de l'adjoint à ce cas non-régulier. Les résultats que nous avons obtenus sont très encourageants et nous ont permis de résoudre des cas complexes où l'algorithme se comportait de manière intuitive. 
/ It is interesting to observe that many of the deformable objects around us are characterized by a slender structure: either in one dimension, like hair, plants or strands, or in two dimensions, such as paper, the leaves of trees or clothes. Simulating the mechanical behavior of such structures numerically is useful to predict their static shape, their dynamics, or the stress they undergo. However, to perform these simulations, several problems need to be addressed. First, the model (mechanical, numerical) should be adapted to the phenomena it is meant to reproduce. Then, the chosen mechanical model should be discretized consistently. Finally, it is necessary to identify the parameters of the model in order to reproduce a specific instance of the phenomenon. In this thesis we discuss these three points in the context of the simulation of slender structures. In the first part, we propose a discrete dynamic Kirchhoff rod model of high degree, based on elements with piecewise affine curvature and twist: the Super-Space-Clothoids. This spatial discretization is computed accurately through a dedicated method, adapted to floating-point arithmetic, using power series expansions. The use of curvature and twist as degrees of freedom allows us to make the elastic forces implicit in the integration scheme. The model has been used successfully to simulate the growth of climbing plants or hair motion. Our comparisons with two reference models have shown that in the case of curly rods, our approach offers the best trade-off in terms of spatial accuracy, richness of motion and computational efficiency. In the second part, we focus on identifying the undeformed configuration of a shell in the presence of frictional contact forces, knowing its shape at equilibrium and the physical parameters of the material. Such a method is of utmost interest in Computer Graphics when, for example, a user wishes to model a virtual garment under gravity and in contact with other objects, without having to reason about the underlying physics. The goal is then to interpret the shape and provide the right ingredients to the cloth simulator, so that the cloth is actually at equilibrium when matching the input shape. To tackle this inverse problem, we propose a least-squares formulation which can be optimized using the adjoint method. However, the multiplicity of equilibria, which makes the problem ill-posed, leads us to "guide" the optimization by penalizing shapes that are far from the target shape. Finally, we show how frictional contact can be taken into account in the inversion process by reformulating the computation of equilibrium as an optimization problem subject to conical constraints; the adjoint method is also adapted to this non-regular case. The results we obtain are very encouraging and have allowed us to solve complex cases in which the algorithm behaves intuitively.
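The first contribution is easiest to picture in the plane: an element whose curvature varies affinely with arc length is a clothoid arc, and a rod is a chain of such elements with continuous tangent and curvature at the junctions. The Python sketch below integrates two planar elements by plain quadrature; the actual model works in 3-D with curvature and twist and evaluates the geometry with dedicated power-series expansions, so this is only a geometric illustration, and the function name clothoid_segment is mine.

```python
import numpy as np

# A 2-D sketch of the geometric idea behind curvature-based rod elements:
# on each element the curvature varies affinely with arc length,
# kappa(s) = kappa0 + c * s (a clothoid arc), and the centerline is recovered
# by integrating the tangent angle. Here we simply use dense trapezoidal
# quadrature instead of the thesis's dedicated power-series evaluation.
def clothoid_segment(kappa0, c, length, theta0=0.0, n=2000):
    s = np.linspace(0.0, length, n)
    theta = theta0 + kappa0 * s + 0.5 * c * s**2      # integrated tangent angle
    dx = np.cos(theta)
    dy = np.sin(theta)
    x = np.concatenate(([0.0], np.cumsum(0.5 * (dx[1:] + dx[:-1]) * np.diff(s))))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (dy[1:] + dy[:-1]) * np.diff(s))))
    return x, y, theta[-1]

# Chain two elements with continuous tangent and curvature at the junction:
# element 1 ends with curvature 0 + 2*1 = 2, which is element 2's kappa0.
x1, y1, th1 = clothoid_segment(kappa0=0.0, c=2.0, length=1.0)
x2, y2, _ = clothoid_segment(kappa0=2.0, c=-1.0, length=1.0, theta0=th1)
x = np.concatenate((x1, x1[-1] + x2))
y = np.concatenate((y1, y1[-1] + y2))
print(f"endpoint of the two-element curve: ({x[-1]:.4f}, {y[-1]:.4f})")
```

Matching the tangent angle and the curvature at each junction is what gives the discretization its smoothness with very few elements.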
234

Tomografia de escoamentos multifásicos por sensoriamento elétrico - desenvolvimento de algoritmos genéticos paralelos para a solução do problema inverso / Multiphase flow tomography by electrical sensing - development of parallel genetic algorithms for the solution of the inverse problem

Carosio, Grazieli Luiza Costa 15 December 2008 (has links)
A tomografia por sensoriamento elétrico representa uma técnica de grande potencial para a otimização de processos normalmente associados às indústrias do petróleo e química. Entretanto, o emprego de técnicas tomográficas em processos industriais envolvendo fluidos multifásicos ainda carece de métodos robustos e computacionalmente eficientes. Nesse contexto, o principal objetivo deste trabalho é contribuir para o desenvolvimento de métodos para a solução do problema tomográfico com base em algoritmos genéticos específicos para a fenomenologia do problema abordado (interação do campo elétrico com o campo hidrodinâmico), bem como a adaptação do algoritmo para processamento em paralelo. A idéia básica consiste em partir de imagens qualitativas, fornecidas por uma sonda de visualização direta, para formar um modelo da distribuição interna do contraste elétrico e refiná-lo iterativamente até que variáveis de controle resultantes do modelo numérico se igualem às suas homólogas, determinadas experimentalmente. Isso pode ser feito usando um funcional de erro, que quantifique a diferença entre as medidas externas não intrusivas (fluxo de corrente elétrica real) e as medidas calculadas no modelo numérico (fluxo de corrente elétrica aproximado). De acordo com a abordagem funcional adotada, pode-se modelar a reconstrução numérica do contraste elétrico como um problema de minimização global, cuja função objetivo corresponde ao funcional de erro convenientemente definido e o mínimo global representa a imagem procurada. A grande dificuldade está no fato do problema ser não linear e mal-posto, o que reflete na topologia da superfície de minimização, demandando um método especializado de otimização para escapar de mínimos locais, pontos de sela, mínimos de fronteira e regiões praticamente planas. Métodos de otimização poderosos, como os algoritmos genéticos, embora apresentem elevado esforço computacional na obtenção da imagem procurada, são melhor adaptáveis ao problema em questão. Desse modo, optou-se pelo uso de algoritmos genéticos paralelos nas arquiteturas mestre-escravo, ilha, celular e híbrida (combinando ilha e celular). O desempenho computacional dos algoritmos desenvolvidos foi testado em um problema de reconstrução da imagem tomográfica de um escoamento vertical a bolhas. De acordo com os resultados, a arquitetura híbrida é capaz de obter a imagem desejada com um desempenho computacional melhor, quando comparado ao desempenho das arquiteturas mestre-escravo, ilha e celular. Além disso, estratégias para melhorar a eficiência do algoritmo foram propostas, como a introdução de informações a priori, derivadas de conhecimento físico do problema tomográfico (fração de vazio e coeficiente de simetria do escoamento), a inserção de uma tabela hash para evitar o cálculo de soluções já encontradas, o uso de operadores de predação e de busca local. De acordo com os resultados, pode-se concluir que a arquitetura híbrida é um método apropriado para solução do problema de tomografia por impedância elétrica de escoamentos multifásicos. / Tomography by electrical sensing represents a technique of great potential for the optimization of processes usually associated with petroleum and chemical industries. However, the employment of tomographic techniques in industrial processes involving multiphase flows still lacks robust and computationally efficient methods. 
In this context, the main objective of this thesis is to contribute to the development of solution methods based on genetic algorithms tailored to the phenomenology of the tomographic problem (the interaction between the electric and hydrodynamic fields), as well as the adaptation of the algorithm to parallel processing. Starting from qualitative images provided by a direct imaging probe, the basic idea is to generate a model of the internal distribution of electric contrast and refine it iteratively until control variables computed from the numerical model match their experimentally determined counterparts. This is done through an error functional that quantifies the difference between the non-intrusive external measurements (the actual electric current flow) and the measurements calculated from the numerical model (the approximate electric current flow). Under this functional approach, the numerical reconstruction of the electric contrast can be treated as a global minimization problem in which the fitness function is the conveniently defined error functional and the global minimum corresponds to the sought image. The main difficulty lies in the nonlinear and ill-posed nature of the problem, which is reflected in the topology of the minimization surface and demands a specialized optimization method able to escape local minima, saddle points, boundary minima and nearly flat regions. Although powerful optimization methods such as genetic algorithms require high computational effort to obtain the sought image, they are well suited to the problem at hand; therefore, parallel genetic algorithms were employed in master-slave, island, cellular and hybrid models (combining island and cellular). The computational performance of the developed algorithms was tested on a tomographic image reconstruction problem for a vertical bubble flow. According to the results, the hybrid model obtains the sought image with better computational performance than the other models. In addition, strategies to improve the efficiency of the algorithm were proposed, such as the introduction of a priori information derived from physical knowledge of the tomographic problem (void fraction and symmetry coefficient of the flow), the insertion of a hash table to avoid recomputing solutions already found, and the use of predation and local search operators. The results support the conclusion that the hybrid model is an appropriate method for solving the electrical impedance tomography problem for multiphase flows.
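To make the algorithmic ingredients concrete, here is a deliberately small, serial Python skeleton of a genetic algorithm minimizing an error functional, with a hash table that caches fitness evaluations so identical candidate images are never re-evaluated. The "image" is an 8x8 binary grid and the fitness is a trivial mismatch count standing in for the expensive electric-field forward solve; the parallel master-slave, island, cellular and hybrid architectures studied in the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "error functional": distance of a binary image (flattened 8x8 grid of
# gas/liquid pixels) to a hidden target image, standing in for the mismatch
# between measured and simulated electric current flows.
target = rng.integers(0, 2, size=64)
cache = {}                                   # hash table: genome -> fitness

def fitness(genome):
    key = genome.tobytes()                   # hashable key for the lookup table
    if key not in cache:
        cache[key] = np.sum(genome != target)   # expensive solver call in reality
    return cache[key]

def evolve(pop_size=60, generations=200, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, 64))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        pop = pop[np.argsort(scores)]
        if scores.min() == 0:
            break
        # Truncation selection + one-point crossover + bit-flip mutation.
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, 63)
            child = np.concatenate((a[:cut], b[cut:]))
            flip = rng.random(64) < p_mut
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack((parents, children))
    return pop[0], fitness(pop[0])

best, err = evolve()
print(f"best mismatch: {err}, cached evaluations: {len(cache)}")
```

In a real setting the cached fitness would wrap a finite-element solve of the electric field, which is exactly why avoiding repeated evaluations of identical candidates pays off.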
235

Estudo teórico da condução de calor e desenvolvimento de um sistema para a avaliação de fluidos de corte em usinagem / Theoretical study of heat conduction and development of a system for evaluation of cutting fluids in machining

Luchesi, Vanda Maria 30 March 2011 (has links)
Em decorrência ao grande crescimento e evolução dos processos de usinagem e a demanda para adequação ambiental, novos fluidos de corte tem sido aplicados. Uma comprovação de sua eficiência em refrigerar a peça, e a ferramenta melhorando a produtividade do processo ainda é necessária. O presente trabalho propõe o estudo e o desenvolvimento de um sistema para avaliar a eficácia de fluidos de corte em operações de usinagem. Inicia-se com uma abordagem matemática da modelagem do processo de dissipação de calor em operações de usinagem. Em seguida prossegue-se com uma investigação de diferentes maneiras de solução do modelo proposto. Experimentos práticos foram realizados no laboratório de Otimização de Processos de Fabricação - OPF. A partir dos dados obtidos foi realizada uma análise assintótica das equações diferencias parciais que governam o modelo. Finalizando, o modelo selecionado foi aplicado no fresamento do aço AISI 4340 endurecido usinado sob alta velocidade. / Due to the rapid growth and development of machining processes and the demand for environmental sustainability, new cutting fluids have been applied. A reliable assessment of their efficiency in cooling the workpiece and the tool, and in improving productivity, is still needed. This thesis presents a theoretical study and proposes a system to assess the effectiveness of cutting fluids applied to machining operations. It begins with a mathematical approach to model heat propagation during machining operations, and then continues with an investigation into different ways of solving the proposed theoretical model. Machining experiments using realistic cutting operations were also conducted at the Laboratory for Optimization of Manufacturing Processes (OPF). From the experimental data, an asymptotic analysis of the partial differential equations that govern the mathematical model was carried out. Finally, the selected model was applied to a milling operation on hardened AISI 4340 steel using the High Speed Machining (HSM) technique.
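The kind of forward model such an evaluation system rests on can be sketched in a few lines. The Python toy below solves 1-D transient heat conduction in a slab with an imposed heat flux on the machined face and convective cooling by the cutting fluid on the opposite face, then compares the heated-surface temperature for two convection coefficients. Every number (conductivity, flux, geometry, coefficients) is illustrative only and is not taken from the thesis or from measured AISI 4340 data.

```python
import numpy as np

# Minimal 1-D transient heat-conduction sketch (explicit finite differences):
# a slab heated at x = 0 by the cutting zone (imposed flux q) and cooled at
# x = L by a fluid with convection coefficient h. Comparing the resulting
# surface temperatures is a crude stand-in for ranking cutting fluids.
def surface_temperature(h, k=40.0, rho=7850.0, cp=475.0, q=2.0e6,
                        L=0.005, T_inf=25.0, t_end=10.0, n=51):
    alpha = k / (rho * cp)
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha            # respects the explicit stability limit
    T = np.full(n, T_inf)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        T[0] = Tn[1] + q * dx / k                                 # imposed heat flux
        T[-1] = (Tn[-2] + h * dx / k * T_inf) / (1 + h * dx / k)  # convective cooling
    return T[0]

for h in (500.0, 5000.0):               # poor vs. effective cooling, W/(m^2 K)
    print(f"h = {h:6.0f} W/m^2K -> heated-surface temperature ~ "
          f"{surface_temperature(h):7.1f} degC")
```

Ranking fluids then amounts to comparing the temperatures (or the heat-transfer coefficients inferred from measurements) they produce under otherwise identical cutting conditions.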
236

Analytic and Numerical Methods for the Solution of Electromagnetic Inverse Source Problems

Popov, Mikhail January 2001 (has links)
No description available.
237

ポテンシャル流れ場の領域最適化解析 / Domain optimization analysis of a potential flow field

片峯, 英次, Katamine, Eiji, 畔上, 秀幸, Azegami, Hideyuki 01 1900 (has links)
No description available.
238

ポテンシャル流れ場の形状同定解析(圧力分布規定問題と力法による解法) / Shape identification analysis of a potential flow field (prescribed pressure distribution problem solved by the traction method)

片峯, 英次, Katamine, Eiji, 畔上, 秀幸, Azegami, Hideyuki, 山口, 正太郎, Yamaguchi, Syohtaroh 04 1900 (has links)
No description available.
239

Atmospheric Tomography Using Satellite Radio Signals

Flores Jiménez, Alejandro 04 February 2000 (has links)
Los sistemas de posicionamiento global GNSS (GPS y GLONASS) se han convertido en una herramienta básica para obtener medidas geodésicas de la Tierra y en una fuente de datos para el estudio atmosférico. Proporcionan cobertura global y permanente y por la precisión, exactitud y densidad de datos, las señales radio transmitidas pueden ser usadas para la representación espacio-temporal de la atmósfera. La tecnología de los receptores GPS ha evolucionado con una sorprendente rapidez, resultando en instrumentos con suficiente calidad de medida para ser utilizados en estudios geodésicos, comparables a los resultados de técnicas como la interferometría de muy larga base (VLBI), y estudios atmosféricos cuyos resultados pueden ser usados en meteorología. En la tesis Tomografía Atmosférica utilizando Señales Radio de Satélites nos hemos centrado en el uso del sistema GPS por disponer mayor cantidad y calidad de referencias y herramientas para el procesado de los datos. No obstante, se ha demostrado la posibilidad de extender el concepto a cualquier sistema de transmisión radio desde satélite como sondeador atmosférico. La estructura de la tesis se ha dividido en dos áreas: el procesado de datos GPS para extraer información referente a los parámetros atmosféricos de interés, y la aplicación de técnicas tomográficas para la resolución de problemas inversos. En particular, la tomografía se ha aplicado a la ionosfera y la atmósfera neutra. En ambos casos, los resultados tienen un innegable impacto socio-económico: a) la monitorización del estado ionosférico es fundamental por las perturbaciones que la ionosfera provoca en las transmisiones radio que la atraviesan, y b) la estimación del contenido de vapor de agua de la troposfera es de utilidad en la predicción meteorológica y climática. La tomografía ionosférica se empezó a desarrollar usando únicamente datos de la red global IGS. A continuación se mejoró la resolución vertical mediante la utilización de datos de ocultaciones del experimento GPS/MET. La mejora de la resolución se ve limitada a la región en la que estos datos existen. Finalmente, se utilizaron datos de altimetría del satélite TOPEX/POSEIDON para mejorar los mapas y para demostrar la posibilidad de calibración instrumental de los altímetros radar usando técnicas tomográficas. La aplicación a la troposfera se obtuvo tras la mejora y refinamiento tanto del procesado de datos GPS como del proceso de inversión tomográfica. Los primeros resultados se obtuvieron mediante los datos experimentales de la red permanente en Kilauea, Hawaii, por la configuración particular de los receptores. Estos resultados demostraban la capacidad de obtener representaciones espacio-temporales de la troposfera mediante datos GPS. El análisis de los datos de la campaña REGINA, realizada en el Onsala Space Observatory, nos permitió la descripción de un fenómeno meteorológico complejo mediante la tomografía troposférica usando datos GPS y su verificación por comparación directa con medidas realizadas por radiosondeo. En conclusión, se ha demostrado la posibilidad de aplicar tomografía a la atmósfera utilizando señales radio de satélites y, en particular, la constelación GPS. / The Global Navigation Satellite Systems (GPS and GLONASS) have become a basic tool to obtain geodetic measurements of the Earth and a source of data for the atmospheric analysis.
Since these systems provide global, dense and permanent coverage with precise and accurate data, the radio signals they transmit can be used for the spatio-temporal representation of the atmosphere. GPS receiver technology has evolved at a surprising pace: nowadays receivers have sufficient measurement quality to be used in geodetic studies, together with other techniques such as Very Long Baseline Interferometry (VLBI), and in atmospheric studies whose results can be fed into meteorological analysis. In the thesis "Atmospheric Tomography Using Satellite Radio Signals" we have focused on the use of the GPS system, owing to the better quality and quantity of references and tools for the data processing. Notwithstanding this, we have shown that the concept can be broadened to include any other satellite system transmitting radio signals as an atmospheric sounder. The thesis is divided into two main areas: GPS data processing to extract the information related to the atmospheric parameters under study, and the application of tomographic techniques to the solution of the inverse problem. In particular, tomography has been applied to the ionosphere and to the neutral atmosphere. In both cases, the results have a socio-economic impact: a) monitoring the ionosphere is essential for radio transmissions across it because of the perturbations it may produce on the signal, and b) estimating the water vapour content of the troposphere is highly useful for meteorological and climate forecasting. For the ionospheric tomography we initially used only data from the global IGS network. Vertical resolution was afterwards improved using the occultation data of the GPS/MET experiment; the improvement, however, was limited to the region where these data existed. Finally, we used altimeter data from the TOPEX/POSEIDON satellite to improve the maps and to demonstrate the radar altimeter calibration capability of the tomographic technique. The application to the troposphere became possible after the improvement and refinement of both the GPS data processing and the tomographic inversion. The first results were obtained using the experimental data from the permanent network in Kilauea, Hawaii. The particular geometry of the receivers in this local network made it highly suited for these initial results, which proved the possibility of obtaining spatio-temporal representations of the troposphere using GPS data. The analysis of the data from the REGINA campaign, which took place at the Onsala Space Observatory, provided the description of a complex meteorological phenomenon using tropospheric tomography based only on GPS data; we verified the results by direct comparison with radiosonde data. In conclusion, we have demonstrated the capabilities of atmospheric tomography using satellite radio signals, with particular emphasis on GPS signals.
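The core inversion step can be illustrated with a toy 2-D voxel model. In the Python sketch below, each synthetic observation is a straight-ray line integral of a gridded "refractivity" field from a ground point toward a satellite direction, and the field is recovered by damped least squares; the grid, ray geometry, noise and regularization are all invented and far cruder than the thesis's ionospheric and tropospheric setups.

```python
import numpy as np

# Toy 2-D voxel tomography in the spirit of GPS slant-delay inversion:
# the unknown is a gridded field over the unit square, each observation is a
# line integral along a straight ray, and the inversion is damped least squares.
rng = np.random.default_rng(1)
n = 12                                   # grid is n x n voxels over a unit square

truth = np.exp(-((np.arange(n)[:, None] + 0.5) / n - 0.3) ** 2 / 0.02) \
      * np.exp(-((np.arange(n)[None, :] + 0.5) / n - 0.6) ** 2 / 0.05)

def ray_row(x0, elev_deg, n_samples=400):
    """Approximate line-integral weights of a ray from (x0, 0) at a given elevation."""
    t = np.linspace(0, 2.0, n_samples)               # path-length parameter
    x = x0 + np.cos(np.radians(elev_deg)) * t
    y = np.sin(np.radians(elev_deg)) * t
    inside = (x >= 0) & (x < 1) & (y >= 0) & (y < 1)
    row = np.zeros(n * n)
    ds = t[1] - t[0]
    ix = np.clip((x[inside] * n).astype(int), 0, n - 1)
    iy = np.clip((y[inside] * n).astype(int), 0, n - 1)
    np.add.at(row, iy * n + ix, ds)                  # accumulate path length per voxel
    return row

rays = [ray_row(x0, e) for x0 in np.linspace(0.05, 0.95, 10)
        for e in (25, 40, 60, 80, 110, 140, 155)]
A = np.vstack(rays)
d = A @ truth.ravel() + 0.001 * rng.standard_normal(len(A))

# Damped least squares: minimize ||A m - d||^2 + lam^2 ||m||^2.
lam = 0.05
A_aug = np.vstack((A, lam * np.eye(n * n)))
d_aug = np.concatenate((d, np.zeros(n * n)))
m, *_ = np.linalg.lstsq(A_aug, d_aug, rcond=None)
print(f"relative reconstruction error: "
      f"{np.linalg.norm(m - truth.ravel()) / np.linalg.norm(truth):.2f}")
```

Even in this toy, ground-to-satellite slant rays alone constrain the vertical structure poorly, which is the same limitation that motivates the use of occultation and altimeter data in the thesis.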
240

Automatic history matching in Bayesian framework for field-scale applications

Mohamed Ibrahim Daoud, Ahmed 12 April 2006 (has links)
Conditioning geologic models to production data and assessing uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. First, the CPU time of the Bayesian inverse problem, when solved with the modified Gauss-Newton algorithm using the full covariance as regularization, scales quadratically with increasing model size. Second, the cost of computing sensitivities by finite-difference perturbation of the forward model depends on the number of model parameters or the number of data points. Third, the covariance matrix calculation requires high CPU time and memory. Previous attempts to alleviate the third limitation used analytically derived stencils, but these are limited to exponential covariance models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of these limitations. First, we use a commercial finite-difference simulator, ECLIPSE, as the forward model; it is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel-time misfit per well, effectively reducing the number of data points to one per well while ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The cost of the adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is even more efficient, as it requires only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, its CPU time scales linearly with model size. This makes automatic history matching and uncertainty assessment in a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from the Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
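The fourth ingredient, solving the regularized linearized system with LSQR, is the easiest to illustrate in isolation. The Python sketch below uses hypothetical sizes and synthetic matrices: G and the prior-covariance square-root stand-in L are random placeholders, not anything from the thesis. It stacks a sensitivity block and a regularization block and hands them to scipy.sparse.linalg.lsqr.

```python
import numpy as np
from scipy.sparse import vstack, identity, csr_matrix
from scipy.sparse.linalg import lsqr

# Sketch of one linearized update: solve the stacked least-squares system
# [ G ; L ] dm = [ dd ; 0 ] with LSQR, where G holds sensitivities of one
# travel-time-like datum per well to the model parameters and L plays the
# role of an approximate square root of the inverse prior covariance.
rng = np.random.default_rng(0)
n_params, n_wells = 500, 8

G = csr_matrix(rng.standard_normal((n_wells, n_params)) / np.sqrt(n_params))
dm_true = rng.standard_normal(n_params)
dd = G @ dm_true + 0.01 * rng.standard_normal(n_wells)   # synthetic data residuals

# Crude stand-in for C^{-1/2}: a scaled first-difference (smoothing) operator.
ones = np.ones(n_params - 1)
shift = csr_matrix((ones, (np.arange(n_params - 1), np.arange(1, n_params))),
                   shape=(n_params, n_params))
L = 0.3 * (identity(n_params, format="csr") - shift)

A = vstack([G, L]).tocsr()
b = np.concatenate([dd, np.zeros(n_params)])
dm = lsqr(A, b, atol=1e-10, btol=1e-10)[0]

print(f"data misfit after update: {np.linalg.norm(G @ dm - dd):.3e}")
```

Because LSQR needs only matrix-vector products, the same structure carries over to large sparse field-scale systems without forming or inverting the covariance explicitly.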
