1 |
Inverse Stochastic Moment Analysis of Transient Flow in Randomly Heterogeneous Media. Malama, Bwalya. January 2006 (has links)
A geostatistical inverse method of estimating hydraulic parameters of a heterogeneous porous medium at discrete points in space, called pilot points, is presented. In this inverse method, the parameter estimation problem is posed as a nonlinear optimization problem with a likelihood-based objective function. The likelihood-based objective function is expressed in terms of head residuals at head measurement locations in the flow domain, where head residuals are the differences between measured and model-predicted head values. Model predictions of head at each iteration of the optimization problem are obtained by solving a forward problem that is based on nonlocal conditional ensemble mean flow equations. Nonlocal moment equations make possible optimal deterministic predictions of fluid flow in randomly heterogeneous porous media as well as assessment of the associated predictive uncertainty. In this work, the nonlocal moment equations are approximated to second order in the standard deviation of log-transformed hydraulic conductivity, and are solved using the finite element method. To enhance computational efficiency, computations are carried out in the complex Laplace-transform space, after which the results are inverted numerically to the real temporal domain for analysis and presentation. Whereas a forward solution can be conditioned on known values of hydraulic parameters, inversion allows further conditioning of the solution on measurements of system state variables, as well as the estimation of unknown hydraulic parameters. The Levenberg-Marquardt algorithm is used to solve the optimization problem. The inverse method is illustrated through two numerical examples where parameter estimates and the corresponding predictions of system state are conditioned on measurements of head only, and on measurements of head and log-transformed hydraulic conductivity with prior information. An example in which predictions of system state are conditioned only on measurements of log-conductivity is also included for comparison. A fourth example is included in which the estimation of spatially constant specific storage is demonstrated. In all the examples, a superimposed mean uniform and convergent transient flow field through a bounded square domain is used. The examples show that conditioning on measurements of both head and hydraulic parameters with prior information yields more reliable (low uncertainty and good fit) predictions of system state than when such information is not incorporated into the estimation process.
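The estimation loop described above (predict heads with the forward model, form residuals against measured heads, update the pilot-point parameters with Levenberg-Marquardt) can be sketched in a few lines of Python. The sketch below is illustrative only: forward_head is a stand-in for the second-order nonlocal moment-equation solver, and the pilot-point count, synthetic data and noise level are assumptions rather than values from the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    def forward_head(log_k, x_obs):
        # Stand-in for the finite-element moment-equation forward model:
        # maps pilot-point log-conductivities to predicted heads at x_obs.
        return 10.0 - np.interp(x_obs, np.linspace(0, 1, log_k.size), log_k) * x_obs

    x_obs = np.linspace(0.1, 0.9, 12)              # head measurement locations
    log_k_true = np.array([1.0, 0.5, 1.5, 0.8])    # "true" pilot-point values
    h_obs = forward_head(log_k_true, x_obs) + rng.normal(0, 0.01, x_obs.size)

    def residuals(log_k):
        # Head residuals: measured minus model-predicted heads.
        return h_obs - forward_head(log_k, x_obs)

    fit = least_squares(residuals, x0=np.ones(4), method="lm")  # Levenberg-Marquardt
    print("estimated pilot-point log-K:", fit.x)

In the thesis the likelihood-based objective also carries prior information on the log-conductivities; that term is omitted here for brevity.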
|
2 |
Quantifying ocean mixing from hydrographic data. Zika, Jan David, Climate & Environmental Dynamics Laboratory, Faculty of Science, UNSW. January 2010 (has links)
The relationship between the general circulation of the ocean and along-isopycnal and vertical mixing is explored. Firstly, advection down isopycnal tracer gradients is related to mixing in specific regions of the ocean. Secondly, a general inverse method is developed for estimating both mixing and the general circulation. Two examples of down-gradient advection are explored. The first is the region of Mediterranean outflow in the North Atlantic. Given a known transport of warm salty water out of the Mediterranean Sea and the mean hydrography of the eastern North Atlantic, the vertical structure of the along-isopycnal mixing coefficient, K, and the vertical mixing coefficient, D, is revealed. The second is the Southern Ocean Meridional Overturning Circulation (SMOC). There, relatively warm salty water is advected southward along isopycnals toward fresher, cooler surface waters. The strength and structure of the SMOC are related to K and D by considering advection down along-isopycnal gradients of temperature and potential vorticity. The ratio of K to D and their magnitudes are identified. A general tool is developed for estimating the ocean circulation and mixing: the tracer-contour inverse method. Integrating along contours of constant tracer on isopycnals, differences in a geostrophic streamfunction are related to advection and hence to mixing. This streamfunction is related in the vertical via an analogous form of the depth-integrated thermal wind equation. The tracer-contour inverse method combines aspects of the box, beta spiral and Bernoulli methods. It is validated against the output of a layered model and against in-situ observations from the eastern North Atlantic. The method accurately reproduces the observed mixing rates and reveals their vertical structure.
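The core balance exploited here, advection down tracer gradients compensated by along-isopycnal and vertical mixing, becomes an overdetermined linear system for K and D when written at many hydrographic points. The Python sketch below shows the idea with synthetic placeholder fields; the advective tendencies and tracer curvatures would in practice be computed from gridded hydrography, and the magnitudes are assumptions, not results from the thesis.

    import numpy as np

    # At each of N points, assume the steady tracer balance
    #   adv = K * curv_iso + D * curv_vert
    # where adv is the down-gradient advective tendency and curv_iso, curv_vert
    # are the along-isopycnal and vertical tracer curvatures.
    rng = np.random.default_rng(1)
    N = 200
    curv_iso = rng.normal(size=N) * 1e-12     # placeholder curvature fields
    curv_vert = rng.normal(size=N) * 1e-6
    K_true, D_true = 1000.0, 1e-4             # illustrative magnitudes (m^2/s)
    adv = K_true * curv_iso + D_true * curv_vert + rng.normal(size=N) * 1e-11

    A = np.column_stack([curv_iso, curv_vert])
    (K_est, D_est), *_ = np.linalg.lstsq(A, adv, rcond=None)
    print(f"K ~ {K_est:.0f} m^2/s, D ~ {D_est:.1e} m^2/s, K/D ~ {K_est / D_est:.1e}")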
|
3 |
Caractérisation thermique de l'équipement roue et frein aéronautique hautes performances par voies théorique numérique et expérimentale / Thermal characterization of wheel and brake aeronautical equipment at high performance by theoretical, numerical and experimental approaches. Keruzoré, Nicolas. 06 December 2018 (has links)
For an equipment manufacturer developing aircraft wheels and brakes, the thermal behavior of the equipment is a major design driver. This discipline is now at the center of development efforts because the designer is challenged to reduce the mass of the system. As a result, temperature limits are reached more frequently, which must now be anticipated as early as the bidding stage to avoid design iterations. However, the conditions under which the system operates, and its thermal behavior, are poorly known and poorly controlled. The predictive quality of the numerical simulations used to size the structure depends directly on the accuracy of the model and on the accuracy with which the in-service boundary conditions are introduced. Today, Safran has no sufficiently reliable tool or means to predict, from the pre-design phase onward, the thermal behavior of the wheel and brake assembly. It is known that the designs of the brake and of the wheel have a reciprocal influence on the thermal kinetics of the assembly. Being able to predict the qualitative thermal behavior of the product, in response to the load cases requested by the aircraft manufacturer, allows technological choices to be made upstream with a known impact on the thermal response. The design is thus de-risked against design iterations that could delay aircraft certification by several years. The purpose of this thesis is to propose solutions to qualitatively reproduce the wheel and brake thermal behavior of an aircraft, taking into account the physical parameters associated with the technological solutions employed. We also illustrate that these tools are a way of determining the conditions under which the system operates when its temperature response is known in advance, by taking the problem in the inverse direction.
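The last point, recovering the operating conditions from a known temperature response, is a classic inverse heat transfer problem. As a purely illustrative sketch (not the model developed in the thesis), assume a lumped heat sink of thermal capacity mc exchanging with ambient air through a coefficient hA, and estimate the absorbed braking power from a measured temperature history; all numerical values below are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    mc, hA, T_amb = 4.0e5, 60.0, 20.0       # J/K, W/K, degC (illustrative values)
    t = np.linspace(0.0, 30.0, 61)          # s, braking phase

    def simulate(q):
        # Explicit Euler integration of  mc * dT/dt = q - hA * (T - T_amb)
        T = np.empty_like(t)
        T[0] = T_amb
        dt = t[1] - t[0]
        for i in range(1, t.size):
            T[i] = T[i - 1] + dt * (q - hA * (T[i - 1] - T_amb)) / mc
        return T

    q_true = 2.5e5                          # W, the "unknown" absorbed power
    T_meas = simulate(q_true) + np.random.default_rng(2).normal(0, 0.2, t.size)

    fit = least_squares(lambda q: T_meas - simulate(q[0]), x0=[1.0e5])
    print("estimated absorbed power [W]:", fit.x[0])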
|
4 |
Análise de sinais de tomografia por coerência óptica: equação LIDAR e métodos de inversão / Optical coherence tomography signal analysis: LIDAR-like equation and inverse methods. Amaral, Marcello Magri. 12 December 2012 (has links)
Optical Coherence Tomography (OCT) is based on the backscattering properties of a medium to build tomographic images of the interior of a sample. In a similar way, the LIDAR (Light Detection and Ranging) technique uses these properties to determine characteristics of the atmosphere, in particular the signal extinction coefficient. Exploiting this similarity allowed inversion methods used in the LIDAR technique to be applied to OCT images, making it possible to construct extinction-coefficient images, a result not previously reported. The goal of this work was to study, propose, develop and implement algorithms for inverting the OCT signal in order to determine the extinction coefficient as a function of depth. Three inversion methods were used and implemented in the LabVIEW environment: the slope, boundary-point and optical-depth methods. The errors associated with each inversion method were studied, and real samples (homogeneous and stratified) were used for one- and two-dimensional analyses. The extinction-coefficient images obtained with the optical-depth method were clearly able to differentiate air from the sample. The images were studied using PCA and cluster analysis, which assessed the robustness of the technique in determining the sample's extinction coefficient. In addition, the proposed optical-depth method was employed to study the hypothesis that there is a correlation between the signal extinction coefficient and enamel demineralization during the cariogenic process. Applying this methodology made it possible to observe the variation of the extinction coefficient as a function of depth and its correlation with the variation in microhardness, and to show that in deeper layers the extinction coefficient tends toward the value for healthy enamel, behaving in the same way as the microhardness of the tooth.
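Of the three inversion methods named above, the slope method is the simplest to illustrate: within a homogeneous layer the single-scattering OCT/LIDAR signal decays as exp(-2*mu*z), so a linear fit to the logarithm of the signal yields the extinction coefficient. The Python sketch below uses synthetic data with assumed values; it is not the LabVIEW implementation from the thesis.

    import numpy as np

    mu_true = 2.0                      # assumed extinction coefficient, 1/mm
    z = np.linspace(0.0, 1.0, 200)     # depth, mm
    rng = np.random.default_rng(3)
    signal = np.exp(-2.0 * mu_true * z) * (1.0 + 0.02 * rng.normal(size=z.size))

    # Slope method: ln(signal) = const - 2*mu*z, hence mu = -slope / 2
    slope, _ = np.polyfit(z, np.log(signal), 1)
    print("estimated extinction coefficient [1/mm]:", -slope / 2.0)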
|
5 |
Quantification des processus responsables de l'accélération des glaciers émissaires par méthodes inverses / Quantifying the processes at the root of the observed acceleration of ice streams from inverse methods. Mosbeux, Cyrille. 05 December 2016 (has links)
The current global warming has direct consequences for ice-sheet mass loss. Reproducing the mechanisms responsible for this loss and forecasting the potential contribution of the ice sheets to 21st century sea level rise is one of the major challenges in ice-sheet and ice-flow modelling. Ice flow models are now routinely used for such forecasts, but these short-term simulations are very sensitive to the model initial state, which is usually built from field observations. Some parameters, such as the basal friction between the ice sheet and the bedrock and the basal topography, remain poorly known because of a lack of direct observations or large uncertainties in the measurements. Improving the knowledge of these two parameters for Greenland and Antarctica is therefore a prerequisite for reliable projections. Data assimilation and inverse methods have been developed to overcome this problem. This thesis presents two data assimilation algorithms to better constrain the basal friction and the bedrock elevation simultaneously using surface observations. The first algorithm is entirely based on the adjoint method, while the second uses a cycling method that couples inversion of the basal friction with the adjoint method and inversion of the bedrock topography with a nudging (Newtonian relaxation) method. Both algorithms have been implemented in the finite element ice sheet and ice flow model Elmer/Ice and tested in a twin experiment, which shows a clear improvement in the knowledge of both parameters. Applying the algorithms to a region such as Wilkes Land in Antarctica reduces the uncertainty on basal conditions, for instance providing more detail on the bedrock geometry than the usual digital elevation models. Moreover, the simultaneous reconstruction of bedrock elevation and basal friction significantly decreases the ice flux divergence anomalies obtained with classical methods in which only the friction is inverted. We finally study the impact of such inversions on prognostic simulations in order to compare the ability of the two algorithms to better constrain the future contribution of the ice sheets to sea level rise.
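A heavily simplified sketch of the friction-inversion step is given below: the surface velocity response is linearized as u = A @ beta and a Tikhonov-regularized misfit is minimized by gradient descent. In Elmer/Ice the gradient comes from the adjoint of the full ice-flow model; here the sensitivity matrix A, the observations and the regularization weight are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n_obs, n_beta = 50, 20
    A = rng.normal(size=(n_obs, n_beta))           # toy sensitivity of surface velocity to friction
    beta_true = np.abs(rng.normal(1.0, 0.3, n_beta))
    u_obs = A @ beta_true + rng.normal(0, 0.05, n_obs)

    lam, step = 1e-2, 1e-3                         # regularization weight and step size
    beta = np.ones(n_beta)                         # initial friction guess
    for _ in range(2000):
        grad = -2 * A.T @ (u_obs - A @ beta) + 2 * lam * beta   # gradient of misfit + Tikhonov term
        beta -= step * grad
    print("final misfit:", float(np.sum((u_obs - A @ beta) ** 2)))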
|
6 |
Análise de sinais de tomografia por coerência óptica: equação LIDAR e métodos de inversão / Optical coherence tomography signal analysis: LIDAR-like equation and inverse methods. Marcello Magri Amaral. 12 December 2012 (has links)
Optical Coherence Tomography (OCT) is based on the backscattering properties of a medium to build tomographic images of the interior of a sample. In a similar way, the LIDAR (Light Detection and Ranging) technique uses these properties to determine characteristics of the atmosphere, in particular the signal extinction coefficient. Exploiting this similarity allowed inversion methods used in the LIDAR technique to be applied to OCT images, making it possible to construct extinction-coefficient images, a result not previously reported. The goal of this work was to study, propose, develop and implement algorithms for inverting the OCT signal in order to determine the extinction coefficient as a function of depth. Three inversion methods were used and implemented in the LabVIEW environment: the slope, boundary-point and optical-depth methods. The errors associated with each inversion method were studied, and real samples (homogeneous and stratified) were used for one- and two-dimensional analyses. The extinction-coefficient images obtained with the optical-depth method were clearly able to differentiate air from the sample. The images were studied using PCA and cluster analysis, which assessed the robustness of the technique in determining the sample's extinction coefficient. In addition, the proposed optical-depth method was employed to study the hypothesis that there is a correlation between the signal extinction coefficient and enamel demineralization during the cariogenic process. Applying this methodology made it possible to observe the variation of the extinction coefficient as a function of depth and its correlation with the variation in microhardness, and to show that in deeper layers the extinction coefficient tends toward the value for healthy enamel, behaving in the same way as the microhardness of the tooth.
|
7 |
On the influence of indenter tip geometry on the identification of material parameters in indentation testing. Guo, Weichao. 08 December 2010 (has links)
ABSTRACT
The rapid development of structural materials and their successful application in various sectors of industry have led to increasing demands for assessing their mechanical properties in small volumes. When the relevant dimensions are below a micron, traditional tensile and compression tests are difficult to perform. Indentation testing has therefore been widely employed to characterize mechanical properties at such scales, having emerged as a cost-effective, convenient and non-destructive method at the micro- and nanoscale.
In spite of these advances, the theory and practice of indentation testing are still not completely mature. Many factors affect the accuracy and reliability of the identified material parameters. For instance, when material properties are determined by inverse analysis relying on numerical modelling, the procedures often suffer from strong material parameter correlation, which leads to non-uniqueness of the solution or large errors in parameter identification. To overcome this problem, an approach is proposed to reduce the parameter correlation by designing indenter tip shapes able to sense the piling-up or sinking-in that occurs in non-linear materials.
In the present thesis, the effect of indenter tip geometry on parameter correlation in material parameter identification is investigated. The results may guide the design of indenter tip shapes that produce minimal parameter correlation and thereby improve the reliability of identification procedures based on indentation testing combined with inverse methods.
First, a method to assess the effect of indenter tip geometry on the identification of material parameters is proposed, built around a gradient-based numerical optimization method with sensitivity analysis. The sensitivities of the objective function computed by the finite difference method and by the direct differentiation method are compared. The direct differentiation method is subsequently adopted because it is more reliable, accurate and versatile for computing these sensitivities.
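A quick way to see what this comparison involves is to check an analytically computed sensitivity against a central finite difference on a toy objective; the function below is an arbitrary stand-in, not the indentation misfit used in the thesis.

    import numpy as np

    def objective(p):
        # Toy stand-in for a least-squares indentation misfit in the parameters p.
        return np.sum((p ** 2 - np.array([1.0, 4.0])) ** 2)

    def analytic_grad(p):
        # Direct differentiation of the toy objective.
        return 4.0 * p * (p ** 2 - np.array([1.0, 4.0]))

    p = np.array([0.8, 2.3])
    h = 1e-6
    fd_grad = np.array([
        (objective(p + h * e) - objective(p - h * e)) / (2 * h)
        for e in np.eye(2)
    ])
    print("direct differentiation:", analytic_grad(p))
    print("finite difference:     ", fd_grad)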
Second, the residual imprint mappings produced by different indenters are investigated. In common indentation experiments, the imprint data are not available because the indenter tip itself shields that region from access by measurement devices during loading and unloading. However, these data include information about sinking-in and piling-up, which may be valuable for reducing the correlation of material parameters. Therefore, the effect of the imprint data on the identification of material parameters is investigated.
Finally, some strategies for improving the identifiability of material parameters are proposed. Indenters with special tip shapes and different loading histories are investigated. The sensitivities of the material parameters with respect to the indenter tip geometry are evaluated on materials with elasto-plastic and elasto-viscoplastic constitutive laws.
The results of this thesis show, first, that the correlations of material parameters are related to the indenter tip geometry, and that different indenters differ significantly in their ability to determine material parameters. Second, residual imprint mapping data prove important for the identification of material parameters, because they contain additional information about plastic material behaviour. Third, different loading histories help to evaluate the parameters of time-dependent materials; in particular, a holding cycle is necessary to determine their properties. These results may enable a more reliable material parameter identification.
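The parameter correlation discussed throughout this abstract can be quantified from the Jacobian of the simulated response with respect to the parameters: the approximate covariance (J^T J)^-1 gives a correlation matrix whose off-diagonal entries approach +/-1 when parameters cannot be separated. The sketch below uses arbitrary toy Jacobians; in the thesis J would come from the finite-element indentation model for a given tip shape.

    import numpy as np

    def parameter_correlation(J):
        # Approximate least-squares covariance and the corresponding correlation matrix.
        cov = np.linalg.inv(J.T @ J)
        d = np.sqrt(np.diag(cov))
        return cov / np.outer(d, d)

    rng = np.random.default_rng(5)
    J_good = rng.normal(size=(40, 3))          # toy Jacobian with well-separated parameters
    J_bad = J_good.copy()
    J_bad[:, 1] = J_bad[:, 0] + 0.05 * rng.normal(size=40)   # two nearly redundant parameters
    print(parameter_correlation(J_good).round(2))   # modest off-diagonal entries
    print(parameter_correlation(J_bad).round(2))    # entries near +/-1: strong correlation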
|
8 |
Use of inverse modeling in air quality management. Akhtar, Farhan Hussain. 21 August 2009 (has links)
Inverse modeling has been used in the past to constrain atmospheric model parameters, particularly emission estimates, based upon ambient measurements. Here, inverse modeling is applied to air quality planning by calculating how emissions should change to achieve desired reductions in air pollutants. Specifically, emissions of nitrogen oxides (NOx = NO + NO2) are adjusted to achieve reductions in tropospheric ozone, a respiratory irritant, during a historic episode of elevated concentrations in urban Atlanta, GA. Understanding how emissions should change in aggregate, without specifying discrete abatement options, is particularly applicable to long-term and regional air pollution management. Using a cost/benefit approach, desired reductions in ozone concentrations are found for a future population in Atlanta, GA. The inverse method is applied to find the NOx emission adjustments needed to reach this desired reduction in air pollution. An example of how emission adjustments may aid the planning process in two neighborhoods is demonstrated using urban form indicators from a land use and transportation database. Implications of this method for establishing regional and market-based air quality management systems in light of recent legal decisions are also discussed. Both ozone and secondary particulate matter with diameters of less than 2.5 μm (PM2.5) are formed in the atmosphere from common precursor species. Recent assessments of air quality management policies have stressed the need for pollutant abatement strategies that address these common precursors. The relative contribution of several important precursor species (NOx, sulfur dioxide, ammonia, and anthropogenic volatile organic compounds) to the formation of ozone and secondary PM2.5 in Atlanta during May 2007 - April 2008 is simulated using CMAQ/DDM-3D. This sensitivity analysis is then used to find adjustments in emissions of precursor species that achieve target reductions in both ozone and secondary PM2.5 during a summertime episode of elevated concentrations. A discussion of the implications of these controls on air pollutant concentrations during the remainder of the year follows.
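To first order, the emission adjustments described above can be obtained by inverting a linear sensitivity relation delta_c = S @ delta_e, where S holds DDM-3D-style sensitivities of each pollutant to each precursor. The numbers below are purely illustrative assumptions, not model output from the thesis.

    import numpy as np

    # Rows: ozone and PM2.5 responses; columns: fractional changes in NOx, SO2, NH3 and VOC emissions.
    S = np.array([[8.0, 0.2, 0.1, 3.0],     # ppb ozone per unit fractional emission change (assumed)
                  [1.5, 2.5, 1.8, 0.4]])    # ug/m^3 PM2.5 per unit fractional change (assumed)

    target = np.array([-10.0, -3.0])        # desired changes: -10 ppb ozone, -3 ug/m^3 PM2.5

    # Minimum-norm fractional emission adjustments that meet both targets (underdetermined system).
    delta_e, *_ = np.linalg.lstsq(S, target, rcond=None)
    print("fractional emission adjustments [NOx, SO2, NH3, VOC]:", delta_e.round(3))
    print("predicted pollutant changes:", (S @ delta_e).round(2))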
|
9 |
Estimação das propriedades termofísicas de leite integral através da técnica analítico-experimental / Estimation of whole milk thermophysical properties by an experimental-analytical technique. Oliveira, Edilma Pereira. 31 July 2013 (has links)
The estimation of thermodynamic and transport properties has been studied by many researchers because such data are needed as input to the design and optimization of equipment that adds or removes energy. Transient methods are widely used to determine thermal transport properties; they are usually applied to homogeneous media to measure several thermophysical properties, simultaneously or separately, while the analysis of multilayer composite materials is more involved. This work deals with the solution of an inverse parameter estimation problem to obtain the thermal properties of whole milk. The direct problem is solved analytically using the Classical Integral Transform Technique (CITT). The proposed algorithm allows the thermal diffusivity at the interface of a medium composed of three layers to be estimated from measurements of the transient temperature distribution on one surface, resulting from a thermal pulse applied to the opposite surface. In the experiment, the milk sample is confined in a cylindrical cavity and subjected to a short-duration thermal pulse on the front face, while the temperature evolution is recorded on the rear face. The thermal perturbation is produced by a laser micro-flash device, model LFA 457, manufactured by Netzsch, which delivers an energy of about 15 joules per pulse and records the temperature evolution with an InSb infrared sensor. The results are reported in terms of thermal diffusivity and thermal conductivity and are compared with values available in the literature for products of the same nature, thus providing thermal property data for whole milk to be applied in the dimensioning of industrial processes.
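For context, the classical single-layer flash analysis of Parker gives a first estimate of the diffusivity from the half-rise time of the rear-face temperature, alpha = 0.1388 * L^2 / t_half; multilayer inverse schemes such as the one in this work refine this type of estimate. The sample thickness and synthetic signal below are assumed, illustrative values, not measurements from the thesis.

    import numpy as np

    L = 1.0e-3                                    # sample thickness, m (assumed)
    t = np.linspace(0.01, 2.0, 2000)              # time after the pulse, s
    alpha_true = 1.4e-7                           # m^2/s, illustrative diffusivity

    # Ideal adiabatic rear-face response (truncated series) used as synthetic data.
    w = np.pi ** 2 * alpha_true * t[:, None] / L ** 2
    n = np.arange(1, 51)
    T_rear = 1.0 + 2.0 * np.sum((-1.0) ** n * np.exp(-n ** 2 * w), axis=1)

    t_half = t[np.searchsorted(T_rear, 0.5)]      # time to reach half of the full rise
    alpha_est = 0.1388 * L ** 2 / t_half
    print(f"estimated diffusivity: {alpha_est:.2e} m^2/s")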
|
10 |
Détermination de propriétés des glaciers polaires par modélisation numérique et télédétection / Ice sheet properties inferred by combining numerical modeling and remote sensing data. Morlighem, Mathieu. 22 December 2011 (has links)
Ice sheets are amongst the main contributors to sea level rise. They are dynamic systems: they gain mass by snow accumulation and lose it by melting at the ice-ocean interface, surface melting and iceberg calving at the margins. Observations over the last three decades have shown that the Greenland and Antarctic ice sheets have been losing more mass than they gain. How the ice sheets respond to this negative mass imbalance has become one of the most urgent questions in understanding the implications of global climate change. The Intergovernmental Panel on Climate Change (IPCC) has indeed identified the contribution of the ice sheets as a key uncertainty in sea level rise projections. Numerical modeling is the only effective way of addressing this problem. Yet modeling ice sheet flow at the scale of Greenland and Antarctica remains scientifically and technically very challenging. This thesis focuses on two major aspects of improving ice sheet numerical models. The first consists of determining non-observable ice properties using inverse methods. Some parameters, such as basal friction or ice shelf hardness, are difficult to measure and must be inferred from remote sensing observations. Inversions are developed here for three ice flow models of increasing complexity: MacAyeal/Morland's shelfy-stream model, Blatter/Pattyn's higher-order model and the full-Stokes model. The inferred parameters are then used to initialize large-scale ice sheet models and to determine the minimum level of complexity required to capture ice dynamics correctly. The second aspect addressed in this work is the improvement of dataset consistency for ice sheet modeling. Available datasets are often collected at different epochs and at varying spatial resolutions, making them not readily usable for numerical simulations. We devise here an algorithm based on the conservation of mass principle and inverse methods to construct ice thicknesses that are consistent with velocity measurements. This approach avoids the artificial mass redistributions that occur in existing algorithms for mapping ice thickness, hence considerably improving ice sheet model initialization. The advances made here are important steps towards the ultimate objective of accurately characterizing ice sheets and realistically modeling their evolution.
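The mass-conservation idea in the second part can be illustrated in one dimension: along a steady flowline, d(H*u)/dx equals the apparent mass balance, so the ice flux can be integrated downstream from an inflow gate and divided by the observed velocity to recover the thickness. The flowline length, velocity profile, apparent mass balance and inflow flux below are all assumed, illustrative values.

    import numpy as np

    x = np.linspace(0.0, 50.0e3, 501)           # flowline coordinate, m
    u = 100.0 + 4.0e-3 * x                      # observed velocity, m/yr (assumed profile)
    a_dot = np.full_like(x, 0.3)                # apparent mass balance, m/yr (assumed)
    q_in = 1.0e5                                # inflow flux at x = 0, m^2/yr (assumed)

    # Steady mass conservation along the flowline: d(H*u)/dx = a_dot,
    # so the flux is q(x) = q_in + integral of a_dot, and H = q / u.
    dq = np.cumsum(0.5 * (a_dot[1:] + a_dot[:-1]) * np.diff(x))
    q = q_in + np.concatenate(([0.0], dq))
    H = q / u
    print("thickness at the inflow gate [m]:", round(H[0], 1))
    print("thickness at the terminus [m]:", round(H[-1], 1))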
|