11

SEISMIC RISK ANALYSIS OF STRUCTURES AND SYSTEM COMPONENTS

ANDREIA ABREU DINIZ DE ALMEIDA 28 May 2002 (has links)
A general methodology is presented for evaluating the seismic risk of civil engineering structures, with emphasis on building systems; a series of applications then exemplifies the proposal and develops procedures complementary to the deterministic methods adopted in practice. The seismic excitation is modelled as a weakly stationary random process defined by a power spectral density function of the ground acceleration, and similar spectral density functions are determined, in the frequency domain, for the structural responses. The first-passage problem is then solved, following Vanmarcke, to obtain the probability that the maximum structural responses remain below specified numerical levels, called barriers. From these probability distributions, the work develops: a methodology for the seismic risk analysis of building structures, including a preliminary assessment of the seismic hazard over the Brazilian territory; tools to evaluate the probabilistic consistency between a power spectral density function of the seismic excitation for a region and a design response spectrum proposed for the same area; the concept of a uniformly probable design response spectrum, with procedures to generate it for the analysis of the primary system, and of a uniformly probable coupled response spectrum for the analysis of secondary systems; a comparison between the horizontal wind-load capacity of typical building structures in Brazil and the demand expected from probable earthquakes; and a procedure to generate a target power spectral density function probabilistically associated with a prescribed design response spectrum for a site. The frequency-domain structural analysis is implemented partially with the SASSI-2000 program, and the probabilistic analyses use the APESS and CA programs developed within this work.
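The frequency-domain pipeline described above (a response PSD obtained from a ground-motion PSD, followed by a first-passage estimate against a barrier) can be sketched as follows. This is a minimal illustration using the classical Poisson crossing approximation rather than Vanmarcke's two-state correction, and the flat input PSD, oscillator parameters, and duration are assumptions, not values from the thesis.

```python
import numpy as np

def response_psd(omega, S_ground, omega_n, zeta):
    """PSD of the displacement response of a SDOF oscillator
    (natural frequency omega_n, damping ratio zeta) to a ground
    acceleration with PSD S_ground, via |H(i*omega)|^2 * S_ground."""
    H2 = 1.0 / ((omega_n**2 - omega**2)**2 + (2.0 * zeta * omega_n * omega)**2)
    return H2 * S_ground

def trapz(y, x):
    # simple trapezoidal integration on a 1-D grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def prob_below_barrier(omega, S_resp, barrier, duration):
    """Poisson first-passage approximation: probability that the
    stationary response stays within +-barrier during `duration`."""
    lam0 = trapz(S_resp, omega)                 # response variance
    lam2 = trapz(omega**2 * S_resp, omega)      # second spectral moment
    nu0 = np.sqrt(lam2 / lam0) / (2.0 * np.pi)  # zero-up-crossing rate
    nu_b = 2.0 * nu0 * np.exp(-barrier**2 / (2.0 * lam0))  # barrier crossings
    return np.exp(-nu_b * duration)

omega = np.linspace(0.1, 60.0, 4000)   # rad/s
S_g = np.full_like(omega, 0.01)        # flat (white-noise-like) ground PSD, assumed
S_r = response_psd(omega, S_g, omega_n=2.0 * np.pi, zeta=0.05)
sigma = np.sqrt(trapz(S_r, omega))
p = prob_below_barrier(omega, S_r, barrier=3.0 * sigma, duration=20.0)
```

Raising the barrier increases the survival probability, which gives a quick sanity check on the implementation.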
12

Robust, precise and reliable simultaneous localization and mapping for an underwater robot. Comparison and combination of probabilistic and set-membership methods for the SLAM problem

Nicola, Jérémy 18 September 2017 (has links)
In this thesis, we work on the problem of simultaneously localizing an underwater robot while mapping a set of acoustic beacons lying on the seafloor, using an acoustic range-meter and an inertial navigation system. We focus on the two main approaches classically used to solve this type of problem: Kalman filtering and set-membership filtering based on interval analysis. The Kalman filter is optimal when the state equations of the robot are linear and the noises are additive, white and Gaussian. The interval-based filter does not model uncertainties in a probabilistic framework and makes only one assumption about their nature: they are bounded. Moreover, the interval-based approach rigorously propagates uncertainties even when the equations are non-linear, which yields a highly reliable set estimate at the cost of reduced precision. We show that in a subsea context, when the robot is equipped with a high-precision inertial navigation system, part of the SLAM equations can reasonably be treated as linear with additive Gaussian noise, making it the ideal playground for a Kalman filter. On the other hand, the equations related to the acoustic range-meter are much more problematic: the system is not observable, the equations are non-linear, and outliers are frequent. These conditions are ideal for a set-based approach using interval analysis. By taking advantage of the properties of Gaussian noises, this thesis reconciles the probabilistic and set-membership treatment of uncertainties for both linear and non-linear systems with additive Gaussian noises. By reasoning geometrically, we are able to express the part of the Kalman filter equations linked to the dynamics of the vehicle in a set-membership context. In the same way, a more rigorous and precise treatment of uncertainties is described for the part of the Kalman filter linked to the range measurements. These two tools can then be combined to obtain a SLAM algorithm that is reliable, precise and robust. Some of the methods developed during this thesis are demonstrated on real data.
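The split described in the abstract, Gaussian filtering for the linear dynamics and guaranteed set contraction for the non-linear range measurements, can be illustrated with a deliberately simplified sketch. The contractor below is a coarse axis-aligned one (each coordinate is intersected with the beacon's bounding box at the maximum range), not the forward-backward contractors of the thesis, and all numbers are assumptions.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    # linear dynamics step: optimal under additive Gaussian noise
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    # linear measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

def contract_with_range(box, beacon, r_max):
    """Set-membership step: shrink an axis-aligned box [x]x[y] using
    the bounded-error constraint dist(p, beacon) <= r_max. Coarse but
    guaranteed: no point consistent with the constraint is discarded."""
    return [(max(lo, b - r_max), min(hi, b + r_max))
            for (lo, hi), b in zip(box, beacon)]

# position known to within a box of +-5 to +-10 m; beacon at (10, 0);
# measured acoustic range bounded by 3 m (all values assumed)
box = contract_with_range([(-5.0, 15.0), (-5.0, 5.0)],
                          beacon=(10.0, 0.0), r_max=3.0)

# one scalar Kalman update: prior N(0, 4), measurement z = 1 with R = 1
x_post, P_post = kalman_update(np.array([0.0]), np.array([[4.0]]),
                               np.array([1.0]), np.array([[1.0]]),
                               np.array([[1.0]]))
```

The interval step shrinks the box without discarding any consistent position, while the Kalman step reduces the variance; combining the two is the thesis's theme.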
13

Evaluation and improvement of data chaining (record linkage) methods

Li, Xinran 29 January 2015 (has links)
Record linkage is the task of identifying which records from different data sources refer to the same entities. In the absence of a common identification key, this task can be performed by comparing the corresponding fields (those containing identifying information) of the records to link, although the quality of these fields is unfortunately not perfect. Many record linkage methods have been proposed over the last decades for this purpose. In order to ensure valid and fast linkage of the same patients' records for GINSENG, a research project that aimed to implement a grid computing infrastructure for sharing distributed medical data, we inventoried, studied and sometimes adapted various commonly used record linkage methods: approximate comparison of record fields according to their spelling and pronunciation, deterministic and probabilistic record linkage, and their extensions. The advantages and disadvantages of these methods are clearly laid out. In practice, since the fields to compare are often affected by typographical errors, we focused on probabilistic record linkage. The implementation of the probabilistic methods proposed by Fellegi and Sunter (PRL-FS) and by Winkler (PRL-W) is described in detail, together with their evaluation and comparison. Because ground truth about record matches is essential for evaluating linkage validity, synthetic data sets were generated in this work, and configurable generation algorithms are proposed and detailed. Although, to our knowledge, PRL-W is one of the most effective methods in terms of linkage validity in the presence of typographical errors in the identifying fields, it has some shortcomings: it does not satisfactorily handle missing data, its implementation is not simple, and its computational time is hardly compatible with routine use. Solutions are proposed and evaluated to overcome these difficulties, notably several approaches improving the effectiveness of PRL-W in the presence of missing data, and others designed to reduce its computational time while ensuring that this reduction does not impair the validity of the resulting linkage decisions.
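The Fellegi-Sunter scoring that PRL-FS builds on can be sketched as follows: each compared field contributes log2(m/u) when it agrees and log2((1-m)/(1-u)) when it disagrees, and the summed weight is compared against decision thresholds. The m/u probabilities, thresholds, and field names below are assumptions for illustration.

```python
import math

def fs_weight(agree, m, u):
    """Fellegi-Sunter log-likelihood-ratio weight for one field.
    m: P(agreement | records match), u: P(agreement | non-match)."""
    return math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))

def classify(record_a, record_b, params, upper=3.0, lower=-3.0):
    """Sum the field weights and decide: link / possible / non-link."""
    total = sum(
        fs_weight(record_a[f] == record_b[f], m, u)
        for f, (m, u) in params.items()
    )
    if total >= upper:
        return "link", total
    if total <= lower:
        return "non-link", total
    return "possible", total

params = {  # assumed m/u probabilities per identifying field
    "last_name": (0.95, 0.01),
    "birth_year": (0.98, 0.05),
    "zip": (0.90, 0.10),
}
a = {"last_name": "durand", "birth_year": 1950, "zip": "75013"}
b = {"last_name": "durand", "birth_year": 1950, "zip": "69001"}
decision, score = classify(a, b, params)
```

Here two agreements outweigh one disagreement, so the pair is classified as a link; in PRL-FS the m and u probabilities are themselves estimated (e.g. by EM) rather than fixed by hand.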
14

Analysis of partial safety factor method based on reliability analysis and probabilistic methods

Salehi, Hamidreza 22 January 2020 (has links)
The partial safety factor method is the main safety concept applied across structural design standards, and it is presented in EN 1990 as the basis of structural design in Europe. In the review of this code for the new generation of Eurocodes, an analysis of the partial safety factor method seems necessary. Since the method originates from probabilistic methods and reliability analysis, these are chosen as the tools for its evaluation within the EN 1990 framework. The research therefore begins with the background of the partial safety factor method and of reliability analysis, and different aspects of this safety concept are investigated, following the different parts of EN 1990. The study is divided into two main parts, according to the basic components of limit state functions: load and resistance. Aspects related to loading are investigated first. The available load combinations and the recommended partial factors are assessed on the basis of their reliability levels, and the load combinations are compared with each other according to the sustainability of their designs. An increased factor for the application of snow load is proposed to overcome safety problems related to snow load on structures. A proposal for simplifying these load combinations is then offered and verified by reliability analysis. Finally, regarding the load partial factors, a calibration method based on Monte Carlo reliability analysis is proposed. Afterwards, aspects related to resistance are analyzed. Since resistances depend mostly on experimental data, the relationship between the partial safety factor for resistance and the number of tests is investigated. A probabilistic analysis based on Annex D of EN 1990 is then applied to calculate the model uncertainty partial factor and the resistance partial factor for a database of masonry shear walls, and a comparison shows the influence of different ways of using partial safety factors in a limit state function.
Contents: 1 Introduction; 2 Partial safety factor method and EN 1990; 3 Reliability analysis; 4 Load combinations and partial safety factors; 5 Resistance partial safety factor; 6 Summary and outlook
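A Monte Carlo reliability analysis of the kind used for calibration can be sketched as follows: sample the load effect and the resistance from assumed distributions, estimate the failure probability as the fraction of samples with g = R - E < 0, and convert it to a reliability index. The distribution families and parameters are assumptions for illustration, not values taken from EN 1990.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 1_000_000

# assumed probabilistic models: lognormal resistance, Gumbel load effect
R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)  # resistance, kN
E = rng.gumbel(loc=180.0, scale=15.0, size=n)              # load effect, kN

g = R - E                      # limit state function
pf = float(np.mean(g < 0.0))   # Monte Carlo failure probability estimate
# reliability index beta = -Phi^{-1}(pf)
beta = -NormalDist().inv_cdf(pf) if pf > 0.0 else float("inf")
```

With these assumed parameters the estimated failure probability is of the order of 10^-3, i.e. a reliability index around 3; calibrating a partial factor means adjusting the design values until the target beta of the code is reached.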
15

RELIABILITY AND PROBABILITY IN THE GEOTECHNICS OF SHALLOW FOUNDATIONS

ROMULO CASTELLO HENRIQUES RIBEIRO 30 July 2001 (has links)
Interest in reliability analysis in geotechnical engineering has grown in the last two decades. Probabilistic methods are generally used as a way of rationalizing the analysis of the uncertainties present in geotechnical properties. This work summarizes the basic probability concepts needed to follow the subject and develops the First Order Second Moment (FOSM) method to quantify the reliability inherent in foundation performance. Methodologies are proposed to rationalize the choice of safety factors against bearing failure of shallow foundations and to quantify the risk that the estimated settlement exceeds the allowable design value. Example calculations are presented based on the performance of prototype footings submitted to load tests at experimental research site 1 at PUC-Rio.
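For a limit state g(X) with uncorrelated variables, the FOSM method linearizes g at the mean values, so the reliability index is beta = mean(g) / sd(g) with the variance obtained from the gradient. A minimal sketch with numerical gradients; the bearing-capacity-style numbers are assumed for illustration.

```python
import math

def fosm_beta(g, means, sds, h=1e-6):
    """First Order Second Moment reliability index for uncorrelated
    variables: beta = mu_g / sigma_g, with g linearized at the means."""
    mu_g = g(means)
    var_g = 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        x_hi = list(means); x_hi[i] = m + h
        x_lo = list(means); x_lo[i] = m - h
        dg = (g(x_hi) - g(x_lo)) / (2.0 * h)  # central-difference gradient
        var_g += (dg * s) ** 2
    return mu_g / math.sqrt(var_g)

# assumed example: margin between ultimate bearing capacity and applied pressure
g = lambda x: x[0] - x[1]  # g = q_ult - q (kPa)
beta = fosm_beta(g, means=[500.0, 250.0], sds=[100.0, 25.0])
```

For this linear g the result matches the closed form (500 - 250) / sqrt(100^2 + 25^2); the numerical-gradient version also handles non-linear capacity or settlement formulas.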
16

Random planar structures and random graph processes

Kang, Mihyun 27 July 2007 (has links)
This thesis focuses on two kinds of discrete structures: planar structures, such as planar graphs and their subclasses, and random graphs, particularly graphs generated by random processes. We first study random planar structures from the following aspects: How many of them are there (exactly or asymptotically)? How can we efficiently sample a random instance uniformly at random? What properties does a random planar structure have, with high probability? To answer these questions we decompose the planar structures along their connectivity. For the asymptotic enumeration we interpret the decomposition in terms of generating functions and derive the asymptotic numbers using singularity analysis. For the exact enumeration and the uniform generation we use the recursive method. For typical properties of random planar structures we use the probabilistic method, together with the asymptotic numbers. Next we study random graph processes. Random graphs were first introduced by Erdős and Rényi and have been studied extensively since. A random graph process is a Markov chain whose states are graphs on a given vertex set. It starts with an empty graph, and in each step a new graph is obtained from the current graph by adding a new edge according to a prescribed rule. Recently, random graph processes with degree restrictions have received much attention. In this thesis we study random graph processes whose minimum degree grows quite quickly, with the following questions in mind: How does the connectedness of a graph generated by a random graph process change as the number of edges increases? When does the phase transition occur? How big is the largest component? To investigate these random graph processes we use probabilistic methods, in particular Wormald's differential equation method, multi-type branching processes, singularity analysis, and Fourier transforms.
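The phase-transition questions above can be probed empirically for the classical Erdős-Rényi process: add uniformly random edges one at a time and track the largest component with a union-find structure. The giant component emerges once the number of edges passes n/2. A small simulation sketch (the vertex count and edge counts are arbitrary choices):

```python
import random

def largest_component_after(n, m, seed=0):
    """Run the Erdos-Renyi random graph process on n vertices for m
    uniformly random edge additions; return the largest component size.
    (Self-loops occasionally drawn are simply no-ops for the union.)"""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    best = 1
    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:  # union by size
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
            best = max(best, size[a])
    return best

n = 10_000
subcritical = largest_component_after(n, n // 4)   # mean degree ~0.5
supercritical = largest_component_after(n, 2 * n)  # mean degree ~4
```

Below n/2 edges the largest component stays logarithmic in n; above it, a linear-size giant component appears, which is exactly the transition the differential equation and branching-process methods analyze rigorously.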
17

Bayesian inference in the assessment of precast pile foundation safety

Santos, Marcio de Souza 05 April 2007 (has links)
The theme of "foundation safety" deserves special attention both in academia and in professional practice, given the need for solutions that better balance cost and safety, solutions that differ in how they treat the uncertainties involved in the design and execution of foundations. Pile load tests play a central role in reducing these uncertainties. Recently, particularly in the context of the NBR 6122 revision, the role of load tests in reducing the uncertainties inherent in any foundation work has been widely discussed. While it is undisputed that load tests reduce uncertainty, there is no consensus on how much they do so as a function of the type and number of tests, nor on how the variability of load test results on a given site influences the safety factor. This work therefore presents a consistent formulation, based on Bayesian inference, for combining predictions of the bearing capacity of displacement piles with information from static load tests carried to failure or from dynamic load tests, thus providing a rational updating of safety indicators. The results show that Bayesian inference has clear advantages over classical inference: it incorporates existing prior information, often of a subjective nature, and is less dependent on sample size. Moreover, Bayesian inference proved to be a legitimate instrument for incorporating load test results, yielding well-founded safety measures consistent with current foundation engineering practice.
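The Bayesian updating described in the abstract can be illustrated with the standard conjugate normal model: a prior on the mean pile capacity (e.g. from a semi-empirical prediction method) is combined with load test results of known variance, giving a posterior with reduced uncertainty. All numbers are assumed for illustration; the thesis's actual likelihood models may differ.

```python
import math

def update_capacity(prior_mean, prior_sd, tests, test_sd):
    """Conjugate normal-normal update of the mean pile capacity.
    tests: measured capacities from load tests, modelled N(mu, test_sd^2)."""
    n = len(tests)
    precision = 1.0 / prior_sd**2 + n / test_sd**2   # posterior precision
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_sd**2 + sum(tests) / test_sd**2)
    return post_mean, math.sqrt(post_var)

# prior from a capacity prediction; two static load tests (assumed, kN)
mu, sd = update_capacity(prior_mean=1200.0, prior_sd=300.0,
                         tests=[1350.0, 1420.0], test_sd=150.0)
```

The posterior mean lands between the prior prediction and the test average, weighted by their precisions, and the posterior standard deviation is smaller than both inputs, which is exactly the "rational updating of safety indicators" the abstract refers to.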
18

Probabilistic Methods In Information Theory

Pachas, Erik W 01 September 2016 (has links)
Given a probability space, we analyze the uncertainty, that is, the amount of information of a finite system, by studying the entropy of the system. We also extend the concept of entropy to a dynamical system by introducing a measure preserving transformation on a probability space. After showing some theorems and applications of entropy theory, we study the concept of ergodicity, which helps us to further analyze the information of the system.
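The entropy of a finite system mentioned above is the Shannon entropy of its probability distribution, H(p) = -sum_i p_i log2 p_i, and it is maximized by the uniform distribution. A minimal sketch:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a finite probability distribution."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    # terms with p = 0 contribute 0 by convention (x log x -> 0)
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

uniform = entropy([0.25] * 4)            # maximal uncertainty: log2(4) = 2 bits
skewed = entropy([0.7, 0.1, 0.1, 0.1])   # less uncertainty, lower entropy
```

Extending this to a measure-preserving transformation, as the abstract describes, amounts to taking entropies of refined partitions under iteration and passing to a limit (the Kolmogorov-Sinai entropy).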
19

Dynamics, information and computation

Delvenne, Jean-Charles 16 December 2005 (has links)
"Dynamics" is, very roughly, the study of how objects change in time; for instance, whether an electrical circuit goes to equilibrium due to thermal dissipation. By "information" we mean how helpful it is to observe an object in order to know it better; for instance, how many binary digits we can acquire about the value of a voltage by an appropriate measurement. A "computation" is a physical process, e.g. the flow of current through a complex set of transistors, that after some time gives us the solution to a mathematical problem (such as "Is 13 prime?"). We are interested in various relations between these concepts. In the first chapter, we unify arguments from the literature to show that a whole class of quantities of dynamical systems is uncomputable, for instance the topological entropy of tilings and of Turing machines. We then propose a precise meaning for the statement "This dynamical system is a computer", at least for symbolic systems such as cellular automata. We also show, for instance, that a "computer" must be dynamically unstable and can even be chaotic. In the third chapter, we compare how complicated it is to control a system according to whether we can acquire information about it ("feedback") or not ("open loop"), with a specific interest in finite-state systems. In the last chapter we show how to control a scalar linear system when only a finite amount of information can be acquired at each time step.
20

Origin and transport of sediments in an alpine glaciated catchment (Bossons glacier, France): a quantification combining hydro-sedimentary data, radio-frequency identification of pebbles, cosmogenic nuclides content and probabilistic methods

Guillon, Hervé 17 May 2016 (has links)
Among the most efficient agents of erosion, glaciers react dynamically to climate variations and drive significant adjustments of the downstream sediment flux. In the Alps, present-day warming raises the question of how the sediment load delivered by partially glaciated catchments will evolve. The detrital export from such an environment results from erosion processes operating in distinct geomorphological domains: supraglacial rockwalls, the ice-covered substratum, and the proglacial area downstream of the glacier. The aim of this doctoral research is therefore to characterize the origin and transport of sediments in the catchments of two streams draining the Bossons glacier (Mont-Blanc massif, France).

To this end, the components of the sediment flux originating from the supraglacial, subglacial and proglacial domains are separated and quantified using innovative methods:
i. using terrestrial cosmogenic nuclide concentrations as a tracer of supraglacial transport;
ii. combining meteorological data with hydro-sedimentary measurements acquired at high temporal resolution (2 min), complemented by multivariate linear models;
iii. applying a probabilistic method to estimate sediment fluxes by source over a seven-year period;
iv. radio-frequency tracking of coarse particles in the proglacial area, analysed within a stochastic transport model.

Applied through numerical tools, these methodologies yield erosion rates for the supraglacial, subglacial and proglacial domains and constrain sediment transfer within the catchment. In the terminal part of the glacier, 52±14 to 9±4% of the supraglacial load is transferred to the subglacial drainage network. The evolution of this network over the melt season leads to the export of the winter sediment production within a short period. Moreover, the drainage configuration beneath the glacier and its retreat dynamics control the remobilization of an older subglacial sediment stock. These processes explain the contrast between the mean subglacial erosion rates of the two monitored streams, 0.63 ± 0.37 and 0.38 ± 0.22 mm/yr respectively. These values are lower than the tectonic uplift rate, ∼1.5 mm/yr, and of the same order of magnitude as the mean erosion rate of the rockwalls overlooking the glacier, estimated at 0.76 ± 0.34 mm/yr.

Downstream of the glacier, the hillslopes are not efficiently connected to the proglacial stream, and the glacier remains the main source of the sediment export. In the absence of extreme events, the proglacial domain contributes 13 ± 10% of the total sediment export of the catchment. The proglacial area also acts as a sediment buffer operating on daily to annual timescales for silts and sands, and on a decadal timescale for coarser particles. Overall, despite the rapid recent retreat of the glacier, the Bossons catchment currently exhibits limited paraglacial dynamics, corresponding to a mean proglacial erosion rate of 0.25±0.20 mm/yr. Finally, at the catchment scale, sediment dynamics are multi-frequency and buffered by intermediate storage and release.
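The probabilistic source-partition idea behind quantities such as "13 ± 10% of the total sediment export" can be illustrated by Monte Carlo propagation of per-source uncertainties. The sketch below is a minimal illustration, not the thesis method: the flux means and standard deviations are hypothetical placeholder numbers, chosen only to show how a share-of-export distribution could be derived from uncertain source fluxes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo draws

# Hypothetical source fluxes (arbitrary units), mean and 1-sigma spread,
# standing in for per-source estimates with their uncertainties.
glacial = rng.normal(870.0, 120.0, N)        # subglacial + supraglacial export
proglacial = rng.normal(130.0, 95.0, N)      # proglacial contribution
proglacial = np.clip(proglacial, 0.0, None)  # a sediment flux cannot be negative

# Distribution of the proglacial share of the total export.
fraction = proglacial / (glacial + proglacial)
mean, std = fraction.mean(), fraction.std()
print(f"proglacial share: {mean:.2f} +/- {std:.2f}")
```

Reporting the mean and standard deviation of the sampled ratio mirrors the "X ± Y%" form used in the abstract, while the full sample also gives access to quantiles if the distribution is skewed by the non-negativity constraint.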

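The stochastic transport framework used to interpret the radio-frequency pebble tracking can be sketched as a simple random walk: each flood event mobilizes a tagged particle with some probability, and mobilized particles advance by a random step length. All parameter values below (mobilization probability, mean step, event count) are hypothetical and serve only to illustrate the modelling approach, not the calibrated values of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pebbles(n_pebbles=1000, n_events=50, p_move=0.3, mean_step=5.0):
    """Random-walk sketch: per flood event, each pebble is mobilized with
    probability p_move; mobilized pebbles advance by an exponentially
    distributed step length (metres). Returns final travel distances."""
    x = np.zeros(n_pebbles)
    for _ in range(n_events):
        moved = rng.random(n_pebbles) < p_move
        x[moved] += rng.exponential(mean_step, moved.sum())
    return x

dist = simulate_pebbles()
print(f"mean travel distance: {dist.mean():.1f} m")
```

Comparing the simulated distance distribution against recovered tag positions is one way such a model can be confronted with field data; the expected mean here is n_events * p_move * mean_step.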