671 |
An Application of an In-Depth Advanced Statistical Analysis in Exploring the Dynamics of Depression, Sleep Deprivation, and Self-Esteem
Gaffari, Muslihat, 01 August 2024 (has links) (PDF)
Depression, intertwined with sleep deprivation and self-esteem, presents a significant challenge to mental health worldwide. The research in this paper employs advanced statistical methodologies to unravel the complex interactions among these factors. Using log-linear models with homogeneous association, multinomial logistic regression, and generalized linear models, the study scrutinizes large datasets to uncover nuanced patterns and relationships. By elucidating how depression, sleep disturbance, and self-esteem intersect, the research aims to deepen understanding of mental health phenomena. The study clarifies the relationships among these variables, explores the reasons for prioritizing depression research, and evaluates how each of these statistical models sheds light on the variables' intricate dynamics. The findings offer insights into the risk and protective factors associated with these variables, guiding tailored interventions for individuals in psychological distress. Policymakers can also use these insights to develop comprehensive strategies promoting mental health and well-being at a societal level.
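As a hedged illustration of one model named in this abstract, the sketch below fits a multinomial logistic regression relating sleep and self-esteem to a three-level depression outcome; the data, variable names, and effect sizes are invented for illustration and are not the study's data.

```python
# Minimal sketch: multinomial logistic regression on synthetic data.
# Variables, codings, and effects are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sleep_hours = rng.normal(7, 1.5, n)      # nightly sleep duration
self_esteem = rng.normal(30, 8, n)       # self-esteem scale score

# Assumed data-generating process: less sleep and lower self-esteem
# shift probability toward higher depression severity (levels 0/1/2).
scores = np.column_stack([
    np.zeros(n),
    1.0 - 0.30 * sleep_hours - 0.05 * self_esteem + rng.normal(0, 1, n),
    3.0 - 0.60 * sleep_hours - 0.10 * self_esteem + rng.normal(0, 1, n),
])
depression = scores.argmax(axis=1)

X = np.column_stack([sleep_hours, self_esteem])
model = LogisticRegression(max_iter=1000).fit(X, depression)
print(model.coef_)   # one coefficient row per depression level
```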
|
672 |
Statistical methods for variant discovery and functional genomic analysis using next-generation sequencing data
Tang, Man, 03 January 2020 (has links)
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data, allowing the identification of biomarkers for early disease diagnosis and driving the transformation of most disciplines in biology and medicine. Greater effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. This dissertation focuses on modeling "omics" data in various NGS applications, with the primary goal of developing novel statistical methods to identify sequence variants, find transcription factor (TF) binding patterns, and decode the relationship between TF binding and gene expression levels. Accurate and reliable identification of sequence variants, including single nucleotide polymorphisms (SNPs) and insertion-deletion polymorphisms (INDELs), plays a fundamental role in NGS applications. Existing methods for calling these variants often make the simplifying assumption of positional independence and fail to leverage the dependence of genotypes at nearby loci induced by linkage disequilibrium. We propose vi-HMM, a hidden Markov model (HMM)-based method for calling SNPs and INDELs in mapped short-read data. Simulation experiments show that, under various sequencing depths, vi-HMM outperforms existing methods in terms of sensitivity and F1 score. When applied to human whole-genome sequencing data, vi-HMM demonstrates higher accuracy in calling SNPs and INDELs. One important NGS application is chromatin immunoprecipitation followed by sequencing (ChIP-seq), which characterizes protein-DNA relations through genome-wide mapping of TF binding sites. Multiple TFs binding to DNA sequences often show complex binding patterns, which indicate how TFs with similar functionalities work together to regulate the expression of target genes. To help uncover the transcriptional regulation mechanism, we propose a novel nonparametric Bayesian method to detect the clustering pattern of multiple-TF bindings from ChIP-seq datasets. A simulation study demonstrates that our method performs best with regard to precision, recall, and F1 score, in comparison to traditional methods. We also apply the method to real data and observe several TF clusters that have been recognized previously in mouse embryonic stem cells. Recent advances in ChIP-seq and RNA sequencing (RNA-Seq) technologies provide more reliable and accurate characterization of TF binding sites and gene expression measurements, which serve as a basis for studying the regulatory functions of TFs on gene expression. We propose a log Gaussian Cox process with a wavelet-based functional model to quantify the relationship between TF binding site locations and gene expression levels. Through a simulation study, we demonstrate that our method performs well, especially with large sample sizes and small variance. It also shows a remarkable ability to distinguish real local features in the function estimates. / Doctor of Philosophy / The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data and brings innovations to biology and medicine. Greater effort is needed to develop novel, powerful, and efficient tools for NGS data analysis. In this dissertation, we mainly focus on three problems closely related to NGS and its applications: (1) how to improve variant-calling accuracy, (2) how to model transcription factor (TF) binding patterns, and (3) how to quantify the contribution of TF binding to gene expression.
We develop novel statistical methods to identify sequence variants, find TF binding patterns, and explore the relationship between TF binding and gene expression. We expect our findings to be helpful in promoting a better understanding of disease causality and facilitating the design of personalized treatments.
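The vi-HMM is summarized only briefly above, but its core idea, decoding a hidden per-locus state sequence (match / SNP / indel) with the Viterbi algorithm so that calls at nearby loci inform one another, can be sketched as follows. The states, transition probabilities, and emission probabilities below are toy assumptions, not the dissertation's parameterization.

```python
# Toy Viterbi decoding over per-locus states {match, SNP, indel}.
# All probabilities are illustrative assumptions.
import numpy as np

states = ["match", "SNP", "indel"]
log_pi = np.log([0.98, 0.01, 0.01])
log_trans = np.log([
    [0.990, 0.008, 0.002],   # sticky transitions: a crude stand-in for
    [0.700, 0.290, 0.010],   # the dependence induced by linkage disequilibrium
    [0.700, 0.010, 0.290],
])
# Observation symbols per locus: 0 = reads match reference, 1 = mismatch, 2 = gap
log_emit = np.log([
    [0.980, 0.015, 0.005],
    [0.100, 0.880, 0.020],
    [0.050, 0.050, 0.900],
])

def viterbi(obs):
    n, k = len(obs), len(states)
    dp = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    dp[0] = log_pi + log_emit[:, obs[0]]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + log_trans      # predecessor x successor
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 0, 2, 2, 0, 0]))   # toy per-locus pileup summaries
```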
|
673 |
Strategic Stochastic Coordination and Learning in Regular Network Games
Wei, Yi, 19 May 2023 (has links)
Coordination is a desirable feature in many multi-agent systems, such as robotic, social, and economic networks, allowing the execution of tasks that would be impossible for individual agents. This thesis addresses two problems in stochastic coordination in which each agent makes decisions strategically, taking into account the decisions of its neighbors over a regular network.
In the first problem, we study coordination in a team of strategic agents choosing to undertake one of multiple tasks. We adopt a stochastic framework in which the agents decide between two distinct tasks whose difficulty is randomly distributed and partially observed. We show that a Nash equilibrium with a simple and intuitive linear structure exists for diffuse prior distributions on the task difficulties. Additionally, we show that the best response of any agent to an affine strategy profile can be nonlinear when the prior distribution is not diffuse. We then present an algorithm that allows us to efficiently compute a data-driven Nash equilibrium within the class of affine policies.
In the second problem, we assume that the payoff structure of the coordination game corresponds to a single task-allocation scenario whose difficulty is perfectly observed. Since there are multiple Nash equilibria in this game, the agents must use a distributed stochastic algorithm known as log-linear learning to play it repeatedly.
First, we show that this networked coordination game is a potential game. Moreover, we establish that, for regular networks, convergence to a Nash equilibrium depends on the ratio between the task-difficulty parameter and the connectivity degree according to a threshold rule. We investigate via simulations the interplay between rationality and the degree of connectivity of the network. Our results show counterintuitive behaviors, such as the existence of regimes in which networks with larger connectivity require less rational agents to converge to the Nash equilibrium with high probability. We also examine the characteristics of both regular and non-regular graphical coordination games using this bi-matrix game model. / Master of Science / This thesis addresses two problems in stochastic coordination among strategic agents in multi-agent systems, such as robotic, social, and economic networks. The first problem studies coordination among agents when they need to choose between multiple tasks whose difficulties are randomly distributed and partially observed. The thesis shows the existence of a Nash equilibrium with a linear structure for certain prior distributions and presents an algorithm to efficiently compute a data-driven Nash equilibrium within a specific class of policies. The second problem assumes a single task-allocation scenario whose difficulty is perfectly observed and investigates the use of a distributed stochastic algorithm known as log-linear learning to converge to a Nash equilibrium. The thesis shows that convergence to a Nash equilibrium depends on the task-difficulty parameter and the connectivity degree of the network, and explores the influence of the agents' rationality and the network's connectivity on the learning process. Overall, the thesis provides insight into the challenges and opportunities in achieving coordination among strategic agents in multi-agent systems.
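As a sketch of the learning rule studied above, the following simulates log-linear learning in a binary coordination game on a ring, the simplest 2-regular network; the payoff values and the rationality parameter beta are illustrative assumptions, not the thesis's calibrated quantities.

```python
# Log-linear learning on a ring-network coordination game (a sketch).
# Payoffs and the rationality parameter beta are assumed values.
import numpy as np

rng = np.random.default_rng(1)
n, beta, steps = 20, 2.0, 5000
actions = rng.integers(0, 2, n)              # each agent currently plays 0 or 1

def payoff(i, a):
    # Coordination payoff: 1 per matching neighbor on action 1, c per
    # matching neighbor on action 0; c stands in for a task-difficulty knob.
    c = 0.8
    neighbors = [(i - 1) % n, (i + 1) % n]   # ring: every agent has degree 2
    matches = sum(actions[j] == a for j in neighbors)
    return matches * (1.0 if a == 1 else c)

for _ in range(steps):
    i = rng.integers(n)                      # one randomly chosen agent revises
    u = np.array([payoff(i, 0), payoff(i, 1)])
    p = np.exp(beta * u)
    actions[i] = rng.choice(2, p=p / p.sum())  # logit (log-linear) choice rule

print(actions)   # for large beta this typically settles on the all-1 equilibrium
```

Raising beta makes agents more rational (closer to best response), while beta near zero yields almost uniform experimentation; the interplay between rationality and degree discussed above can be explored by swapping the ring for denser regular graphs.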
|
674 |
Genetic Basis of Control in Fruit Mass via Pedicel Characteristics in Apple Populations
Jairam Baba Danao (19172569), 18 July 2024
Pedicels are the slender stalks that attach the fruit to the plant, and they play a crucial role in fruit development. Pedicel characteristics are complex traits controlled by multiple genes. To study whether genetic control of fruit mass acts via control of pedicel characteristics, we used two unique hybrid apple populations: ‘20 Ounce’ x ‘Prairie Fire’ and ‘Edward VII’ x ‘Prairie Fire’. Both ‘20 Ounce’ and ‘Edward VII’ produce large fruit over 200 g, whereas ‘Prairie Fire’ is a small-fruited crabapple with fruit of less than 2 g. These populations offer the potential to investigate how pedicel attributes relate to apple fruit size. Previous work established a correlation between pedicel characteristics and apple fruit mass: pedicel length showed an inverse relationship with fruit mass, while pedicel diameter was directly related to it. Shorter and broader pedicels were expected to offer the least resistance to water and nutrient flow. We hypothesized that among the genes that control fruit mass, some govern pedicel length and diameter. Quantitative trait locus (QTL) analysis (linkage mapping) was performed, and five QTLs were associated with fruit mass, pedicel length, or pedicel diameter, with LOD scores of 4 and above. However, no common region was associated with both fruit mass and pedicel characteristics. This does not support our hypothesis and suggests that different genomic regions may control these traits. Knowledge of the QTLs, and subsequently the genes, that affect fruit mass and pedicel characteristics in apple has potential applications in apple breeding and fruit production. The identification and manipulation of these genes holds the promise of new apple cultivars with improved pedicel traits and, ultimately, greater fruit mass and enhanced fruit quality.
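For context on the LOD threshold used above, a single-marker LOD score can be computed by comparing regression fits with and without the marker genotype (Haley-Knott-style regression). The sketch below uses synthetic genotypes and fruit masses; all numbers are assumptions for illustration.

```python
# Sketch: single-marker LOD score via regression on synthetic data.
# LOD = (n/2) * log10(RSS_null / RSS_marker) under a normal model.
import numpy as np

rng = np.random.default_rng(2)
n = 200
genotype = rng.integers(0, 2, n)             # biallelic marker, coded 0/1
fruit_mass = 50 + 25 * genotype + rng.normal(0, 10, n)   # grams; assumed effect

rss_null = np.sum((fruit_mass - fruit_mass.mean()) ** 2)
X = np.column_stack([np.ones(n), genotype])
beta, *_ = np.linalg.lstsq(X, fruit_mass, rcond=None)
rss_marker = np.sum((fruit_mass - X @ beta) ** 2)

lod = (n / 2) * np.log10(rss_null / rss_marker)
print(f"LOD = {lod:.1f}")   # compare against the threshold of 4 used above
```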
|
675 |
Heterogeneous Sensor Data based Online Quality Assurance for Advanced Manufacturing using Spatiotemporal Modeling
Liu, Jia, 21 August 2017 (has links)
Online quality assurance is crucial for elevating product quality and boosting process productivity in advanced manufacturing. However, the inherent complexity of advanced manufacturing, including nonlinear process dynamics, multiple process attributes, and low signal/noise ratio, poses severe challenges for both maintaining stable process operations and establishing efficacious online quality assurance schemes.
To address these challenges, four advanced manufacturing processes, namely fused filament fabrication (FFF), binder jetting, chemical mechanical planarization (CMP), and the slicing process in wafer production, are investigated in this dissertation for applications of online quality assurance, using various sensors such as thermocouples, infrared temperature sensors, and accelerometers. The overarching goal of this dissertation is to develop innovative, integrated methodologies that are tailored to these individual manufacturing processes yet address their common challenges, achieving satisfactory performance in online quality assurance based on heterogeneous sensor data. Specifically, three new methodologies are created and validated using actual sensor data, namely,
(1) Real-time process monitoring methods using a Dirichlet process (DP) mixture model for timely detection of process changes and identification of different process states in FFF and CMP. The proposed methodology is capable of handling the non-Gaussian data produced by heterogeneous sensors in these advanced manufacturing processes for successful online quality assurance.
(2) Spatial Dirichlet process (SDP) for modeling complex multimodal wafer thickness profiles and exploring their clustering effects. The SDP-based statistical control scheme can effectively detect out-of-control wafers and achieve wafer thickness quality assurance for the slicing process with high accuracy.
(3) An augmented spatiotemporal log Gaussian Cox process (AST-LGCP) quantifying the spatiotemporal evolution of porosity in binder jetting parts and capable of predicting high-risk areas on consecutive layers. This work fills a long-standing research gap, the lack of rigorous layer-wise porosity quantification for parts made by additive manufacturing (AM), and provides a basis for taking corrective actions toward product quality improvement in a prognostic way.
These methodologies surmount common challenges of advanced manufacturing that paralyze traditional online quality assurance methods, and they embody key components for implementing effective online quality assurance with various sensor data. There is promising potential to extend them to other manufacturing processes in the future. / Ph. D. / This dissertation develops novel online quality assurance methodologies for advanced manufacturing using various sensor data. Four advanced manufacturing processes, including fused filament fabrication, binder jetting, chemical mechanical planarization, and the wafer slicing process, are investigated in this research. The developed methodologies address common challenges in these processes, such as nonlinear process dynamics and high variety in sensor data dimensions, which have severely hindered the effectiveness of traditional online quality assurance methods. Consequently, the proposed research achieves satisfactory performance in defect detection and quality prediction for the advanced manufacturing processes.
In this dissertation, the research methodologies are constructed in both the space and time domains based on different types of sensor data. Sensor data representation and integration for a variety of data formats (e.g., online data streams, profile data, image data), with dimensionality covering a wide range (from ~10^0 to ~10^5), are researched to extract effective features that are sensitive to manufacturing process defects; the devised methods, based on the extracted features, utilize spatiotemporal analysis to realize timely detection and accurate prediction of process defects. These integrated methodologies have promising potential to be extended to other advanced manufacturing processes for efficacious process monitoring and quality assurance.
The accomplished work in this dissertation is an effective effort towards sustainable operations of advanced manufacturing. The achieved performance not only enables improvement in defect detection and quality prediction, but also lays the foundation for future implementation of corrective actions that can automatically mitigate the process defects.
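As a hedged sketch of the nonparametric flavor of methodology (1) above, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior can recover process states from sensor features without fixing the number of states in advance; the synthetic readings and parameter choices below are illustrative assumptions, not the dissertation's models or data.

```python
# Sketch: Dirichlet-process mixture for unsupervised discovery of
# process states in sensor features. All data here are synthetic.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Pretend (vibration, temperature) features from three latent process states.
stable   = rng.normal([0.0, 70.0], [0.2, 1.0], size=(300, 2))
drifting = rng.normal([0.8, 75.0], [0.3, 1.5], size=(120, 2))
faulty   = rng.normal([2.5, 90.0], [0.5, 3.0], size=(30, 2))
X = np.vstack([stable, drifting, faulty])

dpgmm = BayesianGaussianMixture(
    n_components=10,                          # truncation level, not the true K
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

print(np.unique(dpgmm.predict(X)))            # typically ~3 occupied components
```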
|
676 |
Anomaly Detection in Telecom Service Provider Network Infrastructure Security Logs using an LSTM Autoencoder: Leveraging Time Series Patterns for Improved Anomaly Detection
Vlk, Vendela, January 2024 (has links)
New regulations have been placed on Swedish Telecom Service Providers (TSPs) due to rising concern for safeguarding network security and privacy in the face of ever-evolving cyber threats. These regulations demand that Swedish telecom companies expand their data security strategies with proactive security measures. Logs, serving as digital footprints in IT infrastructure, play a crucial role in identifying anomalies that could indicate security breaches. Deep learning (DL) has been used to detect anomalies in logs due to its ability to discern intricate patterns within the data. By leveraging deep learning-based models, it is possible not only to identify anomalies but also to predict and mitigate potential threats within the telecom network. An LSTM autoencoder was implemented to detect anomalies in two separate multivariate temporal log datasets: the BETH cybersecurity dataset and a Cisco log dataset created specifically for this thesis. The empirical results in this thesis show that the LSTM autoencoder reached an ROC AUC of 99.5% on the BETH dataset and 76.6% on the Cisco audit dataset. With an additional anomaly detection aid, the model reached an ROC AUC of 99.6% on the Cisco audit dataset. The conclusion drawn from this work is that the systematic approach to developing a deep learning model for anomaly detection in log data was efficient. However, the study's findings raise crucial considerations regarding the appropriateness of various log data for deep learning models used in anomaly detection.
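A minimal version of the detector described above can be sketched with Keras: train an LSTM autoencoder on windows of (mostly normal) log-derived features, then flag windows whose reconstruction error exceeds a threshold. The layer sizes, window length, and percentile threshold below are assumptions, not the thesis's exact configuration, and the training data here are synthetic.

```python
# Sketch: LSTM autoencoder scoring windowed log features by
# reconstruction error. Sizes and threshold rule are assumed.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
T, F = 20, 8                                  # window length, features per step
X_train = rng.normal(0, 1, (1000, T, F))      # stand-in for normal log windows

inputs = tf.keras.Input(shape=(T, F))
z = tf.keras.layers.LSTM(16)(inputs)                      # encoder bottleneck
z = tf.keras.layers.RepeatVector(T)(z)
z = tf.keras.layers.LSTM(16, return_sequences=True)(z)    # decoder
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(F))(z)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, X_train, epochs=5, batch_size=64, verbose=0)

errors = np.mean((model.predict(X_train, verbose=0) - X_train) ** 2, axis=(1, 2))
threshold = np.percentile(errors, 99)         # assumed percentile rule
print((errors > threshold).sum(), "windows flagged as anomalous")
```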
|
677 |
Utilizing deep learning models to detect log anomalies during software development: master's thesis
Divenko, A. S., January 2024 (has links)
This thesis focuses on the application of deep learning models to the problem of detecting log anomalies during software development. A test bench was developed to simulate the software development process, generating synthetic log data from various services. Combining heterogeneous logs made it possible to create a realistic dataset with hidden dependencies for a more challenging anomaly detection task. The DeepLog, LogAnomaly, and LogBERT deep learning models were applied to the created dataset. For each model, training was performed and anomaly detection performance was evaluated on a test sample. The developed bench can be extended and used for further research on applying deep learning to the task of detecting log anomalies during software development.
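Of the three models applied above, DeepLog's core idea is the simplest to sketch: train a sequence model to predict the next log key from a window of preceding keys, and flag an event whose true key is not among the model's top-k predictions. The sketch below uses a synthetic template sequence and assumed sizes; it is not the original implementation.

```python
# Sketch of the DeepLog idea: next-log-key prediction with a top-k
# anomaly rule. Synthetic log-key sequence; all sizes are assumed.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)
vocab, window, k = 30, 10, 3
normal_cycle = np.array([0, 1, 2, 3, 2, 4, 5, 1, 6, 7])  # a repeating service loop
seq = np.tile(normal_cycle, 300)

X = np.stack([seq[i:i + window] for i in range(len(seq) - window)])
y = seq[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(vocab, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

probs = model.predict(X[:1], verbose=0)[0]
topk = probs.argsort()[-k:]                   # k most likely next keys
print("anomalous" if y[0] not in topk else "normal")
```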
|
678 |
Morphological changes in the domestic pig between the Iron Age and modern times in Gaul and France: regionalisms, zootechnical acquisitions and historical implications
Duval, Colin, 11 September 2015 (has links)
This archaeozoological study investigates morphological variation in different skeletal parts of the pig (Sus scrofa domesticus), based on two main assumptions: (1) the morphological changes of the animal reflect socio-economic changes; (2) there is great diversity in pig forms, driven by different factors (environment, agropastoral strategies, cultural habits, etc.). On the basis of these assumptions, our analysis seeks to quantify and describe these changes and this diversity in order to understand their origin and mechanisms. To do so, we observed dental and skeletal remains from more than 600 archaeological sites, located in Gallic, French and Italian territories and dated from La Tène moyenne to the modern period, using different methods of geometric and traditional morphometrics. In this way, we were able to build a strong osteometric reference framework for the pig and to obtain new clues for a better understanding of husbandry practices in the Gallic, Roman and medieval periods.
|
679 |
Inference and diagnostics in generalized log-gamma nonlinear models
SILVA, Priscila Gonçalves da, 04 November 2016 (has links)
Young and Bakir (1987) proposed the class of generalized log-gamma linear regression models (GLGLM) to analyze survival data. In our work, we extend the class of models proposed by Young and Bakir (1987) by considering a nonlinear structure for the regression parameters. The new class of models is called generalized log-gamma nonlinear regression models (GLGNLM). We also propose a matrix formula for the second-order bias of the maximum likelihood estimate of the regression parameter vector in the GLGNLM class. We use the results of Cox and Snell (1968) and the bootstrap technique (Efron, 1979) to obtain bias-corrected maximum likelihood estimates. Residuals and diagnostic techniques are proposed for the GLGNLM, such as generalized leverage and local and global influence. A general matrix expression is obtained for the Bartlett correction factor for the likelihood ratio statistic in this class of models. Simulation studies were developed to evaluate and compare numerically the performance of likelihood ratio tests and their corrected versions with regard to size and power in finite samples. Furthermore, general matrix expressions are obtained for the Bartlett-type correction factors for the score and gradient statistics. Simulation studies were conducted to evaluate the performance of the score and gradient tests and their corrected versions with regard to size and power in finite samples.
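The Efron (1979) bootstrap correction cited above has a simple generic form: re-estimate the parameter on resampled data, average the bootstrap estimates, and subtract the estimated bias from the MLE. The sketch below applies it to a plain gamma-distribution MLE as a stand-in; the generalized log-gamma nonlinear model itself is not reproduced here.

```python
# Sketch: bootstrap bias correction of an MLE (Efron, 1979), with a
# simple gamma fit standing in for the generalized log-gamma model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_shape, n, B = 2.0, 40, 500
x = stats.gamma.rvs(true_shape, size=n, random_state=rng)

shape_hat, _, _ = stats.gamma.fit(x, floc=0)   # MLE of the shape parameter

boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)   # resample with replacement
    boot[b], _, _ = stats.gamma.fit(xb, floc=0)

bias = boot.mean() - shape_hat                 # estimated bias of the MLE
print(f"MLE: {shape_hat:.3f}   bias-corrected: {shape_hat - bias:.3f}")
```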
|
680 |
Modeling a local rainfall phenomenon and analysis of its transfer to the groundwater
Golder, Jacques, 24 July 2013 (has links)
In research on the quality of water resources, the study of the mass-transfer process from the soil to the water table is a key element for understanding groundwater pollution. Soluble pollutants at the surface (products linked to human activities, such as fertilizers and pesticides) can travel to the water table through the porous medium that is the soil. This pollution-transfer scenario rests on two phenomena: the rainfall that generates the mass of water at the surface, and its dispersion through the porous medium. Mass dispersion in a natural porous medium such as soil is a vast and difficult research subject, both experimentally and theoretically. Its modeling is a concern of the EMMAH laboratory, in particular within the Sol Virtuel project, in which a transfer model (the PASTIS model) was developed. Coupling this transfer model with an input model describing the random dynamics of rainfall is one of the objectives of this thesis. The thesis addresses this objective by drawing on experimental observations on the one hand and on modeling inspired by the analysis of those observations on the other. The first part of the work is devoted to building a stochastic rainfall model. The choice and nature of the model are based on characteristics obtained from the analysis of rainfall-depth data collected over 40 years (1968-2008) at the INRA Research Centre in Avignon. To this end, the cumulative representation of precipitation is treated as a random walk in which the jumps and the waiting times between jumps are, respectively, the random amplitudes of and durations between two occurrences of rain events. The probability law of the jumps (log-normal) and that of the waiting times between jumps (alpha-stable) are obtained by analyzing the probability laws of the amplitudes and occurrences of rain events. We then show that this random-walk model tends toward a time-subordinated geometric Brownian motion (when the space and time steps of the walk tend to zero simultaneously while keeping a constant ratio), whose probability density is governed by a fractional Fokker-Planck equation (FFPE). Two approaches are then used to implement the model. The first is stochastic and relies on the link between the stochastic process arising from the Itô differential equation and the FFPE. The second uses a direct numerical solution obtained by discretizing the FFPE. In line with the main objective of the thesis, the second part of the work is devoted to analyzing the contribution of rainfall to fluctuations of the water table. This analysis is based on two simultaneous records of rainfall depths and water-table levels over 14 months (February 2005-March 2006). A statistical study of the links between the rainfall and water-table signals is carried out as follows: the water-table variation data are analyzed and processed to isolate the fluctuations coherent with rain events.
Furthermore, to account for mass dispersion in the soil, the transport of rainwater through the soil is modeled with a transfer code (the PASTIS model), to which the measured rainfall depths are applied as input. Among other things, the model results allow the soil water status to be estimated at a given depth (here set to 1.6 m). A study of the correlation between this water status and the water-table fluctuations is then carried out, complementing the one described above, to illustrate the possibility of modeling the impact of rainfall on water-table fluctuations.
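A toy version of the rainfall model described above can be simulated directly: cumulative rainfall as a random walk with log-normal jump amplitudes and heavy-tailed waiting times between rain events. The sketch below uses the Lévy distribution, which is the one-sided alpha-stable law with alpha = 1/2, as a convenient stand-in for the fitted alpha-stable waiting-time law; all parameter values are assumptions, not those estimated from the Avignon (1968-2008) series.

```python
# Toy rain model: cumulative rainfall as a random walk with log-normal
# jumps and one-sided stable waiting times. Parameters are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_events = 500

# Waiting times between rain events: Levy law = one-sided stable, alpha = 1/2.
waits = stats.levy.rvs(scale=0.1, size=n_events, random_state=rng)
# Jump amplitudes (rain depth per event): log-normal law.
jumps = rng.lognormal(mean=1.0, sigma=0.8, size=n_events)

t = np.cumsum(waits)          # event times, with heavy-tailed gaps
rain = np.cumsum(jumps)       # cumulative rainfall height

print(f"{n_events} events over t = {t[-1]:.1f}; total depth = {rain[-1]:.1f}")
```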
|