1

3D modeling in Petrel of geological CO2 storage site / 3D modellering i Petrel av geologiskt CO2 lagringsområde

Gunnarsson, Niklas January 2011 (has links)
If mitigation measures are not taken to prevent global warming, the consequences of continued global climate change, caused by the use of fossil fuels, may be severe. Carbon Capture and Storage (CCS) has been suggested as a way of decreasing global atmospheric emissions of CO2. Within MUSTANG, a four-year (2009-2013) large-scale integrating European project funded by the EU FP7, the objective is to gain understanding of the performance of so-called saline aquifers for geological storage of CO2, and to develop improved methods and models for characterizing them. In this context, a number of sites of different geological settings and geographical locations in Europe are also analyzed and modeled in order to gain a broad understanding of site characteristics relevant to CO2 storage. The south Scania site is included in the study as one example site, with data coming from previous geothermal and other investigations. The objective of the Master's thesis work presented herein was to construct a 3D model of the south Scania site using the modeling/simulation software Petrel, to evaluate well log data, and to carry out stochastic simulations using different geostatistical algorithms and evaluate their benefits. The aim was to produce a 3D model to be used for CO2 injection simulation purposes in the continuing work of the MUSTANG project. The sequential Gaussian simulation algorithm was used in the porosity modeling of the Arnager greensand aquifer, with porosity data determined from neutron and gamma ray measurements. Five hundred realizations were averaged, and an increasing porosity with depth was observed. Two different algorithms were used for the facies modeling of the alternative multilayered trap: the truncated Gaussian simulation algorithm and the sequential indicator simulation algorithm.
Realistic geological models were obtained when the truncated Gaussian simulation algorithm was used with a low-nugget variogram and a relatively large range.
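The realization-averaging step of the porosity modeling can be sketched in numpy. This is only an illustrative stand-in for the Petrel workflow: unconditional simulation via a Cholesky factor replaces the sequential Gaussian simulation algorithm (the two draw from the same distribution in the unconditional case), and the depth interval, trend, and exponential covariance parameters are assumed placeholders, not the fitted variogram of the thesis.

```python
import numpy as np

# Hypothetical 1-D depth grid and porosity trend (placeholders, not site data)
rng = np.random.default_rng(0)
depth = np.linspace(1500.0, 1600.0, 50)          # m
mean_poro = 0.20 + 0.0005 * (depth - depth[0])   # assumed porosity increase with depth
corr_range = 20.0                                 # m, assumed variogram range
sill = 0.02 ** 2                                  # variance of the residual field

# Covariance matrix from an exponential model C(h) = sill * exp(-3h / range)
h = np.abs(depth[:, None] - depth[None, :])
cov = sill * np.exp(-3.0 * h / corr_range)
L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(depth)))

# Draw 500 correlated realizations and average them, as in the porosity model
realizations = mean_poro + (L @ rng.standard_normal((len(depth), 500))).T
avg_poro = realizations.mean(axis=0)
```

Averaging smooths out the simulation noise, so the assumed depth trend re-emerges in `avg_poro` even though individual realizations fluctuate.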
2

Inférence de réseaux pour modèles inflatés en zéro / Network inference for zero-inflated models

Karmann, Clémence 25 November 2019 (has links)
Network inference has a growing number of applications, particularly in human health and the environment, for the study of microbiological and genomic data. Networks are indeed an appropriate tool to represent, or even study, relationships between entities. Many mathematical estimation techniques have been developed, particularly in the context of Gaussian graphical models, but also for binary or mixed data. The processing of abundance data (of microorganisms such as bacteria, for example) is special for two reasons: on the one hand, the data do not directly reflect reality, because a sequencing process takes place to duplicate the species and this process introduces variability; on the other hand, a species may be absent from some samples. We are then in the setting of zero-inflated data. Many graph inference methods exist for Gaussian, binary and mixed data, but zero-inflated models are rarely studied, even though they reflect the structure of many data sets in a relevant way. The objective of this thesis is network inference for zero-inflated models, restricted to conditional dependency graphs. The work is divided into two main parts. The first concerns graph inference methods based on the estimation of neighbourhoods by a procedure coupling ordinal regression models with variable selection methods. The second focuses on graph inference in a model where the variables are zero-inflated Gaussians obtained by double truncation (on the right and on the left).
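The neighbourhood-estimation idea can be sketched in a plain Gaussian toy setting. This is a simpler analogue, not the thesis's method: a coordinate-descent lasso stands in for the coupling of ordinal regression and variable selection, and the chain-structured data and penalty value are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ resid / n
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / (X[:, j] @ X[:, j] / n)
    return beta

# Toy data with a chain of conditional dependencies: 0 -- 1 -- 2
rng = np.random.default_rng(1)
n = 4000
x1 = rng.standard_normal(n)
x0 = 0.6 * x1 + 0.8 * rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)
X = np.column_stack([x0, x1, x2])
X -= X.mean(axis=0)

# Estimate each node's neighbourhood by a penalized regression on the rest,
# then combine the estimated neighbourhoods with an OR rule
p = X.shape[1]
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = [k for k in range(p) if k != j]
    beta = lasso_cd(X[:, others], X[:, j], lam=0.2)
    for b, k in zip(beta, others):
        if abs(b) > 1e-8:
            adj[j, k] = True
adj = adj | adj.T
```

The recovered adjacency has edges 0-1 and 1-2 but not 0-2: variables 0 and 2 are marginally correlated through 1, yet conditionally independent given it, which is exactly the distinction a conditional dependency graph encodes.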
3

An empirical study of stability and variance reduction in Deep Reinforcement Learning

Lindström, Alexander January 2024 (has links)
Reinforcement Learning (RL) is a branch of AI that deals with solving complex sequential decision-making problems such as training robots, trading while following patterns and trends, optimal control of industrial processes, and more. These applications span various fields, including data science, factories, finance, and others [1]. The most popular RL algorithm today is Deep Q Learning (DQL), developed by a team at DeepMind, which successfully combines RL with Neural Networks (NN). However, combining RL and NN introduces challenges such as numerical instability and unstable learning due to high variance. Among other causes, these issues stem from the "moving target problem". To mitigate this problem, the target network was introduced. However, using a target network slows down learning, vastly increases memory requirements, and adds overhead in running the code. In this thesis, we conduct an empirical study to investigate the importance of target networks across three scenarios. In the first scenario, we train agents in online learning. The aim here is to demonstrate that the target network can be removed after some point in time without negatively affecting performance. To evaluate this scenario, we introduce the concept of the stabilization point. In the second scenario, we pre-train agents before continuing to train them in online learning. For this scenario, we demonstrate the redundancy of the target network by showing that it can be completely omitted. In the third scenario, we evaluate a newly developed activation function called Truncated Gaussian Error Linear Unit (TGeLU). For this scenario, we train an agent in online learning and show that by using TGeLU as the activation function, we can completely remove the target network. Through the empirical study of these scenarios, we conjecture and verify that a target network has only transient benefits concerning stability. We show that it has no influence on the quality of the policy found. We also observed that variance was generally higher when using a target network in the later stages of training compared to cases where the target network had been removed. Additionally, during the investigation of the second scenario, we observed that the number of training iterations during pre-training affected the agent's performance in the online learning phase. This thesis provides a deeper understanding of how the target network affects the training process of DQL; some of the findings, particularly those surrounding variance reduction, are contrary to popular belief. The results also point to future work: further exploring the benefits of the lower variance observed when removing the target network, and conducting more efficient convergence analyses for the pre-training part of the second scenario.
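The "moving target" mechanism and the target-network intervention studied above can be sketched with a tabular Q-function standing in for the neural network. Everything here (the cyclic toy environment, the sizes, and the sync period) is an illustrative assumption, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
gamma, alpha = 0.99, 0.1
copy_every = 50                    # assumed target-network sync period

Q = rng.normal(scale=0.01, size=(n_states, n_actions))   # online network
Q_target = Q.copy()                                       # frozen copy

for step in range(500):
    # Toy cyclic environment: random exploration, reward on returning to state 0
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = (s + 1) % n_states
    r = 1.0 if s_next == 0 else 0.0

    # The TD target is computed from the *frozen* network; this is what
    # keeps the regression target fixed between syncs
    y = r + gamma * Q_target[s_next].max()
    Q[s, a] += alpha * (y - Q[s, a])

    # Dropping this sync (e.g. once a stabilization point is reached) is
    # the kind of intervention examined in the thesis
    if step % copy_every == 0:
        Q_target = Q.copy()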
