  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Properties, Mechanisms and Predictability of Eddies in the Red Sea

Zhan, Peng 04 1900 (has links)
Eddies are a key feature of the Red Sea circulation. They are crucial not only for energy conversion among dynamics at different scales, but also for material transport across the basin. This thesis studies the characteristics of Red Sea eddies, including their temporal and spatial properties, their energy budget, the mechanisms of their evolution, and their predictability. Remote sensing data, in-situ observations, an oceanic general circulation model, and data assimilation techniques were employed. The eddies were first identified from altimeter data using an improved winding-angle method, from which their statistical properties were derived. The results suggest that eddies occur more frequently in the central basin of the Red Sea and exhibit significant seasonal variation. The mechanisms of eddy evolution, particularly the eddy kinetic energy budget, were then investigated using the output of a long-term eddy-resolving numerical model configured for the Red Sea with realistic forcing. Examination of the energy budget revealed that the eddies acquire the vast majority of their kinetic energy through conversion of eddy available potential energy via baroclinic instability, which intensifies during winter. The factors modulating the behavior of several observed Red Sea eddies were then examined through a sensitivity analysis using the adjoint model. These eddies were found to exhibit different sensitivities to external forcings, suggesting different mechanisms for their evolution; this is the first known adjoint sensitivity study of specific eddy events in the Red Sea. The last chapter examines the predictability of Red Sea eddies using an ensemble-based forecasting and assimilation system. The forecast sea surface height was used to evaluate the overall performance of short-term eddy predictability.
Different ensemble sampling schemes were implemented, and their comparison is followed by a discussion of performance and challenges based on a case study. The thesis not only enhances understanding of Red Sea dynamics, but also deepens knowledge of the physical-biological and air-sea interactions within the basin. Further, it is a stepping stone toward a robust regional operational system with refined forecasting skill.
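The winding-angle eddy identification mentioned in this abstract rests on a simple idea: along a trajectory that loops around an eddy center, the flow direction accumulates a total turning angle near 2π. The following is a minimal, illustrative sketch of that criterion on an idealized solid-body vortex; the field, the Euler advection scheme, and all names are assumptions for demonstration, not the thesis's implementation.

```python
import numpy as np

def winding_angle(xs, ys):
    """Total signed turning angle along a trajectory; a value near 2*pi
    means the path loops once around a center, the criterion that
    winding-angle eddy detection builds on."""
    headings = np.arctan2(np.diff(ys), np.diff(xs))
    turns = np.diff(headings)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # wrap into [-pi, pi)
    return turns.sum()

def advect(x0, y0, vel, dt=0.01, n=628):
    """Forward-Euler particle advection in a steady 2-D velocity field."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        u, v = vel(xs[-1], ys[-1])
        xs.append(xs[-1] + u * dt)
        ys.append(ys[-1] + v * dt)
    return np.array(xs), np.array(ys)

# idealized cyclonic eddy: solid-body rotation about the origin
vortex = lambda x, y: (-y, x)
xs, ys = advect(1.0, 0.0, vortex)   # roughly one revolution
total = winding_angle(xs, ys)       # accumulates close to 2*pi
```

In a realistic setting the trajectories would be streamlines of altimeter-derived geostrophic velocities, and additional tests (closed-contour checks, eddy-center clustering) are needed on top of the winding criterion.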
2

Optimisation de réseaux de capteurs pour la caractérisation de source de rejets atmosphériques / Sensor network optimization for the characterization of atmospheric release sources

Kouichi, Hamza 18 July 2017 (has links)
The main objective of this study is to define the methods required to optimize a monitoring network designed for characterizing atmospheric release sources. The optimization consists in determining the optimal number and locations of sensors to deploy for this purpose. In this context, the optimization is performed for the first time by coupling the data inversion technique known as "renormalization" with metaheuristic optimization algorithms. The inversion method was first evaluated for point-source characterization, and then used to define optimality criteria for network design. The optimization process was evaluated on experiments carried out over flat terrain without obstacles (DYCE) and in an idealized urban environment (MUST). Three problems were defined and tested on these experiments.
These problems concern (i) determining the optimal network size for source characterization, where a cost function (normalized errors) measuring the gap between observations and modeled data was minimized; (ii) the optimal design of a network to retrieve an unknown point source under a particular meteorological condition, where an entropic cost function was maximized to increase the amount of information provided by the network; and (iii) determining an optimal network to reconstruct an unknown point source under multiple meteorological configurations, for which a generalized entropic cost function that we define was maximized. For all three problems, the optimization is carried out within a combinatorial optimization framework. Determining the optimal network size (problem 1) proved highly sensitive to the experimental conditions (source height and intensity, stability conditions, wind speed and direction, etc.). We noted that network performance is better for dispersion over flat terrain than in urban environments. We also showed that different network architectures can converge to the same optimum (approximate or global). For the reconstruction of unknown sources (problems 2 and 3), the entropic cost functions proved robust and yielded optimal networks of reasonable size capable of characterizing different sources under one or several meteorological conditions.
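The entropy-maximizing design criterion in (ii) can be illustrated with a toy greedy placement: if the candidate measurements are modeled as jointly Gaussian, their entropy is a log-determinant of the measurement covariance, and sensors can be added one at a time to maximize it. This sketch is illustrative only; the covariance model and the greedy scheme are assumptions, not the thesis's method (which uses metaheuristic search over the combinatorial design space).

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian with covariance `cov` (in nats)."""
    sign, logdet = np.linalg.slogdet(cov)
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def greedy_network(cov, n_sensors):
    """Greedily add the sensor that most increases joint measurement entropy."""
    chosen, remaining = [], list(range(cov.shape[0]))
    for _ in range(n_sensors):
        def joint(j):
            sel = chosen + [j]
            return gaussian_entropy(cov[np.ix_(sel, sel)])
        best = max(remaining, key=joint)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy prior: 8 candidate sites on a line, exponentially decaying correlation
sites = np.linspace(0.0, 1.0, 8)
cov = np.exp(-np.abs(sites[:, None] - sites[None, :]) / 0.3)
idx = greedy_network(cov, 3)   # tends to spread sensors apart
```

The greedy selection spreads sensors toward weakly correlated sites, which is the qualitative behavior an entropy criterion rewards: redundant (highly correlated) measurements add little information.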
3

Réalisation de surface

Kow, Eric; Gardent, Claire. January 2007 (has links) (PDF)
Doctoral thesis: Computer science: Nancy 1: 2007. / Title taken from the title screen. Includes bibliography.
4

Coherence for 3-dualizable objects

Araújo, Manuel January 2017 (has links)
A fully extended framed topological field theory with target in a symmetric monoidal n-category C is a symmetric monoidal functor Z from Bord(n) to C, where Bord(n) is the symmetric monoidal n-category of n-framed bordisms. The cobordism hypothesis says that such field theories are classified by fully dualizable objects in C. Given a fully dualizable object X in C, we are interested in computing the values of the corresponding field theory on specific framed bordisms. This leads to the question of finding a presentation for Bord(n). In view of the cobordism hypothesis, this can be rephrased in terms of finding coherence data for fully dualizable objects in a symmetric monoidal n-category. We prove a characterization of full dualizability of an object X in terms of the existence of a dual of X and the existence of adjoints for a finite number of higher morphisms. This reduces the problem of finding coherence data for fully dualizable objects to that of finding coherence data for duals and adjoints. For n=3, and in the setting of strict symmetric monoidal 3-categories, we find this coherence data and prove the corresponding coherence theorems. The proofs rely on extensive use of a graphical calculus for strict monoidal 3-categories.
5

Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification

Narayanamurthi, Mahesh 29 January 2020 (has links)
Simulation and optimization of complex physical systems are an integral part of modern science and engineering. The systems of interest in many fields have a multiphysics nature, with complex interactions between physical, chemical, and in some cases even biological processes. This dissertation seeks to advance forward and adjoint numerical time integration methodologies for the simulation and optimization of semi-discretized multiphysics partial differential equations (PDEs), and to estimate and control numerical errors via a goal-oriented a posteriori error framework. We extend exponential propagation iterative methods of Runge-Kutta type (EPIRK) by [Tokman, JCP 2011] to build EPIRK-W and EPIRK-K time integration methods that admit approximate Jacobians in the matrix-exponential-like operations. EPIRK-W methods extend the W-method theory by [Steihaug and Wolfbrandt, Math. Comp. 1979] to preserve their order of accuracy under arbitrary Jacobian approximations. EPIRK-K methods extend the theory of K-methods by [Tranquilli and Sandu, JCP 2014] to EPIRK and use a Krylov-subspace based approximation of Jacobians to gain computational efficiency. New families of partitioned exponential methods for multiphysics problems are developed using the classical order condition theory via particular variants of T-trees and corresponding B-series. The new partitioned methods are found to perform better than traditional unpartitioned exponential methods for some problems in mild-to-medium stiffness regimes. Subsequently, partitioned stiff exponential Runge-Kutta (PEXPRK) methods -- which extend stiffly accurate exponential Runge-Kutta methods from [Hochbruck and Ostermann, SINUM 2005] to a multiphysics context -- are constructed and analyzed. PEXPRK methods show full convergence under various splittings of a diffusion-reaction system. We address the problem of estimating numerical errors in a multiphysics discretization by developing a goal-oriented a posteriori error framework.
Discrete adjoints of GARK methods are derived from their forward formulation [Sandu and Guenther, SINUM 2015]. Based on these, we build a posteriori estimators for both spatial and temporal discretization errors. We validate the estimators on a number of reaction-diffusion systems and use them to simultaneously refine spatial and temporal grids. / Doctor of Philosophy / The study of modern science and engineering begins with descriptions of a system of mathematical equations (a model). Different models require different techniques to solve them on a computer both accurately and efficiently. In this dissertation, we focus on developing novel mathematical solvers for models expressed as a system of equations, where only the initial state and the rate of change of state as a function are known. The solvers we develop can be used both to forecast the behavior of the system and to optimize its characteristics to achieve specific goals. We also build methodologies to estimate and control errors introduced by mathematical solvers in obtaining a solution for models involving multiple interacting physical, chemical, or biological phenomena. Our solvers build on the state of the art in the research community by introducing new approximations that exploit the underlying mathematical structure of a model. Where necessary, we provide concrete mathematical proofs to theoretically validate the correctness of the approximations we introduce, and correlate them with follow-up experiments. We also present detailed descriptions of the procedure for implementing each mathematical solver developed throughout the dissertation, emphasizing ways to obtain maximal performance from the solver. We demonstrate significant performance improvements on a range of models that serve as running examples, describing chemical reactions among distinct species as they diffuse over a surface medium.
Also provided are results and procedures that a curious researcher can use to advance the ideas presented in the dissertation to other types of solvers that we have not considered. Research on mathematical solvers for different mathematical models is rich and rewarding with numerous open-ended questions and is a critical component in the progress of modern science and engineering.
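The exponential-integrator idea underlying EPIRK-type methods can be made concrete with a generic exponential Euler step (a much simpler scheme than the dissertation's methods, used here only as a sketch): the stiff linear part enters through matrix-exponential-like φ-functions, so a single step is exact for a purely linear problem.

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    """phi_1(M) = M^{-1} (expm(M) - I), the first phi-function that
    exponential integrators (such as the EPIRK family) are built from."""
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def exp_euler(A, g, y0, h, steps):
    """Exponential Euler: treats the linear part A exactly, g explicitly.
    One step: y_{n+1} = y_n + h * phi1(h*A) @ (A y_n + g(y_n))."""
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = y + h * phi1(h * A) @ (A @ y + g(y))
    return y

# stiff linear test problem y' = A y (g = 0); a single exponential Euler
# step then reproduces the exact propagator expm(h*A)
A = np.array([[-100.0, 1.0], [0.0, -0.5]])
y0 = np.array([1.0, 1.0])
y1 = exp_euler(A, lambda y: np.zeros(2), y0, h=0.1, steps=1)
```

The methods in the dissertation replace the dense `expm` evaluation with Krylov-subspace approximations and admit inexact Jacobians, which is where the W- and K-method theory comes in.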
6

Large-Scale Simulations Using First and Second Order Adjoints with Applications in Data Assimilation

Zhang, Lin 23 July 2007 (has links)
In large-scale air quality simulations we are interested in the factors that influence changes in pollutants, and in optimization methods that improve forecasts. Both problems can be addressed by incorporating adjoint models, which are efficient at computing the derivatives of a functional with respect to a large number of model parameters. In this research we employ first order adjoints in air quality simulations. Moreover, we explore theoretically the computation of second order adjoints for chemical transport models, and illustrate their feasibility in several respects. We apply first order adjoints to sensitivity analysis and data assimilation. Through sensitivity analysis, we can discover the area that has the largest influence on changes of ozone concentrations at a receptor. For data assimilation with optimization methods that use first order adjoints, we assess their performance under different scenarios. The results indicate that the L-BFGS method is the most efficient. Unlike first order adjoints, second order adjoints have not previously been used in air quality simulation. To explore their utility, we show the construction of second order adjoints for chemical transport models and demonstrate several applications, including sensitivity analysis, optimization, uncertainty quantification, and Hessian singular vectors. Since second order adjoints provide second order information in the form of Hessian-vector products rather than the entire Hessian matrix, applications requiring second order derivatives become possible for large-scale models. Finally, we conclude that second order adjoints for chemical transport models are computationally feasible and effective. / Master of Science
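The combination of a first order adjoint and L-BFGS described above can be sketched on a toy linear model: the gradient of the observation misfit is assembled by running the transposed (adjoint) model backwards in time and handed to an L-BFGS minimizer. The model, observations, and dimensions here are invented for illustration and are far simpler than a chemical transport model.

```python
import numpy as np
from scipy.optimize import minimize

# toy linear forecast model: one time step multiplies the state by M
M = np.array([[0.99, 0.02], [-0.02, 0.99]])
x_true = np.array([1.0, -0.5])
obs = [np.linalg.matrix_power(M, k) @ x_true for k in (1, 2, 3)]  # noiseless
x_b = np.array([0.0, 0.0])  # background (first guess)

def cost_and_grad(x0):
    """4D-Var-like cost: background term plus misfit to observations in time.

    The gradient of the misfit term is assembled by the adjoint model,
    which for a linear model is simply M.T applied in reverse time order."""
    J = 0.5 * np.sum((x0 - x_b) ** 2)
    x, misfits = x0.copy(), []
    for y in obs:
        x = M @ x
        misfits.append(x - y)
        J += 0.5 * np.sum((x - y) ** 2)
    lam = np.zeros_like(x0)          # adjoint sweep, backwards in time
    for d in reversed(misfits):
        lam = M.T @ (lam + d)
    return J, (x0 - x_b) + lam

res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
```

One forward run plus one backward (adjoint) run yields the full gradient regardless of the state dimension, which is exactly why adjoints make variational assimilation tractable at large scale.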
7

Discrete Adjoints: Theoretical Analysis, Efficient Computation, and Applications

Walther, Andrea 23 June 2008 (has links) (PDF)
The technique of automatic differentiation provides directional derivatives and discrete adjoints with working accuracy. A complete complexity analysis of the basic modes of automatic differentiation is available. Research activities are therefore now focused on different aspects of the derivative calculation, for example efficient implementation by exploiting structural information, studies of the theoretical properties of the provided derivatives in the context of optimization problems, and the development and analysis of new mathematical algorithms based on discrete adjoint information. Following this motivation, this habilitation presents an analysis of different checkpointing strategies to reduce the memory requirement of the discrete adjoint computation. Additionally, a new algorithm for computing sparse Hessian matrices is presented, including a complexity analysis and a report on practical experiments. Hence, the first two contributions of this thesis are dedicated to the efficient computation of discrete adjoints. The analysis of the theoretical properties of discrete adjoints is another important research topic. The third and fourth contributions of this thesis focus on the relation between discrete adjoint information and continuous adjoint information for optimal control problems. Here, differences resulting from different discretization strategies as well as convergence properties of the discrete adjoints are analyzed comprehensively. In the fifth contribution, checkpointing approaches that are successfully applied for the computation of discrete adjoints are adapted so that they can also be used for the computation of continuous adjoints. The fifth contribution also presents a new proof of optimality for binomial checkpointing, based on new theoretical results. Discrete adjoint information can be applied, for example, to the approximation of dense Jacobian matrices.
The development and analysis of new mathematical algorithms based on these approximate Jacobians is the topic of the sixth contribution. It was possible to show global convergence to first-order critical points for a whole class of trust-region methods. Here, the use of inexact Jacobian matrices allows a considerable reduction of the computational complexity.
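The binomial checkpointing result referenced above can be stated compactly: with s checkpoints and at most t forward sweeps, the longest step chain whose adjoint can be computed is the binomial coefficient C(s+t, s). The recurrence below is a sketch of that counting argument; boundary conventions vary in the literature, and this follows the common normalization β(s, 0) = β(0, t) = 1.

```python
from math import comb

def max_steps(snaps, sweeps):
    """Longest chain of forward steps whose adjoint can be computed with
    `snaps` checkpoints and at most `sweeps` forward passes; Griewank's
    binomial checkpointing result gives beta(s, t) = C(s + t, s)."""
    if snaps == 0 or sweeps == 0:
        return 1   # only the current state can be revisited
    # place one checkpoint and recurse on the two resulting segments
    return max_steps(snaps - 1, sweeps) + max_steps(snaps, sweeps - 1)
```

Because β(s, t) = β(s-1, t) + β(s, t-1) with unit boundary values is exactly Pascal's recurrence, memory grows only logarithmically in the number of steps for a fixed recomputation factor, which is the practical payoff the optimality proof secures.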
8

Discrete Adjoints: Theoretical Analysis, Efficient Computation, and Applications

Walther, Andrea 02 June 2008 (has links)
The technique of automatic differentiation provides directional derivatives and discrete adjoints with working accuracy. A complete complexity analysis of the basic modes of automatic differentiation is available. Research activities are therefore now focused on different aspects of the derivative calculation, for example efficient implementation by exploiting structural information, studies of the theoretical properties of the provided derivatives in the context of optimization problems, and the development and analysis of new mathematical algorithms based on discrete adjoint information. Following this motivation, this habilitation presents an analysis of different checkpointing strategies to reduce the memory requirement of the discrete adjoint computation. Additionally, a new algorithm for computing sparse Hessian matrices is presented, including a complexity analysis and a report on practical experiments. Hence, the first two contributions of this thesis are dedicated to the efficient computation of discrete adjoints. The analysis of the theoretical properties of discrete adjoints is another important research topic. The third and fourth contributions of this thesis focus on the relation between discrete adjoint information and continuous adjoint information for optimal control problems. Here, differences resulting from different discretization strategies as well as convergence properties of the discrete adjoints are analyzed comprehensively. In the fifth contribution, checkpointing approaches that are successfully applied for the computation of discrete adjoints are adapted so that they can also be used for the computation of continuous adjoints. The fifth contribution also presents a new proof of optimality for binomial checkpointing, based on new theoretical results. Discrete adjoint information can be applied, for example, to the approximation of dense Jacobian matrices.
The development and analysis of new mathematical algorithms based on these approximate Jacobians is the topic of the sixth contribution. It was possible to show global convergence to first-order critical points for a whole class of trust-region methods. Here, the use of inexact Jacobian matrices allows a considerable reduction of the computational complexity.
9

SemTAG : une plate-forme pour le calcul sémantique à partir de Grammaires d'Arbres Adjoints

Parmentier, Yannick 06 April 2007 (has links) (PDF)
In this thesis, we propose a software architecture (SemTAG) for performing semantic construction with Tree Adjoining Grammars. More precisely, this architecture provides an environment for building an underspecified semantic representation (Predicate Logic Unplugged (Bos, 1995)) from a grammar and an utterance.

To ease the management of real-size grammars, the SemTAG platform integrates a metagrammar compiler. The role of this compiler is to produce a grammar semi-automatically from a factorized description. This description consists of (a) a hierarchy of tree fragments and (b) combinations of these fragments by means of a control language. Furthermore, each tree produced in this way can be equipped with a syntax/semantics interface in the style of (Gardent and Kallmeyer, 2003).

Semantic construction is performed on the result of syntactic parsing. This parse is provided by a tabular parser automatically generated from the input grammar by means of the DyALog system (De La Clergerie, 2005). The parser produces a derivation forest, which encodes all derivations and from which the unifications of semantic indices are extracted.

The platform has been evaluated in terms of semantic coverage on the TSNLP test suite.
10

Parallélisation d'algorithmes variationnels d'assimilation de données en météorologie

Tremolet, Yannick 27 November 1995 (has links) (PDF)
In its general form, the data assimilation problem can be stated as: "how can a theoretical model and observations be used together to obtain the best possible meteorological or oceanographic forecast?" Solving it is very costly; for the next generation of models it will require computing power on the order of 10 Tflops. No present-day computer delivers such performance, but it should become possible within a few years, notably thanks to distributed-memory parallel computers. Programming these machines, however, remains complicated, and no general method is known for optimally parallelizing a given algorithm. We address the parallelization of variational data assimilation, which leads us to study the parallelization of fairly general numerical optimization algorithms. To this end, we extend the methodology for writing adjoint models to the case where the direct model is parallel with explicit message passing. We study the possible approaches for parallelizing the data assimilation problem: at the level of the direct and adjoint meteorological models, at the level of the optimization algorithm, or at the level of the problem itself. This leads us to transform a sequential unconstrained optimization problem into a set of relatively independent optimization problems that can be solved in parallel. We study several variants of these three general approaches and their usefulness for data assimilation. We conclude by applying the preceding parallelization methods to a shallow-water model and comparing their performance. We also present a parallelization of the ARPS (Advanced Regional Prediction System) meteorological model.
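A standard correctness check when adjoint models are written by hand, as this thesis does for parallel direct models, is the dot-product test: for any perturbation dx and adjoint variable λ, the tangent linear model T and adjoint model A must satisfy ⟨T dx, λ⟩ = ⟨dx, A λ⟩. A self-contained sketch on an invented two-variable step (the model and all names are illustrative, not from the thesis):

```python
import numpy as np

def step(x):
    """One nonlinear forward step of an invented toy model."""
    return np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])

def tangent(x, dx):
    """Tangent linear model: the Jacobian of `step` at x applied to dx."""
    return np.array([dx[0] + 0.1 * dx[1],
                     dx[1] - 0.1 * np.cos(x[0]) * dx[0]])

def adjoint(x, lam):
    """Adjoint model: the transposed Jacobian of `step` applied to lam."""
    return np.array([lam[0] - 0.1 * np.cos(x[0]) * lam[1],
                     0.1 * lam[0] + lam[1]])

# dot-product test: <tangent(x, dx), lam> must equal <dx, adjoint(x, lam)>
rng = np.random.default_rng(0)
x, dx, lam = rng.standard_normal((3, 2))
lhs = tangent(x, dx) @ lam
rhs = dx @ adjoint(x, lam)
```

In a message-passing setting the same identity must hold globally, which forces each communication in the direct model (e.g. a send) to have a transposed counterpart (a receive) in the adjoint; the test above, applied across process boundaries, is how such adjoints are validated.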
