31

Outils numériques pour la conception de mécanismes / Numerical tools for mechanism design

Hentz, Gauthier 18 September 2017
In the medical and surgical context, robotics can be of great interest for safer and more accurate procedures. Size constraints are, however, severe, and complex mobilities may be required. To date, the design of dedicated non-conventional mechanisms is a difficult task for lack of generic tools allowing a fast evaluation of their performance. This thesis combines higher-order continuation and automatic differentiation to address this issue, introducing a generic modelling method and a generic formalism for mechanism design. Our contributions concern in particular the development of numerical tools for evaluating a mechanism's workspace and the location and nature of its singularities, and for higher-order sensitivity analysis. These tools are evaluated on reference mechanisms.
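The thesis's continuation tools are not reproduced here, but the core idea of differentiating a mechanism model to locate singularities can be sketched with forward-mode dual numbers. The sketch below is a minimal illustration assuming a planar 2R arm with made-up link lengths, not any mechanism from the thesis: the Jacobian of the forward kinematics is obtained by seeding one dual direction per joint, and configurations where det J vanishes are flagged as singular.

```python
import math

class Dual:
    """Minimal forward-mode AD value: a + b*eps with eps^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def dsin(x): return Dual(math.sin(x.a), math.cos(x.a) * x.b)
def dcos(x): return Dual(math.cos(x.a), -math.sin(x.a) * x.b)

L1, L2 = 1.0, 0.8  # hypothetical link lengths

def fk(q1, q2):
    """Forward kinematics of a planar 2R arm (stand-in mechanism)."""
    x = L1 * dcos(q1) + L2 * dcos(q1 + q2)
    y = L1 * dsin(q1) + L2 * dsin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """Column i of J comes from seeding the i-th joint with eps."""
    cols = []
    for i in range(2):
        qs = [Dual(q1, 1.0 if i == 0 else 0.0),
              Dual(q2, 1.0 if i == 1 else 0.0)]
        x, y = fk(qs[0], qs[1])
        cols.append((x.b, y.b))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

# Crude singularity scan: flag configurations where det J ~ 0
# (analytically det J = L1*L2*sin(q2), singular at q2 = 0, +-pi).
for k in range(8):
    q2 = -math.pi + k * math.pi / 4
    J = jacobian(0.3, q2)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    print(f"q2 = {q2:+.2f}  det J = {det:+.3f}"
          + ("  <- singular" if abs(det) < 1e-9 else ""))
```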
32

Modélisation directe et inverse d'écoulements géophysiques viscoplastiques par méthodes variationnelles : Application à la glaciologie / Direct and inverse modeling of viscoplastic geophysical flows using variational methods: Application to glaciology

Martin, Nathan 10 July 2013
Several geophysical flows, such as ice flows or lava flows, involve the gravity-driven, low-Reynolds-number movement of a free-surface viscoplastic fluid over a bedrock. Their modelling involves constitutive laws, typically describing the rheological behaviour or the interaction with the bedrock, that rely on empirical parameterizations. Moreover, systematic, high-precision observation of this type of flow is rarely possible; the associated data, mainly remote-sensed surface data, can be sparse, missing, or uncertain. They are also generally indirect: unknown parameters such as the basal slipperiness or the rheology are difficult to measure in situ. This thesis focuses on the direct and inverse modelling of these geophysical flows, ice flows in particular, by variational methods, through the solution of the Stokes problem for power-law fluids.

The solution method for the direct (nonlinear Stokes) problem rests on the principle of minimal dissipation, which leads to a variational four-field saddle-point problem for which we show the existence of solutions. The incompressibility condition and the constitutive law then appear as constraints associated with the minimization problem. The critical points of the corresponding Lagrangian are sought using an augmented-Lagrangian-type algorithm, discretized with three-field triangular finite elements. This algorithm yields substantial savings in both computing time and memory use compared with classical algorithms.

We then address the inverse numerical modelling of these fluids using the adjoint model and its two main associated tools: sensitivity analysis and data assimilation. We first study the rheological modelling through the two principal parameters of the constitutive law, the fluid consistency and the rheological exponent. Sensitivity analyses with respect to these locally defined parameters quantify their relative weights within the flow model in terms of surface velocities, and these quantities are also identified from data. The results are summarized as a methodology towards a "virtual rheometry" that could provide solid support for rheological measurement.

The basal slipperiness, a major parameter in ice dynamics, is investigated with the same approach. The sensitivity analyses demonstrate an ability to see through the "filtered", non-local transmission of basal variability to the surface, opening perspectives on the use of sensitivities to define areas of interest for observation and measurement. This basal slipperiness, an empirical model of a complex multiscale process, is then used for a comparison with an approximate inverse method common in glaciology, the so-called "self-adjoint" method, which neglects the dependency of the viscosity on the velocity, i.e. the nonlinearity. The adjoint model, obtained by automatic differentiation and evaluated by reverse accumulation, allows this approximation to be characterized as a limit case of the complete inverse method. This formalism generalizes the numerical evaluation of the adjoint state into an incomplete adjoint method, adjustable in accuracy and computing time according to the quality of the data and the level of detail sought in the reconstruction.

All of this work is tied to the development of the DassFlow-Ice software for the direct and inverse simulation of free-surface viscoplastic fluids. This prospective two-dimensional software, distributed within the glaciological community, has given rise to the development of a three-dimensional version.
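As a much-reduced sketch of the adjoint machinery described above, the following toy example, entirely invented for illustration (a 1D diffusion problem rather than power-law Stokes, with no regularization), shows why the adjoint approach scales: the gradient of the misfit with respect to all parameters costs one extra linear solve with the transposed operator, instead of one solve per parameter.

```python
import numpy as np
from scipy.optimize import minimize

def assemble(k, h):
    """Tridiagonal stiffness matrix for -(k u')' = f on (0,1), u(0)=u(1)=0.
    k[i] is the diffusivity of cell i (between nodes i and i+1)."""
    n = len(k) - 1                        # number of interior nodes
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (k[i] + k[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -k[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -k[i + 1] / h**2
    return A

n_cells = 50
h = 1.0 / n_cells
f = np.ones(n_cells - 1)
x_mid = (np.arange(n_cells) + 0.5) * h
k_true = 1.0 + 0.5 * np.sin(2 * np.pi * x_mid)
data = np.linalg.solve(assemble(k_true, h), f)    # synthetic observations

def misfit_and_gradient(k):
    """J(k) = 0.5 ||u(k) - data||^2 and its full gradient.
    One forward solve A u = f, one adjoint solve A^T lam = u - data,
    then dJ/dk_m = -lam^T (dA/dk_m) u for every cell m."""
    A = assemble(k, h)
    u = np.linalg.solve(A, f)
    r = u - data
    lam = np.linalg.solve(A.T, r)                 # single adjoint solve
    g = np.empty_like(k)
    for m in range(len(k)):
        dA = assemble(np.eye(len(k))[m], h)       # dA/dk_m (sparse in practice)
        g[m] = -lam @ (dA @ u)
    return 0.5 * r @ r, g

# Without regularization the recovered field only approximates k_true.
res = minimize(misfit_and_gradient, np.ones(n_cells), jac=True,
               method="L-BFGS-B", bounds=[(0.1, 10.0)] * n_cells)
print(f"misfit {res.fun:.2e}, max |k - k_true| = {np.abs(res.x - k_true).max():.3f}")
```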
33

Deep learning exotic derivatives

Geirsson, Gunnlaugur January 2021
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation under a proprietary model used by Svenska Handelsbanken AB. The models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated from day to day using transfer learning. Automatic differentiation approximates the sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%, while the overall error when predicting sensitivity to implied volatility lies within 10%-40%. Nearly identical results are obtained with finite differences and with automatic differentiation in both cases. Automatic differentiation does not successfully capture the sensitivity to the interday change in contract value, though finite differences achieve errors of 8%-25%. Model recalibration by transfer learning proves to converge over 15 times faster, and with up to 14% lower relative error, than training from random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation, that they can be quickly recalibrated by transfer learning, and that the model gradient computed by automatic differentiation is a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
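Handelsbanken's model and the trained networks are proprietary, so the sketch below only illustrates the mechanism the abstract evaluates: once a network approximates a pricer, its input sensitivities follow from backpropagation through the network and can be cross-checked with finite differences. The tiny tanh network, its random weights, and the four "inputs" are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny randomly initialized tanh MLP standing in for a trained pricer:
# price(x) = W2 @ tanh(W1 @ x + b1) + b2, with x the (normalised) inputs.
n_in, n_hidden = 4, 16
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
b1 = rng.normal(0, 0.1, n_hidden)
W2 = rng.normal(0, 0.5, n_hidden)
b2 = 0.0

def price(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def price_and_sensitivities(x):
    """Forward pass plus reverse-mode sweep: dprice/dx for all inputs."""
    z = W1 @ x + b1
    h = np.tanh(z)
    p = W2 @ h + b2
    # reverse sweep: dp/dh = W2, dh/dz = 1 - tanh(z)^2, dz/dx = W1
    grad = (W2 * (1.0 - h**2)) @ W1
    return p, grad

x = rng.normal(0, 1, n_in)       # e.g. spot, vol, rate, time (normalised)
p, grad = price_and_sensitivities(x)

# Cross-check against central finite differences, as done in the thesis.
eps = 1e-6
fd = np.array([(price(x + eps * e) - price(x - eps * e)) / (2 * eps)
               for e in np.eye(n_in)])
print("backprop :", np.round(grad, 6))
print("fin.diff :", np.round(fd, 6))
```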
34

Advanced Concepts for Automatic Differentiation based on Operator Overloading

Kowarz, Andreas 20 March 2008
Using the technique of automatic differentiation (AD), derivative information can be computed efficiently, and with little effort on the user's part, for any function that is given as program source code in a supported programming language. One implementation strategy is based on the concept of operator overloading, which is available in many modern programming languages. Through the overloading of operators, an internal representation of the function, the so-called tape, is generated at runtime; this tape can then be used to compute derivatives. In the thesis, new techniques are introduced that allow a more efficient tape creation and the parallel evaluation of tapes. The advantages of the new techniques are demonstrated by means of runtime analyses for numerical examples.
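A minimal sketch of the taping idea this work builds on, assuming nothing about the thesis's actual implementation: overloaded operators record each elementary operation and its local partial derivatives on a tape during one function evaluation, and a single reverse sweep over the tape then yields the gradient.

```python
import math

tape, values = [], []     # tape entry: (out_slot, [(in_slot, d_out/d_in), ...])

def push(v):
    values.append(v); return len(values) - 1

class AD:
    """Active scalar; overloaded operators record partials on the tape."""
    def __init__(self, slot): self.slot = slot

def lift(x): return x if isinstance(x, AD) else AD(push(float(x)))

def record(v, deps):
    out = AD(push(v)); tape.append((out.slot, deps)); return out

def add(x, y):
    x, y = lift(x), lift(y)
    return record(values[x.slot] + values[y.slot],
                  [(x.slot, 1.0), (y.slot, 1.0)])

def mul(x, y):
    x, y = lift(x), lift(y)
    return record(values[x.slot] * values[y.slot],
                  [(x.slot, values[y.slot]), (y.slot, values[x.slot])])

def sin(x):
    x = lift(x)
    return record(math.sin(values[x.slot]), [(x.slot, math.cos(values[x.slot]))])

AD.__add__ = AD.__radd__ = add
AD.__mul__ = AD.__rmul__ = mul

def gradient(out, inputs):
    """Reverse sweep: interpret the tape once, newest entry first."""
    bar = [0.0] * len(values)
    bar[out.slot] = 1.0
    for slot, deps in reversed(tape):
        for dep, partial in deps:
            bar[dep] += partial * bar[slot]
    return [bar[x.slot] for x in inputs]

# f(a, b) = a*b + sin(a);  df/da = b + cos(a), df/db = a
a, b = AD(push(2.0)), AD(push(3.0))
f = a * b + sin(a)
print(values[f.slot], gradient(f, [a, b]))   # 6+sin 2, [3+cos 2, 2]
```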
35

Program Reversal Schedules for Single- and Multi-processor Machines

Walther, Andrea 10 December 1999
For adjoint calculations, debugging, and similar purposes one may need to reverse the execution of a computer program. The simplest option is to record a complete execution log of the forward calculation and then read it backwards; this requires massive amounts of storage. Alternatively, the execution log can be generated piecewise by restarting the forward calculation repeatedly from suitably placed checkpoints. The goal of this work is to minimize the time and memory requirements of such program reversals, measured in repeated forward evaluations and checkpoints used. The basic structure of the resulting reversal schedules is illustrated, and various strategies are analysed with respect to their temporal and spatial complexity on serial and parallel machines. For serial machines, known optimal compromises between operations count and memory requirement are explained and extended to more general situations, including the one-step and multi-step schemes that arise, for example, in the discretization of ordinary differential equations. For program reversal on multi-processors, the new challenges and demands on an optimal reversal schedule are described: the additional processors serve to parallelize the repeated forward computations so that one processor can perform the backward calculation without interruption. Parallel reversal schedules are presented that are provably optimal with regard to the number of concurrent processes and the total amount of memory required. For both the serial and the parallel schedules it is shown that the length of the reversible program execution grows exponentially with the number of checkpoints and the number of repeated evaluations or processors used.
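The provably optimal schedules of this thesis are more subtle, but the basic trade they optimize can be shown with a naive recursive bisection schedule, sketched below for illustration only: process the states in reverse while holding only O(log n) checkpoints, recomputing forward steps instead of storing a complete log.

```python
def reverse_sweep(step, backward, state, n):
    """Process the states s_{n-1}, ..., s_0 in reverse order while holding
    only O(log n) checkpoints, at the cost of O(n log n) re-executed steps.
    `step` advances one state; `backward` consumes states newest-first
    (e.g. an adjoint update). Simple bisection, not the provably optimal
    binomial schedule developed in the thesis."""
    if n == 0:
        return
    if n == 1:
        backward(state)
        return
    mid, m = state, n // 2
    for _ in range(m):                      # re-run forward to the midpoint
        mid = step(mid)
    reverse_sweep(step, backward, mid, n - m)   # reverse the second half
    reverse_sweep(step, backward, state, m)     # then the first half

# Demo: "reverse" 8 forward steps of a trivial counter.
log = []
reverse_sweep(lambda s: s + 1, log.append, 0, 8)
print(log)      # [7, 6, 5, 4, 3, 2, 1, 0]
```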
36

Thesis - Optimizing Smooth Local Volatility Surfaces with Power Utility Functions

Sällberg, Gustav, Söderbäck, Pontus January 2015
This master's thesis focuses on how local volatility surfaces can be extracted by optimization with respect to smoothness and price error. The pricing is utility-based, developed so as to sit in a risk-neutral pricing setting, and is carried out in a discrete multinomial recombining tree in which the time and price increments can optionally be equidistant. An interpolation algorithm is used if the option to be priced is not matched by the tree discretization. Power utility functions are utilized; the log-utility preference, which coincides with the (Kelly) portfolio that systematically outperforms any other portfolio, is studied in particular. A fine discretization is generally a sought-after property, so a series of derivations for the implementation is carried out to limit the computational burden and thus allow finer discretizations. The thesis is mainly focused on the derivation of the method rather than on finding optimal parameters that generate the local volatility surfaces. The method has shown that smooth surfaces that take market prices into account can be extracted. However, owing to the lack of available interest-rate and dividend data, the pricing error increases symmetrically for longer option maturities. The method nevertheless shows exponential convergence and robustness to the different initial (flat) volatilities used to start the optimization. Given an optimal smooth local volatility surface, an arbitrary payoff function can then be used to price the corresponding option, which could be path-dependent, such as a barrier option; however, only vanilla options are considered in this thesis. Finally, we find that the developed …
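The utility-based multinomial tree and the optimized surface of the thesis are not reproduced here; as a baseline illustration of backward induction on a recombining tree, here is a standard risk-neutral Cox-Ross-Rubinstein pricer for a vanilla European call with constant volatility (all parameter values are arbitrary).

```python
import math

def crr_call(S0, K, r, sigma, T, steps):
    """European call in a recombining Cox-Ross-Rubinstein binomial tree.
    Constant volatility and equidistant time steps; the thesis instead
    optimizes a whole local volatility surface on a multinomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1.0 / u                               # down factor (recombining)
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the steps+1 recombined nodes (j = number of ups)
    vals = [max(S0 * u**j * d**(steps - j) - K, 0.0)
            for j in range(steps + 1)]
    for _ in range(steps):                    # backward induction
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(len(vals) - 1)]
    return vals[0]

print(f"CRR call: {crr_call(100, 100, 0.02, 0.2, 1.0, 500):.4f}")
# converges towards the Black-Scholes value (about 8.92 here)
```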
37

Inexpensive uncertainty analysis for CFD applications

Ghate, Devendra January 2014
The work presented in this thesis aims to provide tools that can be used during the design process to make maximum use of the increasing availability of accurate engine-blade measurement data for high-fidelity fluid-mechanics simulations at a reasonable computational expense. A new method of uncertainty propagation for geometric error is proposed for fluid-mechanics codes using adjoint error correction. The Inexpensive Monte Carlo (IMC) method targets small uncertainties and provides the complete probability distribution of the objective function at a significantly reduced computational cost. A brief survey of existing methods is followed by the formulation of IMC, and an example algebraic model is used to demonstrate the method. The IMC method is extended to fluid-mechanics applications using Principal Component Analysis (PCA) for reduced-order modelling. Implementation details for the IMC method are discussed using an example airfoil code, and the method is finally implemented and validated for HYDRA, an industrial fluid-mechanics code. A consistent methodology has been developed for the automatic generation of the linear and adjoint codes by selective use of the automatic differentiation (AD) technique; it has the advantage of keeping the linear and adjoint codes in sync with changes in the underlying nonlinear fluid-mechanics solver. The use of various consistency checks has been demonstrated to ease the development and maintenance of the linear and adjoint codes. The use of AD has been extended to the calculation of the complete Hessian using a forward-on-forward approach: the complete mathematical formulation for Hessian calculation using the linear and adjoint solutions is outlined for fluid-mechanics solvers, and an efficient implementation is demonstrated using the airfoil code. Finally, a new application of Independent Component Analysis (ICA) is proposed for identifying sources of manufacturing uncertainty; the mathematical formulation is outlined, followed by an example application of ICA to artificially generated uncertainty for the NACA0012 airfoil.
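HYDRA and its AD-generated linear and adjoint codes are not public, but the forward-on-forward idea mentioned above can be sketched with nested dual numbers: a forward-mode derivative is itself differentiated in a second forward direction, giving one Hessian entry per nested evaluation. The toy objective below is invented for the demonstration.

```python
class Dual:
    """Forward-mode scalar a + b*eps; nesting Duals inside Duals yields
    second derivatives ('forward-on-forward')."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    @staticmethod
    def lift(x):
        return x if isinstance(x, Dual) else Dual(x)
    def __add__(self, o):
        o = Dual.lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = Dual.lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def hessian_entry(f, x, i, j):
    """H[i][j] of scalar f at point x via one nested forward evaluation:
    the inner Duals seed direction i, the outer Duals seed direction j."""
    inner = [Dual(v, 1.0 if k == i else 0.0) for k, v in enumerate(x)]
    outer = [Dual(inner[k], Dual(1.0 if k == j else 0.0))
             for k in range(len(x))]
    return f(outer).b.b

# toy 'objective': f(x) = x0*x0*x1 + x1*x1
f = lambda x: x[0] * x[0] * x[1] + x[1] * x[1]
x = [1.5, -2.0]
H = [[hessian_entry(f, x, i, j) for j in range(2)] for i in range(2)]
print(H)   # [[2*x1, 2*x0], [2*x0, 2]] -> [[-4.0, 3.0], [3.0, 2.0]]
```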
38

Sequence-to-sequence learning for machine translation and automatic differentiation for machine learning software tools

van Merriënboer, Bart 10 1900
No description available.
39

New methods for estimation, modeling and validation of dynamical systems using automatic differentiation

Griffith, Daniel Todd 17 February 2005
The main objective of this work is to demonstrate new computational methods for the estimation, optimization, and modeling of dynamical systems that use automatic differentiation, with particular focus on dynamical systems arising in aerospace engineering. Automatic differentiation is a recursive computational algorithm that enables the computation of analytically rigorous partial derivatives of any user-specified function; as the name implies, all associated computations occur in the background, without user intervention. The computational methods of this dissertation are enabled by a new automatic differentiation tool, OCEA (Object oriented Coordinate Embedding Method), which has recently been developed and makes possible the efficient computation and evaluation of partial derivatives with minimal user coding. The key results of the dissertation detail the use of OCEA through a number of computational studies in estimation and dynamical modeling. Several prototype problems are studied in order to evaluate judicious ways to use OCEA, and new solution methods are introduced to ascertain the extended capability of this new computational tool. Computational tradeoffs are studied in detail across a number of applications in estimation, dynamical system modeling, and validation of solution accuracy for complex dynamical systems. The results of these computational studies provide new insights and indicate the future potential of OCEA in its further development.
40

Sédimentation des boues activées en système fermé : de l'investigation expérimentale à l'aide d'un transducteur ultrasonore à la modélisation 1 D, l'analyse de sensibilité et l'identification de paramètres / Activated sludge batch settling : from experimental investigation using an ultrasonic transducer to 1D modelling, sensitivity analysis and parameter identification

Locatelli, Florent 24 September 2015
This work deals with the experimental investigation and modelling of activated sludge settling. An experimental setup combining a settling column and an ultrasonic transducer is proposed. Settling-velocity and particle-concentration profiles are obtained with this setup, allowing for a better understanding of the mechanisms of activated sludge settling. These results are then used to develop a numerical approach: a settling model built from experimental functions. A methodology based on the automatic differentiation of the model is developed and applied, on the one hand, to analyse the sensitivity of the model to the parameters of these functions and, on the other hand, to identify the parameter values from the experimental data. The combination of the proposed experimental and numerical approaches constitutes an efficient process for the development of sedimentation models.
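The thesis's 1D model and its differentiated code are not reproduced here. As a minimal stand-in, the sketch below puts the Vesilind hindered-settling law v(C) = v0·exp(-rv·C), a common empirical function for activated sludge, inside a toy time-stepping loop and differentiates the whole simulation with the complex-step method, a close relative of forward-mode AD. All parameter values and the mass-conservation closure are assumptions made for illustration.

```python
import cmath

def final_height(v0, rv, h0=0.4, c0=3.0, t_end=1.5, n=3000):
    """Toy batch-settling model: sludge-blanket height h with Vesilind
    hindered-settling velocity v(C) = v0 * exp(-rv * C), where the
    concentration under the interface follows the crude mass-conservation
    closure C = c0 * h0 / h. Explicit Euler in time; an illustrative
    stand-in, not the thesis's 1D model."""
    dt = t_end / n
    h = h0 + 0j                       # complex-friendly state
    for _ in range(n):
        c = c0 * h0 / h
        h = h - dt * v0 * cmath.exp(-rv * c)
    return h

v0, rv = 6.0, 0.45   # hypothetical parameter values (m/h, m3/kg)

# Complex-step derivative: d(model)/dp = Im f(p + i*eps) / eps,
# free of the cancellation error that plagues finite differences.
eps = 1e-20
dh_dv0 = final_height(v0 + 1j * eps, rv).imag / eps
dh_drv = final_height(v0, rv + 1j * eps).imag / eps
print(f"h(T)   = {final_height(v0, rv).real:.4f} m")
print(f"dh/dv0 = {dh_dv0:+.4e}")
print(f"dh/drv = {dh_drv:+.4e}")
```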
