1

Deterministic and Stochastic Bellman's Optimality Principles on Isolated Time Domains and Their Applications in Finance

Turhan, Nezihe, 01 May 2011
The term dynamic programming was originally used in late 1949, and mostly during the 1950s, by Richard Bellman to describe decision-making problems. By 1952, he had refined it to its modern meaning, referring specifically to nesting smaller decision problems inside larger decisions. The Bellman equation, one of the basic concepts in dynamic programming, is also named after him. Dynamic programming has become an important tool used in various fields, such as economics, finance, bioinformatics, aerospace, and information theory. Since Richard Bellman's invention of dynamic programming, economists and mathematicians have formulated and solved a huge variety of sequential decision-making problems, both deterministic and stochastic, over finite or infinite time horizons. This thesis comprises five chapters whose major objective is to study both deterministic and stochastic dynamic programming models in finance. In the first chapter, we give a brief history of dynamic programming and introduce the essentials of the theory. Unlike economists, who have analyzed dynamic programming on discrete (that is, periodic) and continuous time domains, we claim that trading is not a reasonably periodic or continuous act; therefore, it is more accurate to study dynamic programming on non-periodic time domains. In the second chapter we introduce time scales calculus. Moreover, since it is more realistic to analyze a decision maker's behavior without risk aversion, we also give the basics of stochastic calculus in this chapter. After introducing the necessary background, in the third chapter we construct the deterministic dynamic sequence problem on isolated time scales and derive the corresponding Bellman equation. We analyze the relation between solutions of the sequence problem and solutions of the Bellman equation through the principle of optimality. We give an example of the deterministic model in finance, with all details of the calculations, using the guessing method, and we prove existence and uniqueness of the solution using the Contraction Mapping Theorem. In the fourth chapter, we define the stochastic dynamic sequence problem on isolated time scales and derive the corresponding stochastic Bellman equation. As in the deterministic case, we give an example in finance, together with the distributions of the solutions.
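For orientation, the display below is a minimal sketch of the standard infinite-horizon Bellman equation and its associated sequence problem in generic textbook discrete-time form; the notation (value function V, return function u, feasibility correspondence Γ, discount factor β) is not taken from the thesis, and the isolated-time-scale formulation studied there is not reproduced here.

% Standard textbook Bellman equation (generic discrete-time form; the
% isolated-time-scale version derived in the thesis is not shown here):
\[
  V(x) \;=\; \sup_{y \in \Gamma(x)} \bigl\{ u(x, y) + \beta \, V(y) \bigr\},
  \qquad 0 < \beta < 1,
\]
% where x is the current state, \Gamma(x) is the set of feasible next
% states, u(x, y) is the one-period return, and \beta is the discount
% factor. The principle of optimality links V to the value of the
% corresponding infinite-horizon sequence problem:
\[
  V(x_0) \;=\; \sup_{\{x_{t+1} \in \Gamma(x_t)\}_{t \ge 0}}
  \sum_{t=0}^{\infty} \beta^{t} \, u(x_t, x_{t+1}).
\]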
2

Contributions au calcul des variations et au principe du maximum de Pontryagin en calculs time scale et fractionnaire / Contributions to calculus of variations and to Pontryagin maximum principle in time scale calculus and fractional calculus

Bourdin, Loïc, 18 June 2013
This dissertation deals with the mathematical fields called calculus of variations and optimal control theory. More precisely, we develop some aspects of these two domains in the discrete, more generally time scale, and fractional frameworks. These two settings have recently experienced significant development, owing, for the first, to its applications in computer science and, for the second, to its emergence in physical problems of anomalous diffusion. In both frameworks, our goals are: a) to develop a calculus of variations and extend some classical results (see below); b) to state a Pontryagin maximum principle (PMP for short) for optimal control problems. Towards these purposes, we generalize several classical variational methods, ranging from the simple calculus of variations to Ekeland's variational principle (combined with needle-like variations), by way of variational invariances under the action of groups of transformations. Furthermore, the investigation of PMPs leads us to use fixed point theorems and to consider the Lagrange multiplier technique as well as a method based on a conic implicit function theorem. This manuscript is made up of two parts: Part A deals with variational problems on time scales and Part B is devoted to their fractional analogues. In each of these parts, we follow (with minor differences) the same organization: 1. derivation of an Euler-Lagrange equation characterizing the critical points of a Lagrangian functional; 2. statement of a Noether-type theorem ensuring the existence of a constant of motion for Euler-Lagrange equations admitting a symmetry; 3. statement of a Tonelli-type theorem ensuring the existence of a minimizer for a Lagrangian functional and, consequently, of a solution for the corresponding Euler-Lagrange equation (only in Part B); 4. statement of a PMP (strong version in Part A, weak version in Part B) giving a necessary condition on the trajectories that solve general nonlinear optimal control problems; 5. derivation of a Helmholtz-type condition characterizing the equations that derive from a calculus of variations (only in Part A, and only in the purely continuous and purely discrete cases). Some Picard-Lindelöf (Cauchy-Lipschitz) type theorems needed for the analysis of optimal control problems are proved in the appendices.
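For reference, here is a minimal sketch of the classical, purely continuous Euler-Lagrange equation that items 1 and 5 above generalize; the notation (Lagrangian L, trajectory q on [a, b]) is standard textbook usage, and the time scale and fractional analogues derived in the manuscript, which replace the ordinary derivative with delta or fractional derivatives, are not reproduced here.

% Classical continuous-time setting (generic textbook form, not the
% time scale or fractional versions developed in the manuscript):
% critical points q of the Lagrangian functional
\[
  \mathcal{L}(q) \;=\; \int_{a}^{b} L\bigl(t, q(t), \dot q(t)\bigr)\, \mathrm{d}t
\]
% satisfy the Euler-Lagrange equation
\[
  \frac{\partial L}{\partial q}\bigl(t, q(t), \dot q(t)\bigr)
  \;-\;
  \frac{\mathrm{d}}{\mathrm{d}t}\,
  \frac{\partial L}{\partial \dot q}\bigl(t, q(t), \dot q(t)\bigr)
  \;=\; 0
  \qquad \text{for all } t \in [a, b].
\]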
