  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Multi-period mean-variance optimal control of Markov jump linear systems with multiplicative noise.

Okimura, Rodrigo Takashi 06 April 2009 (has links)
This thesis focuses on the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under three kinds of performance criteria related to the final value of the expectation and variance of the output. In the first problem the goal is to minimize the final variance of the output subject to a restriction on its final expectation; in the second, to maximize the final expectation of the output subject to a restriction on its final variance; and in the third, the performance criterion is a linear combination of the final variance and expectation of the output of the system.
The optimal control strategies are obtained from a set of interconnected Riccati difference equations, and explicit sufficient conditions are presented for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. Numerical simulations of investment portfolios and asset-liability management models for pension funds with regime switching are presented.
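The interconnected Riccati recursion at the core of this class of results can be sketched numerically. The following is a minimal illustration for a Markov jump linear system with multiplicative state noise under a standard quadratic cost — a stand-in for the mean-variance criterion, with all matrices hypothetical, not the thesis's exact formulation:

```python
import numpy as np

def coupled_riccati(A, Abar, B, Q, R, Pr, T):
    """Backward recursion of coupled Riccati difference equations for
        x_{k+1} = (A[i] + w_k * Abar[i]) x_k + B[i] u_k,   i = Markov mode,
    under a quadratic cost (illustrative stand-in for the mean-variance
    criterion of the thesis; all matrices are hypothetical examples)."""
    N = len(A)                                   # number of Markov modes
    P = [[Q[i].copy() for i in range(N)]]        # terminal condition P_T = Q
    gains = []
    for _ in range(T):
        Pnext = P[0]
        Pk, Kk = [], []
        for i in range(N):
            # E_i(P) = sum_j Pr[i, j] * P_j : conditional expectation operator
            EP = sum(Pr[i, j] * Pnext[j] for j in range(N))
            S = R[i] + B[i].T @ EP @ B[i]
            K = np.linalg.solve(S, B[i].T @ EP @ A[i])   # feedback gain, mode i
            Pnew = (Q[i] + A[i].T @ EP @ A[i] + Abar[i].T @ EP @ Abar[i]
                    - A[i].T @ EP @ B[i] @ K)
            Pk.append(0.5 * (Pnew + Pnew.T))             # enforce symmetry
            Kk.append(K)
        P.insert(0, Pk)
        gains.insert(0, Kk)
    return P, gains                              # u_k = -gains[k][mode] @ x_k
```

The conditional-expectation operator E_i(·) couples the N recursions through the transition matrix Pr, which is what distinguishes this from N independent Riccati equations.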
12

Design and Analysis of Stochastic Dynamical Systems with Fokker-Planck Equation

Kumar, Mrinal 2009 December 1900 (has links)
This dissertation addresses design and analysis aspects of stochastic dynamical systems using the Fokker-Planck equation (FPE). A new numerical methodology based on the partition-of-unity meshless paradigm is developed to tackle the greatest hurdle in successful numerical solution of the FPE, namely the curse of dimensionality. A local variational form of the Fokker-Planck operator is developed with provision for h- and p-refinement. The resulting high-dimensional weak-form integrals are evaluated using quasi-Monte Carlo techniques. Spectral analysis of the discretized Fokker-Planck operator, followed by spurious-mode rejection, is employed to construct a new semi-analytical algorithm that obtains near real-time approximations of the transient FPE response of high-dimensional nonlinear dynamical systems in terms of a reduced subset of admissible modes. Numerical evidence is provided showing that the curse of dimensionality associated with the FPE is broken by the proposed technique, while providing problem-size reduction of several orders of magnitude. In addition, a simple modification of the norm in the variational formulation is shown to improve the quality of approximation significantly while keeping the problem size fixed. Norm modification is also employed as part of a recursive methodology for tracking the optimal finite domain on which to solve the FPE numerically. The basic tools developed to solve the FPE are applied to problems in nonlinear stochastic optimal control and nonlinear filtering. A policy-iteration algorithm for stochastic dynamical systems is implemented in which successive approximations of a forced backward Kolmogorov equation (BKE) are shown to converge to the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation. Several examples, including a four-state missile autopilot design for pitch control, are considered.
Application of the FPE solver to nonlinear filtering is considered, with special emphasis on situations involving long durations of propagation between measurement updates; the measurement update is implemented as a weak form of the Bayes rule. A nonlinear filter is formulated that provides complete probabilistic state information conditioned on measurements. Examples with long propagation times are considered to demonstrate the benefits of the FPE-based approach to filtering.
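The stationary FPE structure the solver targets can be verified in one dimension, where the answer is known in closed form. For dx = -U'(x) dt + sigma dW, the stationary density is p(x) ∝ exp(-2U(x)/sigma²); the sketch below (a finite-difference check, not the dissertation's meshless method, with an illustrative double-well potential) confirms that this density annihilates the discretized Fokker-Planck operator:

```python
import numpy as np

sigma = 1.0
U  = lambda x: 0.25 * x**4 - 0.5 * x**2      # double-well potential (illustrative)
dU = lambda x: x**3 - x

x  = np.linspace(-4.0, 4.0, 2001)
dx = x[1] - x[0]
p  = np.exp(-2.0 * U(x) / sigma**2)
p /= p.sum() * dx                            # normalize to a density

# Stationary FP operator applied to p:
#   d/dx [U'(x) p] + (sigma^2 / 2) d^2 p / dx^2  should vanish
residual = (np.gradient(dU(x) * p, dx)
            + 0.5 * sigma**2 * np.gradient(np.gradient(p, dx), dx))
max_residual = np.max(np.abs(residual[100:-100]))   # interior points only
```

In higher dimensions no such closed form is generally available, which is exactly why the curse-of-dimensionality machinery of the dissertation is needed.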
14

Optimal control of Markov jump systems with multiplicative noise under indefinite quadratic and linear costs.

Wanderlei Lima de Paulo 01 November 2007 (has links)
This thesis considers the finite-horizon and infinite-horizon stochastic optimal control problem for discrete-time Markov jump linear systems with multiplicative noise. The performance criterion is formed by a linear combination of a quadratic part and a linear part in the state and control variables; the weighting matrices of the state and control in the quadratic part are allowed to be indefinite.
For the finite-horizon problem the main result is a necessary and sufficient condition under which the problem is well posed, from which an optimal control law is derived. This condition and the optimal control law are written in terms of a set of coupled generalized Riccati difference equations interconnected with a set of coupled linear recursive equations. For the infinite-horizon problem a set of generalized coupled algebraic Riccati equations (GCARE) is studied. In this case, a sufficient condition for the existence of the maximal solution and a necessary and sufficient condition for the existence of the mean-square stabilizing solution of the GCARE are presented; the existence of the stabilizing solution is a sufficient condition for the discounted and long-run average cost problems to be well posed, and solutions to both are presented. The results are applied to solve a portfolio optimization problem with benchmark and a defined-benefit pension fund asset-liability management problem, both with regime switching in the market variables.
15

Study of two stochastic control problems: American put with discrete dividends and dynamic programming principle with expectation constraints

Jeunesse, Maxence 29 January 2013 (has links)
In this thesis, we address two problems of stochastic optimal control, each constituting a different part of this document. The first problem is quite specific: the valuation of American contingent claims, and more specifically the American put, in the presence of discrete dividends (Part I). The second is more general: a proof of the existence of a dynamic programming principle under expectation constraints in a discrete-time framework (Part II). Although the two problems are quite distinct, the dynamic programming principle is at the heart of both. The relationship between the value of an American put and a free-boundary problem was proved by McKean. The boundary of this problem has a clear economic meaning, since it corresponds at every instant to the upper bound of the set of asset prices for which it is preferable to exercise the right to sell immediately. To the best of our knowledge, the shape of this boundary in the presence of discrete dividends had not been determined.
Under the assumption that the dividend is a deterministic function of the asset price at the instant just before its payment, we investigate how the boundary is modified. In the neighborhood of dividend dates, and in the model of Chapter 3, we characterize the monotonicity of the boundary and, in certain cases, quantify its local behavior. In Chapter 3, we show that the smooth-fit property is satisfied at every date except dividend-payment dates. In Chapters 3 and 4, we give conditions guaranteeing the continuity of the boundary outside dividend dates. Part II was originally motivated by the optimal management of the production of a hydroelectric power plant with a probability constraint on the reservoir level at certain dates. Using Balder's work on Young relaxation of optimal control problems, we focus on their resolution by dynamic programming. In Chapter 5, we extend results of Evstigneev to the framework of Young measures and establish that certain problems with conditional-expectation constraints can be solved by dynamic programming. Building on the work of Bouchard, Elie, Soner and Touzi on stochastic target problems with controlled loss, we show in Chapter 6 that a problem with an expectation constraint can be reduced to a problem with conditional-expectation constraints. As a special case, we thus prove that the initial dam-management problem can be solved by dynamic programming.
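The free-boundary/early-exercise structure behind the American put is easiest to see in a discrete dynamic-programming sketch. The following prices an American put by backward induction on a Cox-Ross-Rubinstein binomial tree, without dividends — an illustration of the Bellman "exercise vs. continue" comparison, not the thesis's continuous-time analysis with discrete dividends:

```python
import math

def american_put_crr(S0, K, r, sigma, T, steps):
    """American put price by backward dynamic programming on a CRR tree.
    At each node the holder compares immediate exercise with the
    discounted continuation value (the Bellman step); the set of nodes
    where exercise wins sits below the free boundary discussed above."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)     # Bellman: exercise vs continue
    return values[0]
```

A discrete dividend would modify this recursion by shifting the asset price across the dividend date, which is precisely what deforms the exercise boundary studied in Chapters 3 and 4.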
16

Optimal control of linear systems with Markov jumps and multiplicative noises under a multiperiod mean-variance criterion.

Oliveira, Alexandre de 16 November 2011 (has links)
In this work we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under two criteria. First, we consider an unconstrained multiperiod mean-variance trade-off performance criterion. Next, we consider a multiperiod minimum-variance criterion subject to constraints on the minimum expected output along the time. We present explicit necessary and sufficient conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature.
The optimal control law is written as a state feedback added to a deterministic sequence. This solution is derived from a set of coupled generalized Riccati difference equations interconnected with a set of coupled linear recursive equations. As an application, we present practical numerical examples of a multiperiod portfolio selection problem with regime switching, including an Asset and Liability Management strategy. In this problem the goal is to find the best portfolio allocation that optimizes the risk-return performance at every time step along the investment horizon, under one of the two criteria stated above.
17

Optimal Bounded Control and Relevant Response Analysis for Random Vibrations

Iourtchenko, Daniil V 25 May 2001 (has links)
In this dissertation, certain problems of stochastic optimal control and the relevant analysis of random vibrations are considered. A dynamic programming approach is used to find an optimal control law for a linear single-degree-of-freedom system subjected to Gaussian white-noise excitation. To minimize the system's mean response energy, a control force bounded in magnitude is applied. This approach reduces the problem of finding the optimal control law to that of solving the Hamilton-Jacobi-Bellman (HJB) partial differential equation. A solution to this partial differential equation (PDE) is obtained by a newly developed 'hybrid' solution method. The application of a control law bounded in magnitude always introduces a certain type of nonlinearity into the system's stochastic equation of motion. Such systems may be analyzed by the Energy Balance method, which is introduced and developed in this dissertation. A comparison of analytical results obtained by the Energy Balance method and by the stochastic averaging method with numerical results is provided. The comparison indicates that the Energy Balance method is more accurate than the well-known stochastic averaging method.
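The effect of a magnitude-bounded control on mean response energy can be checked by direct simulation. The sketch below applies a bang-bang "dry friction" law u = -R sign(x'), which is the well-known shape of the bounded-optimal control away from the switching region; the Euler-Maruyama simulation is an illustration, not the dissertation's hybrid HJB solution, and all parameter values are arbitrary:

```python
import numpy as np

def mean_energy(R, T=100.0, dt=1e-3, D=1.0, seed=0):
    """Time-averaged response energy of the controlled oscillator
        x'' + x = u + xi(t),  |u| <= R,  xi = white noise of intensity 2D,
    under the magnitude-bounded dry-friction law u = -R * sign(x'),
    simulated by Euler-Maruyama."""
    steps = int(T / dt)
    dW = np.random.default_rng(seed).normal(0.0, np.sqrt(dt), steps)
    x = v = 0.0
    energy_sum = 0.0
    for k in range(steps):
        u = -R * np.sign(v)                       # bounded bang-bang control
        x, v = x + v * dt, v + (-x + u) * dt + np.sqrt(2.0 * D) * dW[k]
        energy_sum += 0.5 * (x * x + v * v)
    return energy_sum / steps
```

With R = 0 the undamped oscillator accumulates energy linearly in time, while any R > 0 dissipates it, so comparing `mean_energy(1.0)` with `mean_energy(0.0)` exhibits the benefit of the bounded control.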
18

Essays in mathematical finance

Murgoci, Agatha January 2009 (has links)
Diss. Stockholm : Handelshögskolan, 2009
20

Continuous Time Linear Quadratic Optimal Control

Vostal, Ondřej January 2017 (has links)
We partially solve the adaptive ergodic stochastic optimal control problem where the driving process is a fractional Brownian motion with Hurst parameter H > 1/2. A formula is provided for an optimal feedback control, given that a strongly consistent estimator of the parameters of the controlled system is available. Some special conditions are imposed on the estimator, which means the results are not completely general. They apply, for example, in the case where the estimator is independent of the driving fractional Brownian motion. In the course of the thesis, the construction of stochastic integrals of suitable deterministic functions with respect to fractional Brownian motion with Hurst parameter H > 1/2 over the unbounded positive real half-line is presented as well.
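The driving process of the thesis can be sampled exactly on a grid from its covariance function E[B_s B_t] = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2. The sketch below uses a Cholesky factorization — a generic simulation device, unrelated to the thesis's integral construction, and O(n³), so suitable only for short paths:

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Sample fractional Brownian motion on (0, T] at n grid points by
    Cholesky factorization of the exact covariance matrix
        E[B_s B_t] = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H}).
    Works for any H in (0, 1); the thesis's setting is H > 1/2."""
    t = np.linspace(T / n, T, n)                 # start past 0 (B_0 = 0)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    z = np.random.default_rng(seed).normal(size=n)
    return t, L @ z
```

A quick sanity check is the self-similarity of the marginal: Var(B_T) = T^{2H}, so the empirical variance of the endpoint over many sampled paths should be close to 1 when T = 1.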
