  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Pricing And Hedging Of Constant Proportion Debt Obligations

Iscanoglu Cekic, Aysegul 01 February 2011 (has links) (PDF)
A Constant Proportion Debt Obligation is a credit derivative introduced to generate a surplus return over the riskless market return. The surplus payments are obtained by synthetically investing in a risky asset (such as a credit index) under a linear leverage strategy that is capped to bound the risk. In this thesis, we investigate constant proportion debt obligations from two directions. First, using optimal control methods, we search for an optimal leverage strategy which minimises the mean-square distance between the final payment and the final wealth of the constant proportion debt obligation. We show that, for geometric-type diffusion processes, the optimal leverage function in the mean-square sense coincides with the one used in practice. However, the optimal strategy leads to a shortfall in some cases. The second contribution of this thesis is a pricing formula for constant proportion debt obligations. To derive it, we account for both the early-default and the final-payoff-default features of constant proportion debt obligations, and observe that a constant proportion debt obligation can be modelled as a barrier option with rebate. Given the theory of barrier options, the pricing equation is then derived for a particular leverage strategy.
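The capped linear leverage rule described in the abstract can be sketched as follows. This is a minimal illustration of the general mechanism (leverage proportional to the shortfall between the promised payoff and current net asset value, bounded above); the function name and the gearing and cap values are assumptions for illustration, not parameters from the thesis.

```python
def cpdo_leverage(nav, target_shortfall, gearing=25.0, cap=15.0):
    """Capped linear leverage: proportional to the shortfall between
    the target payoff value and the current NAV, bounded above by `cap`.
    Illustrative sketch only; `gearing` and `cap` are assumed values."""
    if nav <= 0:
        return 0.0  # structure has cashed out; no further risky exposure
    raw = gearing * target_shortfall / nav
    return max(0.0, min(raw, cap))
```

For example, with a NAV of 100 and a shortfall of 10 the rule gives a leverage of 2.5, while a badly depleted NAV drives the raw leverage against the cap, which is exactly the bound the abstract refers to.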
72

A simulation study for Bayesian hierarchical model selection methods

Fang, Fang January 2009 (has links) (PDF)
Thesis (M.S.)--University of North Carolina Wilmington, 2009. / Title from PDF title page (February 16, 2010). Includes bibliographical references (p. 30)
73

Model search strategy when P >> N in Bayesian hierarchical setting

Fang, Qijun January 2009 (has links) (PDF)
Thesis (M.S.)--University of North Carolina Wilmington, 2009. / Title from PDF title page (February 16, 2010). Includes bibliographical references (p. 34-35)
74

Decoherence υπό την επίδραση εξωτερικού θορύβου / Decoherence under the influence of external noise

Τζέμος, Αθανάσιος 29 September 2010 (has links)
Στην παρούσα εργασία μελετάται το φαινόμενο της Κβαντικής Αποσυνοχής (Decoherence) υπό την επίδραση εξωτερικού θορύβου. Στο πρώτο μέρος της εργασίας γίνεται ανασκόπηση των βασικών εννοιών της Κβαντικής Πληροφορικής, της Θεωρίας Αποσυνοχής και του Στοχαστικού Λογισμού. Στο δεύτερο-ερευνητικό μέρος μελετάται η Decoherence δύο qubits, τα οποία ανήκουν στην Αλυσίδα XY του Heisenberg. Τα qubits αυτά μελετώνται τόσο εντός ομογενούς μαγνητικού πεδίου όσο και εντός μαγνητικού πεδίου στοχαστικού χαρακτήρα. Επίσης, παρουσιάζονται και κάποια αριθμητικά αποτελέσματα στην περίπτωση που έχουμε θόρυβο στο μαγνητικό πεδίο και στη σταθερά συζεύξεως Jx. Στις δύο τελευταίες περιπτώσεις (όπου έχουμε θόρυβο) ευρίσκουμε Decoherence Free Subspaces. Τέλος, προτείνεται τρόπος ανάκτησης της Entanglement στην περίπτωση μικτής αρχικής κατάστασης του συστήματος των qubits. / In this Master's thesis, the phenomenon of quantum decoherence under the influence of external noise is examined. The first part reviews the basic elements of quantum information theory, decoherence theory, and stochastic calculus. The second part studies the decoherence of two qubits belonging to the XY Heisenberg chain. In the first case the qubits are placed in a constant magnetic field, while in the second the magnetic field has stochastic behaviour. Furthermore, some numerical results are presented for the case of noise in both the magnetic field and the coupling constant Jx. In the last two cases (where noise is present), we find decoherence-free subspaces. Finally, we propose a way to recover the entanglement of the two qubits when they are prepared in a mixed initial state.
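The two-qubit XY chain in a uniform field can be sketched numerically as follows. This is a minimal NumPy illustration of the setting only: the Hamiltonian form (XY couplings plus a uniform z-field) is the standard one, but the coupling and field values are arbitrary and are not the thesis' parameters, and no noise term is modelled here.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def xy_hamiltonian(jx, jy, b):
    """Two-qubit XY Hamiltonian in a uniform magnetic field of strength b."""
    return (jx * np.kron(sx, sx) + jy * np.kron(sy, sy)
            + b * (np.kron(sz, I2) + np.kron(I2, sz)))

H = xy_hamiltonian(jx=1.0, jy=0.5, b=0.3)  # illustrative values

# Unitary evolution over time t via the spectral decomposition of H
t = 1.0
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # Bell state
psi_t = U @ psi0  # state after noiseless evolution
```

A stochastic magnetic field, as in the thesis, would replace the fixed `b` with a noise process and require averaging over realizations; this sketch only sets up the deterministic part of the model.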
75

Topics in Online Markov Decision Processes

Guan, Peng January 2015 (has links)
<p>This dissertation studies sequential decision-making problems in non-stationary environments. Online learning algorithms handle non-stationary environments, but generally have no notion of a dynamic state to model the future impact of past actions. State-based models are common in stochastic control settings, but well-known frameworks such as Markov decision processes (MDPs) assume a known stationary environment. In recent years, there has been growing interest in fusing these two learning frameworks by considering an MDP setting in which the cost function is allowed to change arbitrarily over time. A number of online MDP algorithms have been designed to work under various assumptions about the state-transition dynamics and to provide performance guarantees, i.e., bounds on the regret, defined as the performance gap between the total cost incurred by the learner and the total cost of the best stationary policy that could have been chosen in hindsight. </p><p>However, most of the work in this area has been algorithmic: given a problem, one would develop an algorithm almost from scratch and prove the performance guarantees on a case-by-case basis. Moreover, the presence of the state and the assumption of an arbitrarily varying environment complicate both the theoretical analysis and the development of computationally efficient methods. Another potential issue is that, by removing distributional assumptions about the mechanism generating the cost sequences, the existing methods must consider the worst-case scenario, which may render their solutions too conservative when the environment exhibits some degree of predictability. </p><p>This dissertation contributes several novel techniques to address these challenges of the online MDP framework and opens up new research directions for online MDPs. 
</p><p>Our proposed general framework for deriving algorithms in the online MDP setting leads to a unifying view of existing methods and provides a general procedure for constructing new ones. Several new algorithms are developed and analyzed using this framework. We develop convex-analytical algorithms that take advantage of possible regularity of observed sequences, yet maintain the worst case performance guarantees. To further study the convex-analytic methods we applied above, we take a step back to consider the traditional MDP problem and extend the LP approach to MDPs by adding a relative entropy regularization term. A computationally efficient algorithm for this class of MDPs is constructed under mild assumptions on the state transition models. Two-player zero-sum stochastic games are also investigated in this dissertation as an important extension of the online MDP setting. In short, this dissertation provides in-depth analysis of the online MDP problem and answers several important questions in this field.</p> / Dissertation
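The regret notion used in the abstract can be made concrete with a toy, stateless sketch (a hypothetical example with made-up cost sequences, ignoring state dynamics): the learner's total cost is compared against the best fixed policy chosen in hindsight.

```python
def regret(learner_costs, policy_cost_sequences):
    """Regret: total cost incurred by the learner minus the total cost
    of the best fixed (stationary) policy chosen in hindsight."""
    best_fixed = min(sum(seq) for seq in policy_cost_sequences)
    return sum(learner_costs) - best_fixed

# Toy 4-round horizon with two candidate stationary policies
costs_a = [1.0, 0.0, 1.0, 0.0]   # per-round costs of fixed policy A
costs_b = [0.5, 0.5, 0.5, 0.5]   # per-round costs of fixed policy B
learner = [1.0, 0.5, 0.5, 0.0]   # costs the learner actually incurred

print(regret(learner, [costs_a, costs_b]))  # 0.0: matched the best fixed policy
```

In the genuine online MDP setting the comparison is against the best stationary policy's cost under the induced state distribution, which is what makes the analysis considerably harder than this stateless illustration suggests.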
76

Controle de sistemas não-Markovianos / Control of non-Markovian systems

Francys Andrews de Souza 13 September 2017 (has links)
Nesta tese, apresentamos uma metodologia concreta para calcular os controles ε-ótimos para sistemas estocásticos não-Markovianos. A análise trajetória a trajetória e o uso da estrutura de discretização proposta por Leão e Ohashi [36] conjuntamente com argumentos de seleção mensuráveis, nos forneceu uma estrutura para transformar um problema infinito dimensional para um finito dimensional. Desta forma, garantimos uma descrição concreta para uma classe bastante geral de problemas. / In this thesis, we present a concrete methodology to calculate ε-optimal controls for non-Markovian stochastic systems. A pathwise analysis and the use of the discretization structure proposed by Leão and Ohashi [36], jointly with measurable selection arguments, provide a structure to transform an infinite-dimensional problem into a finite-dimensional one. In this way, we guarantee a concrete description for a rather general class of stochastic problems.
77

Seleção dinâmica de portfólios em média-variância com saltos Markovianos. / Dynamic mean-variance portfolio selection with Markov regime switching.

Michael Viriato Araujo 19 October 2007 (has links)
Investiga-se, em tempo discreto, o problema multi-período de otimização de carteiras generalizado em média-variância cujos coeficientes de mercado são modulados por uma cadeia de Markov finita. O problema multi-período generalizado de média-variância com saltos Markovianos (PGMV) é um problema de controle estocástico sem restrição cuja função objetivo consiste na maximização da soma ponderada ao longo do tempo da combinação linear de três elementos: o valor esperado da riqueza do investidor, o quadrado da esperança desta riqueza e a esperança do quadrado deste patrimônio. A principal contribuição deste trabalho é a derivação analítica de condições necessárias e suficientes para a determinação de uma estratégia ótima de investimento para o problema PGMV. A partir deste modelo são derivadas várias formulações de média-variância, como o modelo tradicional cujo objetivo é maximizar o valor esperado da riqueza final do investidor, dado um nível de risco (variância) do portfólio no horizonte de investimento, bem como o modelo mais complexo que busca maximizar a soma ponderada das esperanças da riqueza ao longo do tempo, limitando a perda deste patrimônio em qualquer momento. Adicionalmente, derivam-se formas fechadas para a solução dos problemas citados quando as restrições incidem somente no instante final. Outra contribuição deste trabalho é a extensão do modelo PGMV para a solução do problema de seleção de carteiras em média-variância com o objetivo de superar um benchmark estocástico, com restrições sobre o valor esperado ou sobre a variância do tracking error do portfólio. Por fim, aplicam-se os resultados obtidos em exemplos numéricos cujo universo de investimento são todas as ações do IBOVESPA. / In this work we deal with a discrete-time multi-period mean-variance portfolio selection model with the market parameters subject to Markov regime switching. 
The multi-period generalized mean-variance portfolio selection model with regime switching (PGMV) is an unrestricted stochastic control problem, in which the objective function involves the maximization of the weighted sum of a linear combination of three parts: the expected wealth, the square of the expected wealth and the expected value of the wealth squared. The main contribution of this work is the analytical derivation of necessary and sufficient conditions for the existence of an optimal control strategy for this PGMV model. We show that several mean-variance models are derived from the PGMV model, such as the traditional formulation in which the objective is to maximize the expected terminal wealth for a given final risk (variance), or the more complex one in which the objective is to maximize the weighted sum of the wealth throughout the investment horizon, with control over the maximum wealth lost. Additionally, we derive closed-form solutions for the above models when the restrictions apply only at the final time. Another contribution of this work is to extend the PGMV model to solve the multi-period portfolio selection problem of beating a stochastic benchmark, with control over the tracking error variance or its expected value. Finally, we run numerical examples in which the investment universe is formed by all the stocks belonging to the IBOVESPA.
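The generalized criterion described above (weighted sum of the expected wealth, the square of the expected wealth, and the expected squared wealth) can be written schematically as follows; the weights $a_t, b_t, c_t$, the horizon $T$ and the wealth symbol $W_t$ are illustrative notation assumed here, not the thesis' exact symbols:

```latex
\max_{u}\;\sum_{t=1}^{T}\Big(
  a_t\,\mathbb{E}[W_t]
  \;-\; b_t\,\big(\mathbb{E}[W_t]\big)^{2}
  \;-\; c_t\,\mathbb{E}\big[W_t^{2}\big]
\Big)
```

Since $\operatorname{Var}(W_t) = \mathbb{E}[W_t^{2}] - (\mathbb{E}[W_t])^{2}$, suitable choices of the weights recover the classical mean-variance trade-off as a special case, which is why the traditional terminal-wealth formulation falls out of the PGMV model.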
78

Linear systems with Markov jumps and multiplicative noises: the constrained total variance problem. / Sistemas lineares com saltos Markovianos e ruídos multiplicativos: o problema da variância total restrita.

Fabio Barbieri 20 December 2016 (has links)
In this work we study the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises. We consider the multiperiod and finite time horizon optimization of a mean-variance cost function under a new criterion. In this new problem, we apply a constraint on the total output variance weighted by its risk parameter while maximizing the expected output. The optimal control law is obtained from a set of interconnected Riccati difference equations, extending previous results in the literature. The application of our results is exemplified by numerical simulations of a portfolio of stocks and a risk-free asset. / Neste trabalho, estudamos o problema do controle ótimo estocástico de sistemas lineares em tempo discreto sujeitos a saltos Markovianos e ruídos multiplicativos. Consideramos a otimização multiperíodo, com horizonte de tempo finito, de um funcional da média-variância sob um novo critério. Neste novo problema, maximizamos o valor esperado da saída do sistema ao mesmo tempo em que limitamos a sua variância total ponderada pelo seu parâmetro de risco. A lei de controle ótima é obtida através de um conjunto de equações de diferenças de Riccati interconectadas, estendendo resultados anteriores da literatura. São apresentadas simulações numéricas para uma carteira de investimentos com ações e um ativo de risco para exemplificarmos a aplicação de nossos resultados.
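A schematic form of the constrained criterion described above, with assumed notation ($y_t$ for the system output, $\xi_t$ for the per-period risk weights, and $\sigma^2$ for the total variance budget; none of these are the thesis' exact symbols):

```latex
\max_{u}\;\mathbb{E}\Big[\sum_{t=0}^{T} y_t\Big]
\quad\text{subject to}\quad
\sum_{t=0}^{T}\xi_t\,\operatorname{Var}(y_t)\;\le\;\sigma^{2}
```

The single aggregate constraint on the weighted total variance is what distinguishes this criterion from per-period variance constraints, and it is what leads to the interconnected Riccati difference equations mentioned in the abstract.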
79

Discrete-time jump linear systems with Markov chain in a general state space. / Sistemas lineares com saltos a tempo discreto com cadeia de Markov em espaço de estados geral.

Danilo Zucolli Figueiredo 04 November 2016 (has links)
This thesis deals with discrete-time Markov jump linear systems (MJLS) with Markov chain in a general Borel space S. Several control issues have been addressed for this class of dynamic systems, including stochastic stability (SS), linear quadratic (LQ) optimal control synthesis, filter design and a separation principle. Necessary and sufficient conditions for SS have been derived. It was shown that SS is equivalent to the spectral radius of an operator being less than 1 or to the existence of a solution to a "Lyapunov-like" equation. Based on the SS concept, the finite- and infinite-horizon LQ optimal control problems were tackled. The solution to the finite- (infinite-)horizon LQ optimal control problem was derived from the associated control S-coupled Riccati difference (algebraic) equations. By S-coupled it is meant that the equations are coupled via an integral over a transition probability kernel having a density with respect to a σ-finite measure on the Borel space S. The design of linear Markov jump filters was analyzed and a solution to the finite- (infinite-)horizon filtering problem was obtained based on the associated filtering S-coupled Riccati difference (algebraic) equations. Conditions for the existence and uniqueness of a stabilizing positive semi-definite solution to the control and filtering S-coupled algebraic Riccati equations have also been derived. Finally, a separation principle for discrete-time MJLS with Markov chain in a general state space was obtained. It was shown that the optimal controller for a partial information optimal control problem separates the partial information control problem into two problems, one associated with a filtering problem and the other associated with an optimal control problem with complete information. It is expected that the results obtained in this thesis may motivate further research on discrete-time MJLS with Markov chain in a general state space. 
/ Esta tese trata de sistemas lineares com saltos markovianos (MJLS) a tempo discreto com cadeia de Markov em um espaço geral de Borel S. Vários problemas de controle foram abordados para esta classe de sistemas dinâmicos, incluindo estabilidade estocástica (SS), síntese de controle ótimo linear quadrático (LQ), projeto de filtros e um princípio da separação. Condições necessárias e suficientes para a SS foram obtidas. Foi demonstrado que SS é equivalente ao raio espectral de um operador ser menor que 1 ou à existência de uma solução para uma equação de Lyapunov. Os problemas de controle ótimo a horizonte finito e infinito foram abordados com base no conceito de SS. A solução para o problema de controle ótimo LQ a horizonte finito (infinito) foi obtida a partir das associadas equações a diferenças (algébricas) de Riccati S-acopladas de controle. Por S-acopladas entende-se que as equações são acopladas por uma integral sobre o kernel estocástico com densidade de transição em relação a uma medida σ-finita no espaço de Borel S. O projeto de filtros lineares markovianos foi analisado e uma solução para o problema da filtragem a horizonte finito (infinito) foi obtida com base nas associadas equações a diferenças (algébricas) de Riccati S-acopladas de filtragem. Condições para a existência e unicidade de uma solução positiva semi-definida e estabilizável para as equações algébricas de Riccati S-acopladas associadas aos problemas de controle e filtragem também foram obtidas. Por último, foi estabelecido um princípio da separação para MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral. Foi demonstrado que o controlador ótimo para um problema de controle ótimo com informação parcial separa o problema de controle com informação parcial em dois problemas, um deles associado a um problema de filtragem e o outro associado a um problema de controle ótimo com informação completa. 
Espera-se que os resultados obtidos nesta tese possam motivar futuras pesquisas sobre MJLS a tempo discreto com cadeia de Markov em um espaço de estados geral.
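The "S-coupled" terminology can be illustrated schematically. In the sketch below, $g(s,v)$ denotes the transition kernel density and $\mu$ the reference measure on the Borel space $\mathcal{S}$; the averaging operator $\mathcal{E}$ and the shape of the recursion are assumed notation for illustration, and the control correction term is omitted:

```latex
\mathcal{E}(X)(s) \;=\; \int_{\mathcal{S}} X(v)\,g(s,v)\,\mu(dv),
\qquad
X_t(s) \;=\; Q(s) \;+\; A(s)^{\top}\,\mathcal{E}(X_{t+1})(s)\,A(s)
\;-\;\big(\text{control correction term}\big).
```

The point is that the Riccati recursion at mode $s$ depends on $X_{t+1}$ only through the kernel-averaged quantity $\mathcal{E}(X_{t+1})(s)$, generalizing the finite sum over modes that appears in the classical finite-chain MJLS equations.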
80

Strategies in robust and stochastic model predictive control

Munoz Carpintero, Diego Alejandro January 2014 (has links)
The presence of uncertainty in model predictive control (MPC) has been accounted for using two types of approaches: robust MPC (RMPC) and stochastic MPC (SMPC). Ideal RMPC and SMPC formulations consider closed-loop optimal control problems whose exact solution, via dynamic programming, is intractable for most systems. Much effort has therefore been devoted to finding good compromises between the degree of optimality and computational tractability. This thesis expands on this effort and presents robust and stochastic MPC strategies with reduced online computational requirements, in which the conservativeness incurred is kept as small as conveniently possible. Two RMPC strategies are proposed for linear systems under additive uncertainty. They are based on a recently proposed approach which uses a triangular prediction structure and a non-linear control policy. One strategy transfers part of the computation of the control policy to an offline stage. The other modifies the prediction structure so that it is striped and the disturbance compensation extends over an infinite horizon. An RMPC strategy for linear systems with additive and multiplicative uncertainty is also presented. It considers polytopic dynamics that are designed so as to maximize the volume of an invariant ellipsoid, and are used in a dual-mode prediction scheme where constraint satisfaction is ensured by an approach based on a variation of Farkas' Lemma. Finally, two SMPC strategies for linear systems with additive uncertainty are presented, which use an affine-in-the-disturbances control policy with a striped structure. One strategy considers an offline sequential design of the gains of the control policy, while in the other these gains are variables in the online optimization. Control-theoretic properties, such as recursive feasibility and stability, are studied for all the proposed strategies. 
Numerical comparisons show that the proposed algorithms can provide a convenient compromise in terms of computational demands and control authority.
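The "striped" affine-in-the-disturbances policy structure mentioned above can be sketched as follows. This is an illustrative NumPy sketch of the general idea only: the control at step i is an affine function of past disturbances, u_i = v_i + sum over j < i of M(i-j) w_j, where the gain depends only on the lag i - j (a banded, Toeplitz-like structure). The function name, the banding convention, and the truncation to a finite number of bands are assumptions, not the thesis' exact parameterization.

```python
import numpy as np

def striped_policy(v, M_bands, w):
    """Affine-in-the-disturbances control with a striped structure:
    u_i = v[i] + sum_{j < i} M_bands[i - j - 1] @ w[j].
    The gain applied to disturbance w[j] depends only on the lag i - j,
    so one small set of band matrices parameterizes the whole horizon."""
    u = []
    for i in range(len(v)):
        ui = np.asarray(v[i], dtype=float).copy()
        for j in range(i):
            lag = i - j - 1
            if lag < len(M_bands):  # gains beyond the stored bands are zero
                ui = ui + M_bands[lag] @ np.asarray(w[j], dtype=float)
        u.append(ui)
    return u

# Scalar toy horizon: nominal inputs zero, two stored band gains
v = [np.array([0.0]), np.array([0.0]), np.array([0.0])]
M_bands = [np.array([[1.0]]), np.array([[0.5]])]  # lag-1 and lag-2 gains
w = [np.array([1.0]), np.array([2.0])]            # observed disturbances
u = striped_policy(v, M_bands, w)
```

The appeal of the striped structure, as the abstract indicates, is that the same small set of band gains compensates disturbances throughout the horizon, so the number of decision variables does not grow with the horizon length.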
