  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Asymptotic behavior of stochastic systems possessing Markovian realizations

Meyn, S. P. (Sean P.) January 1987
No description available.
92

Stable limit theorems for Markov chains

Kimbleton, Stephen Robert January 1967
No description available.
93

Efficient sampling plans in a two-state Markov chain

Bai, Do Sun January 1971
No description available.
94

The second gap of the Markoff spectrum of Q(i)

Hansen, Henry Walter January 1973
No description available.
95

Contributions to the theory of Markov chains

Winkler, William E. January 1973
No description available.
96

Markov chains and potentials

Fraser, Ian Johnson January 1965
No description available.
97

Text classification using a hidden Markov model

Yi, Kwan, 1963- January 2005
No description available.
98

Convergence of some stochastic matrices

Wilcox, Chester Clinton January 1963
Call number: LD2668 .T4 1963 W66 / Master of Science
99

Reinforcement learning applied to option pricing

Martin, K. S. 01 September 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / This dissertation considers the pricing of European and American options. European option prices are determined by the market and can be verified against the closed-form solution to the Black-Scholes model; these options can only be exercised at the maturity date. American option prices cannot be obtained from the same closed-form solution, because American options can be exercised at any time on or before the maturity date. An initial method was investigated that could price European options but not American options. Improvements were made, producing two robust option pricing models, whose results were compared to the closed-form solution in the case of European options and to a numerical approximation in the case of American options. The improved models showed two significant benefits: the ability to price both European and American options, and the ability to calibrate the models to market prices using market data. Varying the parameters of the models revealed the limitations of each improved model. In conclusion, the improved methods are effective procedures for solving the European and American option pricing problem. Keywords: European options, American options, Markov Decision Processes, Kernel-Based Reinforcement Learning, Calibration.
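The European-option benchmark the abstract refers to, the closed-form Black-Scholes solution, can be sketched as follows. This is a minimal illustration with hypothetical parameters; the dissertation's reinforcement learning models and market data are not reproduced here.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option.

    S: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: volatility.
    """
    N = NormalDist().cdf  # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Hypothetical at-the-money example: one year to maturity,
# 5% rate, 20% volatility.
price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)  # about 10.45
```

No such closed form exists for American options, which is why the dissertation turns to Markov Decision Process methods for those.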
100

Essays in information relaxations and scenario analysis for partially observable settings

Ruiz Lacedelli, Octavio January 2019
This dissertation consists of three main essays in which we study important problems in engineering and finance.

In the first part, we study the use of information relaxations to obtain dual bounds in the context of Partially Observable Markov Decision Processes (POMDPs). POMDPs are in general intractable, and the best we can do is obtain suboptimal policies. To evaluate these policies, we investigate and extend the information relaxation approach developed originally for Markov Decision Processes. The use of information relaxation duality for POMDPs presents important challenges, and we show how change-of-measure arguments can be used to overcome them. As a second contribution, we show that many value function approximations for POMDPs are supersolutions. By constructing penalties from supersolutions, we achieve significant variance reduction when estimating the duality gap directly, and the resulting dual bounds are guaranteed to be tighter than those provided by the supersolutions themselves. Applications in robotic navigation and telecommunications are given in Chapter 2, and a further application to personalized medicine in Chapter 5.

In the second part, we discuss a number of weaknesses inherent in traditional scenario analysis. For instance, the standard approach computes the P&L of a portfolio resulting from joint stresses to underlying risk factors while leaving all unstressed risk factors at zero, thereby ignoring the conditional distribution of the unstressed risk factors given the stressed ones. We address these weaknesses by embedding the scenario analysis within a dynamic factor model for the underlying risk factors. We use multivariate state-space models that capture real-world features of financial markets, such as volatility clustering, yet remain sufficiently tractable to permit the computation of, or simulation from, the conditional distribution of the unstressed risk factors. Our approach accommodates both observable and unobservable risk factors. We provide applications to fixed income and options portfolios, showing the degree to which the two scenario analysis approaches can lead to dramatically different results.

In the third part, we propose a framework for studying a human-machine interaction system in the context of financial robo-advising. In this setting, based on risk-sensitive dynamic games, the robo-advisor adaptively learns the preferences of the investor as the investor makes decisions that optimize her risk-sensitive criterion. The investor's and the machine's objectives are aligned, but the presence of asymmetric information makes this joint optimization a game with strategic interactions. By considering an investor with mean-variance risk preferences, we are able to reduce the game to a POMDP. The human-machine interaction protocol features a trade-off between allowing the robo-advisor to learn the investor's preferences through costly communications and optimizing the investor's objective using outdated information.
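The abstract's criticism of standard scenario analysis, that it leaves unstressed risk factors at zero instead of conditioning them on the stress, can be illustrated in the simplest static setting: jointly Gaussian factors and the standard Gaussian conditioning formula. This is a hedged sketch with made-up numbers; the dissertation's dynamic state-space models generalize this idea.

```python
import numpy as np

def condition_gaussian(mu, Sigma, idx_s, s):
    """Conditional distribution of the unstressed factors given that the
    stressed factors (positions idx_s) are fixed at s, for x ~ N(mu, Sigma)."""
    n = len(mu)
    idx_u = [i for i in range(n) if i not in idx_s]
    S_ss = Sigma[np.ix_(idx_s, idx_s)]
    S_us = Sigma[np.ix_(idx_u, idx_s)]
    S_uu = Sigma[np.ix_(idx_u, idx_u)]
    K = S_us @ np.linalg.inv(S_ss)            # regression coefficients
    mu_cond = mu[idx_u] + K @ (s - mu[idx_s])  # shifted conditional mean
    Sigma_cond = S_uu - K @ S_us.T             # reduced conditional covariance
    return mu_cond, Sigma_cond

# Two correlated factors: stressing factor 0 to +2 shifts factor 1's
# conditional mean to 1.6, whereas naive scenario analysis would leave
# factor 1 at its unconditional mean of zero.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
m, C = condition_gaussian(mu, Sigma, idx_s=[0], s=np.array([2.0]))
```

With correlation 0.8, the conditional mean of the unstressed factor is 0.8 × 2 = 1.6 and its conditional variance shrinks to 1 − 0.8² = 0.36, which is exactly the information the naive zero-out approach discards.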
