  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Stable limit theorems for Markov chains /

Kimbleton, Stephen Robert January 1967 (has links)
No description available.
32

Efficient sampling plans in a two-state Markov chain /

Bai, Do Sun January 1971 (has links)
No description available.
33

The second gap of the Markoff spectrum of Q(i) /

Hansen, Henry Walter January 1973 (has links)
No description available.
34

Contributions to the theory of Markov chains /

Winkler, William E. January 1973 (has links)
No description available.
35

Markov chains and potentials.

Fraser, Ian Johnson. January 1965 (has links)
No description available.
36

Text classification using a hidden Markov model

Yi, Kwan, 1963- January 2005 (has links)
No description available.
37

Convergence of some stochastic matrices

Wilcox, Chester Clinton. January 1963 (has links)
Call number: LD2668 .T4 1963 W66 / Master of Science
38

Essays in information relaxations and scenario analysis for partially observable settings

Ruiz Lacedelli, Octavio January 2019 (has links)
This dissertation consists of three main essays in which we study important problems in engineering and finance.
In the first part of this dissertation, we study the use of information relaxations to obtain dual bounds in the context of Partially Observable Markov Decision Processes (POMDPs). POMDPs are in general intractable, and the best we can do is obtain suboptimal policies. To evaluate these policies, we investigate and extend the information relaxation approach developed originally for Markov Decision Processes. The use of information relaxation duality for POMDPs presents important challenges, and we show how change-of-measure arguments can be used to overcome them. As a second contribution, we show that many value function approximations for POMDPs are supersolutions. By constructing penalties from supersolutions we achieve significant variance reduction when estimating the duality gap directly, and the resulting dual bounds are guaranteed to be tighter than those provided by the supersolutions themselves. Applications in robotic navigation and telecommunications are given in Chapter 2. A further application of this approach is provided in Chapter 5 in the context of personalized medicine.
In the second part of this dissertation, we discuss a number of weaknesses inherent in traditional scenario analysis. For instance, the standard approach aims to compute the P&L of a portfolio resulting from joint stresses to underlying risk factors, leaving all unstressed risk factors set to zero. This approach thereby ignores the conditional distribution of the unstressed risk factors given the stressed risk factors. We address these weaknesses by embedding the scenario analysis within a dynamic factor model for the underlying risk factors. We turn to multivariate state-space models that capture real-world features of financial markets, such as volatility clustering. These models are also sufficiently tractable to permit computing (or simulating from) the conditional distribution of the unstressed risk factors. Our approach accommodates both observable and unobservable risk factors. We provide applications to fixed-income and options portfolios, where we show the degree to which the two scenario analysis approaches can lead to dramatically different results.
In the third part, we propose a framework for studying a human-machine interaction system in the context of financial robo-advising. In this setting, based on risk-sensitive dynamic games, the robo-advisor adaptively learns the preferences of the investor as the investor makes decisions that optimize her risk-sensitive criterion. The investor's and the machine's objectives are aligned, but the presence of asymmetric information makes this joint optimization process a game with strategic interactions. By considering an investor with mean-variance risk preferences, we are able to reduce the game to a POMDP. The human-machine interaction protocol features a trade-off between allowing the robo-advisor to learn the investor's preferences through costly communications and optimizing the investor's objective using outdated information.
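The conditional-scenario idea in the abstract above can be illustrated with a toy Gaussian factor model (a minimal sketch, not the dissertation's state-space machinery; all factor names and numbers are invented for illustration). Under joint normality, the conditional mean of the unstressed factors given the stressed ones is the regression E[z_u | z_s] = mu_u + Sigma_us Sigma_ss^{-1} (z_s - mu_s), whereas naive scenario analysis pins the unstressed factors at zero:

```python
import numpy as np

# Toy joint covariance for 3 risk factors: factor 0 is stressed,
# factors 1 and 2 are left unstressed. Values are purely illustrative.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])

stressed_idx = [0]
unstressed_idx = [1, 2]
z_s = np.array([-3.0])          # a 3-sigma down shock to factor 0

# Naive scenario analysis: unstressed factors pinned at zero.
naive = np.zeros(2)

# Conditional (Gaussian) scenario analysis:
# E[z_u | z_s] = mu_u + Sigma_us @ inv(Sigma_ss) @ (z_s - mu_s)
S_us = Sigma[np.ix_(unstressed_idx, stressed_idx)]
S_ss = Sigma[np.ix_(stressed_idx, stressed_idx)]
cond_mean = mu[unstressed_idx] + S_us @ np.linalg.solve(S_ss, z_s - mu[stressed_idx])

print("naive unstressed factors:      ", naive)        # [0. 0.]
print("conditional unstressed factors:", cond_mean)    # correlated factors move too
```

With these numbers the conditional approach moves the unstressed factors to [-1.8, -0.9], so a portfolio P&L computed under the naive convention can differ materially from the conditional one, which is the gap the essay exploits.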
39

Continuous Markov processes on the Sierpinski Gasket and on the Sierpinski Carpet.

January 2008 (has links)
Li, Chung Fai. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 43). Abstracts in English and Chinese.
Contents:
  Acknowledgement (p. ii)
  Chapter 1  Introduction (p. 1)
  Chapter 2  Construction of the State Spaces (p. 5)
    2.1  The Sierpinski Gasket (p. 5)
      2.1.1  Neighbourhood in the Sierpinski Gasket (p. 7)
    2.2  The Sierpinski Carpet (p. 9)
      2.2.1  Neighbourhood in the Sierpinski Carpet (p. 10)
  Chapter 3  Preliminary Random Processes on Each Level (p. 12)
    3.1  The Sierpinski Gasket (p. 12)
      3.1.1  Definitions (p. 12)
      3.1.2  Properties of the Random Walk (p. 13)
      3.1.3  Preparations for convergence and continuity (p. 16)
    3.2  The Sierpinski Carpet (p. 19)
      3.2.1  The Brownian Motion Bn on Cn (p. 19)
      3.2.2  Properties of Bm(t) (p. 20)
      3.2.3  Exit time for Bn (p. 27)
  Chapter 4  The limiting process (p. 29)
    4.1  The Sierpinski Gasket (p. 29)
      4.1.1  Convergence and continuity (p. 29)
      4.1.2  Extension from to G (p. 31)
      4.1.3  Markov property (p. 33)
    4.2  The Sierpinski Carpet (p. 34)
      4.2.1  Continuity (p. 34)
      4.2.2  Existence of Markov process on C (p. 37)
      4.2.3  Piecing Together (p. 38)
  Bibliography (p. 43)
40

Research of mixture of experts model for time series prediction

Wang, Xin, n/a January 2005 (has links)
For the prediction of chaotic time series, a dichotomy has arisen between local approaches and global approaches. Local approaches have a reputation for simplicity and feasibility, but they generally do not produce a compact description of the underlying system and are computationally intensive. Global approaches require less computation and can yield a global representation of the studied time series. However, owing to the complexity of the underlying process, it is often difficult to construct a global model that predicts precisely. In addition to these approaches, a combination of the global and local techniques, called mixture of experts (ME), is also possible, in which a small number of models work cooperatively to carry out the prediction. This thesis reports on research into ME models for chaotic time series prediction. Building on a review of techniques in time series prediction, an HMM-based ME model called "Time-line" Hidden Markov Experts (THME) is developed, in which the trajectory of the time series is divided into regimes in the state space and regression models called local experts learn the mapping on each regime separately. The dynamics of the expert combination is an HMM; however, the transition probabilities are designed to be time-varying and conditional on "real-time" information about the time series. For learning the "time-line" HMM, a modified Baum-Welch algorithm is developed and its convergence is proved. Different versions of the model, based on MLP, RBF and SVM experts, are constructed and applied to a number of chaotic time series for both one-step-ahead and multi-step-ahead prediction. Experiments show that, in general, THME achieves better generalization performance than the corresponding single models in one-step-ahead prediction and performance comparable to some published benchmarks in multi-step-ahead prediction.
Various properties of THME are investigated, including feature selection for trajectory dividing, clustering techniques for regime extraction, the "time-line" HMM for expert combination, and the performance of the model with different numbers of experts. A number of interesting future directions are suggested, including feature selection for regime extraction, model selection for transition probability modelling, extension to distribution prediction, and application to other time series.
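The expert-combination step this abstract describes can be sketched with a toy HMM-gated mixture of two linear "experts" (an illustration of the general idea only, not the thesis's THME model: the experts, the observation-dependent transition matrix, and the Gaussian likelihood below are all invented for the example):

```python
import numpy as np

# Two hypothetical local experts, each a fixed linear one-step predictor.
experts = [lambda x: 0.9 * x,          # "slow regime" expert
           lambda x: -0.5 * x + 1.0]   # "oscillating regime" expert

def transition(x):
    """Time-varying transition matrix conditioned on the current observation,
    echoing the 'time-line' idea; this functional form is made up."""
    p = 1.0 / (1.0 + np.exp(-x))       # larger x -> more likely to stay in regime 0
    return np.array([[p, 1 - p],
                     [1 - p, p]])

def predict(series, belief=np.array([0.5, 0.5]), sigma=0.5):
    """One-step-ahead ME forecast: mix expert outputs with HMM forward weights."""
    for t in range(len(series) - 1):
        belief = belief @ transition(series[t])           # propagate regime belief
        preds = np.array([f(series[t]) for f in experts])
        lik = np.exp(-0.5 * ((series[t + 1] - preds) / sigma) ** 2)
        belief = belief * lik                             # reweight by expert fit
        belief = belief / belief.sum()                    # normalise posterior
    preds = np.array([f(series[-1]) for f in experts])
    return belief @ preds                                 # gated combination

x = np.array([1.0, 0.9, 0.82, 0.74])
print("next-step forecast:", predict(x))
```

The forecast is always a convex combination of the experts' outputs, with weights given by the HMM forward recursion; THME's contribution lies in learning the experts, regimes, and time-varying transitions from data via the modified Baum-Welch algorithm.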
