51 |
Curvature, isoperimetry, and discrete spin systems / Murali, Shobhana 12 1900 (has links)
No description available.
|
52 |
Genetic algorithms : a Markov chain and detailed balance approach / Meddin, Mona 08 1900 (has links)
No description available.
|
53 |
Countable Markov chains with an application to queueing theory / Owens, Ray Collins 05 1900 (has links)
No description available.
|
54 |
Isoperimetric and related constants for graphs and Markov chains / Stoyanov, Tsvetan I. 08 1900 (has links)
No description available.
|
55 |
Random evolutions with feedback / Siegrist, Kyle Travis 05 1900 (has links)
No description available.
|
56 |
State-similarity metrics for continuous Markov decision processes / Ferns, Norman Francis. January 2007 (has links)
In recent years, various metrics have been developed for measuring the similarity of states in probabilistic transition systems (Desharnais et al., 1999; van Breugel & Worrell, 2001a). In the context of Markov decision processes, we have devised metrics providing a robust quantitative analogue of bisimulation. Most importantly, the metric distances can be used to bound the differences in the optimal value function that is integral to reinforcement learning (Ferns et al., 2004; 2005). More recently, we have discovered an efficient algorithm to calculate distances in the case of finite systems (Ferns et al., 2006). In this thesis, we seek to properly extend state-similarity metrics to Markov decision processes with continuous state spaces both in theory and in practice. In particular, we provide the first distance-estimation scheme for metrics based on bisimulation for continuous probabilistic transition systems. Our work, based on statistical sampling and infinite dimensional linear programming, is a crucial first step in real-world planning; many practical problems are continuous in nature, e.g., robot navigation, and often a parametric model or crude finite approximation does not suffice. State-similarity metrics allow us to reason about the quality of replacing one model with another. In practice, they can be used directly to aggregate states.
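For a concrete reference point, the sketch below is a hypothetical Python illustration (not the algorithm developed in the thesis) of the finite-state bisimulation metric that this work extends to continuous state spaces: the metric is the fixed point of d(s,t) = max_a [ cR*|R(s,a) - R(t,a)| + cT*W_d(P(.|s,a), P(.|t,a)) ], where W_d is the Kantorovich (1-Wasserstein) distance, here solved as a small linear program with SciPy. The function names, the constants cR and cT, and the toy MDP are assumptions made for the illustration.

```python
# Illustrative sketch only: finite-state bisimulation metric via fixed-point
# iteration, with the Kantorovich (1-Wasserstein) distance solved as an LP.
import numpy as np
from scipy.optimize import linprog

def kantorovich(p, q, d):
    """1-Wasserstein distance between distributions p and q under ground metric d."""
    n = len(p)
    c = d.reshape(-1)                          # cost of each coupling entry, flattened
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0       # row i of the coupling sums to p[i]
        A_eq[n + i, i::n] = 1.0                # column i of the coupling sums to q[i]
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

def bisim_metric(R, P, cR=1.0, cT=0.9, iters=50):
    """R[a, s]: reward; P[a, s]: next-state distribution. Returns an |S| x |S| metric."""
    nA, nS = R.shape
    d = np.zeros((nS, nS))
    for _ in range(iters):                     # contraction mapping with factor cT < 1
        d_new = np.zeros_like(d)
        for s in range(nS):
            for t in range(nS):
                d_new[s, t] = max(
                    cR * abs(R[a, s] - R[a, t]) + cT * kantorovich(P[a, s], P[a, t], d)
                    for a in range(nA)
                )
        d = d_new
    return d

# Toy example: two states with identical dynamics but different rewards.
R = np.array([[1.0, 0.0]])                     # one action
P = np.array([[[0.5, 0.5], [0.5, 0.5]]])
print(bisim_metric(R, P))                      # off-diagonal entries equal cR * |1 - 0| = 1
```

In this toy case the transition distributions coincide, so the metric distance between the two states reduces to the reward difference; aggregating states whose pairwise distance is small is exactly the use of the metric mentioned above.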
|
57 |
Analysis of reframing performance of multilevel synchronous time division multiplex hierarchy / Liu, Shyan-Shiang 05 1900 (has links)
No description available.
|
58 |
Measures of effectiveness for data fusion based on information entropy / Noonan, Colin Anthony January 2000 (has links)
This thesis is concerned with measuring and predicting the performance and effectiveness of a data fusion process. Its central proposition is that information entropy may be used to quantify concisely the effectiveness of the process. The personal and original contribution to that subject which is contained in this thesis is summarised as follows: The mixture of performance behaviours that occur in a data fusion system is described and modelled as the states of an ergodic Markov process. A new analytic approach to combining the entropy of discrete and continuous information is defined. A new, simple and accurate model of data association performance is proposed. A new model is proposed for the propagation of information entropy in a minimum mean square combination of track estimates. A new model is proposed for the propagation of the information entropy of object classification belief as new observations are incorporated in a recursive Bayesian classifier. A new model to quantify the information entropy of the penalty of ignorance is proposed. New formulations of the steady-state solution of the matrix Riccati equation to model tracker performance are proposed.
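To illustrate the first contribution (modelling the mixture of performance behaviours as the states of an ergodic Markov process), the following is a minimal, hypothetical Python sketch rather than the thesis's own model: it computes the stationary distribution and the entropy rate of a small two-state chain. The state labels, transition probabilities and function names are invented for the example.

```python
# Hypothetical sketch: stationary distribution and entropy rate of an ergodic
# Markov chain, H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per step).
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of an ergodic transition matrix P (pi P = pi)."""
    n = P.shape[0]
    # Solve (P^T - I) pi = 0 together with the normalisation sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def entropy_rate(P):
    """Entropy rate (bits per step) of the ergodic chain with transition matrix P."""
    pi = stationary_distribution(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# Invented example: two performance states, e.g. "track maintained" vs "track lost".
P = np.array([[0.95, 0.05],
              [0.30, 0.70]])
print(stationary_distribution(P), entropy_rate(P))
```

The entropy rate gives a single number summarising how unpredictable the switching between performance behaviours is, which is the kind of concise entropy-based measure the thesis argues for.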
|
59 |
Text classification using a hidden Markov model / Yi, Kwan, 1963- January 2005 (has links)
Text categorization (TC) is the task of automatically categorizing textual digital documents into pre-set categories by analyzing their contents. The purpose of this study is to develop an effective TC model to resolve the difficulty of automatic classification. In this study, two primary goals are intended. First, a Hidden Markov Model (HMM) is proposed as a relatively new method for text categorization. HMM has been applied to a wide range of applications in text processing such as text segmentation and event tracking, information retrieval, and information extraction. Few, however, have applied HMM to TC. Second, the Library of Congress Classification (LCC) is adopted as a classification scheme for the HMM-based TC model for categorizing digital documents. LCC has been used only in a handful of experiments for the purpose of automatic classification. In the proposed framework, a general prototype for an HMM-based TC model is designed, and an experimental model based on the prototype is implemented so as to categorize digitized documents into LCC. A sample of abstracts from the ProQuest Digital Dissertations database is used as the test-base. Dissertation abstracts, which are pre-classified by professional librarians, form an ideal test-base for evaluating the proposed model of automatic TC. For comparative purposes, a Naive Bayesian model, which has been extensively used in TC applications, is also implemented. Our experimental results show that the performance of our model surpasses that of the Naive Bayesian model, as measured by comparing the automatic classification of abstracts to the manual classification performed by professionals.
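To make the HMM-based categorization idea concrete, here is a minimal, hypothetical Python sketch rather than the model developed in the thesis: one HMM per category scores a token-index sequence with the (scaled) forward algorithm, and the document is assigned to the category whose log-likelihood plus log prior is highest. The parameters, vocabulary and category names are toy values chosen for the illustration; a real system would estimate them from training data.

```python
# Hypothetical sketch: likelihood-based text categorization with one HMM per class.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of the observation index sequence `obs` under HMM (pi, A, B).
    pi: (n_states,), A: (n_states, n_states), B: (n_states, n_symbols)."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()                 # rescale to avoid numerical underflow
        loglik += np.log(scale)
        alpha /= scale
    return loglik + np.log(alpha.sum())

def classify(obs, category_hmms, log_priors):
    """Pick the category whose HMM (plus prior) best explains the token sequence."""
    scores = {c: forward_loglik(obs, *hmm) + log_priors[c]
              for c, hmm in category_hmms.items()}
    return max(scores, key=scores.get)

# Toy example: two categories with hand-set parameters over a 3-symbol vocabulary.
hmms = {
    "science": (np.array([0.6, 0.4]),
                np.array([[0.7, 0.3], [0.4, 0.6]]),
                np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])),
    "arts":    (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.1, 0.3, 0.6], [0.2, 0.3, 0.5]])),
}
log_priors = {"science": np.log(0.5), "arts": np.log(0.5)}
print(classify([0, 1, 0, 0], hmms, log_priors))   # expected to favour "science"
```

A Naive Bayesian baseline of the kind compared against in the thesis differs only in the scoring step: it multiplies per-token emission probabilities without any state transitions.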
|
60 |
Asymptotic behavior of stochastic systems possessing Markovian realizations / Meyn, S. P. (Sean P.) January 1987 (has links)
The asymptotic properties of discrete-time stochastic systems operating under feedback are addressed. It is assumed that a Markov chain $\Phi$ evolving on Euclidean space exists, and that the input and output processes appear as functions of $\Phi$. The main objectives of the thesis are (i) to extend various asymptotic properties of Markov chains to hold for arbitrary initial distributions; and (ii) to develop a robustness theory for Markovian systems. / A condition called local stochastic controllability, a generalization of the concept of controllability from linear system theory, is introduced and is shown to be sufficient to ensure that the first objective is met. The second objective is explored by introducing a notion of convergence for stochastic systems and investigating the behavior of the invariant probabilities corresponding to a convergent sequence of stochastic systems. / These general results are applied to two previously unsolved problems: the asymptotic behavior of linear state space systems operating under nonlinear feedback, and the stability and asymptotic behavior of a class of random-parameter AR(p) stochastic systems under optimal control.
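The flavour of the first application can be conveyed with a small, illustrative Python simulation that is not taken from the thesis: a scalar linear plant driven by Gaussian noise and closed with a saturating (nonlinear) feedback law is started from two very different initial conditions, and the post-burn-in statistics are compared; if the closed-loop Markov chain forgets its initial distribution, the two summaries should agree. All constants and names are arbitrary choices for the demonstration.

```python
# Illustrative only: a scalar linear system under saturating nonlinear feedback,
#   x_{k+1} = a*x_k + clip(-K*x_k, -sat, sat) + w_k,   w_k ~ N(0, noise_std^2),
# simulated from two different initial states to compare long-run behaviour.
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, a=0.9, K=0.5, sat=1.0, steps=20000, noise_std=0.5):
    """Run the closed-loop chain and return the trajectory after a burn-in period."""
    x, path = x0, []
    for _ in range(steps):
        u = np.clip(-K * x, -sat, sat)       # saturating (nonlinear) feedback law
        x = a * x + u + noise_std * rng.standard_normal()
        path.append(x)
    return np.array(path[steps // 2:])       # discard the first half as burn-in

for x0 in (-5.0, 5.0):
    traj = simulate(x0)
    print(f"x0 = {x0:+.1f}: mean = {traj.mean():+.3f}, std = {traj.std():.3f}")
```

The near-identical empirical means and standard deviations for the two starting points are the sample-path counterpart of asymptotic properties holding for arbitrary initial distributions.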
|