501 |
Numerical solution of Markov Chains. Elsayad, Amr Lotfy, 01 January 2002 (has links)
This project deals with techniques to solve Markov Chains numerically.
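One standard technique of this kind, computing a chain's stationary distribution by power iteration, can be sketched as follows (an illustrative example with an invented transition matrix, not code from the project):

```python
import numpy as np

# Transition matrix of a small 3-state chain (rows sum to 1); the
# matrix is invented for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

def stationary(P, tol=1e-12, max_iter=10_000):
    """Power iteration: repeatedly apply the transition operator to a
    probability vector until it stops changing."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary(P)
```

For larger or sparse chains, direct methods (Gaussian elimination on the balance equations) or Krylov-subspace solvers are usual alternatives.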
|
502 |
Self-Organization of Multi-Agent Systems Using Markov Chain Models. January 2020 (has links)
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.
This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. Since agents that keep transitioning between states after the target distribution is reached potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that stop agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution. / Dissertation/Thesis / Doctoral Dissertation, Mechanical Engineering, 2020
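The stabilization problem, designing transition probabilities that make a target distribution invariant, can be illustrated with the classical Metropolis construction (a sketch of one such design, not the dissertation's own method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired stationary distribution over 4 sites (e.g., swarm coverage goals).
target = np.array([0.1, 0.2, 0.3, 0.4])
n = len(target)

# Metropolis construction on a complete graph: propose a site uniformly,
# accept with probability min(1, target[j] / target[i]).  Detailed
# balance then makes `target` invariant.
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = min(1.0, target[j] / target[i]) / (n - 1)
    P[i, i] = 1.0 - P[i].sum()

assert np.allclose(target @ P, target)  # target is stationary under P

# Mean-field check: many independent agents following P approach `target`.
agents = rng.integers(0, n, size=100_000)
cum = P.cumsum(axis=1)                  # per-state inverse-CDF tables
for _ in range(30):
    u = rng.random(agents.size)
    agents = (cum[agents] < u[:, None]).sum(axis=1)

empirical = np.bincount(agents, minlength=n) / agents.size
```

The empirical agent distribution converges to `target` even though each agent acts independently, which is the mean-field effect the abstract describes.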
|
503 |
Analysis of internet image search performance. Wang, Xiaoling, 01 January 2011 (has links)
No description available.
|
504 |
Event History Analysis in Multivariate Longitudinal Data. Yuan, Chaoyu, January 2021 (has links)
This thesis studies event history analysis in multivariate longitudinal observational databases (LODs) and its application in postmarketing surveillance to identify and measure the relationship between health outcome events and drug exposures. The LODs contain repeated measurements on each individual whose healthcare information is recorded electronically. Novel statistical methods are being developed to handle challenging issues arising from the scale and complexity of postmarketing surveillance LODs. In particular, the self-controlled case series (SCCS) method has been developed with two major features: (1) it uses only individuals with at least one event for analysis and inference, and (2) each individual serves as his or her own control, effectively requiring a person to switch treatments during the observation period. Although this method handles heterogeneity and bias, it does not take full advantage of the observational databases. In this connection, the SCCS method may lead to a substantial loss of efficiency.
We propose a multivariate proportional intensity modeling approach with random effects for multivariate LODs. The proposed method can explain the heterogeneity and eliminate bias in LODs. It also handles multiple types of event cases and makes full use of the observational databases. In the first part of this thesis, we present the multivariate proportional intensity model with correlated frailty. We explore the correlation structure between multiple types of clinical events and drug exposures. We introduce a multivariate Gaussian frailty to incorporate the within-subject heterogeneity, i.e., hidden confounding factors. For parameter estimation, we adopt a Bayesian approach, using the Markov chain Monte Carlo method to draw a series of samples from the targeted full likelihood. We compare the new method with the SCCS method and some frailty models through simulation studies.
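The Markov chain Monte Carlo step can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch with a hypothetical one-dimensional log-posterior, not the thesis's frailty model):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Hypothetical one-dimensional log-posterior (a standard normal),
    # standing in for the full likelihood of the frailty model.
    return -0.5 * theta**2

def metropolis(n_samples, step=2.4, theta0=0.0):
    """Random-walk Metropolis: propose a jump, accept with
    probability min(1, posterior ratio), else stay put."""
    chain = np.empty(n_samples)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_samples):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept the proposal
            theta, lp = prop, lp_prop
        chain[i] = theta                          # record current state
    return chain

chain = metropolis(50_000)[5_000:]  # discard burn-in
```

In practice, a model like the one in this thesis would use Gibbs or Hamiltonian updates over many parameters, but the accept/reject mechanics are the same.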
We apply the proposed model to an electronic health record (EHR) dataset and identify event types as defined in the Observational Medical Outcomes Partnership (OMOP) project. We show that the proposed method outperforms existing methods in terms of common metrics, such as receiver operating characteristic (ROC) metrics. Finally, we extend the proposed correlated frailty model to include a dynamic random effect. We establish a general asymptotic theory for the nonparametric maximum likelihood estimators in terms of identifiability, consistency, asymptotic normality, and asymptotic efficiency. A detailed illustration of the proposed method is given for the clinical event myocardial infarction (MI) and drug treatment with angiotensin-converting enzyme (ACE) inhibitors, showing the dynamic effect of unobserved heterogeneity.
|
505 |
Modelling Safety of Autonomous Driving with Semi-Markov Processes. Kvanta, Hugo, January 2021 (has links)
With the advent of autonomous vehicles, the issue of safety evaluation has become key. ISO 26262 recommends using Markov chains. However, in their most common form, Markov chains lack the flexibility required to model non-exponential probability distributions and systems displaying parallelism. In these cases, generalized semi-Markov processes are better suited, though these are significantly more taxing to analyze mathematically. This thesis instead explores the option of simulating these systems directly via MATLAB's Simulink and Stateflow. A system currently under study by Scania, here called CASE, was used as an example. The results showed that direct simulation is indeed possible, but the computational times are significantly greater than those of standard MATLAB functions. The method should therefore be employed for parallel systems when results with a high level of fidelity are needed and alternative methods are not available.
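The non-exponential behavior that motivates semi-Markov modeling can be sketched with a direct simulation (illustrative Weibull holding times in a two-state safety model; the parameters are invented, and the thesis itself works in Simulink/Stateflow rather than plain code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state safety model: state 0 = operational, state 1 = degraded.
# Holding times are Weibull distributed (non-exponential), which an
# ordinary Markov chain cannot represent; hence a semi-Markov process.
shape = {0: 1.5, 1: 0.8}   # Weibull shape per state (invented)
scale = {0: 10.0, 1: 2.0}  # Weibull scale per state (invented)

def simulate(horizon):
    """Direct event-driven simulation of the semi-Markov process."""
    t, state = 0.0, 0
    time_in = {0: 0.0, 1: 0.0}
    while t < horizon:
        hold = scale[state] * rng.weibull(shape[state])
        hold = min(hold, horizon - t)   # truncate the final sojourn
        time_in[state] += hold
        t += hold
        state = 1 - state               # alternate states in this sketch
    return time_in

occupancy = simulate(100_000.0)
frac_operational = occupancy[0] / (occupancy[0] + occupancy[1])
```

The long-run fraction of time spent operational equals the ratio of mean holding times, here roughly 9.03 / (9.03 + 2.27), about 0.80, a quantity an exponential-holding-time chain with the same means would match only in this average, not in its distributional behavior.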
|
506 |
Coarse Ricci curvature of Markov processes. Veysseire, Laurent, 16 July 2012 (has links)
The coarse Ricci curvature of a Markov process on a Polish space is defined as a local contraction rate of the W1 Wasserstein distance between the laws of the process starting at two distinct points. The first part of this thesis deals with results that hold on general Polish spaces. The simplest of them is that the infimum of the coarse Ricci curvature is a global contraction rate of the semigroup of the process for the W1 distance between probability measures. Though intuitive, this result is difficult to prove in continuous time. The proof of this result, and its consequences for the spectral gap of the generator, are the subject of Chapter 1. Another interesting result, which uses the values of the coarse Ricci curvature at different points rather than only its infimum, is a concentration result for the equilibrium measures, holding only in a discrete-time framework. That is the topic of Chapter 2. The second part of this thesis deals with the particular case of diffusions on Riemannian manifolds. A formula is given that yields the coarse Ricci curvature from the generator of the diffusion. In the case when the metric is adapted to the diffusion, we show the existence of a coupling between the paths starting at two different points such that the coarse Ricci curvature is exactly the rate of decrease of the distance between these paths. We can then show that the spectral gap of the generator is at least the harmonic mean of the Ricci curvature. This result can be generalized when the metric is not the one induced by the generator, but it requires a very restrictive hypothesis, and the curvature one has to consider is smaller.
|
507 |
Response of dynamic systems to a class of renewal impulse process excitations: non-diffusive Markov approach. Tellier, Matilde, 02 December 2008
The most suitable model for idealizing random sequences of shocks and impacts on vibratory systems is that of a random train of pulses (or impulses), whose arrivals are characterized in terms of stochastic point processes. Most of the existing methods of stochastic dynamics are relevant to random impulsive excitations driven by Poisson processes, and there exist some methods for Erlang renewal-driven impulse processes. Herein, two classes of random impulse processes are considered. The first one is the train of impulses whose interarrival times are driven by an Erlang renewal process. The second class is obtained by selecting some impulses from the train driven by an Erlang renewal process. The selection is performed with the aid of a jump, zero-one stochastic process governed by a stochastic differential equation driven by the independent Erlang renewal processes. The underlying counting process, driving the arrival times of the impulses, is fully characterized. The expressions for the probability density functions of the first and second waiting times are derived, and by means of these functions it is proved that the underlying counting process is a renewal (non-Erlang) process. The probability density functions of the interarrival times are evaluated for four different cases of the driving process, and the results obtained for some example sets of parameters are shown graphically.
The advantage of modeling the interarrival times using the class of non-Erlang renewal processes analyzed in the present dissertation, rather than the Poisson or Erlang distributions is that it is possible to deal with a broader class of the interarrival probability density functions. The non-Erlang renewal processes considered herein, obtained from two independent Erlang renewal processes, are characterized by four parameters that can be chosen to fit more closely the actual data on the distribution of the interarrival times.
As the renewal counting process is not one with independent increments, the state vector of the dynamic system under a renewal impulse process excitation is not a Markov process. The non-Markov problem may then be converted into a Markov one at the expense of augmenting the state vector with auxiliary discrete stochastic variables driven by a Poisson process. In contrast to the technique existing in the literature (Iwankiewicz and Nielsen), a novel conversion technique is devised here, in which the auxiliary variables are all zero-one processes. In the considered class of non-Erlang renewal impulse processes, each of the driving Erlang processes is recast in terms of a Poisson process, and the augmented state vector, driven by two independent Poisson processes, becomes a non-diffusive Markov process.
For a linear oscillator under the considered class of non-Erlang renewal impulse processes, the equations for the response moments are obtained from the generalized Itô differential rule, and the mean value and variance of the response are evaluated and shown graphically for some selected sets of parameters.
For a non-linear oscillator under both Erlang renewal-driven impulses and the considered class of non-Erlang renewal impulse processes, the technique of equations for moments together with a modified closure technique is devised.
The specific physical properties of an impulsive load process allow one to modify the classical cumulant-neglect closure scheme and to develop a more efficient technique for the class of excitations considered. The joint probability density of the augmented state vector is expressed as a sum of contributions conditioned on the 'on' and 'off' states of the auxiliary variables. A discrete part of the joint probability density function accounts for the fact that there is a finite probability of the system being in a deterministic state (for example, at rest) from the initial time to the occurrence of the first impulse. The continuous part, which is the conditional probability given that the first impulse has occurred, can be expressed in terms of functions of the displacement and velocity of the system. These functions can be viewed as unknown probability densities of a bi-variate stochastic process, each of which originates a set of 'conditional moments'. The set of relationships between unconditional and conditional moments is derived. The ordinary cumulant-neglect closure is then performed on the conditional moments pertinent to the continuous part only. The closure scheme is then formulated by expressing the 'unconditional' moments of order greater than the order of closure in terms of unconditional moments of lower order.
The stochastic analysis of a Duffing oscillator under a random train of impulses driven by an Erlang renewal process and a non-Erlang renewal process R(t) is performed by applying the second-order ordinary cumulant-neglect closure and the modified second-order closure approximation, and the approximate analytical results are verified against direct Monte Carlo simulation. The modified closure scheme proves to give better results for highly non-Gaussian trains of impulses, characterized by a low mean arrival rate.
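The Monte Carlo verification can be sketched for a Duffing oscillator driven by a Poisson impulse train (a simplified stand-in: the dissertation treats the harder non-Erlang renewal-driven trains, and every parameter value below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Duffing oscillator  x'' + 2*zeta*x' + x + eps*x**3 = impulse train,
# driven by a Poisson train of impulses with rate nu and i.i.d.
# Gaussian magnitudes (each impulse is a jump in velocity).
zeta, eps = 0.05, 0.1
nu, sigma_y = 0.5, 1.0   # impulse arrival rate and magnitude std
T, dt = 50.0, 0.01

def simulate_path():
    x, v = 0.0, 0.0
    t, t_next = 0.0, rng.exponential(1.0 / nu)
    while t < T:
        if t >= t_next:                      # impulse: velocity jump
            v += sigma_y * rng.normal()
            t_next += rng.exponential(1.0 / nu)
        a = -2.0 * zeta * v - x - eps * x**3
        v += a * dt                          # semi-implicit Euler step
        x += v * dt
        t += dt
    return x

# Sample the displacement at time T over independent realizations.
samples = np.array([simulate_path() for _ in range(100)])
```

Moments estimated from such ensembles are what the closure approximations are checked against; for symmetric impulse magnitudes the mean displacement should hover near zero.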
|
508 |
Robust and Interpretable Sequential Decision-Making for Healthcare. Grand-Clement, Julien, January 2021 (has links)
The Markov Decision Process (MDP) is a common framework for modeling sequential decision-making problems, with applications ranging from inventory and supply chains to healthcare, autonomous driving, and repeated games. Despite its modeling power, two fundamental challenges arise when using the MDP framework in real-world applications. First, the optimal decision rule may be highly dependent on the MDP parameters (e.g., transition rates across states and rewards for each state-action pair). When the parameters are misestimated, the resulting decision rule may be suboptimal when deployed in practice.
Additionally, the optimal policies computed by state-of-the-art algorithms may not be interpretable and can be seen as black boxes. Among other reasons, this is problematic because the policy may not be understandable to the people who are supposed to operationalize it. In this thesis, we aim to broaden the applicability of the MDP framework by addressing the challenges of robustness and interpretability. In the first part of the thesis, we focus on robustness.
In Chapter 2, we introduce a novel model for parameter uncertainty that is significantly less pessimistic than prior models of uncertainty while enabling the efficient computation of a robust policy. In Chapter 3, we consider a healthcare application in which we focus on proactively transferring hospital patients to the Intensive Care Unit to improve overall survival rates and patient flow. In the second part of this thesis, we focus on interpretable algorithms, with an emphasis on the application of finding novel triage protocols for ventilator allocation for COVID-19 patients.
In Chapter 4, we introduce a simulation model to estimate the performance of the official New York State (NYS) triage protocol at various levels of ventilator shortage, using a real dataset of patients intubated during Spring 2020 because of COVID-19 complications. In Chapter 5, we introduce our algorithmic framework for computing interpretable (tree) policies and apply our methods to learn novel triage protocols.
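A tree policy of the kind discussed here can be as simple as a few nested thresholds (a hypothetical illustration; the features and cutoffs below are invented and are not the protocol learned in the thesis):

```python
# A hypothetical two-level tree policy for ventilator triage.  The
# features (SOFA score, age) and the thresholds are invented for
# illustration only.
def triage_priority(sofa_score: int, age: int) -> str:
    if sofa_score <= 7:
        return "high"
    if sofa_score <= 11:
        return "high" if age < 65 else "medium"
    return "low"
```

Such a policy can be audited branch by branch by clinicians, which is the sense in which tree policies are interpretable while black-box policies are not.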
|
509 |
Branching processes for structured populations and estimators for cell division. Marguet, Aline, 27 November 2017 (has links)
We study structured populations without interactions from a probabilistic and a statistical point of view. The underlying motivation of this work is the understanding of cell division mechanisms and of cell aging. We use the formalism of branching measure-valued Markov processes. In our model, each individual is characterized by a trait (age, size, etc.) which evolves according to a Markov process. The rate of division of each individual is a function of its trait, and when a branching event occurs, the trait of the descendants at birth depends on the trait of the mother and on the number of descendants. First, we study the trait of a uniformly sampled individual in the population. We explicitly describe the penalized Markov process, named the auxiliary process, corresponding to the dynamics of the trait of a "typical" individual, by giving its associated infinitesimal generator. Then, we study the asymptotic behavior of the empirical measure associated with the branching process. Under assumptions ensuring the ergodicity of the auxiliary process, we prove that the auxiliary process asymptotically corresponds to the trait along the ancestral lineage of a uniformly sampled individual in the population. Finally, we address the problem of parameter estimation in the case of a branching process structured by a diffusion. We consider data consisting of the trait at birth of all individuals in the population up to a given generation. We give kernel estimators for the transition density and the invariant measure of the chain corresponding to the trait of an individual along a lineage. Moreover, in the case of a diffusion reflected on a compact set, we use maximum likelihood estimation to reconstruct the division rate. We prove consistency and asymptotic normality of this estimator and carry out its numerical implementation.
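The kernel estimation of an invariant measure can be sketched on a chain whose invariant law is known in closed form (an illustrative AR(1) chain standing in for the trait-along-a-lineage chain, not the thesis's cell-division model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Chain with a known invariant law: the AR(1) process
#   X_{k+1} = a * X_k + e_k,  e_k ~ N(0, 1),
# whose invariant measure is N(0, 1 / (1 - a**2)).
a = 0.5
x = np.empty(20_000)
x[0] = 0.0
for k in range(1, x.size):
    x[k] = a * x[k - 1] + rng.normal()

def kde(points, grid, h):
    """Gaussian kernel estimator of the invariant density on `grid`."""
    z = (grid[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (points.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(-3.0, 3.0, 61)
est = kde(x[1_000:], grid, h=0.25)   # estimate after discarding burn-in
true = np.sqrt((1 - a**2) / (2 * np.pi)) * np.exp(-grid**2 * (1 - a**2) / 2)
```

With ergodic samples the estimator converges to the invariant density despite the serial dependence of the chain, which is the regime the thesis's asymptotic theory addresses.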
|
510 |
Identification of Markov Processes within a Wind Turbine Array Boundary Layer. Melius, Matthew Scott, 23 August 2013 (has links)
The Markovianity within a wind turbine array boundary layer is explored for data taken in a wind tunnel containing a model wind turbine array. A stochastic analysis of the data is carried out using Markov chain theory. The data were obtained via hot-wire anemometry, thus providing point velocity statistics. The theory of Markovian processes is applied to obtain a statistical description of longitudinal velocity increments inside the turbine wake using conditional probability density functions. It is found that two- and three-point conditional probability density functions are similar for scale differences larger than the Taylor micro-scale. This result is quantified by use of the Wilcoxon rank-sum test, which verifies that this relationship holds independent of the initial scale selection outside of the near-wake region behind a wind turbine. Furthermore, the locations which demonstrate Markovian properties exhibit a well-defined inertial sub-range which follows Kolmogorov's -5/3 scaling behavior. Results indicate the existence of Markovian properties at scales on the order of the Taylor micro-scale, λ, for most locations in the wake. The exceptions are directly behind the tips of the rotor and the hub, where the complex turbulent interactions characteristic of the near-wake influence the Markov process. The presence of a Markov process in the remaining locations leads to a characterization of the multi-point statistics of the wind turbine wakes using the most recent states of the flow.
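The two- versus three-point comparison via the Wilcoxon rank-sum test can be sketched on a surrogate series (an AR(1) stand-in for the hot-wire data, which is Markov by construction; the conditioning values are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(5)

# Surrogate "velocity" signal: an AR(1) chain, Markov by construction.
a = 0.9
u = np.empty(200_000)
u[0] = 0.0
for k in range(1, u.size):
    u[k] = a * u[k - 1] + rng.normal()

u0, u1, tol = 0.5, 0.0, 0.2
prev, cur, nxt = u[:-2], u[1:-1], u[2:]

# Two-point conditioning: successors of samples near u1.
two_pt = nxt[np.abs(cur - u1) < tol]
# Three-point conditioning: additionally require the predecessor near u0.
three_pt = nxt[(np.abs(cur - u1) < tol) & (np.abs(prev - u0) < tol)]

# For a Markov process the extra conditioning should be irrelevant, so
# the rank-sum test should typically not reject at the 5% level.
stat, p = ranksums(two_pt, three_pt)
```

For the wind tunnel data, running this comparison across pairs of scales is what localizes where in the wake the Markov property holds.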
|