1

Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems.

Levin, Pavel 22 June 2012 (has links)
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A research problem that is often of interest is to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of HIV by their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to model many types of complex processes realistically, it is standard practice to eventually simplify the model in order to perform the state evolution analysis. As we show here, such simplifying approaches can lead to inaccurate and often misleading solutions. We therefore expect our algorithm to find a wide range of applications across different domains.
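
The listing carries only the abstract; as a rough, hypothetical sketch of the kind of computation it describes, the following finds a maximum-probability k-jump path by a Viterbi-style max-product recursion over the CTMC's embedded jump chain. Holding-time densities and boundary conditions, which the thesis does treat, are ignored here, and all rate values are invented.

```python
# Hypothetical sketch: most probable k-jump path in a CTMC, via max-product
# dynamic programming over the embedded jump chain. Not the thesis's exact
# algorithm.
import numpy as np

def most_probable_path(Q, start, n_jumps):
    """Q: CTMC generator (rows sum to 0). Returns (path, probability)."""
    rates = -np.diag(Q)                    # exit rate q_i of each state
    P = Q / rates[:, None]                 # embedded chain: P[i, j] = q_ij / q_i
    np.fill_diagonal(P, 0.0)

    n = Q.shape[0]
    best = np.zeros((n_jumps + 1, n))      # best[k, j]: max prob of j after k jumps
    best[0, start] = 1.0
    back = np.zeros((n_jumps + 1, n), dtype=int)
    for k in range(1, n_jumps + 1):
        scores = best[k - 1][:, None] * P  # scores[i, j]: reach j from i
        back[k] = scores.argmax(axis=0)
        best[k] = scores.max(axis=0)

    j = int(best[n_jumps].argmax())        # backtrack from best terminal state
    path = [j]
    for k in range(n_jumps, 0, -1):
        j = int(back[k, j])
        path.append(j)
    return path[::-1], float(best[n_jumps].max())

Q = np.array([[-1.0, 0.7, 0.3],
              [0.4, -0.9, 0.5],
              [0.2, 0.3, -0.5]])
print(most_probable_path(Q, start=0, n_jumps=3))
```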
2

Monotonicity and complete monotonicity for continuous-time Markov chains

Dai Pra, Paolo, Louis, Pierre-Yves, Minelli, Ida January 2006 (has links)
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
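
For reference, the two notions can be stated roughly as follows (standard formulations; the paper's precise definitions on a finite poset may differ in detail):

```latex
% Monotonicity: the semigroup (P_t) preserves increasing functions on the
% poset (S, <=):
\[
  f \ \text{increasing} \implies P_t f \ \text{increasing}
  \qquad \text{for all } t \ge 0 .
\]
% Complete monotonicity: all initial states can be coupled on one
% probability space so that the partial order is preserved pathwise:
\[
  \exists \, (X_t^x)_{x \in S} \ \text{with} \ X_0^x = x \ \text{such that}
  \quad x \le y \implies X_t^x \le X_t^y
  \quad \text{for all } t \ge 0 \ \text{a.s.}
\]
```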
3

Modelling of Safety Concepts for Autonomous Vehicles using Semi-Markov Models

Bondesson, Carl January 2018 (has links)
Autonomous vehicles will soon be a reality in everyday life, but before they can be used commercially they must be proven safe. The current standard for functional safety on roads, ISO 26262, does not yet cover autonomous vehicles, so this project uses an approach based on semi-Markov models to assess safety. A semi-Markov process is a stochastic process described by a state-space model in which the holding times between state transitions may follow arbitrary distributions. The approach is realized as a MATLAB tool in which the user can assess safety through a steady-state-based analysis called a Loss and Risk based measure of safety. The tool can assess the safety of semi-Markov systems as long as they are irreducible and positive recurrent. For systems that fulfill these properties, it is possible to draw conclusions about the safety of the system through a risk analysis, and about which autonomous-driving level the system is in through a sensitivity analysis. The developed tool, and the semi-Markov modelling approach behind it, may be a good complement to ISO 26262.
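
A minimal sketch of the steady-state computation underlying such a tool (in Python rather than MATLAB, with invented numbers; the thesis's Loss and Risk measure builds on these long-run probabilities but is not reproduced here): for an irreducible, positive-recurrent semi-Markov process, the long-run fraction of time in each state weights the embedded chain's stationary distribution by the mean sojourn times.

```python
# Sketch, assuming irreducibility and positive recurrence: long-run time
# fractions of a semi-Markov process. Toy model and rates are invented.
import numpy as np

def semi_markov_steady_state(P, mean_sojourn):
    """P: embedded jump-chain transition matrix; mean_sojourn: mean holding
    time per state. Returns long-run fraction of time in each state."""
    n = P.shape[0]
    # Stationary distribution nu of the embedded chain: nu P = nu, sum = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = nu * mean_sojourn          # weight by mean sojourn times
    return p / p.sum()

# Toy 3-state safety model: nominal -> degraded -> failed -> repaired.
P = np.array([[0.0, 1.0, 0.0],
              [0.6, 0.0, 0.4],
              [1.0, 0.0, 0.0]])
tau = np.array([100.0, 5.0, 1.0])  # mean sojourn times (arbitrary units)
print(semi_markov_steady_state(P, tau))
```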
4

Performance analysis of the general packet radio service

Lindemann, Christoph, Thümmler, Axel 17 December 2018 (has links)
This paper presents an efficient and accurate analytical model for the radio interface of the general packet radio service (GPRS) in a GSM network. The model is utilized for investigating how many packet data channels should be allocated for GPRS under a given amount of traffic in order to guarantee appropriate quality of service. The presented model constitutes a continuous-time Markov chain. The Markov model represents the sharing of radio channels by circuit switched GSM connections and packet switched GPRS sessions under a dynamic channel allocation scheme. In contrast to previous work, the Markov model explicitly represents the mobility of users by taking into account arrivals of new GSM and GPRS users as well as handovers from neighboring cells. Furthermore, we take into account TCP flow control for the GPRS data packets. To validate the simplifications necessary for making the Markov model amenable to numerical solution, we provide a comparison of the results of the Markov model with a detailed simulator on the network level.
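    
The paper's actual state space is much richer, but the core numerical task it implies is standard: solve the global balance equations pi Q = 0 of a finite CTMC. Below, a toy birth-death sketch over the number of busy channels; all parameters are invented.

```python
# Generic sketch, not the paper's model: steady state of a finite CTMC,
# with a toy birth-death chain standing in for busy channels.
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def birth_death_generator(capacity, arrival_rate, service_rate):
    Q = np.zeros((capacity + 1, capacity + 1))
    for k in range(capacity):
        Q[k, k + 1] = arrival_rate            # a new session arrives
        Q[k + 1, k] = (k + 1) * service_rate  # one of k+1 busy channels frees
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q = birth_death_generator(capacity=8, arrival_rate=5.0, service_rate=1.0)
pi = ctmc_steady_state(Q)
print("blocking probability:", pi[-1])  # all channels busy in this toy chain
```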
5

Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks

Ben Hammouda, Chiheb 22 July 2020 (has links)
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel efficient simulation techniques for a reliable and fast estimation of various statistical quantities for stochastic biological and chemical systems under the framework of Stochastic Reaction Networks. In the first work, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator for systems characterized by simultaneously fast and slow timescales. Our hybrid multilevel estimator uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit tau-leap method is not applicable due to numerical instability issues. In a second work, we address another challenge in this context, the high-kurtosis phenomenon observed at the deep levels of the MLMC estimator. We propose a novel approach that combines the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Our theoretical estimates and numerical analysis show that our method improves the robustness and complexity of the multilevel estimator, with a negligible additional cost. In the second part of this thesis, we design novel methods for pricing financial derivatives. Option pricing is usually challenging due to (1) the high dimensionality of the input space and (2) the low regularity of the integrand in the input parameters. We address these challenges by developing different techniques for smoothing the integrand to uncover the available regularity. Then, we approximate the resulting integrals using hierarchical quadrature methods combined with Brownian bridge construction and Richardson extrapolation. In the first work, we apply our approach to efficiently price options under the rough Bergomi model. This model exhibits several numerical and theoretical challenges that make classical numerical pricing methods either inapplicable or computationally expensive. In a second work, we design a numerical smoothing technique for cases where analytic smoothing is impossible. Our analysis shows that adaptive sparse-grid quadrature combined with numerical smoothing outperforms the Monte Carlo approach. Furthermore, our numerical smoothing improves the robustness and the complexity of the MLMC estimator, particularly when estimating density functions.
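
The hybrid tau-leap estimator itself is not reproduced in the abstract; the sketch below shows only the generic MLMC telescoping idea it builds on, for a toy geometric Brownian motion with Euler-Maruyama coupling. All parameters are hypothetical.

```python
# Generic MLMC skeleton (not the thesis's hybrid estimator): the quantity
# of interest E[P_L] is split as E[P_0] + sum_l E[P_l - P_{l-1}], and each
# correction is estimated from coarse/fine paths driven by the same noise.
import numpy as np

rng = np.random.default_rng(0)

def coupled_payoffs(level, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama for dX = mu X dt + sigma X dW at two resolutions,
    sharing Brownian increments. Requires level >= 1."""
    n_fine = 2 ** level
    dt = T / n_fine
    xf = np.full(n_samples, x0)   # fine path, 2**level steps
    xc = np.full(n_samples, x0)   # coarse path, 2**(level-1) steps
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_samples)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_samples)
        xf += mu * xf * dt + sigma * xf * dw1
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * (2 * dt) + sigma * xc * (dw1 + dw2)  # shared noise
    return xf, xc

def level_zero(n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    dw = rng.normal(0.0, np.sqrt(T), n_samples)
    return x0 + mu * x0 * T + sigma * x0 * dw

def mlmc_estimate(max_level, n_samples):
    est = level_zero(n_samples).mean()
    for level in range(1, max_level + 1):
        fine, coarse = coupled_payoffs(level, n_samples)
        est += (fine - coarse).mean()
    return est

print(mlmc_estimate(max_level=6, n_samples=20000))  # ~ exp(0.05) ~ 1.051
```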
6

Scalable analysis of stochastic process algebra models

Tribastone, Mirco January 2010 (has links)
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components that constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. The computation of performance indices such as throughput, utilisation, and average response time are interpreted deterministically as functions of the ODE solution and are related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach herein presented, and a graphical framework for model editing and visualisation of performance evaluation results.
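
A minimal illustration of the fluid-limit idea (not PEPA syntax, and with invented parameters): N clients cycling between thinking and waiting for one of C servers give a population CTMC whose ODE approximation has a size independent of N.

```python
# Fluid-limit sketch, assuming a toy client-server population model: the
# ODE below tracks the expected number of waiting clients; its dimension
# does not grow with N, unlike the underlying CTMC's state space.
from scipy.integrate import solve_ivp

N, C = 1000, 50        # clients and servers (invented sizes)
lam, mu = 0.1, 3.0     # think rate and service rate (invented)

def fluid_rhs(t, y):
    waiting = y[0]
    think_done = lam * (N - waiting)   # thinking -> waiting, aggregate rate
    served = mu * min(waiting, C)      # waiting -> thinking, C servers
    return [think_done - served]

sol = solve_ivp(fluid_rhs, (0.0, 50.0), [0.0])
w_ss = sol.y[0, -1]
print(f"waiting clients at steady state ~ {w_ss:.1f}, "
      f"throughput ~ {mu * min(w_ss, C):.1f}")
```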
7

Comparison of Two Parameter Estimation Techniques for Stochastic Models

Robacker, Thomas C 01 August 2015 (has links)
Parameter estimation techniques have been successfully and extensively applied to deterministic models based on ordinary differential equations but are in early development for stochastic models. In this thesis, we first investigate using parameter estimation techniques for a deterministic model to approximate parameters in a corresponding stochastic model. The basis for this approach lies in Kurtz's limit theorem, which implies that for large populations the realizations of the stochastic model converge to the deterministic model. We show for two example models that this approach often fails to estimate parameters well when the population size is small. We then develop a new method, the MCR method, which is unique to stochastic models and provides significantly better estimates and smaller confidence intervals for parameter values. Initial analysis of the new MCR method indicates that it may be a viable method for parameter estimation for continuous-time Markov chain models.
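
As a hedged illustration of the deterministic-fit baseline the thesis evaluates (the MCR method itself is not described in the abstract and is not reproduced here), one can simulate a stochastic pure-death process with Gillespie's algorithm and least-squares fit the corresponding ODE solution. All rates below are invented.

```python
# Sketch of the deterministic-fit baseline, assuming a toy pure-death
# process dx/dt = -gamma x whose stochastic counterpart is simulated
# exactly with Gillespie's algorithm.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def gillespie_death(x0, gamma, t_grid):
    """One realization of a pure-death process, observed on t_grid."""
    t, x, out = 0.0, x0, []
    for t_obs in t_grid:
        while x > 0:
            dt = rng.exponential(1.0 / (gamma * x))  # next death time
            if t + dt > t_obs:
                break   # memorylessness lets us redraw past t_obs
            t += dt
            x -= 1
        out.append(x)
    return np.array(out, dtype=float)

true_gamma, x0 = 0.7, 30
t_grid = np.linspace(0.5, 5.0, 10)
runs = [gillespie_death(x0, true_gamma, t_grid) for _ in range(20)]
data = np.mean(runs, axis=0)

def deterministic(t, gamma):
    return x0 * np.exp(-gamma * t)   # ODE solution fitted to the data

(gamma_hat,), _ = curve_fit(deterministic, t_grid, data, p0=[1.0])
print(f"true gamma = {true_gamma}, deterministic-fit estimate = {gamma_hat:.3f}")
```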
8

Single and Multi-player Stochastic Dynamic Optimization

Saha, Subhamay January 2013 (has links) (PDF)
In this thesis we investigate single- and multi-player stochastic dynamic optimization problems. We consider both discrete- and continuous-time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average cost criterion, where the state process is a discrete-time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable completely observable state variable. We establish the existence of the value of the game and also obtain optimal strategies for both players. We also study a continuous-time zero-sum stochastic game with complete observation. In this case the state is a pure jump Markov process. We investigate the finite horizon total cost criterion. We characterise the value function via appropriate Isaacs equations. This also yields optimal Markov strategies for both players. In the single-player setup we investigate risk-sensitive control of continuous-time Markov chains. We consider both finite and infinite horizon problems. For the finite horizon total cost problem and the infinite horizon discounted cost problem we characterise the value function as the unique solution of appropriate Hamilton-Jacobi-Bellman equations. We also derive optimal Markov controls in both cases. For the infinite horizon average cost case we show the existence of an optimal stationary control. We also give a value iteration scheme for computing the optimal control in the case of finite state and action spaces. Further, we introduce a new class of stochastic processes which we call stochastic processes with "age-dependent transition rates". We give a rigorous construction of the process. We prove that under certain assumptions the process is Feller. We also compute the limiting probabilities for our process. We then study the controlled version of the above process. In this case we take the risk-neutral cost criterion. We solve the infinite horizon discounted cost problem and the average cost problem for this process. The crucial step in analysing these problems is to prove that the original control problem is equivalent to an appropriate semi-Markov decision problem. The value functions and optimal controls are then characterised using this equivalence and the theory of semi-Markov decision processes (SMDPs). The analysis of finite horizon problems differs from that of infinite horizon problems because in this case the idea of converting to an equivalent SMDP does not seem to work. We therefore deal with the finite horizon total cost problem by showing that our problem is equivalent to another appropriately defined discrete-time Markov decision problem. This allows us to characterise the value function and to find an optimal Markov control.
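
The abstract mentions a value iteration scheme for finite state and action spaces; the sketch below is the standard discounted-cost version (after uniformization, continuous-time problems reduce to this form; the thesis's risk-sensitive and average-cost schemes differ in detail, and all numbers are invented).

```python
# Standard value-iteration sketch for a finite, discounted-cost MDP.
import numpy as np

def value_iteration(P, c, beta, tol=1e-10):
    """P[a]: transition matrix under action a; c[a]: cost vector under a;
    beta: discount factor. Returns (value function, optimal policy)."""
    n_actions, n_states = len(P), P[0].shape[0]
    v = np.zeros(n_states)
    while True:
        # Q-values: immediate cost plus discounted expected future cost.
        q = np.array([c[a] + beta * P[a] @ v for a in range(n_actions)])
        v_new = q.min(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=0)
        v = v_new

# Toy 2-state, 2-action example (hypothetical numbers).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # action 1
c = [np.array([1.0, 4.0]), np.array([2.0, 1.0])]
v, policy = value_iteration(P, c, beta=0.95)
print("value:", v, "policy:", policy)
```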
