  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Méthodes numériques pour les processus markoviens déterministes par morceaux / Numerical methods for piecewise-deterministic Markov processes

Brandejsky, Adrien 02 July 2012 (has links)
Piecewise-deterministic Markov processes (PDMPs) were introduced by M.H.A. Davis as a general class of non-diffusive stochastic models. PDMPs are hybrid Markov processes involving deterministic motion punctuated by random jumps. In this thesis, we develop numerical methods that fit the structure of PDMPs and that are based on the quantization of an underlying Markov chain. We address three problems in turn: the approximation of expectations of functionals of a PDMP, the approximation of the moments and of the distribution of an exit time, and the partially observed optimal stopping problem. In the latter part, we also tackle the filtering of a PDMP and establish the dynamic programming equation of the optimal stopping problem. We prove the convergence of all our methods (in most cases with a bound on the speed of convergence) and illustrate them with numerical examples.
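The quantization idea at the heart of this approach can be illustrated with a toy sketch. All names, the example chain, and the grid below are invented for illustration; the thesis works with optimal quantization grids for PDMPs, not this simple autoregressive chain. Each simulated state is projected onto a finite grid (the quantizer) before being propagated, and the expectation is estimated on the quantized chain:

```python
import random

def quantize(x, grid):
    # Nearest-neighbor projection onto a finite grid (the quantizer).
    return min(grid, key=lambda g: abs(g - x))

def estimate_expectation(f, n_paths=20000, n_steps=10, seed=0):
    """Estimate E[f(X_n)] for a toy Markov chain X_{k+1} = 0.5*X_k + noise,
    replacing each state by its quantized value before propagating it."""
    rng = random.Random(seed)
    grid = [i / 4 for i in range(-8, 9)]  # 17-point grid on [-2, 2]
    total = 0.0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x = quantize(0.5 * x + rng.gauss(0, 0.5), grid)
        total += f(x)
    return total / n_paths
```

The appeal of the method is that, once a grid is fixed, quantities such as value functions can be computed by finite sums over grid points, with an error controlled by the grid resolution.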
62

Dynamický model ceny jízdného / Dynamic fare model

Kislinger, Jan January 2017 (has links)
Building a dynamic fare model consists of two tasks: estimating the demand for train tickets and multistage optimization of the fare price. In this thesis we introduce an inhomogeneous Markov process model for the ticket-selling process. Because of the complexity of the state space, the optimization problem must be solved with simulation methods. The solution was implemented in R for single-stage and two-stage problems. Before this application, we summarize the theory of inhomogeneous Markov processes, with special attention to processes with separable inhomogeneity. We then propose methods for estimating the intensity using maximum-likelihood theory, and we describe and compare two algorithms for simulated optimization.
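The intensity-estimation step can be sketched minimally as follows. For illustration only, the intensity is assumed piecewise constant rather than the thesis's separable form, and the function names are invented: ticket purchases are simulated as an inhomogeneous Poisson process by thinning, and the per-period rates are recovered by maximum likelihood, which here reduces to event counts divided by period lengths.

```python
import random

def simulate_nhpp(rates, period=1.0, seed=1):
    """Sample event times from a Poisson process whose intensity is
    piecewise constant: rates[k] on [k*period, (k+1)*period)."""
    rng = random.Random(seed)
    times, t, end = [], 0.0, period * len(rates)
    lam_max = max(rates)
    while True:
        t += rng.expovariate(lam_max)  # propose with the dominating rate
        if t >= end:
            return times
        if rng.random() < rates[int(t / period)] / lam_max:
            times.append(t)            # accept (thinning step)

def mle_rates(times, n_periods, period=1.0):
    # MLE for a piecewise-constant intensity: events in a period / its length.
    counts = [0] * n_periods
    for t in times:
        counts[int(t / period)] += 1
    return [c / period for c in counts]
```

A falling rate profile, as when early-bird demand dries up close to departure, shows up directly in the estimates.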
63

Processos de renovação obtidos por agregação de estados a partir de um processo markoviano / Renewal processes obtained by aggregation of states from a markovian process

Carvalho, Walter Augusto Fonsêca de, 1964- 24 August 2018 (has links)
Orientadores: Nancy Lopes Garcia, Alexsandro Giacomo Grimbert Gallo / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: This thesis is devoted to the study of binary renewal processes obtained by aggregation of states from Markov processes with a finite alphabet. In the first part, we use a matrix approach to obtain conditions under which the aggregated process belongs to each of the following classes: (1) Markov of finite order, (2) process of infinite order with continuous transition probabilities, (3) Gibbsian process. The second part deals with the distance d between binary renewal processes; we obtain conditions under which this distance can be attained between such processes. / Doutorado / Estatística / Doutor em Estatística
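The aggregation construction can be sketched on a hypothetical three-state example (not taken from the thesis): lump the states of a finite Markov chain into a binary alphabet and compare empirical conditional probabilities under different contexts. A genuine difference between the conditionals shows that the aggregated process is not first-order Markov, which is exactly why the order and regularity of such lumped processes need the finer analysis the thesis develops.

```python
import random

def simulate_aggregated(P, lump, n, seed=2):
    """Simulate a finite Markov chain with transition matrix P, observing
    only the lumped (aggregated) binary label lump[state] at each step."""
    rng = random.Random(seed)
    s, out = 0, []
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[s]):
            acc += p
            if u < acc:
                break
        s = j
        out.append(lump[s])
    return out

def cond_prob(seq, context):
    """Empirical P(next symbol = 1 | preceding symbols = context)."""
    k, hits, ones = len(context), 0, 0
    for i in range(k, len(seq)):
        if seq[i - k:i] == context:
            hits += 1
            ones += seq[i]
    return ones / hits
```

Here states 1 and 2 carry the same label but behave very differently, so the context 0,1 and the context 1,1 predict the next label differently: the lumped process remembers more than one symbol.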
64

Computing Agent Competency in First Order Markov Processes

Cao, Xuan 06 December 2021 (has links)
Artificial agents are usually designed to achieve specific goals. An agent's competency can be defined as its ability to accomplish its goals under different conditions. This thesis restricts attention to a specific type of goal, namely reaching a desired state without exceeding a tolerance threshold of undesirable events in a first-order Markov process. For such goals, the state-dependent competency of an agent can be defined as the probability of reaching the desired state, without exceeding the threshold and within a time limit, given an initial state. The thesis further defines total competency as the set of state-dependent competency relationships over all possible initial states. A Monte Carlo approach establishes a baseline for estimating state-dependent competency: it (a) samples trajectories from an agent behaving in the environment and (b) fits the competency curve to the trajectory samples by nonlinear regression. The thesis then presents a recurrence relation for total competency and, based on it, an algorithm for computing total competency whose worst-case computation time grows quadratically with the size of the state space. On simple maze-based Markov chains, the Monte Carlo estimates agree with the results computed by the proposed algorithm. Lastly, the thesis explores a special case in which multiple sequential atomic goals make up a complex goal. It models a set of sequential goals as a Bayesian network and presents an equation, based on the chain rule, for deriving the competency for the complex goal from the competencies for the atomic goals. Experiments on the canonical taxi problem with sequential goals show the correctness of this Bayesian-network-based decomposition.
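The Monte Carlo baseline described in the abstract might look like the following sketch. The chain, parameter names, and tolerances are invented for illustration, and the regression step over trajectory samples is omitted; only the trajectory-sampling estimate of state-dependent competency is shown.

```python
import random

def competency(P, start, goal, bad, tol, horizon, n_trials=20000, seed=3):
    """Monte Carlo estimate of state-dependent competency: the probability
    of reaching `goal` from `start` within `horizon` steps while entering
    states in `bad` at most `tol` times."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        s, bad_count = start, 0
        for _ in range(horizon):
            u, acc = rng.random(), 0.0
            for j, p in enumerate(P[s]):
                acc += p
                if u < acc:
                    break
            s = j
            if s in bad:
                bad_count += 1
                if bad_count > tol:
                    break           # too many undesirable events: failure
            if s == goal:
                wins += 1
                break               # goal reached within tolerance: success
        # falling out of the loop means the horizon expired: failure
    return wins / n_trials
```

As expected, loosening the tolerance threshold can only increase the competency estimate for the same chain and goal.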
65

Processus de Markov déterministes par morceaux branchants et problème d’arrêt optimal, application à la division cellulaire / Branching piecewise deterministic Markov processes and optimal stopping problem, applications to cell division

Joubaud, Maud 25 June 2019 (has links)
Piecewise-deterministic Markov processes (PDMPs) form a large class of stochastic processes characterized by deterministic evolution between random jumps. They fall into the class of hybrid processes, with a discrete mode component and a continuous state component. Between jumps, the continuous component evolves deterministically; at a jump, a Markov kernel selects the new values of the discrete and continuous components. In this thesis, we extend the construction of PDMPs to state variables taking values in (infinite-dimensional) measure spaces, in order to model cell populations while keeping track of the individual characteristics of each cell. We present our construction of measure-valued PDMPs and establish their Markov property. For these processes we study an optimal stopping problem: choosing the best admissible stopping time to optimize the expectation of some functional of the process, called the value function. We show that the value function can be constructed recursively via dynamic programming equations, and we construct a family of $\epsilon$-optimal stopping times. We then study a simple finite-dimensional real-valued PDMP, the TCP process, for which we build an Euler approximation scheme and estimate several types of error. Numerical simulations illustrate the results.
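The TCP process mentioned above is a classical scalar PDMP: the state grows at unit rate and is halved at jumps occurring at a state-dependent rate, here taken as λ(x) = x, a common choice in the literature (the step size and seed below are illustrative, not the thesis's). A naive Euler-type scheme replaces the jump mechanism by a per-step Bernoulli trial:

```python
import random

def tcp_euler(x0, h, n_steps, seed=4):
    """Euler-type scheme for the TCP window-size PDMP: deterministic drift
    dx/dt = 1, multiplicative jumps x -> x/2 at state-dependent rate
    lambda(x) = x.  Over a small step h, a jump fires with probability
    approximately lambda(x) * h."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n_steps):
        if rng.random() < min(1.0, x * h):
            x = x / 2.0          # jump: halve the window
        else:
            x = x + h            # no jump: follow the deterministic flow
        path.append(x)
    return path
```

In stationarity this model satisfies E[X²] = 2 (apply the generator f ↦ f′(x) + x(f(x/2) − f(x)) to f(x) = x), which gives a quick sanity check on the discretization.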
66

Applications of the Helmholtz-Hodge Decomposition to Networks and Random Processes

Strang, Alexander 07 September 2020 (has links)
No description available.
67

Can students' progress data be modeled using Markov chains? / Kan studenters genomströmning modelleras med Markovkedjor?

Carlsson, Filip January 2019 (has links)
In this thesis a Markov chain model is developed that can be used for analysing students' performance and academic progress. Being able to evaluate students' progress is useful for any educational system: it gives a better understanding of how students reason, and it can support important decisions and planning. Such a tool can help managers of an educational institution establish a more effective educational policy, ensuring a better position in the educational market. To show that it is reasonable to use a Markov chain model for this purpose, a goodness-of-fit test for how well the data fit such a model is constructed and applied. The test shows that we cannot reject the hypothesis that the data follow a Markov chain model.
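The first modelling step, estimating transition probabilities from observed student trajectories, can be sketched as follows. The status codes are invented for this example, and the thesis's goodness-of-fit test is not reproduced here; the sketch shows only the maximum-likelihood estimate, which is the row-normalized count of observed transitions.

```python
from collections import defaultdict

def estimate_transitions(sequences):
    """Maximum-likelihood transition probabilities from observed state
    sequences (e.g. per-semester student status codes)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive (from, to) pairs
            counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}
```

On a handful of toy trajectories the estimates are just the observed transition frequencies, and a chi-square comparison of these frequencies across longer contexts is the natural basis for the kind of fit test the thesis constructs.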
68

Convergence Formulas for the Level-increment Truncation Approximation of M/G/1-type Markov Chains / M/G/1型マルコフ連鎖のレベル増分切断近似に対する収束公式

Ouchi, Katsuhisa 24 November 2023 (has links)
Kyoto University / Doctoral thesis, Doctor of Informatics / Graduate School of Informatics, Department of Systems Science, Kyoto University / Examining committee: Prof. Toshiyuki Tanaka (田中 利幸, chair), Prof. Hidetoshi Shimodaira (下平 英寿), Assoc. Prof. Junya Honda (本多 淳也)
69

Reliability Based Classification of Transitions in Complex Semi-Markov Models / Tillförlitlighetsbaserad klassificering av övergångar i komplexa semi-markovmodeller

Fenoaltea, Francesco January 2022 (has links)
Markov processes have a long history of being used to model safety-critical systems. With the development of autonomous vehicles and their increased complexity, however, Markov processes have been shown not to be sufficiently precise for reliability calculations. This has created the need to consider a more general stochastic process, the semi-Markov process (SMP). SMPs allow transitions with general distributions between states and can be used to model complex systems precisely, at the cost of increased complexity when calculating system reliability. Methods to increase the interpretability of the system and to allow appropriate approximations have therefore been considered and researched. In this thesis, a novel classification approach for transitions in an SMP is defined and complemented with several conjectures and properties. A transition is classified as good or bad by comparing the reliability of the original system with the reliability of a perturbed system in which the studied transition is more likely to occur. Cases are presented to illustrate the use of this classification technique, and multiple suggestions and conjectures for future work are presented and discussed.
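The classification idea, comparing the reliability of the original system with that of a perturbed system, can be sketched on a toy semi-Markov model. The states, holding-time distributions, and parameters below are all invented for illustration; the point is only that holding times need not be exponential, and that perturbing one transition probability shifts the reliability in a measurable direction.

```python
import random

def smp_reliability(p_degrade, t_max, n_runs=20000, seed=5):
    """Monte Carlo reliability R(t_max) for a tiny semi-Markov model:
    from OK, after a Weibull-distributed holding time, the system either
    degrades (prob p_degrade) or is restored to OK; from DEGRADED it
    fails after a uniformly distributed delay."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_runs):
        t, state = 0.0, "OK"
        while t < t_max:
            if state == "OK":
                t += rng.weibullvariate(5.0, 1.5)  # general holding time
                if t >= t_max:
                    break                          # survived the horizon
                state = "DEGRADED" if rng.random() < p_degrade else "OK"
            else:  # DEGRADED
                t += rng.uniform(0.5, 1.5)
                state = "FAILED"
                break
        if state != "FAILED" or t >= t_max:
            survived += 1                          # no failure before t_max
    return survived / n_runs
```

Increasing the probability of the OK → DEGRADED transition lowers the reliability, so in the terminology above that transition would be classified as bad.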
70

Temporal and Spatial Analysis of Water Quality Time Series

Khalil Arya, Farid January 2015 (has links)
No description available.
