151 |
Simulation parfaite de réseaux fermés de files d’attente et génération aléatoire de structures combinatoires / Perfect sampling of closed queueing networks and random generation of combinatorial objects
Rovetta, Christelle 20 June 2017 (has links)
Random generation of combinatorial objects is an important problem in many fields of research (communication networks, statistical physics, theoretical computer science, combinatorics, etc.). The samples are commonly required to follow the stationary distribution of an ergodic Markov chain. In 1996, Propp and Wilson introduced an algorithm, also called a perfect sampling algorithm, that produces unbiased samples from the stationary distribution. It requires simulating all possible states of the chain in parallel. Several strategies have been devised to avoid simulating every trajectory. They are intrinsically tied to the structure of the chain under consideration and rely essentially on a monotonicity property, on the construction of bounding processes that exploit the lattice structure of the state space, or on the local character of the transitions.
In the field of communication networks, one is interested in the performance of queueing networks. These fall into two groups: networks whose stationary distribution has a product form, which is easy to compute, and the rest, for which random generation is used for performance evaluation. Thanks to the structure of their associated chains, open queueing networks lend themselves well to perfect sampling, but closed queueing networks do not. The difficulty lies in the size of the state space, which is exponential in the number of queues, combined with a global constraint, namely a constant number of customers. The main contribution of this thesis is a new data structure called a diagram. Inspired by dynamic programming, it introduces a new technique for constructing bounding processes.
The first part of the manuscript is devoted to implementing the Propp and Wilson algorithm for closed networks that do not necessarily have a product form. With sets of states represented by a diagram, the transition operation for the bounding process has polynomial complexity in the number of queues and customers. The technique is then extended to closed multiclass networks and to networks with synchronizations. A specification of the sets of objects that can be represented by a diagram, together with generic algorithms operating on this data structure, is also proposed. The Boltzmann method is another unbiased sampling technique. Based on analytic combinatorics, it produces uniform samples of objects belonging to the same combinatorial class. It is used in the second part of the thesis to sample the stationary distribution of closed networks with product form and to generate multisets of fixed cardinality; diagrams are put to use once again in this setting.
Finally, the third part presents the software produced during this thesis, which implements diagrams and perfect sampling of closed queueing networks.
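To make the perfect-sampling idea above concrete, here is a minimal sketch of Propp and Wilson's coupling from the past for a small monotone birth-death chain, using the sandwiching strategy the abstract mentions (the toy chain, its size and its up-probability are illustrative assumptions, not the thesis's diagram-based construction for closed networks):

```python
import random

def cftp_sample(n_states=5, p_up=0.3, seed=0):
    """Coupling from the past for a monotone birth-death chain on {0..n-1}.

    Returns one exact (unbiased) sample from the stationary distribution:
    instead of simulating all states, only the bottom and top trajectories
    are run with shared randomness; monotonicity sandwiches everything else.
    """
    rng = random.Random(seed)

    def step(x, u):
        # Monotone update rule driven by a shared uniform u.
        if u < p_up:
            return min(x + 1, n_states - 1)
        return max(x - 1, 0)

    T = 1
    us = []  # us[k] is the randomness used for the step at time -(k+1)
    while True:
        while len(us) < T:          # extend randomness further into the past
            us.append(rng.random())
        lo, hi = 0, n_states - 1    # start extreme states at time -T
        for t in range(T - 1, -1, -1):   # run forward to time 0
            u = us[t]
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:                # all sandwiched trajectories coalesced
            return lo
        T *= 2                      # restart from further in the past
```

Note the crucial CFTP detail: the randomness for each past time step is fixed once and reused on every restart, which is what makes the returned state exactly stationary rather than approximately so.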
|
152 |
Stochastic Performance and Maintenance Optimization Models for Pavement Infrastructure Management
Mohamed S. Yamany (8803016) 07 May 2020 (has links)
<p>Highway infrastructure, including roads and pavements, contributes significantly to a country’s economic growth and quality of life, but also to negative environmental impacts. Hence, highway agencies strive to make efficient and effective use of their limited funding to maintain their pavement infrastructure in good structural and functional condition. This requires predicting pavement performance and scheduling maintenance interventions accurately and reliably, using appropriate performance-modeling and maintenance-optimization methodologies, while accounting for the influence of key variables and the uncertainty inherent in pavement condition data.</p>
<p>Despite the enormous research efforts
toward stochastic pavement performance modeling and maintenance optimization,
several research gaps still exist. Prior research has not provided a synthesis
of Markovian models and their associated methodologies that could assist
researchers and highway agencies in selecting the Markov methodology that is
appropriate for use with the data available to the agency. In addition, past
Markovian pavement performance models did not adequately account for the
marginal effects of the preventive maintenance (PM) treatments due to the lack
of historical PM data, resulting in potentially unreliable models. The primary
components of a Markov model are the transition probability matrix, number of
condition states (NCS), and length of duty cycle (LDC). Previous Markovian pavement performance
models were developed using NCS and LDC based on data availability, pavement
condition indicator and data collection frequency. However, the selection of
NCS and LDC should also be based on producing pavement performance models with
high levels of prediction accuracy. Prior stochastic pavement maintenance
optimization models account for the uncertainty of the budget allocated to
pavement preservation at the network level. Nevertheless, variables such as pavement condition deterioration and improvement, which are also subject to uncertainty, were not included in stochastic optimization models because of the expected size of the resulting optimization problem.</p><p>The overarching goal of this dissertation
is to contribute to filling these research gaps with a view to improving
pavement management systems, helping to predict probabilistic pavement
performance and schedule pavement preventive maintenance accurately and
reliably. This study reviews Markovian pavement performance models using
various Markov methodologies and transition probabilities estimation methods,
presents a critical analysis of the different aspects of Markovian models as
applied in the literature, reveals gaps in knowledge, and offers suggestions
for bridging those gaps. This dissertation develops a decision tree which could
be used by researchers and highway agencies to select appropriate Markov
methodologies to model pavement performance under different conditions of data
availability. Failing to incorporate the impact of PM into probabilistic pavement performance models, owing to the absence of historical PM data, may result in erroneous and often biased pavement condition predictions, leading to suboptimal maintenance decisions. Hence, this research
introduces and validates a hybrid approach to incorporate the impact of PM into
probabilistic pavement performance models when historical PM data are limited
or absent. The types of PM treatments and their times of application are
estimated using two approaches: (1) Analysis of the state of practice of
pavement maintenance through literature and expert surveys, and (2) Detection
of PM times from probabilistic pavement performance curves. Using a newly
developed optimization algorithm, the estimated times and types of PM
treatments are integrated into pavement condition data. A non-homogeneous
Markovian pavement performance model is developed by estimating the transition
probabilities of pavement condition using the ordered-probit method. The
developed hybrid approach and performance models are validated using cross-validation
with out-of-sample data and through surveys of subject matter experts in
pavement engineering and management. The results show that the hybrid approach
and models developed can predict probabilistic pavement condition incorporating
PM effects with an accuracy of 87%.</p><p>The key Markov chain methodologies,
namely, homogeneous, staged-homogeneous, non-homogeneous, semi- and hidden
Markov, have been used to develop stochastic pavement performance models. This
dissertation hypothesizes that the NCS and LDC significantly influence the
prediction accuracy of Markov models and that the nature of such influence
varies across the different Markov methodologies. As such, this study develops
and compares the Markovian pavement performance models using empirical data and
investigates the sensitivity of Markovian model prediction accuracy to the NCS
and LDC. The results indicate that the semi-Markov model is generally statistically superior to the homogeneous and staged-homogeneous Markov models (except for a few NCS and LDC combinations) and that Markovian model prediction accuracy
is significantly sensitive to the NCS and LDC: an increase in NCS improves the
prediction accuracy until a certain NCS threshold after which the accuracy
decreases, plausibly due to data overfitting. In addition, an increase in LDC
improves the prediction accuracy when the NCS is small.</p><p>Scheduling pavement maintenance at the road-network level without considering the uncertainty of pavement condition deterioration and improvement over the long term (typically the pavement design life) is likely to result in mistimed maintenance applications and suboptimal decisions. Hence, this dissertation develops stochastic pavement
maintenance optimization models that account for the uncertainty of pavement
condition deterioration and improvement as well as the budget constraint. The
objectives of the stochastic optimization models are to minimize the overall
deterioration of road network condition while minimizing the total maintenance
cost of the road network over a 20-year planning horizon (typical pavement
design life). A multi-objective genetic algorithm (MOGA) is used because of its robust search capabilities, which help it approach globally optimal solutions. In order to
reduce the number of combinations of solutions of stochastic MOGA models, three
approaches are proposed and applied: (1) using PM treatments that are most
commonly used by highway agencies, (2) clustering pavement sections based on
their ages, and (3) creating a filtering constraint that applies a rest period
after treatment applications. The results of the stochastic MOGA models show
that the Pareto optimal solutions change significantly when the uncertainty of
pavement condition deterioration and improvement is included.</p>
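The homogeneous Markov performance models reviewed in this dissertation reduce, at their core, to repeated multiplication of a condition distribution by a transition probability matrix over successive duty cycles. The sketch below uses a hypothetical 4-state matrix (the NCS, LDC and all probabilities are made up for illustration, not estimated values from the study):

```python
import numpy as np

def predict_condition(p0, P, cycles):
    """Propagate a pavement condition distribution through `cycles`
    duty cycles of a homogeneous Markov deterioration model."""
    dist = np.asarray(p0, dtype=float)
    for _ in range(cycles):
        dist = dist @ P
    return dist

# Hypothetical 4-state transition matrix (state 0 = best condition).
# Rows sum to 1 and the matrix is upper-triangular because, between
# treatments, condition can only stay the same or worsen.
P = np.array([
    [0.80, 0.15, 0.05, 0.00],
    [0.00, 0.75, 0.20, 0.05],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],
])
```

A PM treatment would be modeled the same way, as an additional "improvement" matrix applied at the treatment year; the choice of NCS and LDC then amounts to choosing the dimension of `P` and the real-time length of one multiplication step.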
|
153 |
Performance analysis of the general packet radio service
Lindemann, Christoph; Thümmler, Axel 17 December 2018 (has links)
This paper presents an efficient and accurate analytical model for the radio interface of the general packet radio service (GPRS) in a GSM network. The model is used to investigate how many packet data channels should be allocated to GPRS under a given amount of traffic in order to guarantee an appropriate quality of service. The model constitutes a continuous-time Markov chain representing the sharing of radio channels by circuit-switched GSM connections and packet-switched GPRS sessions under a dynamic channel allocation scheme. In contrast to previous work, the Markov model explicitly represents user mobility by taking into account arrivals of new GSM and GPRS users as well as handovers from neighboring cells, and it accounts for TCP flow control of the GPRS data packets. To validate the simplifications necessary to make the Markov model amenable to numerical solution, we compare its results with those of a detailed network-level simulator.
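A finite continuous-time Markov chain of this kind is solved for its stationary distribution directly from the generator matrix. The sketch below does this for a toy M/M/c/c-style channel-occupancy chain (the birth-death generator and its rates are illustrative assumptions, far simpler than the paper's model with mobility and TCP effects):

```python
import numpy as np

def ctmc_stationary(Q):
    """Stationary distribution of a finite CTMC: solve pi @ Q = 0 with
    the normalization sum(pi) = 1, via least squares on the stacked system."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[n] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def erlang_generator(lam, mu, c):
    """Toy generator: c channels, Poisson arrivals at rate lam, each busy
    channel completing at rate mu; state = number of busy channels."""
    Q = np.zeros((c + 1, c + 1))
    for i in range(c + 1):
        if i < c:
            Q[i, i + 1] = lam          # a new call/session grabs a channel
        if i > 0:
            Q[i, i - 1] = i * mu       # one of i busy channels completes
        Q[i, i] = -Q[i].sum()          # diagonal makes rows sum to zero
    return Q
```

For this birth-death structure the stationary distribution is the truncated Poisson of Erlang's loss formula, which gives a convenient correctness check.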
|
154 |
Étude expérimentale et modélisation de mélangeurs convectifs : agitation de poudres de différentes coulabilités / Experimental study and modeling of convective mixers : agitation of powders of different flowabilities
Legoix, Léonard 25 November 2016 (has links)
Mixing operations are often difficult to master because the dynamic laws governing them remain poorly understood; as a result, predicting the homogeneity of a powder mix still requires numerous trials. In this PhD work, a methodology is developed to support the construction of predictive models for powder mixers and to bring out possible invariants for scale-up of these processes.
The stirring of powders with different flow resistances is studied in a 48 L Triaxe® convective planetary mixer. Rheological measurements at laboratory scale (FT4 rheometer, volumenometer) are performed to better understand powder behavior at the larger mixer scale. A prototype convective blender was built for this work: a versatile device consisting of a transparent cylindrical vessel stirred by a four-blade impeller, which makes it possible to observe flow regimes and mechanisms while performing rheological measurements. Two flow regimes (rolling and cataracting) and three flow mechanisms (convection, diffusion and avalanching) were identified. These mechanisms were used to build a stochastic model whose parameters were estimated experimentally for a free-flowing powder and for a cohesive powder.
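A stochastic mixing model built from convection and diffusion mechanisms can be sketched as a Markov transition matrix acting on the concentration profile of a ring of cells; the cell count and the convection and diffusion probabilities below are illustrative assumptions, not the parameters fitted in the thesis (which also includes an avalanching mechanism):

```python
import numpy as np

def mixing_step(conc, p_conv=0.2, p_diff=0.1):
    """One stirring cycle on a 1-D ring of cells: convection carries material
    one cell forward, diffusion exchanges with both neighbours.
    `conc` is the tracer concentration per cell; mass is conserved because
    the transition matrix is (doubly) stochastic."""
    n = len(conc)
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 1 - p_conv - 2 * p_diff       # stays put
        P[i, (i + 1) % n] += p_conv + p_diff    # forward: convection + diffusion
        P[i, (i - 1) % n] += p_diff             # backward: diffusion only
    return conc @ P
```

Iterating this step flattens any initial segregation toward the uniform profile, which is the stationary distribution of the doubly stochastic chain; in a fitted model the mixing-rate parameters would come from the tracer experiments described in the abstract.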
|
155 |
Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks
Ben Hammouda, Chiheb 22 July 2020 (has links)
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel, efficient simulation techniques for reliable and fast estimation of various statistical quantities for stochastic biological and chemical systems under the framework of stochastic reaction networks. In the first work, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator for systems characterized by simultaneously fast and slow timescales. Our hybrid multilevel estimator uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit tau-leap method is not applicable due to numerical instability. In a second work, we address another challenge in this context, the high-kurtosis phenomenon observed at the deep levels of the MLMC estimator, by combining the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Our theoretical estimates and numerical analysis show that this approach improves the robustness and complexity of the multilevel estimator at negligible additional cost. In the second part of this thesis, we design novel methods for pricing financial derivatives. Option pricing is usually challenging due to (1) the high dimensionality of the input space and (2) the low regularity of the integrand in the input parameters. We address these challenges by developing techniques for smoothing the integrand to uncover the available regularity, then approximating the resulting integrals using hierarchical quadrature methods combined with Brownian bridge construction and Richardson extrapolation. In the first work, we apply this approach to efficiently price options under the rough Bergomi model, whose numerical and theoretical challenges render classical pricing methods either inapplicable or computationally expensive. In a second work, we design a numerical smoothing technique for cases where analytic smoothing is impossible. Our analysis shows that adaptive sparse-grid quadrature combined with numerical smoothing outperforms the Monte Carlo approach. Furthermore, our numerical smoothing improves the robustness and complexity of the MLMC estimator, particularly when estimating density functions.
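The telescoping structure of a multilevel Monte Carlo estimator can be illustrated on a toy problem. The sketch below estimates E[X_T] for a geometric Brownian motion with coupled coarse/fine Euler paths sharing the same Brownian increments (the thesis couples tau-leap paths of reaction networks instead; all parameters here are illustrative, and a production estimator would also shrink the sample count with the level):

```python
import numpy as np

def mlmc_estimate(levels, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Telescoping MLMC estimate of E[X_T] for dX = mu*X dt + sigma*X dW:
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with the level-l correction
    computed from coarse/fine paths driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(levels + 1):
        nf = 2 ** level                  # fine time steps at this level
        dt = T / nf
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
        xf = np.full(n_samples, x0)
        for k in range(nf):              # fine Euler path
            xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
        if level == 0:
            total += xf.mean()
        else:
            xc = np.full(n_samples, x0)
            for k in range(nf // 2):     # coarse path reuses summed noise
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc
            total += (xf - xc).mean()    # small-variance correction term
    return total
```

The point of the coupling is that `xf - xc` has small variance, so the deep (expensive) levels need few samples; the importance-sampling work in the thesis targets exactly the case where these correction terms misbehave (high kurtosis).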
|
156 |
Digital Education Resource Mining for Decision Support
AL Fanah, Muna M.S. January 2021 (has links)
Education has become a competitive and challenging domain, both nationally and internationally, in terms of quality, visibility and the experience of academic delivery, affecting institutions, applicants and regulatory bodies. At the same time, data is increasingly available for general and public use and plays a growing role in decision support on education topics. For example, world university rankings (WUR) such as Quacquarelli Symonds (QS), the Center for World University Rankings (CWUR) and Times Higher Education (Times), as well as national university rankings (e.g. the Guardian's Best UK Universities and the Complete University Guide league tables), have published their data for many years now and are increasingly used in such decision-making processes by institutions and the general public.
|
157 |
Modeling land-cover change in the Amazon using historical pathways of land cover change and Markov chains. A case study of Rondônia, Brazil
Becerra-Cordoba, Nancy 15 August 2008 (has links)
This dissertation research has three purposes: first, to predict anthropogenic deforestation caused by small farmers, using on the one hand only pathways of past land-cover change and on the other demographic, socioeconomic and land-cover data at the farm level; second, to compare the explanatory and predictive capacity of the two approaches at identifying areas at high risk of deforestation among small farms in Rondônia, Brazil; and third, to test the assumptions of stationary probabilities and homogeneous subjects, both commonly used in predictive stochastic models of small farmers' deforestation decisions. The study uses household surveys, maps, satellite images and their land-cover classification at the pixel level, and pathways of past land-cover change for each farm. These data are available for a panel sample of farms in three municípios in Rondônia, Brazil (Alto Paraiso, Nova União, and Rolim de Moura) and cover a ten-year study period (1992-2002). Pathways of past land-cover change are graphic representations, in the form of flow charts, that depict land-cover change (LCC) on each farm during the study period; they were constructed from satellite images, survey data, maps, and interviews with a sub-sample of 70 farms. A panel data analysis of the estimated empirical probabilities was conducted to test for subject and time effects using a fixed group effects model (FGEM), specifically the least squares dummy variable (LSDV1) fixed-effects technique.
Finally, the two predictive modeling approaches are compared. The first predicts future LCC using only past land-cover change data, in the form of empirical transition probabilities of LCC obtained from the pathways. These empirical probabilities are used in an LSDV1 model with fixed group effects, an LSDV1 model with fixed time effects, and an ordinary least squares (OLS) model for the pooled sample; the results are then entered into a modified Markov chain model through matrix multiplication. The second approach predicts future LCC from socio-demographic and economic survey variables at the household level, using a multinomial logit regression model to predict the land-cover class of each pixel. To compare the explanatory and predictive capacity of the two approaches, pixel-level LCC predictions are summarized as the percentage of cells whose future land cover was predicted correctly, measured against the actual pixel classification from satellite images. The differences among farmers found in the LSDV1 fixed-group-effects model suggest that small farmers are not a homogeneous group in terms of their LCC probabilities and that further classification of farmers into homogeneous subgroups would better capture their LCC decisions. Changes in the total area of landholdings proved a stronger influence on farmers' LCC decisions in their main property (primary lot) than changes in the area of the primary lot itself. The panel data analysis of the LCC empirical transition probabilities (LSDV1 fixed-time-effects model) does not find enough evidence to prefer the fixed-time-effects model over an ordinary least squares (OLS) pooled version of the probabilities.
When the results of the panel data analysis are applied to a modified Markov chain model, the LSDV1-farmer model provides slightly better accuracy (59.25%) than the LSDV1-time and OLS-pooled models (57.54% and 57.18%, respectively). The main finding for policy and planning purposes is that owners of type 1 (with stable total landholdings over time) tend to preserve forest with a much higher probability (0.9033) than owners with subdividing or expanding properties (probabilities of 0.0013 and 0.0030). The main implication for policy making and planning is to encourage primary forest preservation, given that the Markov chain analysis shows that once primary forest changes into another land cover, it never returns to its original class. Policy and planning recommendations are provided to encourage type 1 owners to continue their pattern of high forest conservation, including securing land titling and providing health care and alternative sources of income so that family members and elderly owners remain on the lot. Future research is encouraged to explore spatial autocorrelation in pixel-level probabilities of land-cover change and the effects of local policies and macroeconomic variables on farmers' LCC decisions. / Ph. D.
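A Markov chain projection of land-cover shares, with primary forest as a non-returning class as found in this dissertation, can be sketched as follows (the annual transition probabilities are hypothetical illustrations, not the estimated LSDV1 values):

```python
import numpy as np

# Hypothetical annual land-cover transition matrix for one owner type.
# States: 0 = primary forest, 1 = crop, 2 = pasture, 3 = secondary growth.
# The forest column receives mass only from forest itself, encoding the
# finding that cleared primary forest never reverts to its original class.
P = np.array([
    [0.90, 0.06, 0.03, 0.01],   # high-conservation owner (OT1-like)
    [0.00, 0.70, 0.20, 0.10],
    [0.00, 0.15, 0.75, 0.10],
    [0.00, 0.10, 0.20, 0.70],
])

def project(p0, P, years):
    """Project a land-cover share vector `years` steps forward."""
    dist = np.asarray(p0, dtype=float)
    for _ in range(years):
        dist = dist @ P
    return dist
```

Because forest only loses mass, the projected forest share from an all-forest start decays exactly geometrically at the forest self-transition rate, which makes such matrices easy to sanity-check against the empirical pathways.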
|
158 |
Continuous Approximations of Discrete Phylogenetic Migration Models
Huss, Simon; Mosetti Björk, Theodor January 2024 (has links)
Phylogenetics explores the evolutionary relationships among species, and one of the main approaches is to construct phylogenetic trees through inference-based methods. Beyond the evolutionary insights these trees provide, the underlying tree structure can also be used to study the geographical migration of species. These geographical models, reminiscent of models of DNA sequence evolution, have predominantly been discrete in nature. However, this poses a multitude of challenges, especially with high-dimensional state spaces. Previous work has explored the possibility of using continuous diffusion models for geographical migration, but without aiming to model non-local migration and large state spaces. This paper presents and evaluates a scalable continuous phylogenetic migration model that aims to approximate conventional discrete migration models in the case of both local and non-local migration.
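The discrete migration models being approximated assign to each tree edge of length t the transition matrix P(t) = exp(Qt) for a migration rate matrix Q over locations, exactly as in models of DNA sequence evolution. A small sketch using uniformization, a standard Markov-chain technique for this matrix exponential (the rate matrix used in the check is an illustrative two-location assumption):

```python
import numpy as np

def edge_transition(Q, t, terms=60):
    """P(t) = exp(Qt) for a migration rate matrix Q, via uniformization:
    exp(Qt) = sum_k e^{-qt} (qt)^k / k! * M^k, where q bounds the exit
    rates and M = I + Q/q is a discrete transition matrix."""
    n = Q.shape[0]
    q = max(-Q[i, i] for i in range(n)) or 1.0
    M = np.eye(n) + Q / q
    P = np.zeros_like(Q, dtype=float)
    term = np.exp(-q * t) * np.eye(n)       # k = 0 Poisson term
    for k in range(terms):
        P += term
        term = term @ M * (q * t) / (k + 1)  # next Poisson-weighted power
    return P
```

The cost of working with such P(t) matrices grows with the square (or worse) of the number of locations, which is precisely the scaling problem that motivates a continuous approximation of the state space.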
|
159 |
Accelerating Monte Carlo methods for Bayesian inference in dynamical models
Dahlin, Johan January 2016 (has links)
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Examples of applications include recommendation systems for online shopping and streaming services, connecting genes with certain diseases, and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), that is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem, and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes, which can potentially be a practical strategy for inference in the big data era.
Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal. / Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and music I want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing guidance and decision support. Statistical models combine theoretical knowledge of, say, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to assess, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these and many others make statistical models important to many parts of society. One way of building statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one already has good insight into the model or has access to only a small amount of historical data to build it from. A drawback of Bayesian statistics is that the computations required to update the model with new information are often very complicated. In such situations, one can instead simulate the outcomes of millions of variants of the model and compare them against the historical observations at hand, then average over the variants that gave the best results to obtain a final model. It can therefore take days or weeks to produce a model. The problem becomes especially severe with more advanced models that could give better forecasts but take too long to build.
In this thesis, we use a number of strategies to facilitate or improve these simulations. For example, we propose taking more insights about the system into account, thereby reducing the number of model variants that need to be examined: some models can be ruled out in advance because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, exploring the space of all possible models more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and show that in some cases the computation time can be reduced from days to about an hour. We hope this will eventually make it practical to use more advanced models, which in turn will lead to better forecasts and decisions.
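The accept/reject core that PMH shares with ordinary Metropolis-Hastings can be sketched in a few lines; in PMH the exact log-target below would be replaced by an unbiased particle-filter estimate of the likelihood, with the accept/reject mechanics unchanged (this random-walk sampler on a known density is a simplified stand-in, not the thesis's algorithm):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iters, step=1.0, seed=0):
    """Random-walk Metropolis sampler targeting exp(log_target).

    Proposes Gaussian perturbations and accepts with probability
    min(1, target(prop) / target(x)); the chain's stationary distribution
    is the target. Correlating successive target estimates, as in the
    correlated-PMH strategy, reduces the variance of this accept ratio."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_iters):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        # symmetric proposal, so the ratio is just the target ratio
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

The gradient/Hessian strategy in the thesis corresponds to replacing the blind random-walk proposal here with one informed by local curvature of the target, so fewer iterations are wasted on rejected moves.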
|
160 |
Infinite-state Stochastic and Parameterized Systems
Ben Henda, Noomene January 2008 (has links)
<p>A major current challenge consists in extending formal methods to handle infinite-state systems. Infiniteness stems from the fact that the system operates on unbounded data structures such as stacks, queues, clocks and integers, or from parameterization.</p><p>Systems with unbounded data structures are natural models for reasoning about communication protocols, concurrent programs, real-time systems, etc., while parameterized systems are more suitable when the system consists of an arbitrary number of identical processes, as is the case for cache coherence protocols, distributed algorithms and so forth.</p><p>In this thesis, we consider model checking problems for certain fundamental classes of probabilistic infinite-state systems, as well as the verification of safety properties in parameterized systems. First, we consider probabilistic systems with unbounded data structures. In particular, we study probabilistic extensions of Lossy Channel Systems (PLCS), Vector Addition Systems with States (PVASS) and Noisy Turing Machines (PNTM). We show how the semantics of such models can be described by infinite-state Markov chains, and then define certain abstract properties that allow model checking of several qualitative and quantitative problems.</p><p>We then consider parameterized systems and provide a method for checking safety for several classes that differ in their topologies (linear or tree) and semantics (atomic or non-atomic). The method is based on deriving an over-approximation that allows the use of a symbolic backward reachability scheme. For each class, the over-approximation we define guarantees monotonicity of the induced approximate transition system with respect to an appropriate order. This property is convenient in the sense that it preserves upward-closedness when computing sets of predecessors.</p>
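The symbolic backward reachability scheme mentioned here can be sketched for a monotone counter system, where an upward-closed set is represented by its finitely many minimal elements (the VASS-style displacement transitions are an illustrative assumption; the thesis handles richer classes, topologies and the probabilistic extensions):

```python
def leq(a, b):
    """Component-wise ordering on counter valuations."""
    return all(x <= y for x, y in zip(a, b))

def minimize(vecs):
    """An upward-closed subset of N^k is fully described by its minimal
    elements; drop every vector dominated by another one."""
    vecs = list(set(vecs))
    return [v for v in vecs if not any(leq(u, v) and u != v for u in vecs)]

def pre_min(v, d):
    """Minimal state that can fire displacement d (x -> x + d) and land in
    the upward closure of v: need u >= -d (enabledness) and u + d >= v."""
    return tuple(max(vi - di, -di, 0) for vi, di in zip(v, d))

def backward_reach(bad_min, transitions):
    """Saturate the minimal elements of Pre*(Bad); for a monotone system,
    termination follows from the well-quasi-ordering of (N^k, <=)."""
    reach = minimize(bad_min)
    frontier = list(reach)
    while frontier:
        new = []
        for v in frontier:
            for d in transitions:
                u = pre_min(v, d)
                if not any(leq(r, u) for r in reach):  # not yet covered
                    new.append(u)
        frontier = minimize(new)
        reach = minimize(reach + frontier)
    return reach

def is_safe(init, bad_min, transitions):
    """Safe iff the initial state is not covered by Pre*(Bad)."""
    return not any(leq(r, init) for r in backward_reach(bad_min, transitions))
```

Monotonicity is what makes this work: predecessors of an upward-closed set are again upward-closed, so the whole computation runs on finite sets of minimal vectors, which is exactly the property the over-approximations in the thesis are designed to guarantee.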
|