1 |
Bounds for the Maximum-Time Stochastic Shortest Path Problem
Kozhokanova, Anara Bolotbekovna, 13 December 2014 (has links)
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with an absorbing, cost-free target state, where the objective is to reach the target state while optimizing total expected cost. In almost all cases, the objective in solving a stochastic shortest path problem is to minimize the total expected cost to reach the target state. But in probabilistic model checking, it is also useful to solve a problem where the objective is to maximize the expected cost to reach the target state. This thesis considers the maximum-time stochastic shortest path problem, a special case of the maximum-cost stochastic shortest path problem in which all actions have unit cost. The contribution is an efficient approach to computing high-quality bounds on the optimal solution for this problem. The bounds are useful in themselves, but can also be used by other algorithms to accelerate the search for an optimal solution.
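As a rough illustration of the quantity involved (a hypothetical two-state MDP, not an example from the thesis), the maximal expected time to reach the target is the fixed point of the Bellman equation V(s) = max_a [1 + Σ_t P(s,a,t)·V(t)], which plain value iteration can approximate:

```python
# Hypothetical toy MDP: state 0 has two actions; state 1 is the absorbing,
# cost-free target. Each action has unit cost (one time step).
# transitions[action] = {successor: probability} from state 0.
transitions = {
    "slow": {0: 0.5, 1: 0.5},   # stays in state 0 with probability 0.5
    "fast": {0: 0.1, 1: 0.9},   # reaches the target faster
}

def max_expected_time(transitions, tol=1e-12):
    """Value iteration for the maximal expected number of steps to the target."""
    v = 0.0  # value of state 0; the target has value 0
    while True:
        new_v = max(1.0 + probs.get(0, 0.0) * v for probs in transitions.values())
        if abs(new_v - v) < tol:
            return new_v
        v = new_v

print(max_expected_time(transitions))  # the "slow" action maximises time: ≈ 2.0
```

Bounds of the kind studied in the thesis would bracket this fixed point from above and below without iterating to convergence.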
|
2 |
Adaptation Timing in Self-Adaptive Systems
Moreno, Gabriel A., 01 April 2017 (has links)
Software-intensive systems are increasingly expected to operate under changing and uncertain conditions, including not only varying user needs and workloads, but also fluctuating resource capacity. Self-adaptation is an approach that aims to address this problem, giving systems the ability to change their behavior and structure to adapt to changes in themselves and their operating environment without human intervention. Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, some adaptation tactics—atomic adaptation actions that leave the system in a consistent state—have latency: they take some time to produce their effect. In that case, reactive adaptation causes the system to lag behind environment changes. What is worse, a long-running adaptation action may prevent the system from performing other adaptations until it completes, further limiting its ability to deal effectively with environment changes. To address these limitations and improve the effectiveness of self-adaptation, we present proactive latency-aware adaptation, an approach that considers the timing of adaptation (i) leveraging predictions of the near-future state of the environment to adapt proactively; (ii) considering the latency of adaptation tactics when deciding how to adapt; and (iii) executing tactics concurrently. We have developed three different solution approaches embodying these principles. One is based on probabilistic model checking, making it inherently able to deal with the stochastic behavior of the environment, and guaranteeing optimal adaptation choices over a finite decision horizon.
The second approach uses stochastic dynamic programming to make adaptation decisions, and thanks to performing part of the computations required to make those decisions off-line, it achieves a speedup of an order of magnitude over the first solution approach without compromising optimality. A third solution approach makes adaptation decisions based on repertoires of adaptation strategies—predefined compositions of adaptation tactics. This approach is more scalable than the other two because the solution space is smaller, allowing an adaptive system to reap some of the benefits of proactive latency-aware adaptation even if the number of ways in which it could adapt is too large for the other approaches to consider all these possibilities. We evaluate the approach using two different classes of systems with different adaptation goals, and different repertoires of adaptation strategies. One of them is a web system, with the adaptation goal of utility maximization. The other is a cyber-physical system operating in a hostile environment. In that system, self-adaptation must not only maximize the reward gained, but also keep the probability of surviving a mission above a threshold. In both cases, our results show that proactive latency-aware adaptation improves the effectiveness of self-adaptation with respect to reactive time-agnostic adaptation.
|
3 |
Quantitative Analysis of Information Leakage in Probabilistic and Nondeterministic Systems
Andrés, Miguel, 01 July 2011 (has links) (PDF)
As we dive into the digital era, there is growing concern about the amount of personal digital information that is being gathered about us. Websites often track people's browsing behavior, health care insurers gather medical data, and many smartphones and navigation systems store or transmit information that makes it possible to track the physical location of their users at any time. Hence, anonymity, and privacy in general, are increasingly at stake. Anonymity protocols counter this concern by offering anonymous communication over the Internet. To ensure the correctness of such protocols, which are often extremely complex, a rigorous framework is needed in which anonymity properties can be expressed, analyzed, and ultimately verified. Formal methods provide a set of mathematical techniques that allow us to rigorously specify and verify anonymity properties. This thesis addresses the foundational aspects of formal methods for applications in security and in particular in anonymity. More concretely, we develop frameworks for the specification of anonymity properties and propose algorithms for their verification. Since in practice anonymity protocols always leak some information, we focus on quantitative properties which capture the amount of information leaked by a protocol. We start our research on anonymity from its very foundations, namely conditional probabilities, the key ingredient of most quantitative anonymity properties. In Chapter 2 we present cpCTL, the first temporal logic making it possible to specify conditional probabilities. In addition, we present an algorithm to verify cpCTL formulas in a model-checking fashion. This logic, together with the model checker, allows us to specify and verify quantitative anonymity properties over complex systems where probabilistic and nondeterministic behavior may coexist. We then turn our attention to more practical grounds: the construction of algorithms to compute information leakage.
More precisely, in Chapter 3 we present polynomial algorithms to compute the (information-theoretic) leakage of several kinds of fully probabilistic protocols (i.e., protocols without nondeterministic behavior). The techniques presented in this chapter are the first ones enabling the computation of (information-theoretic) leakage in interactive protocols. In Chapter 4 we attack a well-known problem in distributed anonymity protocols, namely full-information scheduling. To overcome this problem, we propose an alternative definition of schedulers together with several new definitions of anonymity (varying according to the attacker's power), and revise the famous definition of strong anonymity from the literature. Furthermore, we provide a technique to verify that a distributed protocol satisfies some of the proposed definitions. In Chapter 5 we provide (counterexample-based) techniques to debug complex systems, allowing for the detection of flaws in security protocols. Finally, in Chapter 6 we briefly discuss extensions to the frameworks and techniques proposed in Chapters 3 and 4.
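To make "information-theoretic leakage" concrete, here is a hedged sketch (the channel and prior are invented, not one of the protocols analysed in the thesis): a fully probabilistic protocol can be viewed as a channel from secrets to observables, and its leakage computed as the mutual information I(S; O):

```python
from math import log2

# Hypothetical protocol modelled as a channel: rows are secrets, columns are
# observables; entry [s][o] is P(o | s). Leakage = I(S; O) in bits.
channel = [
    [0.75, 0.25],  # P(o | s0)
    [0.25, 0.75],  # P(o | s1)
]
prior = [0.5, 0.5]  # assumed uniform prior over secrets

def mutual_information(prior, channel):
    """I(S; O) = sum over s, o of P(s) P(o|s) log2( P(o|s) / P(o) )."""
    p_o = [sum(prior[s] * channel[s][o] for s in range(len(prior)))
           for o in range(len(channel[0]))]
    mi = 0.0
    for s, p_s in enumerate(prior):
        for o, p in enumerate(channel[s]):
            if p > 0:
                mi += p_s * p * log2(p / p_o[o])
    return mi

print(round(mutual_information(prior, channel), 4))  # ≈ 0.1887 bits leaked
```

The interactive protocols of Chapter 3 require more care than this one-shot channel, since the attacker's observations can influence later secrets.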
|
4 |
Safety-aware apprenticeship learning
Zhou, Weichao, 03 July 2018 (has links)
It is well acknowledged in the AI community that finding a good reward function for reinforcement learning is extremely challenging. Apprenticeship learning (AL) is a class of "learning from demonstration" techniques in which the reward function of a Markov Decision Process (MDP) is unknown to the learning agent, and the agent uses inverse reinforcement learning (IRL) methods to recover the expert policy from a set of expert demonstrations. However, because the agent learns exclusively from observations, given a constraint on the probability of the agent running into unwanted situations, there is no verification of, or guarantee on, the learnt policy's satisfaction of that constraint. In this dissertation, we study the problem of how to guide AL to learn a policy that is inherently safe while still meeting its learning objective. By combining formal methods with imitation learning, we propose a counterexample-guided apprenticeship learning algorithm. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both safety and performance of the learnt policy. The algorithm guarantees that, given a formal safety specification defined in probabilistic temporal logic, the learnt policy satisfies this specification. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.
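As a hedged sketch of the model-checking step inside such a loop (the chain and numbers are invented, not from the dissertation): a candidate policy induces a Markov chain, and a PCTL safety property such as P≤0.2 [ F unsafe ] is checked by solving the reachability equations x_s = Σ_t P(s,t)·x_t with x_unsafe = 1 and x_goal = 0:

```python
# Hypothetical Markov chain induced by a candidate policy; 'goal' and
# 'unsafe' are absorbing. We check the PCTL-style property "P<=0.2 [ F unsafe ]".
P = {  # state -> {successor: probability}
    "s0": {"s1": 0.9, "unsafe": 0.1},
    "s1": {"goal": 0.8, "s0": 0.2},
}

def reach_unsafe_prob(P, iters=2000):
    """Iteratively solve the linear reachability equations from the initial state."""
    x = {s: 0.0 for s in P}
    x["unsafe"], x["goal"] = 1.0, 0.0
    for _ in range(iters):
        for s, succ in P.items():
            x[s] = sum(p * x[t] for t, p in succ.items())
    return x["s0"]

prob = reach_unsafe_prob(P)
print(prob, "satisfies P<=0.2:", prob <= 0.2)
```

If the check fails, the counterexample-guided loop would extract the offending behaviour and constrain the next learning iteration; here the exact solution is 0.1/0.82 ≈ 0.122, so this candidate passes.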
|
5 |
Systematic model-based safety assessment via probabilistic model checking
GOMES, Adriano José Oliveira, 31 January 2010 (has links)
Previous issue date: 2010 / Faculdade de Amparo à Ciência e Tecnologia do Estado de Pernambuco / Safety assessment is a well-established process used to ensure that the safety constraints of a critical system are met. Within it, quantitative safety analysis handles these constraints in a numerical (probabilistic) context.

Safety-analysis methods such as the traditional Fault Tree Analysis (FTA) are used in the quantitative safety-assessment process, following certification guidelines (for example ARP4761, the Aerospace Recommended Practice guide). However, this method is generally costly and requires considerable time and effort to validate a system as a whole, since on average as many as 10,000 fault trees may be built for a single aircraft, and since it depends heavily on human skill to cope with time constraints that restrict the scope and level of detail the analysis and its results can reach. Certification authorities also allow the use of Markov analysis; although Markov models are more powerful than fault trees, industry rarely adopts this analysis because the models are more complex and harder to handle. For these reasons, FTA has been widely used in this process, mainly because it is conceptually simpler and easier to understand.

As the complexity and time-to-market of systems increase, the interest in addressing safety issues during the early design phases, rather than in the intermediate or final ones, has made model-based designs, tools, and techniques common practice. Simulink is the de-facto example currently used in the aeronautics industry. Even in this scenario, however, current solutions follow what engineers already used before. Formal methods, by contrast (languages, tools, and methods based on logic and discrete mathematics that do not follow traditional engineering approaches), can provide innovative, low-cost solutions for engineers.

This dissertation defines a strategy for quantitative safety assessment based on Markov analysis. However, instead of dealing with Markov models directly, we use the formal language of PRISM (a PRISM specification is semantically interpreted as a Markov model). Moreover, this PRISM specification is extracted systematically from a high-level model (Simulink diagrams annotated with the system's failure logic) by applying translation rules. The quantitative verification of the system's safety requirements is performed with the PRISM model checker, in which the safety requirements become probabilistic temporal-logic formulas.

The immediate goal of our work is to avoid the effort of building several fault trees only to find out that a safety requirement has been violated. PRISM does not build fault trees to reach this result; it simply checks, in a single run, whether or not a safety requirement is satisfied in the whole model.

Finally, our strategy is illustrated with a simple but representative system (a pilot project) designed by Embraer.
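A minimal sketch of the quantitative check that replaces fault-tree construction (the component names, probabilities, and requirement are invented for illustration): evaluate the annotated failure logic directly over component failure probabilities and compare against the requirement in one pass:

```python
# Hypothetical per-flight-hour failure probabilities of independent components.
p = {"A": 1e-3, "B": 2e-3, "C": 1e-7}

# Failure logic annotated on the model: the system fails if (A and B) or C.
# Under independence, P((A and B) or C) = 1 - (1 - P(A)P(B)) * (1 - P(C)).
p_top = 1 - (1 - p["A"] * p["B"]) * (1 - p["C"])

requirement = 1e-5  # assumed quantitative safety requirement
print(f"P(failure) = {p_top:.3e}, requirement met: {p_top < requirement}")
```

A model checker like PRISM performs the analogous computation over the whole Markov model at once, rather than over a hand-built tree per requirement.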
|
6 |
Computer-Aided Synthesis of Probabilistic Models
Andriushchenko, Roman, January 2020
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the member that satisfies a given specification? Such families frequently arise in many areas of engineering when modelling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. We examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and counterexample-guided abstraction refinement (CEGAR), and propose a novel integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only competitive with state-of-the-art methods, but in most cases can significantly outperform existing approaches, sometimes by several orders of magnitude.
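A hedged sketch of the synthesis question (the family and specification are invented): each hole assignment h yields a Markov chain, and synthesis asks which family members satisfy a reachability specification. A naive enumeration stands in for the CEGIS/CEGAR loops the thesis integrates:

```python
# Hypothetical family of Markov chains indexed by a hole value h: a task
# succeeds with probability h and, on failure, is retried exactly once.
family = [0.5, 0.7, 0.95]   # candidate hole assignments
spec_threshold = 0.9        # specification: P>=0.9 [ F done ]

def reach_done(h):
    """P(F done) in the induced chain: succeed first try, or retry once."""
    return h + (1 - h) * h

feasible = [h for h in family if reach_done(h) >= spec_threshold]
print(feasible)  # → [0.7, 0.95]
```

CEGIS would instead use counterexamples to prune many assignments at once, and CEGAR would analyse an abstraction covering whole subfamilies; plain enumeration scales only to tiny families.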
|
7 |
Probabilistic Model Checking for Temporal Logics in Weighted Structures
Wunderlich, Sascha, 23 September 2024
Model checking is a well-established method for automatic system verification. Besides the extensively studied qualitative case, there is also an increasing interest in the quantitative analysis of system properties. Many important quantities can be formalised as the accumulated values of weight functions. These measures include resource usage such as energy consumption, or performance metrics such as the cost-utility ratio or reliability guarantees. Different kinds of accumulation like summation, averaging and ratios are necessary to cover the diverse spectrum of quantities.
This work provides a general framework for the formalisation and verification of system models and property specifications with accumulative values.
On the modelling side, we rely on weighted extensions of well-known modelling formalisms. Besides weighted transition systems, we investigate weighted probabilistic models such as Markov chains and Markov decision processes (MDPs). The weights in this sense are functions, mapping each state or transition in the model to a value, e.g., a rational vector.
For the specification side, we provide a language in the form of an extension of temporal logic with new modalities that impose restrictions on the accumulated weight along path fragments. These fragments are regular and can be characterised by finite automata, so-called monitors. Specifically, we extend linear temporal logic (LTL) and (probabilistic) computation tree logic (CTL) with such constraints.
The framework allows variation to weaker formalisms, like non-negative or integral weight functions and bounded accumulation. We study the border of decidability of the model-checking problem for different combinations of these restrictions and give complexity results and algorithms for the decidable fragment.
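A minimal sketch of a bounded-accumulation constraint of the kind these modalities impose (the weights and budget are invented): a monitor tracks the accumulated weight along a path fragment and rejects as soon as the budget is exceeded:

```python
# Hypothetical monitor for a bounded-accumulation constraint: along a path
# fragment, the accumulated weight (e.g. energy) must never exceed a budget.
def accumulate_ok(weights, budget):
    """Return True iff every running sum of the weights stays within budget."""
    total = 0
    for w in weights:
        total += w
        if total > budget:
            return False
    return True

path = [2, 3, -1, 4]   # weights of the transitions along a path fragment
print(accumulate_ok(path, budget=8))   # running sums 2, 5, 4, 8 → True
print(accumulate_ok(path, budget=7))   # the sum reaches 8 > 7 → False
```

Negative weights are what make the general problem hard: with non-negative or bounded accumulation (the weaker formalisms mentioned above), the reachable set of accumulated values stays finite.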
An implementation of the model-checking algorithms on top of the popular probabilistic model checker PRISM is provided. We also investigate several optimization techniques that can be applied to a broad range of formula patterns. The practical behaviour of the implementation and its optimization methods is put to the test by a set of scaling experiments for each model type.

1. Introduction
1.1. Goal of the Thesis
1.2. Main Contributions
1.3. Related Work
1.4. Outline
1.5. Resources
2. Preliminaries
2.1. Modeling Formalisms
2.2. Finite Automata
2.3. Propositional Logic
2.4. Temporal Logics
2.4.1. Linear Temporal Logic
2.4.2. Computation Tree Logic
2.5. Model-Checking Problems
2.5.1. Markov Decision Processes
2.5.2. Markov Chains
2.5.3. Transition Systems
2.5.4. Calculate Probabilities
3. Specifications with Weight Accumulation
3.1. Weight Constraints
3.1.1. Syntax of Weight Constraints
3.1.2. Weighted Models
3.1.3. Interpretation of Weight Constraints
3.1.4. Properties of Weight Constraints
3.2. Monitor Automata
3.2.1. Automata Classes
3.2.2. Observing WMDP Paths
3.3. Variants
3.3.1. Weight Ratios
3.3.2. Other Linear Accumulation Operators
3.3.3. Other Weight Combinations
3.3.4. Filtered Semantics
4. Linear Temporal Logic with Accumulation
4.1. Syntax and Semantics of AccLTL
4.1.1. Syntax of AccLTL
4.1.2. Semantics of AccLTL
4.1.3. Past Variant
4.1.4. Transformation of Weight Functions
4.1.5. Examples for AccLTL Formulae
4.2. Decidability Results for Accumulation LTL
4.2.1. Encoding the Post Correspondence Problem
4.2.2. Reduction of the AccLTL Model-Checking Problem
4.3. Complexity Results for Bounded Accumulation LTL
4.3.1. Transformation to Unweighted MDP and LTL
4.3.2. Reduction to LTL Model-Checking Problems
4.3.3. Algorithm
4.4. Decidability Results for Conic Accumulation LTL and RMDPs
4.4.1. Transformation to Unweighted MDP and LTL
4.4.2. Simple Weight Constraints
4.4.3. 1-dimensional Weight Constraints
4.5. NP-hard and coNP-hard Formulae for WTS and WMCs
4.5.1. Formulae for WTS
4.5.2. Formulae for WMC
4.6. Efficiently Decidable Patterns
4.7. Summary
5. Computation Tree Logic with Accumulation
5.1. Syntax and Semantics
5.1.1. Syntax and Semantics of AccCTL
5.1.2. Syntax and Semantics of AccPCTL
5.2. Decidability Results for Accumulation (P)CTL
5.3. Complexity Results for Bounded Accumulation (P)CTL
5.3.1. Weighted Markov Decision Processes
5.3.2. Weighted Markov Chains
5.3.3. Weighted Transition Systems
5.4. Decidability Results for Conic Accumulation (P)CTL and RMDPs
5.5. Summary
6. Implementation and Experiments
6.1. Implementation Details
6.1.1. Formula Expression
6.1.2. Model Construction
6.2. Optimizations
6.2.1. Single Track Method
6.2.2. Rewriting Without Until
6.2.3. Monitor Filtering
6.2.4. Detection of Optimization Methods
6.3. Scaling Experiments
6.3.1. Scaling Dimensions
6.3.2. Setting
6.3.3. Model Description
6.3.4. Input Size
6.3.5. Optimization Effects
6.3.6. Filtering
7. Conclusions
7.1. Summary
7.2. Outlook and Future Work
A. Bibliography
B. Material for the experiments
B.1. Environment for the Experiments
B.1.1. Container Image
B.1.2. Model Definitions
|
8 |
Formal methods for the analysis of wireless network protocols
Fruth, Matthias, January 2011 (has links)
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. Contrary to testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to protocols for slot allocation.
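As a hedged abstraction of the contention-resolution analysis (the busy probability q is invented, and the round count only loosely follows the 802.15.4 parameter macMaxCSMABackoffs): each backoff round ends with a clear-channel assessment (CCA) that finds the channel busy with probability q, and the frame is dropped once every allowed round has failed:

```python
# Hypothetical abstraction of ZigBee/802.15.4 CSMA-CA: with max_backoffs
# retries after the first attempt, the frame is dropped only if all
# max_backoffs + 1 CCAs find the channel busy (assumed independent).
def access_probability(q, max_backoffs=4):
    """Probability the channel is acquired within the allowed backoff rounds."""
    return 1 - q ** (max_backoffs + 1)

print(access_probability(q=0.3))  # ≈ 1 - 0.3**5
```

The thesis's stochastic models are far richer (slotted timing, interference, multiple contending nodes); this closed form only captures the single-node, independent-CCA simplification.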
|
9 |
Formal Methods for Probabilistic Energy Models
Daum, Marcus, 11 April 2019
The energy consumption that arises from the utilisation of information processing systems adds a significant contribution to environmental pollution and accounts for a large share of operating costs. This entails that we need to find ways to reduce the energy consumption of such systems. When trying to save energy, it is important to ensure that the utility (e.g., user experience) of a system is not unnecessarily degraded, requiring a careful trade-off analysis between the consumed energy and the resulting utility. Therefore, research on energy efficiency has become a very active and important research topic that concerns many different scientific areas, and is of interest to industrial companies as well.
The concept of quantiles is already well-known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. For instance, with the help of quantiles it is possible to reason about the minimal energy that is required to obtain a desired system behaviour in a satisfactory manner, e.g., a required user experience will be achieved with a sufficient probability. Quantiles also allow the determination of the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. As those examples illustrate important measures that are of interest when analysing energy-aware systems, it is clear that it is beneficial to extend formal analysis-methods with possibilities for the calculation of quantiles.
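A minimal sketch of an energy-utility quantile of this kind (the task model is invented): with attempts costing 2 energy units each and succeeding with probability 0.5, the quantile is the least budget b such that P(success within budget b) ≥ 0.9:

```python
# Hypothetical task: each attempt costs 2 energy units and succeeds with
# probability 0.5; attempts are independent.
def success_prob(budget, cost=2, p=0.5):
    """P(success within budget) = 1 - (1-p)^(number of affordable attempts)."""
    attempts = budget // cost
    return 1 - (1 - p) ** attempts

def quantile(target=0.9, cost=2):
    """Minimal energy budget b with success_prob(b) >= target."""
    b = 0
    while success_prob(b, cost) < target:
        b += 1
    return b

print(quantile())  # minimal budget: 8 (four attempts, P = 1 - 0.5**4 = 0.9375)
```

Over Markovian models this naive budget sweep becomes a sequence of reward-bounded reachability computations, which is where the thesis's algorithmic improvements apply.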
In this monograph, we will see how we can take advantage of those quantiles as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. Therefore, we present algorithms for their computation over Markovian models. We will further investigate different techniques in order to improve the computational performance of implementations of those algorithms. The main feature that enables those improvements takes advantage of the specific characteristics of the linear programs that need to be solved for the computation of quantiles. Those improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM. The performance of this implementation is then demonstrated by means of different protocols with an emphasis on the trade-off between the consumed energy and the resulting utility. Since the introduced methods are not restricted to the case of an energy-utility analysis only, the proposed framework can be used for analysing the interplay of cost and its resulting benefit in general.

1 Introduction
1.1 Related work
1.2 Contribution and outline
2 Preliminaries
3 Reward-bounded reachability properties and quantiles
3.1 Essentials
3.2 Dualities
3.3 Upper-reward bounded quantiles
3.3.1 Precomputation
3.3.2 Computation scheme
3.3.3 Qualitative quantiles
3.4 Lower-reward bounded quantiles
3.4.1 Precomputation
3.4.2 Computation scheme
3.5 Energy-utility quantiles
3.6 Quantiles under side conditions
3.6.1 Upper reward bounds
3.6.2 Lower reward bounds
3.6.2.1 Maximal reachability probabilities
3.6.2.2 Minimal reachability probabilities
3.7 Reachability quantiles and continuous time
3.7.1 Dualities
4 Expectation Quantiles
4.1 Computation scheme
4.2 Arbitrary models
4.2.1 Existential expectation quantiles
4.2.2 Universal expectation quantiles
5 Implementation
5.1 Computation optimisations
5.1.1 Back propagation
5.1.2 Reward window
5.1.3 Topological sorting of zero-reward sub-MDPs
5.1.4 Parallel computations
5.1.5 Multi-thresholds
5.1.6 Multi-state solution methods
5.1.7 Storage for integer sets
5.1.8 Elimination of zero-reward self-loops
5.2 Integration in Prism
5.2.1 Computation of reward-bounded reachability probabilities
5.2.2 Computation of quantiles in CTMCs
6 Analysed Protocols
6.1 Prism Benchmark Suite
6.1.1 Self-Stabilising Protocol
6.1.2 Leader-Election Protocol
6.1.3 Randomised Consensus Shared Coin Protocol
6.2 Energy-Aware Protocols
6.2.1 Energy-Aware Job-Scheduling Protocol
6.2.1.1 Energy-Aware Job-Scheduling Protocol with side conditions
6.2.1.2 Energy-Aware Job-Scheduling Protocol and expectation quantiles
6.2.1.3 Multiple shared resources
6.2.2 Energy-Aware Bonding Network Device (eBond)
6.2.3 HAECubie Demonstrator
6.2.3.1 Operational behaviour of the protocol
6.2.3.2 Formal analysis
7 Conclusion
7.1 Classification
7.2 Future prospects
Bibliography
List of Figures
List of Tables
|
10 |
Ontology-Mediated Probabilistic Model Checking: Extended Version
Dubslaff, Clemens; Koopmann, Patrick; Turhan, Anni-Yasmin, 20 June 2022
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. On the other hand, description logics (DLs) provide a well-suited formalism to describe and reason about static knowledge, used in many areas to specify domain knowledge in an ontology. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
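A hedged sketch of the guarded-command view (the module and variables are invented, and this Python encoding only mimics the PRISM input style): each command pairs a guard with a probability distribution over updates, and unfolding the enabled commands in a state yields its outgoing transitions:

```python
# Hypothetical guarded-command module in the PRISM style, encoded as Python
# data: each command is (guard, [(probability, update), ...]), where an
# update maps variables to increments.
commands = [
    (lambda s: s["queue"] < 2, [(0.7, {"queue": +1}), (0.3, {})]),  # arrival
    (lambda s: s["queue"] > 0, [(1.0, {"queue": -1})]),             # service
]

def enabled_transitions(state):
    """Unfold the guarded commands into the transitions enabled in `state`."""
    out = []
    for guard, updates in commands:
        if guard(state):
            for prob, delta in updates:
                succ = dict(state)
                for var, d in delta.items():
                    succ[var] += d
                out.append((prob, succ))
    return out

print(enabled_transitions({"queue": 1}))
```

With several commands enabled in the same state, the choice between them is nondeterministic, which is what makes the underlying model an MDP rather than a plain Markov chain; ontology-mediated PMC additionally lets a DL ontology constrain which commands count as enabled.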
|