About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Bounds for the Maximum-Time Stochastic Shortest Path Problem

Kozhokanova, Anara Bolotbekovna 13 December 2014 (has links)
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with an absorbing, cost-free target state, where the objective is to optimize the total expected cost of reaching the target. In almost all cases, the objective in solving a stochastic shortest path problem is to minimize the total expected cost to reach the target state. But in probabilistic model checking, it is also useful to solve a problem where the objective is to maximize the expected cost to reach the target state. This thesis considers the maximum-time stochastic shortest path problem, a special case of the maximum-cost stochastic shortest path problem in which every action has unit cost. The contribution is an efficient approach to computing high-quality bounds on the optimal solution for this problem. The bounds are useful in themselves, but can also be used by other algorithms to accelerate the search for an optimal solution.
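To make the setting concrete, here is a minimal value-iteration sketch for this problem on a hypothetical toy MDP (the states, actions, and probabilities are invented for illustration; this is not the thesis' bounding algorithm). Every action costs one time step, the target is absorbing and cost-free, and because the iterates approach the maximal expected time from below, each iterate is itself a valid lower bound on the optimum.

```python
# Value iteration for the maximum-time stochastic shortest path problem:
# every action costs 1, and we maximise the expected number of steps
# until the absorbing, cost-free target is reached. Assumes every policy
# eventually reaches the target, so the maximum is finite.

# mdp[state][action] = list of (probability, successor) pairs
mdp = {
    "s0": {"a": [(0.5, "s1"), (0.5, "goal")],
           "b": [(1.0, "s1")]},
    "s1": {"a": [(0.3, "s0"), (0.7, "goal")]},
    "goal": {},  # absorbing, cost-free target
}

V = {s: 0.0 for s in mdp}          # V_0 = 0 is a trivial lower bound
for _ in range(1000):
    delta = 0.0
    for s, actions in mdp.items():
        if not actions:            # target state: value stays 0
            continue
        best = max(1.0 + sum(p * V[t] for p, t in succ)
                   for succ in actions.values())
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-9:
        break

print(V)  # each intermediate V["s0"] is a lower bound on the maximal expected time
```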
2

Adaptation Timing in Self-Adaptive Systems

Moreno, Gabriel A. 01 April 2017 (has links)
Software-intensive systems are increasingly expected to operate under changing and uncertain conditions, including not only varying user needs and workloads, but also fluctuating resource capacity. Self-adaptation is an approach that aims to address this problem, giving systems the ability to change their behavior and structure to adapt to changes in themselves and their operating environment without human intervention. Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, some adaptation tactics (atomic adaptation actions that leave the system in a consistent state) have latency and take some time to produce their effect. In that case, reactive adaptation causes the system to lag behind environment changes. Worse, a long-running adaptation action may prevent the system from performing other adaptations until it completes, further limiting its ability to deal effectively with environment changes. To address these limitations and improve the effectiveness of self-adaptation, we present proactive latency-aware adaptation, an approach that considers the timing of adaptation by (i) leveraging predictions of the near-future state of the environment to adapt proactively; (ii) considering the latency of adaptation tactics when deciding how to adapt; and (iii) executing tactics concurrently. We have developed three different solution approaches embodying these principles. One is based on probabilistic model checking, making it inherently able to deal with the stochastic behavior of the environment, and guaranteeing optimal adaptation choices over a finite decision horizon. The second approach uses stochastic dynamic programming to make adaptation decisions, and, by performing part of the computations required to make those decisions off-line, achieves a speedup of an order of magnitude over the first approach without compromising optimality. A third approach makes adaptation decisions based on repertoires of adaptation strategies (predefined compositions of adaptation tactics). This approach is more scalable than the other two because the solution space is smaller, allowing an adaptive system to reap some of the benefits of proactive latency-aware adaptation even when the number of ways in which it could adapt is too large for the other approaches to consider exhaustively. We evaluate the approach using two different classes of systems with different adaptation goals and different repertoires of adaptation strategies. One is a web system whose adaptation goal is utility maximization. The other is a cyber-physical system operating in a hostile environment, where self-adaptation must not only maximize the reward gained, but also keep the probability of surviving a mission above a threshold. In both cases, our results show that proactive latency-aware adaptation improves the effectiveness of self-adaptation with respect to reactive time-agnostic adaptation.
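As a toy illustration of why tactic latency changes the decision (a minimal sketch with invented numbers, not one of the thesis' three solution approaches): a slow tactic such as adding a server must be started against the load predicted for the moment it takes effect, not against the current load.

```python
# Latency-aware proactive decision: the hypothetical "add server" tactic
# takes `latency` steps to take effect, so the decision at time t looks
# at the predicted load at time t + latency. (A real controller would
# also track tactics already in flight.)

predicted_load = [40, 55, 80, 95, 90, 60]   # hypothetical environment forecast
capacity, latency = 70, 2                   # current capacity; tactic lags 2 steps

def should_start_add_server(t):
    """Start the slow tactic now iff load will exceed capacity when it lands."""
    horizon = min(t + latency, len(predicted_load) - 1)
    return predicted_load[horizon] > capacity

for t in range(len(predicted_load)):
    if should_start_add_server(t):
        print(f"t={t}: start adding a server (effective at t={t + latency})")
```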
3

Quantitative Analysis of Information Leakage in Probabilistic and Nondeterministic Systems

Andrés, Miguel 01 July 2011 (has links) (PDF)
As we dive into the digital era, there is growing concern about the amount of personal digital information that is being gathered about us. Websites often track people's browsing behavior, health care insurers gather medical data, and many smartphones and navigation systems store or transmit information that makes it possible to track the physical location of their users at any time. Hence anonymity, and privacy in general, are increasingly at stake. Anonymity protocols counter this concern by offering anonymous communication over the Internet. To ensure the correctness of such protocols, which are often extremely complex, a rigorous framework is needed in which anonymity properties can be expressed, analyzed, and ultimately verified. Formal methods provide a set of mathematical techniques that allow us to rigorously specify and verify anonymity properties. This thesis addresses the foundational aspects of formal methods for applications in security, and in particular in anonymity. More concretely, we develop frameworks for the specification of anonymity properties and propose algorithms for their verification. Since in practice anonymity protocols always leak some information, we focus on quantitative properties which capture the amount of information leaked by a protocol. We start our research on anonymity from its very foundations, namely conditional probabilities, the key ingredient of most quantitative anonymity properties. In Chapter 2 we present cpCTL, the first temporal logic making it possible to specify conditional probabilities. In addition, we present an algorithm to verify cpCTL formulas in a model-checking fashion. This logic, together with the model checker, allows us to specify and verify quantitative anonymity properties over complex systems where probabilistic and nondeterministic behavior may coexist. We then turn our attention to more practical grounds: the construction of algorithms to compute information leakage. More precisely, in Chapter 3 we present polynomial algorithms to compute the (information-theoretic) leakage of several kinds of fully probabilistic protocols (i.e. protocols without nondeterministic behavior). The techniques presented in this chapter are the first to enable the computation of (information-theoretic) leakage in interactive protocols. In Chapter 4 we attack a well-known problem in distributed anonymity protocols, namely full-information scheduling. To overcome this problem, we propose an alternative definition of schedulers together with several new definitions of anonymity (varying according to the attacker's power), and revise the well-known definition of strong anonymity from the literature. Furthermore, we provide a technique to verify that a distributed protocol satisfies some of the proposed definitions. In Chapter 5 we provide (counterexample-based) techniques to debug complex systems, allowing for the detection of flaws in security protocols. Finally, in Chapter 6 we briefly discuss extensions to the frameworks and techniques proposed in Chapters 3 and 4.
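For a fully probabilistic protocol, the information-theoretic leakage mentioned above can be read off a channel matrix from secrets to observables as mutual information. Below is a minimal sketch with a hypothetical two-secret, two-observation channel; it illustrates the quantity being computed, not the thesis' polynomial algorithms for interactive protocols.

```python
from math import log2

# Information-theoretic leakage of a channel from secrets X to
# observables Y: leakage = I(X; Y) = H(Y) - H(Y|X).
prior = [0.5, 0.5]                      # P(X), hypothetical
channel = [[0.8, 0.2],                  # P(Y | X = x0)
           [0.3, 0.7]]                  # P(Y | X = x1)

def H(dist):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in dist if p > 0)

# marginal P(Y) and conditional entropy H(Y|X)
p_y = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
       for y in range(len(channel[0]))]
h_y_given_x = sum(prior[x] * H(channel[x]) for x in range(len(prior)))

print("leakage (bits):", H(p_y) - h_y_given_x)   # ~0.19 bits here
```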
4

Safety-aware apprenticeship learning

Zhou, Weichao 03 July 2018 (has links)
It is well acknowledged in the AI community that finding a good reward function for reinforcement learning is extremely challenging. Apprenticeship learning (AL) is a class of “learning from demonstration” techniques in which the reward function of a Markov Decision Process (MDP) is unknown to the learning agent, and the agent uses inverse reinforcement learning (IRL) methods to recover the expert policy from a set of expert demonstrations. However, because the agent learns exclusively from observations, there is no verification of, or guarantee for, the learnt policy with respect to a given constraint on the probability of the agent running into unwanted situations. In this dissertation, we study the problem of how to guide AL to learn a policy that is inherently safe while still meeting its learning objective. By combining formal methods with imitation learning, a Counterexample-Guided Apprenticeship Learning algorithm is proposed. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both safety and performance of the learnt policy. This algorithm guarantees that, given a formal safety specification defined in probabilistic temporal logic, the learnt policy satisfies this specification. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.
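The core idea of verifying candidate policies inside the learning loop can be illustrated without any of the thesis' machinery. Below is a minimal sketch (the states, transition matrices, and the 0.1 bound are all invented) that checks the Markov chain induced by each hypothetical learnt policy against a PCTL-style requirement "P(reach unsafe) <= 0.1" and rejects violators.

```python
import numpy as np

def prob_reach_unsafe(P, start, unsafe, absorbing):
    """P(eventually reach `unsafe` from `start`) in an absorbing Markov chain,
    via the standard linear fixed point x = (I - Q)^{-1} R[:, unsafe]."""
    n = len(P)
    transient = [s for s in range(n) if s not in absorbing]
    A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, [unsafe])].sum(axis=1)
    x = np.linalg.solve(A, b)
    return x[transient.index(start)]

# states: 0 = start, 1 = work, 2 = unsafe, 3 = done (hypothetical policies)
candidates = {
    "aggressive": np.array([[0, .90, .10, 0],
                            [0, 0,   .20, .80],
                            [0, 0,  1.0,  0],
                            [0, 0,   0,  1.0]]),
    "cautious":   np.array([[0, .95, .05, 0],
                            [0, 0,   .02, .98],
                            [0, 0,  1.0,  0],
                            [0, 0,   0,  1.0]]),
}
for name, P in candidates.items():
    p = prob_reach_unsafe(P, start=0, unsafe=2, absorbing={2, 3})
    print(name, "P(unsafe) =", round(p, 3),
          "->", "reject" if p > 0.1 else "accept")
```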
5

Systematic model-based safety assessment via probabilistic model checking

GOMES, Adriano José Oliveira 31 January 2010 (has links)
Safety assessment is a well-known process used to guarantee that the safety constraints of a critical system are met. Within it, quantitative safety assessment deals with these constraints in a numerical (probabilistic) context. Safety analysis methods such as traditional Fault Tree Analysis (FTA) are used in the quantitative safety assessment process, following certification guidelines (for example, ARP4761, the Aerospace Recommended Practice guide). However, this method is usually costly and demands considerable time and effort to validate a system as a whole, since on average some 10,000 fault trees are built for a single aircraft, and since fault trees depend heavily on human skill to cope with the time limitations that restrict the scope and level of detail the analysis and its results can reach. Certification authorities also allow the use of Markov analysis and, although Markov models are more powerful than fault trees, industry rarely adopts them because they are more complex and harder to handle. As a result, FTA has been widely used in this process, mainly because it is conceptually simpler and easier to understand. As the complexity and time-to-market of systems increase, interest in addressing safety issues during the early design phases, rather than in the intermediate/final phases, has made model-based designs, tools, and techniques common; Simulink is the standard example currently used in the aeronautics industry. Even in this scenario, however, current solutions follow what engineers already did before. Formal methods, that is, languages, tools, and methods based on logic and discrete mathematics that do not follow traditional engineering approaches, can provide innovative, low-cost solutions for engineers. This dissertation defines a strategy for quantitative safety assessment based on Markov analysis. Instead of dealing with Markov models directly, however, we use the formal language Prism (a Prism specification is semantically interpreted as a Markov model). Moreover, this Prism specification is extracted systematically from a high-level model (Simulink diagrams annotated with the system's fault logic) by applying translation rules. The quantitative verification of the system's safety requirements is carried out with the Prism model checker, in which the safety requirements become probabilistic temporal-logic formulas. The immediate goal of our work is to avoid the effort of building several fault trees until discovering that a safety requirement has been violated: Prism does not build fault trees to reach this result; it simply checks, in one pass, whether or not a safety requirement is satisfied by the entire model.
Finally, our strategy is illustrated with a simple but representative system (a pilot project) designed by Embraer.
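The flavour of checking a quantitative safety requirement in one pass, rather than via fault trees, can be sketched in a few lines. The fault logic (the system fails if the power supply fails, or if both redundant channels fail), the failure probabilities, and the 1e-4 requirement below are all hypothetical, and real analyses run over Markov models in Prism rather than this brute-force enumeration.

```python
from itertools import product

# per-component failure probabilities (invented numbers)
p_fail = {"power": 1e-5, "chan_a": 1e-3, "chan_b": 1e-3}

def system_fails(power, chan_a, chan_b):
    return power or (chan_a and chan_b)   # annotated fault logic

# enumerate all component fail/ok combinations and sum the failing ones
p_system = 0.0
for outcome in product([False, True], repeat=3):
    prob = 1.0
    for name, failed in zip(p_fail, outcome):
        prob *= p_fail[name] if failed else 1 - p_fail[name]
    if system_fails(*outcome):
        p_system += prob

requirement = 1e-4   # hypothetical requirement: P(system failure) <= 1e-4
print(f"P(system failure) = {p_system:.3e}; requirement "
      f"{'met' if p_system <= requirement else 'violated'}")
```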
6

Computer-Aided Synthesis of Probabilistic Models

Andriushchenko, Roman January 2020 (has links)
This thesis deals with the problem of automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the one that satisfies a given specification? Such families often arise in a wide range of engineering areas when modelling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. We examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and on counterexample-guided abstraction refinement (CEGAR), and we propose a novel integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only competitive with state-of-the-art methods, but in most cases can significantly outperform existing approaches, sometimes by several orders of magnitude.
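To show the problem being attacked (not the integrated CEGIS/CEGAR method itself), here is a minimal sketch of the naive baseline: exhaustively enumerate a hypothetical one-hole family of Markov chains and check each member against a reachability specification. The chain topology, the hole values, and the 0.8 threshold are invented.

```python
# Family of Markov chains indexed by a hole p:
#   start -> goal with prob p, start -> retry with prob 1-p,
#   retry -> goal with prob 0.5, retry -> fail with prob 0.5
# Specification: P(reach goal) >= 0.8.

def prob_goal(p):
    return p + (1 - p) * 0.5   # closed form for this tiny chain

family = [round(0.1 * k, 1) for k in range(1, 10)]   # hole values 0.1 .. 0.9
satisfying = [p for p in family if prob_goal(p) >= 0.8]
print("family members meeting the spec:", satisfying)
# CEGIS/CEGAR-style methods avoid this exhaustive enumeration by pruning
# whole subfamilies at once using counterexamples or abstraction.
```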
7

Formal methods for the analysis of wireless network protocols

Fruth, Matthias January 2011 (has links)
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. In contrast to testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols. First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods, and their implementation in a tool. Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties. Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models. Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies. This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to slot allocation protocols.
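Contention resolution in ZigBee-style networks comes down to randomised backoff. The sketch below estimates by simulation (the complementary technique the thesis combines with model checking, not the exhaustive state-space exploration itself) how often two of n nodes pick the same earliest backoff slot and collide; all parameters are invented.

```python
import random

def collision_probability(n_nodes=5, window=8, trials=100_000):
    """Monte-Carlo estimate for a slotted random-backoff scheme: each node
    picks a slot uniformly from the backoff window; the earliest slot wins,
    and a collision occurs if two or more nodes picked that slot."""
    collisions = 0
    for _ in range(trials):
        slots = [random.randrange(window) for _ in range(n_nodes)]
        winner = min(slots)
        if slots.count(winner) > 1:     # two nodes fire in the same slot
            collisions += 1
    return collisions / trials

print("estimated collision probability:", collision_probability())
```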
8

Formal Methods for Probabilistic Energy Models

Daum, Marcus 11 April 2019 (has links)
The energy consumption that arises from the use of information-processing systems contributes significantly to environmental pollution and accounts for a large share of operating costs. We therefore need to find ways to reduce the energy consumption of such systems. When trying to save energy it is important to ensure that the utility (e.g., user experience) of a system is not unnecessarily degraded, which requires a careful trade-off analysis between the consumed energy and the resulting utility. Research on energy efficiency has therefore become a very active and important topic that concerns many different scientific areas and is also of interest to industry. The concept of quantiles is well known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. For instance, with the help of quantiles it is possible to reason about the minimal energy required to obtain a desired system behaviour in a satisfactory manner, e.g., such that a required user experience is achieved with sufficient probability. Quantiles also allow the determination of the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. As these examples illustrate measures of interest for the analysis of energy-aware systems, it is clearly beneficial to extend formal analysis methods with the ability to compute quantiles. In this monograph, we show how to use quantiles as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. To this end, we present algorithms for their computation over Markovian models. We further investigate techniques for improving the computational performance of implementations of these algorithms; the main improvement exploits the specific characteristics of the linear programs that need to be solved for the computation of quantiles. The improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM. The performance of this implementation is demonstrated by means of different protocols, with an emphasis on the trade-off between the consumed energy and the resulting utility.
Since the introduced methods are not restricted to energy-utility analysis, the proposed framework can be used to analyse the interplay between cost and the resulting benefit in general.
Contents: 1 Introduction; 2 Preliminaries; 3 Reward-bounded reachability properties and quantiles; 4 Expectation quantiles; 5 Implementation; 6 Analysed protocols; 7 Conclusion.
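A quantile query of the kind described above can be phrased on a toy Markov chain with an energy cost on each transition: find the smallest budget b such that the goal is reached with accumulated energy at most b with probability at least 0.9. The chain, the costs, and the threshold below are invented, and the brute-force budget scan stands in for the far more efficient computation schemes developed in the thesis.

```python
from functools import lru_cache

# chain[state] = tuple of (probability, successor, energy_cost)
chain = {
    "idle": ((0.6, "done", 2), (0.4, "busy", 1)),
    "busy": ((0.5, "done", 3), (0.5, "idle", 1)),
    "done": (),
}

@lru_cache(maxsize=None)
def p_reach(state, budget):
    """P(reach 'done' from `state` with accumulated energy <= budget)."""
    if state == "done":
        return 1.0
    # branches whose cost already exceeds the budget contribute probability 0
    return sum(p * p_reach(succ, budget - cost)
               for p, succ, cost in chain[state] if cost <= budget)

threshold, budget = 0.9, 0
while p_reach("idle", budget) < threshold:
    budget += 1
print("energy quantile:", budget)   # smallest sufficient budget (4 here)
```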
9

Ontology-Mediated Probabilistic Model Checking: Extended Version

Dubslaff, Clemens, Koopmann, Patrick, Turhan, Anni-Yasmin 20 June 2022 (has links)
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. On the other hand, description logics (DLs) provide a well-suited formalism to describe and reason about static knowledge, used in many areas to specify domain knowledge in an ontology. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
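To give the flavour of ontology-mediated guarded commands (a minimal sketch, not the paper's formalism or its DL-reasoner integration): a command's guard may consult an ontology-derived predicate, here stubbed as a plain set of facts, so that static domain knowledge decides which probabilistic updates are enabled.

```python
# The "ontology" is a stub set of facts standing in for a DL reasoner;
# each command loosely mirrors Prism's `guard -> prob : update` syntax.

ontology_facts = {("server1", "Overloadable")}          # hypothetical ABox

def entails(individual, concept):
    return (individual, concept) in ontology_facts      # stand-in for DL reasoning

# each command: (guard over the state, list of (probability, update))
commands = [
    (lambda s: s["queue"] > 5 and entails("server1", "Overloadable"),
     [(0.9, lambda s: {**s, "queue": s["queue"] - 3}),   # burst served
      (0.1, lambda s: {**s, "queue": s["queue"]})]),     # burst fails
    (lambda s: s["queue"] <= 5,
     [(1.0, lambda s: {**s, "queue": max(0, s["queue"] - 1)})]),
]

state = {"queue": 8}
for guard, updates in commands:
    if guard(state):                 # ontology knowledge enables this command
        for prob, update in updates:
            print(prob, "->", update(state))
```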
10

Tools and Techniques for Evaluating Reliability Trade-offs for Nano-Architectures

Bhaduri, Debayan 20 May 2004 (has links)
It is expected that nano-scale devices and interconnections will introduce unprecedented levels of defects in the substrates, and architectural designs need to accommodate the uncertainty inherent at such scales. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem, and too much or too little redundancy may cause degradation in reliability. The key challenge lies in determining the granularity at which defect tolerance is designed, and the level of redundancy needed to achieve a specific level of reliability. Analytical probabilistic models for evaluating such reliability-redundancy trade-offs are error-prone and cumbersome, and do not scale well for complex networks of gates. In this thesis we develop tools and techniques that can evaluate the reliability measures of combinational circuits and can be used to analyze reliability-redundancy trade-offs for different defect-tolerant architectural configurations. In particular, we have developed two tools: one based on probabilistic model checking, named NANOPRISM, and a MATLAB-based tool called NANOLAB. We also illustrate the effectiveness of our reliability analysis tools by pointing out certain anomalies which are counter-intuitive but can easily be discovered by these tools, thereby providing better insight into defect-tolerant design decisions. We believe that these tools will help further research and pedagogical interests in this area, expedite the reliability analysis process, and enhance the accuracy of establishing reliability-redundancy trade-off points. / Master of Science
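The "too much or too little redundancy" effect is easy to see with r-modular redundancy and a perfect majority voter; this is a minimal sketch unrelated to the NANOPRISM/NANOLAB internals, with invented unit reliabilities. For unit reliability below 0.5, adding replicas makes the system worse, while above 0.5 it helps with diminishing returns.

```python
from math import comb

def rmr_reliability(p, r):
    """P(majority of r i.i.d. units work), given unit reliability p and a
    perfect majority voter: sum of binomial terms for k > r/2 working units."""
    k_min = r // 2 + 1
    return sum(comb(r, k) * p**k * (1 - p)**(r - k)
               for k in range(k_min, r + 1))

for p in (0.45, 0.85, 0.99):
    print(f"p={p}:", {r: round(rmr_reliability(p, r), 4) for r in (1, 3, 5, 7)})
# p=0.45: reliability *drops* as r grows; p=0.85 and p=0.99: it improves.
```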
