51

Dependency analysis of object-oriented programs using probabilistic models of the inputs

Bouchoucha, Arbi 09 1900 (has links)
The task of maintaining and understanding object-oriented (OO) programs is becoming increasingly costly. Dependency analysis can be a solution to facilitate these engineering tasks. However, dependency analysis is both important and difficult. We propose a framework for studying the internal dependencies of OO programs in a probabilistic setting, where the program inputs are modeled either as a random vector or as a Markov chain. In that setting, coupling metrics become random variables whose probability distributions can be studied via Monte-Carlo simulation. The obtained distributions provide an entry point for understanding the internal dependencies among program elements, as well as their general behaviour. This framework is appropriate for the (common) situation where the value taken by the metric depends on the program inputs and where those inputs are not fixed a priori. We provide a concrete illustration with two case studies.
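As a rough illustration of the probabilistic setting described above (not taken from the thesis; the toy metric, the input model, and all names are hypothetical), the following Python sketch samples a random input vector and collects the empirical distribution of a coupling-like metric by Monte-Carlo simulation:

```python
import random
from collections import Counter

def coupling_metric(x):
    """Toy stand-in for a dynamic coupling measure: number of distinct
    callee 'classes' exercised by one execution on input vector x."""
    callees = set()
    if x[0] > 0.5:
        callees.update({"Parser", "Validator"})
    if x[1] > 0.8:
        callees.add("Logger")
    if x[0] + x[1] > 1.2:
        callees.update({"Cache", "Serializer"})
    return len(callees)

def monte_carlo_distribution(n_samples=10_000, seed=42):
    """Sample the program inputs (here a 2-dimensional random vector with
    independent uniform components) and collect the induced distribution
    of the metric."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        x = (rng.random(), rng.random())
        counts[coupling_metric(x)] += 1
    return {value: c / n_samples for value, c in sorted(counts.items())}

if __name__ == "__main__":
    for value, prob in monte_carlo_distribution().items():
        print(f"metric = {value}: estimated probability {prob:.3f}")
```

With a real program, `coupling_metric` would be replaced by an instrumented execution, and the input model could equally be a Markov chain over usage scenarios.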
52

A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select

Baier, Christel, Engel, Benjamin, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus 03 December 2013 (has links) (PDF)
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
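The following sketch gives a flavour of the quantitative question studied in the paper, using a crude Monte-Carlo event simulation rather than the continuous-time Markov chain model and PRISM analysis actually used by the authors; all rates and durations are made-up parameters:

```python
import random

def simulate_overlap_probability(read_rate=50.0, write_rate=1.0,
                                 read_time=1e-3, write_time=1e-3,
                                 horizon=200.0, seed=1):
    """Crude event simulation: readers and a single writer arrive as Poisson
    processes; a 'race' is a read whose interval overlaps a write interval.
    Returns the fraction of reads that observed a concurrent write."""
    rng = random.Random(seed)

    def poisson_arrivals(rate):
        t, times = 0.0, []
        while t < horizon:
            t += rng.expovariate(rate)
            times.append(t)
        return times

    writes = [(t, t + write_time) for t in poisson_arrivals(write_rate)]
    reads = [(t, t + read_time) for t in poisson_arrivals(read_rate)]

    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    raced = sum(1 for r in reads if any(overlaps(r, w) for w in writes))
    return raced / max(len(reads), 1)

if __name__ == "__main__":
    p = simulate_overlap_probability()
    print(f"estimated per-read race probability: {p:.5f}")
```

Low estimated race probabilities are exactly the regime in which PWCS-style optimistic synchronization is justified.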
53

Modeling of natural catastrophes

Zuzák, Jaroslav January 2011 (has links)
This thesis introduces various approaches to natural catastrophe risk assessment in the (re)insurance environment. The main emphasis, and further elaboration, is on probabilistic models in comparison with the standard model proposed by Solvency II. The outcomes of natural catastrophe modeling play an important role in the design of proper actuarial models related to catastrophe risk. More specifically, it is shown that they can be understood entirely within a wider actuarial context, namely risk theory. Within the Solvency II framework, probabilistic model outcomes are translated, by means of the proposed decomposition methodology, into a language similar to that of the standard formula, so that results implied by the probabilistic model and by the standard formula can be compared. This enables both a comparison of the probabilistic model's implied dependence structure with the standardized correlations assumed in Solvency II, and a comparison of the Solvency II scenario-year loss factors with the damage factors implied by probabilistic models in defined CRESTA zones. The introduced decomposition methodology is illustrated by flood and windstorm model outcomes calculated on exposure data of Czech insurance companies and compared to the respective standard-formula parameters and outcomes. Finally, other applications of the proposed decomposition methodology are introduced, such as measuring the diversification effect or blending results calculated by different models or even by different approaches to natural catastrophe risk assessment.
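To make the comparison concrete, here is a minimal, purely illustrative sketch (the figures, correlation, and loss distributions are invented and are not from the thesis) of the two quantities being contrasted: a square-root correlation aggregation in the style of the Solvency II standard formula, and a 99.5% quantile taken from a simulated joint loss distribution of a probabilistic model:

```python
import math
import random

# Standard-formula-style aggregation: SCR = sqrt( sum_ij corr_ij * scr_i * scr_j )
def aggregate(scrs, corr):
    return math.sqrt(sum(corr[i][j] * scrs[i] * scrs[j]
                         for i in range(len(scrs)) for j in range(len(scrs))))

def simulated_quantile(n=100_000, q=0.995, seed=7):
    """Toy probabilistic model: lognormal flood and windstorm losses with a
    common shock factor inducing dependence; returns the 99.5% total loss."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        shock = rng.gauss(0.0, 0.5)              # shared seasonal/climatic driver
        flood = math.exp(rng.gauss(2.0, 0.8) + shock)
        wind  = math.exp(rng.gauss(2.5, 0.6) + shock)
        totals.append(flood + wind)
    totals.sort()
    return totals[int(q * n)]

if __name__ == "__main__":
    scr_flood, scr_wind = 120.0, 180.0           # illustrative stand-alone charges
    corr = [[1.00, 0.25], [0.25, 1.00]]          # Solvency II-style correlation
    print("standard-formula aggregate:", round(aggregate([scr_flood, scr_wind], corr), 1))
    print("simulated 99.5% total loss:", round(simulated_quantile(), 1))
```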
54

Estimation of distribution algorithms for ab initio protein structure prediction

Daniel Rodrigo Ferraz Bonetti 05 March 2015 (has links)
Proteins are molecules that perform critical roles in living organisms and are essential for life. To understand the function of a protein, its 3D structure must be known. However, finding the protein structure is an expensive and time-consuming task, requiring highly skilled professionals. Aiming to overcome this limitation, computational methods for Protein Structure Prediction (PSP) have been investigated in order to predict the protein structure from its amino acid sequence. Most computational methods require knowledge of structures already determined by experimental methods in order to predict an unknown protein. Although computational methods such as Rosetta, I-Tasser and Quark have shown success in their predictions, they are only capable of predicting structures that are quite similar to already known, experimentally determined ones. The use of such prior knowledge in the predictions of Rosetta, I-Tasser and Quark may lead to biased predictions. In order to develop a computational algorithm for PSP free of bias, we developed an Estimation of Distribution Algorithm (EDA) applied to PSP with a full-atom representation and an ab initio model. An ab initio algorithm is mainly of interest when dealing with proteins that have low similarity to known structures. In this work, we developed an Estimation of Distribution Algorithm with three probabilistic models: univariate, bivariate and hierarchical. The univariate model handles the multi-modality of the distribution of a single variable. The bivariate model treats the dihedral angles (Φ, Ψ) within an amino acid as correlated variables. The hierarchical approach splits the original problem into subproblems and attempts to treat them separately. The experiments show that it is indeed possible to achieve better results when the (Φ, Ψ) correlation is modeled. The hierarchical model also improved the quality of the results, mainly for proteins with more than 50 residues. In addition, we compared the proposed technique against other metaheuristics from the literature: Random Walk, Monte Carlo, Genetic Algorithm and Differential Evolution. The results show that even a less efficient metaheuristic such as Random Walk managed to find the correct structure, but only by using substantial prior knowledge (a prediction that may therefore be biased). On the other hand, the EDA proposed in this work was able to obtain the expected protein structure without using prior knowledge, characterizing a purely ab initio (bias-free) prediction.
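A minimal sketch of an estimation-of-distribution loop in the spirit of the bivariate (Φ, Ψ) model is shown below; the energy function, population sizes, and per-residue Gaussian model are simplifications invented for illustration and do not reproduce the full-atom model of the thesis:

```python
import math
import random

N_RES = 5  # residues; each has dihedral angles (phi, psi) in degrees

def energy(conf):
    """Dummy energy: prefers an alpha-helix-like region (phi ~ -60, psi ~ -45)."""
    return sum((phi + 60.0) ** 2 + (psi + 45.0) ** 2 for phi, psi in conf)

def fit_bivariate(population):
    """Per residue, fit means, std-devs and correlation of (phi, psi) from the
    selected individuals (the 'bivariate' probabilistic model)."""
    model = []
    for r in range(N_RES):
        phis = [ind[r][0] for ind in population]
        psis = [ind[r][1] for ind in population]
        m_phi, m_psi = sum(phis) / len(phis), sum(psis) / len(psis)
        s_phi = max(math.sqrt(sum((p - m_phi) ** 2 for p in phis) / len(phis)), 1.0)
        s_psi = max(math.sqrt(sum((p - m_psi) ** 2 for p in psis) / len(psis)), 1.0)
        cov = sum((p - m_phi) * (q - m_psi) for p, q in zip(phis, psis)) / len(phis)
        rho = max(-0.99, min(0.99, cov / (s_phi * s_psi)))
        model.append((m_phi, s_phi, m_psi, s_psi, rho))
    return model

def sample(model, rng):
    conf = []
    for m_phi, s_phi, m_psi, s_psi, rho in model:
        phi = rng.gauss(m_phi, s_phi)
        # conditional Gaussian: psi | phi, capturing the (phi, psi) correlation
        cond_mean = m_psi + rho * s_psi / s_phi * (phi - m_phi)
        psi = rng.gauss(cond_mean, s_psi * math.sqrt(1 - rho ** 2))
        conf.append((phi, psi))
    return conf

def eda(pop_size=200, elite=40, generations=30, seed=3):
    rng = random.Random(seed)
    pop = [[(rng.uniform(-180, 180), rng.uniform(-180, 180)) for _ in range(N_RES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        model = fit_bivariate(pop[:elite])          # estimate the distribution
        pop = pop[:elite] + [sample(model, rng)     # resample the rest
                             for _ in range(pop_size - elite)]
    return min(pop, key=energy)

if __name__ == "__main__":
    best = eda()
    print("best energy:", round(energy(best), 2))
```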
55

Mobile Robot Localization Using Camera

Vaverka, Filip January 2015 (has links)
This thesis describes the design and implementation of an approach to mobile robot localization. The proposed method is based purely on images taken by a monocular camera. The described solution handles localization as an association problem and therefore falls into the category of topological localization methods. The method is based on a generative probabilistic model of the environment's appearance. The proposed solution is capable of eliminating some of the difficulties that are common in traditional localization approaches.
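As a rough sketch of how a topological, appearance-based localizer can be phrased probabilistically (the places, transition model, and likelihood values are hypothetical and do not reproduce the generative model of the thesis), a discrete Bayes update over places could look like this:

```python
def bayes_update(prior, likelihoods, transition):
    """One step of a discrete Bayes filter over topological places:
    predict with the place-transition model, then weight by the
    appearance likelihood of the current camera image."""
    places = list(prior)
    predicted = {p: sum(prior[q] * transition[q][p] for q in places) for p in places}
    unnorm = {p: predicted[p] * likelihoods[p] for p in places}
    z = sum(unnorm.values()) or 1.0
    return {p: v / z for p, v in unnorm.items()}

if __name__ == "__main__":
    places = ["corridor", "office", "lab"]
    prior = {p: 1.0 / 3 for p in places}
    transition = {                       # illustrative place-to-place topology
        "corridor": {"corridor": 0.6, "office": 0.2, "lab": 0.2},
        "office":   {"corridor": 0.3, "office": 0.6, "lab": 0.1},
        "lab":      {"corridor": 0.3, "office": 0.1, "lab": 0.6},
    }
    # Hypothetical image likelihoods, e.g. from a generative appearance model.
    image_likelihood = {"corridor": 0.05, "office": 0.80, "lab": 0.15}
    posterior = bayes_update(prior, image_likelihood, transition)
    print(max(posterior, key=posterior.get), posterior)
```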
56

Text-Based Information Retrieval Using Relevance Feedback

Krishnan, Sharenya January 2011 (has links)
Europeana, a freely accessible digital library intended to make Europe's cultural and scientific heritage available to the public, was founded by the European Commission in 2008. The goal was to deliver semantically enriched digital content with multilingual access. Even though the amount of content grew, the portal increasingly faced the problem of retrieving information held in unstructured form. To complement the Europeana portal services, ASSETS (Advanced Search Service and Enhanced Technological Solutions) was introduced, with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process, along with the implementation of the chosen relevance feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with the analysis made using relevance feedback, a discussion of the implemented model, and an assessment of the model's future use, both as a continuation of my work and within ASSETS.
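One classic relevance-feedback formulation that generates automatic query expansion is the Rocchio update; the sketch below shows it on toy term-weight vectors and is meant as a generic example rather than the exact model implemented in the thesis:

```python
from collections import defaultdict

def rocchio(query_vec, relevant_docs, nonrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector towards the centroid of
    relevant documents and away from the centroid of non-relevant ones."""
    new_q = defaultdict(float)
    for term, w in query_vec.items():
        new_q[term] += alpha * w
    for docs, weight in ((relevant_docs, beta), (nonrelevant_docs, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for term, w in doc.items():
                new_q[term] += weight * w / len(docs)
    # Keep only positive weights; the largest new terms form the query expansion.
    return {t: w for t, w in new_q.items() if w > 0}

if __name__ == "__main__":
    query = {"cultural": 1.0, "heritage": 1.0}
    relevant = [{"cultural": 0.5, "heritage": 0.4, "museum": 0.8},
                {"heritage": 0.6, "archive": 0.7}]
    nonrelevant = [{"football": 0.9, "heritage": 0.1}]
    expanded = rocchio(query, relevant, nonrelevant)
    print(sorted(expanded.items(), key=lambda kv: -kv[1]))
```

The highest-weighted new terms (here "museum" and "archive") would be appended to the original query in the next retrieval round.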
57

Quantitative Analysis of Configurable and Reconfigurable Systems

Dubslaff, Clemens 21 March 2022 (has links)
The often huge configuration spaces of modern software systems render the detection, prediction, and explanation of defects and inadvertent behaviors challenging tasks. Besides configurability, a further source of complexity is the integration of cyber-physical systems (CPSs). Behaviors in CPSs depend on quantitative aspects such as throughput, energy consumption, and probability of failure, which all play a central role in new technologies like 5G networks, tactile internet, autonomous driving, and the internet of things. The manifold environmental influences and human interactions within CPSs might also trigger reconfigurations, e.g., to ensure quality of service through adaptivity or to fulfill users' wishes by adjusting program settings and performing software updates. Such reconfigurations add yet another source of complexity to the quest of modeling and analyzing modern software systems. The main contribution of this thesis is a formal compositional modeling and analysis framework for systems that involve configurability, adaptivity through reconfiguration, and quantitative aspects. Existing modeling approaches for configurable systems are commonly divided into annotative and compositional approaches, both having complementary strengths and weaknesses. It has been a well-known open problem in the configurable-systems community whether there is a hybrid approach that combines the strengths of both specification approaches. We provide a formal solution to this problem, prove its correctness, and show practical applicability to actual configurable systems by introducing a formal analysis framework and its implementation. While existing family-based analysis approaches for configurable systems have mainly focused on software systems, we show the effectiveness of such approaches also in the hardware domain. To explicate the impact of configuration options on analysis results, we introduce the notion of feature causality, which is inspired by the seminal counterfactual definition of causality by Halpern and Pearl. By means of several experimental studies, including a velocity controller of an aircraft system that already required new techniques for its analysis, we show how our notion of causality facilitates identifying root causes, estimating the effects of features, and detecting feature interactions.
Contents: 1 Introduction, 2 Foundations, 3 Probabilistic Configurable Systems, 4 Analysis and Synthesis in Reconfigurable Systems, 5 Experimental Studies, 6 Causality in Configurable Systems, 7 Conclusion
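To illustrate the counterfactual flavour of feature causality in a drastically simplified form (singleton causes only, an invented defect predicate, and an exhaustively enumerated configuration space; the thesis's definition is considerably more general), one can check which feature switches flip the effect:

```python
from itertools import product

def singleton_feature_causes(features, effect):
    """Very simplified counterfactual check: feature f is reported as a cause
    of the effect in configuration c if the effect holds under c and flipping
    only f in c makes the effect disappear."""
    causes = []
    for values in product([False, True], repeat=len(features)):
        conf = dict(zip(features, values))
        if not effect(conf):
            continue
        for f in features:
            flipped = dict(conf, **{f: not conf[f]})
            if not effect(flipped):
                causes.append((conf, f))
    return causes

if __name__ == "__main__":
    # Hypothetical defect: appears whenever ENCRYPTION is on while CACHE is off.
    effect = lambda c: c["ENCRYPTION"] and not c["CACHE"]
    for conf, feature in singleton_feature_causes(["ENCRYPTION", "CACHE", "LOGGING"], effect):
        print(f"in {conf}: {feature} is a counterfactual cause")
```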
58

Formal Configuration of Fault-Tolerant Systems

Herrmann, Linda 28 May 2019 (has links)
Bit flips are known to be a source of strange system behavior, failures, and crashes. They can cause dramatic financial loss, security breaches, or even harm human life. Caused by energized particles arising from, e.g., cosmic rays or heat, they are hardly avoidable. As transistor sizes become smaller and smaller, modern hardware becomes more and more prone to bit flips. This yields high scientific interest, and many techniques to make systems more resilient against bit flips have been developed. Fault-tolerance techniques are techniques that detect and react to bit flips or their effects. Before these techniques are used, they typically need to be configured for the particular system they shall protect, the degree of resilience that shall be achieved, and the environment. State-of-the-art configuration approaches have a high risk of being imprecise, of being affected by undesired side effects, and of yielding questionable resilience measures. In this thesis we encourage the use of formal methods for resiliency configuration, point out advantages, and investigate difficulties. As examples, we investigate two systems that are equipped with fault-tolerance techniques, and we apply parametric variants of probabilistic model checking to obtain optimal configurations for pre-defined resilience criteria. Probabilistic model checking is an automated formal method that operates on Markov models, i.e., state-based models with probabilistic transitions, where costs or rewards can be assigned to states and transitions. Probabilistic model checking can be used to compute, e.g., the probability of having a failure, the conditional probability of detecting an error in case of a bit-flip occurrence, or the overhead that arises due to error detection and correction. Parametric variants of probabilistic model checking allow parameters in the transition probabilities and in the costs and rewards. Instead of computing values for probabilities and overhead, parametric variants compute rational functions. These functions can then be analyzed for optimality. The considered fault-tolerant systems are inspired by the work of project partners. The first system is an inter-process communication protocol as used in the Fiasco.OC microkernel. The communication structures provided by the kernel are protected against bit flips by a fault-tolerance technique. The second system is inspired by the redo-based fault-tolerance technique HAFT. This technique protects an application against bit flips by partitioning the application's instruction flow into transactions, adding redundancy, and redoing single transactions in case of error detection. Driven by these examples, we study challenges of using probabilistic model checking for fault-tolerance configuration and present solutions. We show that small transition probabilities, as they arise in error models, can be a cause of previously known accuracy issues when using numeric solvers in probabilistic model checking. We argue that the use of non-iterative methods is an acceptable alternative. We discuss the usability of the rational functions for finding optimal configurations, and show that for relatively short rational functions the use of mathematical methods is appropriate. The redo-based fault-tolerance model suffers from the well-known state-explosion problem. We present a new technique, counter-based factorization, that tackles this problem for system models that do not scale because of a counter, as is the case for this fault-tolerance model.
This technique utilizes the chain-like structure that arises from the counter, splits the model into several parts, and computes local characteristics (in terms of rational functions) for these parts. These local characteristics can then be combined to retrieve global resiliency and overhead measures. The rational functions retrieved for the redo-based fault-tolerance model are huge: even for small model instances they exceed one gigabyte in size. We therefore cannot apply exact mathematical methods to these functions. Instead, we use the short, matrix-based representation that arises from factorization to evaluate the functions point-wise. Using this approach, we systematically explore the design space of the redo-based fault-tolerance model and retrieve sweet-spot configurations.
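A tiny example of the parametric style of analysis, sketched with sympy on an invented two-parameter redo-style Markov chain (not the Fiasco.OC or HAFT models from the thesis), shows how a reachability probability becomes a rational function that can later be evaluated point-wise:

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)

# Invented parametric Markov chain for a redo-style protection scheme:
#   work --p--> error --q--> redo (back to work)
#   work --(1-p)--> done       error --(1-q)--> silent failure
# x = probability of eventually reaching 'done' from 'work'.
x = sp.symbols("x")
solution = sp.solve(sp.Eq(x, (1 - p) + p * q * x), x)[0]
reach_done = sp.simplify(solution)          # rational function in p and q
print("Pr[eventually done] =", reach_done)  # (1 - p)/(1 - p*q), up to rewriting

# Point-wise evaluation, as done for functions too large to analyse symbolically:
print(float(reach_done.subs({p: 1e-6, q: 0.999})))
```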
60

Certificates and Witnesses for Probabilistic Model Checking

Jantsch, Simon 18 August 2022 (has links)
The ability to provide succinct information about why a property does, or does not, hold in a given system is a key feature in the context of formal verification and model checking. It can be used both to explain the behavior of the system to a user of verification software, and as a tool to aid automated abstraction and synthesis procedures. Counterexample traces, which are executions of the system that do not satisfy the desired specification, are a classical example. Specifications of systems with probabilistic behavior usually require that an event happens with sufficiently high (or low) probability. In general, single executions of the system are not enough to demonstrate that such a specification holds. Rather, standard witnesses in this setting are sets of executions which in sum exceed the required probability bound. In this thesis we consider methods to certify and witness that probabilistic reachability constraints hold in Markov decision processes (MDPs) and probabilistic timed automata (PTA). Probabilistic reachability constraints are threshold conditions on the maximal or minimal probability of reaching a set of target-states in the system. The threshold condition may represent an upper or lower bound and be strict or non-strict. We show that the model-checking problem for each type of constraint can be formulated as a satisfiability problem of a system of linear inequalities. These inequalities correspond closely to the probabilistic transition matrix of the MDP. Solutions of the inequalities are called Farkas certificates for the corresponding property, as they can indeed be used to easily validate that the property holds. By themselves, Farkas certificates do not explain why the corresponding probabilistic reachability constraint holds in the considered MDP. To demonstrate that the maximal reachability probability in an MDP is above a certain threshold, a commonly used notion is that of witnessing subsystems. A subsystem is a witness if the MDP satisfies the lower bound on the optimal reachability probability even if all states not included in the subsystem are made rejecting trap states. Hence, a subsystem is a part of the MDP which by itself satisfies the lower-bounded threshold constraint on the optimal probability of reaching the target-states. We consider witnessing subsystems for lower bounds on both the maximal and minimal reachability probabilities, and show that Farkas certificates and witnessing subsystems are related. More precisely, the support (i.e., the indices with a non-zero entry) of a Farkas certificate induces the state-space of a witnessing subsystem for the corresponding property. Vice versa, given a witnessing subsystem one can compute a Farkas certificate whose support corresponds to the state-space of the witness. This insight yields novel algorithms and heuristics to compute small and minimal witnessing subsystems. To compute minimal witnesses, we propose mixed-integer linear programming formulations whose solutions are Farkas certificates with minimal support. We show that the corresponding decision problem is NP-complete even for acyclic Markov chains, which supports the use of integer programs to solve it. As this approach does not scale well to large instances, we introduce the quotient-sum heuristic, which is based on iteratively solving a sequence of linear programs. The solutions of these linear programs are also Farkas certificates. In an experimental evaluation we show that the quotient-sum heuristic is competitive with state-of-the-art methods.
A large part of the algorithms proposed in this thesis are implemented in the tool SWITSS. We study the complexity of computing minimal witnessing subsystems for probabilistic systems that are similar to trees or paths. Formally, this is captured by the notions of tree width and path width. Our main result here is that the problem of computing minimal witnessing subsystems remains NP-complete even for Markov chains with bounded path width. The hardness proof identifies a new source of combinatorial hardness in the corresponding decision problem. Probabilistic timed automata generalize MDPs by including a set of clocks whose values determine which transitions are enabled. They are widely used to model and verify real-time systems. Due to the continuously-valued clocks, their underlying state-space is inherently uncountable. Hence, the methods that we describe for finite-state MDPs do not carry over directly to PTA. Furthermore, a good notion of witness for PTA should also take into account timing aspects. We define two kinds of subsystems for PTA, one for maximal and one for minimal reachability probabilities, respectively. As for MDPs, a subsystem of a PTA is called a witness for a lower-bounded constraint on the (maximal or minimal) reachability probability, if it itself satisfies this constraint. Then, we show that witnessing subsystems of PTA induce Farkas certificates in certain finite-state quotients of the PTA. Vice versa, Farkas certificates of such a quotient induce witnesses of the PTA. Again, the support of the Farkas certificates corresponds to the states included in the subsystem. These insights are used to describe algorithms for the computation of minimal witnessing subsystems for PTA, with respect to three different notions of size. One of them counts the number of locations in the subsystem, while the other two take into account the possible clock valuations in the subsystem.
Contents: 1 Introduction, 2 Preliminaries, 3 Farkas certificates, 4 New techniques for witnessing subsystems, 5 Probabilistic systems with low tree width, 6 Explications for probabilistic timed automata, 7 Conclusion
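As a small, simplified illustration of the certificate idea (a Markov chain rather than an MDP, invented transition probabilities, and only the lower-bound case), the sketch below searches for a vector z satisfying a system of linear inequalities; any feasible z with z[s0] above the threshold certifies the reachability constraint, and its support hints at a witnessing subsystem:

```python
import numpy as np
from scipy.optimize import linprog

# Toy Markov chain (every listed state can reach the goal; states that cannot
# are dropped beforehand, so mass flowing to them simply leaks out of A):
#   s0 -> s1 (0.5), goal (0.3), fail (0.2)
#   s1 -> s0 (0.4), goal (0.6)
states = ["s0", "s1"]
A = np.array([[0.0, 0.5],        # transitions among the non-goal states
              [0.4, 0.0]])
b = np.array([0.3, 0.6])         # one-step probability of hitting the goal
threshold = 0.7                  # claim: Pr_s0(reach goal) >= 0.7

# A Farkas-style certificate here is any z >= 0 with (I - A) z <= b and
# z[s0] >= threshold: under the assumption above, such a z is a componentwise
# lower bound on the true reachability probabilities.
res = linprog(c=[-1.0, 0.0],                  # maximise z[s0]
              A_ub=np.eye(2) - A, b_ub=b,
              bounds=[(0, None), (0, None)], method="highs")
z = res.x
print("certificate z:", np.round(z, 4))
print("threshold met:", z[0] >= threshold)
# The support of z (its non-zero entries) induces the state space of a
# witnessing subsystem; minimising the support is the harder (MILP) problem.
print("support:", [s for s, zi in zip(states, z) if zi > 1e-9])
```

Minimizing the number of non-zero entries of such a vector, and hence the size of the witness, is the NP-complete problem the thesis addresses with mixed-integer programming and the quotient-sum heuristic.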
