51

Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes

Yousefpour, Negin 16 December 2013 (has links)
This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. This study addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. ANN models were able to learn from the training examples and then generalize the trend to make predictions for the stiffness and strength of stabilized organic soils. A stepwise parameter selection and a sensitivity analysis method were implemented to identify the most relevant factors for the prediction of the stiffness and strength. Also, the variations of the stiffness and strength with respect to each factor were investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were successfully implemented and validated by collecting data for bridges in the Bryan District. ANN models were developed and trained using the database of bridges to predict the foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions. The inverse problem was solved using Bayesian inference formulated with both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of data points dramatically decreased the uncertainties in the diffusion process.
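As an illustration of the Bayesian treatment of the one-dimensional diffusion problem described above, the following sketch infers a single diffusion-rate parameter of a simple decay-type ODE from noisy synthetic observations using a random-walk Metropolis sampler. The forward model, observation times, noise level, and prior are assumptions made for illustration and are not taken from the thesis.

```python
# Illustrative sketch (not the author's code): Bayesian inference of a single
# diffusion-rate parameter k for the ODE du/dt = -k*u, using the analytical
# solution u(t) = u0*exp(-k*t) as the forward model and a random-walk
# Metropolis sampler. Observation times, noise level, and prior are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations from a "true" process with Gaussian noise.
k_true, u0, sigma = 0.7, 1.0, 0.02
t_obs = np.linspace(0.1, 3.0, 15)
y_obs = u0 * np.exp(-k_true * t_obs) + rng.normal(0.0, sigma, t_obs.size)

def log_posterior(k):
    if k <= 0.0:                       # flat prior on k > 0
        return -np.inf
    resid = y_obs - u0 * np.exp(-k * t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis over k.
samples, k = [], 1.0
for _ in range(20000):
    k_prop = k + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_posterior(k_prop) - log_posterior(k):
        k = k_prop
    samples.append(k)

post = np.array(samples[5000:])        # discard burn-in
print(f"posterior mean of k: {post.mean():.3f} +/- {post.std():.3f}")
```

Replacing the analytical forward model with a numerical ODE solver and comparing the two posteriors mirrors, at a toy scale, the kind of numerical-uncertainty comparison described in the abstract.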
52

Analyse de dépendance des programmes à objet en utilisant les modèles probabilistes des entrées / Dependency analysis of object-oriented programs using probabilistic models of inputs

Bouchoucha, Arbi 09 1900 (has links)
La tâche de maintenance ainsi que la compréhension des programmes orientés objet (OO) deviennent de plus en plus coûteuses. L’analyse des liens de dépendance peut être une solution pour faciliter ces tâches d’ingénierie. Cependant, analyser les liens de dépendance est une tâche à la fois importante et difficile. Nous proposons une approche pour l'étude des liens de dépendance internes pour des programmes OO, dans un cadre probabiliste, où les entrées du programme peuvent être modélisées comme un vecteur aléatoire, ou comme une chaîne de Markov. Dans ce cadre, les métriques de couplage deviennent des variables aléatoires dont les distributions de probabilité peuvent être étudiées en utilisant les techniques de simulation Monte-Carlo. Les distributions obtenues constituent un point d’entrée pour comprendre les liens de dépendance internes entre les éléments du programme, ainsi que leur comportement général. Ce travail est valable dans le cas où les valeurs prises par la métrique dépendent des entrées du programme et que ces entrées ne sont pas fixées à priori. Nous illustrons notre approche par deux études de cas. / The task of maintenance and understanding of object-oriented programs is becoming increasingly costly. Dependency analysis can be a solution to facilitate this engineering task. However, dependency analysis is a task both important and difficult. We propose a framework for studying program internal dependencies in a probabilistic setting, where the program inputs are modeled either as a random vector, or as a Markov chain. In that setting, coupling metrics become random variables whose probability distributions can be studied via Monte-Carlo simulation. The obtained distributions provide an entry point for understanding the internal dependencies of program elements, as well as their general behaviour. This framework is appropriate for the (common) situation where the value taken by the metric does depend on the program inputs and where those inputs are not fixed a priori. We provide a concrete illustration with two case studies.
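The following sketch illustrates the general idea: the inputs of a toy object-oriented routine are drawn from a random vector, and a dynamic coupling metric measured on each run becomes a random variable whose distribution is estimated by Monte Carlo simulation. The classes, input model, and metric are assumptions made for illustration, not the thesis's instrumentation.

```python
# Minimal sketch (toy program and metric assumed): the inputs of a small
# object-oriented routine are modeled as a random vector, and a dynamic
# coupling metric -- here, the number of calls crossing from one class to
# another during a run -- is estimated by Monte Carlo simulation.
import numpy as np

class Parser:
    def __init__(self):
        self.calls_to_validator = 0
    def parse(self, tokens, validator):
        for tok in tokens:
            if tok < 0:                       # only some inputs trigger coupling
                self.calls_to_validator += 1
                validator.check(tok)

class Validator:
    def check(self, tok):
        return tok >= -10

rng = np.random.default_rng(1)
metric_samples = []
for _ in range(10000):
    tokens = rng.normal(0.0, 5.0, size=rng.integers(1, 50))  # random input vector
    p, v = Parser(), Validator()
    p.parse(tokens, v)
    metric_samples.append(p.calls_to_validator)

metric_samples = np.array(metric_samples)
print("mean coupling:", metric_samples.mean())
print("95th percentile:", np.percentile(metric_samples, 95))
```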
53

A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select

Baier, Christel, Engel, Benjamin, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus 03 December 2013 (has links) (PDF)
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
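The sketch below is not the PRISM/CTMC model from the paper; it only illustrates, with an assumed workload, how the rarity of read/write overlaps that PWCS relies on can be estimated by simulation.

```python
# Hedged sketch (all rates assumed): a Monte Carlo simulation of one writer and
# several readers accessing a shared object with exponentially distributed
# inter-access times and fixed access durations, estimating the probability
# that a read overlaps a write -- the "data race" event whose rarity PWCS
# relies on.
import numpy as np

rng = np.random.default_rng(2)

def simulate(read_rate=5.0, write_rate=0.5, read_dur=0.01, write_dur=0.02,
             horizon=1000.0):
    # Generate write intervals over the horizon.
    writes, t = [], 0.0
    while t < horizon:
        t += rng.exponential(1.0 / write_rate)
        writes.append((t, t + write_dur))
    # Generate reads and count those overlapping any write interval.
    races, reads, t = 0, 0, 0.0
    while t < horizon:
        t += rng.exponential(1.0 / read_rate)
        reads += 1
        r0, r1 = t, t + read_dur
        if any(w0 < r1 and r0 < w1 for w0, w1 in writes):
            races += 1
    return races / reads

print("estimated per-read race probability:", simulate())
```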
54

Modeling of natural catastrophes / Modelování přírodních katastrof

Zuzák, Jaroslav January 2011 (has links)
This thesis introduces various approaches to natural catastrophe risk assessment in the (re)insurance environment. The main emphasis, and most of the further elaboration, is placed on probabilistic models in comparison with the standard model proposed by Solvency II. The outcomes of natural catastrophe modeling play an important role in the design of proper actuarial models related to catastrophe risk; more specifically, it is shown that they can be understood entirely within a wider actuarial context, namely risk theory. Within the Solvency II framework, probabilistic model outcomes are translated, by means of the proposed decomposition methodology, into a language similar to that of the standard formula, so that results implied by the probabilistic model and by the standard formula can be compared. This enables comparison both of the dependence structure implied by the probabilistic model with the standardized correlations assumed in Solvency II, and of the scenario year loss factors of Solvency II with the damage factors implied by probabilistic models in defined CRESTA zones. The introduced decomposition methodology is illustrated by flood and windstorm model outcomes calculated on exposure data of Czech insurance companies and compared with the respective standard formula parameters and outcomes. Finally, other applications of the proposed decomposition methodology are introduced, such as measuring the diversification effect or blending results calculated by different models, or even by different approaches to natural catastrophe risk assessment.
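A minimal sketch of the kind of comparison described above, with all parameters assumed: a Solvency II standard-formula style square-root aggregation of two catastrophe sub-risks is set against the 99.5% quantile of an aggregate annual loss simulated from a simple correlated loss model. Neither the loss distributions nor the correlation come from the thesis.

```python
# Illustrative sketch (parameters assumed, not those of the thesis): comparing
# a standard-formula style aggregation of two natural-catastrophe sub-risks
# (square-root formula with a fixed correlation) against the 99.5% quantile of
# an aggregate annual loss simulated from a simple probabilistic model with
# correlated flood and windstorm losses.
import numpy as np

rng = np.random.default_rng(3)

# Standard-formula style aggregation of stand-alone charges.
scr_flood, scr_wind, corr = 120.0, 80.0, 0.25
scr_standard = np.sqrt(scr_flood**2 + scr_wind**2
                       + 2 * corr * scr_flood * scr_wind)

# Probabilistic model: lognormal annual losses with Gaussian-copula dependence.
n = 200_000
z = rng.multivariate_normal([0, 0], [[1, corr], [corr, 1]], size=n)
flood = np.exp(3.0 + 1.0 * z[:, 0])     # assumed severity parameters
wind = np.exp(2.5 + 0.9 * z[:, 1])
agg = flood + wind
var_995 = np.quantile(agg, 0.995)
scr_model = var_995 - agg.mean()        # unexpected loss at the 99.5% level

print(f"standard-formula aggregate charge: {scr_standard:.1f}")
print(f"probabilistic-model 99.5% charge : {scr_model:.1f}")
```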
55

Algoritmos de estimação de distribuição para predição ab initio de estruturas de proteínas / Estimation of distribution algorithms for ab initio protein structure prediction

Daniel Rodrigo Ferraz Bonetti 05 March 2015 (has links)
As proteínas são moléculas que desempenham funções essenciais para a vida. Para entender a função de uma proteína é preciso conhecer sua estrutura tridimensional. No entanto, encontrar a estrutura da proteína pode ser um processo caro e demorado, exigindo profissionais altamente qualificados. Neste sentido, métodos computacionais têm sido investigados buscando predizer a estrutura de uma proteína a partir de uma sequência de aminoácidos. Em geral, tais métodos computacionais utilizam conhecimentos de estruturas de proteínas já determinadas por métodos experimentais, para tentar predizer proteínas com estrutura desconhecida. Embora métodos computacionais como, por exemplo, o Rosetta, I-Tasser e Quark tenham apresentado sucesso em suas predições, são apenas capazes de produzir estruturas significativamente semelhantes às já determinadas experimentalmente. Com isso, por utilizarem conhecimento a priori de outras estruturas pode haver certa tendência em suas predições. Buscando elaborar um algoritmo eficiente para Predição de Estruturas de Proteínas livre de tendência foi desenvolvido um Algoritmo de Estimação de Distribuição (EDA) específico para esse problema, com modelagens full-atom e algoritmos ab initio. O fato do algoritmo proposto ser ab initio é mais interessante para aplicação envolvendo proteínas com baixa similaridade, com relação às estruturas já conhecidas. Três tipos de modelos probabilísticos foram desenvolvidos: univariado, bivariado e hierárquico. O univariado trata o aspecto de multi-modalidade de uma variável, o bivariado trata os ângulos diedrais (Φ Ψ) de um mesmo aminoácido como variáveis correlacionadas. O hierárquico divide o problema em subproblemas e tenta tratá-los separadamente. Os resultados desta pesquisa mostraram que é possível obter melhores resultados quando considerada a relação bivariada (Φ Ψ). O hierárquico também mostrou melhorias nos resultados obtidos, principalmente para proteínas com mais de 50 resíduos. Além disso, foi realizada uma comparação com algumas heurísticas da literatura, como: Busca Aleatória, Monte Carlo, Algoritmo Genético e Evolução Diferencial. Os resultados mostraram que mesmo uma metaheurística pouco eficiente, como a Busca Aleatória, pode encontrar a solução correta, porém utilizando muito conhecimento a priori (predição que pode ser tendenciosa). Por outro lado, o algoritmo proposto neste trabalho foi capaz de obter a estrutura da proteína esperada sem utilizar conhecimento a priori, caracterizando uma predição puramente ab initio (livre de tendência). / Proteins are molecules that perform critical roles in living organisms and are essential for life. To understand the function of a protein, its 3D structure must be known. However, determining a protein structure is an expensive and time-consuming task, requiring highly skilled professionals. Aiming to overcome this limitation, computational methods for Protein Structure Prediction (PSP) have been investigated in order to predict the protein structure from its amino acid sequence. Most computational methods rely on knowledge of structures already determined by experimental methods in order to predict an unknown protein. Although computational methods such as Rosetta, I-Tasser and Quark have shown success in their predictions, they are only capable of predicting structures quite similar to already known, experimentally determined proteins. The use of such prior knowledge in the predictions of Rosetta, I-Tasser and Quark may lead to biased predictions. In order to develop a computational algorithm for PSP free of this bias, we developed an Estimation of Distribution Algorithm (EDA) applied to PSP with a full-atom representation and an ab initio model. A computational algorithm with an ab initio model is mainly of interest when dealing with proteins that have low similarity to known structures. In this work, we developed an Estimation of Distribution Algorithm with three probabilistic models: univariate, bivariate and hierarchical. The univariate model deals with the multi-modality of the distribution of a single variable. The bivariate model treats the dihedral angles (Φ Ψ) within an amino acid as correlated variables. The hierarchical approach splits the original problem into subproblems and attempts to treat them separately. The experiments show that it is indeed possible to achieve better results when modeling the (Φ Ψ) correlation. The hierarchical model also showed that it is possible to improve the quality of the results, mainly for proteins with more than 50 residues. In addition, we compared the proposed techniques with other metaheuristics from the literature, such as Random Walk, Monte Carlo, Genetic Algorithm and Differential Evolution. The results show that even a less efficient metaheuristic such as Random Walk managed to find the correct structure, but only by using a great deal of prior knowledge (a prediction that may be biased). On the other hand, our proposed EDA for PSP was able to find the correct structure with no prior knowledge at all, so the prediction can be called purely ab initio (bias-free).
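A minimal sketch of the bivariate idea, with a toy energy function instead of the thesis's full-atom model: for each residue the EDA fits a joint Gaussian over the selected individuals' (Φ, Ψ) pair and samples the next population from it, preserving the correlation between the two dihedral angles. Population sizes and the target region are assumed for illustration.

```python
# Minimal sketch of a bivariate EDA for dihedral angles (toy energy function,
# not a full-atom model): fit one joint Gaussian over (phi, psi) per residue
# from the selected individuals, then sample the next population from it.
import numpy as np

rng = np.random.default_rng(4)
n_res, pop_size, n_sel, n_gen = 10, 200, 50, 100

def energy(angles):
    # Toy target: favor a helix-like region near (phi, psi) = (-60, -45) deg.
    phi, psi = angles[..., 0], angles[..., 1]
    return np.sum((phi + 60.0) ** 2 + (psi + 45.0) ** 2, axis=-1)

# Population: shape (pop_size, n_res, 2) of dihedral angles in degrees.
pop = rng.uniform(-180.0, 180.0, size=(pop_size, n_res, 2))
for _ in range(n_gen):
    idx = np.argsort(energy(pop))[:n_sel]          # truncation selection
    elite = pop[idx]
    new_pop = np.empty_like(pop)
    for r in range(n_res):                         # one bivariate model per residue
        mu = elite[:, r, :].mean(axis=0)
        cov = np.cov(elite[:, r, :].T) + 1e-3 * np.eye(2)
        new_pop[:, r, :] = rng.multivariate_normal(mu, cov, size=pop_size)
    new_pop[0] = elite[0]                          # elitism
    pop = new_pop

best = pop[np.argmin(energy(pop))]
print("best energy:", energy(best[None])[0])
print("first residue (phi, psi):", best[0])
```

Dropping the covariance and sampling Φ and Ψ independently would give the univariate variant; splitting the residue range into segments optimized separately corresponds loosely to the hierarchical idea.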
56

Lokalizace mobilního robota pomocí kamery / Mobile Robot Localization Using Camera

Vaverka, Filip January 2015 (has links)
This thesis describes the design and implementation of an approach to mobile robot localization. The proposed method is based purely on images taken by a monocular camera. The described solution handles localization as an association problem and therefore falls into the category of topological localization methods. The method is based on a generative probabilistic model of the environment's appearance. The proposed solution is capable of eliminating some of the difficulties that are common in traditional localization approaches.
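The following sketch illustrates topological localization with a generative appearance model, assuming toy image descriptors: the posterior over a discrete set of places is updated by a Bayes filter from the current camera descriptor and a simple transition model. None of the parameters or descriptors come from the thesis.

```python
# Hedged sketch (toy descriptors and parameters assumed): topological
# localization as Bayesian filtering over a discrete set of places. Each place
# has a generative appearance model (an isotropic Gaussian over an image
# descriptor); the belief is updated from the current observation.
import numpy as np

rng = np.random.default_rng(5)
n_places, dim = 5, 8

# Generative appearance model: one mean descriptor per place.
place_means = rng.normal(0.0, 1.0, size=(n_places, dim))
sigma = 0.3

def likelihood(descriptor):
    d2 = np.sum((place_means - descriptor) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma**2)

# Transition model: the robot tends to stay or move to an adjacent place.
T = np.full((n_places, n_places), 0.02)
for i in range(n_places):
    T[i, i] = 0.6
    T[i, (i + 1) % n_places] = 0.3
T /= T.sum(axis=1, keepdims=True)

belief = np.full(n_places, 1.0 / n_places)
true_path = [0, 1, 1, 2, 3]
for place in true_path:
    obs = place_means[place] + rng.normal(0.0, sigma, dim)  # simulated camera descriptor
    belief = T.T @ belief                 # predict
    belief *= likelihood(obs)             # update
    belief /= belief.sum()
    print(f"true place {place}, MAP estimate {int(np.argmax(belief))}")
```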
57

Text-Based Information Retrieval Using Relevance Feedback

Krishnan, Sharenya January 2011 (has links)
Europeana, a freely accessible digital library intended to make Europe's cultural and scientific heritage available to the public, was founded by the European Commission in 2008. The goal was to deliver semantically enriched digital content with multilingual access to it. Even though they managed to increase the amount of content, they gradually faced the problem of retrieving information held in unstructured form. To complement the Europeana portal services, ASSETS (Advanced Search Service and Enhanced Technological Solutions) was introduced with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process, along with the implementation of the chosen relevance feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with the analysis made using relevance feedback, a discussion of the implemented model, and an assessment of the future use of this model, both as a continuation of my work and within ASSETS.
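As an illustration of relevance feedback with automatic query expansion, the sketch below applies a Rocchio-style update over a toy term-frequency vector space; the corpus, relevance judgments, and weights are assumptions, not the ASSETS implementation.

```python
# Minimal sketch (toy corpus and judgments assumed): Rocchio-style relevance
# feedback. The query vector is moved toward documents judged relevant and
# away from non-relevant ones, and the highest-weighted new terms are used
# for automatic query expansion.
import numpy as np

docs = ["europeana digital library cultural heritage",
        "digital heritage archive multilingual access",
        "football match results and league tables",
        "semantic search in cultural collections"]
query = "cultural heritage search"

vocab = sorted({w for d in docs + [query] for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

def tf_vector(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        v[idx[w]] += 1.0
    return v

D = np.array([tf_vector(d) for d in docs])
q = tf_vector(query)

relevant, non_relevant = [0, 3], [2]          # assumed user judgments
alpha, beta, gamma = 1.0, 0.75, 0.15
q_new = (alpha * q
         + beta * D[relevant].mean(axis=0)
         - gamma * D[non_relevant].mean(axis=0))
q_new = np.clip(q_new, 0.0, None)             # drop negative term weights

expansion = [vocab[i] for i in np.argsort(-q_new)
             if q[i] == 0 and q_new[i] > 0][:3]
print("expansion terms:", expansion)
```

The expansion terms are simply the highest-weighted terms that were absent from the original query; in a real system they would be appended to the query and the search rerun.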
58

Quantitative Analysis of Configurable and Reconfigurable Systems

Dubslaff, Clemens 21 March 2022 (has links)
The often huge configuration spaces of modern software systems render the detection, prediction, and explanation of defects and inadvertent behaviors challenging tasks. Besides configurability, a further source of complexity is the integration of cyber-physical systems (CPSs). Behaviors in CPSs depend on quantitative aspects such as throughput, energy consumption, and probability of failure, which all play a central role in new technologies like 5G networks, the tactile internet, autonomous driving, and the internet of things. The manifold environmental influences and human interactions within CPSs might also trigger reconfigurations, e.g., to ensure quality of service through adaptivity or to fulfill users' wishes by adjusting program settings and performing software updates. Such reconfigurations add yet another source of complexity to the quest of modeling and analyzing modern software systems. The main contribution of this thesis is a formal compositional modeling and analysis framework for systems that involve configurability, adaptivity through reconfiguration, and quantitative aspects. Existing modeling approaches for configurable systems are commonly divided into annotative and compositional approaches, both having complementary strengths and weaknesses. It has been a well-known open problem in the configurable systems community whether there is a hybrid approach that combines the strengths of both specification approaches. We provide a formal solution to this problem, prove its correctness, and show practical applicability to actual configurable systems by introducing a formal analysis framework and its implementation. While existing family-based analysis approaches for configurable systems have mainly focused on software systems, we show the effectiveness of such approaches in the hardware domain as well. To explicate the impact of configuration options on analysis results, we introduce the notion of feature causality, which is inspired by the seminal counterfactual definition of causality by Halpern and Pearl. By means of several experimental studies, including a velocity controller of an aircraft system that already required new techniques for its analysis, we show how our notion of causality facilitates identifying root causes, estimating the effects of features, and detecting feature interactions. Contents: 1 Introduction; 2 Foundations; 3 Probabilistic Configurable Systems; 4 Analysis and Synthesis in Reconfigurable Systems; 5 Experimental Studies; 6 Causality in Configurable Systems; 7 Conclusion
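A hedged sketch of the counterfactual intuition behind feature causality, simplified with respect to the Halpern-Pearl style definition used in the thesis: a configuration option counts as a cause of an undesired analysis outcome if flipping only that option, with the rest of the configuration held fixed, removes the outcome. The feature set and the analysis result used here are assumptions.

```python
# Hedged sketch (toy feature model and analysis result assumed): a simple
# counterfactual check of whether a single configuration option is a cause of
# an undesired outcome -- flipping the option in an otherwise identical
# configuration makes the outcome disappear.
from itertools import product

features = ["encryption", "compression", "cache"]

def violates(cfg):
    # Assumed analysis result: the property is violated whenever compression
    # is enabled while the cache is disabled.
    return cfg["compression"] and not cfg["cache"]

def is_counterfactual_cause(feature, cfg):
    if not violates(cfg):
        return False
    flipped = dict(cfg, **{feature: not cfg[feature]})
    return not violates(flipped)

for values in product([False, True], repeat=len(features)):
    cfg = dict(zip(features, values))
    if violates(cfg):
        causes = [f for f in features if is_counterfactual_cause(f, cfg)]
        print(cfg, "-> causes:", causes)
```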
59

Formal Configuration of Fault-Tolerant Systems

Herrmann, Linda 28 May 2019 (has links)
Bit flips are known to be a source of strange system behavior, failures, and crashes. They can cause dramatic financial loss, security breaches, or even harm human life. Caused by energized particles arising from, e.g., cosmic rays or heat, they are hardly avoidable. Due to transistor sizes becoming smaller and smaller, modern hardware becomes more and more prone to bit flips. This has generated high scientific interest, and many techniques to make systems more resilient against bit flips are being developed. Fault-tolerance techniques are techniques that detect and react to bit flips or their effects. Before using these techniques, they typically need to be configured for the particular system they shall protect, the degree of resilience that shall be achieved, and the environment. State-of-the-art configuration approaches have a high risk of being imprecise, of being affected by undesired side effects, and of yielding questionable resilience measures. In this thesis we encourage the use of formal methods for resiliency configuration, point out advantages, and investigate difficulties. As examples, we investigate two systems that are equipped with fault-tolerance techniques, and we apply parametric variants of probabilistic model checking to obtain optimal configurations for pre-defined resilience criteria. Probabilistic model checking is an automated formal method that operates on Markov models, i.e., state-based models with probabilistic transitions, where costs or rewards can be assigned to states and transitions. Probabilistic model checking can be used to compute, e.g., the probability of having a failure, the conditional probability of detecting an error in case of a bit-flip occurrence, or the overhead that arises due to error detection and correction. Parametric variants of probabilistic model checking allow parameters in the transition probabilities and in the costs and rewards. Instead of computing values for probabilities and overhead, parametric variants compute rational functions. These functions can then be analyzed for optimality. The considered fault-tolerant systems are inspired by the work of project partners. The first system is an inter-process communication protocol as it is used in the Fiasco.OC microkernel. The communication structures provided by the kernel are protected against bit flips by a fault-tolerance technique. The second system is inspired by the redo-based fault-tolerance technique HAFT. This technique protects an application against bit flips by partitioning the application's instruction flow into transactions, adding redundancy, and redoing single transactions in case of error detection. Driven by these examples, we study challenges in using probabilistic model checking for fault-tolerance configuration and present solutions. We show that small transition probabilities, as they arise in error models, can be a cause of previously known accuracy issues when using numeric solvers in probabilistic model checking. We argue that the use of non-iterative methods is an acceptable alternative. We discuss the usability of the rational functions for finding optimal configurations and show that for relatively short rational functions the use of mathematical methods is appropriate. The redo-based fault-tolerance model suffers from the well-known state-explosion problem. We present a new technique, counter-based factorization, that tackles this problem for system models that do not scale because of a counter, as is the case for this fault-tolerance model. This technique utilizes the chain-like structure that arises from the counter, splits the model into several parts, and computes local characteristics (in terms of rational functions) for these parts. These local characteristics can then be combined to retrieve global resiliency and overhead measures. The rational functions retrieved for the redo-based fault-tolerance model are huge: for small model instances they already exceed one gigabyte in size. We therefore cannot apply precise mathematical methods to these functions. Instead, we use the short, matrix-based representation that arises from the factorization to evaluate the functions point-wise. Using this approach, we systematically explore the design space of the redo-based fault-tolerance model and retrieve sweet-spot configurations.
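The sketch below illustrates, under assumed probabilities and structure, the flavor of parametric analysis described above: per-stage characteristics of a redo-based transaction are derived as rational functions in an error probability p and a detection probability d, combined along a chain of transactions, and finally evaluated point-wise for a concrete configuration. It is not the thesis's kernel or redo model.

```python
# Illustrative sketch (probabilities and structure assumed): a chain of N
# transactions, where each redo-based stage either succeeds, detects a bit
# flip and redoes the transaction, or fails silently. Per-stage absorption
# probabilities are rational functions in p (error) and d (detection), which
# are combined along the chain and then evaluated point-wise.
import sympy as sp

p, d = sp.symbols("p d", positive=True)

# One stage: redo on detected errors until the transaction either succeeds or
# fails undetected. Summing the geometric series over redos gives:
stage_fail = (p * (1 - d)) / (1 - p * d)
stage_ok = sp.simplify(1 - stage_fail)        # equals (1 - p) / (1 - p*d)

# Combine N stages along the counter: all must complete without silent failure.
N = 10
chain_fail = sp.simplify(1 - stage_ok ** N)

print("per-stage failure probability:", sp.simplify(stage_fail))
print("chain failure probability    :", chain_fail)

# Point-wise evaluation of the rational function for a concrete configuration.
print("chain failure at p=1e-4, d=0.99:",
      float(chain_fail.subs({p: 1e-4, d: 0.99})))
```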
60

A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select

Baier, Christel, Engel, Benjamin, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus January 2013 (has links)
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme suggested by Nicholas Mc Guire which avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability for data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on the formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained by the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. Moreover, the analysis showed several other quantitative properties of the PWCS protocol.
