51

Text-Based Information Retrieval Using Relevance Feedback

Krishnan, Sharenya January 2011 (has links)
Europeana, a freely accessible digital library intended to make Europe's cultural and scientific heritage available to the public, was founded by the European Commission in 2008. Its goal was to deliver semantically enriched digital content with multilingual access. Although the amount of content grew steadily, retrieving information from this largely unstructured collection became a problem. To complement the Europeana portal services, ASSETS (Advanced Search Service and Enhanced Technological Solutions) was introduced, with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance-feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process, together with the implementation of the chosen relevance-feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with an analysis of the results obtained using relevance feedback, a discussion of the implemented model, and an assessment of the model's future use, both as a continuation of my work and within ASSETS.
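The abstract does not say which relevance-feedback model was implemented, but a classic technique for generating automatic query expansion in the vector-space model is the Rocchio algorithm. The following Python sketch is purely illustrative (the vocabulary, weights, and parameter values are made up):

```python
import numpy as np

def rocchio_expand(query_vec, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: move the query vector toward the
    centroid of relevant documents and away from non-relevant ones."""
    q = alpha * query_vec
    if len(relevant_docs) > 0:
        q = q + beta * np.mean(relevant_docs, axis=0)
    if len(nonrelevant_docs) > 0:
        q = q - gamma * np.mean(nonrelevant_docs, axis=0)
    return np.maximum(q, 0.0)  # negative term weights are usually clipped

# Toy example over a 5-term vocabulary (tf-idf weights are made up):
query = np.array([1.0, 0.0, 0.5, 0.0, 0.0])
rel = np.array([[0.9, 0.1, 0.6, 0.0, 0.0],
                [0.8, 0.0, 0.7, 0.2, 0.0]])
nonrel = np.array([[0.0, 0.9, 0.0, 0.8, 0.3]])
print(rocchio_expand(query, rel, nonrel))
```

Terms absent from the original query but common in the relevant documents receive positive weight, which is exactly the query-expansion effect the abstract describes.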
52

Quantitative Analysis of Configurable and Reconfigurable Systems

Dubslaff, Clemens 21 March 2022 (has links)
The often huge configuration spaces of modern software systems render the detection, prediction, and explanation of defects and inadvertent behaviors challenging tasks. Besides configurability, a further source of complexity is the integration of cyber-physical systems (CPSs). Behaviors in CPSs depend on quantitative aspects such as throughput, energy consumption, and probability of failure, which all play a central role in new technologies like 5G networks, tactile internet, autonomous driving, and the internet of things. The manifold environmental influences and human interactions within CPSs might also trigger reconfigurations, e.g., to ensure quality of service through adaptivity or to fulfill users' wishes by adjusting program settings and performing software updates. Such reconfigurations add yet another source of complexity to the quest of modeling and analyzing modern software systems. The main contribution of this thesis is a formal compositional modeling and analysis framework for systems that involve configurability, adaptivity through reconfiguration, and quantitative aspects. Existing modeling approaches for configurable systems are commonly divided into annotative and compositional approaches, which have complementary strengths and weaknesses. It has been a well-known open problem in the configurable systems community whether there is a hybrid approach that combines the strengths of both specification approaches. We provide a formal solution to this problem, prove its correctness, and show practical applicability to actual configurable systems by introducing a formal analysis framework and its implementation. While existing family-based analysis approaches for configurable systems have mainly focused on software systems, we show the effectiveness of such approaches in the hardware domain as well. To explicate the impact of configuration options on analysis results, we introduce the notion of feature causality, inspired by the seminal counterfactual definition of causality by Halpern and Pearl. By means of several experimental studies, including a velocity controller of an aircraft system that already required new techniques for its analysis, we show how our notion of causality facilitates identifying root causes, estimating the effects of features, and detecting feature interactions. Contents: 1 Introduction; 2 Foundations; 3 Probabilistic Configurable Systems; 4 Analysis and Synthesis in Reconfigurable Systems; 5 Experimental Studies; 6 Causality in Configurable Systems; 7 Conclusion
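As a loose illustration of the counterfactual idea behind feature causality (the thesis's formal definition, following Halpern and Pearl, is considerably more refined), the following Python sketch brute-forces a toy configuration space; the features and the defect oracle are invented for the example:

```python
from itertools import product

def defect(cfg):
    """Hypothetical analysis oracle: True if the configuration is defective.
    Here the made-up features 'cache' and 'turbo' interact badly."""
    return cfg["cache"] and cfg["turbo"]

features = ["cache", "turbo", "logging"]

def counterfactual_causes(cfg):
    """Naive counterfactual check (not the thesis's full definition):
    a feature's value is a cause of the defect if flipping that single
    feature makes the defect disappear."""
    causes = []
    for f in features:
        flipped = dict(cfg, **{f: not cfg[f]})
        if defect(cfg) and not defect(flipped):
            causes.append(f)
    return causes

for values in product([False, True], repeat=len(features)):
    cfg = dict(zip(features, values))
    if defect(cfg):
        print(cfg, "->", counterfactual_causes(cfg))
```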
53

Formal Configuration of Fault-Tolerant Systems

Herrmann, Linda 28 May 2019 (has links)
Bit flips are known to be a source of strange system behavior, failures, and crashes. They can cause dramatic financial loss, security breaches, or even harm human life. Caused by energized particles arising from, e.g., cosmic rays or heat, they are hardly avoidable. As transistor sizes become smaller and smaller, modern hardware becomes more and more prone to bit flips. This yields high scientific interest, and many techniques to make systems more resilient against bit flips have been developed. Fault-tolerance techniques are techniques that detect and react to bit flips or their effects. Before such techniques can be used, they typically need to be configured for the particular system they shall protect, the grade of resilience that shall be achieved, and the environment. State-of-the-art configuration approaches run a high risk of being imprecise, of being affected by undesired side effects, and of yielding questionable resilience measures. In this thesis we encourage the use of formal methods for resilience configuration, point out advantages, and investigate difficulties. As examples, we investigate two systems that are equipped with fault-tolerance techniques, and we apply parametric variants of probabilistic model checking to obtain optimal configurations for predefined resilience criteria. Probabilistic model checking is an automated formal method that operates on Markov models, i.e., state-based models with probabilistic transitions, where costs or rewards can be assigned to states and transitions. Probabilistic model checking can be used to compute, e.g., the probability of a failure, the conditional probability of detecting an error in case a bit flip occurs, or the overhead that arises from error detection and correction. Parametric variants of probabilistic model checking allow parameters in the transition probabilities and in the costs and rewards. Instead of computing values for probabilities and overhead, parametric variants compute rational functions, which can then be analyzed for optimality. The considered fault-tolerant systems are inspired by the work of project partners. The first system is an inter-process communication protocol as used in the Fiasco.OC microkernel; the communication structures provided by the kernel are protected against bit flips by a fault-tolerance technique. The second system is inspired by the redo-based fault-tolerance technique HAFT. This technique protects an application against bit flips by partitioning the application's instruction flow into transactions, adding redundancy, and redoing single transactions in case an error is detected. Driven by these examples, we study the challenges of using probabilistic model checking for fault-tolerance configuration and present solutions. We show that small transition probabilities, as they arise in error models, can be a cause of previously known accuracy issues when numeric solvers are used in probabilistic model checking, and we argue that the use of non-iterative methods is an acceptable alternative. We discuss the usability of the rational functions for finding optimal configurations and show that for relatively short rational functions the use of exact mathematical methods is appropriate. The redo-based fault-tolerance model suffers from the well-known state-explosion problem. We present a new technique, counter-based factorization, that tackles this problem for system models that do not scale because of a counter, as is the case for this fault-tolerance model. This technique exploits the chain-like structure that arises from the counter, splits the model into several parts, and computes local characteristics (in terms of rational functions) for these parts. These local characteristics can then be combined to retrieve global resilience and overhead measures. The rational functions obtained for the redo-based fault-tolerance model are huge: even for small model instances they exceed one gigabyte in size. We therefore cannot apply exact mathematical methods to these functions. Instead, we use the short, matrix-based representation that arises from factorization to evaluate the functions point-wise. Using this approach, we systematically explore the design space of the redo-based fault-tolerance model and retrieve sweet-spot configurations.
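A minimal sketch of the parametric idea, using sympy on a hypothetical four-state error model (the thesis's models and tools differ): reachability probabilities come out as rational functions of the parameters by solving a linear equation system symbolically.

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)

# Hypothetical parametric Markov chain:
#   work --q--> error,  work --(1-q)--> done (absorbing, success)
#   error --p--> work (error detected, retry),  error --(1-p)--> fail (absorbing)
# Unknowns: probability of eventually reaching 'fail' from each transient state.
x_work, x_error = sp.symbols("x_work x_error")

eqs = [
    sp.Eq(x_work, q * x_error + (1 - q) * 0),
    sp.Eq(x_error, p * x_work + (1 - p) * 1),
]
sol = sp.solve(eqs, [x_work, x_error])
fail_prob = sp.simplify(sol[x_work])
print(fail_prob)  # equals q*(1-p)/(1-p*q), a rational function of p and q
print(fail_prob.subs({p: sp.Rational(9, 10), q: sp.Rational(1, 100)}))
```

The rational function can then be analyzed for optimality over the parameter space, which is the step the thesis examines at scale.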
54

A Probabilistic Quantitative Analysis of Probabilistic-Write/Copy-Select

Baier, Christel, Engel, Benjamin, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus January 2013 (has links)
Probabilistic-Write/Copy-Select (PWCS) is a novel synchronization scheme, suggested by Nicholas Mc Guire, that avoids expensive atomic operations for synchronizing access to shared objects. Instead, PWCS makes inconsistencies detectable and recoverable. It builds on the assumption that, for typical workloads, the probability of data races is very small. Mc Guire describes PWCS for multiple readers but only one writer of a shared data structure. In this paper, we report on a formal analysis of the PWCS protocol using a continuous-time Markov chain model and probabilistic model checking techniques. Besides the original PWCS protocol, we also considered a variant with multiple writers. The results were obtained with the model checker PRISM and served to identify scenarios in which the use of the PWCS protocol is justified by guarantees on the probability of data races. The analysis also revealed several other quantitative properties of the PWCS protocol.
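The paper's analysis is done with a CTMC model in PRISM; as a rough stand-in, the following Monte Carlo sketch estimates the probability that a read overlaps a concurrent write under exponential inter-arrival assumptions (all rates and durations are invented):

```python
import random

def race_probability(read_rate=50.0, write_rate=1.0,
                     read_dur=1e-3, write_dur=2e-3,
                     horizon=200.0, seed=0):
    """Crude Monte Carlo stand-in for the paper's CTMC analysis:
    estimate the fraction of reads that overlap some write interval,
    with exponentially distributed inter-arrival times (rates made up)."""
    rng = random.Random(seed)
    # Generate write intervals over the time horizon.
    writes, t = [], 0.0
    while t < horizon:
        t += rng.expovariate(write_rate)
        writes.append((t, t + write_dur))
    # Generate reads and count overlaps with any write interval.
    reads = races = 0
    t = 0.0
    while t < horizon:
        t += rng.expovariate(read_rate)
        reads += 1
        if any(ws < t + read_dur and t < we for ws, we in writes):
            races += 1
    return races / reads

print(f"estimated race probability: {race_probability():.4f}")
```

Unlike model checking, such simulation gives only estimates, not guarantees, which is precisely why the paper uses PRISM instead.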
55

Development of a probabilistic classification model for mapping snow cover in Hydro-Québec watersheds using passive microwave data

Teasdale, Mylène 09 1900 (has links)
Every day, decisions must be made about the amount of hydroelectricity to produce in Quebec. These decisions rest on forecasts of water inflow to the watersheds, produced with hydrological models. These models take several factors into account, notably the presence or absence of snow on the ground. This information is critical during the spring melt for anticipating inflows, since between 30 and 40% of the flood volume can come from melting of the snow cover. Forecasters therefore need to track the evolution of the snow cover daily in order to adjust their forecasts to the melt. Methods for mapping snow on the ground are currently in use at the Institut de recherche d'Hydro-Québec (IREQ), but they have some shortcomings. The main goal of this master's thesis is to use passive microwave remote sensing data (the vertically polarized brightness temperature gradient, GTV) within a statistical approach to produce snow/no-snow maps and to quantify the classification uncertainty. To do so, the GTV was used to compute a daily probability of snow via mixtures of normal distributions in a Bayesian framework. These probabilities were then modelled using linear regression on the logits, and snow cover maps were produced. The models' results were validated qualitatively and quantitatively, and their integration at Hydro-Québec was discussed.
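A minimal sketch of the Bayesian classification step, assuming two normal class-conditional densities for the GTV (all parameters are invented; the thesis estimates mixtures of normals from data and then regresses on the logits):

```python
from scipy.stats import norm

def snow_posterior(gtv, prior_snow=0.5,
                   mu_snow=-0.4, sd_snow=0.15,
                   mu_bare=0.1, sd_bare=0.2):
    """Posterior P(snow | GTV) by Bayes' rule with normal class-conditional
    densities. All parameters are illustrative; the thesis estimates them
    from data via mixtures of normal distributions."""
    like_snow = norm.pdf(gtv, mu_snow, sd_snow)
    like_bare = norm.pdf(gtv, mu_bare, sd_bare)
    num = prior_snow * like_snow
    return num / (num + (1 - prior_snow) * like_bare)

for gtv in (-0.5, -0.2, 0.0, 0.2):
    print(f"GTV={gtv:+.1f}  P(snow)={snow_posterior(gtv):.3f}")
```

The posterior itself also quantifies the classification uncertainty: values near 0.5 flag pixels where the snow/no-snow decision is least reliable.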
56

Modelling and verification for DNA nanotechnology

Dannenberg, Frits Gerrit Willem January 2016 (has links)
DNA nanotechnology is a rapidly developing field that creates nanoscale devices from DNA, enabling novel interfaces with biological material. Their therapeutic use is envisioned, and applications in other areas of basic science have already been found. These devices function under physiological conditions and, owing to their molecular scale, are subject to thermal fluctuations during both preparation and operation. Troubleshooting a failed device is often difficult, and we develop models to characterise two separate devices: DNA walkers and DNA origami. Our framework is that of continuous-time Markov chains, abstracting away much of the underlying physics. The resulting models are coarse but enable analysis of system-level performance, such as ‘the molecular computation eventually returns the correct answer with high probability’. We examine the applicability of probabilistic model checking for providing guarantees on the behaviour of nanoscale devices, and to this end we develop novel model checking methodology. We model a DNA walker that autonomously navigates a series of junctions, and we derive design principles that increase the probability of correct computational output. We also develop a novel parameter synthesis method for continuous-time Markov chains, for which the synthesised models guarantee a predetermined level of performance. Finally, we develop a novel discrete stochastic assembly model of DNA origami from first principles. DNA origami is a widespread method for creating nanoscale structures from DNA. Our model qualitatively reproduces experimentally observed behaviour, and using the model we are able to rationally steer the folding pathway of a novel polymorphic DNA origami tile, controlling its eventual shape.
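As a minimal illustration of the CTMC framework (the three-state abstraction and the rates are invented, far coarser than the thesis's walker models), transient state distributions follow from the matrix exponential of the generator:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state CTMC for one walker junction:
# state 0 = at junction, 1 = correct branch (absorbing), 2 = wrong branch (absorbing)
k_correct, k_wrong = 1.0, 0.2   # made-up stepping rates (1/s)
Q = np.array([[-(k_correct + k_wrong), k_correct, k_wrong],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

p0 = np.array([1.0, 0.0, 0.0])  # walker starts at the junction
for t in (0.5, 2.0, 10.0):
    pt = p0 @ expm(Q * t)       # transient distribution at time t
    print(f"t={t:5.1f}s  P(correct)={pt[1]:.3f}  P(wrong)={pt[2]:.3f}")
```

In the long run the correct branch is taken with probability k_correct/(k_correct + k_wrong); probabilistic model checking automates exactly this kind of quantitative question on much larger chains.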
57

Nonlinear acoustic wave propagation in complex media: application to propagation over urban environments

Leissing, Thomas 30 November 2009 (has links)
This research aims at developing and validating a numerical model for the study of blast wave propagation over large distances and over urban environments. The approach uses the Nonlinear Parabolic Equation (NPE) model as a basis. The model is then extended to handle various features of outdoor sound propagation (non-flat ground topographies, porous ground layers, etc.). The NPE is solved using the finite-difference method and is shown to be in good agreement with other numerical methods. This deterministic model is then used as a basis for the construction of a stochastic model of sound propagation over urban environments. Information theory and the maximum entropy principle enable the construction of a probabilistic model of uncertainties, which takes the variability of the urban environment into account within the NPE model. Reference results are obtained with an exact numerical method and allow us to validate the theoretical developments and the approach used.
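The NPE itself is a one-way nonlinear parabolic model whose discretisation is more involved than can be shown here; as a minimal illustration of the explicit finite-difference approach, the following sketch steps a related 1D nonlinear model equation (viscous Burgers) with invented grid parameters:

```python
import numpy as np

# Minimal illustration of explicit finite differences on a nonlinear model
# equation (viscous Burgers, u_t + u*u_x = nu*u_xx) -- a common stand-in for
# nonlinear wave steepening; the actual NPE used in the thesis is more involved.
nx, nt = 200, 400
dx, dt, nu = 0.05, 0.005, 0.05
x = np.arange(nx) * dx
u = np.exp(-((x - 2.0) ** 2))          # initial Gaussian pulse

for _ in range(nt):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # centered u_x
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # centered u_xx
    u = u + dt * (-u * ux + nu * uxx)  # explicit Euler step (periodic domain)

print("peak amplitude after propagation:", u.max())
```

The nonlinear term makes the pulse steepen as it travels, which is the shock-formation mechanism that distinguishes blast wave propagation from linear acoustics.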
58

Tempo detector based on a neural network

Suchánek, Tomáš January 2021 (has links)
This Master’s thesis deals with beat-tracking systems whose functionality is based on neural networks. It describes the structure of these systems and how the signal is processed in their individual blocks. Emphasis is then placed on recurrent and temporal convolutional networks, which by their nature can effectively detect tempo and beats in audio recordings. The selected methods, network architectures, and their modifications are implemented within a comprehensive detection system, which is then tested and evaluated through cross-validation on a genre-diverse dataset. The results show that the system with the proposed temporal convolutional network architecture produces results comparable to those reported in the literature. On the SMC dataset it proved the most successful, whereas on the other datasets its accuracy fell slightly below that of state-of-the-art systems. In addition, the proposed network retains low computational complexity despite an increased number of internal parameters.
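A hedged PyTorch sketch of one dilated temporal convolutional block of the kind such beat trackers stack (the channel count, kernel size, and dilation are invented; the thesis's architecture differs in detail):

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated temporal convolutional block of the kind used in
    TCN-based beat trackers (dimensions are illustrative)."""
    def __init__(self, channels=16, kernel_size=5, dilation=4):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2  # keep sequence length
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.act = nn.ELU()
        self.drop = nn.Dropout(0.1)

    def forward(self, x):            # x: (batch, channels, frames)
        return self.drop(self.act(self.conv(x))) + x  # residual connection

# A spectrogram-like dummy input: batch of 1, 16 bands, 3000 frames.
x = torch.randn(1, 16, 3000)
block = TCNBlock()
print(block(x).shape)  # torch.Size([1, 16, 3000])
```

Stacking such blocks with growing dilations widens the receptive field exponentially while the parameter count grows only linearly, which is why TCNs can stay lightweight, as the abstract notes.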
59

Evolutionary Synthesis of Analog Electronic Circuits Using EDA Algorithms

Slezák, Josef January 2014 (has links)
This doctoral thesis focuses on the design of analog electronic circuits using estimation-of-distribution algorithms (EDA algorithms), i.e., algorithms with probabilistic models. Given the required characteristics of the target circuits, the presented methods are able to design both the parameters of the components used and the circuit topology. Three different methods employing EDA algorithms are proposed and tested on real-world problems from the field of analog electronic circuits. The first method is intended for the design of passive analog circuits and uses the UMDA algorithm to design both the topology and the parameter values of the components used. The method is applied to the design of an admittance network with a prescribed input impedance for a chaotic oscillator. The second method also targets passive analog circuits and uses a hybrid approach: UMDA for topology design and a local optimization method for the component parameters. The third method additionally allows the design of analog circuits containing transistors. It likewise uses a hybrid approach, with an EDA algorithm for topology synthesis and a local optimization method for determining the parameters of the components used. Topology information is encoded in the individuals of the population using graphs and hypergraphs.
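A minimal sketch of the UMDA idea on bitstrings (the toy objective and all hyperparameters are invented; the thesis encodes circuit topologies as graphs and hypergraphs instead):

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, elite_frac=0.3, generations=50, seed=1):
    """Univariate Marginal Distribution Algorithm: each generation, fit
    independent per-bit probabilities to the elite individuals and sample
    a new population from them (toy bitstring version)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                      # initial marginals
    n_elite = int(pop_size * elite_frac)
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]
        p = elite.mean(axis=0).clip(0.05, 0.95)   # avoid premature convergence
    return pop[scores.argmax()]

# Toy objective: maximize the number of ones.
best = umda(fitness=lambda b: b.sum(), n_bits=30)
print(best, best.sum())
```

Unlike a genetic algorithm, UMDA has no crossover or mutation operators; new candidates are sampled directly from the learned probabilistic model, which is the defining trait of the EDA family the thesis builds on.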
60

Computing Quantiles in Markov Reward Models

Ummels, Michael, Baier, Christel January 2013 (has links)
Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or the expected values of certain random variables. For quantitative system analysis, however, there is another interesting type of performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p ∈ ]0,1] and a reachability property. The task is then to compute the minimal reward bound r such that, with probability at least p, the target set is reached before the accumulated reward exceeds r. Quantiles are well known from mathematical statistics, but to the best of our knowledge they have not been addressed by the model checking community so far. In this paper, we study the complexity of quantile queries for until properties in discrete-time finite-state Markov decision processes with nonnegative rewards on states. We show that qualitative quantile queries can be evaluated in polynomial time and present an exponential algorithm for the evaluation of quantitative quantile queries. For the special case of Markov chains, we show that quantitative quantile queries can be evaluated in pseudo-polynomial time.
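A minimal sketch of the pseudo-polynomial idea for Markov chains (the chain, rewards, and the naive recursion are invented for illustration; the paper's algorithms are more general): increase the reward bound r until the probability of reaching the target within budget r meets p.

```python
from functools import lru_cache

# Hypothetical Markov chain: state -> list of (successor, probability).
P = {
    "s0": [("s1", 0.7), ("s2", 0.3)],
    "s1": [("goal", 0.5), ("s0", 0.5)],
    "s2": [("goal", 1.0)],
    "goal": [("goal", 1.0)],
}
reward = {"s0": 1, "s1": 2, "s2": 5, "goal": 0}  # nonnegative state rewards
TARGET = {"goal"}

@lru_cache(maxsize=None)
def prob_within(state, budget):
    """P(reach the target from 'state' with accumulated reward <= budget)."""
    if state in TARGET:
        return 1.0
    cost = reward[state]
    if cost > budget:
        return 0.0
    return sum(p * prob_within(t, budget - cost) for t, p in P[state])

def quantile(state, p):
    """Minimal reward bound r with P(reach target, reward <= r) >= p.
    Pseudo-polynomial: tries r = 0, 1, 2, ... (rewards on cycles must be
    positive for this naive recursion to terminate)."""
    r = 0
    while prob_within(state, r) < p:
        r += 1
    return r

print(quantile("s0", 0.9))
```

The running time depends on the magnitude of the quantile value itself, not only on the size of the chain, which is exactly what "pseudo-polynomial" means here.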
