61

Développement d’un modèle de classification probabiliste pour la cartographie du couvert nival dans les bassins versants d’Hydro-Québec à l’aide de données de micro-ondes passives / Development of a probabilistic classification model for mapping snow cover in Hydro-Québec watersheds using passive microwave data

Teasdale, Mylène 09 1900 (has links)
Every day, decisions must be made about the amount of hydroelectric power to produce in Quebec. These decisions rely on forecasts of water inflows to the watersheds, produced with hydrological models. The models take several factors into account, notably the presence or absence of snow on the ground. This information is critical during the spring melt for anticipating the inflows to come, since between 30 and 40% of the flood volume can originate from snowmelt. Forecasters therefore need to track the evolution of the snow cover daily in order to adjust their forecasts to the melt. Methods for mapping snow on the ground are currently used at the Institut de recherche d'Hydro-Québec (IREQ), but they have some shortcomings. The goal of this master's thesis is to use passive microwave remote sensing data, namely the vertically polarized brightness temperature gradient (GTV), within a statistical approach to produce snow/no-snow maps and to quantify the classification uncertainty. To do so, the GTV was used to compute a daily probability of snow through Gaussian mixture models in a Bayesian framework. These probabilities were then modelled with linear regression on their logits, and snow cover maps were produced. The model results were validated qualitatively and quantitatively, and their integration at Hydro-Québec is discussed.
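As a rough illustration of the kind of Bayesian snow-probability computation described above (not the thesis's actual model), the following sketch fits one Gaussian component to GTV values from snow pixels and one from snow-free pixels, then converts a daily GTV observation into a posterior probability of snow via Bayes' rule. All variable names, parameters and the synthetic data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical training data: GTV values labelled snow / no-snow.
gtv_snow = rng.normal(loc=-0.4, scale=0.15, size=500)   # snow pixels (assumed distribution)
gtv_bare = rng.normal(loc=0.2, scale=0.10, size=500)    # snow-free pixels (assumed distribution)

# Fit one Gaussian per class and estimate the prior from class frequencies.
mu_s, sd_s = gtv_snow.mean(), gtv_snow.std(ddof=1)
mu_b, sd_b = gtv_bare.mean(), gtv_bare.std(ddof=1)
p_snow_prior = len(gtv_snow) / (len(gtv_snow) + len(gtv_bare))

def prob_snow(gtv):
    """Posterior P(snow | GTV) from a two-component Gaussian mixture via Bayes' rule."""
    like_s = norm.pdf(gtv, mu_s, sd_s) * p_snow_prior
    like_b = norm.pdf(gtv, mu_b, sd_b) * (1 - p_snow_prior)
    return like_s / (like_s + like_b)

# Daily observation -> probability of snow -> binary snow/no-snow map value.
p = prob_snow(-0.25)
print(f"P(snow | GTV=-0.25) = {p:.3f}, classified as {'snow' if p >= 0.5 else 'no snow'}")
```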
62

Modelling and verification for DNA nanotechnology

Dannenberg, Frits Gerrit Willem January 2016 (has links)
DNA nanotechnology is a rapidly developing field that creates nanoscale devices from DNA, which enables novel interfaces with biological material. Their therapeutic use is envisioned and applications in other areas of basic science have already been found. These devices function at physiological conditions and, owing to their molecular scale, are subject to thermal fluctuations during both preparation and operation of the device. Troubleshooting a failed device is often difficult and we develop models to characterise two separate devices: DNA walkers and DNA origami. Our framework is that of continuous-time Markov chains, abstracting away much of the underlying physics. The resulting models are coarse but enable analysis of system-level performance, such as ‘the molecular computation eventually returns the correct answer with high probability’. We examine the applicability of probabilistic model checking to provide guarantees on the behaviour of nanoscale devices, and to this end we develop novel model checking methodology. We model a DNA walker that autonomously navigates a series of junctions, and we derive design principles that increase the probability of correct computational output. We also develop a novel parameter synthesis method for continuous-time Markov chains, for which the synthesised models guarantee a predetermined level of performance. Finally, we develop a novel discrete stochastic assembly model of DNA origami from first principles. DNA origami is a widespread method for creating nanoscale structures from DNA. Our model qualitatively reproduces experimentally observed behaviour and using the model we are able to rationally steer the folding pathway of a novel polymorphic DNA origami tile, controlling the eventual shape.
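To make the continuous-time Markov chain abstraction concrete, the sketch below simulates a hypothetical walker that at each of a few junctions faces competing "correct" and "wrong" steps with made-up exponential rates, and estimates the probability of correct computational output by simulation. It is a generic CTMC toy, not the thesis's walker model or its model-checking analysis.

```python
import random

# Hypothetical CTMC of a walker crossing a series of junctions: at each junction,
# two transitions compete, "correct" (rate K_OK) and "wrong" (rate K_ERR).
K_OK, K_ERR = 1.0, 0.15      # assumed rates, 1/s
N_JUNCTIONS = 4

def walk(rng):
    """Simulate one walker; return (reached_correct_output, elapsed_time)."""
    t = 0.0
    for _ in range(N_JUNCTIONS):
        total = K_OK + K_ERR
        t += rng.expovariate(total)          # exponential holding time of the CTMC state
        if rng.random() < K_ERR / total:     # embedded-chain choice between the two moves
            return False, t                  # walker diverted onto an absorbing wrong track
    return True, t

rng = random.Random(42)
runs = [walk(rng) for _ in range(100_000)]
p_correct = sum(ok for ok, _ in runs) / len(runs)
mean_time = sum(t for _, t in runs) / len(runs)
print(f"P(correct output) ~ {p_correct:.3f}, mean completion time ~ {mean_time:.2f} s")
# Exact value for comparison: (K_OK / (K_OK + K_ERR)) ** N_JUNCTIONS
```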
63

Nonlinear acoustic wave propagation in complex media : application to propagation over urban environments / Propagation d'ondes non linéaires en milieu complexe : application à la propagation en environnement urbain

Leissing, Thomas 30 November 2009 (has links)
This research develops and validates a numerical model for studying blast wave propagation over large distances and over urban environments. The approach uses the Nonlinear Parabolic Equation (NPE) model as a basis, extended to handle various features of outdoor sound propagation (non-flat ground topographies, porous ground layers, etc.). The NPE is solved with the finite-difference method and is shown to be in good agreement with other numerical methods. This deterministic model is then used as the basis for a stochastic model of sound propagation over urban environments. Information theory and the maximum entropy principle allow the construction of a probabilistic model of uncertainties that accounts for the variability of the urban environment within the NPE model. Reference results obtained with an exact numerical method validate the theoretical developments and the approach used.
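As a hint of what a finite-difference treatment of nonlinear acoustic propagation looks like, the toy below advances a one-dimensional waveform with a first-order upwind scheme and shows the wavefront steepening that precedes shock formation. The NPE itself is typically solved in a window moving with the ambient sound speed, over 2-D domains with ground effects; this sketch keeps only the nonlinear steepening term, and the grid, amplitude and coefficient values are arbitrary assumptions.

```python
import numpy as np

# In a frame moving with the ambient sound speed, the leftover dynamics is dominated
# by nonlinear steepening: u_t + beta * u * u_x = 0 (a Burgers-type stand-in, not the NPE).
beta = 1.2                                   # coefficient of nonlinearity (air, approximate)
nx, dx = 400, 0.05
x = np.arange(nx) * dx
u = 10.0 * np.exp(-((x - 5.0) / 1.0) ** 2)   # initial pulse, u >= 0 everywhere

dt = 0.4 * dx / (beta * u.max())             # CFL-limited time step
slope0 = np.max(np.abs(np.diff(u)) / dx)
for _ in range(100):
    # First-order upwind difference; valid here because the local speed beta*u is >= 0.
    u[1:] -= dt / dx * beta * u[1:] * (u[1:] - u[:-1])
    u[0] = 0.0

print(f"max |du/dx|: {slope0:.1f} initially -> {np.max(np.abs(np.diff(u)) / dx):.1f} after steepening")
```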
64

Detektor tempa hudebních nahrávek na bázi neuronové sítě / Tempo detector based on a neural network

Suchánek, Tomáš January 2021 (has links)
This Master's thesis deals with beat tracking systems whose functionality is based on neural networks. It describes the structure of these systems and how the signal is processed in their individual blocks. Emphasis is then placed on recurrent and temporal convolutional networks, which by their nature can effectively detect tempo and beats in audio recordings. The selected methods, network architectures and their modifications are implemented within a comprehensive detection system, which is further tested and evaluated through cross-validation on a genre-diverse dataset. The results show that the system, with the proposed temporal convolutional network architecture, produces results comparable to those reported in the literature. For example, on the SMC dataset it proved to be the most successful, whereas on the other datasets its accuracy fell slightly below that of state-of-the-art systems. In addition, the proposed network retains low computational complexity despite an increased number of internal parameters.
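To give a concrete idea of the dilated temporal convolutional blocks such a beat tracker is built from, here is a minimal PyTorch module that maps a spectrogram-like feature sequence to a per-frame beat activation. It is a generic sketch, not the thesis's architecture; the layer sizes, kernel widths and frame rate are assumed.

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated, non-causal 1-D convolutional block with a residual connection."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=5,
                              dilation=dilation, padding=2 * dilation)
        self.act = nn.ELU()
        self.drop = nn.Dropout(0.1)

    def forward(self, x):
        return x + self.drop(self.act(self.conv(x)))

class BeatTCN(nn.Module):
    """Spectrogram frames (batch, n_bins, n_frames) -> beat activation per frame."""
    def __init__(self, n_bins: int = 81, channels: int = 16):
        super().__init__()
        self.front = nn.Conv1d(n_bins, channels, kernel_size=3, padding=1)
        self.tcn = nn.Sequential(*[TCNBlock(channels, 2 ** d) for d in range(8)])
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, spec):
        return torch.sigmoid(self.head(self.tcn(self.front(spec)))).squeeze(1)

model = BeatTCN()
activation = model(torch.randn(1, 81, 1000))   # ~10 s of frames at an assumed 100 fps
print(activation.shape)                        # torch.Size([1, 1000])
```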
65

Evoluční syntéza analogových elektronických obvodů s využitím algoritmů EDA / Evolutionary Synthesis of Analog Electronic Circuits Using EDA Algorithms

Slezák, Josef January 2014 (has links)
This doctoral thesis focuses on the design of analog electronic circuits using estimation of distribution algorithms (EDA algorithms), i.e., evolutionary algorithms based on probabilistic models. Given the required characteristics of the target circuits, the presented methods are able to design both the parameters of the components used and the circuit topology. Three different ways of employing EDA algorithms are proposed and tested on real-world problems from the field of analog electronic circuits. The first method targets passive analog circuits and uses the UMDA algorithm to design both the circuit topology and the component parameter values; it is applied to the design of an admittance network with a prescribed input impedance for a chaotic oscillator. The second method also targets passive analog circuits and uses a hybrid approach: UMDA for topology design and a local optimization method for the component parameters. The third method allows the design of analog circuits that also contain transistors; it likewise uses a hybrid approach, with an EDA algorithm for topology synthesis and local optimization to determine the component parameters. In the individuals of the population, the topology information is encoded using graphs and hypergraphs.
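The UMDA algorithm mentioned here is simple to sketch: it keeps an independent marginal probability for each bit of the encoding, samples a population, selects the best individuals and re-estimates the marginals. The toy below runs a generic UMDA on the OneMax problem rather than the thesis's circuit encoding; the population size, elite fraction and fitness function are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def umda(fitness, n_bits, pop_size=100, elite_frac=0.3, generations=60):
    """Univariate Marginal Distribution Algorithm over fixed-length bit strings."""
    p = np.full(n_bits, 0.5)                                    # independent per-bit marginals
    n_elite = int(pop_size * elite_frac)
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample a population
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]              # keep the best individuals
        p = elite.mean(axis=0).clip(0.05, 0.95)                 # re-estimate marginals (bounded away from 0/1)
    best = pop[np.argmax(scores)]
    return best, fitness(best)

# In the thesis the bit string would encode a circuit topology (and, in the first method,
# quantised component values); here we simply maximise the number of ones as a placeholder.
best, score = umda(fitness=lambda bits: int(bits.sum()), n_bits=40)
print(score, best)
```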
66

Alternative Automata-based Approaches to Probabilistic Model Checking

Müller, David 13 November 2019 (has links)
In this thesis we focus on new methods for probabilistic model checking (PMC) with linear temporal logic (LTL). The standard approach translates an LTL formula into a deterministic ω-automaton with a double-exponential blow-up. There are approaches for Markov chain analysis against LTL with exponential runtime, which motivates the search for non-deterministic automata with restricted forms of non-determinism that make them suitable for PMC. For MDPs, the approach via deterministic automata matches the double-exponential lower bound, but practical applications might still benefit from approaches via non-deterministic automata. We first investigate good-for-games (GFG) automata, in which the non-determinism can be resolved for a finite prefix without knowing the infinite suffix while still obtaining an accepting run for every accepted word. We explain that GFG automata are well suited for MDP analysis on a theoretical level, but our experiments show that they cannot compete with deterministic automata. We have also researched another form of pseudo-determinism, namely unambiguity, where every accepted word has exactly one accepting run. We present a polynomial-time approach for PMC of Markov chains against specifications given by an unambiguous Büchi automaton (UBA). Its two key elements are deciding whether the induced probability is positive and, if so, identifying a state set that induces probability 1. Additionally, we examine the new symbolic Muller acceptance described in the Hanoi Omega Automata Format, which we call Emerson-Lei acceptance: a positive Boolean formula over unconditional fairness constraints. We present a construction of small deterministic automata using Emerson-Lei acceptance. Deciding whether an MDP has a positive maximal probability of satisfying an Emerson-Lei acceptance condition is NP-complete; this result motivated a DPLL-based algorithm for deciding positivity.
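As background for the quantitative step in such analyses, the sketch below shows the textbook reachability computation for a plain discrete-time Markov chain: a qualitative graph search first, then a linear system for the exact probabilities. It is a generic illustration on a made-up four-state chain, far simpler than the UBA product construction or the Emerson-Lei machinery of the thesis.

```python
import numpy as np

# Transition matrix of a small discrete-time Markov chain (rows sum to 1); state 3 is the target.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.0, 0.2, 0.4, 0.4],
    [0.0, 0.0, 1.0, 0.0],   # state 2 is absorbing and never reaches the target
    [0.0, 0.0, 0.0, 1.0],   # target (absorbing)
])
target = {3}
n = len(P)

# Qualitative step: which states can reach the target at all (backward graph search)?
can_reach = set(target)
changed = True
while changed:
    changed = False
    for s in range(n):
        if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
            can_reach.add(s)
            changed = True

# Quantitative step: solve the linear system for the states with a positive, non-trivial probability.
unknown = sorted(can_reach - target)
A = np.eye(len(unknown)) - P[np.ix_(unknown, unknown)]
b = np.array([sum(P[s, t] for t in target) for s in unknown])
x = np.linalg.solve(A, b)

prob = {s: 0.0 for s in range(n)} | {s: 1.0 for s in target} | dict(zip(unknown, x))
print({s: round(p, 4) for s, p in prob.items()})
```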
67

Computing Quantiles in Markov Reward Models

Ummels, Michael, Baier, Christel January 2013 (has links)
Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or expected values of certain random variables. For the quantitative system analysis, however, there is also another type of interesting performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p ∈ ]0,1] and a reachability property. The task is then to compute the minimal reward bound r such that with probability at least p the target set will be reached before the accumulated reward exceeds r. Quantiles are well-known from mathematical statistics, but to the best of our knowledge they have not been addressed by the model checking community so far. In this paper, we study the complexity of quantile queries for until properties in discrete-time finite-state Markov decision processes with nonnegative rewards on states. We show that qualitative quantile queries can be evaluated in polynomial time and present an exponential algorithm for the evaluation of quantitative quantile queries. For the special case of Markov chains, we show that quantitative quantile queries can be evaluated in pseudo-polynomial time.
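For Markov chains, the pseudo-polynomial computation mentioned at the end can be sketched directly: compute the probability of reaching the target with accumulated reward at most r by a recursion over the reward bound r, then return the smallest r whose probability meets the threshold p. The chain, rewards and threshold below are invented, and the sketch assumes that rewards are earned on leaving a state and that every non-target state has reward at least 1, which keeps the recursion a plain dynamic program.

```python
import numpy as np

# Small Markov chain: transition matrix P, nonnegative integer reward earned when a state
# is left, and one absorbing target state.
P = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],   # target, absorbing
])
reward = np.array([2, 1, 3, 0])
target = 3

def prob_within(r_max, start=0):
    """p[r][s] = Pr_s(reach the target with accumulated reward <= r)."""
    n = len(P)
    p = np.zeros((r_max + 1, n))
    p[:, target] = 1.0
    for r in range(r_max + 1):
        for s in range(n):
            if s != target and reward[s] <= r:
                p[r, s] = P[s] @ p[r - reward[s]]   # spend reward[s], then continue
    return p[:, start]

def quantile(prob_bound, start=0, r_cap=64):
    """Minimal reward bound r with Pr_start(reach target with reward <= r) >= prob_bound."""
    probs = prob_within(r_cap, start)
    hits = np.nonzero(probs >= prob_bound)[0]
    return int(hits[0]) if hits.size else None

print([round(x, 3) for x in prob_within(8)])
print("quantile for p = 0.9:", quantile(0.9))
```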
68

Model Checking Techniques for Design and Analysis of Future Hardware and Software Systems

Märcker, Steffen 12 April 2021 (has links)
Computer hardware and software laid the foundation for fundamental innovations in science, technology, economics and society. Novel application areas generate an ever-increasing demand for computation power and storage capacities. Classic CMOS-based hardware and the von Neumann architecture are approaching their limits in miniaturization, power density and communication speed. To meet future demands, researchers work on new device technologies and architecture approaches which in turn require new algorithms and a hardware/software co-design to exploit their capabilities. Since the overall system heterogeneity and complexity increases, the challenge is to build systems with these technologies that are both correct and performant by design. Formal methods in general and model checking in particular are established verification methods in hardware design, and have been successfully applied to many hardware, software and integrated hardware/software systems. In many systems, probabilistic effects arise naturally, e.g., from input patterns, production variations or the occurrence of faults. Probabilistic model checking facilitates the quantitative analysis of performance and reliability measures in stochastic models that formalize this probabilism. The interdisciplinary research project Center for Advancing Electronics Dresden, cfaed for short, aims to explore hardware and software technologies for future information processing systems. It joins the research efforts of different groups working on technologies for all system layers ranging from transistor device research over system architecture up to the application layer. The collaborations among the groups showed a demand for new formal methods and enhanced tools to assist the design and analysis of technologies at all system layers and their cross-layer integration. Addressing these needs is the goal of this thesis. This work contributes to probabilistic model checking for Markovian models with new methods to compute two essential measures in the analysis of hardware/software systems and a method to tackle the state-space explosion problem: 1) Conditional probabilities are well known in stochastic theory and statistics, but efficient methods did not exist to compute conditional expectations in Markov chains and extremal conditional probabilities in Markov decision processes. This thesis develops new polynomial-time algorithms, and it provides a mature implementation for the probabilistic model checker PRISM. 2) Relativized long-run and relativized conditional long-run averages are proposed in this work to reason about probabilities and expectations in Markov chains on the long run when zooming into sets of states or paths. Both types of long-run averages are implemented for PRISM. 3) Symmetry reduction is an effective abstraction technique to tame the state-space explosion problem. However, state-of-the-art probabilistic model checkers apply it only after building the full model and offer no support for specifying non-trivial symmetric components. This thesis fills this gap with a modeling language based on symmetric program graphs that facilitates symmetry reduction on the source level. The new language can be integrated seamlessly into the PRISM modeling language. This work contributes to the research on future hardware/software systems in cfaed with three practical studies that are enabled by the developed methods and their implementations. 
1) To confirm the relevance of the new methods in practice and to validate the results, the first study analyzes a well-understood synchronization protocol, a test-and-test-and-set spinlock. Beyond this confirmation, the analysis demonstrates the capability to compute properties that are hardly accessible to measurements. 2) Probabilistic write-copy/select is an alternative protocol to overcome the scalability issues of classic resource-locking mechanisms. A quantitative analysis verifies the protocol's principle of operation and evaluates the performance trade-offs to guide future implementations of the protocol. 3) The impact of a new device technology is hard to estimate since circuit-level simulations are not available in the early stages of research. This thesis proposes a formal framework to model and analyze circuit designs for novel transistor technologies. It encompasses an operational model of electrical circuits, a functional model of polarity-controllable transistor devices and algorithms for design space exploration in order to find optimal circuit designs using probabilistic model checking. A practical study assesses the model accuracy for a lab device based on germanium nanowires and performs an automated exploration and performance analysis of the design space of a given switching function. The experiments demonstrate how the framework enables an early systematic design space exploration and performance evaluation of circuits for experimental transistor devices.
Table of contents:
1. Introduction (1.1 Related Work)
2. Preliminaries
3. Conditional Probabilities in Markovian Models (3.1 Methods for Discrete- and Continuous-Time Markov Chains; 3.2 Reset Method for Markov Decision Processes; 3.3 Implementation; 3.4 Evaluation and Comparative Studies; 3.5 Conclusion)
4. Long-Run Averages in Markov Chains (4.1 Relativized Long-Run Average; 4.2 Conditional State Evolution; 4.3 Implementation; 4.4 Conclusion)
5. Language-Support for Immediate Symmetry Reduction (5.1 Probabilistic Program Graphs; 5.2 Symmetric Probabilistic Program Graphs; 5.3 Implementation; 5.4 Conclusion)
6. Practical Applications of the Developed Techniques (6.1 Test-and-Test-and-Set Spinlock: Quantitative Analysis of an Established Protocol; 6.2 Probabilistic Write/Copy-Select: Quantitative Analysis as Design Guide for a Novel Protocol; 6.3 Circuit Design for Future Transistor Technologies: Evaluating Polarity-Controllable Multiple-Gate FETs)
7. Conclusion
Bibliography
Appendices: A. Conditional Probabilities and Expectations (A.1 Selection of Benchmark Models; A.2 Additional Benchmark Results; A.3 Comparison PRISM vs. Storm); B. Language-Support for Immediate Symmetry Reduction (B.1 Syntax of the PRISM Modeling Language; B.2 Multi-Core Example); C. Practical Applications of the Developed Techniques (C.1 Test-and-Test-and-Set Spinlock; C.2 Probabilistic Write/Copy-Select; C.3 Circuit Design for Future Transistor Technologies)
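One of the contributed notions, the relativized long-run average, can be pictured on a small example. The sketch below computes the stationary distribution and a long-run average reward for a toy Markov chain, then "zooms into" a subset of states by renormalising the stationary mass over that subset. This is only one plausible reading of a relativized average: the chain and rewards are invented, and the thesis's formal definitions and PRISM implementation may differ.

```python
import numpy as np

# Irreducible discrete-time Markov chain with a reward per state (all values made up).
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.25, 0.25, 0.50],
])
reward = np.array([0.0, 2.0, 5.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

long_run_avg = pi @ reward
# A "zoomed-in" average over a subset of states (renormalised stationary mass),
# used here as an illustration of relativizing a long-run measure to part of the state space.
subset = [1, 2]
relativized = (pi[subset] @ reward[subset]) / pi[subset].sum()

print(f"stationary distribution: {np.round(pi, 4)}")
print(f"long-run average reward: {long_run_avg:.3f}; restricted to states {subset}: {relativized:.3f}")
```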
69

Analyses probabilistes pour l'étude des réseaux électriques de distribution / Probabilistic load flow computation for unbalanced distribution grids with distributed generation

Diop, Fallilou 25 June 2018 (has links)
The ongoing changes in the electrical system (distributed generation, electric vehicles, storage, microgrids, etc.) raise economic and technical issues for its management, among them the impact of distributed generation and electric vehicles on the voltage levels and neutral currents of distribution networks. The aim of this thesis is to study probabilistic models for estimating the impacts of photovoltaic production and electric vehicles on medium- and low-voltage distribution networks. The uncertainty in the PV power produced and in EV usage calls for probabilistic models of electricity consumption and production. Two different models of production and consumption were studied: one based on fitting historical data with a probability density function, the other based on clustering the data into groups defined by a typical profile and a probability of occurrence. Three probabilistic load flow techniques were studied in this thesis to account for the intermittency of PV production and the uncertainty in consumption: the first based on Monte Carlo simulation, a second based on the PEM approximation, and a third, based on clustering, called the pseudo Monte Carlo method. After comparing the relevance of these methods on two test networks, the pseudo Monte Carlo method, chosen for its shorter simulation time and its adaptability, is applied to a case study estimating the probability of exceeding neutral current limits as a function of the imbalance of the installed PV production.
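To illustrate the Monte Carlo flavour of probabilistic load flow on the smallest possible case, the sketch below samples PV output and load, propagates them through a made-up two-bus balanced feeder using a linearised voltage-drop formula, and estimates voltage-limit violation probabilities. Nothing here corresponds to the unbalanced three-phase networks or the neutral-current study of the thesis; all network and distribution parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Toy single-feeder data in per-unit (entirely made up): slack voltage, line R and X.
V1, R, X = 1.00, 0.15, 0.08

# Uncertain net active power at the far bus: PV injection minus load (both random).
pv = rng.beta(2.0, 2.5, N) * 0.8           # scaled PV output, 0..0.8 pu (assumed shape)
load = rng.normal(0.30, 0.08, N).clip(0)   # active load, pu (assumed distribution)
p_net = pv - load                          # > 0 means net injection toward the grid
q_net = -0.10 * np.ones(N)                 # small fixed reactive consumption (assumed)

# Simple voltage-drop approximation for a short line: V2 ~ V1 + (R*P + X*Q) / V1.
V2 = V1 + (R * p_net + X * q_net) / V1

print(f"mean V2 = {V2.mean():.4f} pu")
print(f"P(V2 > 1.05 pu) ~ {(V2 > 1.05).mean():.4f}   (overvoltage risk)")
print(f"P(V2 < 0.95 pu) ~ {(V2 < 0.95).mean():.4f}   (undervoltage risk)")
```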
70

Text to Music Audio Generation using Latent Diffusion Model : A re-engineering of AudioLDM Model / Text till musik ljudgenerering med hjälp av latent diffusionsmodell : En omkonstruktion av AudioLDM-modellen

Wang, Ernan January 2023 (has links)
In the emerging field of audio generation using diffusion models, this project pioneers the adaptation of the AudioLDM model framework, initially designed for generating everyday sounds from text, to text-to-music audio generation. This shift addresses a gap in current audio diffusion models, which focus predominantly on everyday sounds. The motivation for this thesis stems from AudioLDM's remarkable generative capabilities in producing such sounds from text descriptions, while its application to music audio generation remains underexplored. The thesis modifies AudioLDM's architecture and training objectives to cater to the particular characteristics of musical audio. The re-engineering process involved two primary steps. First, a dataset was constructed by sourcing a variety of music audio samples from the FMA dataset (A Dataset For Music Analysis) [1] and generating pseudo-captions with a large language model specialized in music captioning; this dataset served as the foundation for training the adapted model. Second, the text conditioning of the model's diffusion backbone, a UNet architecture, was revised to incorporate both the CLAP encoder and the T5 text encoder. This dual-encoding approach, coupled with a shift from the traditional noise-prediction objective to the v-objective, aimed to enhance the model's performance in generating coherent and musically relevant audio. The effectiveness of these adaptations was validated through both subjective and objective evaluations. Compared to the original AudioLDM model, the adapted version demonstrated superior quality of the audio output and higher relevance between the text prompts and the generated music. This advancement not only proves the feasibility of adapting AudioLDM for music generation but also opens new avenues for research and application in text-to-music audio synthesis.
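The v-objective mentioned above is easy to state in code: instead of predicting the added noise, the network is trained to predict v = alpha_t * eps - sigma_t * x0. The sketch below shows a generic diffusion training step with a placeholder denoiser and an assumed linear noise schedule; it is not AudioLDM's actual latent UNet, schedule or conditioning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)
alpha_t_all = alphas_bar.sqrt()                 # alpha_t in the v-objective sense
sigma_t_all = (1.0 - alphas_bar).sqrt()

# Placeholder denoiser standing in for the latent UNet (text conditioning omitted).
net = nn.Sequential(nn.Linear(64 + 1, 256), nn.SiLU(), nn.Linear(256, 64))

def v_objective_loss(x0):
    """One training step's loss with v-prediction instead of noise prediction."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    alpha = alpha_t_all[t].unsqueeze(1)
    sigma = sigma_t_all[t].unsqueeze(1)
    eps = torch.randn_like(x0)

    x_t = alpha * x0 + sigma * eps              # forward (noising) process
    v_target = alpha * eps - sigma * x0         # the v-objective target
    v_pred = net(torch.cat([x_t, t.float().unsqueeze(1) / T], dim=1))
    return F.mse_loss(v_pred, v_target)

x0 = torch.randn(8, 64)                         # stand-in for latent audio features
print(v_objective_loss(x0).item())
```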
