41

A Python implementation of graphical models

Gouws, Almero 2010
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In this thesis we present GrMPy, a library of classes and functions implemented in Python and designed for building graphical models. GrMPy supports both undirected and directed models, exact and approximate probabilistic inference, and parameter estimation from complete and incomplete data. We outline the theory required to understand the tools implemented within GrMPy and provide pseudo-code algorithms that illustrate how GrMPy is implemented.
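The abstract does not show GrMPy's interface, so the following is only a minimal, self-contained sketch of the kind of computation such a library automates: exact inference by enumeration on a three-variable directed model. The variable names and probability tables are illustrative assumptions.

```python
# Exact inference by enumeration on a chain A -> B -> C of binary variables
# (1 = "true"). Not GrMPy's API; a minimal, self-contained illustration only.

p_a = {1: 0.3, 0: 0.7}
p_b_given_a = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.1, (0, 0): 0.9}  # key: (b, a)
p_c_given_b = {(1, 1): 0.9, (0, 1): 0.1, (1, 0): 0.2, (0, 0): 0.8}  # key: (c, b)

def joint(a, b, c):
    """P(A=a, B=b, C=c), factorised along the chain."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

def posterior_a_given_c(c_obs):
    """P(A | C=c_obs), summing the joint over the hidden variable B."""
    unnorm = {a: sum(joint(a, b, c_obs) for b in (0, 1)) for a in (0, 1)}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

print(posterior_a_given_c(1))   # posterior over A after observing C = 1
```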
42

Troubleshooting Trucks : Automated Planning and Diagnosis / Felsökning av lastbilar : automatiserad planering och diagnos

Warnquist, Håkan January 2015
This thesis considers computer-assisted troubleshooting of heavy vehicles such as trucks and buses. In this setting, the person troubleshooting a vehicle problem is assisted by a computer that can list the possible faults that explain the problem and recommend which actions to take so that the expected cost of restoring the vehicle is low. To achieve this, such a system must solve two problems: the diagnosis problem of finding the possible faults, and the decision problem of deciding which action should be taken. The diagnosis problem is approached using Bayesian network models. Frameworks have been developed both for the case when the vehicle is in the workshop and for remote diagnosis when the vehicle is monitored over longer periods of time. The decision problem is solved by creating planners that select actions so that the expected cost of repairing the vehicle is minimized. New methods, algorithms, and models have been developed to improve the performance of the planner. The theory developed has been evaluated on models of an auxiliary braking system, a fuel injection system, and an engine temperature control and monitoring system.
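As a point of reference for the decision problem described above (not the planner developed in the thesis), the sketch below implements the textbook single-fault baseline in which repair actions are sequenced by decreasing probability-to-cost ratio, and the expected repair cost of the resulting plan is computed. The fault probabilities and action costs are invented for illustration.

```python
# Textbook single-fault troubleshooting baseline: order repair actions by
# decreasing p/c (probability of fixing the fault over action cost).
# Probabilities and costs below are assumptions, not data from the thesis.

actions = {                      # action: (probability it fixes the fault, cost)
    "replace_fuse":    (0.30, 10.0),
    "clean_connector": (0.25, 15.0),
    "swap_ecu":        (0.40, 200.0),
    "replace_harness": (0.05, 120.0),
}

def plan(actions):
    """Order actions by probability-to-cost ratio, highest first."""
    return sorted(actions, key=lambda a: actions[a][0] / actions[a][1], reverse=True)

def expected_cost(order, actions):
    """Expected cost of executing actions in order until the vehicle is fixed,
    assuming exactly one fault and that action i fixes it with probability p_i."""
    cost, p_fixed = 0.0, 0.0
    for a in order:
        p, c = actions[a]
        cost += (1.0 - p_fixed) * c   # this action is executed only if still broken
        p_fixed += p
    return cost

order = plan(actions)
print(order, round(expected_cost(order, actions), 2))
```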
43

Flexible cross layer design for improved quality of service in MANETs

Kiourktsidis, Ilias January 2011
Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics, and several delay-sensitive applications are starting to appear in such networks. A key concern is therefore guaranteeing Quality of Service (QoS) in a constantly changing communication environment. The classical QoS-aware solutions used until now in wired and infrastructure wireless networks cannot achieve the necessary performance in MANETs. The specialized protocols designed for multihop ad hoc networks offer basic connectivity with limited delay awareness, and mobility in MANETs makes them even less suitable. Protocols and solutions have been emerging at almost every layer of the protocol stack, and the majority of research efforts agree that, in such a dynamic environment, optimizing protocol performance requires additional information about the status of the network. Hence, many cross-layer design approaches have appeared. Cross-layer design has major advantages and its use is clearly necessary; however, it also conceals risks such as architectural instability and design inflexibility. Aggressive use of cross-layer design excessively increases the cost of deployment and complicates both maintenance and upgrade of the network. Autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to the unavailability of cross-layer information, can reduce the dependence on cross-layer design; the ability to predict dynamic conditions and adapt to them is equally important. This thesis proposes a routing decision algorithm based on Bayesian inference for predicting path quality, and demonstrates its accurate prediction capabilities and its efficient use of the available cross-layer information. Furthermore, an adaptive mechanism based on a Genetic Algorithm (GA) is used to control the flow of data at the transport layer. This flow-control mechanism inherits the GA's optimization capabilities without needing any details about the network conditions, thus reducing the dependence on cross-layer information. Finally, it is illustrated how Bayesian inference can be used to suggest configuration parameter values to protocols at other layers in order to improve their performance.
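The abstract does not specify the routing algorithm's probabilistic model, so the sketch below only illustrates one simple way Bayesian inference can predict path quality: a Beta-Bernoulli posterior over the packet delivery probability of each candidate path. The path names and counts are assumptions.

```python
# Minimal Bayesian path-quality predictor, assuming a Beta-Bernoulli model of
# packet delivery; the thesis's actual model and cross-layer inputs differ.
from dataclasses import dataclass

@dataclass
class PathQuality:
    alpha: float = 1.0   # prior pseudo-count of delivered packets
    beta: float = 1.0    # prior pseudo-count of lost packets

    def update(self, delivered: int, lost: int) -> None:
        """Conjugate update with observed delivery/loss counts."""
        self.alpha += delivered
        self.beta += lost

    def predicted_delivery_rate(self) -> float:
        """Posterior mean probability that the next packet is delivered."""
        return self.alpha / (self.alpha + self.beta)

# Usage: pick the candidate path with the highest predicted delivery rate.
paths = {"via_node_A": PathQuality(), "via_node_B": PathQuality()}
paths["via_node_A"].update(delivered=45, lost=5)
paths["via_node_B"].update(delivered=30, lost=20)
best = max(paths, key=lambda p: paths[p].predicted_delivery_rate())
print(best)   # "via_node_A"
```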
44

A cortical model of object perception based on Bayesian networks and belief propagation

Durá-Bernal, Salvador January 2011
Evidence suggests that high-level feedback plays an important role in visual perception by shaping the response in lower cortical levels (Sillito et al. 2006, Angelucci and Bullier 2003, Bullier 2001, Harrison et al. 2007). A notable example of this is reflected by the retinotopic activation of V1 and V2 neurons in response to illusory contours, such as Kanizsa figures, which has been reported in numerous studies (Maertens et al. 2008, Seghier and Vuilleumier 2006, Halgren et al. 2003, Lee 2003, Lee and Nguyen 2001). The illusory contour activity emerges first in lateral occipital cortex (LOC), then in V2 and finally in V1, strongly suggesting that the response is driven by feedback connections. Generative models and Bayesian belief propagation have been suggested to provide a theoretical framework that can account for feedback connectivity, explain psychophysical and physiological results, and map well onto the hierarchical distributed cortical connectivity (Friston and Kiebel 2009, Dayan et al. 1995, Knill and Richards 1996, Geisler and Kersten 2002, Yuille and Kersten 2006, Deneve 2008a, George and Hawkins 2009, Lee and Mumford 2003, Rao 2006, Litvak and Ullman 2009, Steimer et al. 2009). The present study explores the role of feedback in object perception, taking as a starting point the HMAX model, a biologically inspired hierarchical model of object recognition (Riesenhuber and Poggio 1999, Serre et al. 2007b), and extending it to include feedback connectivity. A Bayesian network that captures the structure and properties of the HMAX model is developed, replacing the classical deterministic view with a probabilistic interpretation. The proposed model approximates the selectivity and invariance operations of the HMAX model using the belief propagation algorithm. Hence, the model not only achieves successful feedforward recognition invariant to position and size, but is also able to reproduce modulatory effects of higher-level feedback, such as illusory contour completion, attention and mental imagery. Overall, the model provides a biophysiologically plausible interpretation, based on state-of-the-art probabilistic approaches and supported by current experimental evidence, of the interaction between top-down global feedback and bottom-up local evidence in the context of hierarchical object perception.
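For readers unfamiliar with belief propagation, the sketch below runs the sum-product algorithm on a toy three-node chain with binary states. It is not the cortical model itself; the potentials and evidence vectors are illustrative assumptions.

```python
# Sum-product belief propagation on a 3-node chain x1 - x2 - x3 with binary
# states. Potentials and evidence are illustrative, not the model's parameters.
import numpy as np

psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])            # pairwise potential: neighbours prefer to agree
evidence = [np.array([0.8, 0.2]),       # bottom-up (unary) evidence at each node
            np.array([0.5, 0.5]),
            np.array([0.3, 0.7])]

n = len(evidence)
fwd = [np.ones(2) for _ in range(n)]    # fwd[i]: message from node i-1 into node i
bwd = [np.ones(2) for _ in range(n)]    # bwd[i]: message from node i+1 into node i

for i in range(1, n):                   # left-to-right pass
    fwd[i] = psi.T @ (evidence[i - 1] * fwd[i - 1])
for i in range(n - 2, -1, -1):          # right-to-left pass
    bwd[i] = psi @ (evidence[i + 1] * bwd[i + 1])

beliefs = []
for i in range(n):
    b = evidence[i] * fwd[i] * bwd[i]   # combine local evidence with both messages
    beliefs.append(b / b.sum())         # posterior marginal at node i
print(beliefs)
```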
45

Modélisation d'éléments de structure en béton armé dégradés par corrosion : la problématique de l'interface acier/béton en présence de corrosion / Modelling of reinforced concrete structures subjected to corrosion : the specific case of the steel/concrete interface with corrosion.

Richard, Benjamin 14 September 2010
A major source of loss of performance in reinforced concrete structures (excessive cracking, loss of load-carrying capacity) can be attributed to reinforcement corrosion, induced either by carbonation or by chloride ion ingress through the cover concrete. Because the corrosion products are expansive, tensile stresses are generated and usually crack the cover concrete once the tensile strength is exceeded. From a practical point of view, by the time the first observable signs of degradation are noticed on site, corrosion has generally reached an advanced stage and maintenance actions have to be taken. This results in significant expenses that could have been avoided if a satisfactory prediction had been available.
This thesis aims to provide some answers to that problem. Two main objectives are addressed. The first is to formulate reliable constitutive models for a better understanding of the mechanical behaviour of existing concrete structures. The second is to develop a probabilistic approach for updating the mechanical model according to experimental information available on site. A general constitutive framework, thermodynamically admissible, is proposed, coupling elasticity, isotropic damage and internal sliding. This general framework is specialised into two constitutive models, one for the steel/concrete interface including corrosion and one for the concrete behaviour. Both models are validated on several structural cases and can be used for monotonic as well as cyclic loadings; they account for nonlinear hysteretic effects, the quasi-unilateral effect, permanent strains, etc. Simplified versions of the proposed constitutive models are also developed for engineering purposes within the framework of multifibre beam theory: although a Timoshenko-based kinematics is assumed, a non-perfect steel/concrete interface can still be considered locally. Because material parameter identification is not always straightforward, robust updating methods can improve the accuracy of mechanical models. A complete probabilistic approach based on Bayesian networks is therefore proposed; it not only accounts for the uncertainties in the mechanical parameters but also reduces the gap between experimental measurements and numerical predictions. This study provides stakeholders with decision tools for predicting the structural behaviour of degraded reinforced concrete structures.
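As a toy illustration of updating a mechanical model parameter from site measurements (the thesis combines Bayesian networks with reliability theory; this sketch uses only a conjugate Normal-Normal update with assumed values):

```python
# Bayesian update of one mechanical parameter from noisy on-site measurements,
# assuming a Normal prior and Normal likelihood with known measurement noise.
# All numerical values are illustrative assumptions.
import math

def update_normal(prior_mean, prior_sd, measurements, noise_sd):
    """Posterior mean/sd of a parameter observed through noisy measurements."""
    n = len(measurements)
    prior_prec = 1.0 / prior_sd**2
    like_prec = n / noise_sd**2
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * prior_mean + like_prec * (sum(measurements) / n)) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Example: prior belief about a tensile strength (MPa), updated with three readings.
print(update_normal(prior_mean=3.0, prior_sd=0.6, measurements=[2.4, 2.6, 2.5], noise_sd=0.3))
```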
46

New Methods for Large-Scale Analyses of Social Identities and Stereotypes

Joseph, Kenneth 01 June 2016
Social identities, the labels we use to describe ourselves and others, carry with them stereotypes that have significant impacts on our social lives. Our stereotypes, sometimes without us knowing, guide our decisions on whom to talk to and whom to stay away from, whom to befriend and whom to bully, whom to treat with reverence and whom to view with disgust. Despite these impacts of identities and stereotypes on our lives, existing methods used to understand them are lacking. In this thesis, I first develop three novel computational tools that further our ability to test and utilize existing social theory on identity and stereotypes. These tools include a method to extract identities from Twitter data, a method to infer affective stereotypes from newspaper data and a method to infer both affective and semantic stereotypes from Twitter data. Case studies using these methods provide insights into Twitter data relevant to the Eric Garner and Michael Brown tragedies and both Twitter and newspaper data from the “Arab Spring”. Results from these case studies motivate the need for not only new methods for existing theory, but new social theory as well. To this end, I develop a new sociotheoretic model of identity labeling - how we choose which label to apply to others in a particular situation. The model combines data, methods and theory from the social sciences and machine learning, providing an important example of the surprisingly rich interconnections between these fields.
47

Implementing Bayesian Networks for online threat detection

Pappaterra, Mauro José January 2018
Cybersecurity threats have surged in the past decades. Experts agree that conventional security measures will soon not be enough to stop the propagation of more sophisticated and harmful cyberattacks. Recently, there has been growing interest in mastering the complexity of cybersecurity by adopting methods borrowed from Artificial Intelligence (AI) to support automation. Moreover, entire security frameworks, such as DETECT (Decision Triggering Event Composer and Tracker), are designed for the automatic and early detection of threats against systems, using model analysis and recognising sequences of events and other tropes inherent to attack patterns. In this project, I concentrate on cybersecurity threat assessment through the translation of Attack Trees (AT) into probabilistic detection models based on Bayesian Networks (BN). I also show how these models can be integrated and dynamically updated as a detection engine in the existing DETECT framework for automated threat detection, hence enabling both offline and online threat assessment. Integration in DETECT is important to allow real-time model execution and evaluation for quantitative threat assessment. Finally, I apply my methodology to some real-world case studies, evaluate the models with sample data, perform data sensitivity analyses, then present and discuss the results.
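The sketch below shows the simplest version of the idea: evaluating a small attack tree with independent leaf probabilities and deterministic AND/OR gates, which correspond to the CPTs such gates induce in a Bayesian-network translation. The tree and probabilities are invented; the thesis's translation and its DETECT integration go further.

```python
# Probability of the top attack event from a tiny AND/OR attack tree, assuming
# independent leaves. Tree shape and leaf probabilities are illustrative only.
from math import prod

tree = ("OR",
        ("AND", ("leaf", "phish_admin"), ("leaf", "escalate_privileges")),
        ("leaf", "exploit_public_service"))

p_leaf = {"phish_admin": 0.20, "escalate_privileges": 0.50, "exploit_public_service": 0.05}

def p_success(node):
    """Probability that the (sub)attack rooted at 'node' succeeds."""
    kind = node[0]
    if kind == "leaf":
        return p_leaf[node[1]]
    child_ps = [p_success(c) for c in node[1:]]
    if kind == "AND":                        # all children must succeed
        return prod(child_ps)
    if kind == "OR":                         # at least one child succeeds
        return 1.0 - prod(1.0 - p for p in child_ps)
    raise ValueError(kind)

print(round(p_success(tree), 4))   # 1 - (1 - 0.2*0.5) * (1 - 0.05) = 0.145
```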
48

Fusion de décisions dédiée à la surveillance des systèmes complexes / Decision fusion dedicated to the monitoring of complex systems

Tidriri, Khaoula 16 October 2018
Systems are becoming more and more complex and require new, effective methods for their supervision. Such supervision includes a monitoring phase that aims to improve the system's performance and ensure safe production for people and equipment. This thesis deals with fault detection, diagnosis and prognosis, with a methodology based on decision fusion. The main objective is to propose a generic approach for fusing the decisions of several individual monitoring methods so that the fused result outperforms each of them. To this end, a new decision-fusion scheme based on Bayesian theory is proposed, in which the Bayesian network parameters are derived theoretically from the monitoring performance objectives to be reached. The development leads to a constrained multi-objective problem, solved with a lexicographic approach. The first step is offline and consists of defining the performance objectives to be achieved in order to improve the overall performance of the system; the Bayesian network parameters respecting these objectives are then deduced theoretically. Finally, the parametrized Bayesian network is used online to test the decision-fusion performance. The methodology is applied on the one hand to detection and diagnosis, and on the other hand to prognosis. Performance is evaluated in terms of Fault Diagnostic Rate (FDR) and False Alarm Rate (FAR) for the detection and diagnosis stage, and in terms of Remaining Useful Life (RUL) for the prognosis.
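A minimal sketch of decision fusion under a naive-Bayes independence assumption is given below: each monitoring method is characterised by an assumed detection rate and false-alarm rate, and the fused posterior fault probability is computed from their binary decisions. The thesis instead derives the network parameters from FDR/FAR objectives; the numbers here are illustrative.

```python
# Fusing binary detector decisions with a naive-Bayes assumption. Detection
# rates, false-alarm rates and the fault prior are illustrative assumptions.

detectors = [                     # (P(alarm | fault), P(alarm | no fault))
    {"fdr": 0.90, "far": 0.10},
    {"fdr": 0.75, "far": 0.05},
    {"fdr": 0.60, "far": 0.02},
]

def fused_fault_probability(alarms, prior_fault=0.01):
    """Posterior P(fault | detector decisions) under conditional independence."""
    like_fault, like_ok = prior_fault, 1.0 - prior_fault
    for d, alarm in zip(detectors, alarms):
        like_fault *= d["fdr"] if alarm else (1.0 - d["fdr"])
        like_ok *= d["far"] if alarm else (1.0 - d["far"])
    return like_fault / (like_fault + like_ok)

print(round(fused_fault_probability([True, True, False]), 3))
```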
49

Approche de diagnostic des défauts d’un produit par intégration des données de traçabilité unitaire produit/process et des connaissances expertes / Product defects diagnosis approach by integrating product / process unitary traceability data and expert knowledge

Diallo, Thierno M. L. 10 December 2015
This thesis, carried out within the FUI Traçaverre project, aims to optimize product recall for a production process that is not batch-based but provides unitary traceability of the produced items. The objective is to minimize the number of recalled items while ensuring that all defective items are recalled. We propose an efficient recall procedure that exploits the possibilities offered by unitary traceability and relies on a diagnosis function, which becomes indispensable before the actual recall. For complex industrial systems for which human expertise is insufficient and no physical model is available, unitary traceability offers a way to better understand and analyse the manufacturing process by reconstructing the life of the product from the traceability data. Coupling product and process unitary traceability data represents a potential source of knowledge to be exploited, and this thesis proposes a data model for that coupling, based on two standards, one dedicated to production and the other to traceability. After identifying and integrating the necessary data, we developed a data-driven diagnosis function, built by a learning approach that integrates knowledge about the system to reduce the complexity of the learning algorithm.
In the proposed recall procedure, once the equipment causing the fault is identified, the health status of that equipment around the manufacturing time of the defective product is evaluated in order to identify the other products likely to present the same defect. The overall approach is applied to two case studies: the first concerns the glass industry, and the second the benchmark Tennessee Eastman process.
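The recall step sketched in the abstract can be illustrated as follows, assuming a simple unitary traceability record of (item, equipment, manufacturing time) and a fixed time window; the thesis additionally evaluates the equipment's health state around that time.

```python
# Collect items produced on the same equipment as a defective item, within a
# time window around it. Record fields and the window are assumptions only.
from datetime import datetime, timedelta

records = [  # unitary traceability: (item_id, equipment_id, manufacturing time)
    ("B001", "mould_7", datetime(2015, 5, 4, 10, 2)),
    ("B002", "mould_7", datetime(2015, 5, 4, 10, 4)),
    ("B003", "mould_3", datetime(2015, 5, 4, 10, 5)),
    ("B004", "mould_7", datetime(2015, 5, 4, 11, 30)),
]

def items_to_recall(defective_id, window=timedelta(minutes=30)):
    """Items made on the same equipment as the defective one, close in time."""
    _, equipment, t0 = next(r for r in records if r[0] == defective_id)
    return [item for item, eq, t in records
            if eq == equipment and abs(t - t0) <= window]

print(items_to_recall("B001"))   # ['B001', 'B002'] with the assumed 30-minute window
```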
50

Sistema evolutivo eficiente para aprendizagem estrutural de redes Bayesianas / Efficient evolutionary system for learning BN structures

Villanueva Talavera, Edwin Rafael 21 September 2012
Bayesian networks (BN) are widely accepted probabilistic tools for modeling and reasoning in domains under uncertainty. One of the most difficult tasks in constructing a BN is determining its model structure, which represents the structure of interdependencies among the modeled variables. Exact estimation of the model structure from observed data is generally infeasible, since the number of possible structures grows super-exponentially with the number of variables. Efficient approximate learning methods are therefore essential for building credible BNs. This work presents the Efficient Evolutionary System for learning BN structures (EES-BN), which is composed of two learning phases. The first phase reduces the search space by learning a superstructure; two effective methods, Opt01SS and OptHPC, both based on independence tests, were developed for this task.
The second phase of EES-BN is an evolutionary search that approximates the model structure while respecting the structural constraints learned in the superstructure. Three main blocks compose this phase: recombination, mutation and diversity injection. To improve search efficiency, a new recombination operator (MergePop) was developed, improving on the Merge operator of Wong and Leung (2004). The operators in the mutation and diversity-injection blocks were also chosen to provide an appropriate balance between exploration and exploitation of solutions. All blocks of EES-BN are structured to operate collaboratively and in a self-regulating fashion. In a series of experiments on benchmark BNs of varied size, EES-BN learned structures significantly closer to the gold-standard networks than several other representative methods (two evolutionary: CCGA and GAK2; two non-evolutionary: GS and MMHC). Its computational times were also competitive, markedly improving on those of the other evolutionary methods and also outperforming GS on the larger networks. The effectiveness of EES-BN was further verified on two relevant problems in bioinformatics: i) reconstructing a gene regulatory network from gene-expression data, and ii) modeling linkage disequilibrium from genotyped genetic-marker data of human populations. In both applications EES-BN was able to capture interesting relationships of established biological significance.
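As an illustration of the second phase's core constraint (searching only within the learned superstructure while keeping candidate structures acyclic), here is a small mutation-operator sketch; the scoring of candidates against data and the MergePop recombination are omitted, and the variables and superstructure are assumptions.

```python
# Mutate candidate BN structures inside an allowed superstructure, keeping
# them acyclic. Variables, superstructure and the random walk are illustrative.
import random

variables = ["A", "B", "C", "D"]
superstructure = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}  # allowed links

def is_acyclic(edges):
    """Kahn's algorithm: the graph is a DAG iff all nodes can be removed."""
    nodes, edges = set(variables), set(edges)
    while nodes:
        sources = [n for n in nodes if not any(e[1] == n for e in edges)]
        if not sources:
            return False              # every remaining node has a parent: cycle
        nodes -= set(sources)
        edges = {e for e in edges if e[0] in nodes}
    return True

def mutate(edges):
    """Add, remove or reverse one edge, staying inside the superstructure and acyclic."""
    candidates = []
    for u, v in superstructure | {(b, a) for a, b in superstructure}:
        if (u, v) in edges:
            candidates.append(edges - {(u, v)})                # remove
            candidates.append(edges - {(u, v)} | {(v, u)})     # reverse
        else:
            candidates.append(edges | {(u, v)})                # add
    candidates = [c for c in candidates if is_acyclic(c)]
    return random.choice(candidates) if candidates else edges

random.seed(0)
structure = {("A", "B"), ("B", "C")}
for _ in range(5):
    structure = mutate(structure)
print(sorted(structure))
```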
