11

Probabilistic reasoning applied to the diagnosis of congestive heart failure (CHF)

Silvestre, André Meyer January 2003 (has links)
Bayesian networks (BN) constitute an adequate computational model for probabilistic inference in domains that involve uncertainty. Medical diagnostic reasoning may be characterized as an act of probabilistic inference in an uncertain domain, where the elaboration of diagnostic hypotheses is represented by the stratification of diseases according to their associated probabilities. This dissertation surveys the methodology used in the construction and validation of Bayesian networks for the medical field, and applies this knowledge to the development of a probabilistic network to aid in the diagnosis of heart failure (HF). This BN, implemented as part of the SEAMED/AMPLIA system, would act as an alert for the early diagnosis and treatment of HF, enabling faster and more efficient care of patients with this pathology.
12

Techniques for Supporting Prediction of Security Breaches in Critical Cloud Infrastructures Using Bayesian Network and Markov Decision Process

January 2015 (has links)
abstract: Emerging trends in security breaches of critical cloud infrastructures show that attackers have abundant resources (human and computing power), expertise, and the support of large organizations and possibly foreign governments. To greatly improve the protection of critical cloud infrastructures, human behavior must be incorporated in order to predict potential security breaches. To achieve such prediction, we envision a probabilistic modeling approach capable of accurately capturing both the system-wide causal relationships among observed operational behaviors in the critical cloud infrastructure and the probabilistic behavior of human users on the subsystems they directly interact with. In our conceptual approach, the system-wide causal relationships are captured by a Bayesian network, and probabilistic human behavior in the subsystems is captured by Markov Decision Processes (MDPs). The interactions between the dynamically changing state graphs of the MDPs and the dynamic causal relationships in the Bayesian network are the key components of such probabilistic modeling applications. This thesis presents two techniques supporting this vision of predicting potential security breaches in critical cloud infrastructures. The first evaluates the conformance of the Bayesian network with the multiple MDPs. The second evaluates the dynamically changing Bayesian network structure for conformance with the rules of Bayesian networks using a graph checker algorithm. A case study and its simulation show how the two techniques support specific parts of our conceptual approach. / Dissertation/Thesis / Masters Thesis Computer Science 2015
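The second technique mentioned above relies on a graph checker that verifies a dynamically changing Bayesian network structure against the basic structural rule of Bayesian networks: the graph must remain a directed acyclic graph (DAG). A minimal sketch of such a check (edge names are illustrative, not from the thesis):

```python
from collections import defaultdict

def is_acyclic(edges):
    """Check that a directed graph is a DAG via iterative depth-first search."""
    graph = defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        nodes.update((u, v))
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {n: WHITE for n in nodes}
    for start in nodes:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = GRAY
        while stack:
            node, children = stack[-1]
            child = next(children, None)
            if child is None:
                color[node] = BLACK
                stack.pop()
            elif color[child] == GRAY:
                return False  # back edge found: the update created a cycle
            elif color[child] == WHITE:
                color[child] = GRAY
                stack.append((child, iter(graph[child])))
    return True

# A valid BN structure passes ...
assert is_acyclic([("attack", "alert"), ("alert", "response")])
# ... while an update introducing a cycle violates the BN rules.
assert not is_acyclic([("attack", "alert"), ("alert", "attack")])
```

In a full system this check would run each time the MDP-driven state changes propose a new edge, rejecting updates that break the DAG property.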
13

Explainable Fact Checking by Combining Automated Rule Discovery with Probabilistic Answer Set Programming

January 2018 (has links)
abstract: The goal of fact checking is to determine if a given claim holds. A promising approach for this task is to exploit reference information in the form of knowledge graphs (KGs), a structured and formal representation of knowledge with semantic descriptions of entities and relations. KGs are successfully used in multiple applications, but the information stored in a KG is inevitably incomplete. To address the incompleteness problem, this thesis proposes a new method built on top of recent results in logical rule discovery in KGs, called RuDik, and a probabilistic extension of answer set programs called LPMLN. This thesis presents the integration of RuDik, which discovers logical rules over a given KG, with LPMLN, which performs probabilistic inference to validate a fact. While rules automatically discovered over a KG are intended for human selection and revision, they can be turned into LPMLN programs with minor modification. Leveraging the probabilistic inference in LPMLN, it is possible to (i) derive new information not explicitly stored in the KG, with an associated probability, and (ii) provide supporting facts and rules as interpretable explanations for such decisions. The thesis also presents experiments and results showing that this approach can label claims with high precision. The evaluation of the system also sheds light on the role played by the quality of the given rules and the quality of the KG. / Dissertation/Thesis / Masters Thesis Computer Science 2018
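The core idea of weighted-rule probabilistic inference can be illustrated with a toy weighted-worlds semantics in the spirit of LPMLN and Markov Logic: each possible world gets weight proportional to the exponentiated sum of the weights of the rules it satisfies, and a claim's probability is the normalized weight of the worlds where it holds. The rules, facts, and weights below are invented for illustration, not taken from the thesis:

```python
import itertools
import math

# Hypothetical weighted rules over boolean facts: each rule is
# (weight, premise_facts, conclusion_fact). A world violates the rule
# only if all premises hold and the conclusion does not.
rules = [
    (2.0, ("bornIn_Paris",), "citizenOf_France"),  # soft discovered rule
    (1.0, (), "bornIn_Paris"),                     # soft evidence
]
facts = ["bornIn_Paris", "citizenOf_France"]

def world_weight(world):
    """exp(sum of weights of rules the world satisfies)."""
    total = 0.0
    for weight, premises, conclusion in rules:
        violated = all(world[p] for p in premises) and not world[conclusion]
        if not violated:
            total += weight
    return math.exp(total)

# Enumerate all truth assignments (feasible only for tiny fact sets).
worlds = [dict(zip(facts, vals))
          for vals in itertools.product([False, True], repeat=len(facts))]
z = sum(world_weight(w) for w in worlds)
p_claim = sum(world_weight(w) for w in worlds if w["citizenOf_France"]) / z
```

Here `p_claim` comes out above 0.5 because the rule chain supports the claim; real LPMLN solvers avoid the exponential world enumeration used in this sketch.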
15

Collective reasoning under uncertainty and inconsistency

Adamcik, Martin January 2014 (has links)
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent, knowledge bases. We show that the most naive methods of merging, which combine applications of a single expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning, where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between merged knowledge and the given knowledge bases. In particular we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures - procedures which combine knowledge updating with pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence and relativisation, which have previously been intensively studied in the case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators which are based on Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to compute them in practice.
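The flavour of divergence-minimising merging can be seen in two classical pooling operators, both special cases of merging under Kullback-Leibler divergence: the arithmetic mean minimises the summed KL divergence from each expert's distribution to the merged one, while the normalised geometric mean minimises the summed KL divergence in the other direction. This is a standard-results sketch, not the thesis's own operators:

```python
import math

def merge_linear(dists):
    """Arithmetic mean: minimises sum_i KL(q_i || p) over distributions p."""
    n = len(dists)
    return [sum(d[k] for d in dists) / n for k in range(len(dists[0]))]

def merge_log(dists):
    """Normalised geometric mean: minimises sum_i KL(p || q_i) over p.

    Assumes strictly positive probabilities (log of zero is undefined).
    """
    geo = [math.exp(sum(math.log(d[k]) for d in dists) / len(dists))
           for k in range(len(dists[0]))]
    z = sum(geo)
    return [g / z for g in geo]

# Two experts' probability assessments over the same three outcomes.
experts = [[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]]
merged = merge_log(experts)
assert abs(sum(merged) - 1.0) < 1e-9  # result is again a distribution
```

Choosing a different Bregman divergence in place of KL yields different merging operators with different fixed-point and consistency properties, which is the territory the thesis explores.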
16

Measuring inconsistency in probabilistic knowledge bases

Glauber De Bona 22 January 2016 (has links)
In terms of standard probabilistic reasoning, performing inference from a knowledge base normally requires guaranteeing the consistency of that base. When we come across an inconsistent set of probabilistic assessments, it interests us to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward as a tool to address these issues in the Artificial Intelligence community. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new forms of characterising inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been interested in measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, determining her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to guaranteed disadvantageous bet transactions - also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community.
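The equivalence between Dutch-book loss and distance to the nearest coherent assessment can be illustrated on the simplest possible case: an agent who prices a bet on A at 0.7 and a bet on not-A at 0.5. This is a minimal sketch of the two measures (the brute-force grid search is for illustration only):

```python
def guaranteed_loss(p_a, p_not_a):
    """Dutch-book loss when betting at incoherent prices on A and not-A.

    The agent pays p_a + p_not_a for two bets that jointly pay exactly 1
    whatever happens, so |p_a + p_not_a - 1| is lost (or conceded) for sure.
    """
    return abs(p_a + p_not_a - 1.0)

def l1_distance_to_coherent(p_a, p_not_a):
    """Minimal L1 adjustment to reach a coherent pair (p, 1 - p)."""
    candidates = [i / 1000 for i in range(1001)]  # coarse grid over p
    return min(abs(p_a - p) + abs(p_not_a - (1 - p)) for p in candidates)

loss = guaranteed_loss(0.7, 0.5)          # prices sum to 1.2, not 1
dist = l1_distance_to_coherent(0.7, 0.5)  # nearest coherent pair
assert abs(loss - 0.2) < 1e-9
assert abs(dist - 0.2) < 1e-6  # the two measures agree on this example
```

The thesis proves this agreement in full generality, for suitable choices of distance and betting scheme, rather than for this single two-event example.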
17

Real-time probabilistic reasoning system using Lambda architecture

Anikwue, Arinze January 2019 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2019 / The proliferation of data from sources like social media and sensor devices has become overwhelming for traditional data storage and analysis technologies to handle. This has prompted a radical improvement in data management techniques, tools and technologies to meet the increasing demand for effective collection, storage and curation of large data sets. Most of these technologies are open-source. Big data is usually described as very large datasets, but a major feature of big data is its velocity: data flows in as a continuous stream and must be acted on in real time to yield meaningful, relevant value. Although there is an explosion of technologies to handle big data, they usually target the processing of large (historic) datasets and real-time big data independently, hence the need for a unified framework that handles both. This need resulted in the development of models such as the Lambda architecture. Effective decision-making requires processing historic data as well as real-time data. Some decision-making involves complex processes that depend on the likelihood of events. To handle uncertainty, probabilistic systems were designed. Probabilistic systems use probabilistic models, developed with probability theories such as hidden Markov models, together with inference algorithms to process data and produce probabilistic scores. However, developing these models requires extensive knowledge of statistics and machine learning, making it an uphill task to model real-life circumstances. A new research area called probabilistic programming has been introduced to alleviate this bottleneck.
This research proposes the combination of modern open-source big data technologies with probabilistic programming and the Lambda architecture, on easy-to-obtain hardware, to develop a highly fault-tolerant and scalable processing tool that handles both historic and real-time big data in real time: a common solution. This system will empower decision makers with the capacity to make better-informed resolutions, especially in the face of uncertainty. The outcome of this research is a technology product, built and assessed using experimental evaluation methods. The research follows the Design Science Research (DSR) methodology, as it describes guidelines for the effective and rigorous construction and evaluation of an artefact. Probabilistic programming in the big data domain is still in its infancy; however, the developed artefact demonstrates the important potential of probabilistic programming combined with the Lambda architecture in the processing of big data.
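The Lambda architecture's key move is serving queries by merging a periodically recomputed batch view of all historic data with a low-latency speed view of recent events. A minimal in-memory sketch of that merge, using an invented event-count metric:

```python
from collections import defaultdict

# Batch layer: a view recomputed periodically over all historic events.
batch_view = {}
# Speed layer: incremental counts for events that arrived since the
# last batch recomputation.
speed_view = defaultdict(int)

def batch_recompute(all_events):
    """Rebuild the batch view from scratch over the full dataset."""
    counts = defaultdict(int)
    for key in all_events:
        counts[key] += 1
    batch_view.clear()
    batch_view.update(counts)
    speed_view.clear()  # speed layer resets once the batch has caught up

def ingest_realtime(key):
    """Low-latency path for a freshly arrived event."""
    speed_view[key] += 1

def query(key):
    """Serving layer: merge the batch view with the speed view."""
    return batch_view.get(key, 0) + speed_view[key]

batch_recompute(["login", "login", "error"])  # historic data
ingest_realtime("login")                      # a new real-time event
assert query("login") == 3
assert query("error") == 1
```

In the thesis's setting the batch and speed layers would be backed by distributed open-source systems, and the per-key aggregate could be a probabilistic score from an inference model rather than a plain count.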
18

An Adaptive Approach to Securing Ubiquitous Smart Devices in IoT Environment with Probabilistic User Behavior Prediction

January 2016 (has links)
abstract: Cyber systems, including the IoT (Internet of Things), are increasingly being used ubiquitously to vastly improve the efficiency and reduce the cost of critical application areas such as finance, transportation, defense, and healthcare. Over the past two decades, computing efficiency and hardware cost have improved dramatically. These improvements have made cyber systems pervasive, controlling many aspects of human lives. Emerging trends in successful cyber system breaches show increasing sophistication in attacks, and attackers are no longer limited by resources, including human and computing power. Most existing cyber defense systems for IoT have two major issues: (1) they do not incorporate human user behaviors and preferences in their approaches, and (2) they do not continuously learn from the dynamic environment and effectively adapt to thwart sophisticated cyber-attacks. Consequently, the security solutions generated may not be usable or implementable by users, drastically reducing their effectiveness. To address these major issues, a comprehensive approach to securing ubiquitous smart devices in the IoT environment by incorporating probabilistic human user behavioral inputs is presented. The approach includes techniques to (1) protect the controller device(s) [smart phone or tablet] by continuously learning and authenticating the legitimate user in the background, based on touch screen finger gestures, without requiring users to provide finger gesture inputs intentionally for training purposes, and (2) efficiently configure IoT devices through the controller device(s), in conformance with probabilistic human user behaviors and preferences, to effectively adapt IoT devices to the changing environment. The effectiveness of the approach is demonstrated with experiments based on collected user behavioral data and simulations.
/ Dissertation/Thesis / Doctoral Dissertation Computer Science 2016
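Continuous background authentication from touch gestures is often framed as anomaly detection: build a statistical profile of the legitimate user's swipes, then score each new swipe against it. A minimal sketch using per-feature z-scores; the features, profile values, and threshold are invented for illustration and are not the thesis's actual model:

```python
# Hypothetical per-user gesture profile: mean and standard deviation of
# a few swipe features, estimated from past legitimate swipes.
profile = {
    "duration_ms": (180.0, 40.0),
    "length_px":   (520.0, 90.0),
    "pressure":    (0.55, 0.10),
}

def anomaly_score(swipe):
    """Mean absolute z-score of the swipe's features against the profile."""
    zs = [abs(swipe[f] - mu) / sigma for f, (mu, sigma) in profile.items()]
    return sum(zs) / len(zs)

def is_legitimate(swipe, threshold=2.0):
    """Accept the swipe if it is statistically close to the user's profile."""
    return anomaly_score(swipe) < threshold

typical = {"duration_ms": 190.0, "length_px": 500.0, "pressure": 0.57}
odd = {"duration_ms": 600.0, "length_px": 100.0, "pressure": 0.95}
assert is_legitimate(typical)
assert not is_legitimate(odd)
```

A deployed system would update the profile online as new legitimate swipes arrive, which is what lets the authentication run in the background without explicit training sessions.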
19

Risk estimation at road intersections for connected vehicle safety applications

Lefèvre, Stéphanie 22 October 2012 (has links)
Intersections are the most complex and dangerous areas of the road network. Statistics show that most road intersection accidents are caused by driver error and that many of them could be avoided through the use of Advanced Driver Assistance Systems. In particular, vehicular communications open new opportunities for safety applications at road intersections. The sharing of information between vehicles over wireless links allows vehicles to perceive their environment beyond the field of view of their on-board sensors. Thanks to this enlarged representation of the environment in time and space, situation assessment is improved and dangerous situations can be detected earlier. This thesis tackles the problem of risk estimation at road intersections from a new perspective: a framework is proposed for reasoning about traffic situations and collision risk at a semantic level instead of at a trajectory level. Risk is assessed by estimating the intentions of drivers and looking for conflicts among them, rather than by predicting the future trajectories of the vehicles and looking for intersections between them. The proposed approach was validated in field trials using passenger vehicles equipped with vehicle-to-vehicle wireless communication modems, and in simulation. The results demonstrate that the algorithm allows the early detection of dangerous situations in a reliable manner and complies with real-time constraints. The proposed approach differs from previous works in two key aspects. Firstly, it does not rely on trajectory prediction to assess the risk of a situation. Dangerous situations are identified by comparing what drivers intend to do with what they are expected to do according to the traffic rules and the current context. The reasoning about intentions and expectations is performed in a probabilistic manner to take into account sensor uncertainties and interpretation ambiguities. Secondly, the proposed motion model includes information about the situational context: both the layout of the intersection and the actions of other vehicles are taken into account as factors influencing the behavior of a vehicle.
20

Quantifiers as evidence of the language of uncertainty: A psycholinguistic approach

Bazán Guzmán, Jorge Luis, Aparicio Pereda, Ana Sofía 25 September 2017 (has links)
A theoretical approach is presented to study quantifiers as evidence of the language of uncertainty. The following topics are considered: language and uncertainty; probabilistic reasoning; learning and evaluation of quantifiers; and the study of quantifiers as the study of the meaning and understanding of words, which constitutes the psycholinguistics of quantifiers. We argue that quantifiers, as words, are part of the language of uncertainty, which is in turn part of probabilistic reasoning, but that the mechanism of their learning is not known. We also consider it important to locate the study of quantifiers within the study of meaning, as evidence of internal psychological processes. Future investigations will facilitate a better understanding of the use of quantifiers.
