1 |
Integrating Probabilistic Reasoning with Constraint Satisfaction. Hsu, Eric. 09 June 2011.
We hypothesize and confirm that probabilistic reasoning is closely related to constraint satisfaction at a formal level, and that this relationship yields effective algorithms for guiding constraint satisfaction and constraint optimization solvers.
By taking a unified view of probabilistic inference and constraint reasoning in terms of graphical models, we first relate a number of formalisms and techniques across the two areas. For instance, we characterize search and inference in constraint reasoning as summation and multiplication (or disjunction and conjunction) in the probabilistic space; necessary but insufficient consistency conditions for solutions to constraint problems (like arc-consistency) mirror approximate objective functions over probability distributions (like the Bethe free energy); and the polytope of feasible points for marginal probabilities represents the linear relaxation of a particular constraint satisfaction problem.
While such insights synthesize an assortment of existing formalisms from varied research communities, they also yield an entirely novel set of “bias estimation” techniques that contribute to a growing body of research on applying probabilistic methods to constraint problems. In practical terms, these techniques estimate the percentage of solutions to a constraint satisfaction or optimization problem wherein a given variable is assigned a given value. By devising search methods that incorporate such information as heuristic guidance for variable and value ordering, we are able to outperform existing solvers on problems of interest from constraint satisfaction and constraint optimization, as represented here by the SAT and MaxSAT problems.
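As a rough illustration of the bias definition above (a brute-force sketch, not the thesis's actual estimation machinery, which must approximate these quantities without enumerating solutions), the bias of each variable in a small CNF formula can be computed exactly:

```python
from itertools import product

def solution_biases(num_vars, clauses):
    """Exact 'biases' for a CNF formula by brute-force enumeration.

    A clause is a list of nonzero ints: literal i > 0 means variable i
    is True, i < 0 means False (DIMACS-style). Returns, per variable,
    the fraction of satisfying assignments in which it is True.
    """
    solutions = [bits for bits in product([False, True], repeat=num_vars)
                 if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                        for clause in clauses)]
    if not solutions:
        return None  # unsatisfiable: biases are undefined
    return [sum(s[i] for s in solutions) / len(solutions)
            for i in range(num_vars)]

# Tiny example: (x1 or x2) and (not x1 or x3) has four solutions.
print(solution_biases(3, [[1, 2], [-1, 3]]))  # [0.5, 0.75, 0.75]
```

A branching heuristic built on such estimates might, for instance, pick the variable whose bias is farthest from 0.5 and assign it its more likely value first.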
Further, for MaxSAT we present an “equivalent transformation” process that normalizes the weights in constraint optimization problems, in order to encourage pruning of the search tree during branch-and-bound search. To control such computationally expensive processes, we determine promising situations for using them throughout the course of an individual search. We accomplish this using a reinforcement learning-based control module that seeks a principled balance between the exploration of new strategies and the exploitation of existing experience.
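The abstract does not specify the control module's internals; as one minimal, hypothetical sketch of an exploration/exploitation balance, an epsilon-greedy bandit over candidate strategies could look like this (the strategy set and reward signal are stand-ins):

```python
import random

def choose_strategy(values, epsilon=0.1):
    """Epsilon-greedy: explore a random strategy with probability epsilon,
    otherwise exploit the strategy with the best estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)

def update(values, counts, arm, reward):
    """Incrementally average observed rewards for the chosen strategy."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

values, counts = [0.0, 0.0], [0, 0]      # e.g. {run transformation, skip it}
arm = choose_strategy(values)
update(values, counts, arm, reward=1.0)  # reward: hypothetical search progress
```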
|
2 |
Measuring inconsistency in probabilistic knowledge bases. De Bona, Glauber. 22 January 2016.
In standard probabilistic reasoning, performing inference from a knowledge base normally requires guaranteeing the consistency of that base. When we come across an inconsistent set of probabilistic assessments, it interests us to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward in the Artificial Intelligence community as a tool to address these issues. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new forms of characterising the inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been interested in measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, determining her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions, also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to inconsistency measures via distance minimisation from the AI community.
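A hedged sketch of the distance-minimisation idea (the 1-norm below is an illustrative choice; the thesis treats a family of such measures):

```latex
% Let q = (q_1, ..., q_n) be probabilistic assessments on formulas
% A_1, ..., A_n, and let C be the set of coherent assessments (those
% extendable to a probability measure). A distance-based measure is
\[
  I(q) \;=\; \min_{p \in C} \lVert q - p \rVert_1
        \;=\; \min_{p \in C} \sum_{i=1}^{n} \lvert q_i - p_i \rvert ,
\]
% so I(q) = 0 exactly when q is coherent. On the Dutch-book side, an
% incoherent q admits a system of bets with guaranteed loss, and the
% thesis shows such guaranteed-loss measures coincide with
% distance-minimisation measures of this kind.
```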
|
3 |
Reasoning and Learning with Probabilistic Answer Set Programming. January 2019.
Knowledge Representation (KR) is one of the prominent approaches to Artificial Intelligence (AI) that is concerned with representing knowledge in a form that computer systems can utilize to solve complex problems. Answer Set Programming (ASP), based on the stable model semantics, is a widely used KR framework that facilitates elegant and efficient representations for many problem domains that require complex reasoning.
However, while ASP is effective on deterministic problem domains, it is not suitable for applications involving quantitative uncertainty, for example, those that require probabilistic reasoning. Furthermore, it is hard, within ASP problem modeling, to utilize information that can be statistically induced from data.
This dissertation presents the language LP^MLN, a probabilistic extension of the stable model semantics with the concept of weighted rules, inspired by Markov Logic. An LP^MLN program defines a probability distribution over "soft" stable models, which need not satisfy all rules: the more rules of larger weight a stable model satisfies, the higher its probability. LP^MLN takes advantage of both ASP and Markov Logic in a single framework, allowing representation of problems that require both logical and probabilistic reasoning in an intuitive and elaboration tolerant way.
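A sketch of the distribution just described, following the abstract's wording (the notation is assumed here for concreteness, not quoted from the dissertation):

```latex
% For an LP^MLN program Pi of weighted rules (w : R), and an
% interpretation I that is a "soft" stable model (a stable model of the
% set of rules it satisfies), the probability of I is proportional to
% the exponentiated total weight of the rules it satisfies:
\[
  P(I) \;\propto\; \exp\!\Big( \sum_{(w \,:\, R) \in \Pi,\;\; I \,\models\, R} w \Big),
\]
% normalised over all soft stable models, so the more rules of larger
% weight I satisfies, the higher its probability.
```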
This dissertation establishes formal relations between LP^MLN and several other formalisms, discusses inference and weight learning algorithms under LP^MLN, and presents systems implementing the algorithms. LP^MLN systems can be used to compute other languages translatable into LP^MLN.
The advantage of LP^MLN for probabilistic reasoning is illustrated by a probabilistic extension of the action language BC+, called pBC+, defined as a high-level notation of LP^MLN for describing transition systems. Various probabilistic reasoning tasks about transition systems, especially probabilistic diagnosis, can be modeled in pBC+ and computed using LP^MLN systems. pBC+ is further extended with the notion of utility, through a decision-theoretic extension of LP^MLN, and related to Markov Decision Processes (MDPs) in terms of policy optimization problems. pBC+ can be used to represent (PO)MDPs in a succinct and elaboration tolerant way, which enables planning with (PO)MDP algorithms in action domains whose description requires rich KR constructs, such as recursive definitions and indirect effects of actions.
|
4 |
Statistical reasoning in nonhuman primates and human children. Placì, Sarah. 25 March 2019.
No description available.
|
5 |
Exploring Probabilistic Reasoning: A Study of How Students Contextualise Compound Chance Encounters in Explorative Settings. Nilsson, Per. January 2006.
This thesis aims at exploring how probabilistic reasoning arises in explorative learning situations that are random in nature. The focus is especially on what learners with scant experience of formal theories of probability do and can do when dealing with compound random situations in which they are offered opportunities to integrate different probabilistic lines of reasoning. Three studies were carried out for the purpose of gaining an understanding of how learners’ probabilistic reasoning is organised and re-organised in explorative, random-dependent situations. In two of the studies, 12- to 13-year-old students acted within a dice-game setting, which was based on the total of two dice. The third study examined 14- to 16-year-old students’ ways of dealing with ICT versions of compound, independent events viewed in a random-dependent ramified structure. To uncover the basis and the content of the students’ reasoning, behaviour has been regarded in terms of intentions. That is, to understand and make sense of the students’ reasoning, their activities have been matched and re-matched with conjectures about their intents to fulfil certain goals. Although the students were acting on the same learning material, the analyses revealed various kinds of probabilistic reasoning among the students. It has been argued that students’ various ways of dealing with chance encounters may be understood and explained with reference to the ways in which they interpret the learning situations. Thus, this thesis suggests that probabilistic reasoning takes form through a process of contextualisation, i.e. through a compound process where the cognitive activity oscillates between interpretations and reflections about context, the focal event and new information that comes into play. This thesis reveals that students, prior to instruction, are able to devise ideas of an underlying probability distribution in the case of compound random phenomena. The students bring into the discussion geometrical and numerical considerations, as well as arguments reflecting principles of the law of large numbers.
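For reference, the dice-game setting rests on the (non-uniform) distribution of the total of two dice; a minimal sketch of that distribution, and of the law-of-large-numbers effect the students' arguments reflect:

```python
import random
from collections import Counter

# Exact distribution: 36 equally likely ordered pairs, so
# P(total = t) = (number of pairs summing to t) / 36.
exact = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print({t: exact[t] / 36 for t in range(2, 13)})  # peaks at 7 (6/36)

# Law of large numbers: simulated frequencies of a total of 7 approach 6/36.
for n in (100, 10_000, 1_000_000):
    hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
               for _ in range(n))
    print(n, round(hits / n, 4), "vs exact", round(6 / 36, 4))
```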
|
6 |
Requirement-based Root Cause Analysis Using Log Data. Zawawy, Hamzeh. January 2012.
Root cause analysis for software systems is a challenging diagnostic task due to the complexity emanating from the interactions between system components. Furthermore, the sheer size of the logged data often makes it difficult for human operators and administrators to perform problem diagnosis and root cause analysis. The diagnostic task is further complicated by the lack of models that could be used to support the diagnostic process. Traditionally, this diagnostic task is conducted by human experts who create mental models of systems in order to generate hypotheses and conduct the analysis even in the presence of incomplete logged data. A challenge in this area is to provide the necessary concepts, tools, and techniques for the operators to focus their attention on specific parts of the logged data and ultimately to automate the diagnostic process.
The work described in this thesis proposes a framework of techniques, formalisms, and algorithms for automating the process of root cause analysis. In particular, this work uses annotated requirement goal models to represent the monitored systems' requirements and runtime behavior. The goal models are used in combination with log data to generate a ranked set of diagnostics that represent the combination of tasks whose failure led to the observed failure. In addition, the framework uses a combination of word-based and topic-based information retrieval techniques to reduce the size of the log data, filtering out a subset of log data to facilitate the diagnostic process. The process of log data filtering and reduction is based on goal model annotations and generates a sequence of logical literals that represent the possible systems' observations. A second level of investigation consists of looking for evidence of any malicious activity (i.e., intentionally caused by a third party) leading to task failures. This analysis uses annotated anti-goal models that denote possible actions that can be taken by an external user to threaten a given system task. The framework uses a novel probabilistic approach based on Markov Logic Networks. Our experiments show that our approach improves over existing proposals by handling uncertainty in observations, using natively generated log data, and providing ranked diagnoses. The proposed framework has been evaluated using a test environment based on commercial off-the-shelf software components, a publicly available Java-based ATM machine, and the large, publicly available DARPA 2000 dataset.
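As a minimal, hypothetical sketch of the Markov-Logic flavour of the ranking step (the predicates and weights below are invented for illustration; the thesis derives its models from goal-model annotations and log data), each candidate diagnosis is scored by the exponentiated total weight of the weighted formulas it satisfies, then normalised:

```python
import math
from itertools import product

# Hypothetical weighted formulas over two task-failure hypotheses.
formulas = [
    (2.0, lambda t1, t2: not t1),           # prior: task1 usually succeeds
    (2.0, lambda t1, t2: not t2),           # prior: task2 usually succeeds
    (4.0, lambda t1, t2: t1 or t2),         # observed failure needs a cause
    (1.5, lambda t1, t2: not (t1 and t2)),  # joint failures are rare
]

def weight(world):
    """Markov-Logic-style score: exp(sum of weights of satisfied formulas)."""
    return math.exp(sum(w for w, f in formulas if f(*world)))

worlds = list(product([False, True], repeat=2))
z = sum(weight(w) for w in worlds)
for p, (t1, t2) in sorted(((weight(w) / z, w) for w in worlds), reverse=True):
    print(f"P(task1_failed={t1}, task2_failed={t2}) = {p:.3f}")
```

The two single-failure diagnoses come out on top here, which is the kind of ranked output the framework returns to the operator.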
|
7 |
The evolutionary roots of intuitive statistics. Eckert, Johanna. 24 September 2018.
No description available.
|
8 |
Probabilistic reasoning applied to the diagnosis of congestive heart failure (CHF). Silvestre, André Meyer. January 2003.
Bayesian networks (BN) constitute an adequate computational model for performing probabilistic inference in domains that involve uncertainty. Medical diagnostic reasoning may be characterized as an act of probabilistic inference in an uncertain domain, where the elaboration of diagnostic hypotheses is represented by the stratification of diseases according to their associated probabilities. The present dissertation surveys the methodology used in the construction and validation of Bayesian networks for the medical field, and makes use of this knowledge to develop a probabilistic network to aid in the diagnosis of heart failure (HF). This BN, implemented as part of the SEAMED/AMPLIA system, would play the role of alerting for early diagnosis and treatment of HF, which could provide faster and more efficient healthcare for patients with this pathology.
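A minimal sketch of the kind of update a single node of such a network performs (the probabilities and the finding are hypothetical placeholders, not values from the SEAMED/AMPLIA network):

```python
# Toy Bayes-rule update for one diagnostic node: P(HF | finding).
p_hf = 0.10                  # hypothetical prior probability of HF
p_find_given_hf = 0.80       # hypothetical P(finding | HF)
p_find_given_no_hf = 0.15    # hypothetical P(finding | no HF)

evidence = p_find_given_hf * p_hf + p_find_given_no_hf * (1 - p_hf)
posterior = p_find_given_hf * p_hf / evidence
print(f"P(HF | finding) = {posterior:.3f}")  # ~0.372
```

A full network chains many such conditional probability tables, so that several findings can jointly raise or lower the alert for HF.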
|