Bayesian networks are used for building intelligent agents that act under uncertainty; they are a compact representation of an agent's probabilistic knowledge. A Bayesian network can be viewed as representing a factorization of the full joint probability distribution into a product of conditional probability distributions. Independence of causal influence allows these conditional probability distributions to be factorized further into a combination of even smaller factors. The efficiency of inference in Bayesian networks depends on how these factors are combined, and finding an optimal combination is NP-hard.
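As a concrete illustration (not drawn from the thesis itself), the chain-rule factorization of a network over variables X_1, ..., X_n, and the noisy-OR model, one common independence-of-causal-influence model, can be written as:

P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{Pa}(X_i))

P(Y = \mathrm{false} \mid C_1, \ldots, C_k) = \prod_{j \,:\, C_j = \mathrm{true}} (1 - p_j) \qquad \text{(noisy-OR)}

Here Pa(X_i) denotes the parents of X_i, and p_j is the probability that cause C_j alone produces the effect Y. The full conditional table for Y over k binary causes has 2^k rows, but under noisy-OR it is determined by only k parameters, and the probability of the effect being absent factors into k two-variable terms.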
We propose a new method for efficient inference in large Bayesian networks that combines new representations with new combination algorithms. We present new, purely multiplicative representations of independence-of-causal-influence models. They are easy to use because any standard inference algorithm can work with them, and they allow independence of causal influence to be exploited fully because they impose no constraints on combination ordering. We develop combination algorithms guided by heuristics, which are generated automatically using machine learning techniques. Empirical studies based on the CPCS network for medical diagnosis show that this method is more efficient and allows inference in larger networks than existing methods. / Graduation date: 1999
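The following is a minimal sketch, in Python, of the general idea that independence of causal influence lets an exponential-size conditional probability table be reproduced from a product of small per-cause factors. It uses the standard noisy-OR model with made-up parameters; it is illustrative only and is not the purely multiplicative representation developed in the thesis.

import itertools

# Hypothetical activation probabilities: p[j] is the chance that cause j,
# acting alone, turns the effect on (illustrative numbers, not from the thesis).
p = [0.8, 0.6, 0.3, 0.5]
k = len(p)

def noisy_or_direct(causes):
    # Full-CPT view: P(effect = true | causes) from the noisy-OR definition.
    prob_false = 1.0
    for j, c in enumerate(causes):
        if c:
            prob_false *= 1.0 - p[j]   # each active cause independently fails with prob 1 - p[j]
    return 1.0 - prob_false

def noisy_or_factored(causes):
    # Factored view: the same value as a product of k small per-cause factors.
    factors = [(1.0 - p[j]) if c else 1.0 for j, c in enumerate(causes)]
    prob_false = 1.0
    for f in factors:                  # the small factors may be multiplied in any order
        prob_false *= f
    return 1.0 - prob_false

# The factored representation reproduces the exponential-size CPT entry by entry.
for causes in itertools.product([False, True], repeat=k):
    assert abs(noisy_or_direct(causes) - noisy_or_factored(causes)) < 1e-12
print("factored noisy-OR matches the full CPT on all", 2 ** k, "parent configurations")

In this toy example the per-cause factors involve only two variables each, so an inference algorithm can combine them pairwise instead of materializing the full table; how well that pays off depends on the order in which the factors are combined, which is the ordering problem the abstract refers to.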
Identifier | oai:union.ndltd.org:ORGSU/oai:ir.library.oregonstate.edu:1957/33530 |
Date | 15 October 1998 |
Creators | Takikawa, Masami |
Contributors | D'Ambrosio, Bruce |
Source Sets | Oregon State University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |