About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

1. The automatic explanation of Multivariate Time Series with large time lags

Tucker, Allan Brice James. January 2001.
No description available.

2. Bayesian participatory-based decision analysis: an evolutionary, adaptive formalism for integrated analysis of complex challenges to social-ecological system sustainability

Peter, Camaren. January 2010.
Includes bibliographical references (pages 379-400).

This dissertation responds to the need for integration between researchers and decision-makers who are dealing with complex social-ecological system sustainability and decision-making challenges. To this end, we propose a new approach, called Bayesian Participatory-based Decision Analysis (BPDA), which makes use of graphical causal maps and Bayesian networks to facilitate integration at the appropriate scales and levels of description. The BPDA approach is not a predictive approach; rather, it caters for a wide range of future scenarios in anticipation of the need to adapt to unforeseeable changes as they occur. We argue that the graphical causal models and Bayesian networks constitute an evolutionary, adaptive formalism for integrating research and decision-making for sustainable development. The approach was implemented in a number of interdisciplinary case studies concerned with social-ecological system-scale challenges and problems, culminating in a study where the approach was implemented with decision-makers in government. This dissertation introduces the BPDA approach, and shows how it helps identify critical cross-scale and cross-sector linkages and sensitivities, and addresses critical requirements for understanding system resilience and adaptive capacity.
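As an illustration of the kind of scenario querying the abstract describes, the sketch below computes a posterior in a toy three-variable discrete network by brute-force enumeration after fixing evidence. The variables, states, and probabilities are invented for illustration; this is not the BPDA model itself.

```python
# Toy network: Policy -> Adoption -> EmissionsLow, all variables binary.
# All probabilities are illustrative assumptions.
p_policy = {1: 0.5, 0: 0.5}
p_adoption = {1: {1: 0.8, 0: 0.2},   # P(Adoption | Policy = 1)
              0: {1: 0.3, 0: 0.7}}   # P(Adoption | Policy = 0)
p_emis_low = {1: {1: 0.7, 0: 0.3},   # P(EmissionsLow | Adoption = 1)
              0: {1: 0.2, 0: 0.8}}   # P(EmissionsLow | Adoption = 0)

def joint(pol, adopt, emis):
    """Chain-rule factorization of the joint probability."""
    return p_policy[pol] * p_adoption[pol][adopt] * p_emis_low[adopt][emis]

def p_emis_low_given_policy(pol):
    """P(EmissionsLow = 1 | Policy = pol), summing out Adoption."""
    num = sum(joint(pol, a, 1) for a in (0, 1))
    den = sum(joint(pol, a, e) for a in (0, 1) for e in (0, 1))
    return num / den

# Scenario comparison: intervention vs. no intervention.
print(p_emis_low_given_policy(1))  # 0.8*0.7 + 0.2*0.2 = 0.60
print(p_emis_low_given_policy(0))  # 0.3*0.7 + 0.7*0.2 = 0.35
```

Entering different evidence and comparing the resulting posteriors is the basic mechanism behind the scenario-based, deliberative use of such networks.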

3. Student Modeling within a Computer Tutor for Mathematics: Using Bayesian Networks and Tabling Methods

Wang, Yutao. 15 September 2015.
"Intelligent tutoring systems rely on student modeling to understand student behavior. The result of student modeling can provide assessment for student knowledge, estimation of student¡¯s current affective states (ie boredom, confusion, concentration, frustration, etc), prediction of student performance, and suggestion of the next tutoring steps. There are three focuses of this dissertation. The first focus is on better predicting student performance by adding more information, such as student identity and information about how many assistance students needed. The second focus is to analyze different performance and feature set for modeling student short-term knowledge and longer-term knowledge. The third focus is on improving the affect detectors by adding more features. In this dissertation I make contributions to the field of data mining as well as educational research. I demonstrate novel Bayesian networks for student modeling, and also compared them with each other. This work contributes to educational research by broadening the task of analyzing student knowledge to student knowledge retention, which is a much more important and interesting question for researchers to look at. Additionally, I showed a set of new useful features as well as how to effectively use these features in real models. For instance, in Chapter 5, I showed that the feature of the number of different days a students has worked on a skill is a more predictive feature for knowledge retention. These features themselves are not a contribution to data mining so much as they are to education research more broadly, which can used by other educational researchers or tutoring systems. "

4. Computational Modeling of Cancer Progression

Shahrabi Farahani, Hossein. January 2013.
Cancer is a multi-stage process resulting from the accumulation of genetic mutations. Data obtained from assaying a tumor contain only the set of mutations in the tumor and lack information about their temporal order. Learning the chronological order of the genetic mutations is an important step towards understanding the disease. The probability that a mutation is introduced into a tumor increases if certain mutations that promote it have already occurred. Such dependencies induce what we call the monotonicity property in cancer progression. A realistic model of cancer progression should take this property into account. In this thesis, we present two models for cancer progression and algorithms for learning them. In the first model, we propose Progression Networks (PNs), which are a special class of Bayesian networks. In learning PNs the issue of monotonicity is taken into consideration. The problem of learning PNs is reduced to Mixed Integer Linear Programming (MILP), an NP-hard problem for which very good heuristics exist. We also developed a program, DiProg, for learning PNs. In the second model, the problem of noise in biological experiments is addressed by introducing hidden variables. We call this model the Hidden variable Oncogenetic Network (HON). In a HON, two variables are assigned to each node: a hidden variable that represents the progression of cancer to the node, and an observable random variable that represents the observation of the mutation corresponding to the node. We devised a structural Expectation Maximization (EM) algorithm for learning HONs. In the M-step of the structural EM algorithm, we need to perform a considerable number of inference tasks. Because exact inference is tractable only on Bayesian networks with bounded treewidth, we also developed an algorithm for learning bounded-treewidth Bayesian networks by reducing the problem to a MILP. Our algorithms performed well on synthetic data. We also tested them on cytogenetic data from renal cell carcinoma, and the progression networks learned by both algorithms are in agreement with previously published results. MicroRNAs are short non-coding RNAs that are involved in post-transcriptional regulation. A-to-I editing of microRNAs converts adenosine to inosine in double-stranded RNA. We developed a method for determining editing levels in mature microRNAs from high-throughput RNA sequencing data from the mouse brain. Here, for the first time, we showed that the level of editing increases with development.
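The monotonicity property described above can be stated operationally: the probability that a mutation occurs should not decrease as more of its promoting (parent) mutations are present. A minimal check of this property over a node's conditional probability table might look like the sketch below; the gene names and probabilities are made up for illustration and are not DiProg output.

```python
from itertools import combinations

def is_monotone(cpt, parents):
    """cpt maps a frozenset of present parent mutations -> P(mutation occurs)."""
    for r in range(len(parents) + 1):
        for subset in combinations(parents, r):
            for extra in set(parents) - set(subset):
                smaller = frozenset(subset)
                larger = smaller | {extra}
                if cpt[larger] < cpt[smaller]:
                    return False  # adding a promoting mutation lowered the probability
    return True

parents = ["KRAS", "TP53"]
cpt = {
    frozenset(): 0.05,
    frozenset({"KRAS"}): 0.20,
    frozenset({"TP53"}): 0.15,
    frozenset({"KRAS", "TP53"}): 0.45,
}
print(is_monotone(cpt, parents))  # True
```

In the thesis's MILP reduction, constraints of this shape are imposed during learning rather than checked after the fact.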

5. Conditioning graphs: practical structures for inference in Bayesian networks

Grant, Kevin John. 16 January 2007.
Probability is a useful tool for reasoning when faced with uncertainty. Bayesian networks offer a compact representation of a probabilistic problem, exploiting independence amongst variables that allows a factorization of the joint probability into much smaller local probability distributions.

The standard approach to probabilistic inference in Bayesian networks is to compile the graph into a join tree and perform computation over this secondary structure. While join trees are among the most time-efficient methods of inference in Bayesian networks, they are not always appropriate for certain applications. The memory requirements of a join tree can be prohibitively large, and the algorithms for computing over join trees are large and involved, making them difficult to port to other systems or to be understood by general programmers without Bayesian network expertise.

This thesis proposes a different method for probabilistic inference in Bayesian networks. We present a data structure called a conditioning graph, which is a run-time representation of Bayesian network inference. The structure mitigates many of the problems of join tree inference. For example, conditioning graphs require much less space to store and compute over. The algorithm for calculating probabilities from a conditioning graph is small and basic, making it portable to virtually any architecture. And the details of Bayesian network inference are compiled away during the construction of the conditioning graph, leaving an intuitive structure that is easy to understand and implement without any Bayesian network expertise.

In addition to the conditioning graph architecture, we present several improvements to the model that maintain its small and simple style while reducing the runtime required for computing over it. We present two heuristics for choosing variable orderings that result in shallower elimination trees, reducing the overall complexity of computing over conditioning graphs. We also demonstrate several compile-time and runtime extensions to the algorithm that can produce substantial speedups while adding only a small space constant to the implementation. We also show how to cache intermediate values in conditioning graphs during probabilistic computation, which allows conditioning graphs to perform at the same speed as standard methods by avoiding duplicate computation, at the price of more memory. The methods presented conform to the basic style of the original algorithm, and we demonstrate a novel technique for reducing the amount of memory required for caching.

We demonstrate empirically the compactness, portability, and ease of use of conditioning graphs. We also show that these optimizations allow competitive behaviour with standard methods in many circumstances, while still preserving the small and simple style. Finally, we show that the memory required under caching can be quite modest, meaning that conditioning graphs can be competitive with standard methods in terms of time while using a fraction of the memory.
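The core recursion that a conditioning graph compiles is small enough to sketch: condition on one variable at a time in a fixed order, sum out its values, and multiply the network's local factors at the leaves. The toy chain network below is an assumption for illustration; the thesis's structure additionally compiles this recursion into a graph and, in the cached variant, memoizes repeated subtree computations.

```python
# Chain network A -> B -> C with binary variables; factors in topological order.
# Structure and numbers are illustrative assumptions.
factors = [
    ("A", (),     lambda a: 0.6 if a else 0.4),
    ("B", ("A",), lambda b, a: (0.7 if b else 0.3) if a else (0.2 if b else 0.8)),
    ("C", ("B",), lambda c, b: (0.9 if c else 0.1) if b else (0.5 if c else 0.5)),
]
order = ["A", "B", "C"]
evidence = {"C": 1}

def probability(assignment=None, depth=0):
    """P(evidence) by recursive conditioning over the elimination order."""
    assignment = assignment or {}
    if depth == len(order):
        # Leaf: all variables instantiated; multiply the local factors.
        p = 1.0
        for var, parents, f in factors:
            p *= f(assignment[var], *(assignment[q] for q in parents))
        return p
    var = order[depth]
    values = [evidence[var]] if var in evidence else [0, 1]
    return sum(probability({**assignment, var: v}, depth + 1) for v in values)

print(probability())  # P(C = 1) = 0.7
```

The recursion needs only the network's factors and an ordering, which is what makes the compiled form so small and portable compared with join tree machinery.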

6. A probabilistic exemplar-based model

Rodriguez Martinez, Andres Florencio. January 1998.
A central problem in case-based reasoning (CBR) is how to store and retrieve cases. One approach to this problem is to use exemplar-based models, where only the prototypical cases are stored. However, the development of an exemplar-based model (EBM) requires the solution of several problems: (i) how can an EBM be represented? (ii) given a new case, how can a suitable exemplar be retrieved? (iii) what makes a good exemplar? (iv) how can an EBM be learned incrementally? This thesis develops a new model, called a probabilistic exemplar-based model, that addresses these research questions. The model utilizes Bayesian networks for its representation and uses probability theory to develop its foundations. A probability propagation method is used to retrieve exemplars when a new case is presented and to assess the prototypicality of an exemplar. The model learns incrementally by revising the exemplars retained and by updating the conditional probabilities required by the Bayesian network. The problem of ignorance, encountered when only a few cases have been observed, is tackled by introducing the concept of a virtual exemplar to represent all the unseen cases. The model is implemented in C and evaluated on three datasets. It is also contrasted with related work in CBR and machine learning (ML).
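One simple reading of exemplar retrieval by probabilistic scoring is to rank stored exemplars by their posterior given the new case's features. The sketch below does this under a naive independence assumption; the exemplars, features, and probabilities are hypothetical, and the thesis's propagation over a full Bayesian network is more general than this simplification.

```python
import math

# Hypothetical exemplar base: exemplar -> (prior, P(feature_i = 1 | exemplar)).
exemplars = {
    "fault_A": (0.5, [0.9, 0.2, 0.7]),
    "fault_B": (0.3, [0.1, 0.8, 0.6]),
    "fault_C": (0.2, [0.4, 0.4, 0.1]),
}

def retrieve(case):
    """Rank exemplars by log P(exemplar) + sum of log P(feature | exemplar)."""
    scores = {}
    for name, (prior, likelihoods) in exemplars.items():
        score = math.log(prior)
        for observed, p in zip(case, likelihoods):
            score += math.log(p if observed else 1 - p)
        scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)

print(retrieve([1, 0, 1]))  # best-matching exemplar first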

7. Approximate inference of Bayesian networks through edge deletion

Thornton, Julie Ann. January 1900.
Master of Science / Department of Computing and Information Sciences / William Hsu

Bayesian networks are graphical models whose nodes represent random variables and whose edges represent conditional dependence between variables. Each node in a Bayesian network is equipped with a conditional probability function that expresses the likelihood that the node will take on different values given the values of its parents. A common task for a Bayesian network is to perform inference by computing the marginal probabilities of each possible value for each node. In this thesis, I introduce three new algorithms for approximate inference of Bayesian networks that use edge deletion techniques. The first reduces a network to its maximal weight spanning tree using the Kullback-Leibler information divergence as edge weights, and then runs Pearl's algorithm on the resulting tree. Because Pearl's algorithm can perform inference on a tree in linear time, as opposed to the exponential running time of all general exact inference algorithms, this reduction results in a tremendous speedup in inference. The second algorithm applies triangulation pre-processing rules that are guaranteed to be optimal if the original graph has a treewidth of four or less, and then deletes edges from the network and continues applying rules so that the resulting triangulated graph will have a maximum clique size of no more than five. The junction tree exact inference algorithm can then be run on the reduced triangulated graph. While the junction tree algorithm has an exponential worst-case running time in the size of the maximum clique in the triangulated graph, placing a bound on the clique size effectively places a polynomial-time bound on the inference procedure. The third algorithm deletes edges from a triangulation of the original network until the maximum clique size in the triangulated graph is below a desired bound. Again, the junction tree algorithm can then be run on the resulting triangulated graph, and the bound on the maximum clique size will also polynomially bound the inference time. When tested for efficiency and accuracy on common Bayesian networks, these three algorithms perform up to 10,000 times faster than current exact and approximate techniques while achieving error values close to those of sampling techniques.
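The first algorithm's reduction step, keeping only a maximal weight spanning tree, can be sketched with Kruskal's algorithm greedily taking the heaviest edges that do not create a cycle. The edge weights below stand in for precomputed Kullback-Leibler divergences between adjacent variables and are illustrative assumptions.

```python
def max_spanning_tree(nodes, weighted_edges):
    """Kruskal's algorithm on heaviest-first edges keeps a maximal-weight tree."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(weighted_edges, reverse=True):  # heaviest first
        ru, rv = find(u), find(v)
        if ru != rv:            # adding this edge keeps the graph acyclic
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Weights stand in for KL-divergence values between variable pairs (made up).
edges = [(0.90, "A", "B"), (0.40, "B", "C"), (0.75, "A", "C"),
         (0.10, "C", "D"), (0.55, "B", "D")]
print(max_spanning_tree(["A", "B", "C", "D"], edges))
# [('A', 'B', 0.9), ('A', 'C', 0.75), ('B', 'D', 0.55)]
```

Once the network is reduced to this tree skeleton, Pearl's message-passing algorithm runs in time linear in the number of nodes.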

8. The development of object-oriented Bayesian networks to evaluate the social, economic and environmental impacts of solar PV

Leicester, Philip A. January 2016.
Domestic and community low carbon technologies are widely heralded as valuable means for delivering sustainability outcomes in the form of social, economic and environmental (SEE) policy objectives. To accelerate their diffusion they have benefited from a significant number and variety of subsidies worldwide. Considerable aleatory and epistemic uncertainties exist, however, both with regard to their net energy contribution and their SEE impacts. Furthermore, the socio-economic contexts themselves exhibit enormous variability, and commensurate uncertainties in their parameterisation. This represents a significant risk for policy makers and technology adopters. This work describes an approach to these problems using Bayesian network models, which are utilised to integrate extant knowledge from a variety of disciplines to quantify SEE impacts and endogenise uncertainties. A large-scale object-oriented Bayesian network has been developed to model the specific case of solar photovoltaics (PV) installed on UK domestic roofs. Three model components have been developed: a PV component that characterises the yield of UK systems, a building energy component that characterises the energy consumption of the dwellings and their occupants, and a third component that characterises the building stock in four English urban communities. Three representative SEE indicators (fuel affordability, carbon emission reduction, and discounted cash flow) are integrated and used to test the model's ability to yield meaningful outputs in response to varying inputs. The variability in the three indicators is highly responsive to the dwellings' built form, age and orientation, due not just to building and solar physics but also to socio-economic factors. The model can accept observations or evidence in order to create scenarios which facilitate deliberative decision-making. The Bayesian network methodology contributes to the synthesis of new knowledge from extant knowledge located "between disciplines". As well as insights into the impacts of high PV penetration, an epistemic contribution has been made to transdisciplinary building energy modelling which can be replicated with a variety of low carbon interventions.

9. Learning Optimal Bayesian Networks with Heuristic Search

Malone, Brandon M. 11 August 2012.
Bayesian networks are a widely used graphical model which formalizes reasoning under uncertainty. Unfortunately, construction of a Bayesian network by an expert is time-consuming, and, in some cases, experts may not agree on the best structure for a problem domain. Additionally, for some complex systems, such as those present in molecular biology, experts with an understanding of the entire domain and how individual components interact may not exist. In these cases, we must learn the network structure from available data. This dissertation focuses on score-based structure learning. In this context, a scoring function is used to measure the goodness of fit of a structure to data, and the goal is to find the structure which optimizes the scoring function. The first contribution of this dissertation is a shortest-path finding perspective on the problem of learning optimal Bayesian network structures. This perspective builds on earlier dynamic programming strategies but, as we show, offers much more flexibility. Second, we develop a set of data structures to improve the efficiency of many of the integral calculations for structure learning. Most of these data structures benefit not only our algorithms but also dynamic programming and other formulations of the structure learning problem. Next, we introduce a suite of algorithms that leverage the new data structures and the shortest-path finding perspective for structure learning. These algorithms take advantage of a number of new heuristic functions to ignore provably sub-optimal parts of the search space. They also exploit regularities in the search that previous approaches could not. All of the algorithms we present have their own advantages: some minimize work in a provable sense; others use external memory, such as hard disk, to scale to datasets with more variables. Several of the algorithms quickly find solutions and improve them as long as they are given more resources. Our algorithms improve the state of the art in structure learning by running faster, using less memory and incorporating other desirable characteristics, such as anytime behavior. We also pose unanswered questions to drive research into the future.
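The shortest-path perspective generalizes a well-known dynamic program over variable subsets: an optimal network over a set of variables decomposes into a best "last" variable plus an optimal network over the remainder. The sketch below maximizes a toy local score over all subsets (negating scores and minimizing recovers the shortest-path form); the score values are made up for illustration and stand in for, e.g., BIC parent-set scores computed from data.

```python
from itertools import combinations

variables = ("A", "B", "C")

def best_parent_score(v, candidates):
    """Best local score for v choosing parents from 'candidates' (toy values)."""
    toy = {("B", frozenset({"A"})): 1.0,
           ("C", frozenset({"A", "B"})): 2.5,
           ("C", frozenset({"B"})): 2.0}
    best = 0.0  # score of the empty parent set
    for r in range(1, len(candidates) + 1):
        for ps in combinations(sorted(candidates), r):
            best = max(best, toy.get((v, frozenset(ps)), 0.0))
    return best

# best[S] = score of an optimal network over the variable subset S.
best = {frozenset(): 0.0}
for size in range(1, len(variables) + 1):
    for subset in map(frozenset, combinations(variables, size)):
        best[subset] = max(best[subset - {v}] + best_parent_score(v, subset - {v})
                           for v in subset)

print(best[frozenset(variables)])  # 3.5: optimal score over all variables
```

The dissertation's algorithms search this same subset lattice ("order graph") with admissible heuristics, so provably sub-optimal regions are pruned instead of enumerated exhaustively.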
