About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Dynamic Factored Particle Filtering for Context-Specific Correlations

Mostinski, Dimitri 03 May 2007 (has links)
To control any system, one needs to know the system's current state. In many real-world scenarios the state cannot be determined with certainty because sensors are noisy or simply missing. In such cases one must use probabilistic inference to compute the likely states of the system, and because such cases are common, the field of Artificial Intelligence offers many techniques to choose from. Formally, we must compute a probability distribution function (PDF) over all possible states. Doing this exactly is difficult because the number of states is exponential in the number of variables in the system, and because the joint PDF may not have a closed form. Many approximation techniques have been developed over the years, but none was ideally suited to the problem we faced. Particle filtering is a popular scheme that approximates the joint PDF over the system's variables by a set of weighted samples. It works even when the joint PDF has no closed form, and the sample size can be adjusted to trade accuracy for computation time. However, with many variables the sample size required for a good approximation can still become prohibitively large. Factored particle filtering uses the structure of variable dependencies to split the problem into many smaller subproblems, and it scales better when such a decomposition is possible. Our problem was unusual, however, because some normally independent variables would become strongly correlated for short periods of time. This dynamically changing dependency structure was not handled effectively by existing techniques: treating the variables as always correlated meant the problem did not scale, while treating them as always independent introduced errors too large to tolerate. We needed an approach that exploits the variables' independence whenever possible but does not introduce large errors when the variables become correlated.
We have developed a new technique for monitoring the state of the system for a class of systems with context-specific correlations. It is based on the idea of caching the context in which correlations arise and otherwise keeping the variables independent. Our evaluation shows that our technique outperforms existing techniques and is the first viable solution for the class of problems we consider.
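The weighted-sample scheme described in this abstract can be illustrated with a minimal bootstrap particle filter. The one-dimensional random-walk state, the Gaussian noise levels, and the observation sequence below are illustrative assumptions, not the systems or the factored technique studied in the thesis:

```python
import math
import random

def particle_filter_step(particles, observation, trans_std=1.0, obs_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through the (assumed) transition model.
    moved = [p + random.gauss(0.0, trans_std) for p in particles]
    # Weight: likelihood of the observation under a Gaussian sensor model.
    weights = [math.exp(-(observation - p) ** 2 / (2 * obs_std ** 2)) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new equally weighted set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.gauss(0.0, 5.0) for _ in range(500)]
for z in [1.0, 1.5, 2.0, 2.5]:   # noisy readings of a slowly drifting state
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
```

The sample size (here 500) is the knob the abstract mentions: larger sets approximate the posterior better at higher computational cost.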

Knowledge and Reasoning for Image Understanding

January 2018 (has links)
Image Understanding is a long-established discipline in computer vision which encompasses a body of advanced image processing techniques used to locate (“where”) and to characterize and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty; thus, the expected functionality of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning. Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods for several applications and show that these approaches benefit, in both accuracy and interpretability, from the explicit use of knowledge and reasoning.
We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018

Autonomy through real-time learning and OpenNARS for Applications

Hammer, Patrick, 0000-0002-1891-9096 January 2021 (has links)
This work is an attempt to enhance the autonomy of intelligent agents via real-time learning. In nature, the ability to learn at runtime gives the species that have it key advantages over others. While most AI systems do not need this ability and can instead be trained before deployment, it allows agents to adapt at runtime to changing and generally unknown circumstances, and then to exploit their environment for their own purposes. To reach this goal, this thesis explores a pragmatic design (ONA) for a general-purpose reasoner incorporating Non-Axiomatic Reasoning System (NARS) theory. The design and implementation are presented in detail, together with the theoretical foundation. Experiments on various system capabilities are then carried out and summarized, together with application projects in which ONA is utilized: a traffic surveillance application in the Smart City domain that identifies traffic anomalies through real-time reasoning and learning, and a system that helps first responders by providing driving assistance and presenting mission-critical information. It is also shown how reliable real-time learning can increase the autonomy of intelligent agents beyond the current state of the art. Theoretical and practical comparisons are made with established frameworks and with specific techniques such as Q-learning, and it is shown that ONA also works in non-Markovian environments where Q-learning cannot be applied. Some of the reasoner's capabilities are also demonstrated on real robotic hardware. The experiments there show that combining learning at runtime with only partly complete, mission-related background knowledge given by the designer allows the agent to perform a complex task from a minimal mission specification that does not include learnable details.
Overall, ONA is suitable for autonomous agents because it combines, in a single technique, the strengths of behavior learning, usually captured by Reinforcement Learning, with means-end reasoning (such as Belief-Desire-Intention models with a planner) to effectively utilize knowledge expressed by a designer. / Computer and Information Science
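For readers unfamiliar with the Q-learning baseline the abstract compares against, here is a minimal tabular sketch. The toy chain environment and the hyperparameters are illustrative assumptions; this is the comparison baseline, not ONA itself:

```python
import random

def q_learning(n_states, n_actions, step, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a small MDP given by step(s, a) -> (s2, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # One-step temporal-difference update toward the bootstrapped target.
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy chain: action 1 moves right toward a terminal reward at state 3,
# action 0 moves left; the agent must learn to always go right.
def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

random.seed(0)
Q = q_learning(4, 2, step)
policy = [max(range(2), key=lambda x: Q[s][x]) for s in range(4)]
```

Note the Markov assumption baked into the update: the target depends only on the current state, which is exactly what fails in the non-Markovian settings the thesis discusses.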

Increasing Scalability in Algorithms for Centralized and Decentralized Partially Observable Markov Decision Processes: Efficient Decision-Making and Coordination in Uncertain Environments

Amato, Christopher 01 September 2010 (has links)
As agents are built for ever more complex environments, methods that consider the uncertainty in the system have strong advantages. This uncertainty is common in domains such as robot navigation, medical diagnosis and treatment, inventory management, sensor networks and e-commerce. When a single decision maker is present, the partially observable Markov decision process (POMDP) model is a popular and powerful choice. When choices are made in a decentralized manner by a set of decision makers, the problem can be modeled as a decentralized partially observable Markov decision process (DEC-POMDP). While POMDPs and DEC-POMDPs offer rich frameworks for sequential decision making under uncertainty, the computational complexity of each model presents an important research challenge. As a way to address this high complexity, this thesis develops several solution methods based on utilizing domain structure, memory-bounded representations and sampling. These approaches address some of the major bottlenecks for decision-making in real-world uncertain systems. The methods include a more efficient optimal algorithm for DEC-POMDPs as well as scalable approximate algorithms for POMDPs and DEC-POMDPs. Key contributions include optimizing compact representations as well as automatic structure extraction and exploitation. These approaches increase the scalability of algorithms, while also increasing their solution quality.
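The core difficulty in the POMDP model mentioned above, maintaining a probability distribution over hidden states, can be sketched with an exact discrete belief update. The two-state "door" example and its 80%-accurate sensor are illustrative assumptions, not content from the thesis:

```python
def belief_update(belief, action, observation, T, Z):
    """Exact Bayesian belief update for a discrete POMDP.

    T[a][s][s2] is the transition probability P(s2 | s, a);
    Z[a][s2][o] is the observation probability P(o | s2, a).
    """
    n = len(belief)
    # Prediction: push the current belief through the transition model.
    predicted = [sum(belief[s] * T[action][s][s2] for s in range(n))
                 for s2 in range(n)]
    # Correction: weight by the observation likelihood and renormalize.
    unnorm = [predicted[s2] * Z[action][s2][observation] for s2 in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two-state toy: a door is open (0) or closed (1); sensing is 80% accurate.
T = [[[1.0, 0.0], [0.0, 1.0]]]     # action 0 ("look") leaves the state unchanged
Z = [[[0.8, 0.2], [0.2, 0.8]]]     # P(observation | resulting state)
b = [0.5, 0.5]                     # start fully uncertain
b = belief_update(b, 0, 0, T, Z)   # observe "open"
```

The exponential object here is the belief itself: with n state variables the belief vector has 2^n entries, which is one source of the complexity the thesis attacks with compact representations and sampling.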

Policy Explanation and Model Refinement in Decision-Theoretic Planning

Khan, Omar Zia January 2013 (has links)
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that explain optimal policies for MDPs and then refine decision-theoretic models (Bayesian networks and MDPs) based on feedback from experts. Explaining policies for sequential decision-making problems is difficult because of stochastic effects, multiple possibly competing objectives, and the long-range effects of actions. However, explanations are needed to help experts validate that a policy is correct and to help users develop trust in the choices it recommends. A set of domain-independent templates for justifying a policy recommendation is presented, along with a process to identify the minimum number of templates that must be populated to completely justify the policy. The rejection of an explanation by a domain expert indicates a deficiency in the model that led to the generation of the rejected policy. This thesis presents techniques to refine the model parameters so that the optimal policy calculated from the refined parameters conforms to the expert feedback. The feedback is translated into constraints on the model parameters that are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys the expert constraints. For MDPs, the parameter space is partitioned so that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with the expert feedback. In practice, the state space of an MDP can be very large, which is an issue for real-world problems.
Factored MDPs are often used to deal with this issue: state variables represent the state space, and dynamic Bayesian networks model the transition functions, avoiding the exponential growth in the state space associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended to the factored case to demonstrate their use in real-world applications. Empirical evaluations are presented in three domains: course advising for undergraduate students, assisted hand-washing for people with dementia, and diagnostics for manufacturing.
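As background for the policy-explanation setting above, here is a minimal sketch of how an optimal MDP policy is computed by value iteration. The two-state MDP is an illustrative assumption, and this flat (non-factored) solver is textbook background, not the refinement technique of the thesis:

```python
def value_iteration(n_states, n_actions, T, R, gamma=0.9, tol=1e-6):
    """Compute an optimal policy for a discrete MDP by value iteration.

    T[s][a] is a list of (next_state, probability) pairs; R[s][a] is the reward.
    """
    V = [0.0] * n_states
    while True:
        # Bellman optimality backup for every state.
        V_new = [max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                     for a in range(n_actions))
                 for s in range(n_states)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            break
        V = V_new
    # Greedy policy with respect to the converged value function.
    policy = [max(range(n_actions),
                  key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]))
              for s in range(n_states)]
    return V, policy

# Two states; action 1 moves deterministically to the rewarding state 1,
# action 0 moves to state 0; the optimal policy always picks action 1.
T = [[[(0, 1.0)], [(1, 1.0)]],
     [[(0, 1.0)], [(1, 1.0)]]]
R = [[0.0, 0.0], [0.0, 1.0]]
V, policy = value_iteration(2, 2, T, R)
```

An explanation template of the kind the thesis describes would justify, for each state, why the greedy action's backed-up value dominates the alternatives.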

Automated Hierarchy Discovery for Planning in Partially Observable Domains

Charlin, Laurent January 2006 (has links)
Planning in partially observable domains is a notoriously difficult problem. In many real-world scenarios, however, planning can be simplified by decomposing the task into a hierarchy of smaller planning problems, which can then be solved independently of one another. Several approaches, mainly dealing with fully observable domains, have been proposed to optimize a plan that decomposes according to a hierarchy specified a priori. Some researchers have also proposed to discover hierarchies in fully observable domains. In this thesis, we investigate the problem of automatically discovering planning hierarchies in partially observable domains. The main advantage of discovering hierarchies is that, for a plan of a fixed size, hierarchical plans can be more expressive than non-hierarchical ones. Our solution frames the discovery and optimization of a hierarchical policy as a non-convex optimization problem. By encoding the hierarchical structure as variables of the optimization problem, we can discover a hierarchy automatically. Successfully solving the optimization problem therefore yields both an optimal hierarchy and an optimal policy. We describe several techniques to solve the optimization problem: general non-linear solvers, mixed-integer linear and non-linear solvers, and a form of bounded hierarchical policy iteration. Our method is flexible enough to allow any part of the hierarchy to be specified from prior knowledge while letting the optimization discover the unknown parts. It can also discover hierarchical policies, including recursive policies, that are more compact (potentially with infinitely fewer parameters).

Inferring Genetic Regulatory Networks Using Cost-based Abduction and Its Relation to Bayesian Inference

Andrews, Emad Abdel-Thalooth 16 July 2014 (has links)
Inferring Genetic Regulatory Networks (GRN) from multiple data sources is a fundamental problem in computational biology. Computational models for GRN range from simple Boolean networks to stochastic differential equations. To successfully model GRN, a computational method has to be scalable and capable of integrating different biological data sources effectively and homogeneously. In this thesis, we introduce a novel method to model GRN using Cost-Based Abduction (CBA) and study the relation between CBA and Bayesian inference. CBA is an important AI formalism for reasoning under uncertainty that can integrate different biological data sources effectively. We use three different yeast genome data sources, protein-DNA, protein-protein, and knock-out data, to build a skeleton (unannotated) graph, which acts as a theory from which to build a CBA system. The Least Cost Proof (LCP) for the CBA system fully annotates the skeleton graph to represent the learned GRN. Our results show that CBA is a promising tool in computational biology in general and in GRN modeling in particular, because CBA knowledge representation can intrinsically implement the AND/OR logic in GRN while enforcing cis-regulatory logic constraints effectively, allowing the method to operate on a genome-wide scale. Besides allowing us to successfully learn yeast pathways such as the pheromone pathway, our method is scalable enough to analyze the full yeast genome in a single CBA instance, without sub-networking. The scalability of our method comes from the fact that our CBA model size grows in a quadratic, rather than exponential, manner with respect to data size and path length. We also introduce a new algorithm to convert CBA into an equivalent binary linear program that computes the exact LCP for the CBA system, thus reaching the optimal solution.
Our work establishes a framework to solve Bayesian networks using integer linear programming and high order recurrent neural networks through CBA as an intermediate representation.
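The least-cost-proof search at the heart of CBA can be sketched by brute-force enumeration on a toy instance. The hypotheses, costs, and rules below are illustrative assumptions, and this exhaustive search merely stands in for the binary-linear-programming formulation actually developed in the thesis:

```python
from itertools import combinations

def least_cost_proof(hypotheses, rules, goal):
    """Brute-force least-cost proof for a tiny cost-based abduction instance.

    `hypotheses` maps assumable atoms to their assumption costs; `rules` is a
    list of (body, head) Horn clauses. Returns (cost, assumed_set) or None.
    """
    names = list(hypotheses)
    best = None
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            # Forward-chain the rules from the assumed atoms.
            derived = set(subset)
            changed = True
            while changed:
                changed = False
                for body, head in rules:
                    if head not in derived and all(b in derived for b in body):
                        derived.add(head)
                        changed = True
            if goal in derived:
                cost = sum(hypotheses[h] for h in subset)
                if best is None or cost < best[0]:
                    best = (cost, set(subset))
    return best

# Toy instance: the goal g can be proved by assuming {a, b} (total cost 5)
# or by assuming {c} alone (cost 4); the LCP picks the cheaper explanation.
hypotheses = {"a": 2, "b": 3, "c": 4}
rules = [(["a", "b"], "g"), (["c"], "g")]
cost, assumed = least_cost_proof(hypotheses, rules, "g")
```

Enumeration is exponential in the number of hypotheses, which is precisely why an exact linear-programming encoding of the LCP, as proposed in the thesis, matters at genome scale.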
