321 |
Dynamic architecture for multimodal applications to reinforce robot-environment interaction / Architectures et modèles dynamiques dédiés aux applications multimodales pour renforcer l'interaction robot-environnement
Adjali, Omar 14 December 2017 (has links)
Knowledge Representation and Reasoning is at the heart of the great challenge of Artificial Intelligence. More specifically, in the context of robotic applications, knowledge representation and reasoning approaches are necessary to solve the decision problems that autonomous robots face when evolving in uncertain, dynamic, and complex environments, or to ensure natural interaction in human environments. In a robotic interaction system, information has to be represented and processed at various levels of abstraction: from sensors up to actions and plans. Thus, knowledge representation provides the means to describe the environment at different abstraction levels, which allows appropriate decisions to be made. In this thesis we propose a methodology to solve the problem of multimodal interaction by describing a semantic interaction architecture based on a framework that demonstrates an approach for representing and reasoning with an environment knowledge representation language (EKRL), to enhance interaction between robots and their environment. This framework is used to manage the interaction process by representing the knowledge involved in the interaction with EKRL and reasoning on it to make inferences.
The interaction process includes fusion of values from different sensors to interpret and understand what is happening in the environment, and fission, which suggests a detailed set of actions to be implemented. Before such actions are carried out by actuators, they are first evaluated in a virtual environment that mimics the real-world environment, to assess the feasibility of implementing the action in the real world. During these processes, reasoning abilities are necessary to guarantee the global execution of a given interaction scenario. Thus, we provided the EKRL framework with reasoning techniques to draw deterministic inferences, thanks to unification algorithms, and probabilistic inferences to manage uncertain knowledge, by combining statistical relational models using the Markov Logic Networks (MLN) framework with EKRL. The proposed work is validated through scenarios that demonstrate the usability and performance of our framework in real-world applications.
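The deterministic side of this pipeline rests on unification. As a rough illustration of the kind of inference step involved, here is a minimal first-order term unification sketch in Python; the tuple-based term encoding and the robot-flavored predicate names are illustrative assumptions, not EKRL's actual syntax.

```python
def is_var(t):
    """Variables are strings starting with '?'."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure.
    Compound terms are tuples like ('grasp', '?robot', 'cup').
    The occurs check is omitted for brevity."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For instance, unifying `('grasp', '?r', 'cup')` with `('grasp', 'pr2', '?o')` binds `?r` to `pr2` and `?o` to `cup`, the sort of matching a rule-based inference engine performs when applying a rule to sensed facts.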
|
322 |
Action, Time and Space in Description Logics
Milicic, Maja 19 June 2008 (has links)
Description Logics (DLs) are a family of logic-based knowledge representation (KR) formalisms designed to represent and reason about static conceptual knowledge in a semantically well-understood way. Standard action formalisms, on the other hand, are KR formalisms based on classical logic designed to model and reason about dynamic systems. The largest part of the present work is dedicated to integrating DLs with action formalisms, with the main goal of obtaining decidable action formalisms whose expressiveness goes significantly beyond propositional. To this end, we offer DL-tailored solutions to the frame and ramification problems. One of the main technical results is that the standard reasoning problems about actions (executability and projection), as well as the plan existence problem, are decidable if one restricts the logic for describing action pre- and post-conditions and the state of the world to decidable Description Logics. A smaller part of the work is concerned with decidable extensions of Description Logics with concrete datatypes, most importantly those that allow referring to the notions of space and time.
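As a rough illustration of the executability and projection problems mentioned above, here is a toy sketch in which states are plain sets of ground facts and actions have precondition/add/delete sets. This is a drastic simplification of the DL-based action formalisms studied in the thesis, and the action and fact names are invented for the example.

```python
def execute(state, plan, actions):
    """Apply `plan` (a list of action names) to `state`.
    Returns (executable, final_state): `executable` is False if some
    action's precondition fails along the way (the executability problem);
    querying a fact in `final_state` answers a projection query."""
    state = set(state)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:  # precondition not satisfied
            return False, state
        state = (state - delete) | add
    return True, state

# Each action: (preconditions, add effects, delete effects).
actions = {
    "pickup": ({"on_table"}, {"holding"}, {"on_table"}),
    "putdown": ({"holding"}, {"on_table"}, {"holding"}),
}

ok, final = execute({"on_table"}, ["pickup", "putdown"], actions)
# Projection query: is "on_table" true after executing the plan?
```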
|
323 |
STRUCTURED PREDICTION: STATISTICAL AND COMPUTATIONAL GUARANTEES IN LEARNING AND INFERENCE
Kevin Segundo Bello Medina (11196552) 28 July 2021 (has links)
<div>Structured prediction consists of receiving a structured input and producing a combinatorial structure such as trees, clusters, networks, sequences, and permutations. From the computational viewpoint, structured prediction is in general considered <i>intractable</i> because the size of the output space is exponential in the input size. For instance, in image segmentation tasks, the number of admissible segments is exponential in the number of pixels. A second factor is the input dimensionality combined with the amount of available data. In structured prediction it is common for the input to live in a high-dimensional space, which involves jointly reasoning about thousands or millions of variables while at the same time contending with a limited amount of data. Thus, learning and inference methods with strong computational and statistical guarantees are desired. The focus of our research is then to propose <i>principled methods</i> for structured prediction that are both polynomial time, i.e., <i>computationally efficient</i>, and require a polynomial number of data samples, i.e., <i>statistically efficient</i>.</div><div><br></div><div>The main contributions of this thesis are as follows:</div><div><br></div><div>(i) We develop an efficient and principled learning method of latent variable models for structured prediction under Gaussian perturbations. We derive a Rademacher-based generalization bound and argue that the use of non-convex formulations in learning latent-variable models leads to tighter bounds of the Gibbs decoder distortion.</div><div><br></div><div>(ii) We study the fundamental limits of structured prediction, i.e., we characterize the necessary sample complexity for learning factor graph models in the context of structured prediction. In particular, we show that the finiteness of our novel MaxPair-dimension is necessary for learning. 
Lastly, we show a connection between the MaxPair-dimension and the VC-dimension---which allows for using existing results on VC-dimension to calculate the MaxPair-dimension.</div><div><br></div><div>(iii) We analyze a generative model based on connected graphs, and find the structural conditions of the graph that allow for the exact recovery of the node labels. In particular, we show that exact recovery is realizable in polynomial time for a large class of graphs. Our analysis is based on convex relaxations, where we thoroughly analyze a semidefinite program and a degree-4 sum-of-squares program. Finally, we extend this model to consider linear constraints (e.g., fairness), and formally explain the effect of the added constraints on the probability of exact recovery.</div><div><br></div>
|
324 |
Variational Inference for Data-driven Stochastic Programming
Prateek Jaiswal (11210091) 30 July 2021 (has links)
<div>Stochastic programs are standard models for decision-making under uncertainty and have been extensively studied in the operations research literature. In general, stochastic programming involves minimizing an expected cost function, where the expectation is with respect to fully specified stochastic models that quantify the aleatoric, or `inherent', uncertainty in the decision-making problem. In practice, however, the stochastic models are unknown but can be estimated from data, introducing an additional epistemic uncertainty into the decision-making problem. The Bayesian framework provides a coherent way to quantify the epistemic uncertainty through the posterior distribution by combining the prior beliefs of the decision-makers with the observed data. Bayesian methods have been used for data-driven decision-making in various applications such as inventory management, portfolio design, machine learning, optimal scheduling, and staffing.</div><div> </div><div>Bayesian methods are challenging to implement, mainly because the posterior is computationally intractable, necessitating the computation of approximate posteriors. Broadly speaking, there are two classes of methods in the literature for approximate posterior inference. The first are sampling-based methods such as Markov Chain Monte Carlo. Sampling-based methods are theoretically well understood, but they suffer from issues such as high variance, poor scalability to high-dimensional problems, and complex diagnostics. Consequently, we propose to use optimization-based methods, collectively known as variational inference (VI), that use information projections to compute an approximation to the posterior. Empirical studies have shown that VI methods are computationally faster and easily scalable to higher-dimensional problems and large datasets. However, the theoretical guarantees of these methods are not well understood. 
Moreover, VI methods are empirically and theoretically less explored in the decision-theoretic setting.</div><div><br></div><div> In this thesis, we first propose a novel VI framework for risk-sensitive data-driven decision-making, which we call risk-sensitive variational Bayes (RSVB). In RSVB, we jointly compute a risk-sensitive approximation to the `true' posterior and the optimal decision by solving a minimax optimization problem. The RSVB framework includes the naive approach of first computing a VI approximation to the true posterior and then using it in place of the true posterior for decision-making. We show that the RSVB approximate posterior and the corresponding optimal value and decision rules are asymptotically consistent, and we also compute their rate of convergence. We illustrate our theoretical findings in both parametric and nonparametric settings with the help of three examples: the single- and multi-product newsvendor models and Gaussian process classification. Second, we present the Bayesian joint chance-constrained stochastic program (BJCCP) for modeling decision-making problems with epistemically uncertain constraints. We discover that using VI methods for posterior approximation can ensure the convexity of the feasible set in (BJCCP), unlike sampling-based methods, and thus propose a VI approximation for (BJCCP). We also show that the optimal value computed using the VI approximation of (BJCCP) is statistically consistent. Moreover, we derive the rate of convergence of the optimal value and compute the rate at which a VI approximate solution of (BJCCP) is feasible under the true constraints. We demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queue. Finally, this thesis also contributes to the growing literature on understanding the statistical performance of VI methods. 
In particular, we establish the frequentist consistency of an approximate posterior computed using a well-known VI method that computes an approximation to the posterior distribution by minimizing the Rényi divergence from the `true' posterior.</div>
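As a small illustration of the "naive approach" contrasted with RSVB, the sketch below fits a crude Gaussian predictive distribution to demand data and plugs it into the single-product newsvendor decision, whose optimal order quantity is the critical-fractile quantile of the predictive distribution. The Gaussian approximation, the demand numbers, and the cost values are illustrative assumptions, not the thesis's VI machinery.

```python
from statistics import NormalDist, fmean, stdev

def newsvendor_quantity(demand_pred, underage_cost, overage_cost):
    """Optimal order quantity: the critical-fractile quantile
    cu / (cu + co) of the (approximate) predictive demand distribution."""
    fractile = underage_cost / (underage_cost + overage_cost)
    return demand_pred.inv_cdf(fractile)

# Observed demands; a VI method would produce a proper approximate
# posterior predictive, here crudely replaced by a plug-in Gaussian.
demand_data = [48, 55, 51, 60, 47, 52, 58, 49]
predictive = NormalDist(fmean(demand_data), stdev(demand_data))

# Underage cost exceeds overage cost, so we order above the mean demand.
q = newsvendor_quantity(predictive, underage_cost=4.0, overage_cost=1.0)
```

With these numbers the critical fractile is 0.8, so the order quantity sits a little under one standard deviation above the mean demand of 52.5.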
|
325 |
Toward Knowledge-Centric Natural Language Processing: Acquisition, Representation, Transfer, and Reasoning
Zhen, Wang January 2022 (has links)
No description available.
|
326 |
Description Logics with Symbolic Number Restrictions
Baader, Franz, Sattler, Ulrike 18 May 2022 (has links)
From the introduction:
„Terminological knowledge representation systems (TKR systems) are powerful tools not only to represent but also to reason about knowledge on the terminology of an application domain. Their particular power lies in their ability to infer implicit knowledge from the knowledge explicitly stored in a knowledge base. Mainly, a TKR system consists of three parts. First, a terminological knowledge base, which contains the explicit description of the concepts relevant for the application domain. Second, an assertional knowledge base, which contains the description of concrete individuals and their relations; this description of concrete individuals is realized using the terminology fixed in the terminological knowledge base. Third, a TKR system comprises an inference engine which is able to infer implicit properties of the defined concepts and individuals, such as: subclass/superclass relations amongst concepts (subsumption); the classification of all defined concepts with respect to the subclass/superclass relation, which yields the class taxonomy; whether there exists an interpretation of the terminology where a given concept has at least one instance (satisfiability); the enumeration of all individuals that are instances of a given concept (retrieval); and, given a concrete individual, the enumeration of the most specific concepts of the terminology this individual is an instance of.”
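The three components and the reasoning services listed in the passage can be caricatured in a few lines of Python: a TBox of told subsumptions, an ABox of assertions, and an engine answering subsumption, classification, and retrieval queries. Real TKR systems reason over far richer concept languages; the concept and individual names here are invented for illustration.

```python
# Terminological knowledge base: concept -> direct superconcepts.
tbox = {
    "Dog": {"Mammal"},
    "Cat": {"Mammal"},
    "Mammal": {"Animal"},
}
# Assertional knowledge base: individual -> asserted concepts.
abox = {"rex": {"Dog"}, "felix": {"Cat"}}

def subsumers(concept):
    """All superconcepts of `concept`, via transitive closure of the
    told subsumptions (the basis of classification)."""
    seen, stack = {concept}, [concept]
    while stack:
        for sup in tbox.get(stack.pop(), ()):
            if sup not in seen:
                seen.add(sup)
                stack.append(sup)
    return seen

def subsumes(general, specific):
    """Subsumption query: is `general` a superclass of `specific`?"""
    return general in subsumers(specific)

def retrieve(concept):
    """Retrieval: all individuals that are instances of `concept`."""
    return {i for i, cs in abox.items()
            if any(subsumes(concept, c) for c in cs)}
```

For atomic concepts this reduces subsumption to graph reachability; the point of DL research is precisely that richer constructors (number restrictions, concrete domains, and so on) make these services far harder to compute.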
|
327 |
NExpTime-complete Description Logics with Concrete Domains
Lutz, Carsten 20 May 2022 (has links)
From the introduction:
„Description logics (DLs) are a family of logical formalisms well-suited for the representation of and reasoning about conceptual knowledge on an abstract logical level. However, for many knowledge representation applications, it is essential to integrate the abstract logical knowledge with knowledge of a more concrete nature. As an example, consider the modeling of manufacturing processes, where it is necessary to represent 'abstract' entities like subprocesses and workpieces and also 'concrete' knowledge, e.g., about the duration of processes and physical dimensions of the manufactured objects [2; 25].”
|
328 |
Belief Revision in Expressive Knowledge Representation Formalisms
Falakh, Faiq Miftakhul 10 January 2023 (has links)
We live in an era of data and information, where an immeasurable amount of discoveries, findings, events, news, and transactions are generated every second. Governments, companies, and individuals have to employ and process all that data for knowledge-based decision-making (i.e. a decision-making process that uses predetermined criteria to measure and ensure the optimal outcome for a specific topic), which prompts them to view knowledge as a valuable resource. In this knowledge-based view, the capability to create and utilize knowledge is the key source of an organization's or individual's competitive advantage. This dynamic nature of knowledge leads us to the study of belief revision (or belief change), an area which emerged from work in philosophy and then influenced further developments in computer science and artificial intelligence.
In the area of belief revision, the AGM postulates by Alchourrón, Gärdenfors, and Makinson continue to represent a cornerstone of research on belief change. Katsuno and Mendelzon (K&M) adopted the AGM postulates for changing belief bases and characterized AGM belief base revision in propositional logic over finite signatures. In this thesis, two research directions are considered. In the first, taking the semantic point of view, we generalize K&M's approach to the setting of (multiple) base revision in arbitrary Tarskian logics, covering all logics with a classical model-theoretic semantics and hence a wide variety of logics used in knowledge representation and beyond. Our generic formulation applies to various notions of “base”, such as belief sets, arbitrary or finite sets of sentences, or single sentences.
The core result is a representation theorem showing a two-way correspondence between AGM base revision operators and certain “assignments”: functions mapping belief bases to total — yet not transitive — “preference” relations between interpretations. Alongside, we present a companion result for the case when the AGM postulate of syntax-independence is abandoned. We also provide a characterization of all logics for which our result can be strengthened to assignments producing transitive preference relations (as in K&M's original work), giving rise to two more representation theorems for such logics, according to syntax dependence vs. independence. The second research direction in this thesis explores two approaches for revising description logic knowledge bases under fixed-domain semantics, namely a model-based approach and an individual-based approach. In this logical setting, the models of a knowledge base can be enumerated and computed to produce the revision result semantically. We show a characterization of the AGM revision operator for this logic and present a concrete model-based revision approach via distance between interpretations. In addition, by weakening the knowledge base based on certain domain elements, a novel individual-based revision operator is provided as an alternative approach.
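In the spirit of the "concrete model-based revision approach via distance between interpretations" mentioned above, the following sketch implements Dalal-style revision over propositional interpretations: the revised models are the models of the new information at minimal Hamming distance from some model of the old base. The two-atom propositional encoding is an illustrative assumption, not the thesis's fixed-domain description logic setting.

```python
from itertools import product

ATOMS = ("p", "q")

def models(formula):
    """All interpretations (dicts atom -> bool) satisfying `formula`,
    given as a Python predicate over an interpretation."""
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))]

def hamming(i1, i2):
    """Number of atoms on which two interpretations disagree."""
    return sum(i1[a] != i2[a] for a in ATOMS)

def revise(base, new):
    """Models of `new` at minimal Hamming distance from `base`'s models."""
    base_models, new_models = models(base), models(new)
    dist = lambda m: min(hamming(m, b) for b in base_models)
    d_min = min(dist(m) for m in new_models)
    return [m for m in new_models if dist(m) == d_min]

# Revising (p and q) by (not q) keeps p true: the closest model of
# the new information flips only q.
result = revise(lambda i: i["p"] and i["q"], lambda i: not i["q"])
```

The enumeration of all interpretations mirrors the fixed-domain setting's appeal: when models can be listed, distance-minimal revision becomes directly computable.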
|
329 |
Semantic Web Foundations for Representing, Reasoning, and Traversing Contextualized Knowledge Graphs
Nguyen, Vinh Thi Kim January 2017 (has links)
No description available.
|
330 |
Knowledge Acquisition in a System
Thomas, Christopher J. January 2012 (has links)
No description available.
|