272

Computation of context as a cognitive tool

Sanscartier, Manon Johanne, 09 November 2006
In the field of cognitive science, as well as the area of Artificial Intelligence (AI), the role of context has been investigated in many forms, and for many purposes. It is clear in both areas that consideration of contextual information is important. However, the significance of context has not been emphasized in the Bayesian networks literature. We suggest that consideration of context is necessary for acquiring knowledge about a situation and for refining current representational models that are potentially erroneous due to hidden independencies in the data.

In this thesis, we make several contributions towards the automation of contextual consideration by discovering useful contexts from probability distributions. We show how context-specific independencies in Bayesian networks, and the discovery algorithms traditionally used for efficient probabilistic inference, can contribute to the identification of contexts and, in turn, provide insight into otherwise puzzling situations. Consideration of context can also help clarify otherwise counterintuitive puzzles, such as those that result in instances of Simpson's paradox. In the social sciences, attribution theory is context-sensitive. We suggest a method to distinguish between dispositional causes and situational factors by means of contextual models. Finally, we address the work of Cheng and Novick on causal attribution by human adults. Their probabilistic contrast model makes use of contextual information, called focal sets, that must be determined by a human expert. We suggest a method for discovering complete focal sets from probability distributions, without requiring a human expert.
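To make the connection between context and Simpson's paradox concrete, here is a minimal sketch in Python; the data are the classic kidney-stone treatment numbers, chosen purely for illustration and not drawn from the thesis. Aggregated, treatment B looks better; conditioning on the context variable (stone size) reverses the comparison.

```python
# Hypothetical illustration of Simpson's paradox (classic kidney-stone
# numbers, not data from the thesis): the aggregate association between
# treatment and recovery reverses once we condition on stone size.
counts = {
    # (context, treatment): (recoveries, patients)
    ("small", "A"): (81, 87),
    ("small", "B"): (234, 270),
    ("large", "A"): (192, 263),
    ("large", "B"): (55, 80),
}

def recovery_rate(treatment, context=None):
    pairs = [(r, n) for (c, t), (r, n) in counts.items()
             if t == treatment and (context is None or c == context)]
    return sum(r for r, _ in pairs) / sum(n for _, n in pairs)

for t in ("A", "B"):
    print(t, "aggregate:", round(recovery_rate(t), 3))
for ctx in ("small", "large"):
    for t in ("A", "B"):
        print(t, ctx, round(recovery_rate(t, ctx), 3))
# Aggregate: B wins (0.826 vs 0.780); within each context, A wins.
```

A context-discovery procedure of the kind the abstract describes would, roughly speaking, search the distribution for exactly such a conditioning variable.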
273

Proof theory and algorithms for answer set programming

Gebser, Martin, January 2011
Answer Set Programming (ASP) is an emerging paradigm for declarative programming, in which a computational problem is specified by a logic program such that particular models, called answer sets, match solutions. ASP faces a growing range of applications that demand high-performance tools able to solve complex problems. ASP integrates ideas from a variety of neighboring fields. In particular, automated techniques to search for answer sets are inspired by Boolean Satisfiability (SAT) solving approaches. While the latter have firm proof-theoretic foundations, ASP lacks formal frameworks for characterizing and comparing solving methods. Furthermore, the sophisticated search patterns of modern SAT solvers, successfully applied in areas such as model checking and verification, are not yet established in ASP solving. We address these deficiencies, first, by providing proof-theoretic frameworks that allow for characterizing, comparing, and analyzing approaches to answer set computation, and second, by devising modern ASP solving algorithms that integrate and extend state-of-the-art techniques for Boolean constraint solving. We thus contribute to the understanding of existing ASP solving approaches and their interconnections, as well as to their enhancement by incorporating sophisticated search patterns. The central idea of our approach is to identify atomic as well as composite constituents of a propositional logic program with Boolean variables. This enables us to describe fundamental inference steps, and to selectively combine them in proof-theoretic characterizations of various ASP solving methods. In particular, we show that the different concepts of case analysis applied by existing ASP solvers entail mutual exponential separations in their best-case complexities. We also develop a generic proof-theoretic framework amenable to language extensions, and we point out that exponential separations can likewise be obtained from case analyses on such extensions. We further exploit fundamental inference steps to derive Boolean constraints characterizing answer sets. These enable the conception of ASP solving algorithms that include the search patterns of modern SAT solvers, while also allowing for direct technology transfer between the areas of ASP and SAT solving. Beyond the search for one answer set of a logic program, we address the enumeration of answer sets and of their projections to a subvocabulary. The algorithms we develop enable repetition-free enumeration in polynomial space without being intrusive, i.e., they do not necessitate any modification of computations before an answer set is found. Our approach to ASP solving is implemented in clasp, a state-of-the-art Boolean constraint solver that has successfully participated in recent solver competitions. Although we do not address here the implementation techniques of clasp or all of its features, we present the principles behind its success in the context of ASP solving.
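As a rough illustration of the answer-set semantics this work builds on, here is a naive brute-force sketch in Python (a didactic toy, not clasp's algorithm): enumerate the interpretations of a small ground normal program and keep those equal to the least model of their Gelfond-Lifschitz reduct.

```python
from itertools import chain, combinations

# Hypothetical two-rule ground program:  a :- not b.   b :- not a.
# Each rule is (head, positive_body, negative_body).
program = [
    ("a", frozenset(), frozenset({"b"})),
    ("b", frozenset(), frozenset({"a"})),
]
atoms = {"a", "b"}

def least_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate, then delete negative literals from the rest.
    reduct = [(h, p, frozenset()) for h, p, n in program if not (n & candidate)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_answer_set(set(s))])  # [{'a'}, {'b'}]
```

Solvers such as clasp replace this exponential enumeration with conflict-driven search over Boolean constraints derived from the program, broadly the kind of transfer from SAT that the thesis formalizes.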
274

Empirically-based self-diagnosis and repair of domain knowledge

Jones, Joshua K., 17 December 2009
In this work, I view incremental experiential learning in intelligent software agents as progressive agent self-adaptation. When an agent produces an incorrect behavior, it may reflect on, and thus diagnose and repair, the reasoning and knowledge that produced the incorrect behavior. In particular, I focus on the self-diagnosis and self-repair of an agent's domain knowledge. The implementation of systems with the capability to self-diagnose and self-repair involves building both reasoning processes capable of such learning and knowledge representations capable of supporting those reasoning processes. The core issue my dissertation addresses is: what kind of metaknowledge (knowledge about knowledge) may enable the agent to diagnose faults in its domain knowledge? In providing a solution to this issue, the central contribution of this research is a theory of the kind of metaknowledge that enables a system to reason about and adapt its conceptual knowledge. For this purpose, I propose a representation that explicitly encodes metaknowledge in the form of procedures called Empirical Verification Procedures (EVPs). In the proposed knowledge representation, an EVP is associated with each concept within the agent's domain knowledge. Each EVP explicitly semantically grounds the associated concept in the agent's perception, and can thus be used as a test to determine the validity of knowledge of that concept during diagnosis. I present the formal and empirical evaluation of a system, Augur, that makes use of EVP metaknowledge to adapt its own domain knowledge in the context of a particular subclass of classification problem that I call compositional classification, in which the overall classification task can be broken into a hierarchically organized set of subtasks. I hypothesize that EVP metaknowledge will enable a system to automatically adapt its knowledge in two ways: first, by adjusting the ways that inputs are categorized by a concept, in accordance with semantics fixed by an associated EVP; and second, by adjusting the semantics of concepts themselves when they fail to contribute appropriately to system goals. The latter adaptation is realized by altering the EVP associated with the concept in question. I further hypothesize that the semantic grounding of domain concepts in perception through the use of EVPs will increase the generalization power of a learner that operates over those concepts, and thus make learning more efficient. Beyond supporting these hypotheses, I also present results pertinent to the understanding of learning in compositional classification settings using structured knowledge representations.
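A minimal sketch of the EVP idea follows; the concept, its predicates, and the diagnosis rule are invented for illustration and do not reproduce Augur's actual design. Each concept carries a procedure that tests its output against later perception, and a failed test localizes the fault for repair.

```python
# Illustrative sketch (not Augur's interface): a concept paired with an
# Empirical Verification Procedure (EVP) that grounds it in perception.

class Concept:
    def __init__(self, name, classify, evp):
        self.name = name
        self.classify = classify   # maps input features to a label
        self.evp = evp             # tests a label against later perception

    def diagnose(self, features, observation):
        """Return this concept if its EVP rejects the produced label."""
        label = self.classify(features)
        return None if self.evp(label, observation) else self

# Toy sub-concept in a compositional classifier: is the terrain passable?
passable = Concept(
    "passable-terrain",
    classify=lambda f: f["slope_degrees"] < 30,
    evp=lambda label, obs: label == obs["agent_moved_forward"],
)

faulty = passable.diagnose({"slope_degrees": 25}, {"agent_moved_forward": False})
if faulty is not None:
    print("repair needed at concept:", faulty.name)  # fires: prediction failed
```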
275

Ontology as a means for systematic biology

Tirmizi, Syed Hamid Ali, 03 July 2012
Biologists use ontologies as a method to organize and publish their acquired knowledge. Computer scientists have shown the value of ontologies as a means for knowledge discovery. This dissertation makes a number of contributions to enable systematic biologists to better leverage their ontologies in their research. Systematic biology, or phylogenetics, is the study of evolution. “Assembling a Tree of Life” (AToL) is an NSF grand challenge to describe all life on Earth and estimate its evolutionary history. AToL projects commonly include the study of a taxon (organism) and the creation of an ontology that captures its anatomy. Such anatomy ontologies are manually curated from the data of morphology-based phylogenetic studies. Annotated digital imagery, morphological characters, and phylogenetic (evolutionary) trees are the key components of morphological studies. Given the scale of AToL, building an anatomy ontology for each taxon manually is infeasible. The primary contribution of this dissertation is the automatic inference, and the concomitant formalization, required to compute anatomy ontologies. New anatomy ontologies are formed by applying transformations to an existing anatomy ontology for a model organism. The conditions for the transformations are derived from observational data recorded as morphological characters. We automatically created the Cypriniformes Gill and Hyoid Arches Ontology using the morphological character data provided by the Cypriniformes Tree of Life (CTOL) project. The method is based on representing all components of a phylogenetic study as an ontology using a domain meta-model. For this purpose we developed Morphster, a domain-specific knowledge acquisition tool for biologists. Digital images often serve as proxies for natural specimens and are the basis of many observations. A key problem for Morphster is the treatment of images in conjunction with ontologies. We contributed a formal system for integrating images with ontologies, where images capture either observations of nature or scientific hypotheses. Our framework for image-ontology integration provides opportunities for building workflows that allow biologists to synthesize and align ontologies. Biologists building ontologies have often had to choose between two ontology systems: Open Biomedical Ontologies (OBO) or the Semantic Web. Bridging the gap between the two systems was critical for leveraging biological ontologies for inference. We created a methodology and a lossless round-trip mapping for OBO ontologies to the Semantic Web. Using the Semantic Web as a guide to organize OBO, we developed a mapping system which is now a community standard.
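The character-driven transformation of a model-organism ontology might be sketched as follows; the hierarchy encoding and the character format are illustrative assumptions, not the dissertation's actual representation.

```python
# Hypothetical sketch: derive a taxon-specific anatomy ontology by pruning
# a model-organism ontology with morphological character states.

model_ontology = {
    # term -> parent term (an is_a hierarchy); roots are absent from the map
    "gill_arch_1": "gill_arches",
    "gill_arch_5": "gill_arches",
    "pharyngeal_teeth": "gill_arch_5",
    "gill_arches": "hyoid_region",
}

# Character states scored for the study taxon: term -> "present"/"absent".
characters = {"pharyngeal_teeth": "present", "gill_arch_1": "absent"}

def specialize(ontology, chars):
    """Drop every term scored absent, together with its descendants."""
    absent = {t for t, s in chars.items() if s == "absent"}
    def pruned(term):
        while term is not None:          # walk up the is_a chain
            if term in absent:
                return True
            term = ontology.get(term)
        return False
    return {t: p for t, p in ontology.items() if not pruned(t)}

print(specialize(model_ontology, characters))
# gill_arch_1 is removed for this taxon; the other terms survive.
```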
276

Modeling the clinical predictivity of palpitation symptom reports: mapping body cognition onto cardiac and neurophysiological measurements

McNally, Robert Owen, 30 January 2012
This dissertation models the relationship between symptoms of heart rhythm fluctuations and cardiac measurements in order to better identify the probabilities of either a primarily organic or a psychosomatic cause, and to better understand cognition of the internal body. The medical system needs to distinguish patients with actual cardiac problems from those who are misperceiving benign heart rhythms due to psychosomatic conditions. Cognitive neuroscience needs models showing how the brain processes sensations of palpitations. Psychologists and philosophers want data and analyses that address longstanding controversies about the validity of introspective methods. I therefore undertake a series of measurements to model how well patient descriptions of heartbeat fluctuations correspond to cardiac arrhythmias. First, I employ a formula for Bayesian inference and an initial probability of disease; the presence of particular phrases in symptom reports is shown to modify the probability that a patient has a clinically significant heart rhythm disorder. A second measure of body-knowledge accuracy uses a corpus of one hundred symptom reports to estimate the positive predictive value for arrhythmias contained in language about palpitations, producing a metric representing average predictivity for cardiac arrhythmias in a population. A third effort examines data from a series of studies to determine the percentage of patients reporting palpitations who are actually diagnosed with arrhythmias. The major finding suggests that phenomenological reports about heartbeats are as predictive of clinically significant arrhythmias as non-introspection-based data sources, or more so. This calculation can help clinicians who must diagnose an organic or psychosomatic etiology. Second, examining the corpus of reports for how well they predict the presence of cardiac rhythm disorders yielded a mean positive predictive value of 0.491. Third, I reviewed studies of palpitation reporters, half of which showed that between 15% and 26% of patients had significant or serious arrhythmias. In addition, evidence is presented that psychosomatically based palpitation reports are likely due to cognitive filtering and processing of cardiac afferents by brainstem, thalamic, and cortical neurons. A framework is proposed to model these results, integrating neurophysiological, cognitive, and clinical levels of explanation. Strategies for developing therapies for patients suffering from identifiably psychosomatic palpitations are outlined.
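The first, Bayesian step can be sketched as follows; the prior, likelihoods, and example phrase are invented for illustration and do not reproduce the dissertation's values.

```python
# Hedged sketch of the Bayesian update: how a phrase in a symptom report
# shifts the probability of a clinically significant arrhythmia.
# All numbers below are illustrative assumptions.

def posterior(prior, p_phrase_given_disease, p_phrase_given_benign):
    """P(disease | phrase) by Bayes' rule."""
    numerator = p_phrase_given_disease * prior
    evidence = numerator + p_phrase_given_benign * (1.0 - prior)
    return numerator / evidence

# Suppose 20% of palpitation reporters have a significant arrhythmia, and a
# phrase like "skipped a beat, then pounding" appears in 40% of their
# reports but in only 10% of reports from patients with benign rhythms.
print(round(posterior(0.20, 0.40, 0.10), 3))
# -> 0.5: the phrase raises the probability from 0.20 to 0.50.
```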
277

Answering Object Queries over Knowledge Bases with Expressive Underlying Description Logics

Wu, Jiewen, January 2013
Many information sources can be viewed as collections of objects and descriptions about objects. The relationships between objects are often characterized by a set of constraints that semantically encode background knowledge of some domain. The most straightforward and fundamental way to access information in these repositories is to search for objects that satisfy certain selection criteria. This work considers a description logics (DL) based representation of such information sources and object queries, which allows for automated reasoning over the constraints accompanying objects. Formally, a knowledge base K = (T, A) captures constraints in the terminology (a TBox) T, and objects with their descriptions in the assertions (an ABox) A, using some DL dialect L. In such a setting, object descriptions are L-concepts and object identifiers correspond to individual names occurring in K. Correspondingly, object queries are the well-known problem of instance retrieval in the underlying DL knowledge base K, which returns the identifiers of qualifying objects. This work generalizes instance retrieval over knowledge bases to provide users with answers in which both identifiers and descriptions of qualifying objects are given. The proposed query paradigm, called assertion retrieval, is favoured over instance retrieval since it provides more informative answers to users. A more compelling reason is related to performance: assertion retrieval enables a transfer of basic relational database techniques, such as caching and query rewriting, in the context of an assertion retrieval algebra. The main contributions of this work are two-fold: one concerns optimizing the fundamental reasoning task that underlies assertion retrieval, namely instance checking; the other establishes a query compilation framework based on the assertion retrieval algebra. The former is necessary because an assertion retrieval query can entail a large volume of instance-checking requests of the form K ⊨ a:C, where "a" is an individual name and "C" is an L-concept. This work thus proposes a novel absorption technique, ABox absorption, to improve instance checking. ABox absorption handles knowledge bases whose underlying dialect L is expressive, for instance requiring disjunctive knowledge. It works particularly well when knowledge bases contain a large number of concrete domain concepts for object descriptions. This work further presents a query compilation framework based on the assertion retrieval algebra to make assertion retrieval more practical. In the framework, a suite of rewriting rules is provided to generate a variety of query plans, with a focus on plans that avoid reasoning with respect to the background knowledge base when sufficient cached results of earlier requests exist. ABox absorption and the query compilation framework have been implemented in a prototypical system, dubbed the CARE Assertion Retrieval Engine (CARE). CARE also defines a simple yet effective cost model to search for the best plan generated by query rewriting. Empirical studies of CARE have shown that the proposed techniques make assertion retrieval a practical application over a variety of domains.
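The caching idea can be sketched with a memoized wrapper around instance checking; the reasoner interface below is a stand-in, not CARE's actual API.

```python
# Sketch of caching for assertion retrieval: memoize instance checks
# K |= a:C so repeated requests avoid invoking the DL reasoner.
from functools import lru_cache

def expensive_instance_check(individual: str, concept: str) -> bool:
    """Stand-in for a full DL entailment test K |= a:C."""
    # A real system would call a tableau or absorption-based reasoner over
    # the knowledge base; here we fake the answer with a lookup table.
    known = {("mary", "Doctor"), ("john", "Patient")}
    return (individual, concept) in known

@lru_cache(maxsize=None)
def instance_check(individual: str, concept: str) -> bool:
    return expensive_instance_check(individual, concept)

candidates = ["mary", "john", "sue"]
print([a for a in candidates if instance_check(a, "Doctor")])  # ['mary']
instance_check("mary", "Doctor")       # answered from the cache this time
print(instance_check.cache_info())     # CacheInfo(hits=1, misses=3, ...)
```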
278

Associative classification, linguistic entity relationship extraction, and description-logic representation of biomedical knowledge applied to MEDLINE

Rak, Rafal, Unknown Date
No description available.
279

Developing an inter-organisational knowledge transfer framework for SMEs

Chen, Shizhong, January 2005
This thesis aims to develop an inter-organisational knowledge transfer (KT) framework for SMEs, to help them better understand the process of KT between an SME and its customer (or supplier). The motivation is that knowledge management in SMEs is much neglected, which is not in line with the importance of SMEs to the UK national economy; moreover, compared with KT within an organisation, KT between organisations is more complicated, harder to understand, and has received much less attention. Firstly, external knowledge is generally believed to be of prime importance for SMEs. However, there is little empirical evidence to confirm this hypothesis. In order to evaluate the hypothesis empirically, and also specifically to identify SMEs' needs for external knowledge, a mail questionnaire survey is carried out. Then, based on the key findings of the survey, some SME managers are interviewed. The conclusions triangulated from both the key findings and the interview results strongly support the hypothesis, demonstrate that SMEs have very strong needs for inter-organisational KT, and thus provide very strong empirical underpinning for the necessity of developing the framework. Secondly, drawing support from a process view, a four-stage process model is proposed for inter-organisational KT, and a co-ordinating mechanism underpinned by social networks and organisational learning is developed. The process model and co-ordinating mechanism, together with cultural differences between organisations, constitute an initial framework. Through interviews with SME managers, the initial framework is revised into a final framework. The framework validation exercise shows that the final framework can help SMEs better understand KT. In order to remind and help SMEs to address the 'boundary paradox' embedded in inter-organisational KT, and to further reflect its complexities and difficulties, the important factors related to each stage of the framework are identified from a strategic perspective, with the help of the co-ordinating mechanism and relevant literature. The factors are also verified by interviews in SMEs. As a result, the initial factors are revised by removing those perceived as unimportant. The interview results demonstrate that the important factors, as a checklist, can remind and help SMEs to address the 'paradox', and are thus very useful for them.
280

A Reasoning Module for Long-lived Cognitive Agents

Vassos, Stavros, 03 March 2010
In this thesis, we study a reasoning module for agents that have cognitive abilities, such as memory, perception, and action, and are expected to function autonomously for long periods of time. The module provides the ability to reason about action and change using the language of the situation calculus and variants of the basic action theories. The main focus of this thesis is on the logical problem of progressing an action theory. First, we investigate the conjecture by Lin and Reiter that a practical first-order definition of progression is not appropriate for the general case. We show that Lin and Reiter were indeed correct in their intuitions by providing a proof for the conjecture, thus resolving the open question about the first-order definability of progression and justifying the need for a second-order definition. We then identify three cases where it is possible to obtain a first-order progression with the desired properties: i) we extend earlier work by Lin and Reiter and present a case where we restrict our attention to a practical class of queries that may only quantify over situations in a limited way; ii) we revisit the local-effect assumption of Liu and Levesque, which requires that the effects of an action be fixed by the arguments of the action, and show that in this case a first-order progression is suitable; iii) we investigate a way in which the local-effect assumption can be relaxed, and show that when the initial knowledge base is a database of possible closures and the effects of the actions are range-restricted, a first-order progression is also suitable under a just-in-time assumption. Finally, we examine a special case of the action theories with range-restricted effects and present an algorithm for computing a finite progression. We prove the correctness and the complexity of the algorithm, and show its application in a simple example inspired by video games.
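A toy sketch of progression under the local-effect assumption follows; the fluents and actions are invented for illustration. Because each action's effects mention only its own arguments, the progressed state is computable by simple set updates on the affected fluents.

```python
# Illustrative sketch of progressing a database-like initial state under
# actions with local effects (fluents and actions are invented examples).

state = {("at", "robot", "room1"), ("holding", "robot", "nothing")}

def progress(state, action, args):
    """Return the state progressed through one local-effect action."""
    if action == "move":
        robot, src, dst = args
        return (state - {("at", robot, src)}) | {("at", robot, dst)}
    if action == "pickup":
        robot, obj = args
        return (state - {("holding", robot, "nothing")}) | {("holding", robot, obj)}
    raise ValueError(f"unknown action: {action}")

state = progress(state, "move", ("robot", "room1", "room2"))
state = progress(state, "pickup", ("robot", "box"))
print(sorted(state))
# [('at', 'robot', 'room2'), ('holding', 'robot', 'box')]
```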
