201 |
Goal driven theorem proving using conceptual graphs and Peirce logic. Heaton, John Edward, January 1994
The thesis describes a rational reconstruction of Sowa's theory of Conceptual Graphs. The reconstruction produces a theory with a firmer logical foundation than was previously the case and which is suitable for computation whilst retaining the expressiveness of the original theory. Also, several areas of incompleteness are addressed. These mainly concern the scope of operations on conceptual graphs of different types but include extensions for logics of higher orders than first order. An important innovation is the placing of negation onto a sound representational basis. A comparison of theorem proving techniques is made from which the principles of theorem proving in Peirce logic are identified. As a result, a set of derived inference rules, suitable for a goal driven approach to theorem proving, is developed from Peirce's beta rules. These derived rules, the first of their kind for Peirce logic and conceptual graphs, allow the development of a novel theorem proving approach which has some similarities to a combined semantic tableau and resolution methodology. With this methodology it is shown that a logically complete yet tractable system is possible. An important result is the identification of domain independent heuristics which follow directly from the methodology. In addition to the theorem prover, an efficient system for the detection of selectional constraint violations is developed. The proof techniques are used to build a working knowledge base system in Prolog which can accept arbitrary statements represented by conceptual graphs and test their semantic and logical consistency against a dynamic knowledge base. The same proof techniques are used to find solutions to arbitrary queries. Since the system is logically complete it can maintain the integrity of its knowledge base and answer queries in a fully automated manner. 
Thus the system is completely declarative and requires no programming whatsoever by the user, with the result that all interaction with the user is conversational. Finally, the system is compared with other theorem proving systems based upon Conceptual Graphs, and conclusions about the effectiveness of the methodology are drawn.
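The goal-driven control regime described above can be sketched, purely illustratively, as backward chaining: a goal is reduced to subgoals until facts are reached. The sketch below does not implement Peirce's beta rules or conceptual graphs, and all predicate names are invented for the example.

```python
# Illustrative sketch only: a minimal goal-driven (backward-chaining) prover
# over propositional Horn rules. It shows the control flow a goal-driven
# theorem prover uses, NOT Sowa's conceptual graphs or Peirce logic.

FACTS = {"cat(tom)", "animal_lover(mary)"}
RULES = [
    # (head, [body goals]): head is provable if every body goal is provable
    ("mammal(tom)", ["cat(tom)"]),
    ("likes(mary, tom)", ["animal_lover(mary)", "mammal(tom)"]),
]

def prove(goal, depth=0):
    """Return True if goal follows from FACTS via RULES (goal-driven search)."""
    if depth > 50:          # crude guard against circular rule chains
        return False
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head == goal and all(prove(g, depth + 1) for g in body):
            return True
    return False

print(prove("likes(mary, tom)"))   # True
print(prove("likes(tom, mary)"))   # False
```

A real conceptual-graph prover would unify graphs rather than match ground strings, but the derivation of goals from subgoals follows the same shape.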
|
202 |
Bridging the Semantic Gap between Sensor Data and Ontological Knowledge. Alirezaie, Marjan, January 2015
The rapid growth of sensor data can potentially enable a better awareness of the environment for humans. In this regard, interpretation of data needs to be human-understandable. For this, data interpretation may include semantic annotations that hold the meaning of numeric data. This thesis is about bridging the gap between quantitative data and qualitative knowledge to enrich the interpretation of data. A number of challenges make the automation of the interpretation process non-trivial, including the complexity of sensor data, the amount of available structured knowledge and the inherent uncertainty in data. Under the premise that high-level knowledge is contained in ontologies, this thesis investigates the use of current techniques in ontological knowledge representation and reasoning to confront these challenges. Our research is divided into three phases, where the focus of the first phase is on the interpretation of data for domains which are semantically poor in terms of available structured knowledge. During the second phase, we studied publicly available ontological knowledge for the task of annotating multivariate data. Our contribution in this phase concerns applying a diagnostic reasoning algorithm to available ontologies. Our studies during the last phase focused on the design and development of a domain-independent ontological representation model equipped with a non-monotonic reasoning approach for annotating time-series data. Our last contribution relates to coupling an OWL-DL ontology with a non-monotonic reasoner. The experimental platforms used for validation consist of a network of sensors, including gas sensors whose generated data is complex. A secondary data set includes time-series medical signals representing physiological data, as well as a number of publicly available ontologies, such as those in the NCBO BioPortal repository.
|
203 |
LoCo: a logic for configuration problems. Aschinger, Markus Wolfgang, January 2014
This thesis deals with the problem of technical product configuration: connect individual components conforming to a component catalogue in order to meet a given objective while respecting certain constraints. Solving such configuration problems is one of the major success stories of applied AI research: in industrial environments they support the configuration of complex products and, compared to manual processes, help to reduce error rates and increase throughput. Practical applications are nowadays ubiquitous and range from configurable cars to the configuration of telephone communication switching units. In the classical definition of a configuration problem the number of components to be used is fixed; in practice, however, the number of components needed is often not easily stated beforehand. Existing knowledge representation (KR) formalisms expressive enough to deal with this dynamic aspect of configuration require that explicit bounds on all generated components be given, as well as extensive knowledge about the underlying solving algorithms. To date there is still a lack of high-level KR tools able to cope with these demands. In this work we present LoCo, a fragment of classical first-order logic that has been carefully tailored for expressing technical product configuration problems. The core feature of LoCo is that the number of components used in configurations does not have to be finitely bounded explicitly, but instead is bounded implicitly through the axioms. We identify configurations with models of the logic; hence, configuration finding becomes model finding. LoCo serves as a high-level representation language which allows the modelling of general configuration problems in an intuitive and declarative way without requiring knowledge of the underlying solving algorithms; in fact, the specification gets automatically translated into low-level executable code. LoCo allows translations into different target languages.
We present the language, related algorithms and complexity results as well as a prototypical implementation via answer-set programming.
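The idea that "configuration finding becomes model finding", with the number of components derived by the solver rather than fixed up front, can be illustrated with a deliberately tiny toy search. This is not LoCo's logic or its answer-set-programming translation; the catalogue, demand, and bound below are all invented for the example.

```python
# Hypothetical sketch: find a set of catalogue components satisfying a
# constraint, where the NUMBER of components is an output of the search,
# not an input. Real configurators use constraint/ASP solvers, not brute force.
from itertools import combinations_with_replacement

CATALOGUE = {"psu_small": 300, "psu_large": 750}   # component -> watts supplied
DEMAND = 1000                                      # watts required overall

def configure(max_components=6):
    """Return the first (smallest-count) configuration meeting DEMAND, or None."""
    for n in range(1, max_components + 1):         # component count grows as needed
        for combo in combinations_with_replacement(CATALOGUE, n):
            if sum(CATALOGUE[c] for c in combo) >= DEMAND:
                return combo
    return None

print(configure())   # ('psu_small', 'psu_large')
```

LoCo's contribution is precisely that such implicit bounds on component counts fall out of the axioms, so the modeller never writes the `max_components` safeguard used here.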
|
204 |
Ontology module extraction and applications to ontology classification. Armas Romero, Ana, January 2015
Module extraction is the task of computing a (preferably small) fragment <i>M</i> of an ontology <i>O</i> that preserves a class of entailments over a signature of interest Σ. Existing practical approaches ensure that <i>M</i> preserves all second-order entailments of <i>O</i> over Σ, which is a stronger condition than is required in many applications. In the first part of this thesis, we propose a novel approach to module extraction which, based on a reduction to a datalog reasoning problem, makes it possible to compute modules that are tailored to preserve only specific kinds of entailments. This leads to obtaining modules that are often significantly smaller than those produced by other practical approaches, as shown in an empirical evaluation. In the second part of this thesis, we consider the application of module extraction to the optimisation of ontology classification. Classification is a fundamental reasoning task in ontology design, and there is currently a wide range of reasoners that provide this service. Reasoners aimed at so-called lightweight ontology languages are much more efficient than those aimed at more expressive ones, but they do not offer completeness guarantees for ontologies containing axioms outside the relevant language. We propose an original approach to classification based on exploiting module extraction techniques to divide the workload between a general purpose reasoner and a more efficient reasoner for a lightweight language in such a way that the bulk of the workload is assigned to the latter. We show how the proposed approach can be realised using two particular module extraction techniques, including the one presented in the first part of the thesis. Furthermore, we present the results of an empirical evaluation that shows that this approach can lead to a significant performance improvement in many cases.
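The flavour of module extraction can be conveyed with a rough sketch in the spirit of syntactic reachability modules (not the thesis's datalog-based method). Each axiom is modelled, very crudely, as a pair of symbol sets; an axiom is pulled into the module once its body touches the growing signature of interest. The axioms below are invented for the example.

```python
# Rough sketch of signature-driven module extraction. Real locality-based
# modules work on description-logic axioms; here an "axiom" is just
# (body_symbols, head_symbols), e.g. Cat SubClassOf Mammal.

AXIOMS = [
    ({"Cat"}, {"Mammal"}),        # Cat is a Mammal
    ({"Mammal"}, {"Animal"}),     # Mammal is an Animal
    ({"Car"}, {"Vehicle"}),       # Car is a Vehicle (irrelevant to Cat)
]

def extract_module(signature):
    """Collect axioms reachable from the signature; grow the signature as we go."""
    sig, module = set(signature), []
    changed = True
    while changed:
        changed = False
        for ax in AXIOMS:
            body, head = ax
            if ax not in module and body & sig:
                module.append(ax)
                sig |= head          # the module's signature grows
                changed = True
    return module

# Only the Cat/Mammal chain is relevant to {"Cat"}; the Car axiom is dropped.
print(extract_module({"Cat"}))
```

In the thesis's hybrid classification scheme, a module like this would be handed to the general-purpose reasoner while the lightweight reasoner classifies the rest.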
|
205 |
Knowledge production in a think tank: a case study of the Africa Institute of South Africa (AISA). Muzondo, Shingirirai, January 2009
The study sought to investigate the system of knowledge production at AISA and assess the challenges of producing knowledge at the institution. The objectives of the study were to: identify AISA's main achievements in knowledge production; determine AISA's challenges in producing knowledge; find out how AISA's organizational culture impacts internal knowledge production; and suggest ways of improving knowledge production at AISA. A case study was used as the research method, and purposive sampling was used to select 50 cases out of a study population of 70. Questionnaires were prepared and distributed to AISA employees, and where possible face-to-face interviews were conducted. Both quantitative and qualitative methods were used to analyze the collected data. The findings of the study may be used by governments across sub-Saharan Africa to produce relevant knowledge for formulating and implementing economic, social and technological policies. The study is also important in identifying challenges that may hinder the successful production of knowledge. The study revealed that AISA has a well-defined system of knowledge production and has had many achievements that have contributed to its relevance as a think tank today. It found that AISA has faced various challenges, the main one being organizational culture. From the findings, the researcher recommended that AISA establish itself as a knowledge-based organization and create a knowledge-friendly culture as a framework for addressing the issue of organizational culture.
|
206 |
Knowledge representation and problem solving for an intelligent tutoring system. Li, Vincent, January 1990
As part of an effort to develop an intelligent tutoring system, a set of knowledge representation frameworks was proposed to represent expert domain knowledge. A general representation of time points and temporal relations was developed to facilitate temporal concept deductions as well as the explanation capabilities vital in an intelligent advisor system. Conventional representations of time use a single-reference timeline and assign a single unique value to the time of occurrence of an event. They fail to capture the notion of events, such as changes in signal states in microcomputer systems, which do not occur at precise points in time, but rather over a range of time with some probability distribution. Time is, fundamentally, a relative quantity. In conventional representations, this relative relation is implicitly defined with a fixed reference, "time-zero", on the timeline. This definition is insufficient if an explanation of the temporal relations is to be constructed. The proposed representation of time solves these two problems by representing a time point as a time-range and making the reference point explicit.
An architecture of the system was also proposed to provide a means of integrating various modules as the system evolves, as well as a modular development approach. A production rule EXPERT, based on the rule framework used in the Graphic Interactive LISP tutor (GIL) [44, 45], an intelligent tutor for LISP programming, was implemented to demonstrate the inference process using this time point representation. The EXPERT is goal-driven and is intended to be an integral part of a complete intelligent tutoring system. (Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate.)
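The two fixes the abstract describes, a time point widened to a range and an explicit rather than implicit reference, can be sketched as a small data structure. This is a hedged reconstruction of the idea only, not the thesis's actual frames; the field names and the signal-timing numbers are invented.

```python
# Sketch of a time point as a range with an explicit reference event, so that
# temporal relations can be both computed and explained ("5-6 ns after reset"
# rather than an absolute value on an implicit time-zero axis).
from dataclasses import dataclass

@dataclass
class TimeRange:
    earliest: float      # lower bound on occurrence time
    latest: float        # upper bound on occurrence time
    reference: str       # explicit reference event, not an implicit "time-zero"

def definitely_before(a: TimeRange, b: TimeRange) -> bool:
    """True only if the whole of a's range precedes b's range."""
    assert a.reference == b.reference, "compare only against a shared reference"
    return a.latest < b.earliest

clk = TimeRange(0.0, 2.0, "reset")    # e.g. a signal edge settling over 0-2 ns
data = TimeRange(5.0, 6.0, "reset")
print(definitely_before(clk, data))   # True
print(definitely_before(data, clk))   # False
```

Because the reference is carried with each value, a tutor could justify its answer by naming it, which is exactly the explanation capability the single implicit time-zero cannot support.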
|
207 |
Efficient Algorithms for Learning Combinatorial Structures from Limited Data. Asish Ghoshal, 15 May 2019
Recovering combinatorial structures from noisy observations is a recurrent problem in many application domains, including, but not limited to, natural language processing, computer vision, genetics, health care, and automation. For instance, dependency parsing in natural language processing entails recovering parse trees from sentences which are inherently ambiguous. From a computational standpoint, such problems are typically intractable and call for designing efficient approximation or randomized algorithms with provable guarantees. From a statistical standpoint, algorithms that recover the desired structure using an optimal number of samples are of paramount importance.

We tackle several such problems in this thesis and obtain computationally and statistically efficient procedures. We demonstrate the optimality of our methods by proving fundamental lower bounds on the number of samples needed by any method for recovering the desired structures. Specifically, the thesis makes the following contributions:

(i) We develop polynomial-time algorithms for learning linear structural equation models, a widely used class of models for performing causal inference, that recover the correct directed acyclic graph structure under identifiability conditions weaker than existing ones. We also show that the sample complexity of our method is information-theoretically optimal.

(ii) We develop polynomial-time algorithms for learning the underlying graphical game from observations of the behavior of self-interested agents. The key combinatorial problem here is to recover the Nash equilibria set of the true game from behavioral data. We obtain fundamental lower bounds on the number of samples required for learning games and show that our method is statistically optimal.

(iii) Lastly, departing from the generative-model framework, we consider the problem of structured prediction, where the goal is to learn predictors that map a given input directly to a complex structured object. We develop efficient learning algorithms that learn structured predictors by approximating the partition function, and we obtain generalization guarantees for our method. We demonstrate that randomization can improve not only efficiency but also generalization to unseen data.
|
208 |
Generation of cyber attack data using generative techniques. Nidhi Nandkishor Sakhala, 15 May 2019
The presence of attacks in day-to-day traffic flow in connected networks is considerably rarer than genuine traffic, yet the consequences of these attacks are disastrous. It is very important to identify when the network is being attacked and to block these attempts in order to protect the network. Failure to block such attacks can lead to loss of confidential information, reputational damage, and financial loss. One strategy for identifying these attacks is to use machine learning algorithms that learn to recognise attacks from previous examples. But since the number of attack samples is small, it is difficult to train these algorithms. This study aims to use generative techniques to create new attack samples that can be used to train machine-learning-based intrusion detection systems to identify more attacks. Two metrics are used to verify that training has improved, and a binary classifier is used to perform a two-sample test on the generated attacks.
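The classifier two-sample test mentioned at the end can be sketched as follows: train a binary classifier to separate real samples from generated ones; if its held-out accuracy is near 0.5, the generated samples are statistically hard to distinguish from real ones. The toy data and the nearest-centroid "classifier" below are invented for the sketch and stand in for the thesis's actual classifier and attack features.

```python
# Sketch of a classifier two-sample test. Synthetic "real" and "generated"
# feature vectors are drawn from nearly identical distributions; a trivial
# nearest-centroid classifier then tries to tell them apart on held-out data.
import random

random.seed(0)
real      = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
generated = [[random.gauss(0.1, 1.0) for _ in range(4)] for _ in range(200)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Fit on the first half of each sample, evaluate on the second half.
c_real, c_gen = centroid(real[:100]), centroid(generated[:100])
held_out = [(x, 0) for x in real[100:]] + [(x, 1) for x in generated[100:]]
acc = sum((dist2(x, c_gen) < dist2(x, c_real)) == bool(y)
          for x, y in held_out) / len(held_out)
print(round(acc, 2))   # accuracy near 0.5 indicates near-indistinguishability
```

A strong generator drives this accuracy toward chance; a weak one leaves an easily separable gap, which is why the test serves as a verification metric for generated attack data.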
|
209 |
Heterogeneous Graph Based Neural Network for Social Recommendations with Balanced Random Walk Initialization. Amirreza Salamat, 07 January 2021
Research on social networks and understanding the interactions of their users can be modeled as a graph-mining task, such as predicting nodes and edges in networks. Dealing with such unstructured data in large social networks has challenged researchers for several years. Neural networks have recently proven very successful at prediction on speech, image, and text data and have become the de facto method when dealing with such data in large volume. Graph neural networks, however, have only recently become mature enough to be used in real large-scale graph prediction tasks, and they require proper structure and data modeling to be viable and successful. In this research, we provide a new modeling of the social network which captures the attributes of the nodes from various dimensions. We also introduce the neural network architecture required to make optimal use of the new data structure. Finally, in order to provide a warm start for our model, we initialize the weights of the neural network using a pre-trained graph embedding. We have also developed a new graph embedding algorithm: we first explain how previous graph embedding methods are not optimal for all types of graphs, and then provide a solution that overcomes those limitations.
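Walk-based embedding methods, the family the pre-trained initialization above draws on, start by generating a corpus of truncated random walks over the graph. The sketch below shows only that generic first step (in the style of DeepWalk-like methods); it is not the balanced-walk algorithm proposed in the thesis, and the tiny graph is invented.

```python
# Illustrative only: generate truncated random walks over a graph, the usual
# corpus-building step before training node embeddings on the walks.
import random

random.seed(1)
GRAPH = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}

def random_walks(graph, walks_per_node=2, walk_length=4):
    """Return a list of walks; each walk is a list of node ids."""
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                walk.append(random.choice(graph[walk[-1]]))  # uniform next hop
            walks.append(walk)
    return walks

walks = random_walks(GRAPH)
print(len(walks))   # 8 walks: 2 per node, each of length 4
```

A balanced variant would bias the `random.choice` step so that node types or degrees are visited more evenly, which is the kind of limitation of uniform walks the thesis addresses.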
|
210 |
Modeling Knowledge and Functional Intent for Context-Aware Pragmatic Analysis. Vedula, Nikhita, January 2020
No description available.
|