About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Standard and Non-standard reasoning in Description Logics

Brandt, Sebastian-Philipp 05 April 2006 (has links)
The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals, and relations between such classes, in a formally well-defined way. We provide novel results in three main directions. (1) Tractable reasoning revisited: in the 1990s, DL research largely answered the question of practically relevant yet tractable DL formalisms in the negative. Motivated by novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we revisit this question, this time looking in a new direction: general terminologies (TBoxes), and extensions thereof, defined over the DL EL and its extensions. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that every additional standard DL constructor, every extension of the TBox formalism, and every more powerful concrete domain makes reasoning intractable. (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work advances the state of the art of non-standard inferences in both theory and implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial-time algorithms for matching under acyclic side conditions in ALN and its sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes. (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally raise the question of a tractable DL formalism in which both can be provided.
As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
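The tractability claim for EL mentioned in the abstract can be illustrated with a toy completion-rule reasoner. This is a minimal sketch under invented encodings (the tuple-based axiom forms and `classify` are not the thesis's notation, and it covers plain EL, not EL++(D)-CBoxes): subsumption over a normalized TBox is decided by saturating subsumer sets with simple rules until a fixpoint, which runs in polynomial time.

```python
# Minimal sketch of completion-based EL classification (illustrative, not the
# thesis's EL++(D) formalism). Normalized axiom forms:
#   ("sub", A, B)            A ⊑ B
#   ("conj", A1, A2, B)      A1 ⊓ A2 ⊑ B
#   ("exists_rhs", A, r, B)  A ⊑ ∃r.B
#   ("exists_lhs", r, A, B)  ∃r.A ⊑ B

def classify(concepts, axioms):
    """Compute S[X] = all concept names subsuming X by naive rule saturation."""
    S = {X: {X, "TOP"} for X in concepts}   # X ⊑ X and X ⊑ ⊤ always hold
    R = {}                                  # R[r] = entailed pairs (X, B) with X ⊑ ∃r.B
    changed = True
    while changed:                          # fixpoint: apply rules until nothing new
        changed = False
        for ax in axioms:
            if ax[0] == "sub":
                _, A, B = ax
                for X in concepts:
                    if A in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "conj":
                _, A1, A2, B = ax
                for X in concepts:
                    if A1 in S[X] and A2 in S[X] and B not in S[X]:
                        S[X].add(B); changed = True
            elif ax[0] == "exists_rhs":
                _, A, r, B = ax
                for X in concepts:
                    if A in S[X] and (X, B) not in R.setdefault(r, set()):
                        R[r].add((X, B)); changed = True
            elif ax[0] == "exists_lhs":
                _, r, A, B = ax
                for (X, Y) in list(R.get(r, ())):
                    if A in S[Y] and B not in S[X]:
                        S[X].add(B); changed = True
    return S
```

For example, from A ⊑ ∃r.B, B ⊑ C, and ∃r.C ⊑ D, the saturation derives A ⊑ D; because the sets S and R only grow and are polynomially bounded, the loop terminates in polynomial time.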
102

Relational Exploration: Combining Description Logics and Formal Concept Analysis for Knowledge Specification

Rudolph, Sebastian 01 December 2006 (has links)
Facing the growing amount of information in today's society, the task of specifying human knowledge in a way that computers can process unambiguously becomes more and more important. Two acknowledged fields in this evolving scientific area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes, while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
103

Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis

Distel, Felix 27 April 2011 (has links)
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure algorithm and Attribute Exploration. The first of the three methods computes terminological knowledge from the data without any expert interaction. The other two methods use expert interaction, where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
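The Next-Closure algorithm the abstract refers to can be sketched in a few lines. This is an illustrative implementation over a tiny invented formal context (objects `g1`, `g2` and attributes `a`, `b`, `c` are made up): it enumerates every closed attribute set (concept intent) in lectic order, which is also the engine behind Attribute Exploration.

```python
# Compact sketch of Ganter's Next-Closure algorithm; the toy context below is
# invented for illustration, not taken from the thesis.

def closure(context, attrs, B):
    """Double-prime closure B'': common attributes of all objects having B."""
    out = set(attrs)
    for row in context.values():
        if B <= row:
            out &= row
    return out          # equals the full attribute set if no object has all of B

def next_closure(B, order, close):
    """Lectically next closed attribute set after B, or None if B is the last."""
    for i in reversed(range(len(order))):
        m = order[i]
        if m in B:
            B = B - {m}
        else:
            C = close(B | {m})
            # lectic condition: nothing smaller than m may have been added
            if not any(order.index(a) < i for a in C - B):
                return C
    return None

def all_intents(context, order):
    """Enumerate every concept intent of the context in lectic order."""
    close = lambda B: closure(context, set(order), B)
    intents, B = [], close(set())
    while B is not None:
        intents.append(frozenset(B))
        B = next_closure(B, order, close)
    return intents
```

On the two-object context `{"g1": {"a","b"}, "g2": {"b","c"}}` this yields the four intents {b}, {b,c}, {a,b}, {a,b,c}; Attribute Exploration interleaves this enumeration with questions to an expert, which is the interactive mode the last two methods of the thesis use.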
104

Formal Concept Analysis Methods for Description Logics

Sertkaya, Baris 15 November 2007 (has links)
This work presents two main contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting the bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research concerns the computational complexity of computing generators of closed sets.
105

Learning OWL Class Expressions

Lehmann, Jens 09 June 2010 (has links)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, due to the fact that engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. In order to leverage machine learning approaches for solving these tasks, it is necessary to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e.
facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold: The first contribution is a complete analysis of desirable properties of refinement operators in description logics. 
Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. 
According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
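The refinement operators at the heart of the thesis can be illustrated with a toy top-down operator. This is a sketch over a tiny invented fragment (atoms, conjunction, ∃r.C, with a made-up class hierarchy and role); the thesis's actual operators handle far more constructors and exploit background knowledge much more carefully.

```python
# Illustrative top-down (downward) refinement operator; the hierarchy and role
# are invented, and this is not DL-Learner's actual operator.
# Expressions: ("atom", name), ("and", c1, c2), ("some", role, c).

SUBCLASSES = {"Thing": ["Person", "Place"], "Person": ["Doctor"]}  # direct subclasses
ROLES = ["treats"]

def refine(expr):
    """Yield one-step downward refinements (specializations) of expr."""
    kind = expr[0]
    if kind == "atom":
        for s in SUBCLASSES.get(expr[1], []):
            yield ("atom", s)                          # move down the hierarchy
        if expr[1] == "Thing":
            for r in ROLES:
                yield ("some", r, ("atom", "Thing"))   # introduce ∃r.⊤
    elif kind == "and":
        _, c1, c2 = expr
        for d in refine(c1):
            yield ("and", d, c2)                       # refine either conjunct
        for d in refine(c2):
            yield ("and", c1, d)
    elif kind == "some":
        _, r, c = expr
        for d in refine(c):
            yield ("some", r, d)                       # refine the filler
```

A learning algorithm starts from ("atom", "Thing") (i.e. ⊤) and searches the tree this operator spans, scoring each candidate expression by how well it covers the positive and negative examples; the operator properties discussed above (completeness, properness, finiteness) determine whether such a search can, even in principle, reach every solution without redundancy.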
106

Representing and Reasoning on Conceptual Queries Over Image Databases

Rigotti, Christophe, Hacid, Mohand-Saïd 20 May 2022 (has links)
The problem of content management for multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for handling such data types, which require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model consisting of three layers: (1) the Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) the Object Layer, which provides the (conceptual) content dimension of images; and (3) the Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification, and complexity analysis of algorithms. As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to materialized views to process and optimize queries may be extremely useful. To that end, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound, complete, and relatively efficient.
/ This is an extended version of the article in: Eleventh International Symposium on Methodologies for Intelligent Systems, Warsaw, Poland, 1999.
107

The Logic on Laws iALC: Implementation of Soundness and Completeness Proofs and a Proposal for Formalization of Brazilian Law

BERNARDO PINTO DE ALKMIM 19 March 2020 (has links)
The logic iALC is a description logic with an intuitionistic character, created to deal with legal texts as an alternative to the more commonly used deontic logic, since it avoids problems encountered when using the latter. In this dissertation, we introduce the core concepts of iALC, argue for its use instead of other logics for the formalization of laws, implement its soundness and completeness proofs in the Lean (L∃∀N) proof assistant, and present a proposal for formalizing Brazilian law in iALC. Furthermore, we show an example application of this formalization to reasoning about multiple-choice questions from the first part of the OAB Exam (the Brazilian national bar exam), which aims to assess candidates' aptitude to practice law in Brazil. Three example questions are presented, whose characteristics are discussed and compared to one another.
108

LTCS-Report

Technische Universität Dresden 17 March 2022 (has links)
This series consists of technical reports produced by the members of the Chair for Automata Theory at TU Dresden. The purpose of these reports is to provide detailed information (e.g., formal proofs, worked out examples, experimental results, etc.) for articles published in conference proceedings with page limits. The topics of these reports lie in different areas of the overall research agenda of the chair, which includes Logic in Computer Science, symbolic AI, Knowledge Representation, Description Logics, Automated Deduction, and Automata Theory and its applications in the other fields.
109

Integration of defeasible argumentation and ontologies in the context of the Semantic Web: formalization and applications

Gómez, Sergio Alejandro 25 June 2009 (has links)
Today's World Wide Web consists mainly of documents written for visual presentation to human users. However, to realize the full potential of the web, computer programs, or agents, must be able to understand the information present on it. In this sense, the Semantic Web is a vision of a future web in which information has an exact meaning, allowing computers to understand and reason on the basis of the information found there. The Semantic Web proposes to solve the problem of assigning semantics to web resources by means of metadata whose meaning is given through ontology definitions, which are formalizations of the knowledge of an application domain. The World Wide Web Consortium standard proposes that ontologies be defined in the OWL language, which is based on Description Logics. Although ontology definitions expressed in Description Logics can be processed by standard reasoners, such reasoners are unable to deal with inconsistent ontologies. Argumentation systems are a formalization of defeasible reasoning that places special emphasis on the notion of argument; the construction of arguments allows an agent to draw conclusions in the presence of incomplete and potentially contradictory information. In particular, Defeasible Logic Programming (DeLP) is a formalism based on defeasible argumentation and Logic Programming. In this dissertation, the importance of ontology definitions for realizing the Semantic Web initiative, together with the presence of incomplete and potentially contradictory ontologies, motivated the development of a framework for reasoning with so-called δ-ontologies.
Previous research by other authors established that a subset of Description Logics can be effectively translated into a subset of Logic Programming. Our proposal assigns semantics to ontologies expressed in Description Logics by means of Defeasible Logic Programs in order to deal with inconsistent ontology definitions in the Semantic Web. That is, given an OWL ontology expressed in the OWL-DL language, an equivalent ontology expressed in Description Logics can be built; when this ontology satisfies certain restrictions, it can be expressed as a DeLP program P. Therefore, given a query about the membership of an instance a in a certain concept C, posed with respect to the OWL ontology, a dialectical analysis is performed on P to determine all the reasons for and against the plausibility of the claim C(a). On the other hand, data integration is the problem of combining data residing in different sources and providing the user with a unified view of those data. The problem of designing data integration systems is particularly important in the context of Semantic Web applications, where ontologies are developed independently of one another and may therefore be mutually inconsistent. Given an ontology, we are interested in knowing under what conditions an individual is an instance of a certain concept. Since, when several ontologies are involved, the same concepts may have different names for the same meaning, or even the same names for different meanings, bridge (or articulation) rules were used to relate the concepts of two different ontologies. In this way, a concept corresponds to a view over concepts of another ontology.
We also show under what conditions the δ-ontology reasoning proposal can be adapted to the two types of ontology integration considered in the specialized literature, global-as-view and local-as-view. In addition, we analyze the formal properties that follow from this novel approach to handling inconsistent ontologies in the Semantic Web. The main results are that, since the interpretation of δ-ontologies as Defeasible Logic Programs is carried out through a transformation function that preserves the semantics of the ontologies involved, the answers obtained when posing queries are sound; we also show that the operator presented is, moreover, consistent and meaningful. The ability to reason in the presence of inconsistent ontologies makes it possible to tackle effectively certain application problems in the field of electronic commerce, where the business-rule model can be specified in terms of ontologies. Reasoning with inconsistent ontologies then enables conceptually clearer alternative approaches, since it becomes possible to automate certain business decisions made in the light of a possibly inconsistent set of business rules expressed as one or several ontologies, and to have a system capable of explaining why a given conclusion was reached. Consequently, we present an application of reasoning over inconsistent ontologies by means of defeasible argumentation to the modeling of forms on the World Wide Web. The notion of forms as a way of organizing and presenting data has been used since the beginning of the World Wide Web.
Web forms have evolved along with the development of new markup languages, in which validation scripts can be provided as part of the form code to verify that the intended meaning of the form is correct. For the form designer, however, part of this intended meaning frequently involves other features that are not constraints in themselves, but rather emergent attributes of the form, which yield plausible conclusions in the context of incomplete and potentially contradictory information. Since the value of such attributes can change in the presence of new knowledge, we call them defeasible attributes. We therefore propose extending web forms to incorporate defeasible attributes as part of the knowledge that can be encoded by the form designer, by means of so-called δ-forms; this knowledge can be specified as a DeLP program and, subsequently, as an ontology expressed in Description Logics.
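The core idea of translating DL inclusion axioms to defeasible rules and answering membership queries argumentatively can be sketched in a drastically simplified form. Everything here is invented for illustration (the tuple encodings, the penguin/bird axioms) and the machinery is deliberately naive: real DeLP resolves conflicts by comparing arguments in dialectical trees, whereas this sketch only detects that a conflict exists, and it assumes an acyclic axiom set.

```python
# Drastically simplified sketch of the DL-to-DeLP idea; not the dissertation's
# actual transformation. Assumes acyclic axioms (no termination check).

def translate(axioms):
    """Each inclusion (A, B), read A ⊑ B, becomes a defeasible rule b(X) -< a(X)."""
    return [(b, a) for a, b in axioms]

def derivable(concept, individual, facts, rules):
    """Naive backward chaining: can concept(individual) be defeasibly derived?"""
    if (concept, individual) in facts:
        return True
    return any(head == concept and derivable(body, individual, facts, rules)
               for head, body in rules)

def query(concept, individual, facts, rules):
    """Answer a membership query, flagging conflicting derivations."""
    pro = derivable(concept, individual, facts, rules)
    con = derivable(("not", concept), individual, facts, rules)
    if pro and con:
        return "undecided"   # DeLP would instead weigh the competing arguments
    return "yes" if pro else ("no" if con else "unknown")
```

With the inconsistent axioms penguin ⊑ bird, bird ⊑ flies, penguin ⊑ ¬flies and the fact penguin(tweety), a standard DL reasoner would be unable to proceed; here the query flies(tweety) yields arguments both for and against, which is exactly the situation the dialectical analysis of δ-ontologies is designed to adjudicate.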
110

An exploratory study using the predicate-argument structure to develop methodology for measuring semantic similarity of radiology sentences

Newsom, Eric Tyner 12 November 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The amount of information produced as electronic free text in healthcare is increasing beyond the levels that humans can process for the advancement of their professional practice. Information extraction (IE) is a sub-field of natural language processing whose goal is the data reduction of unstructured free text. Pertinent to IE is an annotated corpus that frames how IE methods should create the logical expressions necessary for processing the meaning of text. Most annotation approaches seek to maximize meaning and knowledge by chunking sentences into phrases and mapping these phrases to a knowledge source to create a logical expression. However, these studies consistently have problems addressing semantics, and none have addressed the issue of semantic similarity (or synonymy) needed to achieve data reduction. A successful methodology for data reduction depends on a framework that can represent currently popular phrasal methods of IE while also fully representing the sentence. This study explores and reports on the benefits, problems, and requirements of using the predicate-argument structure (PAS) as that framework. The text from which PAS structures are formed is a convenience sample from a prior study: ten synsets of 100 unique sentences from radiology reports, deemed by domain experts to mean the same thing.
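A hypothetical illustration of the underlying idea: two sentences are compared by the overlap of their (predicate, role, argument) triples rather than their surface words. The frames below are hand-written stand-ins; the study derives such structures from radiology sentences with NLP tooling, and this simple set overlap is not the study's measure, only a sketch of why the PAS representation enables similarity comparison at all.

```python
# Hypothetical sketch: sentence similarity as overlap of predicate-argument
# triples. The example frames are invented, not from the study's corpus.

def pas_similarity(frame1, frame2):
    """Jaccard overlap of two sets of (predicate, role, argument) triples."""
    a, b = set(frame1), set(frame2)
    return len(a & b) / len(a | b) if a | b else 1.0
```

Two paraphrases that share a predicate and a key argument ("radiograph shows effusion" vs. "image shows effusion") score above zero even when their word choice differs, which is the kind of synonymy-aware data reduction the abstract argues phrasal chunking alone cannot deliver.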
