231 |
Learning OWL Class Expressions / Lehmann, Jens, 24 June 2010
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems.
However, recent progress in the field is hampered by a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.
Leveraging machine-learning approaches for these tasks requires methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases where extensional information (i.e. facts, instance data) is abundantly available, while the corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work.
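To make the learning task concrete, here is a toy illustration (our own; it mirrors the kind of example used in this line of work rather than quoting the thesis): given facts about family relations and labelled individuals, the learner searches for a class expression separating positives from negatives.

```latex
% Illustrative class-expression learning problem (our toy example).
% ABox facts:  Male(john), Male(peter), Female(anna),
%              hasChild(john, anna), hasChild(peter, john)
% Positive examples: {john, peter}    Negative examples: {anna}
% One class expression covering all positives and no negatives:
\mathit{Male} \sqcap \exists\, \mathit{hasChild}.\top
% (in OWL Manchester syntax: Male and (hasChild some Thing))
```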
The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future.
The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold:
The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are therefore a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for use in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations had already been expressed in several articles prior to this PhD work. The theoretical limitations established by these investigations provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the description language used.
The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors; it is shown to be complete and extensible to a proper operator, making it the most expressive operator designed for a description language so far. The second operator targets the lightweight language EL and is weakly complete, proper, and finite; it is straightforward to extend it to an ideal operator if required, making it the first published ideal refinement operator in description logics. While the two operators differ considerably in their technical details, both use background knowledge efficiently.
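The following sketch (ours, in Python; it is not the thesis's operator, and the toy hierarchy and role names are assumptions) shows the shape of a downward refinement step in an EL-like language: specialize an atomic concept along the subsumption hierarchy, add a conjunct, or refine inside an existential restriction.

```python
from dataclasses import dataclass
from typing import Iterator

# Toy EL-style concepts: C ::= Atomic | And(C, ..., C) | Exists r.C
@dataclass(frozen=True)
class Atomic:
    name: str

@dataclass(frozen=True)
class And:
    parts: tuple

@dataclass(frozen=True)
class Exists:
    role: str
    filler: object

# Assumed background knowledge: direct subconcepts and available roles.
SUBCONCEPTS = {"Thing": ["Person", "Place"], "Person": ["Male", "Female"]}
ROLES = ["hasChild"]

def refine(c) -> Iterator[object]:
    """One downward refinement step (illustrative; not complete or proper)."""
    if isinstance(c, Atomic):
        for sub in SUBCONCEPTS.get(c.name, []):    # specialize the atom
            yield Atomic(sub)
        for sub in SUBCONCEPTS["Thing"]:           # add a conjunct
            yield And((c, Atomic(sub)))
        for r in ROLES:                            # add an existential
            yield And((c, Exists(r, Atomic("Thing"))))
    elif isinstance(c, And):
        for i, part in enumerate(c.parts):         # refine one conjunct
            for ref in refine(part):
                yield And(c.parts[:i] + (ref,) + c.parts[i + 1:])
    elif isinstance(c, Exists):
        for ref in refine(c.filler):               # refine the filler
            yield Exists(c.role, ref)

for r in refine(Atomic("Person")):
    print(r)
```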
The third contribution consists of the actual learning algorithms built on the introduced operators. New redundancy-elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state of the art in machine learning. Several optimisations for achieving scalability are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach.
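The stochastic coverage idea can be pictured with a short sketch (ours; the thesis's actual procedure may differ): rather than running an instance check on every individual, coverage is estimated on a random sample, and only promising candidates need the full computation.

```python
import random

def estimated_coverage(covers, individuals, sample_size=100, seed=0):
    """Estimate the fraction of individuals covered by a class expression.

    covers(ind) -> bool stands in for an instance check (e.g. a reasoner
    call); sampling avoids paying its cost on every individual.
    """
    rng = random.Random(seed)
    sample = (individuals if len(individuals) <= sample_size
              else rng.sample(individuals, sample_size))
    return sum(1 for ind in sample if covers(ind)) / len(sample)

# Usage with a dummy instance check over 10,000 individuals:
inds = list(range(10_000))
print(estimated_coverage(lambda i: i % 3 == 0, inds))   # roughly 0.33
```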
The research contributions are evaluated on benchmark problems and in use cases. Standard statistical procedures such as cross-validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open source and has been incorporated into other projects.
|
232 |
Representação de conhecimento : programação em lógica e o modelo das hiperredes / Knowledge representation: logic programming and the hypernets model / Palazzo, Luiz Antonio Moro, January 1991
In spite of its inherent undecidability and the negation problem, extensions of first-order logic have proven able to overcome the question of monotonicity, establishing knowledge representation schemata of virtually universal expressiveness. However, one still has to solve, or at least soften the consequences of, the control problem, which constrains the use of logic-based systems to small and medium-sized applications. Investigations in this direction [BOW 85] [MON 88] indicate that the key to overcoming the inferential explosion necessarily lies in structuring the knowledge so as to allow some control over the derivations it admits. The hypernets model [GEO 85] seems to reach this goal, given its high structuring power and the features it offers for handling descriptive, operational and organizational constructs. Besides, the simplicity and syntactic uniformity of its primitive entities allow a very clear semantic interpretation of the model, based, for instance, on graphs. This work is an attempt to associate logic programming with the hypernets formalism in order to obtain a new model that preserves the expressiveness of the former while benefiting from the heuristic and structural power of the latter. First, we seek a clear notion of the nature of knowledge and its mechanisms so as to characterize the knowledge representation problem. Several schemata currently employed for this purpose (production systems, semantic networks, frame systems, logic programming and the Krypton language) are studied and characterized with respect to their expressiveness, heuristic power and notational convenience. Logic programming is then studied in greater depth, under both the model-theoretic and the proof-theoretic approaches. Logic programming systems, in particular the Prolog language and meta-level extensions, are investigated as knowledge representation schemata, considering their syntactic and semantic aspects and their relation to database management systems. The hypernets model is presented, introducing, among others, the concepts of hypernode, hyperrelation and prototype, as well as the particular properties of these entities. The Hyper language, for handling hypernets, is formally specified. Prolog is employed as a formalism for representing knowledge bases structured according to the hypernets model. Under this approach, a knowledge base is seen as a (possibly empty) set of structured objects, or pieces of knowledge, classified as hypernodes, hyperrelations or prototypes. A top-down mechanism for producing inferences over hypernets is proposed, introducing the concepts of aspect and vision over hypernets, which are treated as first-class objects in the sense that they can be assigned as values to variables. Finally, we study the requirements that a knowledge base management system must meet, from the points of view of the application, of knowledge engineering and of the implementation, in order to effectively support the concepts and abstractions (classification, generalization, association and aggregation) associated with the proposed model.
Based on the conclusions of this study, a Knowledge Base Management System (called Rhesus, in reference to its experimental purpose) is proposed and specified, with the aim of confirming the technical viability of developing applications based on logic and hypernets.
|
233 |
Logics of belief / Viljoen, Elizabeth, 04 1900
The inadequacy of the usual possible-world semantics of modal languages, when the meaning of 'belief' is attached to the modal operator, is discussed. Three other approaches are then investigated. In the case of Moore's autoepistemic logic it becomes possible to compare an agent's beliefs to 'reality', which cannot be done directly in the possible-world semantics. Levesque's semantics makes explicit in the object language the notion of 'this is all the information the agent has', which plays an important role in nonmonotonic reasoning. Both of these approaches deal with ideal reasoners. The third approach, Konolige's deduction model, is based on a semantics capable of describing the beliefs of one or more resource-bounded agents. Finally, the AGM postulates for belief revision are discussed. / Computer Science / M.Sc. (Computer Science)
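For reference, the fixed-point condition behind Moore's autoepistemic logic, a standard formulation from the general literature rather than a quotation from this dissertation, is:

```latex
% A stable expansion E of an autoepistemic theory T satisfies
E = \mathrm{Cn}\bigl(T \;\cup\; \{\, L\varphi : \varphi \in E \,\}
      \;\cup\; \{\, \neg L\varphi : \varphi \notin E \,\}\bigr)
% where L is the belief operator and Cn is classical consequence.
```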
|
235 |
OntoILPER: an ontology- and inductive logic programming-based method to extract instances of entities and relations from texts / Lima, Rinaldo José de; Freitas, Frederico Luiz Gonçalves de, 31 January 2014
Information Extraction (IE) consists in the task of discovering and structuring information found
in a semi-structured or unstructured textual corpus. Named Entity Recognition (NER) and Relation
Extraction (RE) are two important subtasks in IE. The former aims at finding named entities,
including the name of people, locations, among others, whereas the latter consists in detecting
and characterizing relations involving such named entities in text. Since the approach of manually
creating extraction rules for performing NER and RE is a labour-intensive and time-consuming task,
researchers have turned their attention to how machine learning techniques can be applied to
IE in order to make IE systems more adaptive to domain changes. As a result, a myriad of
state-of-the-art methods for NER and RE relying on statistical machine learning techniques
have been proposed in the literature. Such systems typically use a propositional hypothesis
space for representing examples, i.e., an attribute-value representation. In machine learning, the
propositional representation of examples presents some limitations, particularly in the extraction
of binary relations, which mainly demands not only contextual and relational information about
the instances involved, but also more expressive semantic resources as background knowledge.
This thesis attempts to mitigate the aforementioned limitations based on the hypothesis that, to
be efficient and more adaptable to domain changes, an IE system should exploit ontologies and
semantic resources in a framework for IE that enables the automatic induction of extraction rules
by employing machine learning techniques. In this context, this thesis proposes a supervised
method to extract both entity and relation instances from textual corpora based on Inductive
Logic Programming, a symbolic machine learning technique. The proposed method, called
OntoILPER, benefits not only from ontologies and semantic resources, but also relies on a highly
expressive relational hypothesis space, in the form of logical predicates, for representing examples
whose structure is relevant to the information extraction task. OntoILPER automatically
induces symbolic extraction rules that subsume examples of entity and relation instances from
a tailored graph-based model of sentence representation, another contribution of this thesis.
Moreover, this graph-based model for representing sentences also enables the exploitation of
domain ontologies and additional background knowledge in the form of a condensed set of
features including lexical, syntactic, semantic, and relational ones. Differently from most of
the IE methods (a comprehensive survey is presented in this thesis, including the ones that also
apply ILP), OntoILPER takes advantage of a rich text preprocessing stage which encompasses
various shallow and deep natural language processing subtasks, including dependency parsing,
coreference resolution, word sense disambiguation, and semantic role labeling. Further mappings
of nouns and verbs to (formal) semantic resources are also considered. OntoILPER Framework,
the OntoILPER implementation, was experimentally evaluated on both NER and RE tasks.
This thesis reports the results of several assessments conducted using six standard evaluation corpora from two distinct domains: news and biomedical. The obtained results demonstrated
the effectiveness of OntoILPER on both NER and RE tasks. Actually, the proposed framework
outperforms some of the state-of-the-art IE systems compared in this thesis.
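The graph-based sentence model can be sketched roughly as follows (a minimal illustration in Python; the node and edge attributes are our assumptions, and OntoILPER's actual representation is considerably richer):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TokenNode:
    index: int
    form: str
    lemma: str
    pos: str
    sense: str = ""        # e.g. a word sense or ontology class

@dataclass
class SentenceGraph:
    nodes: Dict[int, TokenNode] = field(default_factory=dict)
    edges: List[Tuple[int, str, int]] = field(default_factory=list)

    def add_dep(self, head: int, label: str, dep: int) -> None:
        """Add a labelled dependency edge between two tokens."""
        self.edges.append((head, label, dep))

g = SentenceGraph()
g.nodes[1] = TokenNode(1, "Curie", "Curie", "NNP", sense="Person")
g.nodes[2] = TokenNode(2, "discovered", "discover", "VBD")
g.nodes[3] = TokenNode(3, "polonium", "polonium", "NN", sense="Chemical")
g.add_dep(2, "nsubj", 1)
g.add_dep(2, "dobj", 3)
print(g.edges)             # relational structure available to the learner
```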
|
236 |
Apprentissage de connaissances structurelles à partir d’images satellitaires et de données exogènes pour la cartographie dynamique de l’environnement amazonien / Structural knowledge learning from satellite images and exogenous data for dynamic mapping of the Amazonian environment / Bayoudh, Meriam, 06 December 2013
Classical methods for satellite image analysis are inadequate for the current bulky data flow. Automating the interpretation of such images therefore becomes crucial for the analysis and management of phenomena, observable by satellite, that change in time and space. This work aims at automating land cover cartography from satellite images through expressive and easily interpretable mechanisms that explicitly take the structural aspects of geographic information into account. It is part of the object-based image analysis framework and assumes that useful contextual knowledge can be extracted from maps. First, a supervised parameterization method for an image segmentation algorithm is proposed. Second, a supervised classification of geographical objects is presented, combining machine learning by inductive logic programming with the multi-class rule set intersection approach. These approaches are applied to the cartography of the French Guiana coastline. The results demonstrate the feasibility of the segmentation parameterization, but also its variability as a function of the reference map classes and of the input data; methodological developments nevertheless allow an operational implementation of such an approach to be considered. The results of the supervised object classification show that it is possible to induce expressive classification rules that convey consistent and structural information in a given application context and lead to reliable predictions, with overall accuracy and Kappa values of 84.6% and 0.7, respectively. This thesis thus contributes to the automation of dynamic cartography from remotely sensed images and proposes original and promising perspectives.
|
237 |
Computational Issues in Calculi of Partial Inductive Definitions / Kreuger, Per, January 1995
We study the properties of a number of algorithms proposed to explore the computational space generated by a very simple and general idea: the notion of a mathematical definition, together with a number of suggested formal interpretations of this idea. Theories of partial inductive definitions (PID) constitute a class of logics based on the notion of an inductive definition. Formal systems based on this notion can be used to generalize Horn logic and naturally allow and suggest extensions which differ in interesting ways from generalizations based on first-order predicate calculus. E.g. the notion of completion generated by a calculus of PID, and the resulting notion of negation, is completely natural and does not require externally motivated procedures such as "negation as failure". For this reason, computational issues arising in these calculi deserve closer inspection. This work discusses a number of finitary theories of PID and analyzes the algorithmic and semantical issues that arise in each of them. There has been significant work on implementing logic programming languages in this setting, and we briefly present the programming language and knowledge modelling tool GCLA II, in which many of the computational problems discussed arise naturally in practice. / Also published as SICS Dissertation no. SICS-D-19
|
238 |
From Logic Programming to Human Reasoning: / Dietz Saldanha, Emmanuelle-Anna, 22 August 2017
Results of psychological experiments have shown that humans make assumptions which are not necessarily valid, that they are influenced by their background knowledge, and that they reason non-monotonically. These observations suggest that classical logic is not adequate for modeling human reasoning. Instead of assuming that humans do not reason logically at all, we take the view that humans do not reason according to classical logic. Our goal is to model episodes of human reasoning, and for this purpose we investigate the so-called Weak Completion Semantics. The Weak Completion Semantics is a logic programming approach that considers the least model of the weak completion of logic programs under the three-valued Łukasiewicz logic.
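As a compact illustration (our sketch, for propositional programs only, not the thesis's code), the least model of the weak completion can be computed by iterating a three-valued consequence operator: an atom becomes true if some clause body for it is true, false if it has clauses and all their bodies are false, and stays unknown otherwise.

```python
# Least model of the weak completion of a propositional logic program,
# computed by iterating the consequence operator under three-valued
# Lukasiewicz logic. Clauses are (head, body) pairs; '~p' negates p;
# an empty body encodes "head <- true".

def eval_lit(lit, true, false):
    atom = lit.lstrip("~")
    val = "T" if atom in true else ("F" if atom in false else "U")
    if lit.startswith("~"):
        val = {"T": "F", "F": "T", "U": "U"}[val]
    return val

def eval_body(body, true, false):
    vals = [eval_lit(l, true, false) for l in body] or ["T"]
    if all(v == "T" for v in vals):
        return "T"
    return "F" if any(v == "F" for v in vals) else "U"

def least_model(program):
    true, false = set(), set()
    heads = {h for h, _ in program}
    while True:
        new_true = {h for h, b in program if eval_body(b, true, false) == "T"}
        new_false = {h for h in heads
                     if all(eval_body(b, true, false) == "F"
                            for hh, b in program if hh == h)}
        if (new_true, new_false) == (true, false):
            return true, false   # everything else stays unknown
        true, false = new_true, new_false

# "If she has an essay to write, she studies" plus the fact "essay":
print(least_model([("study", ["essay"]), ("essay", [])]))
# -> ({'essay', 'study'}, set())
```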
As the Weak Completion Semantics is relatively new and has not yet been extensively investigated, we first motivate why this approach is interesting for modeling human reasoning. After that, we show the formal correspondence to the already established Stable Model Semantics and Well-Founded Semantics. Next, we present an extension with an additional context operator that allows us to express negation as failure. Finally, we propose a contextual abductive reasoning approach in which the context of observations is relevant. Some properties no longer hold under this extension. Besides discussing the well-known psychological experiments of Byrne's suppression task and Wason's selection task, we investigate an experiment in spatial reasoning, an experiment in syllogistic reasoning, and an experiment that examines the belief-bias effect. We show that the results of these experiments can be adequately modeled under the Weak Completion Semantics. One result stands out here: in modeling the syllogistic reasoning experiment, we achieve a higher match with the participants' answers than any of twelve current cognitive theories.
We present an abstract evaluation system for conditionals and discuss well-known examples from the literature. We show that in this system conditionals can be evaluated in various ways, and we advance the hypothesis that humans use a particular evaluation strategy, namely that they prefer abduction to revision. We also discuss how relevance plays a role in the evaluation of conditionals. For this purpose we propose a semantic definition of relevance and justify why it is preferable to an exclusively syntactic definition. Finally, we show that our system is more general than another system recently presented in the literature.
Altogether, this thesis shows one possible path towards bridging the gap between Cognitive Science and Computational Logic. We investigated findings from psychological experiments and modeled their results within one formal approach, the Weak Completion Semantics. Furthermore, we proposed a general evaluation system for conditionals, for which we suggest a specific evaluation strategy. The outcome should not be seen as an ultimate solution, but as a starting point for new open questions in both areas.
|
239 |
Les systèmes cognitifs dans les réseaux autonomes : une méthode d'apprentissage distribué et collaboratif situé dans le plan de connaissance pour l'auto-adaptation / Cognitive systems in autonomic networks: a distributed and collaborative learning method in the knowledge plane for self-adaptation / Mbaye, Maïssa, 17 December 2009
One of the major challenges for decades to come, in the field of information and communication technologies, is the realization of the autonomic-networking paradigm. It aims to enable network equipment to manage itself: to self-configure, self-optimize, self-protect and self-heal according to the high-level objectives of its designers. Major autonomic-networking architectures are based on a closed control loop allowing self-adaptation (self-configuration and self-optimization) of network equipment according to events arising in its environment. The knowledge plane is one approach, much emphasized by researchers in recent years, which suggests using cognitive systems (machine learning and reasoning) to close the control loop. However, although the major autonomic architectures integrate machine-learning modules as functional blocks, little research has really looked inside these blocks. In this context, we studied the potential contribution of machine learning and proposed a distributed and collaborative learning method. We formalize the self-adaptation problem as one of learning configuration strategies (state-action pairs). This formalization allows us to define a strategy-learning method for self-adaptation that is based on the history of observed transitions and uses inductive logic programming to discover new strategies from those already discovered. We also define a knowledge-sharing algorithm that makes network components collaborate to accelerate the learning process. Finally, we tested the proposed approach in a DiffServ context and showed its transposition to multimedia streaming in 802.11 wireless networks.
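The sharing idea can be caricatured in a few lines (our toy model; the thesis's ILP-based induction and its sharing protocol are more elaborate): each node keeps the best-scoring adaptation strategy it has seen per situation and adopts better ones advertised by neighbours.

```python
class Node:
    """Toy autonomic-network node that learns and shares strategies."""

    def __init__(self, name):
        self.name = name
        self.strategies = {}                 # state -> (action, score)

    def learn(self, history):
        """Keep the best-rewarded action per observed state."""
        for state, action, reward in history:
            _, best = self.strategies.get(state, (None, float("-inf")))
            if reward > best:
                self.strategies[state] = (action, reward)

    def share_with(self, other):
        """Adopt a neighbour's strategies when they score better."""
        for state, (action, score) in other.strategies.items():
            _, best = self.strategies.get(state, (None, float("-inf")))
            if score > best:
                self.strategies[state] = (action, score)

a, b = Node("a"), Node("b")
a.learn([("congested", "raise_drop_rate", 0.8)])
b.learn([("congested", "reclassify_flows", 0.5)])
b.share_with(a)                              # b adopts a's better strategy
print(b.strategies)                          # {'congested': ('raise_drop_rate', 0.8)}
```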
|
240 |
Answer set programming probabilístico / Probabilistic Answer Set Programming / Eduardo Menezes de Morais, 10 December 2012
This dissertation introduces a technique called Probabilistic Answer Set Programming (PASP), which allows modeling complex theories and checking their consistency with respect to a set of statistical data. We propose resolution methods based on a reduction to the probabilistic satisfiability problem (PSAT) and a Turing reduction to ASP.
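The PSAT target of the reduction has a standard linear-programming reading (a textbook formulation, not necessarily the exact encoding used in the dissertation): an assessment is consistent iff some probability distribution over truth assignments matches it.

```latex
% Given formulas \varphi_1,\dots,\varphi_m with assessed probabilities
% p_1,\dots,p_m and truth assignments (worlds) w_1,\dots,w_n, the
% assessment is PSAT-consistent iff
\exists\, \pi \in \mathbb{R}^n:\quad \pi \ge 0,\quad
\sum_{j=1}^{n} \pi_j = 1,\quad
\sum_{j\,:\,w_j \models \varphi_i} \pi_j = p_i \quad (i=1,\dots,m).
```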
|