21

"Aquisição de conhecimento de conjuntos de exemplos no formato atributo valor utilizando aprendizado de máquina relacional"

Mariza Ferro 17 September 2004 (has links)
Machine Learning addresses the question of how to build computer programs that learn a concept or hypothesis from a set of examples, objects or cases. Based on the training set, the learning algorithm induces a classification hypothesis capable of correctly determining the class of new, still unlabelled examples. Description languages are needed to describe the examples, the domain knowledge, and the hypotheses learned from those examples. In general, these languages can be divided into two types: attribute-value (propositional) languages and relational languages. Learning algorithms are classified as propositional or relational according to the description language they use. Moreover, in symbolic learning the goal is to generate classification hypotheses that can be easily interpreted by humans. Typical propositional learning algorithms employ the attribute-value representation, which is inadequate for problem domains that require reasoning about the structure of objects and the relations among those objects. Inductive Logic Programming (ILP), on the other hand, is concerned with the development of techniques and tools for relational learning. ILP systems are able to take domain knowledge into account in the form of a logic program and also use the language of logic programs to describe the induced knowledge or hypotheses. In this work we propose and implement a module, named Kaeru, that converts data in the attribute-value format into the relational format used by the ILP system Aleph. We describe a series of experiments performed on four natural data sets and one real data set in the attribute-value format. Using the Kaeru module these data sets were converted into the relational format used by Aleph, and classification hypotheses were induced using propositional as well as relational learning. We also show that propositional learning can be used to augment the background knowledge used by relational learners in order to improve the quality of the induced hypotheses.
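As a concrete illustration of the kind of conversion such a module performs, the sketch below turns one attribute-value example into relational facts and declares a learning bias for Aleph; the toy attributes, predicate names and type names are assumptions made here for illustration, not the actual Kaeru output.

```prolog
% One attribute-value example, say
%   id = d1, outlook = sunny, humidity = high, class = dont_play,
% becomes one relational fact per attribute, keyed by the example id.
outlook(d1, sunny).
humidity(d1, high).
class(d1, dont_play).

% Aleph-style bias declarations (meaningful only once Aleph is loaded):
% modeh names the predicate to learn, modeb the literals allowed in
% rule bodies, and determination links the two.
:- modeh(1, class(+example, #label)).
:- modeb(*, outlook(+example, #outlook)).
:- modeb(*, humidity(+example, #humidity)).
:- determination(class/2, outlook/2).
:- determination(class/2, humidity/2).
```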
22

Knowledge Representation, Reasoning and Learning for Non-Extractive Reading Comprehension

January 2019 (has links)
abstract: While in recent years deep learning (DL) based approaches have been the popular way to build end-to-end question answering (QA) systems, such systems lack several desired properties, such as the ability to do sophisticated reasoning with knowledge, the ability to learn using fewer resources, and interpretability. In this thesis, I explore solutions that aim to address these drawbacks. Towards this goal, I work with a specific family of reading comprehension tasks, normally referred to as Non-Extractive Reading Comprehension (NRC), where the given passage does not contain enough information and, to answer correctly, sophisticated reasoning and "additional knowledge" are required. I have organized the NRC tasks into three categories. Here I present my solutions to the first two categories and some preliminary results on the third category. Category 1 NRC tasks refer to scenarios where the required "additional knowledge" is missing but a decent natural language parser exists. For these tasks, I learn the missing "additional knowledge" with the help of the parser and a novel inductive logic programming algorithm. The learned knowledge is then used to answer new questions. Experiments on three NRC tasks show that this approach, along with providing an interpretable solution, achieves accuracy better than or comparable to that of the state-of-the-art DL-based approaches. Category 2 NRC tasks refer to the alternative scenario where the "additional knowledge" is available but no natural language parser works well for the sentences of the target domain. To deal with these tasks, I present a novel hybrid reasoning approach which combines symbolic and natural language inference (neural reasoning) and ultimately allows symbolic modules to reason over raw text without requiring any translation. Experiments on two NRC tasks show its effectiveness. Category 3 tasks provide neither the missing "additional knowledge" nor a good parser. This thesis does not provide an interpretable solution for this category, but it does offer some preliminary results and analysis of a pure DL-based approach. Nonetheless, the thesis shows that, beyond the world of pure DL-based approaches, there are tools that can offer interpretable solutions for challenging tasks without using many resources and possibly with better accuracy. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
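As a purely hypothetical illustration of the Category 1 idea of learning rules and then using them to answer new questions, the Prolog toy below pairs parser-style facts with one learned rule; the mini-passage, the predicates and the rule are all invented here and do not come from the thesis or its datasets.

```prolog
% Facts a parser might extract from the passage
% "John picked up the ball. John went to the garden."
happens(pickup(john, ball), 1).
happens(go(john, garden), 2).

% A hypothetical learned rule: an object is at any place its holder
% went to after picking it up.
location(Obj, Place, T) :-
    happens(pickup(Agent, Obj), T1), T1 =< T,
    happens(go(Agent, Place), T2), T1 =< T2, T2 =< T.

% ?- location(ball, Where, 3).   % Where = garden
```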
23

An investigation into theory completion techniques in inductive logic programming

Moyle, Stephen Anthony January 2003 (has links)
Traditional Inductive Logic Programming (ILP) focuses on the setting where the target theory is a generalisation of the observations. This is known as Observational Predicate Learning (OPL). In the Theory Completion setting the target theory is not in the same predicate as the observations (non-OPL). This thesis investigates two alternative simple extensions to traditional ILP to perform non-OPL, or Theory Completion. Both techniques perform extraction-case abduction from an existing background theory and one seed observation. The first technique -- Logical Back-propagation -- modifies the existing background theory so that abductions can be achieved by a form of constructive negation using a standard SLD-resolution theorem prover. The second technique -- SOLD-resolution -- modifies the theorem prover and leaves the existing background theory unchanged. It is shown that all abductions produced by Logical Back-propagation can also be generated by SOLD-resolution, but the reverse does not hold. The implementation using the SOLD-resolution technique -- the ALECTO system -- was applied to the problems of completing context-free and context-dependent grammars, and of learning Event Calculus programs. It was successfully able to learn an Event Calculus program to control the navigation of a real-life robot. The Event Calculus is a formalism for representing common-sense knowledge. It follows that some common-sense knowledge was discovered with the assistance of a machine.
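To give a flavour of the Event Calculus programs that systems like ALECTO target, here is a heavily simplified sketch of the core axioms with a toy robot-navigation narrative; the exact axiomatisation and predicates used in the thesis may differ.

```prolog
% Simplified Event Calculus core: a fluent holds at time T if some
% earlier event initiated it and no intervening event terminated
% (clipped) it in between.
holds_at(F, T) :-
    happens(E, T1), T1 < T,
    initiates(E, F, T1),
    \+ clipped(T1, F, T).

clipped(T1, F, T) :-
    happens(E, T2), T1 < T2, T2 < T,
    terminates(E, F, T2).

% Toy robot-navigation domain knowledge and narrative.
initiates(goto(Room), at(Room), _).
terminates(goto(Room), at(Other), _) :- Other \= Room.
happens(goto(kitchen), 1).
happens(goto(hall), 5).

% ?- holds_at(at(kitchen), 3).   % true
% ?- holds_at(at(kitchen), 7).   % false: clipped by goto(hall) at 5
```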
24

Inductive Program Synthesis with a Type System

Torres Padilla, Juan Pablo January 2019 (has links)
No description available.
25

Learning probabilistic relational models: a novel approach. / Aprendendo modelos probabilísticos relacionais: uma nova abordagem.

Mormille, Luiz Henrique Barbosa 17 August 2018 (has links)
While most statistical learning methods are designed to work with data stored in a single table, many large datasets are stored in relational database systems. Probabilistic Relational Models (PRMs) extend Bayesian networks by introducing relations and individuals, thus making it possible to represent information in a relational database. However, learning a PRM from relational data is a more complex task than learning a Bayesian network from "flat" data. The main difficulties that arise while learning a PRM are establishing which dependency structures are legal, searching over possible structures, and scoring them. This thesis focuses on the development of a novel approach to learn the structure of a PRM, describes a package in the R language to support the learning framework, and applies it to a real, large-scale scenario: the city of Atibaia, in the state of São Paulo, Brazil. The research is based on a database combining three different tables, each representing one class in the domain of study. The first table contains 27 attributes for 110,816 citizens of Atibaia. The second table contains 9 attributes for 20,162 companies located in the city. And finally, the third table has 8 attributes for 327 census sectors (small territorial units that comprise the city of Atibaia). The proposed framework is applied to learn a PRM structure and parameters from the database. The model is used to verify whether the social class of a person can be explained by the location where they live, their neighbours, and the companies nearby. Preliminary experiments were conducted and a paper was published in the 2017 Symposium on Knowledge Discovery, Mining and Learning (KDMiLe). The algorithm's performance was further evaluated through extensive experimentation, and a broader study using Serasa Experian data was conducted. Finally, the package in the R language that supports our method was refined, along with proper documentation and a tutorial.
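For reference, the formula below gives the joint distribution a PRM defines in its standard textbook formulation; the notation is generic and may differ from the one used in the thesis.

```latex
% Joint distribution defined by a PRM over a complete instantiation I
% of a relational skeleton \sigma, given structure S and parameters \theta
% (standard formulation; notation may differ from the thesis):
P(I \mid \sigma, S, \theta) \;=\;
  \prod_{X_i} \; \prod_{A \in \mathcal{A}(X_i)} \; \prod_{x \in \sigma(X_i)}
  P\!\left( I_{x.A} \,\middle|\, I_{\mathrm{Pa}(x.A)} \right)
```

Here the classes X_i would correspond to the citizen, company and census-sector tables, A ranges over their descriptive attributes, and Pa(x.A) are the parents selected by the learned dependency structure S.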
26

Logic-based modelling of musical harmony for automatic characterisation and classification

Anglade, Amélie January 2014 (has links)
Harmony is the aspect of music concerned with the structure, progression, and relation of chords. In Western tonal music each period had different rules and practices of harmony. Similarly, some composers and musicians are recognised for their characteristic harmonic patterns, which differ from the chord sequences used by other musicians of the same period or genre. This thesis is concerned with the automatic induction of the harmony rules and patterns underlying a genre, a composer, or more generally a 'style'. Many of the existing approaches for music classification or pattern extraction make use of statistical methods which present several limitations. Typically they are black boxes, cannot be fed with background knowledge, do not take into account the intricate temporal dimension of the musical data, and ignore rare but informative events. To overcome these limitations we adopt first-order logic representations of chord sequences and Inductive Logic Programming techniques to infer models of style. We introduce a fixed-length representation of chord sequences similar to n-grams but based on first-order logic, and use it to characterise symbolic corpora of pop and jazz music. We extend our knowledge representation scheme using context-free definite-clause grammars, which support chord sequences of any length and allow ornamental chords to be skipped, and test it on genre classification problems, on both symbolic and audio data. Through these experiments we also compare various chord and harmony characteristics such as degree, root note, intervals between root notes, and chord labels, and assess their characterisation and classification accuracy, expressiveness, and computational cost. Moreover we extend a state-of-the-art genre classifier based on low-level audio features with such harmony-based models and prove that this can lead to statistically significant classification improvements. We show our logic-based modelling approach can not only compete with and improve on statistical approaches but also provides expressive, transparent and musicologically meaningful models of harmony, which makes it suitable for knowledge discovery purposes.
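As an illustration of how chord sequences can be written in first-order logic and parsed with a definite-clause grammar, the sketch below encodes a single ii-V-I idiom; the chord(Key, Degree) representation and the grammar are assumptions made here for illustration, not the grammar induced in the thesis.

```prolog
% A definite-clause grammar for one harmonic idiom, a ii-V-I cadence
% expressed in scale degrees relative to a key.
cadence_ii_v_i(Key) --> degree(Key, 2), degree(Key, 5), degree(Key, 1).

% A structural degree may be preceded by ornamental chords, which the
% grammar skips, so sequences of arbitrary length can be parsed.
degree(Key, D) --> [chord(Key, D)].
degree(Key, D) --> [chord(_, _)], degree(Key, D).

% ?- phrase(cadence_ii_v_i(c), [chord(c,2), chord(c,5), chord(c,1)]).
% ?- phrase(cadence_ii_v_i(c), [chord(c,2), chord(g,7), chord(c,5), chord(c,1)]).
```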
27

Meta-Interpretive Learning Versus Inductive Metalogic Programming: A Comparative Analysis in Inductive Logic Programming

Pettersson, Emil January 2019 (has links)
Artificial intelligence and machine learning are fields of research that have become very popular and are getting more attention in the media as our computational power increases and the theories and latest developments of these fields can be put into practice in the real world. The field of machine learning consists of different paradigms, two of which are the symbolic and connectionist paradigms. In 1991 Minsky pointed out that we could benefit from sharing ideas between the paradigms instead of competing for dominance in the field. That is why this thesis investigates two approaches to inductive logic programming, where the main research goals are, first, to find similarities or differences between the approaches and potential areas where cross-pollination could be beneficial, and, second, to investigate their performance relative to each other based on the results published in the research. The approaches investigated are Meta-Interpretive Learning and Inductive Metalogic Programming, which belong to the symbolic paradigm of machine learning. The research is conducted through a comparative study based on published research papers. The conclusion of the study suggests that at least two aspects of the approaches could potentially be shared between them, namely the reversible aspect of the meta-interpreter and restricting the hypothesis space using the Herbrand base. However, the findings regarding performance were deemed incompatible in terms of a fair one-to-one comparison. The results of the study are mainly specific to these two approaches, but could be interpreted as motivation for similar collaboration efforts between different paradigms.
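As background for the comparison, the sketch below shows a bare-bones Prolog meta-interpreter over an explicitly stored program, together with one clause that instantiates the well-known "chain" metarule; it is a simplified illustration of the general meta-interpretive machinery, not code from Metagol or from the papers the thesis compares.

```prolog
% A minimal meta-interpreter over an explicitly represented program.
prove(true).
prove((A, B)) :- prove(A), prove(B).
prove(A) :- rule(A, Body), prove(Body).

% Background knowledge stored as rule(Head, Body) facts.
rule(parent(ann, bob), true).
rule(parent(bob, cat), true).

% One clause a learner could add: an instance of the "chain" metarule
%   P(X,Y) :- Q(X,Z), R(Z,Y)   with P = grandparent and Q = R = parent.
rule(grandparent(X, Y), (parent(X, Z), parent(Z, Y))).

% ?- prove(grandparent(ann, Who)).   % Who = cat
```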
28

An ILP-based Concept Discovery System for Multi-Relational Data Mining

Kavurucu, Yusuf 01 July 2009 (has links) (PDF)
Multi-relational data mining has become popular due to the limitations of propositional problem definitions in structured domains and the tendency to store data in relational databases. However, as patterns involve multiple relations, the search space of possible hypotheses becomes intractably complex. In order to cope with this problem, several relational knowledge discovery systems have been developed employing various search strategies, heuristics and language pattern limitations. In this thesis, Inductive Logic Programming (ILP) based concept discovery is studied and two systems based on a hybrid methodology employing ILP and APRIORI, namely Confidence-based Concept Discovery and Concept Rule Induction System, are proposed. In Confidence-based Concept Discovery and Concept Rule Induction System, the main aim is to relax the strong declarative biases and user-defined specifications. Moreover, this new method works directly on relational databases. In addition, the traditional definition of confidence from the relational database perspective is modified to express the Closed World Assumption in first-order logic. A new confidence-based pruning method based on the improved definition is applied in the APRIORI lattice. Moreover, a new hypothesis evaluation criterion is used for expressing the quality of patterns in the search space. In addition, in Concept Rule Induction System, the quality of the constructed rules is further improved by using an improved generalization method. Finally, a set of experiments is conducted on real-world problems to evaluate the performance of the proposed method against similar systems in terms of support and confidence.
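For orientation, the formula below is the conventional, database-style confidence of a concept rule; the thesis's CWA-aware modification of this definition is not reproduced here.

```latex
% Conventional confidence of a concept rule  H \leftarrow B  evaluated
% against a relational database D, where \theta ranges over ground
% substitutions for the variables of the clause:
\mathrm{confidence}(H \leftarrow B) \;=\;
  \frac{\bigl|\{\theta \mid D \models (B \wedge H)\theta\}\bigr|}
       {\bigl|\{\theta \mid D \models B\theta\}\bigr|}
```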
29

Inductive logic programming and applications / Επαγωγικός λογικός προγραμματισμός και εφαρμογές

Λώλης, Γεώργιος Ε. 28 August 2008 (has links)
Inductive Logic Programming (ILP) is the research area of Artificial Intelligence situated at the intersection of Machine Learning and Logic Programming. The term "inductive" expresses the idea of reasoning from the particular to the general. Through inductive machine learning, ILP pursues its goal: the creation of tools and the development of techniques for deriving hypotheses from observations (examples) and for acquiring new knowledge from empirical observations. In contrast to most other approaches to inductive learning, ILP is concerned with the properties of rule-based inference, with the convergence of algorithms, and with the computational complexity of its procedures. ILP deals with the development of techniques and tools for relational data analysis. It is applied directly to multi-relational data for the discovery of patterns. The patterns discovered by ILP systems are expressed as logic programs. Logic programs consist of rules, which are divided into preconditions and conclusions. ILP has been used extensively in problems concerning molecular biology, biochemistry and chemistry. The examples and the rules expressing the background knowledge are written in a logic programming language such as Prolog. Inductive Logic Programming is differentiated from other forms of Machine Learning, on the one hand by its use of an expressive representation language, and on the other by its ability to use background knowledge. Various implementations have been developed, the most recent of which is Progol, which consists of a Prolog interpreter augmented with an Inverse Entailment algorithm that constructs new clauses by generalising the examples contained in the Prolog database. The theory of Inductive Logic Programming guarantees that Progol will carry out an admissible search of the space of generalisations, finding the minimal set of clauses from which all the examples can be derived. In this work, Progol is the tool used to develop the example applications of ILP.
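The classic textbook "daughter" induction task below illustrates the kind of generalisation that Inverse Entailment performs over examples and background knowledge; it is a standard ILP teaching example, not an application from this dissertation.

```prolog
% Background knowledge.
parent(ann, mary).
parent(ann, tom).
female(ann).
female(mary).

% Positive example:  daughter(mary, ann).
% Negative example:  daughter(tom, ann).

% A clause a Progol-like system can induce by generalising the positive
% example against the background knowledge; it covers the positive
% example and excludes the negative one.
daughter(X, Y) :- parent(Y, X), female(X).

% ?- daughter(mary, ann).   % true
% ?- daughter(tom, ann).    % false: tom is not female
```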
30

Modeling Actions and State Changes for a Machine Reading Comprehension Dataset

January 2019 (has links)
abstract: Artificial general intelligence consists of many components, one of which is Natural Language Understanding (NLU). One of the applications of NLU is Reading Comprehension, where a system is expected to understand all aspects of a text. Further, understanding natural procedure-describing text that deals with the existence of entities and the effects of actions on these entities, while doing reasoning and inference at the same time, is a particularly difficult task. A recent natural language dataset by the Allen Institute for Artificial Intelligence, ProPara, attempted to address the challenges of determining entity existence and tracking entities in natural text. As part of this work, an attempt is made to address the ProPara challenge. The Knowledge Representation and Reasoning (KRR) community has developed effective techniques for modeling and reasoning about actions, and similar techniques are used in this work. A system consisting of Inductive Logic Programming (ILP) and Answer Set Programming (ASP) is used to address the challenge; it achieves close to state-of-the-art results and provides an explainable model. An existing semantic role label parser is modified and used to parse the dataset. On analysis of the learnt model, it was found that some of the rules were not generic enough. To overcome this issue, the Proposition Bank dataset is then used to add knowledge in an attempt to generalize the ILP-learnt rules and possibly improve the results. / Dissertation/Thesis / Masters Thesis Computer Science 2019
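To make the action-and-change modeling concrete, the sketch below gives a step-indexed effect-and-inertia encoding of entity tracking over a ProPara-style toy paragraph, written as plain Prolog; it is an illustration only, not the thesis's ILP/ASP encoding, although ASP rules for such domains have a similar shape.

```prolog
% Entity tracking over discrete steps: effects of actions plus inertia.
holds(F, 0) :- initially(F).
holds(location(E, L), T) :-                 % effect of a move action
    T > 0, T0 is T - 1, occurs(move(E, L), T0).
holds(exists(E), T) :-                      % effect of a create action
    T > 0, T0 is T - 1, occurs(create(E), T0).
holds(F, T) :-                              % inertia: unaffected fluents persist
    T > 0, T0 is T - 1, holds(F, T0), \+ affected(F, T0).
affected(location(E, _), T) :- occurs(move(E, _), T).
affected(exists(E), T)      :- occurs(destroy(E), T).

% A tiny ProPara-style "paragraph" about photosynthesis.
initially(location(water, roots)).
occurs(move(water, leaf), 1).
occurs(create(sugar), 2).

% ?- holds(location(water, Where), 2).   % Where = leaf
% ?- holds(exists(sugar), 3).            % true
```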
