91 |
"Aquisição de conhecimento de conjuntos de exemplos no formato atributo valor utilizando aprendizado de máquina relacional"Ferro, Mariza 17 September 2004 (has links)
Machine Learning addresses the question of how to build computer programs that learn a concept or hypothesis from a set of examples, objects or observed cases. Based on the training set, the learning algorithm induces a classification hypothesis capable of correctly determining the class of new, as yet unlabelled examples. Description languages are necessary in machine learning to describe the set of examples, the domain knowledge, and the hypotheses learned from these examples. In general, these languages can be divided into two types: languages based on attribute values, or propositional languages, and relational languages. Learning algorithms are often classified as propositional or relational according to the description language they use. Moreover, in symbolic learning the goal is to induce classification hypotheses that can easily be interpreted by humans. Typical propositional learning algorithms employ the attribute-value representation, which is inadequate for problem domains that require reasoning about the structure of objects in the domain and relations among such objects. On the other hand, Inductive Logic Programming (ILP) is concerned with the development of techniques and tools for relational learning. ILP systems are able to take into account domain knowledge in the form of a logic program and also use the language of logic programs for describing the induced knowledge or hypotheses. In this work we propose and implement a module, named Kaeru, to convert data in the attribute-value format to the relational format used by the ILP system Aleph. We describe a series of experiments performed on four natural data sets and one real data set in the attribute-value format. Using the Kaeru module, these data sets were converted to the relational format used by Aleph, and classification hypotheses were induced using propositional as well as relational learning. We also show that propositional learning can be used to augment the background knowledge used by relational learners in order to improve the quality of the induced hypotheses.
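As an illustration of the kind of conversion such a module performs, the sketch below shows one way an attribute-value record could be rendered as Prolog facts together with an Aleph-style language bias; the predicate names, types and declarations are invented for this example and are not Kaeru's actual output.

    % Hypothetical attribute-value row:  id=e1, outlook=sunny, humidity=high, class=dont_play
    % One possible relational encoding as Prolog background facts for Aleph:

    outlook(e1, sunny).        % attribute "outlook" of example e1
    humidity(e1, high).        % attribute "humidity" of example e1

    % The positive/negative example files would then contain target facts such as
    %   class(e1, dont_play).

    % Aleph-style language bias (illustrative only):
    :- modeh(1, class(+example, #label)).    % head: assign a label to an example
    :- modeb(*, outlook(+example, #value)).  % body: test attribute values
    :- modeb(*, humidity(+example, #value)).
    :- determination(class/2, outlook/2).
    :- determination(class/2, humidity/2).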
|
92 |
Ampliando os limites do aprendizado indutivo de máquina através das abordagens construtiva e relacional. / Extending the limits of inductive machine learning through constructive and relational approaches. Nicoletti, Maria do Carmo 24 June 1994 (has links)
This work investigates Inductive Machine Learning as a function of the description languages employed to express instances, concepts and the domain theory. The enlargement of the representational power of propositional learning methods is approached via constructive induction, in the domain of Boolean functions, through the proposal of a bias for composing attributes called root-fringe. Experimental evaluations of root-fringe, as well as of other biases for constructing new attributes, were conducted and the results analyzed. Two pruning methods for handling noise in decision-tree learning were evaluated in a constructive-induction environment and the results discussed. Due to the limitations of propositional learning, ways of extending the limits of the learning process were investigated by enlarging the representational power of the description languages. Inductive Logic Programming (ILP) was chosen: an inductive learning paradigm that uses restrictions of First-Order Logic as its description languages. Learning in ILP is only feasible when these languages are restricted and strongly controlled; otherwise, learning in ILP becomes undecidable. The research was therefore directed towards ways of restricting the description languages for the domain theory and for hypotheses. Three algorithms that "translate" the intensional expression of a domain theory into its extensional expression are presented, and the implementations of two of them are discussed. These implementations gave rise to two experimental learning environments: the propositional environment, which includes the constructive environment, and the relational environment.
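For readers unfamiliar with the intensional/extensional distinction, the toy Prolog sketch below shows what "translating" an intensionally defined predicate into extensional form amounts to; the predicates and the naive materialization are illustrative only and do not reproduce the thesis's three algorithms.

    % Intensional domain theory: a rule plus a few ground facts.
    parent(ann, bob).
    parent(bob, cat).
    parent(bob, dan).

    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).   % intensional definition

    % A naive "extensionalization": materialize every ground consequence of the
    % intensional predicate so the learner only ever sees ground facts.
    extensional(grandparent, Facts) :-
        findall(grandparent(X, Z), grandparent(X, Z), Facts).

    % ?- extensional(grandparent, Fs).
    % Fs = [grandparent(ann, cat), grandparent(ann, dan)].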
|
93 |
The design and implementation of a multiparadigm programming language. January 1993 (has links)
by Chi-keung Luk. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 169-174). / Preface --- p.xi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Programming Languages --- p.2 / Chapter 1.2 --- Programming Paradigms --- p.2 / Chapter 1.2.1 --- What is a programming paradigm --- p.2 / Chapter 1.2.2 --- Which came first? Languages or paradigms? --- p.2 / Chapter 1.2.3 --- Overview of some paradigms --- p.4 / Chapter 1.2.4 --- A spectrum of paradigms --- p.6 / Chapter 1.2.5 --- Mulitparadigm systems --- p.7 / Chapter 1.3 --- The Objectives of this research --- p.8 / Chapter 2 --- "Studies of the object-oriented, the logic and the functional paradigms" --- p.10 / Chapter 2.1 --- The Object-Oriented Paradigm --- p.10 / Chapter 2.1.1 --- Basic components --- p.10 / Chapter 2.1.2 --- Motivations --- p.11 / Chapter 2.1.3 --- Some related issues --- p.12 / Chapter 2.1.4 --- Computational models for object-oriented programming --- p.16 / Chapter 2.2 --- The Functional Paradigm --- p.18 / Chapter 2.2.1 --- Basic concepts --- p.18 / Chapter 2.2.2 --- Lambda calculus --- p.20 / Chapter 2.2.3 --- The characteristics of functional programs --- p.21 / Chapter 2.2.4 --- Practicality of functional programming --- p.25 / Chapter 2.3 --- The Logic Paradigm --- p.28 / Chapter 2.3.1 --- Relations --- p.28 / Chapter 2.3.2 --- Logic programs --- p.29 / Chapter 2.3.3 --- The opportunity for parallelism --- p.30 / Chapter 2.4 --- Summary --- p.31 / Chapter 3 --- A survey of some existing multiparadigm languages --- p.32 / Chapter 3.1 --- Logic + Object-Oriented --- p.33 / Chapter 3.1.1 --- LogiC++ --- p.33 / Chapter 3.1.2 --- Intermission --- p.34 / Chapter 3.1.3 --- Object-Oriented Programming in Prolog (OOPP) --- p.36 / Chapter 3.1.4 --- Communication Prolog Unit (CPU) --- p.37 / Chapter 3.1.5 --- DLP --- p.37 / Chapter 3.1.6 --- Representing Objects in a Logic Programming Language with Scoping Constructs (OLPSC) --- p.39 / Chapter 3.1.7 --- KSL/Logic --- p.40 / Chapter 3.1.8 --- Orient84/K --- p.41 / Chapter 3.1.9 --- Vulcan --- p.42 / Chapter 3.1.10 --- The Bridge approach --- p.43 / Chapter 3.1.11 --- Discussion --- p.44 / Chapter 3.2 --- Functional + Object-Oriented --- p.46 / Chapter 3.2.1 --- PROOF --- p.46 / Chapter 3.2.2 --- A Functional Language with Classes (FLC) --- p.47 / Chapter 3.2.3 --- Common Lisp Object System (CLOS) --- p.49 / Chapter 3.2.4 --- FOOPS --- p.50 / Chapter 3.2.5 --- Discussion --- p.51 / Chapter 3.3 --- Logic + Functional --- p.52 / Chapter 3.3.1 --- HOPE --- p.52 / Chapter 3.3.2 --- FUNLOG --- p.54 / Chapter 3.3.3 --- F* --- p.55 / Chapter 3.3.4 --- LEAF --- p.56 / Chapter 3.3.5 --- Applog --- p.57 / Chapter 3.3.6 --- Discussion --- p.58 / Chapter 3.4 --- Logic + Functional + Object-Oriented --- p.61 / Chapter 3.4.1 --- Paradise --- p.61 / Chapter 3.4.2 --- LIFE --- p.62 / Chapter 3.4.3 --- UNIFORM --- p.63 / Chapter 3.4.4 --- G --- p.64 / Chapter 3.4.5 --- FOOPlog --- p.66 / Chapter 3.4.6 --- Logic and Objects (L&O) --- p.66 / Chapter 3.4.7 --- Discussion --- p.67 / Chapter 4 --- The design of a multiparadigm language I --- p.70 / Chapter 4.1 --- An Object-Oriented Framework --- p.71 / Chapter 4.1.1 --- A hierarchy of classes --- p.71 / Chapter 4.1.2 --- Program structure --- p.71 / Chapter 4.1.3 --- Parametric classes --- p.72 / Chapter 4.1.4 --- Inheritance --- p.73 / Chapter 4.1.5 --- The meanings of classes and methods --- p.75 / Chapter 4.1.6 --- Objects and messages --- p.75 / Chapter 4.2 --- The logic Subclasses 
--- p.76 / Chapter 4.2.1 --- Syntax --- p.76 / Chapter 4.2.2 --- Distributed inference --- p.76 / Chapter 4.2.3 --- Adding functions and expressions to logic programs --- p.77 / Chapter 4.2.4 --- State modelling --- p.79 / Chapter 4.3 --- The functional Subclasses --- p.80 / Chapter 4.3.1 --- The syntax of functions --- p.80 / Chapter 4.3.2 --- Abstract data types --- p.81 / Chapter 4.3.3 --- Augmented list comprehensions --- p.82 / Chapter 4.4 --- The Semantic Foundation of I Programs --- p.84 / Chapter 4.4.1 --- T1* : Transform functions into Horn clauses --- p.84 / Chapter 4.4.2 --- T2*: Transform object-oriented features into pure logic --- p.85 / Chapter 4.5 --- Exploiting Parallelism in I Programs --- p.89 / Chapter 4.5.1 --- Inter-object parallelism --- p.89 / Chapter 4.5.2 --- Intra-object parallelism --- p.92 / Chapter 4.6 --- Discussion --- p.96 / Chapter 5 --- An implementation of a prototype of I --- p.99 / Chapter 5.1 --- System Overview --- p.99 / Chapter 5.2 --- I-to-Prolog Translation --- p.101 / Chapter 5.2.1 --- Pass 1 - lexical and syntax analysis --- p.101 / Chapter 5.2.2 --- Pass 2 - Class Table Construction and Semantic Checking --- p.101 / Chapter 5.2.3 --- Pass 3 - Determination of Multiple Inheritance Precedence --- p.105 / Chapter 5.2.4 --- Pass 4 - Translation of the directive part --- p.110 / Chapter 5.2.5 --- Pass 5 - Creation of Prolog source code for an I object --- p.110 / Chapter 5.2.6 --- Using expressions in logic methods --- p.112 / Chapter 5.3 --- I-to-LML Translation --- p.114 / Chapter 5.4 --- The Run-time Handler --- p.117 / Chapter 5.4.1 --- Object Management --- p.118 / Chapter 5.4.2 --- Process Management and Message Passing --- p.121 / Chapter 6 --- Some applications written in I --- p.125 / Chapter 6.1 --- Modeling of a State Space Search --- p.125 / Chapter 6.2 --- A Solution to the N-queen Problem --- p.129 / Chapter 6.3 --- Object-Oriented Modeling of a Database --- p.131 / Chapter 6.4 --- A Simple Expert System --- p.133 / Chapter 6.5 --- Summary --- p.138 / Chapter 7 --- Conclusion and future work --- p.139 / Chapter 7.1 --- Conclusion --- p.139 / Chapter 7.2 --- Future Work --- p.141 / Chapter A --- Language manual --- p.146 / Chapter A.1 --- Introduction --- p.146 / Chapter A.2 --- Syntax --- p.146 / Chapter A.2.1 --- The lexical specification --- p.146 / Chapter A.2.2 --- The syntax specification --- p.149 / Chapter A3 --- Classes --- p.152 / Chapter A.4 --- Object Creation and Method Invocation --- p.153 / Chapter A.5 --- The logic Subclasses --- p.155 / Chapter A.6 --- The functional Subclasses --- p.156 / Chapter A.7 --- Types --- p.158 / Chapter A.8 --- Mutable States --- p.158 / Chapter B --- User's guide --- p.160 / Chapter B.1 --- System Calls --- p.160 / Chapter B.2 --- Configuration Parameters --- p.162 / Chapter B.3 --- Errors --- p.163 / Chapter B.4 --- Implementation Limits --- p.164 / Chapter B.5 --- How to install the system --- p.164 / Chapter B.6 --- How to use the system --- p.164 / Chapter B.7 --- How to recompile the system --- p.166 / Chapter B.8 --- Directory arrangement --- p.167 / Chapter C --- List of publications --- p.168 / Bibliography --- p.169
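The table of contents above lists a transformation T1* that turns functions into Horn clauses. As a rough illustration of what such a flattening typically looks like (the details of the thesis's own transformation may differ), a function can be given an extra argument for its result:

    % A functional definition such as
    %     double(X) = X + X
    % becomes a Horn clause with an explicit result argument:
    double(X, Y) :- Y is X + X.

    % Nested calls are flattened the same way:  quad(X) = double(double(X))
    quad(X, Y) :- double(X, Z), double(Z, Y).

    % ?- quad(3, R).   % R = 12.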
|
94 |
Integrating artificial neural networks and constraint logic programming. January 1995 (has links)
by Vincent Wai-leuk Tam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 74-80). / Chapter 1 --- Introduction and Summary --- p.1 / Chapter 1.1 --- The Task --- p.1 / Chapter 1.2 --- The Thesis --- p.2 / Chapter 1.2.1 --- Thesis --- p.2 / Chapter 1.2.2 --- Antithesis --- p.3 / Chapter 1.2.3 --- Synthesis --- p.5 / Chapter 1.3 --- Results --- p.6 / Chapter 1.4 --- Contributions --- p.6 / Chapter 1.5 --- Chapter Summaries --- p.7 / Chapter 1.5.1 --- Chapter 2: An ANN-Based Constraint-Solver --- p.8 / Chapter 1.5.2 --- Chapter 3: A Theoretical Framework of PROCLANN --- p.8 / Chapter 1.5.3 --- Chapter 4: The Prototype Implementation --- p.8 / Chapter 1.5.4 --- Chapter 5: Benchmarking --- p.9 / Chapter 1.5.5 --- Chapter 6: Conclusion --- p.9 / Chapter 2 --- An ANN-Based Constraint-Solver --- p.10 / Chapter 2.1 --- Notations --- p.11 / Chapter 2.2 --- Criteria for ANN-based Constraint-solver --- p.11 / Chapter 2.3 --- A Generic Neural Network: GENET --- p.13 / Chapter 2.3.1 --- Network Structure --- p.13 / Chapter 2.3.2 --- Network Convergence --- p.17 / Chapter 2.3.3 --- Energy Perspective --- p.22 / Chapter 2.4 --- Properties of GENET --- p.23 / Chapter 2.5 --- Incremental GENET --- p.27 / Chapter 3 --- A Theoretical Framework of PROCLANN --- p.29 / Chapter 3.1 --- Syntax and Declarative Semantics --- p.30 / Chapter 3.2 --- Unification in PROCLANN --- p.33 / Chapter 3.3 --- PROCLANN Computation Model --- p.38 / Chapter 3.4 --- Soundness and Weak Completeness of the PROCLANN Compu- tation Model --- p.40 / Chapter 3.5 --- Probabilistic Non-determinism --- p.46 / Chapter 4 --- The Prototype Implementation --- p.48 / Chapter 4.1 --- Prototype Design --- p.48 / Chapter 4.2 --- Implementation Issues --- p.52 / Chapter 5 --- Benchmarking --- p.58 / Chapter 5.1 --- N-Queens --- p.59 / Chapter 5.1.1 --- Benchmarking --- p.59 / Chapter 5.1.2 --- Analysis --- p.59 / Chapter 5.2 --- Graph-coloring --- p.63 / Chapter 5.2.1 --- Benchmarking --- p.63 / Chapter 5.2.2 --- Analysis --- p.64 / Chapter 5.3 --- Exceptionally Hard Problem --- p.66 / Chapter 5.3.1 --- Benchmarking --- p.67 / Chapter 5.3.2 --- Analysis --- p.67 / Chapter 6 --- Conclusion --- p.68 / Chapter 6.1 --- Contributions --- p.68 / Chapter 6.2 --- Limitations --- p.70 / Chapter 6.3 --- Future Work --- p.71 / Chapter 6.3.1 --- Parallel Implementation --- p.71 / Chapter 6.3.2 --- General Constraint Handling --- p.72 / Chapter 6.3.3 --- Other ANN Models --- p.73 / Chapter 6.3.4 --- Other Domains --- p.73 / Bibliography --- p.74 / Appendix A The Hard Graph-coloring Problems --- p.81 / Appendix B An Exceptionally Hard Problem (EHP) --- p.182
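The thesis benchmarks its ANN-based constraint-solver on problems such as N-queens and graph coloring. For orientation, the sketch below states N-queens in a conventional constraint logic programming style using SWI-Prolog's clpfd library; PROCLANN's own syntax and its GENET-based solving strategy differ from this standard formulation.

    :- use_module(library(clpfd)).

    % Conventional CLP(FD) statement of N-queens: one variable per column,
    % values are row positions; constraints forbid shared rows and diagonals.
    n_queens(N, Qs) :-
        length(Qs, N),
        Qs ins 1..N,
        all_distinct(Qs),
        safe(Qs),
        labeling([ff], Qs).

    safe([]).
    safe([Q|Qs]) :- no_attack(Qs, Q, 1), safe(Qs).

    no_attack([], _, _).
    no_attack([Q1|Qs], Q0, D) :-
        Q0 #\= Q1 + D,
        Q0 #\= Q1 - D,
        D1 is D + 1,
        no_attack(Qs, Q0, D1).

    % ?- n_queens(8, Qs).   % enumerates valid placements such as Qs = [1,5,8,6,3,7,2,4]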
|
95 |
Integrating phosphoproteomic time series data into prior knowledge networks / Intégration de données de séries temporelles phosphoprotéomiques dans des réseaux de connaissances antérieurs Razzaq, Misbah 05 December 2018 (has links)
Traditional canonical signaling pathways help to understand the overall signaling processes inside the cell. Large-scale phosphoproteomic data provide insight into alterations among different proteins under different experimental settings. Our goal is to combine traditional signaling networks with complex phosphoproteomic time-series data in order to unravel cell-specific signaling networks. On the application side, we apply and improve a caspo time series method conceived to integrate time-series phosphoproteomic data into protein signaling networks. We use a large-scale real case study from the HPN-DREAM Breast Cancer challenge. We infer a family of Boolean models from multiple-perturbation time-series data of four breast cancer cell lines, given a prior protein signaling network. The obtained results are comparable to those of the top-performing teams of the HPN-DREAM challenge. We also discovered that similar models are clustered together in the solution space. On the computational side, we improved the method to discover diverse solutions and to reduce the computation time.
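As a rough illustration of what a Boolean model over a prior signaling network looks like, the toy sketch below writes hypothetical logic gates as Prolog-style rules over discrete time steps; the proteins, edges and perturbation conditions are invented, and the actual caspo time series machinery is considerably more involved.

    % active(Protein, T) means Protein is ON at time step T.
    % Hypothetical gates selected from a prior network:
    active(mek, T1) :- next(T, T1), active(egfr, T).                  % EGFR -> MEK
    active(erk, T1) :- next(T, T1), active(mek, T), \+ inhibited(erk, T).
    active(akt, T1) :- next(T, T1), active(egfr, T), active(pi3k, T). % AND gate

    next(0, 1).  next(1, 2).  next(2, 3).

    % One experimental condition (stimulus plus an inhibitor):
    active(egfr, 0).
    active(pi3k, 0).
    inhibited(erk, 1).

    % ?- active(akt, 1).   % succeeds in this condition
    % ?- active(erk, 2).   % fails, because ERK is inhibited at step 1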
|
96 |
Learning acyclic probabilistic logic programs from data. / Aprendizado de programas lógico-probabilísticos acíclicos. Francisco Henrique Otte Vieira de Faria 12 December 2017 (has links)
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for an acyclic probabilistic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel parameter-learning techniques that lead to orders-of-magnitude improvements in computational efficiency over the current state of the art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
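A minimal sketch of an acyclic probabilistic logic program, written in ProbLog-style syntax, may help fix ideas; the predicates and probabilities are invented, and the thesis's learning techniques operate on programs of this general shape rather than on this particular example.

    % A small acyclic probabilistic logic program (invented probabilities).
    0.1::burglary.
    0.2::earthquake.

    0.9::alarm :- burglary.
    0.3::alarm :- earthquake.

    0.8::calls(john) :- alarm.
    0.6::calls(mary) :- alarm.

    % The dependency graph burglary/earthquake -> alarm -> calls/1 is acyclic,
    % which keeps the distribution semantics easy to read off the rules.
    query(calls(john)).
    evidence(calls(mary), true).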
|
97 |
DEFT guessing: using inductive transfer to improve rule evaluation from limited data Reid, Mark Darren, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Algorithms that learn sets of rules describing a concept from its examples have been widely studied in machine learning and have been applied to problems in medicine, molecular biology, planning and linguistics. Many of these algorithms use a separate-and-conquer strategy, repeatedly searching for rules that explain different parts of the example set. When examples are scarce, however, it is difficult for these algorithms to evaluate the relative quality of two or more rules which fit the examples equally well. This dissertation proposes, implements and examines a general technique for modifying rule evaluation in order to improve learning performance in these situations. This approach, called Description-based Evaluation Function Transfer (DEFT), adjusts the way rules are evaluated on a target concept by taking into account the performance of similar rules on a related support task that is supplied by a domain expert. Central to this approach is a novel theory of task similarity that is defined in terms of syntactic properties of rules, called descriptions, which define what it means for rules to be similar. Each description is associated with a prior distribution over classification probabilities derived from the support examples, and a rule's evaluation on a target task is combined with the relevant prior using Bayes' rule. Given some natural conditions regarding the similarity of the target and support tasks, it is shown that modifying rule evaluation in this way is guaranteed to improve estimates of the true classification probabilities. Algorithms to efficiently implement DEFT are described, analysed and used to measure the effect these improvements have on the quality of induced theories. Empirical studies of this implementation were carried out on two artificial and two real-world domains. The results show that the inductive transfer of evaluation bias based on rule similarity is an effective and practical way to improve learning when training examples are limited.
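The Bayesian combination described above can be sketched as follows, assuming, purely for illustration, that the support examples matching a description d are summarized as a Beta prior with pseudo-counts alpha_d and beta_d, and that a candidate rule covers p positive and n negative target examples; the thesis's exact formulation may differ.

    \theta \sim \mathrm{Beta}(\alpha_d, \beta_d)
    \quad\Longrightarrow\quad
    \theta \mid (p, n) \sim \mathrm{Beta}(\alpha_d + p, \beta_d + n),
    \qquad
    \hat{\theta} = \frac{p + \alpha_d}{p + n + \alpha_d + \beta_d}

The resulting estimate is an m-estimate whose pseudo-counts are transferred from the support task, which is one concrete way a prior over classification probabilities can be combined with target-task evidence via Bayes' rule.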
|
98 |
Visual Compositional-Relational Programming Zetterström, Andreas January 2010 (has links)
In an ever faster changing environment, software developers not only need agile methods, but also agile programming paradigms and tools. A paradigm shift towards declarative programming has begun; a clear indication of this is Microsoft's substantial investment in functional programming. Moreover, several attempts have been made to enable visual programming. We believe that software development is ready for a new paradigm which goes beyond any existing declarative paradigm: visual compositional-relational programming. Compositional-relational programming (CRP) is a purely declarative paradigm, making it suitable for a visual representation. All procedural aspects, including the increasingly important issue of parallelization, are removed from the programmer's consideration and handled in the underlying implementation. The foundation for CRP is a theory of higher-order combinatory logic programming developed by Hamfelt and Nilsson in the 1990s. This thesis proposes a model for visualizing compositional-relational programming. We show that the diagrams are isomorphic with the programs represented in textual form. Furthermore, we show that the model can be used to automatically generate code from diagrams, thus paving the way for a visual integrated development environment for CRP, where programming is performed by combining visual objects in a drag-and-drop fashion. At present, we implement CRP using Prolog. However, in the future we foresee an implementation directly on one of the major object-oriented frameworks, e.g. the .NET platform, with the aim of finally launching relational programming into large-scale systems development.
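As a flavor of how relational programs can be built by composition rather than by writing clause bodies by hand, the toy Prolog sketch below defines a single composition combinator; the combinator name and encoding are illustrative and are not taken from Hamfelt and Nilsson's combinator library or from the thesis's implementation.

    % A toy composition combinator over binary relations.
    compose(P, Q, X, Z) :- call(P, X, Y), call(Q, Y, Z).

    parent(ann, bob).
    parent(bob, cat).

    % "grandparent" obtained by composing a relation with itself:
    grandparent(X, Z) :- compose(parent, parent, X, Z).

    % ?- grandparent(ann, W).   % W = cat.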
|
99 |
Financial Information Integration In the Presence of Equational Ontological Conflicts Firat, Aykut, Madnick, Stuart E., Grosof, Benjamin 01 1900 (has links)
While there are efforts to establish a single international accounting standard, there are strong current and future needs to handle heterogeneous accounting methods and systems. We advocate a context-based approach to dealing with multiple accounting standards and equational ontological conflicts. In this paper we first define what we mean by equational ontological conflicts and then describe a new approach, using Constraint Logic Programming and abductive reasoning, to reconcile such conflicts among disparate information systems. In particular, we focus on the use of Constraint Handling Rules as a simultaneous symbolic equation solver, which is a powerful way to combine, invert and simplify multiple conversion functions that translate between different contexts. Finally, we present a sample application, built with our prototype implementation, that demonstrates the viability of our approach. / Singapore-MIT Alliance (SMA)
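To give a feel for how Constraint Handling Rules can express context conversions, the sketch below (using SWI-Prolog's chr library) normalizes figures reported in one hypothetical context into another at an assumed fixed rate; the predicates, contexts and rate are invented, and real mediation would derive the conversion functions from the context definitions rather than hard-code them.

    :- use_module(library(chr)).
    :- chr_constraint reported/3.    % reported(Company, Currency, Profit)

    % Illustrative context-conversion rule: rewrite euro figures as US dollar
    % figures at an assumed fixed rate of 1.25.
    reported(Co, eur, P) <=> Usd is P * 1.25, reported(Co, usd, Usd).

    % ?- reported(acme, eur, 100).
    % reported(acme, usd, 125.0)     % remaining constraint in the store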
|
100 |
Test items for and misconceptions of competences in the domain of logic programming Linck, Barbara January 2013 (has links)
Development of competence-oriented curricula is still an important theme in informatics education. Unfortunately, informatics curricula that include the domain of logic programming are still input-oriented or lack detailed competence descriptions. Therefore, the development of a competence model and of descriptions of learning outcomes is essential for the learning process in this domain. Prior research developed both. The next research step is to formulate test items that measure the described learning outcomes. This article describes this procedure and gives example test items. It also relates a school test to the items and shows which misconceptions and typical errors are important to discuss in class. The test results can also confirm or disprove the competence model. This school test is therefore important both for theoretical research and for the concrete planning of lessons. Quantitative analysis in schools is important for the evaluation and improvement of informatics education.
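As a hypothetical illustration of the kind of misconception such test items can probe (this example is not taken from the article), a classic error is treating Prolog's is/2 as destructive assignment:

    % Hypothetical test item: what does this query do?
    %
    %   ?- X = 1, X is X + 1.
    %
    % A common misconception is to expect X to become 2, as in imperative
    % assignment. In Prolog the query fails: X is already bound to 1, and
    % 1 is 1 + 1 cannot hold. The relational way is to introduce a new variable:

    increment(X, Y) :- Y is X + 1.

    % ?- increment(1, Y).   % Y = 2.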
|