121 |
The design and implementation of a multiparadigm programming language. January 1993
by Chi-keung Luk. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 169-174). / Preface --- p.xi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Programming Languages --- p.2 / Chapter 1.2 --- Programming Paradigms --- p.2 / Chapter 1.2.1 --- What is a programming paradigm --- p.2 / Chapter 1.2.2 --- Which came first? Languages or paradigms? --- p.2 / Chapter 1.2.3 --- Overview of some paradigms --- p.4 / Chapter 1.2.4 --- A spectrum of paradigms --- p.6 / Chapter 1.2.5 --- Multiparadigm systems --- p.7 / Chapter 1.3 --- The Objectives of this research --- p.8 / Chapter 2 --- "Studies of the object-oriented, the logic and the functional paradigms" --- p.10 / Chapter 2.1 --- The Object-Oriented Paradigm --- p.10 / Chapter 2.1.1 --- Basic components --- p.10 / Chapter 2.1.2 --- Motivations --- p.11 / Chapter 2.1.3 --- Some related issues --- p.12 / Chapter 2.1.4 --- Computational models for object-oriented programming --- p.16 / Chapter 2.2 --- The Functional Paradigm --- p.18 / Chapter 2.2.1 --- Basic concepts --- p.18 / Chapter 2.2.2 --- Lambda calculus --- p.20 / Chapter 2.2.3 --- The characteristics of functional programs --- p.21 / Chapter 2.2.4 --- Practicality of functional programming --- p.25 / Chapter 2.3 --- The Logic Paradigm --- p.28 / Chapter 2.3.1 --- Relations --- p.28 / Chapter 2.3.2 --- Logic programs --- p.29 / Chapter 2.3.3 --- The opportunity for parallelism --- p.30 / Chapter 2.4 --- Summary --- p.31 / Chapter 3 --- A survey of some existing multiparadigm languages --- p.32 / Chapter 3.1 --- Logic + Object-Oriented --- p.33 / Chapter 3.1.1 --- LogiC++ --- p.33 / Chapter 3.1.2 --- Intermission --- p.34 / Chapter 3.1.3 --- Object-Oriented Programming in Prolog (OOPP) --- p.36 / Chapter 3.1.4 --- Communication Prolog Unit (CPU) --- p.37 / Chapter 3.1.5 --- DLP --- p.37 / Chapter 3.1.6 --- Representing Objects in a Logic Programming Language with Scoping Constructs (OLPSC) --- p.39 / 
Chapter 3.1.7 --- KSL/Logic --- p.40 / Chapter 3.1.8 --- Orient84/K --- p.41 / Chapter 3.1.9 --- Vulcan --- p.42 / Chapter 3.1.10 --- The Bridge approach --- p.43 / Chapter 3.1.11 --- Discussion --- p.44 / Chapter 3.2 --- Functional + Object-Oriented --- p.46 / Chapter 3.2.1 --- PROOF --- p.46 / Chapter 3.2.2 --- A Functional Language with Classes (FLC) --- p.47 / Chapter 3.2.3 --- Common Lisp Object System (CLOS) --- p.49 / Chapter 3.2.4 --- FOOPS --- p.50 / Chapter 3.2.5 --- Discussion --- p.51 / Chapter 3.3 --- Logic + Functional --- p.52 / Chapter 3.3.1 --- HOPE --- p.52 / Chapter 3.3.2 --- FUNLOG --- p.54 / Chapter 3.3.3 --- F* --- p.55 / Chapter 3.3.4 --- LEAF --- p.56 / Chapter 3.3.5 --- Applog --- p.57 / Chapter 3.3.6 --- Discussion --- p.58 / Chapter 3.4 --- Logic + Functional + Object-Oriented --- p.61 / Chapter 3.4.1 --- Paradise --- p.61 / Chapter 3.4.2 --- LIFE --- p.62 / Chapter 3.4.3 --- UNIFORM --- p.63 / Chapter 3.4.4 --- G --- p.64 / Chapter 3.4.5 --- FOOPlog --- p.66 / Chapter 3.4.6 --- Logic and Objects (L&O) --- p.66 / Chapter 3.4.7 --- Discussion --- p.67 / Chapter 4 --- The design of a multiparadigm language I --- p.70 / Chapter 4.1 --- An Object-Oriented Framework --- p.71 / Chapter 4.1.1 --- A hierarchy of classes --- p.71 / Chapter 4.1.2 --- Program structure --- p.71 / Chapter 4.1.3 --- Parametric classes --- p.72 / Chapter 4.1.4 --- Inheritance --- p.73 / Chapter 4.1.5 --- The meanings of classes and methods --- p.75 / Chapter 4.1.6 --- Objects and messages --- p.75 / Chapter 4.2 --- The logic Subclasses --- p.76 / Chapter 4.2.1 --- Syntax --- p.76 / Chapter 4.2.2 --- Distributed inference --- p.76 / Chapter 4.2.3 --- Adding functions and expressions to logic programs --- p.77 / Chapter 4.2.4 --- State modelling --- p.79 / Chapter 4.3 --- The functional Subclasses --- p.80 / Chapter 4.3.1 --- The syntax of functions --- p.80 / Chapter 4.3.2 --- Abstract data types --- p.81 / Chapter 4.3.3 --- Augmented list comprehensions --- p.82 / 
Chapter 4.4 --- The Semantic Foundation of I Programs --- p.84 / Chapter 4.4.1 --- T1* : Transform functions into Horn clauses --- p.84 / Chapter 4.4.2 --- T2*: Transform object-oriented features into pure logic --- p.85 / Chapter 4.5 --- Exploiting Parallelism in I Programs --- p.89 / Chapter 4.5.1 --- Inter-object parallelism --- p.89 / Chapter 4.5.2 --- Intra-object parallelism --- p.92 / Chapter 4.6 --- Discussion --- p.96 / Chapter 5 --- An implementation of a prototype of I --- p.99 / Chapter 5.1 --- System Overview --- p.99 / Chapter 5.2 --- I-to-Prolog Translation --- p.101 / Chapter 5.2.1 --- Pass 1 - lexical and syntax analysis --- p.101 / Chapter 5.2.2 --- Pass 2 - Class Table Construction and Semantic Checking --- p.101 / Chapter 5.2.3 --- Pass 3 - Determination of Multiple Inheritance Precedence --- p.105 / Chapter 5.2.4 --- Pass 4 - Translation of the directive part --- p.110 / Chapter 5.2.5 --- Pass 5 - Creation of Prolog source code for an I object --- p.110 / Chapter 5.2.6 --- Using expressions in logic methods --- p.112 / Chapter 5.3 --- I-to-LML Translation --- p.114 / Chapter 5.4 --- The Run-time Handler --- p.117 / Chapter 5.4.1 --- Object Management --- p.118 / Chapter 5.4.2 --- Process Management and Message Passing --- p.121 / Chapter 6 --- Some applications written in I --- p.125 / Chapter 6.1 --- Modeling of a State Space Search --- p.125 / Chapter 6.2 --- A Solution to the N-queen Problem --- p.129 / Chapter 6.3 --- Object-Oriented Modeling of a Database --- p.131 / Chapter 6.4 --- A Simple Expert System --- p.133 / Chapter 6.5 --- Summary --- p.138 / Chapter 7 --- Conclusion and future work --- p.139 / Chapter 7.1 --- Conclusion --- p.139 / Chapter 7.2 --- Future Work --- p.141 / Chapter A --- Language manual --- p.146 / Chapter A.1 --- Introduction --- p.146 / Chapter A.2 --- Syntax --- p.146 / Chapter A.2.1 --- The lexical specification --- p.146 / Chapter A.2.2 --- The syntax specification --- p.149 / Chapter A.3 --- Classes --- p.152 
/ Chapter A.4 --- Object Creation and Method Invocation --- p.153 / Chapter A.5 --- The logic Subclasses --- p.155 / Chapter A.6 --- The functional Subclasses --- p.156 / Chapter A.7 --- Types --- p.158 / Chapter A.8 --- Mutable States --- p.158 / Chapter B --- User's guide --- p.160 / Chapter B.1 --- System Calls --- p.160 / Chapter B.2 --- Configuration Parameters --- p.162 / Chapter B.3 --- Errors --- p.163 / Chapter B.4 --- Implementation Limits --- p.164 / Chapter B.5 --- How to install the system --- p.164 / Chapter B.6 --- How to use the system --- p.164 / Chapter B.7 --- How to recompile the system --- p.166 / Chapter B.8 --- Directory arrangement --- p.167 / Chapter C --- List of publications --- p.168 / Bibliography --- p.169
|
122 |
Integrating artificial neural networks and constraint logic programming. January 1995
by Vincent Wai-leuk Tam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 74-80). / Chapter 1 --- Introduction and Summary --- p.1 / Chapter 1.1 --- The Task --- p.1 / Chapter 1.2 --- The Thesis --- p.2 / Chapter 1.2.1 --- Thesis --- p.2 / Chapter 1.2.2 --- Antithesis --- p.3 / Chapter 1.2.3 --- Synthesis --- p.5 / Chapter 1.3 --- Results --- p.6 / Chapter 1.4 --- Contributions --- p.6 / Chapter 1.5 --- Chapter Summaries --- p.7 / Chapter 1.5.1 --- Chapter 2: An ANN-Based Constraint-Solver --- p.8 / Chapter 1.5.2 --- Chapter 3: A Theoretical Framework of PROCLANN --- p.8 / Chapter 1.5.3 --- Chapter 4: The Prototype Implementation --- p.8 / Chapter 1.5.4 --- Chapter 5: Benchmarking --- p.9 / Chapter 1.5.5 --- Chapter 6: Conclusion --- p.9 / Chapter 2 --- An ANN-Based Constraint-Solver --- p.10 / Chapter 2.1 --- Notations --- p.11 / Chapter 2.2 --- Criteria for ANN-based Constraint-solver --- p.11 / Chapter 2.3 --- A Generic Neural Network: GENET --- p.13 / Chapter 2.3.1 --- Network Structure --- p.13 / Chapter 2.3.2 --- Network Convergence --- p.17 / Chapter 2.3.3 --- Energy Perspective --- p.22 / Chapter 2.4 --- Properties of GENET --- p.23 / Chapter 2.5 --- Incremental GENET --- p.27 / Chapter 3 --- A Theoretical Framework of PROCLANN --- p.29 / Chapter 3.1 --- Syntax and Declarative Semantics --- p.30 / Chapter 3.2 --- Unification in PROCLANN --- p.33 / Chapter 3.3 --- PROCLANN Computation Model --- p.38 / Chapter 3.4 --- Soundness and Weak Completeness of the PROCLANN Computation Model --- p.40 / Chapter 3.5 --- Probabilistic Non-determinism --- p.46 / Chapter 4 --- The Prototype Implementation --- p.48 / Chapter 4.1 --- Prototype Design --- p.48 / Chapter 4.2 --- Implementation Issues --- p.52 / Chapter 5 --- Benchmarking --- p.58 / Chapter 5.1 --- N-Queens --- p.59 / Chapter 5.1.1 --- Benchmarking --- p.59 / Chapter 5.1.2 --- Analysis --- p.59 / Chapter 5.2 --- Graph-coloring --- p.63 / Chapter 5.2.1 
--- Benchmarking --- p.63 / Chapter 5.2.2 --- Analysis --- p.64 / Chapter 5.3 --- Exceptionally Hard Problem --- p.66 / Chapter 5.3.1 --- Benchmarking --- p.67 / Chapter 5.3.2 --- Analysis --- p.67 / Chapter 6 --- Conclusion --- p.68 / Chapter 6.1 --- Contributions --- p.68 / Chapter 6.2 --- Limitations --- p.70 / Chapter 6.3 --- Future Work --- p.71 / Chapter 6.3.1 --- Parallel Implementation --- p.71 / Chapter 6.3.2 --- General Constraint Handling --- p.72 / Chapter 6.3.3 --- Other ANN Models --- p.73 / Chapter 6.3.4 --- Other Domains --- p.73 / Bibliography --- p.74 / Appendix A The Hard Graph-coloring Problems --- p.81 / Appendix B An Exceptionally Hard Problem (EHP) --- p.182
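The kind of search GENET performs can be caricatured in a few lines: iterative repair over one queen per column, where every violated constraint contributes a weight to the cost, and weights of constraints violated at a local minimum are increased so the cost landscape reshapes itself. The sketch below (in Python, for illustration) is a simplified breakout-style variant of that idea, not the thesis's network formulation; all parameter choices are assumptions.

```python
import random

def solve_nqueens(n, max_steps=20000, seed=0):
    """Weighted iterative repair in the spirit of GENET: violated binary
    constraints contribute their weight to the cost; weights of constraints
    violated at a local minimum are increased ("breakout")."""
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]   # queens[c] = row in column c
    weights = {}                                    # (c1, c2) -> penalty weight

    def conflict(c1, c2):
        r1, r2 = queens[c1], queens[c2]
        return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

    def cost(c, row):
        saved, queens[c] = queens[c], row
        total = sum(weights.get((min(c, o), max(c, o)), 1)
                    for o in range(n) if o != c and conflict(c, o))
        queens[c] = saved
        return total

    for _ in range(max_steps):
        conflicted = [c for c in range(n)
                      if any(conflict(c, o) for o in range(n) if o != c)]
        if not conflicted:
            return queens
        c = rng.choice(conflicted)
        costs = [cost(c, r) for r in range(n)]
        if min(costs) >= costs[queens[c]]:
            # Local minimum: raise the weight of each currently violated constraint.
            for o in range(n):
                if o != c and conflict(c, o):
                    k = (min(c, o), max(c, o))
                    weights[k] = weights.get(k, 1) + 1
        queens[c] = rng.choice([r for r in range(n) if costs[r] == min(costs)])
    return None
```

On N-queens, the benchmark used in Chapter 5, this style of weight learning is what lets the network escape local minima that defeat plain hill-climbing.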
|
123 |
Integrating phosphoproteomic time series data into prior knowledge networks / Intégration de données de séries temporelles phosphoprotéomiques dans des réseaux de connaissances antérieurs. Razzaq, Misbah, 05 December 2018
Les voies de signalisation canoniques traditionnelles aident à comprendre l'ensemble des processus de signalisation à l'intérieur de la cellule. Les données phosphoprotéomiques à grande échelle donnent un aperçu des altérations entre différentes protéines dans différents contextes expérimentaux. Notre objectif est de combiner les réseaux de signalisation traditionnels avec des données de séries temporelles phosphoprotéomiques complexes afin de démêler les réseaux de signalisation spécifiques aux cellules. Côté application, nous appliquons et améliorons une méthode de séries temporelles caspo conçue pour intégrer des données phosphoprotéomiques de séries temporelles dans des réseaux de signalisation de protéines. Nous utilisons une étude de cas réel à grande échelle tirée du défi HPN-DREAM BreastCancer. Nous déduisons une famille de modèles booléens à partir de données de séries temporelles de perturbations multiples de quatre lignées cellulaires de cancer du sein, compte tenu d'un réseau de signalisation protéique antérieur. Les résultats obtenus sont comparables aux équipes les plus performantes du challenge HPN-DREAM. Nous avons découvert que les modèles similaires sont regroupés dans l'espace de solutions. Du côté informatique, nous avons amélioré la méthode pour découvrir diverses solutions et améliorer le temps de calcul. / Traditional canonical signaling pathways help to understand overall signaling processes inside the cell. Large scale phosphoproteomic data provide insight into alterations among different proteins under different experimental settings. Our goal is to combine the traditional signaling networks with complex phosphoproteomic time-series data in order to unravel cell specific signaling networks. On the application side, we apply and improve a caspo time series method conceived to integrate time series phosphoproteomic data into protein signaling networks. We use a large-scale real case study from the HPN-DREAM BreastCancer challenge. 
We infer a family of Boolean models from multiple perturbation time series data of four breast cancer cell lines, given a prior protein signaling network. The obtained results are comparable to those of the top performing teams of the HPN-DREAM challenge. We also discovered that similar models are clustered together in the solution space. On the computational side, we improved the method to discover diverse solutions and to reduce the computation time.
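The object being learned here, a Boolean model of a signaling network scored against time-series observations, can be made concrete with a toy sketch. The node names, logic rules, and scoring below are invented for illustration only; they are not the caspo method or the HPN-DREAM data.

```python
# Toy Boolean model: each node's next state is a Boolean function of the
# current state; a model is scored by how many observations it reproduces.
def step(state, rules):
    return {node: fn(state) for node, fn in rules.items()}

def simulate(state, rules, steps):
    trace = [dict(state)]
    for _ in range(steps):
        state = step(state, rules)
        trace.append(dict(state))
    return trace

def fit(trace, observations):
    """Fraction of observed (time, node, value) triples the model matches."""
    return sum(trace[t][n] == v for t, n, v in observations) / len(observations)

# Hypothetical three-node signaling fragment (names are assumptions):
rules = {
    "EGF": lambda s: s["EGF"],                   # clamped stimulus
    "ERK": lambda s: s["EGF"],                   # activated by EGF
    "AKT": lambda s: s["EGF"] and not s["ERK"],  # inhibited once ERK is on
}
trace = simulate({"EGF": True, "ERK": False, "AKT": False}, rules, 2)
```

Here AKT switches on transiently and is then shut off by ERK; a family of such models, each scored with something like `fit`, is what the inference returns.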
|
124 |
Learning acyclic probabilistic logic programs from data / Aprendizado de programas lógico-probabilísticos acíclicos. Francisco Henrique Otte Vieira de Faria, 12 December 2017
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for a probabilistic acyclic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel techniques that lead to orders-of-magnitude improvements over the current state of the art, represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs. / O aprendizado de um programa lógico probabilístico consiste em encontrar um conjunto de regras lógico-probabilísticas que melhor se adequem aos dados, a fim de explicar de que forma estão relacionados os atributos observados e predizer a ocorrência de novas instanciações destes atributos. Neste trabalho focamos em programas acíclicos, cujo significado é bastante claro e fácil de interpretar. Propõe-se que o processo de aprendizado de programas lógicos probabilísticos acíclicos deve ser guiado por funções de avaliação importadas da literatura de aprendizado de redes Bayesianas. Neste trabalho são sugeridas novas técnicas para aprendizado de parâmetros que contribuem para uma melhora significativa na eficiência computacional do estado da arte representado pelo pacote ProbLog. Além disto, apresentamos novas técnicas para aprendizado da estrutura de programas lógicos probabilísticos acíclicos.
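The kind of score imported from Bayesian network learning can be sketched for a single Bernoulli-parameterized rule: a data log-likelihood plus a penalty for model size. This is an illustrative caricature with made-up counts, not ProbLog's actual scoring code.

```python
import math

def log_likelihood(p, pos, neg):
    """Log-likelihood of `pos` successes and `neg` failures under probability p."""
    return pos * math.log(p) + neg * math.log(1 - p)

def bic_score(p, pos, neg, num_params=1):
    """Likelihood penalized for model size, as in BIC-style network scores."""
    n = pos + neg
    return log_likelihood(p, pos, neg) - 0.5 * num_params * math.log(n)

# The maximum-likelihood parameter for the rule is the observed frequency.
pos, neg = 30, 10
p_hat = pos / (pos + neg)
```

Structure learning then amounts to searching over candidate rule sets while such a score trades fit against complexity.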
|
125 |
A SLDNF based formalization for updates and abduction. Lakkaraju, Sai Kiran, University of Western Sydney, College of Science, Technology and Environment, School of Computing and Information Technology, January 2001
Knowledge representation and inference are the backbone of artificial intelligence, and logic programming is one of the most widely used knowledge representation tools. Logic programming with deduction/induction/abduction as the reasoning technique serves numerous fields of artificial intelligence. In dynamic domains where knowledge changes constantly, updating the knowledge base is crucial to keeping it stable. This thesis investigates the issues in updating the knowledge base. Two types of logic program based updates are considered: simple fact based updates, where the knowledge base is updated by a simple fact, and rule based updates, where the knowledge base is updated by a rule. An SLDNF based procedural approach is proposed to implement such updates. This thesis also investigates the issues involved in simple fact based and rule based abduction, and it is observed that updates are closely related to abduction. An SLDNF based procedural approach to perform simple fact/rule based updates and abduction is proposed as a result of this study. / Master of Science (Hons)
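The interplay between SLDNF proof and a simple fact based update can be seen in a propositional toy. This is a sketch only: the thesis works with full logic programs, and the atom names and program below are invented.

```python
def sldnf(goals, program):
    """Propositional SLDNF meta-interpreter (a toy sketch).
    `program` maps each atom to a list of alternative bodies (lists of
    literals); a literal 'not p' succeeds when every proof of p fails
    (negation as failure)."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]
    if first.startswith("not "):
        return (not sldnf([first[4:]], program)) and sldnf(rest, program)
    return any(sldnf(list(body) + rest, program)
               for body in program.get(first, []))

# Hypothetical knowledge base; a simple fact based update then adds `rain`.
kb = {"wet": [["rain"]], "dry": [["not wet"]]}
before = sldnf(["dry"], kb)        # dry holds while rain is unprovable
kb["rain"] = [[]]                  # the update: rain becomes a fact
after = sldnf(["dry"], kb)         # dry now fails under SLDNF
```

Adding one fact reverses which goals succeed: negation as failure makes an update's effect non-monotonic, which is exactly why update procedures need care.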
|
126 |
DEFT guessing: using inductive transfer to improve rule evaluation from limited data. Reid, Mark Darren, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007
Algorithms that learn sets of rules describing a concept from its examples have been widely studied in machine learning and have been applied to problems in medicine, molecular biology, planning and linguistics. Many of these algorithms use a separate-and-conquer strategy, repeatedly searching for rules that explain different parts of the example set. When examples are scarce, however, it is difficult for these algorithms to evaluate the relative quality of two or more rules which fit the examples equally well. This dissertation proposes, implements and examines a general technique for modifying rule evaluation in order to improve learning performance in these situations. This approach, called Description-based Evaluation Function Transfer (DEFT), adjusts the way rules are evaluated on a target concept by taking into account the performance of similar rules on a related support task that is supplied by a domain expert. Central to this approach is a novel theory of task similarity that is defined in terms of syntactic properties of rules, called descriptions, which define what it means for rules to be similar. Each description is associated with a prior distribution over classification probabilities derived from the support examples, and a rule's evaluation on a target task is combined with the relevant prior using Bayes' rule. Given some natural conditions regarding the similarity of the target and support task, it is shown that modifying rule evaluation in this way is guaranteed to improve estimates of the true classification probabilities. Algorithms to efficiently implement DEFT are described, analysed and used to measure the effect these improvements have on the quality of induced theories. Empirical studies of this implementation were carried out on two artificial and two real-world domains. 
The results show that the inductive transfer of evaluation bias based on rule similarity is an effective and practical way to improve learning when training examples are limited.
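The core arithmetic of combining a description-derived prior with scarce target counts can be sketched with a Beta-Binomial model. This is a caricature of the idea: the pseudo-counts and the mapping from support-task rules to a prior are assumptions, not the dissertation's exact formulation.

```python
def posterior_accuracy(support_pos, support_neg, target_pos, target_neg):
    """Combine a Beta prior built from support-task counts (rules sharing
    the target rule's description) with the rule's own target-task counts,
    via Bayes' rule for the Beta-Binomial model."""
    alpha = 1 + support_pos          # Laplace-style prior pseudo-counts
    beta = 1 + support_neg
    return (alpha + target_pos) / (alpha + beta + target_pos + target_neg)

# With only two positive target examples, the raw frequency 2/2 = 1.0 would
# rank the rule maximally; a pessimistic prior from similar rules tempers it.
estimate = posterior_accuracy(support_pos=10, support_neg=30,
                              target_pos=2, target_neg=0)
```

The transferred evaluation bias is exactly this pull of the prior: rules whose descriptions fared poorly on the support task need more target evidence before they score well.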
|
127 |
A comparison of SL- and unit-resolution search rules for stratified logic programs. Lagerqvist, Victor, January 2010
There are two symmetrical resolution rules applicable to logic programs: SL-resolution, which yields a top-down refutation, and unit-resolution, which yields a bottom-up refutation. Both resolution principles need to be coupled with a search rule before they can be used in practice. The search rule determines the order in which program clauses are used in the refutation and affects performance, completeness, and the quality of solutions. The thesis surveys exhaustive and heuristic search rules for SL-resolution, and transformation techniques for (general) logic programs that make unit-resolution goal-oriented.

The search rules were implemented as meta-interpreters for Prolog and were benchmarked on a suite of programs incorporating both deterministic and nondeterministic code. Whenever deemed applicable, benchmark programs were permuted with respect to clause and goal ordering to see if this affected the interpreters' performance and termination.

With the help of the evaluation, the conclusion was that alternative search rules for SL-resolution should not be used for performance gains but can in some cases greatly improve the quality of solutions, e.g. in planning or other applications where the quality of an answer correlates with the length of the refutation. It was also established that A* is more flexible than exhaustive search rules, since its behavior can be fine-tuned with weighting, and can in some cases be more efficient than both iterative deepening and breadth-first search. The bottom-up interpreter based on unit-resolution and magic transformation had several advantages over the top-down interpreters, notably for programs where subgoals are recomputed many times. The great disparity in implementation techniques made direct performance comparisons hard, however, and it is not clear whether even an optimized bottom-up interpreter is competitive against a top-down interpreter with tabling of answers.
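The bottom-up half of the comparison, unit resolution run to a fixpoint, reduces in the propositional case to a few lines. This is a naive T_P iteration for illustration, not the thesis's Prolog meta-interpreters; the magic transformation, which restricts derivation to goal-relevant facts, is omitted.

```python
def bottom_up(rules):
    """Naive unit-resolution fixpoint: repeatedly fire rules whose bodies
    consist entirely of already-derived atoms, until nothing new appears."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# A fact is a rule with an empty body (names are illustrative).
rules = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", ["e"])]
```

Every derivable atom is produced exactly once, which is why this style avoids the recomputation of subgoals that hurts top-down interpreters; without a magic transformation, though, it derives everything, not just what a given query needs.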
|
128 |
Visual Compositional-Relational Programming. Zetterström, Andreas, January 2010
In an ever faster changing environment, software developers not only need agile methods, but also agile programming paradigms and tools. A paradigm shift towards declarative programming has begun; a clear indication of this is Microsoft's substantial investment in functional programming. Moreover, several attempts have been made to enable visual programming. We believe that software development is ready for a new paradigm which goes beyond any existing declarative paradigm: visual compositional-relational programming. Compositional-relational programming (CRP) is a purely declarative paradigm, making it suitable for a visual representation. All procedural aspects, including the increasingly important issue of parallelization, are removed from the programmer's consideration and handled in the underlying implementation. The foundation for CRP is a theory of higher-order combinatory logic programming developed by Hamfelt and Nilsson in the 1990s. This thesis proposes a model for visualizing compositional-relational programming. We show that the diagrams are isomorphic with the programs represented in textual form. Furthermore, we show that the model can be used to automatically generate code from diagrams, thus paving the way for a visual integrated development environment for CRP, where programming is performed by combining visual objects in a drag-and-drop fashion. At present, we implement CRP using Prolog. However, in the future we foresee an implementation directly on one of the major object-oriented frameworks, e.g. the .NET platform, with the aim to finally launch relational programming into large-scale systems development.
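The compositional style, building new relations by applying combinators to existing ones rather than writing recursive clauses, can be imitated over finite relations. This is a toy extensional model: the Hamfelt-Nilsson combinators operate on logic programs and are implemented in Prolog, not on sets of pairs like these.

```python
def compose(r, s):
    """Composition combinator: relate x to z when some y bridges r and s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def union_rel(r, s):
    """Disjunction combinator over finite relations."""
    return r | s

# Illustrative data: a parent relation, from which grandparent is composed.
parent = {("ann", "bob"), ("bob", "cid")}
grandparent = compose(parent, parent)
```

Note that `grandparent` is defined without naming a single variable; that point-free quality is what makes a purely visual, drag-and-drop composition of relation boxes plausible.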
|
129 |
Financial Information Integration In the Presence of Equational Ontological Conflicts. Firat, Aykut; Madnick, Stuart E.; Grosof, Benjamin, 01 1900
While there are efforts to establish a single international accounting standard, there are strong current and future needs to handle heterogeneous accounting methods and systems. We advocate a context-based approach to dealing with multiple accounting standards and equational ontological conflicts. In this paper we first define what we mean by equational ontological conflicts and then describe a new approach, using Constraint Logic Programming and abductive reasoning, to reconcile such conflicts among disparate information systems. In particular, we focus on the use of Constraint Handling Rules as a simultaneous symbolic equation solver, which is a powerful way to combine, invert and simplify multiple conversion functions that translate between different contexts. Finally, we present a sample application, built with our prototype implementation, that demonstrates the viability of our approach. / Singapore-MIT Alliance (SMA)
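What it means to combine, invert and simplify conversion functions can be shown on the simplest symbolic form: linear maps between reporting contexts. This sketch uses assumed scale factors and context names; the paper's CHR solver handles general simultaneous equations, not just this closed form.

```python
class Linear:
    """A linear conversion y = a*x + b between two reporting contexts.
    Composition and inversion mirror, for this special case, what a
    symbolic equation solver does with conversion functions."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def apply(self, x):
        return self.a * x + self.b

    def then(self, other):
        # First apply self, then other: other(self(x)).
        return Linear(other.a * self.a, other.a * self.b + other.b)

    def invert(self):
        return Linear(1 / self.a, -self.b / self.a)

# Hypothetical contexts: thousands-of-USD -> USD -> EUR at an assumed rate 0.9.
k_to_usd = Linear(1000, 0)
usd_to_eur = Linear(0.9, 0)
k_to_eur = k_to_usd.then(usd_to_eur)
```

Inversion recovers the reverse translation for free, which is the point of solving the conversion equations symbolically rather than hard-coding each direction between every pair of contexts.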
|
130 |
Test items for and misconceptions of competences in the domain of logic programming. Linck, Barbara, January 2013
Development of competence-oriented curricula is still an important theme in informatics education. Unfortunately, informatics curricula that include the domain of logic programming are still input-oriented or lack detailed competence descriptions. Therefore, the development of a competence model and of descriptions of learning outcomes is essential for the learning process in this domain. Prior research developed both. The next research step is to formulate test items that measure the described learning outcomes. This article describes this procedure and gives example test items. It also relates a school test to the items and shows which misconceptions and typical errors are important to discuss in class. The test results can also confirm or disprove the competence model. This school test is therefore important both for theoretical research and for the concrete planning of lessons. Quantitative analysis in schools is important for the evaluation and improvement of informatics education.
|