1.
The formal verification of hard real-time systems
Cardell-Oliver, Rachel Mary, January 1992
No description available.
2.
Learning Comprehensible Theories from Structured Data
Ng, Kee Siong (kee.siong@rsise.anu.edu.au), January 2005
This thesis is concerned with the problem of learning comprehensible theories from structured data, and covers primarily classification and regression learning. The basic knowledge representation language is set around a polymorphically typed, higher-order logic. The general setup is closely related to the learning-from-propositionalized-knowledge and learning-from-interpretations settings in Inductive Logic Programming. Individuals (also called instances) are represented as terms in the logic. A grammar-like construct called a predicate rewrite system is used to define features in the form of predicates that individuals may or may not satisfy. For learning, decision-tree algorithms of various kinds are adopted.

The scope of the thesis spans both theory and practice. On the theoretical side, I study in this thesis
1. the representational power of different function classes and relationships between them;
2. the sample complexity of some commonly used predicate classes, particularly those involving sets and multisets;
3. the computational complexity of various optimization problems associated with learning, and algorithms for solving them; and
4. the (efficient) learnability of different function classes in the PAC and agnostic PAC models.

On the practical side, the usefulness of the learning system developed is demonstrated with applications in two important domains: bioinformatics and intelligent agents. Specifically, the following are covered in this thesis:
1. a solution to a benchmark multiple-instance learning problem and some useful lessons that can be drawn from it;
2. a successful attempt at a knowledge-discovery problem in predictive toxicology, one that can serve as another proof of concept that real chemical knowledge can be obtained using symbolic learning;
3. a reworking of an exercise in relational reinforcement learning and some new insights and techniques we learned for this interesting problem; and
4. a general approach for personalizing user agents that takes full advantage of symbolic learning.
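The setup this abstract describes (individuals as structured data, features as predicates that individuals may or may not satisfy, and decision trees branching on those features) can be sketched in miniature. Every name below is invented for illustration; none comes from the thesis itself.

```python
# Hypothetical sketch: individuals are structured values, features are
# predicates over them, and a hand-written "decision tree" branches on
# which predicates hold. All names here are illustrative, not from the thesis.

# Individuals: molecules represented as sets of atom labels.
mol_a = {"C", "O", "H"}
mol_b = {"N", "H"}

# Features defined as predicates the individual may or may not satisfy.
features = {
    "contains_O": lambda m: "O" in m,
    "contains_N": lambda m: "N" in m,
}

def classify(individual):
    """A two-node decision tree over the boolean predicate features."""
    if features["contains_O"](individual):
        return "oxide"
    if features["contains_N"](individual):
        return "amine"
    return "other"

print(classify(mol_a))  # oxide
print(classify(mol_b))  # amine
```

A learned tree would choose which predicates to branch on from data; the sketch only shows the representation, not the induction algorithm.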
3.
Higher-order proof translation
Sultana, Nikolai, January 2015
The case for interfacing logic tools together has been made countless times in the literature, but it is still an important research question. There are various logics and respective tools for carrying out formal developments, but practitioners still lament the difficulty of reliably exchanging mathematical data between tools. Writing proof-translation tools is hard. The problem has both a theoretical side (to ensure that the translation is adequate) and a practical side (to ensure that the translation is feasible and usable). Moreover, the source and target proof formats might be less documented than desired (or even necessary), and this adds a dash of reverse-engineering to what should be a system integration task. This dissertation studies proof translation for higher-order logic. We look at the qualitative benefits of locating the translation close to the source (where the proof is generated), close to the target (where the proof is consumed), and in between (as a tool independent of both the proof producer and the proof consumer). Two ideas are proposed to alleviate the difficulty of building proof-translation tools. The first is a proof-translation framework that is structured as a compiler. Its target is specified as an abstract machine, which captures the essential features of its implementations. This framework is designed to be performant and extensible. Second, we study proof transformations that convert refutation proofs from a broad class of consistency-preserving calculi (such as those used by many proof-finding tools) into proofs in validity-preserving calculi (the kind used by many proof-checking tools). The basic method is very simple, and involves applying a single transformation uniformly to all of the source calculi's inferences, rather than applying ad hoc (rule-specific) inference interpretations.
4.
The standard interpretation of higher-order variables in modern logic and the concept of function in mathematics
Constant, Dimitri, 22 January 2016
A logic that utilizes higher-order quantification (quantifying over concepts, or relations, not just over the first-order level of individuals) can be interpreted standardly or nonstandardly, depending on whether one takes an intensional or extensional view of concepts. I argue that this decision is connected to how one understands the mathematical notion of function. A function is often understood as a rule that, when given an argument from a set of objects called a "domain," returns a value from a set of objects called a "codomain." Because a concept can be thought of as a two-valued function (one that indicates whether or not a given object falls under the concept), having an extensional interpretation of higher-order variables (the standard interpretation) requires that one adopt an extensional notion of function. Viewed extensionally, however, a function is understood not as a rule but rather as a correlation associating every element in a domain with an element in a codomain. When the domain is finite, the two understandings of function are equivalent (since one can define a rule for any finite correlation), but with an infinite domain, the latter understanding admits arbitrary functions, or correlations not definable by a finitely specifiable rule.
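The finite-domain equivalence mentioned above can be illustrated with a small sketch. The names and the example function are mine, not from the dissertation: the same finite function is given intensionally, as a rule, and extensionally, as its graph, and the two coincide.

```python
# Illustrative sketch (not from the dissertation): one finite function,
# presented both ways. On a finite domain the two notions coincide.

domain = [0, 1, 2, 3]

rule = lambda n: n * n                  # intensional: a finitely stated rule
graph = {n: n * n for n in domain}      # extensional: a correlation (its graph)

# The rule and the correlation determine the same function on the domain.
assert all(rule(n) == graph[n] for n in domain)

# A concept as a two-valued function: "is even" correlates each element
# of the domain with a truth value.
is_even = {n: (n % 2 == 0) for n in domain}
print(is_even[2])  # True
```

On an infinite domain no such check is possible: an arbitrary correlation need not be generated by any finitely specifiable rule, which is exactly the gap the paragraph above describes.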
Rejection of the standard interpretation is often motivated by the same reasons used to resist the extensional understanding of function. Such resistance is overt in the pronouncements of Leopold Kronecker, but is also implicit in the work of Gottlob Frege, who used an intensional notion of function in his logic. Looking at the problem historically, I argue that the extensional notion of function has been basic to mathematics since ancient times. Moreover, I claim that Gottfried Wilhelm Leibniz's combination of mathematical and metaphysical ideas helped inaugurate an extensional and ultimately model-theoretical approach to mathematical concepts that led to some of the most important applications of mathematics to science (e.g. the use of non-Euclidean geometry in the theory of general relativity). In logic, Frege's use of an intensional notion of function led to contradiction, while Richard Dedekind and Georg Cantor applied the extensional notion of function to develop mathematically revolutionary theories of the transfinite.
5.
Uma introdução à lógica de segunda ordem / An Introduction to Second-Order Logic
Júnior, Enéas Alves Nogueira, 26 April 2013
In this work we investigate some aspects of Second-Order Logic, dividing the theme into three chapters. In the first, we discuss the basic concepts of this logic, such as its set of formulas, deductive systems, and semantics. We also contrast it with First-Order Logic, which is better known, in order to have a kind of model from which we are differentiating. We prove the completeness theorem for Second-Order Logic, due to L. Henkin (Henkin, 1950). In the second chapter we try to understand what happens to the semantics of ZFC set theory (which is a first-order theory) if we add some second-order axioms, creating a theory that we call ZF2. We prove a theorem due to Zermelo (Zermelo, 1930) which says that the models of this theory are essentially the same. We also investigate the question of the Continuum Hypothesis in relation to ZF2 and, through a method of forcing for this theory, show that CH remains without an answer in ZF2. In the third chapter, we treat three different themes: the first is the relation between the completeness property, compactness, and Henkin semantics. Lindström's theorem, which we prove in this section, says essentially that we cannot have completeness and compactness for Second-Order Logic unless we use this semantics. In the second section, we investigate the Hanf number of Second-Order Logic with the standard semantics and, in the third section, we show that it is possible to reduce logics of order higher than second to second order, and that the set of valid formulas of Second-Order Logic is not definable in the structure of the natural numbers.
6.
Functional Query Languages with Categorical Types
Wisnesky, Ryan, 25 February 2014
We study three category-theoretic types in the context of functional query languages (typed lambda-calculi extended with additional operations for bulk data processing). The types we study are: / Engineering and Applied Sciences
7.
A Flexible, Natural Deduction, Automated Reasoner for Quick Deployment of Non-Classical Logic
Mukhopadhyay, Trisha, 20 March 2019
Automated Theorem Provers (ATPs) are software programs which carry out inferences over logico-mathematical systems, often with the goal of finding proofs of some given theorem. ATP systems are enormously powerful computer programs, capable of solving immensely difficult problems. Many automated theorem provers currently exist, such as E, Vampire, SPASS, ACL2, and Coq. However, the available theorem provers share some common problems: (1) current ATP systems tend not to find proofs entirely on their own; they need help from human experts to supply lemmas, guide the proof, etc. (2) No single proof system provides a fully automated platform for both First-Order Logic (FOL) and Higher-Order Logics (HOL). (3) Finally, current proof systems lack an easy way to quickly deploy and reason over new logical systems, which a logic researcher may want to test.
In response to these problems, I introduce the MATR framework. MATR is a platform-independent, codelet-based (independently operating processes) proof system with an easy-to-use Graphical User Interface (GUI), where multiple codelets can be selected based on the formal system desired. MATR provides a platform for different proof strategies like deduction and backward reasoning, along with different formal systems such as non-classical logics. It enables users to design their own proof system by selecting from the list of codelets without needing to write an ATP from scratch.
8.
[en] ALFRED TARSKI: LOGICAL CONSEQUENCE, LOGICAL NOTIONS, AND LOGICAL FORMS / [pt] ALFRED TARSKI: CONSEQÜÊNCIA LÓGICA, NOÇÕES LÓGICAS E FORMAS LÓGICAS
Stival, Stefano Domingues, 17 September 2004
The subject of this paper is the problem of demarcation between logical and extra-logical terms of formal languages, as formulated for the first time by Tarski in his 1936 paper On the Concept of Logical Consequence. After presenting and discussing the demarcation problem, pointing out how it arises from the need for a materially adequate definition of the concept of logical consequence, we analyze the solution presented by Tarski in his posthumously published paper, entitled What Are Logical Notions? Some subsidiary issues, also important for the work as a whole, concern the conception of model and interpretation that springs from the two papers mentioned, and how this conception differs from the standard conception in model theory. Our main goal is to show the place occupied by the concept of logical form in Tarski's work, and how his conception of this concept implies a broader view of the related concept of logical consequence, whose correct characterization makes necessary the stratification of logical forms into a hierarchy of types.
9.
From Language to Thought: On the Logical Foundations of Semantic Theory
Sbardolini, Giorgio, 3 July 2019
No description available.
10.
Reasoning Using Higher-Order Abstract Syntax in a Higher-Order Logic Proof Environment: Improvements to Hybrid and a Case Study
Martin, Alan J., 24 January 2011
We present a series of improvements to the Hybrid system, a formal theory implemented in Isabelle/HOL to support specifying and reasoning about formal systems using higher-order abstract syntax (HOAS). We modify Hybrid's type of terms, which is built definitionally in terms of de Bruijn indices, to exclude at the type level terms with 'dangling' indices. We strengthen the injectivity property for Hybrid's variable-binding operator, and develop rules for compositional proof of its side condition, avoiding conversion from HOAS to de Bruijn indices. We prove representational adequacy of Hybrid (with these improvements) for a lambda-calculus-like subset of Isabelle/HOL syntax, at the level of set-theoretic semantics and without unfolding Hybrid's definition in terms of de Bruijn indices. In further work, we prove an induction principle that maintains some of the benefits of HOAS even for open terms.

We also present a case study of the formalization in Hybrid of a small programming language, Mini-ML with mutable references, including its operational semantics and a type-safety property. This is the largest case study in Hybrid to date, and the first to formalize a language with mutable references. We compare four variants of this formalization based on the two-level approach adopted by Felty and Momigliano in other recent work on Hybrid, with various specification logics (SLs), including substructural logics, formalized in Isabelle/HOL and used in turn to encode judgments of the object language. We also compare these with a variant that does not use an intermediate SL layer. In the course of the case study, we explore and develop new proof techniques, particularly in connection with context invariants and induction on SL statements.
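The contrast at the heart of this abstract, de Bruijn indices versus higher-order abstract syntax, can be sketched in miniature. The following toy is mine, not Hybrid's actual definitions: it shows lambda terms with de Bruijn indices, a check for the 'dangling' indices that Hybrid's improved term type rules out, and the HOAS view of binding.

```python
# Illustrative toy, not Hybrid's definitions: untyped lambda terms with
# de Bruijn indices, where a bound variable is a number counting the
# binders between it and the one that binds it (innermost = 0).
from dataclasses import dataclass

@dataclass
class Var:
    idx: int          # de Bruijn index

@dataclass
class Lam:
    body: "object"    # one binder; no variable name needed

@dataclass
class App:
    fn: "object"
    arg: "object"

# \x. \y. x  becomes  Lam(Lam(Var(1))): the 1 skips one binder (y) to reach x.
k_combinator = Lam(Lam(Var(1)))

def max_dangling(t, depth=0):
    """How many binders the term is short of: nonzero means some index
    'dangles' past every enclosing Lam. A well-formed closed term gives 0."""
    if isinstance(t, Var):
        return max(0, t.idx - depth + 1)
    if isinstance(t, Lam):
        return max_dangling(t.body, depth + 1)
    return max(max_dangling(t.fn, depth), max_dangling(t.arg, depth))

print(max_dangling(k_combinator))  # 0: closed, no dangling indices
print(max_dangling(Var(0)))        # 1: Var(0) dangles outside any binder

# The HOAS view of the same term: object-level binding is represented by a
# meta-level (here, Python) function, so dangling indices cannot arise.
hoas_k = lambda x: lambda y: x
```

Excluding nonzero `max_dangling` terms at the type level, rather than by a side condition, is the flavor of the improvement the abstract describes for Hybrid's term type.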