  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Proof theory and algorithms for answer set programming

Gebser, Martin. January 2011
Answer Set Programming (ASP) is an emerging paradigm for declarative programming, in which a computational problem is specified by a logic program such that particular models, called answer sets, correspond to solutions. ASP faces a growing range of applications demanding high-performance tools able to solve complex problems. ASP integrates ideas from a variety of neighboring fields. In particular, automated techniques to search for answer sets are inspired by Boolean Satisfiability (SAT) solving approaches. While the latter have firm proof-theoretic foundations, ASP lacks formal frameworks for characterizing and comparing solving methods. Furthermore, the sophisticated search patterns of modern SAT solvers, successfully applied in areas such as model checking and verification, are not yet established in ASP solving. We address these deficiencies by, on the one hand, providing proof-theoretic frameworks that allow for characterizing, comparing, and analyzing approaches to answer set computation and, on the other hand, devising modern ASP solving algorithms that integrate and extend state-of-the-art techniques for Boolean constraint solving. We thus contribute to the understanding of existing ASP solving approaches and their interconnections, as well as to their enhancement by incorporating sophisticated search patterns. The central idea of our approach is to identify both atomic and composite constituents of a propositional logic program with Boolean variables. This enables us to describe fundamental inference steps and to selectively combine them in proof-theoretic characterizations of various ASP solving methods. In particular, we show that the different concepts of case analysis applied by existing ASP solvers imply mutual exponential separations regarding their best-case complexities. We also develop a generic proof-theoretic framework amenable to language extensions, and we point out that exponential separations can likewise be obtained through case analyses on such extensions.
We further exploit fundamental inference steps to derive Boolean constraints characterizing answer sets. These enable the conception of ASP solving algorithms that include the search patterns of modern SAT solvers, while also allowing for direct technology transfer between the areas of ASP and SAT solving. Beyond the search for one answer set of a logic program, we address the enumeration of answer sets and of their projections onto a subvocabulary. The algorithms we develop enable repetition-free enumeration in polynomial space without being intrusive, i.e., they do not necessitate any modifications of computations before an answer set is found. Our approach to ASP solving is implemented in clasp, a state-of-the-art Boolean constraint solver that has successfully participated in recent solver competitions. Although we do not address here the implementation techniques of clasp or all of its features, we present the principles behind its success in the context of ASP solving.
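The stable-model condition at the heart of the abstract above can be made concrete: a set of atoms is an answer set of a program exactly when it is the least model of the Gelfond-Lifschitz reduct of the program with respect to that set. The following minimal Python sketch (illustrative only; the encoding and names are ours, and this is not the algorithm used by clasp) checks that condition for propositional normal rules:

```python
def reduct(program, candidate):
    # Gelfond-Lifschitz reduct: drop every rule whose negative body
    # intersects the candidate set, then strip the negative literals.
    # A rule is a triple (head, positive_body, negative_body).
    return [(head, pos) for (head, pos, neg) in program
            if not set(neg) & candidate]

def minimal_model(positive_program):
    # Least model of a negation-free program, by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_program:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, candidate):
    # candidate is an answer set iff it is the least model of its reduct.
    return minimal_model(reduct(program, candidate)) == candidate

# p :- not q.   q :- not p.
program = [("p", [], ["q"]), ("q", [], ["p"])]
assert is_answer_set(program, {"p"})           # {"p"} is stable
assert is_answer_set(program, {"q"})           # so is {"q"}
assert not is_answer_set(program, {"p", "q"})  # but not both together
```

The two stable models of this two-rule program illustrate why answer set search resembles SAT-style case analysis: committing to `p` or to `q` is exactly a branching decision.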
62

Difficulties of secondary three students in writing geometric proofs

Fok, Sui-sum, Selina (霍遂心). January 2001
Master of Education thesis in Education (published or final version).
63

Uma investigação acerca das regras para a negação e o absurdo em dedução natural [An investigation of the rules for negation and the absurd in natural deduction]

Sanz, Wagner de Campos. 28 July 2006
Advisor: Marcelo Esteban Coniglio / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas / Abstract: The purpose of this thesis is to present an elucidation of negation and the absurd for intuitionistic and classical logic in the setting of natural deduction systems. Our study can be seen as a development of a proposal presented by Russell over a hundred years ago, which he presumably abandoned later on. First, we will focus on negation and then, as a consequence of the claims we make for negation, on the absurdity constant. Our starting point is, in fact, a problem of a conceptual nature. We will question the correctness and adequacy of the analysis of negation and the absurd prevailing nowadays in the Gentzen-style natural deduction literature. The concept of hypothesis is the focal point in questioning these analyses. It is a central notion for natural deduction systems, and the purpose of our analysis of this concept is to support the formulation of elucidatory proposals for negation and the absurd within natural deduction systems / Doctorate in Philosophy
64

Development of Middle School Teachers' Knowledge and Pedagogy of Justification: Three Studies Linking Teacher Conceptions, Teacher Practice, and Student Learning

James, Carolyn McCaffrey. 01 June 2016
Justification and argumentation have been identified as important mathematical practices; however, little work has been done to understand the knowledge and pedagogy teachers need to support students in these ambitious practices. Data for this research was drawn from the Justification and Argumentation: Growing Understanding in Algebraic Reasoning (JAGUAR) project. JAGUAR was a multi-year research and professional development project in which 12 middle school math teachers and a group of researchers explored the knowledge and pedagogy needed to support student justifications. This dissertation consists of three case study analyses. The first paper describes the development of teacher conceptions of justification, including their proficiency with justification and purpose of justification in the middle school classroom. The second paper examines the relationship between teacher understanding of empirical reasoning and their use of examples in their classrooms. The final paper describes the relationship between task scaffolding and student forms of reasoning in the context of a justification task. Collectively, this body of work identifies important relationships between teacher knowledge, practice, and student justification activity.
65

[en] BUILDING TABLEAUX FOR INTUITIONISTIC LINEAR LOGIC / [pt] CONSTRUINDO TABLEAUX PARA LÓGICA LINEAR INTUICIONISTA

HUGO HOFFMANN BORGES. 25 April 2022
[en] The main goal of this master's thesis is to build an intuitionistic linear tableau system from a classical relevant sequent calculus. The central steps are: i) applying the strategy of D'Agostino et al. (Tableau Methods for Substructural Logic) to translate the rules of the classical relevant sequent calculus into tableau rules for classical relevant logic (Chapter 3); ii) using the strategy of Meyer et al. to linearize the classical relevant tableaux, obtaining a classical linear tableau system (Chapter 4); iii) building a new intuitionistic linear tableau system in the style of Fitting, in which formulas are annotated with T and F labels (Chapter 5).
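The Fitting-style T/F labels mentioned in step (iii) can be illustrated on a much simpler system: a toy signed tableau for classical propositional logic over conjunction and negation. This is only a sketch under our own encoding, not the linear or relevant tableaux constructed in the dissertation:

```python
# Formulas: an atom is a string; compounds are ("not", a) or ("and", a, b).
# A branch is a list of signed formulas ("T", f) or ("F", f).

def expand(branch):
    # Return the list of successor branches, or None if only atoms remain.
    for sf in branch:
        sign, f = sf
        if isinstance(f, str):
            continue  # atoms need no expansion
        rest = [x for x in branch if x != sf]
        if f[0] == "not":
            flip = "F" if sign == "T" else "T"
            return [rest + [(flip, f[1])]]           # T not A => F A (and dually)
        if sign == "T":                              # T (A and B) => T A, T B
            return [rest + [("T", f[1]), ("T", f[2])]]
        return [rest + [("F", f[1])],                # F (A and B) branches
                rest + [("F", f[2])]]

def closes(branch):
    successors = expand(branch)
    if successors is None:  # fully expanded: closed iff some atom is both T and F
        return any(("T", f) in branch and ("F", f) in branch for _, f in branch)
    return all(closes(b) for b in successors)

def valid(formula):
    # A formula is classically valid iff the tableau for F formula closes.
    return closes([("F", formula)])

assert valid(("not", ("and", "p", ("not", "p"))))  # not (p and not p)
assert not valid("p")
```

Linear and relevant refinements restrict how branch formulas may be reused, which is exactly where the sequent-calculus-to-tableau translation in the dissertation does its real work.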
66

Proof theoretical issues in Martin-Löf Type Theory and Homotopy Type Theory

Girardi, Marco. 29 June 2022
Homotopy Type Theory (HoTT) is a quite recent branch of research in mathematical logic, which provides interesting connections among various areas of mathematics. It was first introduced by Vladimir Voevodsky as a means to develop synthetic homotopy theory, and further advancements suggested that it can be used as a formal foundation for mathematics. Among its notable features, inductive and higher inductive types are of great interest, e.g. allowing for the study of geometric entities (such as spheres) in the setting of HoTT. However, so far in most of the literature higher inductive types are treated in an ad-hoc way; there is no easy general schema stating what a higher inductive type is, thus hindering the study of the related proof theory. Moreover, although Martin-Löf Type Theory has been deeply and widely studied, many proof-theoretic results about the specific variant used in HoTT are folklore, and the proofs are missing. In this final talk, we provide an overview of some results we obtained, aiming to address these problems. In the first part of the talk, we discuss a normalization theorem for the type theory underlying HoTT. In the second part, we propose a general syntax schema encapsulating a relevant class of higher inductive types, potentially allowing for future study of the proof theory of HoTT enriched with such types.
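A higher inductive type pairs ordinary point constructors with path constructors, and the circle is the standard first example. The following Lean-style declaration is schematic only: core Lean does not accept path constructors in `inductive` declarations, and HoTT libraries instead postulate such types or encode them differently — which is precisely the ad-hoc treatment the abstract refers to.

```lean
-- Schematic, NOT valid core Lean: the circle S¹ as a higher inductive type.
inductive S1 : Type
| base : S1            -- a point constructor
| loop : base = base   -- a "higher" constructor: a path, not a point
```

A general syntax schema for such declarations is what would let their proof theory be studied uniformly rather than case by case.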
67

[en] LOGIC PROOFS COMPACTATION / [pt] COMPACTAÇÃO DE PROVAS LÓGICAS

VASTON GONCALVES DA COSTA. 01 June 2007
[en] It is well known that the size of propositional classical proofs can be huge. Proof-theoretical studies have discovered exponential gaps between normal (or cut-free) proofs and their respective non-normal proofs. The task of automatic theorem proving is, on the other hand, usually based on the construction of normal, cut-free, or only-atomic-cuts proofs, since this procedure involves fewer alternative choices. There are familiar tautologies whose cut-free proofs are huge while their non-cut-free proofs are small. The aim of this work is to reduce the weight of propositional deductions. To this end we present two methods. The first, the vertical method, uses extension axioms; in some cases a considerable reduction in proof size is possible. We present a procedure that generates such extension axioms. The second, the horizontal method, adds maximal formulas by means of unification via substitution of propositional variables. We also present a method that generates such a unification during the proof-construction process. The first method applies to Natural Deduction, while the second applies to both Natural Deduction and the Sequent Calculus. The proofs produced correspond, in a certain way, to non-normal proofs (proofs with the cut rule).
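The extension axioms of the vertical method work in the familiar Tseitin style: a fresh propositional variable is introduced as an abbreviation for a compound formula, which is what can shorten proofs exponentially, as in extended resolution. Schematically (our notation, for illustration):

```latex
% Extension axiom: p is a fresh variable abbreviating the formula \varphi
p \leftrightarrow \varphi
% e.g. a chain of abbreviations for a nested conjunction:
p_1 \leftrightarrow (q_1 \wedge q_2), \qquad
p_2 \leftrightarrow (p_1 \wedge q_3), \qquad \dots
```

Each abbreviation lets one short subproof stand in for many repetitions of the formula it names, which is the source of the size reduction.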
68

Sémantique algébrique des ressources pour la logique classique / Algebraic resource semantics for classical logic

Novakovic, Novak. 08 November 2011
The general theme of this thesis is the exploitation of the fruitful interaction between denotational semantics and syntax. Satisfying semantics have been discovered for proofs in intuitionistic and certain linear logics, but for the classical case, solving the problem is notoriously difficult. This work begins with investigations of concrete interpretations of classical proofs in the category of posets and bimodules, resulting in the definition of meaningful invariants of proofs. Then, generalizing this concrete semantics, classical proofs are interpreted in a free symmetric compact closed category where each object is endowed with the structure of a Frobenius algebra. 
The generalization paves the way for a theory of proof nets for classical proofs. Correctness, cut elimination, and the issue of full completeness are addressed through natural order enrichments defined on the Frobenius category, yielding a category with cut elimination and a concept of resources in classical logic. Revisiting our initial concrete semantics, we show that we have a faithful representation of the Frobenius category in the category of posets and bimodules.
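The Frobenius-algebra structure referred to here is the standard one: each object carries a monoid (multiplication μ) and a comonoid (comultiplication Δ) on the same carrier, linked by the Frobenius law, which we recall for orientation:

```latex
(\mu \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \Delta)
  \;=\; \Delta \circ \mu \;=\;
(\mathrm{id} \otimes \mu) \circ (\Delta \otimes \mathrm{id})
```

It is this law that makes the contraction-like and duplication-like behavior of classical proofs compose coherently in the compact closed setting.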
69

Deep Inference and Symmetry in Classical Proofs

Brünnler, Kai. 25 August 2003
In this thesis we present deductive systems for classical propositional and predicate logic which use deep inference, i.e., inference rules apply arbitrarily deep inside formulas, and a certain symmetry, which provides an involution on derivations. Like sequent systems, they have a cut rule which is admissible. Unlike sequent systems, they enjoy various new interesting properties. Not only the identity axiom, but also cut, weakening, and even contraction are reducible to atomic form. This leads to inference rules that are local, meaning that the effort of applying them is bounded, and finitary, meaning that, given a conclusion, there is only a finite number of premises to choose from. The systems also enjoy new normal forms for derivations and, in the propositional case, a cut elimination procedure that is drastically simpler than the ones for sequent systems.
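Deep applicability and atomicity can be made concrete with two rules in the style of system SKS (notation varies across presentations; S{ } is an arbitrary formula context and a an atom): the switch rule s, which may fire arbitrarily deep inside a formula, and atomic contraction ac↓, which contracts atoms only.

```latex
\frac{S\{(A \vee B) \wedge C\}}{S\{A \vee (B \wedge C)\}}\;\mathsf{s}
\qquad
\frac{S\{a \vee a\}}{S\{a\}}\;\mathsf{ac}{\downarrow}
```

Because contraction is restricted to atoms, the cost of applying any single rule is bounded, which is the locality property claimed in the abstract.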
70

Nondeterminism and Language Design in Deep Inference

Kahramanogullari, Ozan. 13 April 2007
This thesis studies the design of deep-inference deductive systems. In systems with deep inference, in contrast to traditional proof-theoretic systems, inference rules can be applied at any depth inside logical expressions. Deep applicability of inference rules provides a rich combinatorial analysis of proofs. Deep inference also makes it possible to design deductive systems that are tailored for computer science applications and otherwise provably not expressible. By applying the inference rules deeply, logical expressions can be manipulated starting from their sub-expressions. This way, we can simulate analytic proofs in traditional deductive formalisms. Furthermore, we can also construct much shorter analytic proofs than in these other formalisms. However, deep applicability of inference rules causes much greater nondeterminism in proof construction. This thesis attacks the problem of dealing with nondeterminism in proof search while preserving the shorter proofs that are available thanks to deep inference. By redesigning the deep-inference deductive systems, some redundant applications of the inference rules are prevented. By introducing a new technique which reduces nondeterminism, it becomes possible to obtain more immediate access to shorter proofs, without breaking certain proof-theoretical properties such as cut elimination. The different implementations presented in this thesis allow us to perform experiments on the techniques we developed and to observe the performance improvements. Within a computation-as-proof-search perspective, we use deep-inference deductive systems to develop a common proof-theoretic language for the two fields of planning and concurrency.
