
Formalizing Combinatorial Matrix Theory

Fernandez, Ariel German G. 10 1900 (has links)
In this thesis we are concerned with the complexity of formalizing reasoning in Combinatorial Matrix Theory (CMT). We are interested in the strength of the bounded arithmetic theories necessary to prove the fundamental results of this field. Bounded arithmetic can be seen as the uniform counterpart of propositional proof complexity.

Perhaps the most famous and fundamental theorem in CMT is König's Min-Max Theorem (KMM), which arises naturally in all areas of combinatorial algorithms. As far as we know, this thesis gives the first feasible proof of KMM. Our results show that Min-Max reasoning can be formalized with uniform Extended Frege.

We show, by introducing new proof techniques, that the first-order theory LA with induction restricted to $\Sigma_1^B$ formulas (i.e., restricted to bounded existential matrix quantification) is sufficient to formalize a large portion of CMT, in particular KMM. $\Sigma_1^B$-LA corresponds to polynomial-time reasoning, also known as ELA.

While we consider matrices over {0,1}, the underlying ring is $\mathbb{Z}$, since we require that $\Sigma A$ compute the number of 1s in the matrix $A$ (which for a 0-1 matrix is simply the sum of all entries). Thus, over $\mathbb{Z}$, LA translates to TC^0-Frege, while, as mentioned before, ELA translates into Extended Frege.

In order to prove KMM in ELA, we need to restrict induction to $\Sigma_1^B$ formulas. The main technical contribution is presented in Claim 4.3.4, Section 4.3.3. In short, we introduce a polynomial-time procedure, whose correctness can be proved in ELA, that works as follows: given an $e \times f$ matrix with $e \leq f$ whose minimum cover has size $e$, the procedure computes a maximum selection of size $e$, row by row.

Furthermore, we show that Menger's Theorem, Hall's Theorem, and Dilworth's Theorem (theorems related to KMM) can also be proved feasibly; in fact, all these theorems are equivalent to KMM, and the equivalence can be shown in LA. We believe that this captures the proof complexity of Min-Max reasoning rather completely.

We also present a new permutation-based algorithm for computing a minimum vertex cover from a maximum matching in a bipartite graph. The algorithm is linear-time and computationally very simple: it permutes the rows and columns of the matrix representation of the bipartite graph in order to extract the vertex cover from a maximum matching in a recursive fashion. It uses properties of the KMM Theorem and is interesting for providing a new permutation (and CMT) perspective on a well-known problem. / Doctor of Philosophy (PhD)
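KMM states that in a bipartite graph the size of a minimum vertex cover equals the size of a maximum matching. As an illustrative sketch (this is the classical alternating-path construction behind that equality, not the thesis's permutation-based algorithm; the graph encoding and names here are our own):

```python
from collections import deque

def min_vertex_cover(left, right, edges, matching):
    """Extract a minimum vertex cover from a maximum matching of a
    bipartite graph via alternating paths (the classical proof of KMM).

    left, right: vertex sets; edges: set of (u, v) pairs with u in left,
    v in right; matching: dict u -> v giving the matched pairs.
    """
    matched_right = {v: u for u, v in matching.items()}
    # BFS from unmatched left vertices, alternating: left -> right along
    # any edge, right -> left along a matching edge.
    reachable = {u for u in left if u not in matching}
    queue = deque(reachable)
    while queue:
        u = queue.popleft()
        for (a, v) in edges:
            if a == u and v not in reachable:
                reachable.add(v)
                w = matched_right.get(v)
                if w is not None and w not in reachable:
                    reachable.add(w)
                    queue.append(w)
    # Cover = unreached left vertices + reached right vertices.
    return ({u for u in left if u not in reachable}
            | {v for v in right if v in reachable})
```

By König's argument, the returned set has exactly one vertex per matched edge and touches every edge of the graph.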

[en] LOGIC PROOFS COMPACTATION / [pt] COMPACTAÇÃO DE PROVAS LÓGICAS

VASTON GONCALVES DA COSTA 01 June 2007 (has links)
[en] It is well known that classical propositional proofs can be huge. Proof-theoretical studies have found exponential gaps between normal (or cut-free) proofs and their corresponding non-normal proofs. Automated theorem provers, on the other hand, are usually based on the construction of normal, cut-free, or atomic-cut-only proofs, since such procedures involve fewer choices. Some tautologies are known to have only huge proofs without the cut rule yet short proofs with it. The aim of this work is to present procedures that reduce the size of propositional proofs. We present two methods. The first, the vertical method, uses extension axioms; in some cases it yields a considerable reduction in proof size, and we present a procedure that generates such extension axioms. The second, the horizontal method, adds maximal formulas by means of unification via substitution of propositional variables; we also present a method that generates this unification during proof construction. The first method applies to Natural Deduction, the second to both Natural Deduction and the Sequent Calculus. The proofs produced correspond, in a certain sense, to non-normal proofs (proofs with the cut rule).
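The vertical method rests on extension axioms: definitions p <-> A that abbreviate subformulas, so repeated material is shared instead of copied. A minimal sketch of the abbreviation step, under our own toy encoding of formulas as nested tuples (the thesis's actual axiom-generation procedure is more involved):

```python
def introduce_extensions(formula):
    """Abbreviate each distinct compound subformula with a fresh extension
    variable p0, p1, ...  Returns the abbreviated formula together with the
    list of extension axioms (pairs: variable, defined subformula).

    Formulas are nested tuples such as ('and', A, B), ('or', A, B),
    ('not', A); a plain string is an atom.  Sharing repeated subformulas
    this way is what lets extension shrink proofs, at the cost of the
    defining axioms.
    """
    defs = {}      # abbreviated subformula -> extension variable
    axioms = []    # (variable, definition) pairs, in introduction order

    def abbrev(f):
        if isinstance(f, str):          # atom: leave as is
            return f
        body = (f[0],) + tuple(abbrev(sub) for sub in f[1:])
        if body not in defs:            # first occurrence: define it
            var = f"p{len(defs)}"
            defs[body] = var
            axioms.append((var, body))
        return defs[body]               # repeated occurrences share the variable

    return abbrev(formula), axioms
```

On a formula with a repeated subformula, both occurrences collapse to the same extension variable, so the abbreviated formula grows with the number of distinct subformulas rather than the full tree size.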

Short Proofs May Be Spacious : Understanding Space in Resolution

Nordström, Jakob January 2008 (has links)
Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning.
The two main bottlenecks for such algorithms are the amounts of time and memory used. Thus, understanding time and memory requirements for clause learning algorithms, and how these requirements are related to one another, is a question of considerable practical importance. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures and relating them to the width of proofs, another measure which has turned out to be intimately connected with both length and space. Formally, the length of a resolution proof is the number of lines, i.e., clauses, the width of a proof is the maximal size of any clause in it, and the space is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. While strong results have been established for length and width, our understanding of space has been quite poor. For instance, the space required to prove a formula is known to be at least as large as the needed width, but it has remained open whether space can be separated from width or whether the two measures coincide asymptotically. It has also been unknown whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are "completely unrelated" in the sense that short proofs can be maximally complex with respect to space. In this thesis, as an easy first observation we present a simplified proof of the recent length-space trade-off result for resolution in (Hertel and Pitassi 2007) and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution. 
Next, we prove that there are families of CNF formulas that can be proven in linear length and constant width but require space growing logarithmically in the formula size, later improving this exponentially to the square root of the size. These results thus separate space and width. Using a related but different approach, we then resolve the question about the relation between space and length by proving an optimal separation between them. More precisely, we show that there are families of CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but for which any proof requires space Omega(n/log n). All of these results are achieved by studying so-called pebbling formulas defined in terms of pebble games over directed acyclic graphs (DAGs) and proving lower bounds on the space requirements for such formulas in terms of the black-white pebbling price of the underlying DAGs. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general phenomenon. Namely, for any CNF formula F and any Boolean function f:{0,1}^d->{0,1}, replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in CNF in the natural way, denoting the resulting formula F[f]. Then if F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F but the minimal space needed for F[f] is lower-bounded by the number of variables that have to be mentioned simultaneously in any proof for F.
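The measures defined above (length = number of clauses in the proof, width = size of the largest clause) can be made concrete. A small sketch under an assumed list encoding of resolution refutations; the space measure, which depends on an erasure schedule, is omitted here:

```python
def resolve(c1, c2, var):
    """Resolve clauses c1, c2 on variable var.  Clauses are frozensets of
    integer literals (negative = negated); var must occur positively in c1
    and negatively in c2."""
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

def proof_measures(proof):
    """proof: list of steps, each either ('axiom', clause) or
    ('res', i, j, var), resolving the clauses derived at steps i and j.
    Returns (derived clauses, length, width)."""
    derived = []
    for step in proof:
        if step[0] == 'axiom':
            derived.append(frozenset(step[1]))
        else:
            _, i, j, var = step
            derived.append(frozenset(resolve(derived[i], derived[j], var)))
    length = len(proof)                      # number of lines in the proof
    width = max(len(c) for c in derived)     # largest clause appearing
    return derived, length, width
```

A refutation ends when the empty clause is derived; for the formula {x}, {-x or y}, {-y} the refutation below has length 5 and width 2.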

O síle slabých rozšíření teorie V0 / On the Power of Weak Extensions of V0

Müller, Sebastian Peter January 2013 (has links)
Title: On the Power of Weak Extensions of V0. Author: Sebastian Müller. Department: Department of Algebra. Supervisor: Prof. RNDr. Jan Krajíček, DrSc., Department of Algebra. Abstract: In this thesis we investigate the power of weak fragments of arithmetic, both from the model-theoretic point of view and from the point of view of proof complexity. The model-theoretic view suggests that a small initial segment of an arbitrary model of bounded arithmetic is a model of a stronger theory. As an example, we show that every polylogarithmic cut of a model of V0 is a model of VNC. Using the well-known correspondence between fragments of bounded arithmetic and provability in various proof systems, we prove a separation between resolution and TC0-Frege on random 3CNF formulas with a certain ratio of clauses to variables. Combining the two results, we obtain a weaker separation result for resolution and bounded-depth Frege proof systems. Keywords: bounded arithmetic, proof complexity, Frege proof system, bounded-depth Frege proof system, resolution.

Towards a Theory of Proofs of Classical Logic

Straßburger, Lutz 07 January 2011 (has links) (PDF)
The questions "What is a proof?" and "When are two proofs the same?" are fundamental to proof theory. But for classical propositional logic, the most widely used logic, we do not yet have a satisfactory answer. This is embarrassing not only for proof theory but also for computer science, where classical logic plays a major role in automated reasoning and logic programming. Likewise, processor architecture is based on classical logic. Every field in which proof search is employed can benefit from a better understanding of the notion of proof in classical logic, and the famous NP-vs-coNP problem can be reduced to the question of whether there is a short (that is, polynomial-size) proof for every Boolean tautology. Normally, proofs are studied as syntactic objects within deductive systems (for example, tableaux, the sequent calculus, resolution, ...). Here we take the view that these syntactic objects (also known as proof trees) should be regarded as concrete representations of proofs as abstract objects, and that such an abstract object can be represented by a tree in resolution or in the sequent calculus. The main theme of this work is to improve our understanding of proofs as abstract objects, from three different angles, studied in the three parts of this thesis: abstract algebra (Chapter 2), combinatorics (Chapters 3 and 4), and complexity (Chapter 5).

[en] SOME RESULTS IN A PROOF-THEORY BASED ON GRAPHS / [pt] ALGUNS RESULTADOS EM TEORIA DE PROVA BASEADO EM GRAFOS

MARCELA QUISPE CRUZ 19 January 2017 (has links)
[en] Traditional proof theory of propositional logic deals with proofs whose size can be huge. Proof-theoretical studies have found exponential gaps between normal (or cut-free) proofs and their corresponding non-normal proofs. Thus the use of proof-graphs, instead of trees or lists, to represent proofs is becoming popular among proof theorists. Proof-graphs provide a better symmetry for the semantics of proofs, a way to study the complexity of propositional proofs, and a route to more efficient theorem provers with respect to proof size. The aim of this work is to reduce the weight/size of deductions. We present proof-graph formalisms intended to capture the logical structure of a deduction and to ease the visualization of its properties. The advantage of these formalisms is that formulas and sub-deductions in Natural Deduction, preserved in the graph structure, can be shared, eliminating unnecessary sub-deductions and yielding a reduced proof. We give a precise definition of proof-graphs for purely implicational logic, then extend this result to full propositional logic and show how to reduce these representations (by eliminating maximal formulas) in such a way that a normalization theorem can be proved by counting the number of maximal formulas in the original derivation. Strong normalization is a direct consequence of this normalization, since every reduction decreases the corresponding measures of derivation complexity. Continuing our study of the complexity of proofs, the present approach also provides graph representations for first-order logic, deep inference, and bi-intuitionistic logic.
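A maximal formula is an occurrence that is the conclusion of an introduction rule and immediately the major premise of the matching elimination rule; counting such occurrences bounds the number of reduction steps in normalization. A sketch under a toy tuple encoding of derivations (our own encoding, not the thesis's formalism):

```python
def count_maximal(node):
    """Count maximal formulas in a Natural Deduction derivation.

    A derivation node is (rule, conclusion, premise_derivations...);
    leaves are ('hyp', formula).  A maximal formula occurs wherever the
    major premise (taken here as the first premise) of an elimination
    rule 'elim_X' is concluded by the matching introduction rule
    'intro_X'.  This count is the measure that bounds normalization.
    """
    rule = node[0]
    # Recurse into all sub-derivations first.
    count = sum(count_maximal(p) for p in node[2:])
    if rule.startswith('elim_'):
        major = node[2]  # first premise is the major one
        if major[0] == 'intro_' + rule[len('elim_'):]:
            count += 1   # intro immediately followed by matching elim
    return count
```

A derivation with no such intro/elim pair is normal; each reduction step removes one maximal formula (possibly creating others lower in the derivation), which is why counting them yields a normalization measure.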
