71

Tractable Inference Relations

Givan, Robert, McAllester, David 01 December 1991 (has links)
We consider the concept of local sets of inference rules. Locality is a syntactic condition on rule sets which guarantees that the inference relation defined by those rules is decidable in polynomial time. Unfortunately, determining whether a given rule set is local can be difficult. In this paper we define inductive locality, a strengthening of locality, and give a procedure which can automatically recognize the locality of any inductively local rule set. Inductive locality seems to be more useful than the earlier concept of strong locality. We show that locality, as a property of rule sets, is undecidable in general.
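A toy illustration of why locality yields tractability (this is not the paper's recognition procedure; the rule encoding and example atoms are invented for illustration): if every derivation of a query can be restricted to a fixed polynomial-size set of candidate atoms, forward chaining saturates in polynomial time.

```python
# Hypothetical sketch: forward chaining restricted to a finite candidate set.
# When a rule set is local, derivations need only mention formulas built from
# subterms of the query, so this saturation loop runs in time polynomial in
# the size of the query.

def saturate(facts, rules, candidates):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs;
    candidates: the atoms a derivation is allowed to mention."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion in derived or conclusion not in candidates:
                continue
            if all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Example: transitivity of <= over the constants a, b, c.
atoms = {("le", x, y) for x in "abc" for y in "abc"}
rules = [([("le", x, y), ("le", y, z)], ("le", x, z))
         for x in "abc" for y in "abc" for z in "abc"]
closure = saturate({("le", "a", "b"), ("le", "b", "c")}, rules, atoms)
assert ("le", "a", "c") in closure
```

Each pass of the loop either adds a candidate atom or terminates, so at most |candidates| passes occur, each costing O(|rules|) work: polynomial overall.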
72

Natural Language Based Inference Procedures Applied to Schubert's Steamroller

Givan, Robert, McAllester, David, Shalaby, Sameer 01 December 1991 (has links)
We have previously argued that the syntactic structure of natural language can be exploited to construct powerful polynomial-time inference procedures. This paper supports those earlier arguments by demonstrating that a natural-language-based polynomial-time procedure can solve Schubert's steamroller in a single step.
73

Observations on Cognitive Judgments

McAllester, David 01 December 1991 (has links)
It is obvious to anyone familiar with the rules of the game of chess that a king on an empty board can reach every square. It is true, but not obvious, that a knight can reach every square. Why is the first fact obvious but the second fact not? This paper presents an analytic theory of a class of obviousness judgments of this type. Whether or not the specifics of this analysis are correct, it seems that the study of obviousness judgments can be used to construct integrated theories of linguistics, knowledge representation, and inference.
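The knight fact, while not obvious to a human, can be checked mechanically; a short breadth-first search (an illustration added here, not taken from the paper) confirms that a knight starting anywhere reaches all 64 squares of an empty board.

```python
# Verify the "true but not obvious" fact from the abstract: a knight on an
# empty 8x8 board can reach every square.
from collections import deque

def knight_reachable(start=(0, 0)):
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 8 and 0 <= nc < 8 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

assert len(knight_reachable()) == 64  # every square is reachable
```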
74

Grammar Rewriting

McAllester, David 01 December 1991 (has links)
We present a term rewriting procedure based on congruence closure that can be used with arbitrary equational theories. This procedure is motivated by the pragmatic need to prove equations in equational theories where confluence cannot be achieved. The procedure uses context-free grammars to represent equivalence classes of terms: it rewrites grammars rather than terms, and uses congruence closure to maintain certain congruence properties of the grammar. Grammars provide concise representations of large term sets; infinite term sets can be represented with finite grammars, and exponentially large term sets with linear-sized grammars.
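A minimal sketch of the representational idea only (the rewriting procedure and the congruence closure machinery are not shown; the grammar encoding is invented for illustration): a finite context-free grammar whose nonterminals denote term sets can stand for an infinite set of terms, which we can enumerate up to a depth bound.

```python
# The finite grammar below denotes the infinite term set {0, s(0), s(s(0)), ...}.
grammar = {
    "N": [("0",),        # N -> 0
          ("s", "N")],   # N -> s(N)
}

def enumerate_terms(nonterminal, depth):
    """Yield terms derivable from `nonterminal` in at most `depth` steps."""
    if depth == 0:
        return
    for production in grammar[nonterminal]:
        if len(production) == 1:              # leaf production
            yield production[0]
        else:
            head, arg = production            # unary constructor production
            for sub in enumerate_terms(arg, depth - 1):
                yield f"{head}({sub})"

print(list(enumerate_terms("N", 4)))
# ['0', 's(0)', 's(s(0))', 's(s(s(0)))']
```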
75

Applications of Games to Propositional Proof Complexity

Hertel, Alexander 19 January 2009 (has links)
In this thesis we explore a number of ways in which combinatorial games can be used to help prove results in the area of propositional proof complexity. The results can be divided into two sets: the first is dedicated to the study of Resolution space (memory) requirements, whereas the second is centered on formalizing the notion of 'dangerous' reductions.

The first group of results investigates Resolution space measures by asking questions of the form 'Given a formula F and integer k, does F have a [Type of Resolution] proof with [Type of Resource] at most k?'. We refer to this as a proof complexity resource problem, and provide comprehensive results for several forms of Resolution as well as various resources. These results include the PSPACE-completeness of Tree Resolution clause space (and the Prover/Delayer game), the PSPACE-completeness of Input Resolution derivation total space, and the PSPACE-hardness of Resolution variable space. This research has theoretical as well as practical motivations: proof complexity research has focused on the size of proofs, and Resolution space requirements are an interesting new theoretical area of study. In more practical terms, the Resolution proof system forms the underpinnings of all modern SAT-solving algorithms, including clause learning. In practice, the limiting factor on these algorithms is memory, so there is a strong motivation for better understanding it as a resource.

With the second group of results we investigate and formalize what it means for a reduction to be 'dangerous'. SAT-solving necessarily employs reductions in order to translate from other domains to SAT, where the power of highly optimized algorithms can be brought to bear. Researchers have empirically observed that reductions can unfortunately map easy instances from the input domain to hard SAT instances. We develop a non-Hamiltonicity proof system and combine it with additional results concerning the Prover/Delayer game from the first part of this thesis, as well as proof complexity results for intuitionistic logic, to provide the first formal examples of harmful and beneficial reductions, ultimately leading to a framework for studying and comparing translations from one language to another.
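For readers unfamiliar with the proof system studied here: Resolution derives new clauses by resolving two clauses on a complementary literal, and clause space measures how many clauses must be held in memory at once during a derivation. A minimal sketch of the rule itself (an illustration only; the thesis's constructions are far more involved, and the clause encoding is ours):

```python
# Clauses as frozensets of integer literals; negation is arithmetic negation.
def resolve(c1, c2, pivot):
    """Resolve c1 (containing pivot) with c2 (containing -pivot)."""
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})

# Refuting {x}, {-x, y}, {-y}: derive the empty clause (a contradiction).
step1 = resolve(frozenset({1}), frozenset({-1, 2}), 1)  # yields {y}
step2 = resolve(step1, frozenset({-2}), 2)              # yields {}
assert step2 == frozenset()
```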
77

An Attempt to Automate NP-Hardness Reductions via SO∃ Logic

Nijjar, Paul January 2004 (has links)
We explore the possibility of automating NP-hardness reductions. We motivate the problem from an artificial intelligence perspective, then propose the use of second-order existential (SO∃) logic as a representation language for decision problems. Building upon the theoretical framework of J. Antonio Medina, we explore the possibility of implementing seven syntactic operators. Each operator transforms SO∃ sentences in a way that preserves NP-completeness. We subsequently propose a program which implements these operators, and discuss a number of theoretical and practical barriers to this task. We prove that determining whether two SO∃ sentences are equivalent is as hard as GRAPH ISOMORPHISM, and that determining whether an arbitrary SO∃ sentence represents an NP-complete problem is undecidable.
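As an illustration of the representation the thesis builds on (a standard descriptive-complexity example, not one of the seven operators): by Fagin's theorem, SO∃ logic captures exactly NP, so an NP problem such as GRAPH 3-COLORABILITY can be written as an SO∃ sentence by quantifying existentially over color predicates R, G, B.

```latex
% GRAPH 3-COLORABILITY as an SO-exists sentence over graphs with edge
% relation E: there exist three color classes covering every vertex such
% that no edge joins two vertices of the same color.
\[
\exists R\, \exists G\, \exists B\;
  \forall x \,\bigl( R(x) \lor G(x) \lor B(x) \bigr)
  \;\land\;
  \forall x \forall y \,\Bigl( E(x,y) \rightarrow
    \neg\bigl(R(x) \land R(y)\bigr) \land
    \neg\bigl(G(x) \land G(y)\bigr) \land
    \neg\bigl(B(x) \land B(y)\bigr) \Bigr)
\]
```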
78

A linearized DPLL calculus with learning

Arnold, Holger January 2007 (has links)
This paper describes the proof calculus LD for clausal propositional logic, a linearized form of the well-known DPLL calculus extended by clause learning. It is motivated by the need to model how current SAT solvers based on clause learning work, while abstracting from decision heuristics and implementation details. The calculus is proved sound and terminating. Further, it is shown that both the original DPLL calculus and the conflict-directed backtracking calculus with clause learning, as implemented in many current SAT solvers, are complete and proof-confluent instances of the LD calculus.
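A minimal Python sketch of the classical DPLL procedure that LD linearizes (clause learning and decision heuristics omitted; the clause representation is invented for illustration, not the paper's formalism):

```python
# Clauses as lists of integer literals; a negative integer is a negation.
def dpll(clauses, assignment=()):
    # Remove satisfied clauses, then delete falsified literals.
    clauses = [c for c in clauses if not any(l in assignment for l in c)]
    clauses = [[l for l in c if -l not in assignment] for c in clauses]
    if any(len(c) == 0 for c in clauses):
        return None                     # conflict: backtrack
    if not clauses:
        return assignment               # all clauses satisfied
    units = [c[0] for c in clauses if len(c) == 1]
    if units:                           # unit propagation
        return dpll(clauses, assignment + (units[0],))
    lit = clauses[0][0]                 # branch on a decision literal
    return (dpll(clauses, assignment + (lit,))
            or dpll(clauses, assignment + (-lit,)))

# (p or q) and (not p or q) and (not q or r)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
print(model)  # (1, 2, 3): a satisfying assignment
```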
79

Using theorem proving and algorithmic decision procedures for large-scale system verification

Ray, Sandip 28 August 2008 (has links)
Abstract not available.
80

A verified framework for symbolic execution in the ACL2 theorem prover

Swords, Sol Otis 11 February 2011 (has links)
Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, in applying theorem proving methodologies to industrial-scale hardware and software systems, a large amount of user interaction is required in order to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified, and prove numerous lemmas in order to allow the theorem proving program to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: the proof may either fail because the conjecture was false, or because the prover required more help from the user in order to reach the desired conclusion.

We have implemented a symbolic execution framework inside the ACL2 theorem prover in order to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using BDDs to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false.

Our framework seeks to reduce the amount of necessary user interaction in proving theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the detailed implementation of the system. Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time spent by the user in trying to determine why a proof failed.
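A toy contrast illustrating the capacity claim above (an invented example, not the framework's interface): exhaustively testing a finite-domain conjecture such as De Morgan's law over 8-bit vectors requires enumerating 2^16 cases, whereas bit-level symbolic execution with BDDs covers the same space without enumeration.

```python
# Exhaustive check of De Morgan's law, ~(x & y) == ~x | ~y, over 8-bit
# vectors: 2^16 cases. Symbolic bit-level methods decide this without
# enumeration, which is why their capacity vastly exceeds testing.
from itertools import product

def demorgan_holds(x, y, width=8):
    mask = (1 << width) - 1
    return (~(x & y)) & mask == (~x | ~y) & mask

assert all(demorgan_holds(x, y) for x, y in product(range(256), repeat=2))
print("verified exhaustively over 2^16 cases")
```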
