1 |
A Probabilistic Study of 3-SATISFIABILITY
Aytemiz, Tevfik, 06 July 2001 (has links)
Discrete optimization problems are defined by a finite set of solutions together with an objective function value assigned to each solution. Local search algorithms provide useful tools for addressing a wide variety of intractable discrete optimization problems. Each such algorithm offers a distinct set of rules to intelligently explore the solution space in the hope of finding an optimal or near-optimal solution within a reasonable amount of computing time.
This research studies and analyzes randomly generated instances of 3-SATISFIABILITY to gain insights into the structure of the underlying solution space. Two random variables are defined and analyzed to assess the probability that a fixed solution will be assigned a particular objective function value in a randomly generated instance of 3-SATISFIABILITY. Then, a random vector is defined and analyzed to investigate how the solutions in the solution space are distributed over their objective function values. These results are then used to define a stopping criterion for local search algorithms applied to MAX 3-SATISFIABILITY.
This research also analyzes and compares the effectiveness of two local search algorithms, tabu search and random restart local search, on MAX 3-SATISFIABILITY. Computational results with tabu search and random restart local search on randomly generated instances of 3-SATISFIABILITY are reported. These results suggest that, given a limited computing budget, tabu search offers an effective alternative to random restart local search; on the other hand, the two algorithms yield similar results in terms of the best solution found. The computational results also suggest that for randomly generated instances of 3-SATISFIABILITY (of the same size), the globally optimal objective function values are typically concentrated over a narrow range. / Ph. D.
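The objective function the abstract refers to can be made concrete with a small sketch (illustrative only, not the dissertation's code): in MAX 3-SATISFIABILITY the objective value of a solution is the number of clauses it satisfies, and for a random clause on three distinct variables with random signs, a fixed assignment satisfies the clause with probability 7/8 — the kind of fact underlying the probabilistic analysis described above.

```python
import random

def random_3sat(num_vars, num_clauses, rng):
    """Random 3-SAT instance: each clause has 3 distinct variables, each
    negated with probability 1/2.  Literals are DIMACS-style signed ints."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, num_vars + 1), 3)]
            for _ in range(num_clauses)]

def satisfied_clauses(instance, assignment):
    """MAX 3-SAT objective: number of clauses with at least one true literal."""
    return sum(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in instance)

rng = random.Random(0)
inst = random_3sat(20, 91, rng)   # clause/variable ratio ~4.55
sol = {v: rng.random() < 0.5 for v in range(1, 21)}
obj = satisfied_clauses(inst, sol)
print(obj, "of", len(inst), "clauses satisfied")  # expected value is 91 * 7/8 ~ 79.6
```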
|
2 |
Exact Algorithms for Exact Satisfiability Problems
Dahllöf, Vilhelm, January 2006 (has links)
This thesis presents exact means to solve a family of NP-hard problems. Starting with the well-studied Exact Satisfiability problem (XSAT), parents, siblings and daughters are derived and studied, each with interesting practical and theoretical properties. While developing exact algorithms to solve the problems, we gain new insights into their structure and mutual similarities and differences.
Given a Boolean formula in CNF, the XSAT problem asks for an assignment to the variables such that each clause contains exactly one true literal. For this problem we present an O(1.1730^n) time algorithm, where n is the number of variables. XSAT is a special case of the General Exact Satisfiability problem, which asks for an assignment such that in each clause exactly i literals are true. For this problem we present an algorithm which runs in O(2^((1-ε)n)) time, with 0 < ε < 1 for every fixed i; for i = 2, 3 and 4 we have running times in O(1.4511^n), O(1.6214^n) and O(1.6848^n) respectively.
For the counting problems we present an O(1.2190^n) time algorithm which counts the number of models for an XSAT instance. We also present algorithms for #2SAT_w and #3SAT_w, two well-studied Boolean problems. The algorithms have running times in O(1.2561^n) and O(1.6737^n) respectively.
Finally we study optimisation problems: as a variant of the Maximum Exact Satisfiability problem, consider the problem of finding an assignment exactly satisfying a maximum number of clauses while the rest are left with no true literal. This problem is reducible to #2SAT_w without the addition of new variables and is thus solvable in time O(1.2561^n). Another interesting optimisation problem is to find two XSAT models which differ in as many variables as possible. This problem is shown to be solvable in O(1.8348^n) time.
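The XSAT semantics can be illustrated with a minimal sketch (exactly one true literal per clause, versus at least one for ordinary SAT); the brute-force 2^n enumeration below is only for illustration and is precisely the baseline the O(1.1730^n) algorithm improves on:

```python
from itertools import product

def is_xsat_model(clauses, assignment):
    """XSAT semantics: every clause contains EXACTLY one true literal
    (ordinary SAT asks only for at least one)."""
    return all(sum(assignment[abs(l)] == (l > 0) for l in clause) == 1
               for clause in clauses)

def xsat_models(clauses, num_vars):
    """Exhaustive enumeration of XSAT models -- illustration only."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if is_xsat_model(clauses, assignment):
            yield assignment

# (x1, x2, x3) and (not x1, x2), each read with exactly-one semantics
clauses = [[1, 2, 3], [-1, 2]]
models = list(xsat_models(clauses, 3))
print(len(models))   # → 1: only x1=x2=False, x3=True works
```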
|
4 |
On the bridge between constraint satisfaction and Boolean satisfiability
Petke, Justyna, January 2012 (has links)
A wide range of problems can be formalized as a set of constraints that need to be satisfied; such a model is called a constraint satisfaction problem (CSP). Another way to represent a problem is to express it as a formula in propositional logic, or, in other words, a Boolean satisfiability problem (SAT). In the quest for efficient algorithms for solving instances of CSP and SAT, specialised software has been developed. It is, however, not clear when we should choose a SAT-solver over a constraint solver (and vice versa). CSP-solvers are known for their domain-specific reasoning, whereas SAT-solvers are considered to be remarkably fast on Boolean instances. In this thesis we tackle these issues by investigating the connections between CSP and SAT. In order to answer the question of why SAT-solvers are so efficient on certain classes of CSP instances, we first present the various ways one can encode a CSP instance into SAT. Next, we show that with some encodings SAT-solvers simulate the effects of enforcing a form of local consistency, called k-consistency, in expected polynomial time. Thus SAT-solvers are able to solve CSP instances of bounded-width structure efficiently, in contrast to conventional constraint solvers. By considering the various ways one can encode CSP domains into SAT, we give theoretical reasons for choosing a particular SAT encoding for several important classes of CSP instances. In particular, we show that with this encoding many problem instances that can be solved in polynomial time will still be easily solvable once they are translated into SAT. Furthermore, we show that this is not true for several other encodings. Finally, we compare the various ways one can use a SAT-solver to solve the classical problem of the pigeonhole principle. We perform both a theoretical and an empirical comparison of the various encodings.
We conclude that none of the known encodings for the classical representation of the problem will result in an efficiently-solvable SAT instance. Thus in this case constraint solvers are a much better choice.
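One standard CSP-to-SAT translation studied in this literature is the direct encoding; the sketch below is an illustration (not the thesis's code) that builds it for inequality constraints only and uses the pigeonhole problem mentioned above as an example:

```python
def direct_encode(domains, neq_pairs):
    """Direct encoding of a CSP into CNF: one Boolean per (variable,
    value) pair, 'at least one value' and 'at most one value' clauses
    per variable, plus conflict clauses.  Only inequality constraints
    are handled here -- enough for the pigeonhole example."""
    var_id = {}
    def b(v, d):   # Boolean variable for "CSP variable v takes value d"
        return var_id.setdefault((v, d), len(var_id) + 1)

    cnf = []
    for v, dom in domains.items():
        cnf.append([b(v, d) for d in dom])                      # at-least-one
        cnf += [[-b(v, dom[i]), -b(v, dom[j])]                  # at-most-one
                for i in range(len(dom)) for j in range(i + 1, len(dom))]
    for u, v in neq_pairs:                                      # u != v
        for d in sorted(set(domains[u]) & set(domains[v])):
            cnf.append([-b(u, d), -b(v, d)])
    return cnf, var_id

# Pigeonhole: 3 pairwise-different variables over a 2-value domain (UNSAT)
domains = {p: [0, 1] for p in "abc"}
cnf, ids = direct_encode(domains, [("a", "b"), ("a", "c"), ("b", "c")])
print(len(cnf), "clauses over", len(ids), "Booleans")  # → 12 clauses over 6 Booleans
```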
|
5 |
Nogood Processing in CSPs
Katsirelos, George, 19 January 2009 (has links)
The constraint satisfaction problem is an NP-complete problem that provides a convenient framework for expressing many computationally hard problems. In addition, domain knowledge can be efficiently integrated into CSPs, providing a potentially exponential speedup in some cases.
The CSP is closely related to the satisfiability problem, and many of the techniques developed for one have been transferred to the other. However, the recent dramatic improvements in SAT solvers that result from learning clauses during search have not been transferred successfully to CSP solvers. In this thesis we propose that this failure is due to a fundamental restriction of nogood learning, which is intended to be the analogue of clause learning in CSPs. This restriction means that nogood learning can exhibit a superpolynomial slowdown compared to clause learning in some cases. We show that the restriction can be lifted, delivering promising results.
Integration of nogood learning in a CSP solver, however, presents an additional challenge, as a large body of domain knowledge is typically encoded in the form of domain specific propagation algorithms called global constraints. Global constraints often completely eliminate the advantages of nogood learning. We demonstrate generic methods that partially alleviate the problem irrespective of the type of global constraint. We also show that
more efficient methods can be integrated into specific global constraints and demonstrate the feasibility of this approach on several widely used global constraints.
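The basic object involved can be sketched as follows (illustrative only): a standard nogood is a set of variable=value pairs that cannot all hold in any solution, and it is the restriction to this form, as opposed to the richer clauses SAT solvers learn, that the thesis identifies as the source of the slowdown:

```python
def violates_nogood(assignment, nogood):
    """True if the assignment contains every var=value pair of the
    nogood, i.e. it repeats a combination already known to fail."""
    return all(assignment.get(var) == val for var, val in nogood)

# Dead end found during search: x=1 together with y=2 cannot be extended
# to a solution, so the solver records the nogood {x=1, y=2} and prunes
# any future branch that repeats it.
nogoods = [{("x", 1), ("y", 2)}]
print(violates_nogood({"x": 1, "y": 2, "z": 0}, nogoods[0]))  # → True
print(violates_nogood({"x": 1, "y": 3}, nogoods[0]))          # → False
```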
|
6 |
SAT-based Automated Design Debugging: Improvements and Application to Low-power Design
Le, Bao, 20 November 2012 (has links)
With the growing complexity of modern VLSI designs, design errors become increasingly common. Design debugging today emerges as a bottleneck in the design flow, consuming up to 30% of the overall design effort. Unfortunately, design debugging is still a predominantly manual process in the industry. To tackle this problem, we enhance existing automated debugging tools and extend their applications to different design domains.
The first contribution improves the performance of automated design debugging tools by using structural circuit properties, namely dominance relationships and non-solution implications. Overall, a 42% average reduction in solving run-time demonstrates the efficacy of this approach.
The second contribution presents an automated debugging methodology for clock-gating designs. Using clock-gating properties, we optimize existing debugging techniques to localize and rectify the design errors introduced by clock-gating implementations. Experiments show a 6% average reduction in debugging time with 80% of the power savings retained.
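To make "dominance relationships" concrete, here is a hedged sketch (the function name and the iterative computation are illustrative assumptions, not the thesis's implementation): gate u dominates gate v when every path from v to a primary output passes through u, which lets a debugger propagate conclusions between suspects without extra solver calls.

```python
def circuit_dominators(fanout, outputs):
    """For each gate, the set of gates through which every path to a
    primary output passes.  Standard iterative dataflow fixpoint over
    the fanout graph of a combinational circuit (a DAG)."""
    nodes = set(fanout) | {s for succs in fanout.values() for s in succs}
    dom = {n: ({n} if n in outputs else set(nodes)) for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes - set(outputs):
            succs = fanout.get(n, [])
            new = ({n} | set.intersection(*(dom[s] for s in succs))
                   if succs else {n})
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Tiny cone of logic: g1 feeds g2, which feeds the only output.
fanout = {"g1": ["g2"], "g2": ["out"]}
dom = circuit_dominators(fanout, outputs={"out"})
print(sorted(dom["g1"]))   # → ['g1', 'g2', 'out']
```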
|
9 |
Testing and Security Related Considerations in Embedded Software
Pierce, Luke, 01 December 2016 (has links)
The continued increase in the use of microprocessors in embedded systems has caused a proliferation of embedded software in small devices. In practice, many of these devices are difficult to update to fix security flaws and software errors, which puts a premium on ensuring secure and reliable software before the device is released. With the growing need to build test and diagnostic capabilities into embedded devices, use of the JTAG interface has grown. While the interface was originally intended to shift data into and out of a chip's scan chains for test, its generic framework has allowed its features to expand. For embedded microprocessors, the interface allows halting execution, inserting instructions, reprogramming the software, and reading from memory. While this creates a powerful debugging system, it also grants unlimited access to malicious users. Such a user can copy the intellectual property on the device, disable digital rights management routines, or alter the device's behavior. A novel method to secure JTAG access through a multi-tiered permission system is presented in this dissertation.
Static code analysis can be used to verify the functionality of embedded software. Ideally, software should be tested in a way that guarantees correct behavior across all possible execution paths. While in practice this is typically infeasible due to the enormous number of paths in the system, automated test systems can help maximize the amount of code covered. In addition, such methods can identify non-executable software statements, which can indicate software defects or mark sections of software that should not be targeted for testing. New static code analysis methods are presented in this dissertation.
One technique uses supersets of software solution spaces to correctly identify unreachable software code in complex systems. Another presented technique automatically generates a set of test vectors to quickly maximize the number of code blocks executed by the set of test vectors. It is shown that such a method can be significantly faster than traditional methods.
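The unreachable-code idea can be illustrated with a toy sketch (exhaustive search over Boolean inputs stands in for the dissertation's superset-based analysis; the names are hypothetical): a branch is dead exactly when its path condition is unsatisfiable over the inputs.

```python
from itertools import product

def branch_reachable(path_condition, num_inputs):
    """A branch is reachable iff its path condition is satisfiable over
    the program inputs; brute force stands in for a real solver here."""
    return any(path_condition(*bits)
               for bits in product([False, True], repeat=num_inputs))

# if a and b: ...        <- reachable for some input
# if a and not a: ...    <- contradictory path condition: dead code
print(branch_reachable(lambda a, b: a and b, 2))        # → True
print(branch_reachable(lambda a, b: a and not a, 2))    # → False
```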
|
10 |
On the automated verification of symmetric-key cryptographic algorithms: an approach based on SAT-solvers
Lafitte, Frédéric, 19 September 2017 (has links)
A cryptographic protocol is a structured exchange of messages protected by means of cryptographic algorithms. Computer security in general relies heavily on these protocols and algorithms; in turn, these rely absolutely on smaller components called primitives. As technology advances, computers have reached a cost and a degree of miniaturisation conducive to their proliferation throughout society in the form of software-controlled, network-enabled things. As these things find their way into environments where security is critical, their protection ultimately relies on primitives; if a primitive fails, all security solutions (protocols, policies, etc.) built on top of it are likely to offer no security at all. Lightweight symmetric-key primitives, in particular, will play a critical role. The security of protocols is frequently verified using formal and automated methods. Concerning algorithms and public-key primitives, formal proofs are often used, although they are somewhat error-prone, and current efforts aim to automate them. Symmetric-key primitives, on the other hand, are still analysed in a rather ad hoc manner. Since their security is only guaranteed by the test of time, they traditionally have a built-in security margin. Despite being paramount to the security of embedded devices, lightweight primitives appear to have a smaller security margin, and researchers would greatly benefit from automated tools that strengthen the test of time. In their seminal work back in 2000, Massacci and Marraro proposed to formulate primitives in propositional logic and to use SAT solvers to automatically verify their properties. At that time, SAT solvers were quite different from what they have become today; the continuous improvement of their performance makes them an even better choice as a verification back-end.
The performance of SAT solvers improved so much that, starting around 2006, some cryptanalysts began to use them, though mostly to speed up their attacks. This thesis introduces the framework CryptoSAT and shows its advantages for the purpose of verification. / Computer security rests largely on cryptographic mechanisms, which in turn depend on even more fundamental components called primitives; if a primitive fails, all the security that depends on it is doomed to fail. Computers have reached a cost and a degree of miniaturisation conducive to their proliferation in the form of embedded systems, which generally offer few computational resources, notably in environments where security is paramount. Their security therefore relies heavily on so-called symmetric-key primitives, since these are best suited to the limited resources available to embedded systems. Unlike every other cryptographic mechanism, symmetric-key primitives are not mathematically proven to be free of security flaws: while the protection offered by cryptography can, in general, be proved formally (in a limited model), and sometimes by automated methods that leave little room for error, the protection offered by symmetric-key primitives is guaranteed only by the "test of time", i.e. by the (lasting) resistance of these primitives to the attacks devised by the cryptology research community. To compensate for the absence of formal guarantees, these primitives traditionally carry a "security margin",
i.e. extra computation, just in case, whose cost is hard to justify when computational resources are scarce. To remedy the shortcomings of the test of time and the shrinking of security margins, this thesis revisits the work of Massacci and Marraro, who in 2000 proposed formulating primitives in propositional logic so that their properties could be verified automatically by SAT algorithms. At that time, SAT algorithms were very different from what they have become today; the continual improvement of their performance over the years makes them an even better choice as a verification engine. In this thesis, a method was developed that allows a cryptologist to easily verify the properties of a symmetric-key primitive formally and automatically using SAT algorithms, while abstracting away the propositional logic. The usefulness of the method was then demonstrated by obtaining formal answers to questions, posed in the cryptanalysis literature, about potential flaws both in the design and in the implementation of certain primitives. / Doctorat en Sciences
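How a primitive's property becomes a SAT query can be sketched in miniature (illustrative only; this is not CryptoSAT's actual interface): a circuit is translated to CNF gate by gate via the Tseitin encoding, unit clauses pin down observed values, and a solver searches for a consistent key.

```python
from itertools import product

def tseitin_xor(a, b, out):
    """CNF clauses for out <-> (a XOR b), in DIMACS-style signed ints."""
    return [[-a, -b, -out], [a, b, -out], [a, -b, out], [-a, b, out]]

def brute_sat(cnf, num_vars):
    """Toy stand-in for a SAT solver: return a model or None."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in cnf):
            return bits
    return None

# Toy one-bit "primitive": c = k XOR p.  Asking the solver for a key k
# consistent with an observed plaintext/ciphertext pair (p=1, c=1) is a
# drastically simplified version of the queries posed over a full primitive.
k, p, c = 1, 2, 3
cnf = tseitin_xor(k, p, c) + [[p], [c]]
print(brute_sat(cnf, 3))   # → (False, True, True): the key bit must be 0
```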
|