The constraint satisfaction problem (CSP) is an NP-complete problem that provides a convenient framework for expressing many computationally hard problems. In addition, domain knowledge can be integrated efficiently into CSPs, yielding a potentially exponential speedup in some cases.
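As a minimal illustration of the framework (a sketch for this record, not taken from the thesis): a CSP consists of variables, finite domains, and constraints, and can be solved by plain backtracking search. The instance and solver below are hypothetical examples.

```python
# Minimal CSP sketch: variables with finite domains, constraints given as
# predicates over partial assignments, solved by naive backtracking.

def solve(domains, constraints, assignment=None):
    """Return one satisfying assignment, or None if there is none."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

def ne(x, y):
    # "Not equal" constraint; holds vacuously until both vars are assigned.
    return lambda a: x not in a or y not in a or a[x] != a[y]

# Toy graph-colouring instance: three mutually adjacent regions.
domains = {"A": ["r", "g"], "B": ["r", "g"], "C": ["r", "g", "b"]}
constraints = [ne("A", "B"), ne("B", "C"), ne("A", "C")]
print(solve(domains, constraints))  # {'A': 'r', 'B': 'g', 'C': 'b'}
```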
The CSP is closely related to the satisfiability problem (SAT), and many of the techniques developed for one have been transferred to the other. However, the recent dramatic improvements in SAT solvers that result from learning clauses during search have not been transferred successfully to CSP solvers. In this thesis we propose that this failure is due to a fundamental restriction of nogood learning, the CSP analogue of clause learning. This restriction means that nogood learning can exhibit a superpolynomial slowdown compared to clause learning in some cases. We show that the restriction can be lifted, delivering promising results.
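To give a flavour of the underlying idea (a toy sketch, not the thesis's algorithm): a nogood is a partial assignment proven unextendable to a solution; recording it lets the solver prune any later branch that contains it, much as a learned clause blocks assignments in SAT. The instance below is a made-up unsatisfiable example.

```python
# Toy nogood recording: when every extension of a partial assignment fails,
# that assignment is stored as a nogood, and branches containing a recorded
# nogood are pruned without re-exploration.

nogoods = []  # learned nogoods, each a frozenset of (var, value) pairs

def ne(x, y):
    return lambda a: x not in a or y not in a or a[x] != a[y]

def solve(domains, constraints, assignment):
    items = frozenset(assignment.items())
    if any(ng <= items for ng in nogoods):
        return None  # pruned by a previously learned nogood
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        child = dict(assignment, **{var: value})
        if all(check(child) for check in constraints):
            result = solve(domains, constraints, child)
            if result is not None:
                return result
    # Every extension failed: this partial assignment is a nogood.
    nogoods.append(items)
    return None

# Unsatisfiable instance: three mutually unequal vars, only two values for A, B.
domains = {"A": [1, 2], "B": [1, 2], "C": [1]}
constraints = [ne("A", "B"), ne("B", "C"), ne("A", "C")]
print(solve(domains, constraints, {}))  # None; the empty nogood is learned,
                                        # proving the instance unsatisfiable
```

The pruning pays off when the same failed partial assignment would otherwise be rediscovered in many branches of the search tree.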
Integrating nogood learning into a CSP solver, however, presents an additional challenge, as a large body of domain knowledge is typically encoded in the form of domain-specific propagation algorithms called global constraints. Global constraints often completely eliminate the advantages of nogood learning. We demonstrate generic methods that partially alleviate the problem irrespective of the type of global constraint. We also show that more efficient methods can be integrated into specific global constraints and demonstrate the feasibility of this approach on several widely used constraints.
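For concreteness (an illustrative sketch, not taken from the thesis): a global constraint such as alldifferent couples many variables at once, and its propagator prunes domains as a unit rather than through a clique of binary disequalities. A deliberately naive filtering step:

```python
# Naive propagation for the alldifferent global constraint: whenever a
# variable's domain shrinks to a singleton, that value is removed from every
# other domain, repeating to a fixed point. Real solvers use much stronger
# filtering (e.g. Regin's matching-based algorithm).

def propagate_alldifferent(domains):
    """Prune domains in place; return False on a domain wipe-out."""
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                (value,) = dom
                for other, odom in domains.items():
                    if other != var and value in odom:
                        odom.discard(value)
                        if not odom:
                            return False  # empty domain: inconsistent
                        changed = True
    return True

domains = {"x": {1}, "y": {1, 2}, "z": {1, 2, 3}}
propagate_alldifferent(domains)
print(domains)  # x={1} forces y={2}, which in turn forces z={3}
```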
Identifier | oai:union.ndltd.org:TORONTO/oai:tspace.library.utoronto.ca:1807/16737 |
Date | 19 January 2009 |
Creators | Katsirelos, George |
Contributors | Bacchus, Fahiem |
Source Sets | University of Toronto |
Language | en_ca |
Detected Language | English |
Type | Thesis |
Format | 992279 bytes, application/pdf |