About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
61

Optimization Approaches to Protein Folding

Yoon, Hyun-suk 20 November 2006
This research presents optimization approaches to protein folding. The protein folding problem is to predict the compact three-dimensional structure of a protein from its amino acid sequence. This research focuses on ab-initio mathematical models that find provably optimal solutions to the 2D HP-lattice protein folding model. We built two integer programming (IP) models and five constraint programming (CP) models, all of which give provably optimal solutions. We also developed CP techniques to solve the problem faster, and compared the models' computational times on several protein instances. Our models, while probably too slow for practical use, are significantly faster than the alternatives, and are thus mathematically relevant. We also use complexity analysis to explain why protein folding is hard. This research will help show whether CP can be an alternative to, or a complement of, IP in the future. Moreover, techniques combining CP and IP are a prominent research topic, and our work contributes to that literature. It also shows which IP/CP strategies can speed up the running time for this type of problem. Finally, it shows why a mathematical approach to protein folding is especially hard, not only in theory (it is NP-hard) but also in practice.
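To make the optimization target concrete: in the 2D HP model, a fold is a self-avoiding walk on the square lattice, and the objective is to maximize the number of H-H contacts between residues that are adjacent on the lattice but not consecutive in the chain. The sketch below illustrates that scoring function only (it is not the thesis's IP/CP formulation, and the function name is ours):

```python
# Toy scoring function for the 2D HP-lattice model: count H-H contacts
# between residues that are lattice-adjacent but not chain-adjacent.

def hp_energy(sequence, fold):
    """sequence: string over {'H','P'}; fold: list of (x, y) lattice points."""
    assert len(sequence) == len(fold)
    assert len(set(fold)) == len(fold), "fold must be self-avoiding"
    pos = {p: i for i, p in enumerate(fold)}
    contacts = 0
    for i, (x, y) in enumerate(fold):
        if sequence[i] != 'H':
            continue
        # Scan only the +x and +y neighbours so each lattice edge is seen once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# An L-shaped fold of HPPH: residues 0 and 3 end up adjacent on the lattice.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1 H-H contact
```

Maximizing this count (equivalently, minimizing its negation as an energy) over all self-avoiding folds is the combinatorial problem the IP and CP models encode.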
62

On algorithm selection, with an application to combinatorial search problems

Kotthoff, Lars January 2012
The Algorithm Selection Problem is to select the most appropriate way of solving a problem, given a choice of different ways. Some of the most prominent and successful applications come from Artificial Intelligence, and in particular combinatorial search problems. Machine Learning has established itself as the de facto way of tackling the Algorithm Selection Problem. Yet even after a decade of intensive research, there are no established guidelines as to what kind of Machine Learning to use and how. This dissertation presents an overview of the field of Algorithm Selection and associated research, and highlights the fundamental questions left open and the problems facing practitioners. In a series of case studies, it underlines the difficulty of doing Algorithm Selection in practice and tackles issues related to this. The case studies apply Algorithm Selection techniques to new problem domains and show how to achieve significant performance improvements. Lazy learning in constraint solving and the implementation of the alldifferent constraint are the areas in which we improve on the performance of current state-of-the-art systems. The case studies furthermore provide empirical evidence for the effectiveness of using the misclassification penalty as an input to Machine Learning. Having established this difficulty, we present an effective technique for reducing it. Machine Learning ensembles are a way of reducing the background knowledge and experimentation required from the researcher while increasing the robustness of the system. Ensembles not only decrease the difficulty but can also increase the performance of Algorithm Selection systems; they are used to much the same ends in Machine Learning itself. We finally tackle one of the great remaining challenges of Algorithm Selection -- which Machine Learning technique to use in practice.
Through a large-scale empirical evaluation on diverse data taken from Algorithm Selection applications in the literature, we establish recommendations for Machine Learning algorithms that are likely to perform well in Algorithm Selection for combinatorial search problems. The recommendations are based on strong empirical evidence and additional statistical simulations. The research presented in this dissertation significantly reduces the knowledge threshold for researchers who want to perform Algorithm Selection in practice. It makes major contributions to the field of Algorithm Selection by investigating fundamental issues that have been largely ignored by the research community so far.
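The core selection loop studied above can be illustrated with a deliberately tiny sketch: map instance features to the solver expected to be fastest. This toy uses a 1-nearest-neighbour rule over hypothetical features and runtimes (all data and names are ours), and omits refinements discussed in the abstract such as weighting training examples by the misclassification penalty:

```python
# Minimal per-instance algorithm selection: pick the solver that was fastest
# on the most similar training instance (1-nearest-neighbour rule).

def select_algorithm(train, features):
    """train: list of (feature_vector, runtimes_per_solver) pairs.
    Returns the index of the solver that was fastest on the nearest
    training instance (squared Euclidean distance over features)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, runtimes = min(train, key=lambda ex: dist(ex[0], features))
    return min(range(len(runtimes)), key=runtimes.__getitem__)

# Toy data: feature = (n_variables, constraint_density); two solvers.
train = [
    ((100, 0.1), [1.0, 9.0]),   # solver 0 fastest on sparse instances
    ((100, 0.9), [8.0, 0.5]),   # solver 1 fastest on dense instances
]
print(select_algorithm(train, (120, 0.85)))  # → 1 (dense, so solver 1)
```

Real systems replace the nearest-neighbour rule with the kinds of learned models and ensembles the dissertation evaluates, but the input/output contract is the same: features in, chosen algorithm out.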
63

Other Things Besides Number: Abstraction, Constraint Propagation, and String Variable Types

Scott, Joseph January 2016
Constraint programming (CP) is a technology in which a combinatorial problem is modeled declaratively as a conjunction of constraints, each of which captures some of the combinatorial substructure of the problem. Constraints are more than a modeling convenience: every constraint is partially implemented by an inference algorithm, called a propagator, that rules out some but not necessarily all infeasible candidate values of one or more unknowns in the scope of the constraint. Interleaving propagation with systematic search leads to a powerful and complete solution method, combining a high degree of re-usability with natural, high-level modeling. A propagator can be characterized as a sound approximation of a constraint on an abstraction of sets of candidate values; propagators that share an abstraction are similar in the strength of the inference they perform when identifying infeasible candidate values. In this thesis, we consider abstractions of sets of candidate values that may be described by an elegant mathematical formalism, the Galois connection. We develop a theoretical framework from the correspondence between Galois connections and propagators, unifying two disparate views of the abstraction-propagation connection, namely the oft-overlooked distinction between representational and computational over-approximations. Our framework yields compact definitions of propagator strength, even in complicated cases (i.e., involving several types, or unknowns with internal structure); it also yields a method for the principled derivation of propagators from constraint definitions. We apply this framework to the extension of an existing CP solver to constraints over strings, that is, words of finite length. We define, via a Galois connection, an over-approximation for bounded-length strings, and demonstrate two different methods for implementing this over-approximation in a CP solver.
First, we use the Galois connection to derive a bounded-length string representation as an aggregation of existing scalar types; propagators for this representation are obtained by manual derivation, automated synthesis, or a combination of the two. Then we implement a string variable type, motivating design choices with knowledge gained from the construction of the over-approximation. The resulting CP solver extension not only substantially eases modeling for combinatorial string problems, but also leads to substantial efficiency improvements over prior CP methods.
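The propagator-as-sound-over-approximation view can be illustrated with a classic example outside the string domain: a bounds (interval) propagator for x + y = z. Intervals abstract sets of candidate values, and the propagator shrinks them without ever discarding a feasible value (soundness). This is a generic textbook sketch, not code from the thesis:

```python
# Bounds propagation for the constraint x + y = z over integer intervals.
# Each unknown's candidate set is over-approximated by an interval (lo, hi);
# propagation tightens the intervals but never removes a feasible value.

def propagate_sum(x, y, z):
    """x, y, z: (lo, hi) intervals. Returns tightened intervals for x+y=z,
    or None if the constraint is unsatisfiable at this abstraction."""
    zx = (x[0] + y[0], x[1] + y[1])
    z = (max(z[0], zx[0]), min(z[1], zx[1]))              # z ⊆ x + y
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))  # x ⊆ z - y
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))  # y ⊆ z - x
    if any(lo > hi for lo, hi in (x, y, z)):
        return None  # an interval became empty: no solution
    return x, y, z

print(propagate_sum((0, 10), (0, 10), (15, 15)))  # → ((5, 10), (5, 10), (15, 15))
```

The strength of this propagator is exactly what the interval abstraction permits: it rules out values outside the tightened bounds but cannot express "holes" inside an interval, which is the kind of trade-off the thesis's framework makes precise.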
64

Using integer programming and constraint programming to solve sports scheduling problems

Easton, Kelly King 12 1900
No description available.
65

A formal analysis of the MLS LAN: TCB-to-TCBE, Session Status, and TCBE-to-Session Server Protocols

Craven, Daniel Shawn 09 1900
Approved for public release; distribution is unlimited. / This thesis presents a formal analysis process and the results of applying that process to the MLS LAN TCB-to-TCBE, Session Status, and TCBE-to-Session Server protocols. The formal analysis process consists of several distinct stages: creating a detailed informal protocol description; analyzing that description to reveal assumptions and areas of interest not directly addressed in it; transforming the description and the related assumptions into a formal Strand Space representation; analyzing that representation to reveal further assumptions and areas of interest; and, finally, applying John Millen's automated Constraint Checker analysis tool to the Strand Space representations, under an extremely limited set of conditions, to prove certain protocol secrecy properties.
66

RNA inverse folding and synthetic design

Garcia Martin, Juan Antonio January 2016
Thesis advisor: Welkin E. Johnson / Thesis advisor: Peter G. Clote / Synthetic biology is a rapidly emerging discipline in which innovative and interdisciplinary work has led to promising results. Synthetic design of RNA requires novel methods to study and analyze known functional molecules, as well as to generate design candidates that have a high likelihood of being functional. This thesis is primarily focused on the development of novel algorithms for the design of synthetic RNAs. Previous strategies, such as RNAinverse and NUPACK-DESIGN, use heuristic methods (adaptive walk, ensemble defect optimization, which is a form of simulated annealing, genetic algorithms, and so on) to generate sequences that optimize specific measures, such as the probability of the target structure or the ensemble defect. In contrast, our approach is to generate a large number of sequences whose minimum free energy structure is identical to the target design structure, and subsequently filter them with respect to different criteria in order to select the most promising candidates for biochemical validation. In addition, our software must be accessible and user-friendly, allowing researchers from different backgrounds to use it in their work. The work presented in this thesis therefore concerns three areas: creating a potent, versatile, and user-friendly RNA inverse folding algorithm suitable for the specific requirements of each project; implementing tools to analyze the properties that differentiate known functional RNA structures; and using these methods for the synthetic design of de novo functional RNA molecules. / Thesis (PhD) — Boston College, 2016. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
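The generate-and-filter strategy can be caricatured in a few lines. The sketch below is a hypothetical simplification (all names are ours): instead of checking that a sequence's minimum free energy structure equals the target, which requires a real folding engine, it merely keeps random sequences whose bases are Watson-Crick complementary at every position paired in the target dot-bracket structure, ignoring G-U wobble pairs:

```python
# Toy generate-and-filter loop: propose random RNA sequences, keep those
# compatible with the base pairs demanded by a target dot-bracket structure.

import random

def pairs_of(structure):
    """Map a dot-bracket string like '((..))' to its list of paired positions."""
    stack, pairs = [], []
    for i, c in enumerate(structure):
        if c == '(':
            stack.append(i)
        elif c == ')':
            pairs.append((stack.pop(), i))
    return pairs

def compatible(seq, structure, wc={'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}):
    # Watson-Crick complementarity only; real pipelines also allow G-U pairs.
    return all(wc[seq[i]] == seq[j] for i, j in pairs_of(structure))

def candidates(structure, n=1000, rng=random.Random(0)):
    found = []
    for _ in range(n):
        seq = "".join(rng.choice("ACGU") for _ in structure)
        if compatible(seq, structure):
            found.append(seq)
    return found

hits = candidates("((..))")
print(len(hits), hits[:3])
```

A real design pipeline replaces `compatible` with a minimum-free-energy check and then ranks the surviving candidates by further criteria, as the abstract describes.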
67

Integrating artificial neural networks and constraint logic programming.

January 1995
by Vincent Wai-leuk Tam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 74-80).

Contents:
Chapter 1: Introduction and Summary (p.1)
  1.1 The Task (p.1)
  1.2 The Thesis (p.2)
    1.2.1 Thesis (p.2)
    1.2.2 Antithesis (p.3)
    1.2.3 Synthesis (p.5)
  1.3 Results (p.6)
  1.4 Contributions (p.6)
  1.5 Chapter Summaries (p.7)
    1.5.1 Chapter 2: An ANN-Based Constraint-Solver (p.8)
    1.5.2 Chapter 3: A Theoretical Framework of PROCLANN (p.8)
    1.5.3 Chapter 4: The Prototype Implementation (p.8)
    1.5.4 Chapter 5: Benchmarking (p.9)
    1.5.5 Chapter 6: Conclusion (p.9)
Chapter 2: An ANN-Based Constraint-Solver (p.10)
  2.1 Notations (p.11)
  2.2 Criteria for ANN-based Constraint-solver (p.11)
  2.3 A Generic Neural Network: GENET (p.13)
    2.3.1 Network Structure (p.13)
    2.3.2 Network Convergence (p.17)
    2.3.3 Energy Perspective (p.22)
  2.4 Properties of GENET (p.23)
  2.5 Incremental GENET (p.27)
Chapter 3: A Theoretical Framework of PROCLANN (p.29)
  3.1 Syntax and Declarative Semantics (p.30)
  3.2 Unification in PROCLANN (p.33)
  3.3 PROCLANN Computation Model (p.38)
  3.4 Soundness and Weak Completeness of the PROCLANN Computation Model (p.40)
  3.5 Probabilistic Non-determinism (p.46)
Chapter 4: The Prototype Implementation (p.48)
  4.1 Prototype Design (p.48)
  4.2 Implementation Issues (p.52)
Chapter 5: Benchmarking (p.58)
  5.1 N-Queens (p.59)
    5.1.1 Benchmarking (p.59)
    5.1.2 Analysis (p.59)
  5.2 Graph-coloring (p.63)
    5.2.1 Benchmarking (p.63)
    5.2.2 Analysis (p.64)
  5.3 Exceptionally Hard Problem (p.66)
    5.3.1 Benchmarking (p.67)
    5.3.2 Analysis (p.67)
Chapter 6: Conclusion (p.68)
  6.1 Contributions (p.68)
  6.2 Limitations (p.70)
  6.3 Future Work (p.71)
    6.3.1 Parallel Implementation (p.71)
    6.3.2 General Constraint Handling (p.72)
    6.3.3 Other ANN Models (p.73)
    6.3.4 Other Domains (p.73)
Bibliography (p.74)
Appendix A: The Hard Graph-coloring Problems (p.81)
Appendix B: An Exceptionally Hard Problem (EHP) (p.182)
68

A value estimation approach to Iri-Imai's method for constrained convex optimization.

January 2002
Lam Sze Wan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 93-95). / Abstracts in English and Chinese.

Contents:
Chapter 1: Introduction (p.1)
Chapter 2: Background (p.4)
Chapter 3: Review of Iri-Imai Algorithm for Convex Programming Problems (p.10)
  3.1 Iri-Imai Algorithm for Convex Programming (p.11)
  3.2 Numerical Results (p.14)
    3.2.1 Linear Programming Problems (p.15)
    3.2.2 Convex Quadratic Programming Problems with Linear Inequality Constraints (p.17)
    3.2.3 Convex Quadratic Programming Problems with Convex Quadratic Inequality Constraints (p.18)
    3.2.4 Summary of Numerical Results (p.21)
  3.3 Chapter Summary (p.22)
Chapter 4: Value Estimation Approach to Iri-Imai Method for Constrained Optimization (p.23)
  4.1 Value Estimation Function Method (p.24)
    4.1.1 Formulation and Properties (p.24)
    4.1.2 Value Estimation Approach to Iri-Imai Method (p.33)
  4.2 A New Smooth Multiplicative Barrier Function Φθ+,u (p.35)
    4.2.1 Formulation and Properties (p.35)
    4.2.2 Value Estimation Approach to Iri-Imai Method by Using Φθ+,u (p.41)
  4.3 Convergence Analysis (p.43)
  4.4 Numerical Results (p.46)
    4.4.1 Numerical Results Based on Algorithm 4.1 (p.46)
    4.4.2 Numerical Results Based on Algorithm 4.2 (p.50)
    4.4.3 Summary of Numerical Results (p.59)
  4.5 Chapter Summary (p.60)
Chapter 5: Extension of Value Estimation Approach to Iri-Imai Method for More General Constrained Optimization (p.61)
  5.1 Extension of Iri-Imai Algorithm 3.1 for More General Constrained Optimization (p.62)
    5.1.1 Formulation and Properties (p.62)
    5.1.2 Extension of Iri-Imai Algorithm 3.1 (p.63)
  5.2 Extension of Value Estimation Approach to Iri-Imai Algorithm 4.1 for More General Constrained Optimization (p.64)
    5.2.1 Formulation and Properties (p.64)
    5.2.2 Value Estimation Approach to Iri-Imai Method (p.67)
  5.3 Extension of Value Estimation Approach to Iri-Imai Algorithm 4.2 for More General Constrained Optimization (p.69)
    5.3.1 Formulation and Properties (p.69)
    5.3.2 Value Estimation Approach to Iri-Imai Method (p.71)
  5.4 Numerical Results (p.72)
    5.4.1 Numerical Results Based on Algorithm 5.1 (p.73)
    5.4.2 Numerical Results Based on Algorithm 5.2 (p.76)
    5.4.3 Numerical Results Based on Algorithm 5.3 (p.78)
    5.4.4 Summary of Numerical Results (p.86)
  5.5 Chapter Summary (p.87)
Chapter 6: Conclusion (p.88)
Bibliography (p.93)
Appendix A: Search Directions (p.96)
  A.1 Newton's Method (p.97)
    A.1.1 Golden Section Method (p.99)
  A.2 Gradients and Hessian Matrices (p.100)
    A.2.1 Gradient of Φθ(x) (p.100)
    A.2.2 Hessian Matrix of Φθ(x) (p.101)
    A.2.3 Gradient of φθ(x) (p.101)
    A.2.4 Hessian Matrix of φθ(x) (p.102)
    A.2.5 Gradient and Hessian Matrix of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) (p.102)
    A.2.6 Gradient of φθ+,u(x) (p.102)
    A.2.7 Hessian Matrix of φθ+,u(x) (p.103)
    A.2.8 Gradient and Hessian Matrix of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x) (p.103)
  A.3 Newton's Directions (p.103)
    A.3.1 Newton Direction of Φθ(x) in Terms of ∇xφθ(x) and ∇2xxφθ(x) (p.104)
    A.3.2 Newton Direction of Φθ+,u(x) in Terms of ∇xφθ+,u(x) and ∇2xxφθ+,u(x) (p.104)
  A.4 Feasible Descent Directions for the Minimization Problems (Pθ) and (Pθ+) (p.105)
    A.4.1 Feasible Descent Direction for the Minimization Problems (Pθ) (p.105)
    A.4.2 Feasible Descent Direction for the Minimization Problems (Pθ+) (p.107)
Appendix B: Randomly Generated Test Problems for Positive Definite Quadratic Programming (p.109)
  B.1 Convex Quadratic Programming Problems with Linear Constraints (p.110)
    B.1.1 General Description of Test Problems (p.110)
    B.1.2 The Objective Function (p.112)
    B.1.3 The Linear Constraints (p.113)
  B.2 Convex Quadratic Programming Problems with Quadratic Inequality Constraints (p.116)
    B.2.1 The Quadratic Constraints (p.117)
69

A lagrangian reconstruction of a class of local search methods.

January 1998
by Choi Mo Fung Kenneth. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 105-112). / Abstract also in Chinese.

Contents:
Chapter 1: Introduction (p.1)
  1.1 Constraint Satisfaction Problems (p.2)
  1.2 Constraint Satisfaction Techniques (p.2)
  1.3 Motivation of the Research (p.4)
  1.4 Overview of the Thesis (p.5)
Chapter 2: Related Work (p.7)
  2.1 Min-conflicts Heuristic (p.7)
  2.2 GSAT (p.8)
  2.3 Breakout Method (p.8)
  2.4 GENET (p.9)
  2.5 E-GENET (p.9)
  2.6 DLM (p.10)
  2.7 Simulated Annealing (p.11)
  2.8 Genetic Algorithms (p.12)
  2.9 Tabu Search (p.12)
  2.10 Integer Programming (p.13)
Chapter 3: Background (p.15)
  3.1 GENET (p.15)
    3.1.1 Network Architecture (p.15)
    3.1.2 Convergence Procedure (p.18)
  3.2 Classical Optimization (p.22)
    3.2.1 Optimization Problems (p.22)
    3.2.2 The Lagrange Multiplier Method (p.23)
    3.2.3 Saddle Point of Lagrangian Function (p.25)
Chapter 4: Binary CSP's as Zero-One Integer Constrained Minimization Problems (p.27)
  4.1 From CSP to SAT (p.27)
  4.2 From SAT to Zero-One Integer Constrained Minimization (p.29)
Chapter 5: A Continuous Lagrangian Approach for Solving Binary CSP's (p.33)
  5.1 From Integer Problems to Real Problems (p.33)
  5.2 The Lagrange Multiplier Method (p.36)
  5.3 Experiment (p.37)
Chapter 6: A Discrete Lagrangian Approach for Solving Binary CSP's (p.39)
  6.1 The Discrete Lagrange Multiplier Method (p.39)
  6.2 Parameters of CSVC (p.43)
    6.2.1 Objective Function (p.43)
    6.2.2 Discrete Gradient Operator (p.44)
    6.2.3 Integer Variables Initialization (p.45)
    6.2.4 Lagrange Multipliers Initialization (p.46)
    6.2.5 Condition for Updating Lagrange Multipliers (p.46)
  6.3 A Lagrangian Reconstruction of GENET (p.46)
  6.4 Experiments (p.52)
    6.4.1 Evaluation of LSDL(genet) (p.53)
    6.4.2 Evaluation of Various Parameters (p.55)
    6.4.3 Evaluation of LSDL(max) (p.63)
  6.5 Extension of LSDL (p.66)
    6.5.1 Arc Consistency (p.66)
    6.5.2 Lazy Arc Consistency (p.67)
    6.5.3 Experiments (p.70)
Chapter 7: Extending LSDL for General CSP's: Initial Results (p.77)
  7.1 General CSP's as Integer Constrained Minimization Problems (p.77)
    7.1.1 Formulation (p.78)
    7.1.2 Incompatibility Functions (p.79)
  7.2 The Discrete Lagrange Multiplier Method (p.84)
  7.3 A Comparison between the Binary and the General Formulation (p.85)
  7.4 Experiments (p.87)
    7.4.1 The N-queens Problems (p.89)
    7.4.2 The Graph-coloring Problems (p.91)
    7.4.3 The Car-Sequencing Problems (p.92)
  7.5 Inadequacy of the Formulation (p.94)
    7.5.1 Insufficiency of the Incompatibility Functions (p.94)
    7.5.2 Dynamic Illegal Constraint (p.96)
    7.5.3 Experiments (p.97)
Chapter 8: Concluding Remarks (p.100)
  8.1 Contributions (p.100)
  8.2 Discussions (p.102)
  8.3 Future Work (p.103)
Bibliography (p.105)
70

On compressing and parallelizing constraint satisfaction problems

Gharbi, Nebras 04 December 2015
Constraint Programming (CP) is a powerful paradigm for modelling and solving combinatorial constraint problems, drawing on a wide range of techniques from artificial intelligence, operations research, and graph theory. The basic idea of constraint programming is that the user expresses constraints and a constraint solver seeks a solution. Constraint Satisfaction Problems (CSPs) are the framework at the heart of CP: decision problems in which we seek states or objects satisfying a number of constraints or criteria, answering true if the problem admits a solution and false otherwise. CSPs are the subject of intense research in both artificial intelligence and operations research, and many require a combination of heuristics and combinatorial optimization methods to be solved in reasonable time. As computers improve, larger and larger problems can be solved; however, the size of industrial problems grows faster still, which requires vast amounts of memory to store them and entails great difficulty in solving them. The contributions of this thesis fall into two main parts. In the first part, we deal with table constraints, the most widely used kind of constraint. We propose two compressed forms of table constraints, both based on searching for frequent patterns in order to avoid redundancy; however, they differ significantly in how a pattern is defined, how frequent patterns are detected, and in the resulting compact representation. For each form, we propose a filtering algorithm. In the second part, we explore another way to optimize CSP solving: the use of a parallel architecture. We enhance the solving process by establishing consistencies in parallel: worker processes send to a master the results of establishing partial consistencies as newly discovered facts, and the master in turn tries to benefit from them by removing the corresponding values.
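The idea of removing redundancy from a table constraint can be illustrated with a much simpler compression than the frequent-pattern schemes described in the abstract (the scheme and names below are ours, purely for illustration): tuples sharing a common prefix are stored once as a prefix plus a list of suffixes, while still denoting exactly the same set of allowed tuples:

```python
# Illustrative prefix compression of a table constraint: group allowed
# tuples by their first k values so the shared prefix is stored only once.

from itertools import groupby

def compress(table, k=1):
    """Return a list of (prefix, suffixes) pairs covering the whole table."""
    table = sorted(table)  # groupby needs equal prefixes to be contiguous
    return [(pre, [t[k:] for t in grp])
            for pre, grp in groupby(table, key=lambda t: t[:k])]

def decompress(compressed):
    """Recover the original set of allowed tuples."""
    return [pre + suf for pre, sufs in compressed for suf in sufs]

table = [(1, 2, 3), (1, 2, 4), (1, 3, 3), (2, 2, 3)]
c = compress(table)
print(c)
print(sorted(decompress(c)) == sorted(table))  # round-trips: True
```

A filtering algorithm for such a form would traverse the compressed pairs directly rather than the expanded tuples, which is where the space savings translate into propagation speed; the thesis's pattern-based representations pursue the same goal with more general patterns than plain prefixes.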
